DBA Blogs

Partner Webcast – Delivering your Application Solutions on Autonomous Database

Cloud has changed customer expectations and software buying preferences, causing partners to adapt their deployment, development, and customer support models. Increasingly, customers consider ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How can I do a variable "in list"

Tom Kyte - Wed, 2020-10-21 01:06
I have a simple stored procedure that I would like to pass a string (VARCHAR2) into, for use in select ... from ... where col1 in (var1). I've tried everything, but it doesn't work. My procedure follows. Thanks.
<code>
CREATE OR REPLACE PROCEDURE WSREVSECT_5 (
  pSectNos  VARCHAR2,
  pRetCode  OUT VARCHAR2
) AS
  nCount NUMBER;
BEGIN
  SELECT COUNT(fksrev)
    INTO nCount
    FROM SREVSECT
   WHERE sectno IN (pSectNos)  /* as in 'abc', 'xyz', '012' */ ;
  pRetCode := TO_CHAR(nCount);
END;
</code>
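One common fix (a sketch reusing the table and parameter names from the question, and assuming pSectNos arrives comma-separated, e.g. 'abc,xyz,012') is to split the string into rows and use that row source in the IN clause:
<code>
CREATE OR REPLACE PROCEDURE WSREVSECT_5 (
  pSectNos  IN  VARCHAR2,
  pRetCode  OUT VARCHAR2
) AS
  nCount NUMBER;
BEGIN
  -- Split the comma-separated string into rows, then match against them
  SELECT COUNT(fksrev)
    INTO nCount
    FROM srevsect
   WHERE sectno IN (SELECT TRIM(REGEXP_SUBSTR(pSectNos, '[^,]+', 1, LEVEL))
                      FROM dual
                   CONNECT BY LEVEL <= REGEXP_COUNT(pSectNos, '[^,]+'));
  pRetCode := TO_CHAR(nCount);
END;
/
</code>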
Categories: DBA Blogs

Complete transient session for testing

Tom Kyte - Wed, 2020-10-21 01:06
We have looked at dbms_flashback and dbms_wm, but neither delivers what we actually want: is there a possibility to make an entire session transient? We would like to use this for our testing scenarios: 1. Start the session in "transient mode". 2. Execute a lot of PL/SQL with commits, rollbacks and savepoints. 3. Drop/stop the session ... and everything is magically back to before point 1. dbms_flashback.ENABLE_AT_SYSTEM_CHANGE_NUMBER is almost there by showing the data at a specified SCN, but then you have to reverse every table that is hit by the third-party code by hand, which is not feasible in our environment. I'm not very optimistic, but perhaps I missed a new capability of the DB. Is there a way?
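There is no session-scoped equivalent, but the whole-database version of the workflow described above can be sketched with a guaranteed restore point (note the caveats: this is database-wide, needs downtime to rewind, and requires DBA privileges and a fast recovery area):
<code>
-- 1. Mark the starting point before the tests
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- 2. Run the test PL/SQL, commits, rollbacks, savepoints and all ...

-- 3. Rewind everything to the restore point
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_tests;
</code>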
Categories: DBA Blogs

Query never finishes without clear reason

Tom Kyte - Wed, 2020-10-21 01:06
Sometimes a random SELECT statement stops working without a clear reason. Sometimes the statement is inside a procedure; sometimes it is executed directly from ODAC for .NET 4. The only thing in common is that it is always executed by an ODAC client, so I have never hit this problem from one of my jobs! When I check gv$session, the session is active. When I check the plan using DBMS_XPLAN.DISPLAY_CURSOR, it is the best one. When I simulate the query via PL/SQL, it works fine with the very same plan. When I kill the session and execute again from ODAC, the same problem happens. When I kill the session, run SYS.DBMS_SHARED_POOL.PURGE(ADDRESS || ', ' || HASH_VALUE, 'C'), and execute again from ODAC, then bingo, the problem is SOLVED ... however, we know it will happen again. What kind of evidence am I missing? Did you ever see this kind of behavior?
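The purge workaround described above can be sketched as follows (the sql_id, address and hash value shown are placeholders; look up the real ones first):
<code>
-- Look up the cursor's address and hash value for the problem statement
SELECT address, hash_value
  FROM v$sqlarea
 WHERE sql_id = 'abc123def456g';  -- placeholder sql_id

-- Purge that child cursor ('C' = cursor) from the shared pool
BEGIN
  SYS.DBMS_SHARED_POOL.PURGE('000000008BFE6A10, 1234567890', 'C');
END;
/
</code>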
Categories: DBA Blogs

Select for update statement too slow

Tom Kyte - Wed, 2020-10-21 01:06
Hi Connor, Chris, I have a FOR UPDATE SQL used to lock certain rows in the ORDERS table, but it seems to be a bit slow (it takes around 1 min). I tried getting the plan from dbms_xplan.display_awr. Could you please give me some leads from your past experience so I can look into the SQL tuning options? <code>
SELECT PT.ORDER_ID
  FROM STAGING_001 PN
       JOIN GTT_TAB IDS ON IDS.MSG_ID = PN.MSG_ID,
       XMLTABLE (
         'hsbcEnvelope/hsbcMessageBody/pymtTran'
         PASSING PN.XML_MSG
         COLUMNS REF_001 VARCHAR2 (50 CHAR) PATH 'REF_001',
                 REF_002 VARCHAR2 (50)      PATH 'REF_001',
                 REF_003 VARCHAR2 (10 CHAR) PATH 'REF_001') PMT,
       ORDERS PT
 WHERE 1 = 1
   AND (   (    PMT.REF_002 IS NOT NULL
            AND PMT.REF_001 IS NOT NULL
            AND PMT.REF_002 = PT.REF_002
            AND PT.REF_001 = PMT.REF_001
            AND NVL (PMT.REF_003, :B1) = PT.REF_003)
        OR (    PMT.REF_002 IS NOT NULL
            AND PMT.REF_002 = PT.REF_002
            AND NVL (PMT.REF_003, :B1) = PT.REF_003)
        OR (    PMT.REF_001 IS NOT NULL
            AND PT.REF_001 = PMT.REF_001
            AND NVL (PMT.REF_003, :B1) = PT.REF_003))
   FOR UPDATE OF PT.ORDER_ID NOWAIT;
</code> <code>
-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                |       |       | 27043 (100)|          |       |       |
|   1 |  FOR UPDATE                               |                |       |       |            |          |       |       |
|   2 |   BUFFER SORT                             |                |       |       |            |          |       |       |
|   3 |    CONCATENATION                          |                |       |       |            |          |       |       |
|   4 |     NESTED LOOPS                          |                |  1003 | 1935K| 11972   (1)| 00:00:01 |       |       |
|   5 |      NESTED LOOPS                         |                |  2940 | 3930K|   210   (1)| 00:00:01 |       |       |
|   6 |       NESTED LOOPS                        |                |    10 | 13630 |    13   (0)| 00:00:01 |       |       |
|   7 |        INDEX FAST FULL SCAN               | SYS_C006227    |    10 |   130 |     2   (0)| 00:00:01 |       |       |
|   8 |        TABLE ACCESS BY INDEX ROWID        | STAGING_001    |     1 |  1350 |     2   (0)| 00:00:01 |       |       |
|   9 |         INDEX UNIQUE SCAN                 | PK_STG_INT     |     1 |       |     1   (0)| 00:00:01 |       |       |
|  10 |       XPATH EVALUATION                    |                |       |       |            |          |       |       |
|  11 |      TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| ORDERS    |     1 |   607 |     4   (0)| 00:00:01 | ROWID | ROWID |
|  12 |       INDEX RANGE SCAN                    | IDX_PT_REF_001 |     1 |       |     3   (0)| 00:00:01 |       |       |
|  13 |     NESTED LOOPS                          |                |  1011 | 1950K| 14172   (1)| 00:00:01 |       |       |
|  14 |      NESTED LOOPS ...
</code>
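For a statement like this, the full AWR plan (including predicate and note sections where available) can be pulled with DBMS_XPLAN; a sketch, assuming the statement's sql_id is known:
<code>
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_AWR(sql_id => :sql_id, format => 'ALL'));
</code>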
Categories: DBA Blogs

SQL Profile example when best plan not clear

Bobby Durrett's DBA Blog - Tue, 2020-10-20 12:54

I resolved another production performance issue with a SQL Profile yesterday. I have several posts about SQL Profiles, so I do not want to be redundant, but this case was a little different because it was not clear that I had a better plan. I want to document the challenge that I had deciding if I had the best plan and show the resolution.

On September 21 when I think I was on vacation or otherwise not in the office there was a big performance issue on an older production database. The on-call DBA identified the sql_id of the top SQL statement as 30q69rbpn7g75. But he and an application support developer together could not connect that SQL statement back to the long running reports that were impacting the business. Eventually the issue went away later that night. Here is some of the execution history from that original issue:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75      1463081877 21-SEP-20 AM                5         420798.922         144724    209959.863                  0                      0                      0           4824516.8           710748.4                            0                    870
30q69rbpn7g75      1732425686 21-SEP-20 AM               13         66.9426923     56.1538462    7.25523077                  0                      0                      0          6410.23077         249.769231                            0             414.538462
30q69rbpn7g75       592872749 21-SEP-20 AM                1           4144.437           3240       955.246                  0                      0                      0               63878              29725                            0                     89
30q69rbpn7g75      4175108531 21-SEP-20 AM               11         172328.731     92788.1818    62448.1775                  0                      0                      0             3463219         466919.273                            0             610.090909
30q69rbpn7g75      2823019537 21-SEP-20 AM               19         332354.362     212357.895     22580.778                  0                      0                      0          11145610.8         163882.105                            0             303.526316
30q69rbpn7g75      2908690533 21-SEP-20 AM                1          23377.169          13070     11680.972                  0                      0                      0             1582917              89479                            0                    258
30q69rbpn7g75       291832905 21-SEP-20 AM                2         40314.0255          15940      24490.36                  0                      0                      0             1831813             128525                            0                    248
30q69rbpn7g75      1467059382 21-SEP-20 AM                1          20179.636           5760     16155.407                  0                      0                      0              124599              84761                            0                    780
30q69rbpn7g75      1033740578 21-SEP-20 AM                1            1728.49           1570         2.906                  0                      0                      0                1525                 35                            0                     12
30q69rbpn7g75      4175108531 21-SEP-20 AM                8         147782.833        59617.5    65356.3268                  0                      0                      0             2280007          245985.25                            0                    402
30q69rbpn7g75      3938646712 21-SEP-20 AM                2         139722.393          55905     86105.482                  0                      0                      0             6080269             616766                            0                 1143.5
30q69rbpn7g75      2823019537 21-SEP-20 PM               48         238332.678     138706.875    19077.4738                  0                      0                      0          6928661.85         99573.2708                            0             145.395833
30q69rbpn7g75      2823019537 21-SEP-20 PM               64         147520.373     80835.1563    19092.0985                  0                      0                      0          4148771.28         106131.016                            0              79.890625
30q69rbpn7g75      2823019537 21-SEP-20 PM               58         180185.939     113102.931    14365.2987                  0                      0                      0          5926129.21         123920.569                            0             22.0344828
30q69rbpn7g75      2823019537 21-SEP-20 PM               37         307432.645     201436.216    22904.6901                  0                      0                      0          10204978.9         158950.973                            0             201.243243
30q69rbpn7g75      2823019537 21-SEP-20 PM               28         465140.082     326940.357    30687.9033                  0                      0                      0          16715547.3             263153                            0             460.571429
30q69rbpn7g75      2823019537 21-SEP-20 PM               14         934982.157     690958.571    41595.1995                  0                      0                      0          34940770.8         365038.357                            0             243.285714
30q69rbpn7g75      2823019537 21-SEP-20 PM               14         818768.534     640054.286    42596.9506                  0                      0                      0          33547406.9         451864.786                            0                  471.5
30q69rbpn7g75      2823019537 21-SEP-20 PM                4         2329248.39        2013515    75722.5718                  0                      0                      0           104343531         1027683.25                            0                    859
30q69rbpn7g75      2823019537 21-SEP-20 PM                1         4006478.22        3707840     88265.422                  0                      0                      0           186157328            1082000                            0                   2744
30q69rbpn7g75      2823019537 21-SEP-20 PM                1         1818375.63        1771470     20586.628                  0                      0                      0            88206433             374924                            0                   2692
30q69rbpn7g75      2823019537 21-SEP-20 PM                2            1742051        1399440     41061.122                  0                      0                      0          68750135.5             335797                            0                 1479.5
30q69rbpn7g75      2823019537 21-SEP-20 PM                0         3552963.71        3183770    142948.208                  0                      0                      0           154159601             633488                            0                      0

This output is from my sqlstat.sql script. Notice how earlier in the day there are several plans with widely varying elapsed times. Plan hash value 1732425686 averages 66 milliseconds during the hour ending at 10 am. I do not know why the query has so many different plans. 2823019537 was the bad plan, and it got locked in throughout the afternoon and into the night.
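Output like the above comes from AWR; roughly, a script like sqlstat.sql joins DBA_HIST_SQLSTAT to DBA_HIST_SNAPSHOT per snapshot interval. A simplified sketch showing just the average elapsed time column:
<code>
-- Average elapsed ms per execution, per snapshot interval and plan
SELECT ss.sql_id,
       ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta / NULLIF(ss.executions_delta, 0) / 1000
         AS elapsed_avg_ms
  FROM dba_hist_sqlstat  ss
  JOIN dba_hist_snapshot sn
    ON ss.snap_id = sn.snap_id
   AND ss.instance_number = sn.instance_number
 WHERE ss.sql_id = '30q69rbpn7g75'
 ORDER BY sn.end_interval_time;
</code>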

Later in that same week I reviewed the plans and found that the longest running plans used certain indexes on the top table and the shorter ones used another index. I looked at the “Segments by Logical Reads” section of an AWR report during the problem time and found that the top segment was index X6_WFCTOTAL. I used my getplans.sql script to get all the plans for SQL id 30q69rbpn7g75 and found that the fastest ones used range scans against index X1_WFCTOTAL and the slow ones did range or skip scans against indexes X5_WFCTOTAL or X6_WFCTOTAL. So I picked one plan, 382022017, and used coe_xfr_sql_profile.sql to force 30q69rbpn7g75 to always run with plan 382022017, which used index X1_WFCTOTAL. Here is some execution history of the plan I picked:
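The mechanics of forcing the plan can be sketched as follows (run from SQLT's sqlt/utl directory; the generated script name embeds the sql_id and plan hash value):
<code>
-- Generate a SQL Profile script for sql_id 30q69rbpn7g75, plan 382022017
@coe_xfr_sql_profile.sql 30q69rbpn7g75 382022017

-- Run the generated script to create the profile
@coe_xfr_sql_profile_30q69rbpn7g75_382022017.sql
</code>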

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       382022017 24-SEP-20 AM                2          1713.2625            865         5.805                  0                      0                      0                2135               76.5                            0                    141
30q69rbpn7g75       382022017 24-SEP-20 AM                2            355.329            120      265.0765                  0                      0                      0                8183             1324.5                            0                    673
30q69rbpn7g75       382022017 24-SEP-20 AM                1            190.386            180        28.981                  0                      0                      0               17505               1759                            0                   1878
30q69rbpn7g75       382022017 24-SEP-20 PM                4          217.37625            200        20.723                  0                      0                      0            21009.25             392.25                            0                   1865
30q69rbpn7g75       382022017 24-SEP-20 PM                5           507.1578            114      432.2858                  0                      0                      0              7076.6              936.2                            0                  407.8
30q69rbpn7g75       382022017 24-SEP-20 PM                3         47.5793333             10    35.1866667                  0                      0                      0          504.333333         81.3333333                            0                     54
30q69rbpn7g75       382022017 24-SEP-20 PM                1            313.107            110       229.071                  0                      0                      0                8178                399                            0                    396
30q69rbpn7g75       382022017 25-SEP-20 AM                3         30.7433333     16.6666667    15.0446667                  0                      0                      0                 927                 89                            0             110.666667

This plan was averaging less than 1800 milliseconds. But after putting in the supposedly better plan, it seemed to have worse execution times than other plans. Also, the plan hash value was different from 382022017. Here is the execution history from when I put this SQL Profile in place:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       653445232 28-SEP-20 PM                1          92139.015          26930     61332.901                  0                 20.998                      0             2202073            2197909                            0                    171
30q69rbpn7g75       653445232 28-SEP-20 PM                9         83373.8866     25152.2222    56386.2852                  0             14.6918889                      0          2102264.11            2065530                            0             2613.88889
30q69rbpn7g75       653445232 28-SEP-20 PM                5         101830.312          29264    70157.0998                  0                 9.1714                      0           2439219.6          2432260.2                            0                    348
30q69rbpn7g75       653445232 28-SEP-20 PM                9          89705.846     26606.6667    61716.6917                  0             14.2046667                      0          2212464.67         2196829.33                            0             825.111111
30q69rbpn7g75       653445232 28-SEP-20 PM                2          91173.826          26425     63832.838                  0                22.1385                      0             2181959          2195820.5                            0                  310.5
30q69rbpn7g75       653445232 28-SEP-20 PM                4         90240.0368          26430    62587.1345                  0                 16.558                      0             2200555          2197764.5                            0                 168.75
30q69rbpn7g75       653445232 28-SEP-20 PM                3         107541.972     26816.6667     75418.071                  0                 16.164                      0             2193977         2193579.33                            0             39.3333333
30q69rbpn7g75       653445232 29-SEP-20 AM                1          92416.895          26410     66108.169                  0                 21.449                      0             2194591            2193764                            0                    158
30q69rbpn7g75       653445232 29-SEP-20 AM                1         103773.265          26510     75455.905                  0                 13.887                      0             2200242            2198725                            0                    122
30q69rbpn7g75       653445232 29-SEP-20 AM               13         74637.6784     20391.5385    51139.8206                  0             16.9292308                      0          1686133.69         1684935.15                            0             24.2307692
30q69rbpn7g75       653445232 29-SEP-20 AM               10         105894.074          28882    72971.0734                  0                20.1169                      0           2418827.8          2417314.8                            0                     56
30q69rbpn7g75       653445232 29-SEP-20 AM                4         89408.2108          26080    61537.7755                  0                10.0275                      0             2174791            2169846                            0                 421.75
30q69rbpn7g75       653445232 29-SEP-20 AM                7         71644.7906     20574.2857    48465.1234                  0                  8.331                      0             1697739         1694385.43                            0             232.857143
30q69rbpn7g75       653445232 29-SEP-20 AM                2         113993.942          32580    72589.2515                  0                 6.2465                      0             2672441            2667206                            0                    145
30q69rbpn7g75       653445232 29-SEP-20 AM                6         99793.2442          28600    69181.7687                  0             4.65783333                      0          2394135.83         2387505.17                            0                    246
30q69rbpn7g75       653445232 29-SEP-20 AM               10         94787.9044          28367    65535.8735                  0                10.6895                      0           2353904.6          2336951.3                            0                  476.8
30q69rbpn7g75       653445232 29-SEP-20 PM                5           89167.86          26462     61499.462                  0                14.4808                      0           2200557.8            2195895                            0                  201.8

Notice first that the plan hash value is 653445232 instead of 382022017. I usually see things like this when the plan has system-generated temporary table names, but I have not seen that in this plan. There must be something like that going on, or else the SQL Profile just locks in a slightly different plan. Anyway, 653445232 is the plan caused by the SQL Profile. Notice how the average elapsed time hangs around 90,000 to 100,000 milliseconds, while 382022017 had elapsed times under 1800 milliseconds. Seeing these results, I dropped the SQL Profile. Like a doctor who swears to “do no harm,” I did not want to slow down queries that were running very efficiently in my attempt to prevent another system slowdown or outage like we had on September 21. I dropped the SQL Profile on September 29.
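Dropping the profile is a one-liner; a sketch, assuming coe_xfr's default naming convention for the profile it created:
<code>
BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'coe_30q69rbpn7g75_382022017');
END;
/
</code>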

Then yesterday, October 19, the problem recurred. This time the bad plan was 3908549093 and used a skip scan on X6_WFCTOTAL. I seem to see a lot of bad plans with skip scans. Here was the execution history before I put the SQL Profile back in:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75      3908549093 19-OCT-20 AM               16         832180.951      450528.75    68650.1103                  0                      0                      0          21204921.8           495900.5                            0                    562
30q69rbpn7g75      3908549093 19-OCT-20 PM               33         392068.144     194336.364    47412.7197                  0                      0                      0          9271475.06         221593.545                            0             309.454545
30q69rbpn7g75      3908549093 19-OCT-20 PM                4         3543778.15        1793980    261653.391                  0                      0                      0          82176276.8            1088971                            0                 1036.5

Between 12 and 1 pm the system was really bogged down, with executions of 30q69rbpn7g75 taking almost an hour each, 3543778 milliseconds on average. So I put the SQL Profile back in; I still had the script sitting there from the first time I tried it (these scripts are all kept in the sqlt/utl directory). After putting it in, things cleared up and the backlog of reports slowly emptied. I think we killed one long-running session and had the user rerun the report with the SQL Profile in place.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average disk write megabytes Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ---------------------------- ----------------------
30q69rbpn7g75       653445232 19-OCT-20 PM               27         106287.519     26936.2963     61603.699                  0             7.41166667                      0          2226367.89         2204705.19                            0             1358.25926
30q69rbpn7g75      3908549093 19-OCT-20 PM                0         9380042.72        6465310    308289.185                  0                      0                      0           294469425            1581492                            0                   7067
30q69rbpn7g75       653445232 19-OCT-20 PM               80         103522.356       26259.25    67322.2418                  0             21.1012125                      0           2159835.1         2142347.26                            0               914.2625
30q69rbpn7g75       653445232 19-OCT-20 PM               31         99252.0065          26990    66069.6995                  0             24.5769355                      0          2227363.45         2213021.26                            0             885.709677
30q69rbpn7g75       653445232 19-OCT-20 PM                5         94587.0244          26988    64440.3338                  0                24.1514                      0           2223326.4          2204672.6                            0                 1553.6
30q69rbpn7g75       653445232 19-OCT-20 PM                5         93301.2074          26690    65105.9732                  0                14.0782                      0           2213653.6            2203033                            0                  736.4
30q69rbpn7g75       653445232 19-OCT-20 PM                3         101117.233     27193.3333    72020.9813                  0             15.4373333                      0          2225218.33            2207644                            0                   1623
30q69rbpn7g75       653445232 20-OCT-20 AM                1           92061.35          26550     64953.945                  0                 22.245                      0             2210157            2206170                            0                    235
30q69rbpn7g75       653445232 20-OCT-20 AM                1          92872.242          26470     66092.822                  0                 11.999                      0             2208305            2206231                            0                    158
30q69rbpn7g75       653445232 20-OCT-20 AM                2         88107.2095          26075     61670.129                  0                17.2175                      0           2205332.5            2203981                            0                    116
30q69rbpn7g75       653445232 20-OCT-20 AM                1          91007.493          26210     64276.474                  0                  9.972                      0             2208516            2206310                            0                    177
30q69rbpn7g75       653445232 20-OCT-20 AM                4         101878.314          26940     65491.475                  0                 30.476                      0          2210945.25         2204828.75                            0                  461.5
30q69rbpn7g75       653445232 20-OCT-20 AM               10         97441.3635          26496    67549.4579                  0                  3.178                      0           2197412.4          2192467.5                            0                  478.1
30q69rbpn7g75       653445232 20-OCT-20 AM                5         59362.9672          17038    41573.7714                  0                  7.767                      0           1416804.4            1416061                            0                   10.2
30q69rbpn7g75       653445232 20-OCT-20 AM                6         108681.505     30798.3333    75082.4997                  0             10.5146667                      0          2558854.67         2549531.83                            0                    405
30q69rbpn7g75       653445232 20-OCT-20 AM                1         186021.778          53440    131017.985                  0                 18.976                      0             4285650            4202806                            0                    814

But with the SQL Profile in place, executions still run consistently at around 90,000 milliseconds, even today. There are no executions under 1800 milliseconds like before. It is a puzzle, but it got us through the issue. I am planning to leave the profile in place to prevent another production outage, but I suspect that these reports may be running longer than normal in many cases. At least they are consistent, and the business needs are being met.

So, this is another example of a SQL Profile coming to the rescue in a production database performance issue. But this one was different because it was not clear that the one plan was always the best one. I went so far as to put it in, take it out, and then put it in again. I ended up leaving it in because it resolved a performance issue that we hit twice and do not want to hit again. Hopefully any less-than-optimal performance outside of these peak times will be worth it, since we are preventing an outage with significant impact to the business.


Categories: DBA Blogs

Generate string based on pattern

Tom Kyte - Tue, 2020-10-20 07:06
Hi Chris, Connor, could you please help or suggest a way to generate strings based on a pattern? Pattern: <b>^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$</b> I need to generate millions of strings like <b>f9f8c8e2-0b20-4160-8f74-e836f4661fc5</b> that match the pattern. e.g. <code>SELECT * FROM dual WHERE REGEXP_LIKE('f9f8c8e2-0b20-4160-8f74-e836f4661fc5', '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$');</code>
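One way to mass-generate matching values (a sketch: it reshapes the 32 hex characters of SYS_GUID into the 8-4-4-4-12 layout and forces the leading [89ab] nibble of the fourth group with DBMS_RANDOM; the row count in CONNECT BY is illustrative):
<code>
SELECT SUBSTR(s, 1, 8)  || '-' ||
       SUBSTR(s, 9, 4)  || '-' ||
       SUBSTR(s, 13, 4) || '-' ||
       -- pick one of 8, 9, a, b for the variant nibble
       SUBSTR('89ab', TRUNC(DBMS_RANDOM.VALUE(1, 5)), 1) ||
       SUBSTR(s, 18, 3) || '-' ||
       SUBSTR(s, 21, 12) AS generated
  FROM (SELECT LOWER(RAWTOHEX(SYS_GUID())) AS s
          FROM dual
        CONNECT BY LEVEL <= 1000000);
</code>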
Categories: DBA Blogs

Update rows when cursor returns no data

Tom Kyte - Tue, 2020-10-20 07:06
Purchase_Mas contains purchase master data and Payment contains the payments received from each party. Cursor c2 does not return any rows when no payment is found, but I still want the calculation to happen even when no data is found in the payment table. Run the following script and then fire the query "select * from purchase_mas_tmp;". Look at (ID = 5, party_code = 12) and (ID = 6, party_code = 14): cursor c2 finds no payment amount for them, but I want os_amt to display 10000 for ID 5 and 20000 for ID 6. How is this possible? <code>create table PURCHASE_MAS
( id            NUMBER,
  party_code    NUMBER,
  total_pcs     NUMBER,
  total_amt     NUMBER,
  purchase_date DATE,
  reg_flg       CHAR(1),
  discount_amt  NUMBER );

create table PAYMENT
( id           NUMBER,
  party_code   NUMBER,
  payment_date DATE,
  payment_amt  NUMBER );

create global temporary table PURCHASE_MAS_TMP
( id            NUMBER,
  party_code    NUMBER,
  total_pcs     NUMBER,
  total_amt     NUMBER,
  purchase_date DATE,
  reg_flg       CHAR(1),
  payment_date  DATE,
  payment_amt   NUMBER,
  os_amt        NUMBER,
  discount_amt  NUMBER )
on commit preserve rows;

insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (5, 12, 456, 10000, to_date('01-08-2018', 'dd-mm-yyyy'), 'Y', 100);
insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (6, 14, 878, 20000, to_date('21-08-2018', 'dd-mm-yyyy'), 'N', 200);
insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (1, 11, 457, 30000, to_date('11-08-2018', 'dd-mm-yyyy'), 'Y', 300);
insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (2, 12, 658, 40000, to_date('10-08-2018', 'dd-mm-yyyy'), 'Y', 400);
insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (3, 11, 1454, 50000, to_date('07-08-2018', 'dd-mm-yyyy'), 'Y', 500);
insert into PURCHASE_MAS (ID, PARTY_CODE, TOTAL_PCS, TOTAL_AMT, PURCHASE_DATE, REG_FLG, DISCOUNT_AMT)
  values (4, 13, 1254, 60000, to_date('18-08-2018', 'dd-mm-yyyy'), 'N', 600);

insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT)
  values (1, 11, to_date('01-09-2018', 'dd-mm-yyyy'), 2500);
insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT)
  values (2, 12, to_date('12-09-2018', 'dd-mm-yyyy'), 3000);
insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT)
  values (3, 11, to_date('11-09-2018', 'dd-mm-yyyy'), 30000);
insert into PAYMENT (ID, PARTY_CODE, PAYMENT_DATE, PAYMENT_AMT)
  values (4, 13, to_date('21-09-2018', 'dd-mm-yyyy'), 400);

declare
  cursor c1 is
    select id, party_code, total_pcs, total_amt, purchase_date, reg_flg, discount_amt
    from   purchase_mas;
  cursor c2 is
    select id, party_code, sum(payment_amt) payment_amt
    from   payment
    group  by id, party_code;
begin
  for i in c1 loop
    insert into purchase_mas_tmp
      (id, party_code, total_pcs, total_amt, purchase_date, reg_flg, discount_amt)
    values
      (i.id, i.party_code, i.total_pcs, i.total_amt, i.purchase_date, i.reg_flg, i.discount_amt);
  end loop;
  for i in c2 loop
    update purchase_mas_tmp tbl
    set    payment_amt = nvl(i.payment_amt, 0),
           os_amt      = tbl.total_amt - nvl(tbl.discount_amt, 0) - nvl(i.payment_amt, 0)
    where  id = i.id
    and    party_code = i.party_code;
  end loop;
end;

-- select * from purchase_mas_tmp ;
/*
drop table PURCHASE_MAS purge;
drop table PAYMENT purge;
drop table purchase_mas_tmp purge;
*/</code>
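One possible approach, sketched against the script above (this is an editor's sketch, not the AskTom answer): after the two loops, fill in the rows that cursor c2 never touched. Whether discount_amt should still be subtracted for those rows is an assumption; the question's expected values (10000 and 20000) are the raw total_amt, so this sketch leaves the discount out for them.

```sql
-- Final pass: rows with no matching payment still have os_amt null,
-- so give them a zero payment and their full amount outstanding.
update purchase_mas_tmp
set    payment_amt = 0,
       os_amt      = total_amt
where  os_amt is null;
```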
Categories: DBA Blogs

How to return header and data using sys_refcursor in oracle pl sql

Tom Kyte - Tue, 2020-10-20 07:06
I want to return the column headers along with the data when returning a result using a ref cursor. <code>create table fetch_header_rows
( company_name  varchar2(500),
  company_id    number,
  ammount       number(20,8),
  data_commnets varchar2(500) );

insert into fetch_header_rows values ('company1', 1, 128.80, 'test data1');
insert into fetch_header_rows values ('company2', 2, 129.80, 'test data1');
insert into fetch_header_rows values ('company3', 3, 128, 'test data1');
insert into fetch_header_rows values ('company4', 4, 100.80, 'test data1');

create or replace procedure SP_fetch_header_rows(data_fetch out sys_refcursor)
as
begin
  open data_fetch for select * from fetch_header_rows;
end;</code> Here we are fetching the cursor result into a file, hence we need the header as the first row of the data (in the current scenario we have more than 150 columns).
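One way to get a header row out of a ref cursor (an editor's sketch, not the AskTom answer) is to cast every column to VARCHAR2 and UNION ALL a literal header record above the data, with a sort key so the header reliably comes first:

```sql
create or replace procedure SP_fetch_header_rows(data_fetch out sys_refcursor)
as
begin
  open data_fetch for
    select company_name, company_id, ammount, data_commnets
    from (
      -- header record first (sort_key 0), then the data (sort_key 1)
      select 0 sort_key,
             'COMPANY_NAME' company_name, 'COMPANY_ID' company_id,
             'AMMOUNT' ammount, 'DATA_COMMNETS' data_commnets
      from   dual
      union all
      select 1, company_name, to_char(company_id),
             to_char(ammount), data_commnets
      from   fetch_header_rows
    )
    order by sort_key;
end;
/
```

With 150-plus columns, typing the header literals by hand is painful; generating the header branch dynamically from user_tab_columns (or describing the cursor with DBMS_SQL.DESCRIBE_COLUMNS) would avoid that, at the cost of native dynamic SQL.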
Categories: DBA Blogs

From DBA to DBI

Pakistan's First Oracle Blog - Mon, 2020-10-19 18:48

Recently Pradeep Parmer at AWS had a blog post about transitioning from DBA to DBI, or in other words from database administrator to database innovator. I wonder what exactly the difference is here, as any DBA worth his or her salt is already an innovator.

Administering a database is not about sleepily issuing backup commands or, in the case of cloud-managed databases, clicking here and there. Database administration has evolved over time just like other IT roles and is totally different from what it was a few years back.

Regardless of the database engine you use, you have to have a breadth of knowledge about operating systems, networking, automation, and scripting, on top of database concepts. With managed database services in the cloud like AWS RDS, GCP Cloud SQL, or BigQuery, many of these skills have become outdated, but new ones have sprung up. That has always been the case in the DBA field.

Taking the example of Oracle: what we were doing in Oracle 8i became obsolete in Oracle 11g, and Oracle 19c is a totally different beast. Oracle Exadata, RAC, various types of DR services, and Fusion Middleware are in themselves a new ballgame with every version.

Even with managed database services, the role of the DBA has become more involved in terms of migrations and in optimizing what's running within the databases, to stop the database costs from going through the roof.

So the point here is that DBAs have always been innovators. They have always been finding new ways to automate the management and healing of their databases. They are always under pressure to eke out the last possible optimization from their systems, and that's still the case even when those databases are supposedly managed by cloud providers.

With purpose-built databases, where a different database technology addresses each use case, the role of the DBA has only become more relevant, as DBAs have to evolve to cover graph, in-memory, and other cool, nifty types of databases.

We have always been innovators, my friend.

Categories: DBA Blogs

DBMS_ASSERT returning the ORA-44002: invalid object name exception for existing database links and directories

Tom Kyte - Mon, 2020-10-19 12:46
Hi, in my procedure I'm trying to check whether a database link and a directory exist. If they don't, I wanted to display a nice message about them needing to be created etc. I thought of using dbms_assert.sql_object_name; this seems to do the trick for tables, views, and functions, but not for database links or directories. Here is my test case (not my actual procedure, but I have the same issue). The table, view, and function return the name / don't error when running the dbms_assert part. The database link and directory return "ORA-44002: invalid object name".

-- table
<code>create table test_tbl (x number);
select table_name from user_tables where table_name = 'TEST_TBL';
select sys.dbms_assert.sql_object_name('TEST_TBL') from dual;</code>

-- view
<code>create view test_vw as select * from test_tbl;
select view_name from user_views where view_name = 'TEST_VW';
select sys.dbms_assert.sql_object_name('TEST_VW') from dual;</code>

-- function
<code>create or replace function test_f return date is
  dt date;
begin
  dt := sysdate;
  return dt;
end;
select object_name from user_objects where object_name = 'TEST_F';
select sys.dbms_assert.sql_object_name('TEST_F') from dual;</code>

-- database link
<code>create database link test_link connect to user123 identified by user123 using 'dwh';
select db_link from user_db_links where db_link = 'TEST_LINK';
select sysdate from dual@test_link;
select sys.dbms_assert.sql_object_name('test_link') from dual;
select sys.dbms_assert.sql_object_name('TEST_LINK') from dual;</code>

-- directory
<code>create directory test_dir as '/apps1/oradata/big_dump';
select directory_name from all_directories where directory_name = 'TEST_DIR';
select sys.dbms_assert.sql_object_name('test_dir') from dual;
select sys.dbms_assert.sql_object_name('TEST_DIR') from dual;</code>

Thanks, Nick
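One pragmatic workaround for the "nice message" requirement (an editor's sketch, not the AskTom answer) is to skip DBMS_ASSERT for these object types and check the dictionary views directly, since database links and directories do not live in the namespace that SQL_OBJECT_NAME validates against:

```sql
declare
  l_link user_db_links.db_link%type;
  l_dir  all_directories.directory_name%type;
begin
  -- check the database link exists
  begin
    select db_link into l_link
    from   user_db_links
    where  db_link = 'TEST_LINK';
  exception
    when no_data_found then
      dbms_output.put_line('Database link TEST_LINK needs to be created.');
  end;

  -- check the directory exists (and is visible to this user)
  begin
    select directory_name into l_dir
    from   all_directories
    where  directory_name = 'TEST_DIR';
  exception
    when no_data_found then
      dbms_output.put_line('Directory TEST_DIR needs to be created.');
  end;
end;
/
```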
Categories: DBA Blogs

Oracle TNS poison attack vulnerability

Tom Kyte - Mon, 2020-10-19 12:46
Hi Team,

We are running non-RAC Oracle and the TNS poison attack vulnerability (Oracle Security Alert for CVE-2012-1675 - https://www.oracle.com/security-alerts/alert-cve-2012-1675.html) looks to affect this version of Oracle.

What we have done:
* Applied the latest Oracle Critical Patch Update (July 2015).

What we are expecting:
* We hope applying the latest CPU (July 2015) fixes it and no further action is required.

My question is:
* Do we still need to follow the steps in <Using Class of Secure Transport (COST) to Restrict Instance Registration (Doc ID 1453883.1)> mentioned in the Oracle Security Alert for CVE-2012-1675 to fix this issue?

Best Regards
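For context, the COST note boils down to a listener configuration change rather than a patch. The fragment below is an editor's sketch of the usual single-instance mitigation (restricting instance registration to local IPC); the exact parameter form and listener name must be verified against Doc ID 1453883.1 for your environment:

```
# listener.ora -- restrict registration on a listener named LISTENER
# to the IPC protocol, then reload/restart the listener.
SECURE_REGISTER_LISTENER = (IPC)
```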
Categories: DBA Blogs

Oracle Processes consuming 100% CPU

Tom Kyte - Mon, 2020-10-19 12:46
The server machine on which the DB resides has 32 CPUs (2 sockets * 8 cores per socket * 2 threads per core). I notice many Oracle processes (both non-background and background) consuming a high percentage (sometimes 100%) of a CPU. Now, this CPU % is only for 1 CPU out of the total 32 in the server, and once a process hits 100% CPU consumption, it cannot take resources from other CPUs (please correct me if I am wrong). Right now, the THREADED_EXECUTION parameter is set to FALSE. I was thinking that if we enable multithreading in the database, then maybe the process that is hitting 100% and is looking for more CPU can take it from the other CPUs. Is this a good idea? If yes, then how should we enable multithreading in the DB and what is the possible impact on the DB? Please note: this is a single-instance DB (non-RAC). OS: SUSE Linux Enterprise Server 12. Thanks in advance, Abhishek, Kolkata, India
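For reference, switching to the multithreaded model is a two-part change, sketched below from my reading of the documentation (verify both steps against your release's docs before trying it). Note also that the model changes how background work is packaged into OS processes and threads; a single serial session still executes on one CPU at a time, so threading by itself will not let one busy session spread across CPUs. Also be aware that with THREADED_EXECUTION=TRUE, operating-system authentication (e.g. "sqlplus / as sysdba") no longer works.

```sql
-- 1. Instance parameter (takes effect after a restart):
alter system set threaded_execution = true scope = spfile;

-- 2. listener.ora then needs, for each listener (name assumed LISTENER):
--      DEDICATED_THROUGH_BROKER_LISTENER = ON
--    followed by a listener reload and an instance restart.
```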
Categories: DBA Blogs

Oracle Apex 20.2 REST API data sources and database tables

Kubilay Çilkara - Sat, 2020-10-17 02:11

Oracle Apex 20.2 is out with an interesting new feature!  The one that caught my attention was the REST Data Source Synchronisation feature. 

Why is the REST Data Source feature interesting?

Oracle Apex REST Data Source Synchronisation is exciting because it lets you query REST endpoints on the internet on a scheduled or on-demand basis and saves the results automatically in database tables. I think this approach could better suit slow-changing data accessible via REST APIs. If a REST endpoint's data is known to change only once a day, why should we call the REST endpoint via HTTP every time we want to display data on an Apex page? Why should we make too many calls, cause traffic, and keep machines busy for data which is not changing? Would it not be better to store the data in a table in the database (think cache here) and display it from there, with no extra REST endpoint calls every time a page is viewed, and then synchronise it automatically by calling the REST endpoint at predetermined intervals?

This is exactly what the REST Data Source Synchronisation does. It queries the REST API endpoint and saves (caches) the JSON response as data in a database table on a schedule of your choice. 

For my experiment I used the London TfL Disruption REST Endpoint from the TfL API which holds data for TfL transportation disruptions. I configured this endpoint to synchronise my database table every day. 

I created the Oracle Apex REST Data Source inside apex.oracle.com. I used the key provided by the TfL API Dev platform to make the call to the TfL REST endpoint, and I am synching it once a day and displaying it on an Oracle Apex Faceted Search page.

I was able to do all this with zero coding, just pointing the Oracle Apex REST Data Source I created for the TfL API to a table and scheduling the sync. To see the working app go to this link: https://apex.oracle.com/pls/apex/databasesystems/r/tfl-dashboard/home
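Staying true to the zero-coding spirit, the schedule covers everything; but for completeness, a synchronisation can also be kicked off from PL/SQL. The package below exists in APEX 20.2, though the exact parameter name and the static ID here are assumptions to verify against the APEX_REST_SOURCE_SYNC API documentation:

```sql
begin
  -- Trigger an on-demand sync of a REST Data Source by its static ID
  -- ('tfl_disruptions' is a hypothetical example, not my app's real ID).
  apex_rest_source_sync.synchronize_data(
      p_static_id => 'tfl_disruptions' );
  commit;
end;
/
```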

Categories: DBA Blogs

load date into apex 20.1 with defaults for some columns

Tom Kyte - Fri, 2020-10-16 10:06
I want to insert data from an Excel or CSV file (Id, Name, Address, Amount) into a database table by using data load, but during the insert of this data I want to record that it relates to HSBC Bank (because I have many banks and each bank has thousands of customers). So how can I insert the name of HSBC Bank into the table during the load of the data? (Note that the Excel or CSV file does not contain the name of HSBC Bank.)
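One common pattern for this (an editor's sketch; the table names, columns, and the :P1_BANK_NAME page item are all hypothetical) is to load the file into a staging table first, then copy the rows into the real table while supplying the bank name as a constant or a bind from the page the user loaded the file on:

```sql
-- After the APEX data load has filled STAGING_LOAD from the CSV:
insert into customer_payments
  (bank_name, customer_id, customer_name, address, amount)
select :P1_BANK_NAME,          -- e.g. 'HSBC', chosen by the user
       id, name, address, amount
from   staging_load;
```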
Categories: DBA Blogs

Upgrade Data Migration Choices – HP to Linux

Bobby Durrett's DBA Blog - Thu, 2020-10-15 18:40
1. Introduction

I just finished moving and upgrading an Oracle database from HP Unix Itanium to 19c on x86-64 Linux on VMWare, along with an army of coworkers on our project team. I want to use this post to describe the choices I considered for migrating the data from the old to the new system. I mention my coworkers because this was a major collaborative effort and I do not want to take credit for the work of others. But I did have to think through the best way to move this big database cross-platform, and I think that my evaluation of those options in dialog with the project team was a way that I contributed. So, I can blog about it. Also, I think it is worth posting about this subject because the choice of data migration method was challenging and interesting to me. So, if I can find a way to communicate about it clearly enough, it might have value to others and even to myself if I come back to this later.

First, I think I should explain some of the parameters for this upgrade/migration. The database files total about 15 terabytes of space. I guess actual data is 7-8 TB but everything else together adds up to 15. The database has tens of thousands of partitions and subpartitions. These partitions/subpartitions are manipulated by batch processes on the weekend and in the weekday evenings. Some of the tables are partitioned by date but others by columns that are not date related. The source system is running an older Oracle release and the target system 19c. The source and target platforms have different endianness. At the beginning of the project our goal was to minimize downtime for the final cutover weekend, so I tried to find methods of cutting over from the old to new database with only an hour or two of downtime max. The system has an online web front end that business users use during the weekdays as well as very resource intensive batch jobs on the weekend and on some weeknights. One goal of the project besides upgrading to a newer platform was to improve performance. On some weekends the batch processes did not complete in time for the users to see the updated data on Monday morning. The much faster x86-64 processors on the new system should speed up the weekend batch processes to meet the business need to complete before the Monday workday. CPU was pegged for several hours each weekend on the older HP Itanium system so we knew that newer, faster CPUs would help.

2. Transportable Tablespaces and Cross Platform Incremental Backup

Next, I want to talk about different options we considered. Another DBA on my team worked with an earlier proof of concept for this project to show the potential benefits of it. At some point she tried out the ideas in an earlier version of this Oracle support document:

V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

This is a fairly complicated procedure designed to speed up the move from Unix to Linux by allowing you to use Transportable Tablespaces to move your data over, converting to the new endianness, and then apply incremental backups of the source system to catch up any changes, changing endianness of the backups as you go. Transportable Tablespaces are kind of a pain, but I have worked on them before and they can be done. But the key limitation of this process is that just before you go live on the new system you must use Data Pump to import all the metadata from your source database. But since we have tens of thousands of partitions and subpartitions the time to import the metadata could be several hours. So, I felt like we could use this approach but with all the complexity and risk we still might have a lengthy period of downtime and another approach might work better. I think the key lesson from this approach is how hard it is to change endianness using a physical backup when you have a huge data dictionary. If we were moving from Linux to Linux, we probably could have used some sort of physical copy along with the application of incremental changes without the big metadata import at the end. But with the big data dictionary and the cross-platform nature of the upgrade the physical copy option (transportable tablespaces) seemed too slow.

One reason I liked this option was that it would work well with NOLOGGING changes. I was concerned early on that I could not use replication methods that pulled changes from the redo logs because some updates would not be logged. But incremental backups do pick up blocks that are updated even if the updates are not written to the redo logs. Eventually I did a bunch of research and found that I could turn on FORCE LOGGING on the source database and that opened up the option of using the redo logs. The problem of the long time to export/import the metadata with the Transportable Tablespaces option pushed me to pursue the FORCE LOGGING research to make sure we could use it.
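The FORCE LOGGING switch itself is simple; the research was about its side effects on the batch jobs, not the commands. For reference, checking and enabling it on the source database looks like this:

```sql
-- NO here means NOLOGGING operations can skip redo generation,
-- which would make redo-based replication miss those changes:
select force_logging from v$database;

-- Force every change to generate redo, regardless of NOLOGGING hints:
alter database force logging;
```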

3. Pre-Loading Static Historical Data

The second approach I considered and discussed with the team early on was exporting and importing historical data in advance and only migrating over actively updated data during the cutover window. If you have most of your data in tables that are partitioned by a date column you could potentially pre-load the older static data on your new system and then migrate a small percentage of your 15 terabyte database during the cutover window. I found that about 4 terabytes had not been updated in the past 90 days and that made me stop pursuing this option. Here is the script I used: statictabsum.sql. I just looked at the last_analyzed date on tables, indexes, partitions, and subpartitions. Not perfect but it gave me some idea of how much data was static. Slightly edited output:

    >select sum(bytes)/(1024*1024*1024*1024) total_tb
  2  from withlastanalyzed
  3  where
  4  last_analyzed < sysdate - &&DAYSOLD;


The system that I was upgrading is a pretty complicated system with many batch jobs and many interfaces with external systems. I would have to work with the development team to understand what data was truly static and there would be the chance that we would make a mistake and not copy something that we should. I would hate to go live and find that I missed some data that should have been updated on the partitions that I thought were static. Yikes! With my imperfect estimate of how much data could be pre-loaded and knowing the complexity of the system I felt that it was not worth trying the pre-load of the static data.

4. Oracle GoldenGate Replication

The third thing I tried, and thought would work really well for us, was Oracle GoldenGate replication. Several years ago, a coworker of mine had upgraded and migrated the database for a transactional system using GoldenGate with minimal downtime. Oracle has a white paper about how to do this:

Zero Downtime Database Upgrade Using Oracle GoldenGate

The idea is to start capturing changes to your source system and then do a full export/import to your target system. Then apply all the changes that happened on the source database during the export/import to the target db using GoldenGate. We had this set up and began testing it, but we had failures because of changes to table partitions and subpartitions by the application batch jobs. We had set up GoldenGate to replicate DML but not DDL. We researched it in the manuals and found that, to replicate DDL (partition drops and adds) in the most current and desirable way, the source system had to be on a recent enough version with the compatible parameter raised to match. See this section of the installation manual:

13.1.1 Support for DDL capture in integrated capture mode

Unfortunately, even though our source system was on a new enough version, the compatible parameter was still set to 11.2.0, the default. I wonder why compatible still defaults to 11.2.0? Sure, we could change the parameter, but you cannot easily back out a change to compatible. And we would essentially have to retest our entire application before changing it. Given that our project was running out of time and money, that was not an option. So, that led to our final choice.

5. Data Pump Export and Import

The last migration method that we considered and the one we just used in the production migration was Oracle Data Pump export and import, the expdp and impdp utilities. This is the simplest and most widely used method for copying data from one Oracle database to another and it works great across platforms with different endianness and different versions of Oracle. We used this method to populate our new databases for testing. Originally we thought about using the same data migration method for testing that we planned to use for the final production cutover, but we did not have the time and money to get GoldenGate set up before all the testing began, so we were not able to use it to populate our test databases. We had to use the simpler Data Pump utility. We had a whole crew of people lined up to work on the project, so we had to simply do a full export/import and cut them loose. As the project went on we streamlined our export/import method, including working with Oracle support to get a fix for a bug that affected our data migrations. As our project had dragged on for a number of months beyond our initial deadline, and as we discovered that GoldenGate was not going to work for us without a lot more work, I started to think about using Data Pump export/import, our last resort. At the time it was taking 50 hours to do the full export/import and I asked if we could get approval for two extra days of downtime – two full working days of application downtime. To me 50 hours to migrate a 15-terabyte database was not bad. I had expected it to take several days based on prior experiences populating dev and test databases from this same source system. The HP Unix hardware was older and had slower network adapters so that was part of the reason for the long data migration time.

Once we got the approval to take the extra downtime, I had to fend off suggestions to go back to preloading the historical data or to ignore tables that we think are obsolete. By this time, I was convinced that a full export/import made the most sense because all our testing was done on systems that were populated with the same method. We had the business approval for the downtime, and we knew that we had been testing for months on databases that had been populated with this method. Finally, our Unix/Linux/Storage team came up with a clever way to cut our export/import time almost in half without changing anything on the database side. Essentially, they figured out how to move the filesystem that we were exporting to onto an HP Unix blade with a faster network adapter. The original system had a 1 gigabit network adapter and the new system had 10 gigabit. Also, they set up an NFS-mounted filesystem so we could import straight over the network rather than copy all the files to the target server and consume all that duplicate storage space. We tested this approach several times and then the real thing went in without issues and even a little faster than we expected.

6. Conclusion

My main concern for this project was how to migrate such a large database and change the endianness of the data in a short enough time to meet the needs of our users and our company. This concern drove me to investigate several potentially complicated approaches to this data migration. But in the end, we used the simplest and surest method that we had already used to populate our test databases. We took several steps to tune our full Data Pump export/import process. The seven largest tables were broken out into their own parfile and exported to disk uncompressed in parallel. Their indexes were rebuilt parallel nologging. Then the Unix/Linux/Storage team did their magic with the faster network adapter. But even with these helpful performance enhancements our approach was still simple – Data Pump export and import. The more exotic methods that we tried were thwarted by the nature of the database we were upgrading. It had too many subpartitions. The application manipulated the subpartitions during the week. We had the wrong compatible value. Finally, budget and schedule concerns forced the decision to go with what worked, export/import. And in the end, it worked well.
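As an illustration of the large-table tuning step mentioned above (the table, directory, and file names here are hypothetical, not the actual ones from this migration), a Data Pump parfile for breaking the biggest tables out of the full export looks something like this:

```
# bigtables.par -- largest tables, exported uncompressed in parallel
tables=APP.BIG_TAB1,APP.BIG_TAB2,APP.BIG_TAB3
directory=MIG_DUMP
dumpfile=bigtab_%U.dmp
parallel=8
logfile=bigtab_exp.log
```

Run with "expdp system parfile=bigtables.par"; the %U substitution lets the parallel workers write multiple dump files at once.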

I want to conclude this post with high level lessons that I learned from this process. There is value in simplicity. Do not use a more complex solution when a simpler one meets your needs. The Transportable Tablespaces and GoldenGate options both were cool to work on and think about. But it takes time to dig into things and complexity adds risk of failure. If a simpler solution is safer, meets the business need, and takes less time and money why not use it? I guess the other high-level lesson is that it is easier to get approval for downtime when your project is behind schedule and over budget. When push came to shove downtime was less important than when we went live. Maybe if I had pushed for the longer downtime from the beginning and planned all along to use export/import the project would have been shorter. But I did not know that GoldenGate would run into the issues it did, so it was harder to insist that there was no way to eliminate substantial downtime up front. I also did not know at the beginning of the project that the export/import method could be streamlined to run in 1 day instead of several. Maybe the real lesson is that you have to work through these data migration options along with your project team and make the best choices that you can at the time based on technical issues that you discover and the business needs as they evolve with the project. This project ended with a simple data migration approach that met the company’s needs, but future projects may have different technical and business parameters and the other more complex approaches may be more appropriate to future situations.


Categories: DBA Blogs

In database row archival

Tom Kyte - Thu, 2020-10-15 15:46
I have a question about Oracle in-database archiving in Oracle 19c. The table below has an XMLType column. Running "alter table HTS_SUMMARY_DATA_CELL row archival" gives: Error report - SQL Error: ORA-38353: This table type is not supported by the ROW ARCHIVAL clause. The same in-database archiving in Oracle 12c - "alter table HTS_SUMMARY_DATA_CELL row archival" - succeeds and the table is enabled for archival. What is the difference between the 12c and 19c versions here?
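For readers unfamiliar with the feature, here is what in-database row archival looks like on a plain table (an editor's sketch for context; it does not answer the XMLType question, where the ORA-38353 error says the table type itself is unsupported):

```sql
-- Enable row archival; Oracle adds a hidden ORA_ARCHIVE_STATE column.
create table t (x number) row archival;
insert into t values (1);
insert into t values (2);

-- Archive a row: it disappears from ordinary queries...
update t set ora_archive_state = '1' where x = 2;
select count(*) from t;        -- only the row with x = 1 is visible

-- ...but reappears when archived rows are made visible to the session:
alter session set row archival visibility = all;
select count(*) from t;        -- both rows are visible again
```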
Categories: DBA Blogs

In-database row archival and compression

Tom Kyte - Thu, 2020-10-15 15:46
In Oracle 12c (12.1), when a table is compressed we cannot enable row archival on it. In Oracle 19c, if a table is compressed, we can. Why is that? Also, what is the method to do in-database archiving for compressed tables in 12c, and what would the rollback script be if we no longer need archiving on a compressed table?
Categories: DBA Blogs

Oracle .DBF file format structure

Tom Kyte - Thu, 2020-10-15 15:46
Hi, I want to iterate over all rows in a table and do some operations on them, but I don't want to get the data with a select statement. In order to do that I tried to export the data as .dbf and read it with Java, but it seems the Oracle .dbf format differs from the xBase .dbf format (https://www.clicketyclick.dk/databases/xbase/format/). Where can I get details of the Oracle .dbf data file structure? Or can you give me more suitable techniques and advice for this purpose? Thank you.
Categories: DBA Blogs

Homepage URL without "/apex/workspace/r/app"

Tom Kyte - Thu, 2020-10-15 15:46
Hi Tom and team, let's suppose my public website, built on Oracle Apex of course and running on a shared host, is at www.example.com. Unfortunately, any new customers visiting "www.example.com" will be immediately redirected to a URL of something like "www.example.com/apex/workspace/r/exampleapp/app_homepage" (assuming we use friendly URLs). Is there any way to display the URL as just "www.example.com", at least for the home page? Even something like "www.example.com/app_homepage" would be great. All guidance much appreciated! Many thanks, Greg https://www.comparefootballodds.co.uk/ PS I've seen some discussion of masking, but that has various downsides (e.g. poor for SEO, and unable to bookmark pages).
Categories: DBA Blogs

