Feed aggregator

Restore Enterprise Edition RMAN Backup to Standard Edition

Tom Kyte - Wed, 2021-05-12 03:46
Hello experts, Is it possible to restore an Enterprise Edition RMAN backup which contains partitioned tables to a Standard Edition instance? Thanks in advance. Regards, Nabil.
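
Partitioning is an Enterprise Edition option, so before attempting such a restore it is worth checking what the source database actually uses. A minimal sketch of that check (the schema filter is illustrative only):

<code>
-- Which tables in the source database are partitioned?
select owner, table_name, partitioning_type
from   dba_part_tables
where  owner not in ('SYS', 'SYSTEM');

-- Which EE options has the source database actually used?
select name, detected_usages, last_usage_date
from   dba_feature_usage_statistics
where  name like '%Partitioning%';
</code>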
Categories: DBA Blogs

Deleting many rows from a big table

Tom Kyte - Tue, 2021-05-11 09:26
Tom: We have a 6 million row table and we need to clean it. This process will delete 1.5 million rows. My first approach was to create a stored procedure with these lines:

<code>
SET TRANSACTION USE ROLLBACK SEGMENT Rbig;
DELETE FROM CTDNOV WHERE CTDEVT IN (4,15);   -- 1.5m rows
COMMIT;
</code>

Then I submitted a job to run the SP at night. This process took more than 4 hours to finish and generated a huge amount of archive logs.

2nd Approach (I didn't test it; I think this is slower than the 1st approach, since according to your book one commit at the end is faster than lots of n-row commits), from Metalink. Create a PL/SQL procedure:

<code>
CREATE OR REPLACE PROCEDURE delete_tab (
    tablename IN VARCHAR2,
    empno     IN NUMBER,
    nrows     IN NUMBER )
IS
    sSQL1  VARCHAR2(2000);
    sSQL2  VARCHAR2(2000);
    nCount NUMBER;
BEGIN
    nCount := 0;
    sSQL1 := 'delete from ' || tablename ||
             ' where ROWNUM < ' || nrows || ' and empno=' || empno;
    sSQL2 := 'select count(ROWID) from ' || tablename ||
             ' where empno= ' || empno;
    LOOP
        EXECUTE IMMEDIATE sSQL1;
        EXECUTE IMMEDIATE sSQL2 INTO nCount;
        DBMS_OUTPUT.PUT_LINE('Existing records: ' || to_char(nCount));
        commit;
        EXIT WHEN nCount = 0;
    END LOOP;
END delete_tab;
/
</code>

Then execute the procedure:

<code>
SQL> execute delete_tab('big_emp',7369,5000)
Existing records: 60537
Existing records: 55538
Existing records: 50539
Existing records: 45540
Existing records: 40541
Existing records: 35542
Existing records: 30543
Existing records: 25544
Existing records: 20545
Existing records: 15546
Existing records: 10547
Existing records: 5548
Existing records: 549
Existing records: 0

PL/SQL procedure successfully completed.
</code>

3rd Approach (it seems to be the better choice): When a DELETE is issued, Oracle stores the whole deleted row in the rollback segments, so you can undo the changes later if you want to. So there is an image of the rows in rollback which is currently not present in the table; hence the phrase 'before image'. Now, all the rollback blocks are written to the redo log files too. So you have the data blocks of the table (without the deleted rows, of course) and the rollback blocks with the old image, both producing redo, which accounts for the additional archive logs. I can't comment on the ratio of deleted records to redo produced, though. If you do an insert, only the rowid is stored in the rollback segments, which generates less redo. Depending on the number of rows in your tables, it may be better for you to insert the records you don't want to delete into a temp table, truncate the main table and move the records back. This would probably generate less redo than the delete, but that depends on the number of rows.

Below I describe the table I want to clean.

From dba_segments:

<code>
SEGMENT_NAME                                EXTENTS      BYTES
---------------------------------------- ---------- ----------
CTDNOV                                           26 2276614144
</code>

From dba_extents:

<code>
SEGMENT_NAME                              EXTENT_ID     Kbytes
---------------------------------------- ---------- ----------
CTDNOV                                            0        520
                                                  1        520
                                                  2        800
                                                  3       1200
                                                  4       1800
                                                  5       2680
                                                  6       4000
                                                  7        ...
</code>
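
The copy-and-truncate alternative described in the third approach would look roughly like this, using the question's own delete predicate. A sketch only: indexes, constraints, triggers and grants on the real table need separate handling.

<code>
-- 1. Copy the rows to keep (everything NOT matching the delete condition).
--    Note: NOT IN also excludes rows where ctdevt is NULL.
create table ctdnov_keep as
    select * from ctdnov where ctdevt not in (4, 15);

-- 2. TRUNCATE resets the high-water mark and generates almost no redo
truncate table ctdnov;

-- 3. Move the keepers back; the APPEND hint requests a direct-path insert
insert /*+ append */ into ctdnov
    select * from ctdnov_keep;
commit;

drop table ctdnov_keep;
</code>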
Categories: DBA Blogs

how to delete data from table

Tom Kyte - Mon, 2021-05-10 15:06
Will a normal delete query such as <code>delete from table where condition='a';</code> work here? The table has data present from the year 2014; I want to delete all the data starting from 2014 till now.
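
If the cut-off is date-based, the predicate needs a date column rather than a literal condition. A minimal sketch, where my_table and created_dt are hypothetical names:

<code>
-- delete everything dated 2014-01-01 or later
delete from my_table
where  created_dt >= date '2014-01-01';

commit;
</code>

If that range covers effectively every row in the table, TRUNCATE TABLE (or the copy-and-truncate approach from the previous question) is usually far cheaper than a DELETE.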
Categories: DBA Blogs

Column default value as another column from same table

Tom Kyte - Mon, 2021-05-10 15:06
Hello, We have a requirement to add a new column to a table which needs a default value like column1 || column2. For some reason the application code cannot be changed to deal with this new column, hence the default value. I thought of two approaches that could solve this. One is using a trigger to update the new column if either of the two columns column1 or column2 is updated: the new column can be populated initially, and the trigger can then be enabled to handle any future changes. The other approach is to use a virtual column. Now it seems that direct insertion of data, or an update, might be required for the new column, and that rules out the virtual column. As for the trigger, the web is full of articles saying triggers are problematic, and I am having a tough time arguing that for a low-volume table (in the number of records as well as the number of transactions) a trigger may not be the worst idea, though I understand the maintenance headaches and side effects etc. Is there any other approach? Also, why does Oracle not support a column default value that references another column? Thank you, Priyank
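
For reference, the two approaches being weighed look roughly like this. A sketch with hypothetical names (table t, new column col3):

<code>
-- Virtual column: always derived, but cannot be inserted into or updated directly
alter table t add (
    col3 generated always as (col1 || col2) virtual
);

-- Trigger alternative: a real, updatable column kept in sync by a row trigger
alter table t add (col3 varchar2(200));

create or replace trigger t_col3_sync
    before insert or update of col1, col2 on t
    for each row
begin
    :new.col3 := :new.col1 || :new.col2;   -- overwrites any directly supplied value
end;
/
</code>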
Categories: DBA Blogs

Single row cursor for short text string from dual produces CHAR(32767)

Tom Kyte - Mon, 2021-05-10 15:06
Hi, I have tried 19.9 - 19.11. I have noticed some suspicious behaviour regarding dual, and I will create an example. At the moment you can see that mytab returns only a single row; then I dump the datatype to the output. <code>
SQL> set serveroutput on size unlimited;
declare
    a clob;
    l_msg_content_begin CLOB := EMPTY_CLOB();
    CURSOR cur IS
        with mytab as (
            select 'SOMERANDOMTABLE' as main_table from dual
            --union select 'ALSOSOMERANDOMTABLE' as main_table from dual
        )
        select main_table, lower_main_table
        from (
            select main_table, lower(main_table) as lower_main_table
            from mytab
        )
        order by 1 desc;
    rec cur%rowtype;
BEGIN
    FOR rec IN cur LOOP
        dbms_output.put_line(rec.main_table);
        select dump(rec.lower_main_table) into a from dual;
        dbms_output.put_line(a);
        -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small
        -- If you have only one row from dual, you get an error if you uncomment the
        -- "l_msg_content_begin := ..." line below.
        -- With 2 or more rows from dual, all good.
        --l_msg_content_begin := 'blabla '||rec.lower_main_table||' blablabla '||rec.lower_main_table||'bla'||UTL_TCP.CRLF;
    END LOOP;
    --dbms_output.put_line(substr(l_msg_content_begin, 1, 2000) || 'AA');
END;
/
</code> And here you can see that the datatype is CHAR (Typ=96); check the length: the whole string is padded with spaces (ascii 32 == space). <code>
SOMERANDOMTABLE
Typ=96 Len=32767: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,....................
</code> It seems like the <b>lower()</b> function somehow produces this strange behaviour. Is this normal..? Also, when I dump rec.main_table instead (so not the lower() function output): <code>select dump(rec.main_table) into a from dual;</code> then I get type CHAR and the actual length, so that is expected. In contrast, when I also uncomment this second line: <code>--union select 'ALSOSOMERANDOMTABLE' as main_table from dual</code> then it is as expected: <code>
SOMERANDOMTABLE
Typ=1 Len=15: 115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
ALSOSOMERANDOMTABLE
Typ=1 Len=19: 97,108,115,111,115,111,109,101,114,97,110,100,111,109,116,97,98,108,101
</code> The type is varchar and the length is the actual length. Regards, Raul
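
A common way to pin down the datatype in such cases is an explicit CAST inside the inline view. A minimal sketch; whether it changes the observed behaviour would need testing:

<code>
select main_table,
       cast(lower(main_table) as varchar2(30)) as lower_main_table
from   mytab;
</code>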
Categories: DBA Blogs

The most peculiar Oracle situation in my career - Oracle changes how it records a block read, from direct read to not recording it in an I/O wait event at all

Tom Kyte - Mon, 2021-05-10 15:06
Greetings, I have this extremely perplexing situation where Oracle changes how it records a block read. Last week it wasn't counting block reads in any I/O wait event at all; this week it started to add them to the "direct read" wait event. This is occurring in our production environment; however, I was able to reproduce the situation in our test environment with test data. I used the all_source view to create two test tables until I reached 1.2 million rows for table 1 and 4 million for table 2:

<code>
-- Table1 (1.2 Mil records)
create table table1 as select * from dba_source where rownum <= 1200000;

-- Table2 (4 Mil records)
create table table2 as select * from dba_source;

create index t1_pk on table1(owner);
create index t2_pk on table2(owner, line);

exec dbms_stats.gather_schema_stats('JOHN');
</code>

Then I ran this select statement 120 times:

<code>select count(*) from Table1 where line=1 and owner in (select Table2.owner from Table2 where Table2.owner=Table1.owner) order by owner;</code>

In some cases Oracle 19c records the I/O in "direct path read" wait events, and in other cases it doesn't seem to report it in any I/O wait event. That is soooo odd.

TEST CASE 1: The IOStat summary doesn't record the I/O, and neither does any wait event:

<code>
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Avg   % DB Wait
Event                                Waits Time (sec)      Wait   time Class
------------------------------ ----------- ---------- --------- ------ --------
DB CPU                                            20.2            99.6
PGA memory operation                 2,524          .1   20.27us     .3 Other
Disk file operations I/O               520           0   59.49us     .2 User I/O
db file sequential read                211           0   12.33us     .0 User I/O
Parameter File I/O                       8           0  257.00us     .0 User I/O
enq: RO - fast object reuse              2           0  784.50us     .0 Applicat
control file sequential read           209           0    5.32us     .0 System I
log file sync                            1           0     .95ms     .0 Commit
SQL*Net message to client              546           0    1.53us     .0 Network
SQL*Net more data to client             22           0   33.77us     .0 Network

SQL ordered by Gets                DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU   - CPU Time as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 3,399,948
-> Captured SQL account for 98.1% of Total Buffer Gets

                            Gets          Elapsed
       Gets  Executions  per Exec  %Total Time (s)  %CPU   %IO  SQL Id
----------- ----------- ---------- ------ -------- ----- ----- -------------
  3,241,728         120   27,014.4   95.3     14.4  99.5     0 82mps751cqh84
Module: SQL*Plus
select count(*) from Table1 where line=1 and owner in (select Table2.owner
from Table2 where Table2.owner=Table1.owner) order by owner

IOStat by Function summary         DB/Inst: ORACLE/stbyoracle  Snaps: 2727-2728
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024,
   other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:   Reqs    Data    Writes:  Reqs    Data   Waits:    Avg
Function Name   Data     per sec per sec Data     per sec per sec  Count   Time
--------------- ------- ------- ------- ------- ------- ------- ------- --------
LGWR            3M          1.5   .022M 10M          3.6   .075M     678 368.73us
Others          7M          2...
</code>
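
One way to cross-check what AWR reports is to snapshot the waits for just the test session before and after the statement runs, and diff the two sets of numbers. A minimal sketch:

<code>
-- Run in the test session before and after executing the SELECT,
-- then compare TOTAL_WAITS and TIME_WAITED_MICRO between the two runs
select event, total_waits, time_waited_micro
from   v$session_event
where  sid = sys_context('userenv', 'sid')
and    event like '%read%';
</code>

Session-level numbers remove any doubt about other activity in the instance polluting the AWR snapshot.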
Categories: DBA Blogs

Method to measure performance gain of clustered table vs non-clustered tables

Tom Kyte - Mon, 2021-05-10 15:06
I have 2 pairs of parent and child tables; 1 pair is stored in a cluster and the other is non-clustered. The primary key of the master table (which is the foreign key in the child table) is the cluster key, and an index on the cluster has also been created. The structure of the 2 parent tables is identical, the structure of the 2 child tables is also identical, and the records in the 2 pairs are identical. I want to measure the performance gain of clustered tables vs non-clustered tables for a SELECT statement. I am using SET TIMING ON and printing the elapsed time after the SELECT is executed on the tables. The SELECT statement is also identical. I was expecting the elapsed time of the clustered tables to be consistently less than that of the non-clustered tables, but it is not. Can you please explain this? Also, is there another way to measure the performance of non-clustered vs clustered tables, using autotrace or explain plan?
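
Elapsed time at this scale is dominated by caching and noise; a steadier comparison is the logical and physical I/O reported by autotrace. A sketch, with hypothetical object names:

<code>
set autotrace traceonly statistics

-- the clustered pair (hypothetical names)
select p.pkey, c.detail
from   parent_clu p, child_clu c
where  c.pkey = p.pkey
and    p.pkey = 42;

-- the heap (non-clustered) pair
select p.pkey, c.detail
from   parent_heap p, child_heap c
where  c.pkey = p.pkey
and    p.pkey = 42;

set autotrace off
</code>

Compare the "consistent gets" and "physical reads" figures of the two runs rather than the elapsed times, and measure the second execution of each statement so that parsing and cache-warming do not skew the result.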
Categories: DBA Blogs

Celery Distributed Task Queue Display with Flower UI

Andrejus Baranovski - Mon, 2021-05-10 06:09
I explain how you can visualize the Celery distributed task queue with the Flower UI. This is useful when monitoring asynchronous tasks.

Tracking the Standby Lag from the Primary

Hemant K Chitale - Sun, 2021-05-09 10:38

 Here is a quick way of tracking the Standby Lag from the Primary.

This relies on the information in V$ARCHIVE_DEST on the Primary.

Note that this query will not work if the lag is so great that the SCN_TO_TIMESTAMP mapping fails (because the underlying table holds only a limited number of records), OR if the Standby instance is shut down and the Primary cannot communicate with it.


Note : The lag based on "SCN_TO_TIMESTAMP" is always an approximation.  

SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 00:41:09.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:22.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:07:58.000000000

SQL>
SQL> l
1 select scn_to_timestamp(current_scn) - scn_to_timestamp(applied_scn) Time_Diff
2 from v$database d,
3* (select applied_scn from v$archive_dest a where target = 'STANDBY')
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:16.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000004 01:13:37.000000000

SQL>
SQL> /

TIME_DIFF
---------------------------------------------------------------------------
+000000000 00:00:00.000000000

SQL>


Here, the lag was 4 days and it took some time for the Standby to catch up with the Primary.
(This is my Lab environment, not a real production environment at my workplace, so don't ask how I managed to create a lag of 4 days or how long it took for the Standby to catch up with the Primary.)

Note : If the Standby database is down and/or the lag is very high, you will get this error:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1

for the "applied_scn" from v$archive_dest.  (If the Standby is down, the value for "applied_scn" in v$archive_dest on the Primary is "0").


If you have access to the Standby you can run this query :

select name, value from v$dataguard_stats where name like '%lag'


The demo above is only a quick way of tracking the lag by querying the Primary, without accessing the Standby.
Categories: DBA Blogs

Driving site patch

Jonathan Lewis - Sun, 2021-05-09 06:23

A recent question on the Oracle-L list server asked “Is there a way I can use an SQL_PATCH to add a driving_site() hint to a query?” to which the reply was: “Just do it, it should work.” Unfortunately this produced the response: “I’m on 11.2.0.4, I’ve tried it, it doesn’t seem to be working.” I do approve of the “doesn’t seem to be” – it’s much more encouraging than a flat assertion that “it doesn’t work”, and helps encourage further dialogue.

I’ve come across this problem before (though I couldn’t find any notes I’d written about it – so possibly they’re only on a client site, or maybe it was a different hint displaying the same symptom) and it’s possible that the solution is very easy. Here’s a little data set to test with – created on 11.2.0.4 and then tested on 19.3.0.0:

rem
rem     Script:         patch_driving_site.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             11.2.0.4
rem

define m_target=orclpdb@loopback

execute sys.dbms_sqldiag.drop_sql_patch('driving_site');

create table t1
as
select
        *
from
        all_objects
where
        rownum <= 10000
;

alter table t1 add constraint t1_pk primary key (object_id);

create table t2
as
select
        *
from
        all_objects
where
        rownum <= 10000
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns owner size 254'
        );
end;
/

I’ve created two tables which I’m going to join, but I’ve created a loopback database link that I’ll use to make one of them appear to be at a remote database. The data comes from the view all_objects, and includes just 7 rows out of 10,000 for owner='OUTLN', which is why I’ve created a histogram on the owner column – but only for one of the tables.

You’ll notice I’ve issued a call to the dbms_sqldiag package to drop an SQL patch called “driving_site” just to make sure I don’t confuse the issue (i.e. myself) if I re-run the test in a couple of days’ time (I’ve also got a similar call at the end of the script). To run this test you’ll have to grant execute privileges on this package to your test schema, and if you’re on 11g you’ll also have to grant execute privileges on the package dbms_sqldiag_internal so that you can create the SQL patch.

Here’s my query, with the default execution plan I got on the run against 19.3.0.0:

select
        t1.object_name,
        t1.object_type,
        t2.object_name,
        t2.object_type
from
        t1,
        t2@&m_target    t2
where
        t2.object_id = t1.object_id
and     t2.owner     = 'OUTLN'
/

select * from table(dbms_xplan.display_cursor(format=>'outline alias'));


SQL_ID  4dssxbjvzzrc0, child number 0
-------------------------------------
select  t1.object_name,  t1.object_type,  t2.object_name,
t2.object_type from  t1,  t2@orclpdb@loopback t2 where  t2.object_id =
t1.object_id and t2.owner     = 'OUTLN'

Plan hash value: 3828311863

------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |    54 (100)|          |        |      |
|   1 |  NESTED LOOPS                |       |  1111 |   165K|    54   (8)| 00:00:01 |        |      |
|   2 |   NESTED LOOPS               |       |       |       |            |          |        |      |
|   3 |    REMOTE                    | T2    |  1111 |   130K|    26   (4)| 00:00:01 | ORCLP~ | R->S |
|*  4 |    INDEX UNIQUE SCAN         | T1_PK |       |       |            |          |        |      |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    33 |    27   (8)| 00:00:01 |        |      |
------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   3 - SEL$1 / T2@SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$1 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      INDEX(@"SEL$1" "T1"@"SEL$1" ("T1"."OBJECT_ID"))
      NLJ_BATCHING(@"SEL$1" "T1"@"SEL$1")
      USE_NL(@"SEL$1" "T1"@"SEL$1")
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T2"@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."OBJECT_ID"="T1"."OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "OWNER","OBJECT_NAME","OBJECT_ID","OBJECT_TYPE" FROM "T2" "T2" WHERE
       "OWNER"='OUTLN' (accessing 'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

Note
-----
   - this is an adaptive plan

The optimizer has decided to do a nested loop join, fetching an estimated 1,111 rows (there are 10,000 rows in total and 9 distinct values for owner – and the optimizer doesn’t examine remote histograms!) The costing may seem a little surprising – an incremental cost of 27 for 1,111 probes of the index and table does seem a little low, but I’m not going to comment on that in this note.

We are actually going to get only 7 rows in the tablescan, so the path is a good one; but the Note tells us it is an adaptive plan and if at run-time the number of rows had been too large (as it would probably be for some other value of owner) Oracle would have switched to a hash join as the query is running.

Let us pretend, however, that we know that we could get better performance if the remote database optimised and executed the query. If we add the hint /*+ driving_site(t2) */ to the query we get the following from the call to dbms_xplan.display_cursor():

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
SQL_ID  86n3j9s9q9k47, child number 0

select  /*+ driving_site(t2) */  t1.object_name,  t1.object_type,
t2.object_name,  t2.object_type from  t1,  t2@orclpdb@loopback t2 where
 t2.object_id = t1.object_id and t2.owner     = 'OUTLN'

NOTE: cannot fetch plan for SQL_ID: 86n3j9s9q9k47, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

The call can’t find the plan because it’s the remote database that generated it from a piece of text that the local database sent to it, and the plan “belongs to” that text, not to the original query. So that almost tells us that the driving_site() hint has done its job properly – but we can double-check by searching the remote database’s library cache for the SQL that it actually ran.

set linesize 132
column sql_text wrap word format a75

select  sql_id, sql_text 
from    V$sql 
where   sql_text like '%OUTLN%'
;


SQL_ID        SQL_TEXT
------------- ---------------------------------------------------------------------------
5hmjcxgt0jc8t SELECT
              "A2"."OBJECT_NAME","A2"."OBJECT_TYPE","A1"."OBJECT_NAME","A1"."OBJECT_TYPE"
              FROM "T1"@! "A2","T2" "A1" WHERE "A1"."OBJECT_ID"="A2"."OBJECT_ID" AND
              "A1"."OWNER"='OUTLN'


select * from table(dbms_xplan.display_cursor('5hmjcxgt0jc8t'));


SQL_ID  5hmjcxgt0jc8t, child number 0
-------------------------------------
SELECT "A2"."OBJECT_NAME","A2"."OBJECT_TYPE","A1"."OBJECT_NAME","A1"."OB
JECT_TYPE" FROM "T1"@! "A2","T2" "A1" WHERE
"A1"."OBJECT_ID"="A2"."OBJECT_ID" AND "A1"."OWNER"='OUTLN'

Plan hash value: 3485226535

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    33 (100)|          |        |      |
|   1 |  NESTED LOOPS      |      |     7 |   924 |    33   (4)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T2   |     7 |   280 |    26   (4)| 00:00:01 |        |      |
|   3 |   REMOTE           | T1   |     1 |    92 |     1   (0)| 00:00:01 |      ! | R->S |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("A1"."OWNER"='OUTLN')

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "OBJECT_NAME","OBJECT_ID","OBJECT_TYPE" FROM "T1" "A2" WHERE
       :1="OBJECT_ID" (accessing '!' )


As we can see from the plan – which executed from the remote instance – the (now-local) histogram comes into play with a correct cardinality estimate for t2 and we do a nested loop requesting one row at a time from the (now remote) t1 by object_id. So the driving_site() hint works when written inline, which means we can work on creating a patch to apply it from “outside”.

begin
        dbms_output.put_line('19.3 patch name: ' || sys.dbms_sqldiag.create_sql_patch(
--      sys.dbms_sqldiag_internal.i_create_patch(
                sql_text        =>  
q'{
select
        t1.object_name,
        t1.object_type,
        t2.object_name,
        t2.object_type
from
        t1,
        t2@&m_target    t2
where
        t2.object_id = t1.object_id
and     t2.owner     = 'OUTLN'
}',
                hint_text       => 'driving_site(t2)',
--              hint_text       => 'driving_site(t2@sel$1)',
                name            => 'driving_site'
        ))       -- extra bracket for dbms_output.put_line
        ;
end;
/

There are a couple of funny format details in this code fragment. First, dbms_sqldiag in 19c uses a function to create an SQL patch while 11g uses a procedure in dbms_sqldiag_internal, so there’s a messy optional bit near the top of the code and the reminder to count closing brackets near the bottom.

Second – when I used an SQL patch to add the hint that had worked inline, it didn’t do what it was supposed to do (even though the Note for the execution plan reported: “SQL patch "driving_site" used for this statement”). That’s why you see two versions (one commented) of the hint_text parameter. To get the SQL patch working I had to use the fully qualified alias of the remote table which, as you can see in the Query Block Name / Object Alias information from the first plan I reported, is t2@sel$1.

With the corrected SQL patch in place the text sent to the remote database was exactly the same as it was when I used the inline (but slightly lucky) /*+ driving_site(t2) */ hint. Quite possibly I should have gone one step further and made the hint_text include the query block as well, viz: driving_site(@sel$1 t2@sel$1). I suspect, though, that that might not always be necessary (or even correct) – at some point I’ll have to check what happens if the reported query block has appeared as a consequence of a transformation and no longer matches the original query block from the object alias.

Summary

I’ve often made the point that you have to be very thorough with hints and this note demonstrates a variation of that theme. There are cases where you can get away with being a little sloppy when specifying an object alias in an “inline” hint but have to be more precise when imposing it from outside the original source SQL.

Adding the “originating” query block to construct the “fully qualified” object alias is the first refinement that you should think of; adding in a “target” query block name may also be necessary for some hints. Both these pieces of information can be reported in the various calls to dbms_xplan by including the “alias” option in the format parameter as you build your test cases towards a complete solution.

Zip a .csv file present in a DB directory using PL/SQL

Tom Kyte - Fri, 2021-05-07 13:46
We have a requirement where we generate .csv files from the DB and place them in a DB directory. We want to zip these .csv files so that the size can be optimised. Could you please suggest a way to achieve this using PL/SQL?
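
If APEX is installed in the database, its APEX_ZIP package can build a zip archive in a BLOB, and UTL_FILE can write the result back to the directory. A minimal sketch, assuming a directory object CSV_DIR and a file report.csv (both hypothetical names):

<code>
declare
    l_zip   blob;
    l_csv   blob;
    l_bfile bfile := bfilename('CSV_DIR', 'report.csv');
    l_out   utl_file.file_type;
    l_pos   integer := 1;
    l_amt   integer;
    l_buf   raw(32767);
    l_len   integer;
begin
    -- read the csv file into a BLOB
    dbms_lob.createtemporary(l_csv, true);
    dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
    dbms_lob.loadfromfile(l_csv, l_bfile, dbms_lob.getlength(l_bfile));
    dbms_lob.fileclose(l_bfile);

    -- add it to a zip archive held in a BLOB
    apex_zip.add_file(
        p_zipped_blob => l_zip,
        p_file_name   => 'report.csv',
        p_content     => l_csv );
    apex_zip.finish(p_zipped_blob => l_zip);

    -- write the zip back to the directory, 32k at a time
    l_len := dbms_lob.getlength(l_zip);
    l_out := utl_file.fopen('CSV_DIR', 'report.zip', 'wb', 32767);
    while l_pos <= l_len loop
        l_amt := 32767;
        dbms_lob.read(l_zip, l_amt, l_pos, l_buf);
        utl_file.put_raw(l_out, l_buf, true);
        l_pos := l_pos + l_amt;
    end loop;
    utl_file.fclose(l_out);
end;
/
</code>

Without APEX, UTL_COMPRESS can compress a BLOB, but it produces gzip output rather than a .zip archive; calling the OS zip utility through a DBMS_SCHEDULER external job is another common route.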
Categories: DBA Blogs

Is it safe to re-sequence table columns using invisible columns?

Tom Kyte - Fri, 2021-05-07 13:46
Hello Team, First of all, thanks for all the good work you are doing. Request your help with a query related to re-sequencing of table columns using invisible columns. Is it safe to change the order of columns in a production environment, following the method described in the following link? https://connor-mcdonald.com/2013/07/22/12c-invisible-columns/ We tested it and were not able to find anything unusual. However, are there any particular "gotchas" we should look out for? I know that ideally the order of table columns should not matter. However, in our situation the codebase can have legacy code that doesn't use column names in insert statements. Pasted below is the detailed scenario of how/why we are planning to use this. Thanks, A ---------------------------------------------- Our requirement is to encrypt a column in existing tables in the PROD environment. These tables can have hundreds of millions of rows. This task has to be done during a downtime window that is not large enough. In order to achieve this, we are trying to do as much work as possible outside the downtime window. Our plan is to add an invisible column to the tables. Data from the original column will be encrypted and stored in these invisible columns. This can be done outside the downtime window and will not affect day-to-day operations. We also have a mechanism to identify and handle deltas in the original column. The only task pending for the downtime will be to move values from the invisible column to the original column. In order to complete it in the short downtime window, we will make the invisible column visible and will swap its name with the original column. The redundant original column can then be dropped. This approach works fine, except that the order of the columns changes: the encrypted column now appears as the last column in the table. Ideally, the order should not matter. However, these tables are used by some applications that have legacy code that inserts without specifying the column names. We are exploring whether we can add the new column at the position of the original column.
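
The swap plus the re-sequencing trick from the linked article would look roughly like this. A sketch with hypothetical names (table tab, sensitive column card_no, trailing columns col_after1/col_after2); the invisible-then-visible flip is what pushes the trailing columns back behind the swapped column:

<code>
-- ahead of the window: staging column, populated and kept in sync outside downtime
alter table tab add (card_no_enc varchar2(256) invisible);

-- inside the window: swap names and make the new column visible
alter table tab rename column card_no to card_no_old;
alter table tab rename column card_no_enc to card_no;
alter table tab modify (card_no visible);   -- appears as the LAST visible column
alter table tab set unused column card_no_old;

-- re-sequencing trick: flip the columns that should follow card_no
alter table tab modify (col_after1 invisible, col_after2 invisible);
alter table tab modify (col_after1 visible, col_after2 visible);
</code>

Positional code (INSERT without a column list, SELECT *) will only see the new order once the flip completes, so the whole sequence belongs inside the downtime window.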
Categories: DBA Blogs

Shuffle quantities between buckets

Tom Kyte - Thu, 2021-05-06 19:26
Hi Tom, I am given the "current" allocation of items to eight buckets, and I want to make it more efficient by filling as much as possible of bucket A, then of bucket B, then of bucket C (as indicated by the "priority"). This is done by moving items between buckets: take as many P1 items as possible from bucket H and reassign them to bucket A, then to bucket B, etc., until all of them are allocated; then take the next lowest-priority bucket and repeat. How much of P1 and P2 to fill is defined in the volumes table, i.e. each of A through H can have quantities in multiples of eight and seven of P1 and P2 respectively in the sample data. I also want to include round-up and round-down logic to distribute the quantities nearly equally across buckets if one bucket has too big a quantity. The items P1 and P2 are completely independent, and one's result shouldn't impact the other's. The height, weight, and width don't matter here, so they are not present in any sample data. Below is what I have started with, but I couldn't make the round-up and round-down cases work. Also, in the cases when quantity is moved into two or more buckets from one bucket, or moved out from two buckets into one bucket, can we show a single comma-separated row instead of multiple step-by-step rows? In the cases when one row has too much quantity, can we implement round-up and round-down logic to distribute the quantities nearly equally across buckets, in multiples of the quantities in the value table? E.g. if we update the quantity to seventy-two in bucket H of part P1, the current result gives five rows for bucket H. Can we round buckets A-H with sixteen and then put the remaining ones in bucket H?
Categories: DBA Blogs

APEX Message box label

Tom Kyte - Thu, 2021-05-06 01:06
Is it possible to change the labels of the confirm dialog buttons from "Cancel/Ok" to "No/Yes" in APEX?
Categories: DBA Blogs

Is there an Oracle document that has a checklist to be able to answer whether a database server will handle "peak" load

Tom Kyte - Thu, 2021-05-06 01:06
Greetings, A question from the client that comes up every few years is to predict whether the Oracle database server will be able to handle a new application's peak load. Instead of trying to think of all that needs to be considered on the fly, it would be great if there were an Oracle document with a checklist of all the questions we must answer, so that we can give the client a definite answer of "yes" provided x, y and z are performed. I know that in most cases this will be nearly impossible to answer, as it will take too much time and we can't control the variables for the other apps that share the same resources, like database, network, SAN, etc. For instance, usually the network and SAN are shared with the database server, so we will need to get the peak loads of all the other applications plus the expected max throughput for the network and SAN. Thanks for your help, John
Categories: DBA Blogs

Kubernetes ConfigMaps and Secrets: Guide to Create and Update

Online Apps DBA - Thu, 2021-05-06 00:24

For applications to be portable, an application’s configuration will change between environments, so we should try to keep the configuration out of the container image with our Docker containers. ➤ ConfigMaps A ConfigMap provides a method to include non-sensitive configuration information in your deployment. A ConfigMap contains information in key-value pairs. However, the values can […]

The post Kubernetes ConfigMaps and Secrets: Guide to Create and Update appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Google Cloud Run: Fully Managed Compute Environment | Use-Case | Steps To Deploy

Online Apps DBA - Thu, 2021-05-06 00:01

Google Cloud Run is a fully managed compute platform that enables users to run stateless containers which can be invoked via Pub/Sub events or web requests. It is serverless which implies that it abstracts all the infrastructure management tasks so that users can focus on building great applications.It is built from Knative that allows serving […]

The post Google Cloud Run: Fully Managed Compute Environment | Use-Case | Steps To Deploy appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

Hemant K Chitale - Wed, 2021-05-05 09:40

 You are attempting to restore a database to another server.  

So, you have verified that you have controlfile and datafile backups on the source server:



RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
51 Full 11.52M DISK 00:00:01 20-FEB-21
BP Key: 51 Status: AVAILABLE Compressed: NO Tag: TAG20210220T114245
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
Control File Included: Ckp SCN: 1093419 Ckp time: 20-FEB-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
55 Full 11.52M DISK 00:00:02 04-MAY-21
BP Key: 55 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232054
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
Control File Included: Ckp SCN: 1126526 Ckp time: 04-MAY-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
56 Full 11.48M DISK 00:00:01 04-MAY-21
BP Key: 56 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232851
Piece Name: /home/oracle/controlfile.bak
Control File Included: Ckp SCN: 1126757 Ckp time: 04-MAY-21

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
57 Full 11.52M DISK 00:00:02 04-MAY-21
BP Key: 57 Status: AVAILABLE Compressed: NO Tag: TAG20210504T232853
Piece Name: /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
Control File Included: Ckp SCN: 1126766 Ckp time: 04-MAY-21

RMAN>


You have copied the backups to the target, new, server and attempt to restore :

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed May 5 22:27:26 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area 1207958960 bytes

Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes

RMAN> restore controlfile from '/home/oracle/controlfile.bak';

Starting restore at 05-MAY-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/05/2021 22:27:47
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>ls /home/oracle/controlfile.bak
/home/oracle/controlfile.bak
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
/opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


So, why do you get the RMAN-06172 error? All the controlfile backups, including the manual backup to /home/oracle/controlfile.bak and the three autobackups (one from February 2021 and two from 04-May-2021), are available.

oracle19c>oerr rman 6172
6172, 1, "no AUTOBACKUP found or specified handle is not a valid copy or piece"
// *Cause: A restore could not proceed because no AUTOBACKUP was found or
// specified handle is not a valid copy or backup piece.
// In case of restore from AUTOBACKUP, it may be the case that a
// backup exists, but it does not satisfy the criteria specified in
// the user's restore operands.
// In case of restore from handle, it may be the handle is not a
// backup piece or control file copy. In may be that it does not
// exist.
// *Action: Modify AUTOBACKUP search criteria or verify the handle.
oracle19c>
oracle19c>ls -l /home/oracle/controlfile.bak
-rw-r-----. 1 root root 12058624 May 4 23:28 /home/oracle/controlfile.bak
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
-rw-r-----. 1 root root 12091392 Feb 20 11:42 /opt/oracle/FRA/HEMANT/autobackup/2021_02_20/o1_mf_s_1065008565_j3119p5t_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:20 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703254_j92slr2m_.bkp
oracle19c>ls -l /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
-rw-r-----. 1 root root 12091392 May 4 23:28 /opt/oracle/FRA/HEMANT/autobackup/2021_05_04/o1_mf_s_1071703733_j92t1pow_.bkp
oracle19c>


You get the "error" message that there are no AUTOBACKUPs because the "oracle19c" account is unable to actually *read* those pieces.  It can list them using "ls" because it has permission to read the OS folders containing them, but it does no have permission to read the files owned by root without having granted read permission.

So, before you start wondering about your AUTOBACKUP configuration or search criteria specification like "RESTORE CONTROLFILE FROM AUTOBACKUP MAXDAYS 30", check whether the backup pieces are actually readable by the Oracle software owner (for example, have root change the ownership with chown, or grant read access with chmod o+r).


Categories: DBA Blogs

Celery Distributed Task Queue with FastAPI for Machine Learning

Andrejus Baranovski - Wed, 2021-05-05 06:43
This sample app demonstrates how to implement Celery distributed task queues on top of a RabbitMQ broker for Machine Learning model training and data processing. We are using TensorFlow in this example to train the model. API requests come in through FastAPI and are processed asynchronously by Celery. There is a separate API endpoint to check task status. Multiple requests can be initiated and processed at the same time in parallel. Celery tasks can be monitored using the Flower monitoring tool.

[DP-100] Design & Implement a Data Science Solution on Azure Training Day 1 FAQ’s

Online Apps DBA - Wed, 2021-05-05 02:38

This blog post (k21academy.com/dp10024) will cover the Q&As from Day 1 of Design & Implement a Data Science Solution on Azure [DP-100], in which we covered Module 1: Getting Started with Azure Machine Learning, and Module 2: No-Code Machine Learning (Auto ML) FAQs. This blog will help you to get started with Design & […]

The post [DP-100] Design & Implement a Data Science Solution on Azure Training Day 1 FAQ’s appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
