Feed aggregator

oracle utl_file encoding from utf8; missing character

Tom Kyte - 5 hours 52 min ago
I'd like to export a large amount of text data from the db to a file. The character set in the db is UTF8. The expected result in the file is ISO8859P2 or MSWIN1250. My db settings: <code> SELECT * FROM v$nls_parameters; 1 NLS_LANGUAGE HUNGARIAN 0 ...
Categories: DBA Blogs

How do I determine how much storage will be required for NUMBER(p, s)?

Tom Kyte - 5 hours 52 min ago
<i>Oracle Database SQL Language Reference</i> states, "Each NUMBER value requires from 1 to 22 bytes." I was curious how much space a NUMBER(10, 0) would take, so I did: <code>SQL> create table t (a number(10, 0)) 2 / Table created. SQL>...
Categories: DBA Blogs

Exception when executing Bulk Collect

Tom Kyte - 5 hours 52 min ago
I am using bulk collect with save exceptions, it runs fine sometimes and sometimes it errors out. The error is as follows - <code>BEGIN * ERROR at line 1: ORA-00028: your session has been killed ORA-00028: your session has been killed ORA-...
Categories: DBA Blogs

Performance issue after database upgrade from 11.2.0.4 to 12.1.0.2

Tom Kyte - 5 hours 52 min ago
Hi Team, We have recently upgraded a SAP database from 11.2.0.4 to 12.1.0.2, and afterwards encountered performance issues that did not exist on the previous version (11.2.0.4). Properly setting sga_target/sga_maxsize did not help. Properly setting db_file_m...
Categories: DBA Blogs

DBMS_SQLTUNE_UTIL0

Tom Kyte - 5 hours 52 min ago
Hi Tom, I have a question regarding the DBMS_SQLTUNE_UTIL0 package. This seems to be a helper package. Is it okay to use this package directly in code, or should the use of such internal packages be avoided? I did not find any documentation on this packa...
Categories: DBA Blogs

Space Fragmentation in a table which has only inserts

Tom Kyte - 5 hours 52 min ago
Hi Tom, This is Praveen. 1) I have a table x which has only inserts and rare updates, so my doubt is: till now I have heard and read that there will be space fragmentation for delete operations done on a table, and I couldn't find out how the ...
Categories: DBA Blogs

definition of v$lock table

Tom Kyte - 5 hours 52 min ago
Please find the below query output: select OWNER,OBJECT_NAME,OBJECT_TYPE from DBA_OBJECTS where OBJECT_NAME in ('V$LOCK'); ---PUBLIC V$LOCK SYNONYM As per the above query output, V$LOCK is a SYNONYM, so I checked DBA_SYNONYMS for further informa...
Categories: DBA Blogs

Rownum = 1 no longer returning results after upgrade to 12.1

Tom Kyte - 5 hours 52 min ago
Hi, I have a view that I created in 11g, after we upgraded the database to 12.1 it is no longer returning the dates for the view. I pulled the SQL for the view and confirmed that it no longer works. It does return the comment. I need to use the r...
Categories: DBA Blogs

Partner Webcast – Monitoring your cloud-based Infrastructure with Oracle Management Cloud

Modern IT supports today’s requirements by making IT infrastructure more accessible and available on demand.  IT infrastructure can be made available both in your...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partitioning -- 3b : More Indexes on Partitioned Table

Hemant K Chitale - 17 hours 15 min ago
In the previous blog post, I had demonstrated a Global (Non-Partitioned) Index and a Local (Partitioned) Index.  A Global Index itself may be partitioned.  A Local Index is actually Equi-Partitioned with the Table.

Let me demonstrate a Global Partitioned Index on the same table now.

Given the SALES_DATA table which is partitioned by SALE_DATE, I might want to support queries by CUSTOMER_ID.  What if I have millions of customers?  My CUSTOMERS table might be partitioned by CUSTOMER_ID.  Can I have an Index on SALES_DATA that is also partitioned by CUSTOMER_ID ?

SQL> l
1 create index sales_data_glbl_part_ndx_2
2 on sales_data (customer_id)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /

Index created.

SQL>


This is a Prefixed Global Partitioned Index: it is partitioned on a left prefix (CUSTOMER_ID) of the Index Column(s). Oracle does not support Global Partitioned Indexes that are non-prefixed.  Let me demonstrate the difference between the two.

SQL> drop index sales_data_glbl_part_ndx_2;

Index dropped.

SQL>
SQL> l
1 create index sales_data_glbl_part_ndx_2
2 on sales_data (customer_id, sale_date)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /

Index created.

SQL>
SQL>
SQL> l
1 create index sales_data_glbl_part_ndx_3
2 on sales_data (sale_date)
3 global
4 partition by range (customer_id)
5 (
6 partition p_1 values less than (501000) tablespace new_indexes,
7 partition p_2 values less than (1000001) tablespace new_indexes,
8 partition p_3 values less than (1500001) tablespace new_indexes,
9 partition p_4 values less than (maxvalue)
10* )
SQL> /
partition by range (customer_id)
*
ERROR at line 4:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>
SQL> !oerr ora 14038
14038, 00000, "GLOBAL partitioned index must be prefixed"
// *Cause: User attempted to create a GLOBAL non-prefixed partitioned index
// which is illegal
// *Action: If the user, indeed, desired to create a non-prefixed
// index, it must be created as LOCAL; otherwise, correct the list
// of key and/or partitioning columns to ensure that the index is
// prefixed

SQL>


My first attempt above is valid because the Index is partitioned on CUSTOMER_ID although it is defined on (CUSTOMER_ID, SALE_DATE).  Thus, it is partitioned on a left prefix. The second attempt fails because the Partition Key (CUSTOMER_ID) is not a left prefix of the Index key (SALE_DATE alone).

Remember : The Index I've created is  partitioned on CUSTOMER_ID while the Table itself is partitioned on SALE_DATE.
This Index is a Global Index so any Partition Maintenance operation (TRUNCATE, DROP, SPLIT, MERGE) may result in the *entire* Index becoming UNUSABLE unless the UPDATE INDEXES clause is used in the Partition Maintenance DDL statement.  This was demonstrated with the Primary Key index being a Global Index in my earlier blog post.
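As a sketch (against the same SALES_DATA table, using partition P_2018 shown in the listings further down; don't run it unless you really want to drop that partition), a Partition Maintenance operation would carry the clause like this:

```sql
-- Dropping a table partition WITHOUT "update indexes" would mark
-- the entire Global Index UNUSABLE; with the clause, Oracle
-- maintains the index as part of the DDL.
alter table sales_data drop partition p_2018 update indexes;
```

The trade-off is that the DDL itself takes longer, because the index maintenance happens as part of the operation instead of being deferred to a rebuild.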


What about Bitmap Indexes ?  Can a Bitmap Index on a partitioned Table be either Global or Local ?  Oracle does not support Global Bitmap Indexes.  A Bitmap Index on a Partitioned Table has to be Local.

SQL> create bitmap index sales_data_glbl_ndx_4
2 on sales_data(product_id)
3 /
on sales_data(product_id)
*
ERROR at line 2:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables


SQL> !oerr ora 25122
25122, 00000, "Only LOCAL bitmap indexes are permitted on partitioned tables"
// *Cause: An attempt was made to create a global bitmap index on a partioned
// table.
// *Action: create a local bitmap index instead.

SQL>
SQL> create bitmap index sales_data_lcl_ndx_2
2 on sales_data(product_id) local
3 /

Index created.

SQL>
SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

SQL> select partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name = 'SALES_DATA_LCL_NDX_2'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
P_2018 TBS_YEAR_2018 USABLE
P_2019 TBS_YEAR_2019 USABLE
P_2020 TBS_YEAR_2020 USABLE
P_MAXVALUE USERS USABLE

SQL>


As demonstrated earlier, a Local Index automatically inherits the Partition Name and Tablespace Name from the Table Partitions.

To summarise, these are the Indexes on my SALES_DATA table :

SQL> l        
1 select index_name, index_type, partitioned, tablespace_name, status
2 from user_indexes
3* where table_name = 'SALES_DATA'
SQL> /

INDEX_NAME INDEX_TYPE PAR
------------------------------ --------------------------- ---
TABLESPACE_NAME STATUS
------------------------------ --------
SYS_C0017514 NORMAL NO
HEMANT VALID

SALES_DATA_LCL_NDX_1 NORMAL YES
N/A

SALES_DATA_GLBL_PART_NDX_2 NORMAL YES
N/A

SALES_DATA_LCL_NDX_2 BITMAP YES
N/A


SQL>


Remember that for a Partitioned Index (either Global or Local), the Tablespace Name and Status are attributes at the Partition definition level, not at the Index definition level.
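For instance, the Global Partitioned Index created earlier can be checked the same way, at the Index Partition level (a sketch, mirroring the USER_IND_PARTITIONS query used above):

```sql
select partition_name, tablespace_name, status
  from user_ind_partitions
 where index_name = 'SALES_DATA_GLBL_PART_NDX_2'
 order by partition_position
/
```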


Categories: DBA Blogs

Learn Why and When Website prototyping is important

Nilesh Jethwa - Fri, 2018-08-17 23:07

A Simple Explanation of Website Prototyping: a website prototype is an initial design of a website that developers, testers, and users can interact with. Developers often create these prototypes to gather feedback from stakeholders or clients. They create one during … Continue reading →

Source: MockupTiger Wireframes

Court Upholds Permanent Injunction Against Rimini Street

Chris Warticki - Fri, 2018-08-17 16:18

On August 15, a Federal Court in Nevada once again granted Oracle's motion for a permanent injunction against Rimini Street and required payment to Oracle of $28 million in attorney’s fees due to Rimini’s egregious litigation misconduct. 

In upholding the injunction, the Court made clear that Rimini’s business “was built entirely on its infringement of Oracle’s copyrighted software.” In addition, the court noted that Rimini’s improper conduct not only enabled the company to grow quickly, but also unfairly “eroded the bonds and trust that Oracle has with its customers.”

In a statement, Dorian Daley, Oracle's Executive Vice President and General Counsel, reinforced the Court’s findings. “As the Court's Order today makes clear, Rimini Street's business has been built entirely on unlawful conduct, and Rimini's executives have repeatedly lied to cover up their company's illegal acts.”

To learn more about the Federal Court ruling:

1. Read full press release

2. "Oracle Punches Rimini Street Hard in Court" - Enterprise Times

The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL?

Yann Neuhaus - Fri, 2018-08-17 13:20

A recent blog post from Franck and a tweet around that topic are the inspiration for this blog post, thanks Jan for requesting :). In short it is about how small you can get the binaries. Is that important? At least when it comes to Docker images it might be, as you usually try to make the image as small as possible. Well, comparing PostgreSQL and Oracle in that field is unfair, as Oracle ships much by default that PostgreSQL just does not (e.g. Apex, SQL Developer, …), so please treat this as more of a fun post.

The way we usually compile PostgreSQL is this (not in /var/tmp in real life):

postgres@pgbox:/home/postgres/ [pg103] cd /var/tmp/
postgres@pgbox:/var/tmp/ [pg103] wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] tar -axf postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] cd postgresql-10.5/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] PGHOME=/var/tmp/pg105
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] SEGSIZE=2
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] BLOCKSIZE=8
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] WALSEGSIZE=16
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd contrib
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 install

When we do this against the PostgreSQL 10.5 source code the result is this (without the documentation, of course, but containing all the extensions):

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] du -sh /var/tmp/pg105/
28M	/var/tmp/pg105/

Can we get that even smaller? Let’s try to skip the extensions:

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] cd ..
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

What do we have now?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

We saved another 3MB. Can we do more? Let’s try to skip all the "--with" flags that enable Perl and so on in the configure command:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

Do we see a change?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

No, that does not change anything. Franck stripped the Oracle binaries and libraries, so let’s try to do the same (although I am not sure right now if that is supported):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/bin/*
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/lib/*
strip: Warning: '/var/tmp/pg105/lib/pkgconfig' is not an ordinary file
strip: Warning: '/var/tmp/pg105/lib/postgresql' is not an ordinary file
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
24M	/var/tmp/pg105/

So, another 1MB less. Can we still initialize and start PostgreSQL?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv6 address "::1", port 5432
2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-08-17 18:57:50.334 CEST [8528] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-08-17 18:57:50.354 CEST [8529] LOG:  database system was shut down at 2018-08-17 18:57:31 CEST
2018-08-17 18:57:50.358 CEST [8528] LOG:  database system is ready to accept connections
 done
server started

Looks good and we are able to connect:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 1.428 ms

What else can we do? When you do not need the utilities on the server you could just remove them (as said, this is a fun post, don’t do this):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd /var/tmp/pg105/bin
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm clusterdb createdb createuser dropdb dropuser pg_archivecleanup pg_basebackup pg_dump pg_dumpall pg_isready pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb

We could probably even remove pgbench and psql, but I will need these two to show that the server is still working. What do we have now?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
21M	/var/tmp/pg105/

Another 3MB less. Can we still restart and connect?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:08:49.588 CEST [9144] LOG:  received fast shutdown request
2018-08-17 19:08:49.593 CEST [9144] LOG:  aborting any active transactions
2018-08-17 19:08:49.597 CEST [9144] LOG:  worker process: logical replication launcher (PID 9151) exited with exit code 1
2018-08-17 19:08:49.598 CEST [9146] LOG:  shutting down
2018-08-17 19:08:49.625 CEST [9144] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:08:51.953 CEST [9368] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:08:51.966 CEST [9369] LOG:  database system was shut down at 2018-08-17 19:08:49 CEST
2018-08-17 19:08:51.969 CEST [9368] LOG:  database system is ready to accept connections
 done
server started
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 2.043 ms

Looks good. Now let’s do the final step and remove the rest, which is not required for the server. But before that we do an initdb, as we cannot do that afterwards:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:10:31.693 CEST [9368] LOG:  received fast shutdown request
2018-08-17 19:10:31.696 CEST [9368] LOG:  aborting any active transactions
2018-08-17 19:10:31.696 CEST [9368] LOG:  worker process: logical replication launcher (PID 9375) exited with exit code 1
2018-08-17 19:10:31.697 CEST [9370] LOG:  shutting down
2018-08-17 19:10:31.712 CEST [9368] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm -rf /var/tmp/testpg/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

So, remove the rest:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm pg_config pg_controldata psql pgbench initdb ecpg pgbench pg_ctl
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/

We are down to 20MB but we can still start the instance:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/ &
[1] 9486
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] 2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:13:54.924 CEST [9486] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:13:54.955 CEST [9487] LOG:  database system was shut down at 2018-08-17 19:10:56 CEST
2018-08-17 19:13:54.960 CEST [9486] LOG:  database system is ready to accept connections

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ps -ef | grep postgres
root      1061   941  0 18:26 ?        00:00:00 sshd: postgres [priv]
postgres  1064  1061  0 18:26 ?        00:00:02 sshd: postgres@pts/0
postgres  1065  1064  0 18:26 pts/0    00:00:01 -bash
postgres  9486  1065  0 19:13 pts/0    00:00:00 /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/
postgres  9488  9486  0 19:13 ?        00:00:00 postgres: checkpointer process   
postgres  9489  9486  0 19:13 ?        00:00:00 postgres: writer process   
postgres  9490  9486  0 19:13 ?        00:00:00 postgres: wal writer process   
postgres  9491  9486  0 19:13 ?        00:00:00 postgres: autovacuum launcher process   
postgres  9492  9486  0 19:13 ?        00:00:00 postgres: stats collector process   
postgres  9493  9486  0 19:13 ?        00:00:00 postgres: bgworker: logical replication launcher  
postgres  9496  1065  0 19:14 pts/0    00:00:00 ps -ef
postgres  9497  1065  0 19:14 pts/0    00:00:00 grep --color=auto postgres

Using another psql on that box we can confirm that we can connect:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /u01/app/postgres/product/10/db_4/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Still too much? What else can we do? What is consuming space?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/*
6.6M	/var/tmp/pg105/bin
5.9M	/var/tmp/pg105/include
4.1M	/var/tmp/pg105/lib
2.9M	/var/tmp/pg105/share

We cannot do more in the “bin” directory; there is nothing left to delete:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ls -l /var/tmp/pg105/bin
total 6660
-rwxr-xr-x. 1 postgres postgres 6817480 Aug 17 18:56 postgres
lrwxrwxrwx. 1 postgres postgres       8 Aug 17 18:54 postmaster -> postgres

Everything else will probably save us only a few bytes, such as the sample files:

postgres@pgbox:/var/tmp/pg105/ [pg103] find . -name *sample*
./share/postgresql/tsearch_data/synonym_sample.syn
./share/postgresql/tsearch_data/thesaurus_sample.ths
./share/postgresql/tsearch_data/hunspell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.dict
./share/postgresql/tsearch_data/hunspell_sample_long.affix
./share/postgresql/tsearch_data/hunspell_sample_long.dict
./share/postgresql/tsearch_data/hunspell_sample_num.affix
./share/postgresql/tsearch_data/hunspell_sample_num.dict
./share/postgresql/pg_hba.conf.sample
./share/postgresql/pg_ident.conf.sample
./share/postgresql/postgresql.conf.sample
./share/postgresql/recovery.conf.sample
./share/postgresql/pg_service.conf.sample
./share/postgresql/psqlrc.sample

So how much space do we consume for the PostgreSQL installation and the files which make up the instance?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/testpg/
41M	/var/tmp/testpg/

… 61MB. When we add the WAL file Jan mentioned in his tweet we come to 77MB. Not much.
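The tally is simple arithmetic (a sketch; the variable names are mine, and the 16MB matches the --with-wal-segsize=16 used at configure time):

```shell
home_mb=20   # stripped and trimmed installation under /var/tmp/pg105
data_mb=41   # the data directory /var/tmp/testpg
wal_mb=16    # one WAL segment, per --with-wal-segsize=16

echo "installation + data: $((home_mb + data_mb)) MB"            # 61 MB
echo "plus one WAL segment: $((home_mb + data_mb + wal_mb)) MB"  # 77 MB
```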

The final question is if PostgreSQL is still working. Let’s use pgbench from another installation on the same server against this:

postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -i -s 10 postgres
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 1000000 tuples (10%) done (elapsed 0.08 s, remaining 0.75 s)
200000 of 1000000 tuples (20%) done (elapsed 0.24 s, remaining 0.95 s)
300000 of 1000000 tuples (30%) done (elapsed 0.42 s, remaining 0.98 s)
400000 of 1000000 tuples (40%) done (elapsed 0.49 s, remaining 0.74 s)
500000 of 1000000 tuples (50%) done (elapsed 0.70 s, remaining 0.70 s)
600000 of 1000000 tuples (60%) done (elapsed 0.88 s, remaining 0.58 s)
700000 of 1000000 tuples (70%) done (elapsed 0.95 s, remaining 0.41 s)
800000 of 1000000 tuples (80%) done (elapsed 1.14 s, remaining 0.29 s)
900000 of 1000000 tuples (90%) done (elapsed 1.32 s, remaining 0.15 s)
1000000 of 1000000 tuples (100%) done (elapsed 1.41 s, remaining 0.00 s)
vacuum...
set primary keys...
done.
postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -s 10 postgres
scale option ignored, using count from pgbench_branches table (10)
starting vacuum...end.
transaction type: &lt;builtin: TPC-B (sort of)&gt;
scaling factor: 10
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
latency average = 4.436 ms
tps = 225.435296 (including connections establishing)
tps = 285.860401 (excluding connections establishing)

Looks good. So you can come down to 20MB for the PostgreSQL installation and another 41MB for the files you need to start the instance. You could even drop the postgres database to save another 7MB. But remember: please don’t do that, you are still fine with around 30MB :)

 

The article The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL? first appeared on the Blog dbi services.

Oracle Database Configurations using Docker and Vagrant

Hemant K Chitale - Fri, 2018-08-17 02:16
Oracle now makes available configurations for the Database (and other products) on both Docker and Vagrant via GitHub.

Good time to familiarize oneself with GitHub, Docker and/or Vagrant.


For the Docker configuration see : https://github.com/oracle/docker-images/tree/master/OracleDatabase


For the Vagrant configuration see : https://github.com/oracle/vagrant-boxes/tree/master/OracleDatabase


(Note : The Examples have been available on GitHub for quite some time at https://github.com/oracle/oracle-db-examples )



Categories: DBA Blogs

Subtract hours and show milliseconds for a TIMESTAMP(6) column

Tom Kyte - Fri, 2018-08-17 02:06
I want to SELECT a TIMESTAMP(6) with milliseconds, but at the same time I need to subtract 3 hours (0.125 of a day) from that TIMESTAMP to convert it to my timezone. So I tried: <code>SELECT To_Char(UTCSCANTIME-0.125,'YYYY-MM-DD HH24:MI:SS') AS LO...
Categories: DBA Blogs

RMAN Commands

Michael Dinh - Fri, 2018-08-17 00:21

A note to self on configuring and clearing settings, and all things RMAN.

I will add more as time goes by.

catalog backuppiece '/u01/app/oracle/backup/HAWK_3241949199_20180816_bctand12_1_1.bkp';
+++ Remove summary to get full details.
list backup of archivelog all summary completed after 'sysdate-1';
list backup of archivelog from sequence 243 summary;
list backup summary completed after 'sysdate-1';

list backup of controlfile summary tag=STBY;
list backup of controlfile summary;
list backup of spfile summary;

list backup by file;

backup incremental level 0 check logical database filesperset 1 tag=LEVEL0 plus archivelog filesperset 8 tag=LEVEL0;
backup current controlfile for standby tag=STBY;

+++ Remove preview/validate/summary for actual restore.
restore controlfile validate from tag=STBY;
restore controlfile validate preview summary from tag=STBY; (Error if insufficient backup)
restore spfile validate preview summary;

restore database validate;
restore database validate preview summary;
restore database until time "TRUNC(sysdate)+17/24" validate preview summary;

configure controlfile autobackup on;
configure controlfile autobackup clear;

configure controlfile autobackup format for device type disk to '/media/swrepo/backup/%d_%i_%f.ctl';
configure controlfile autobackup format for device type disk clear;

configure device type disk backup type to compressed backupset parallelism 2;
configure device type disk clear;

configure channel device type disk format '/media/swrepo/backup/%d_%i_%t_%u.bkp' maxpiecesize 1g maxopenfiles 1;
configure channel device type disk clear;

configure archivelog deletion policy to backed up 1 times to disk applied on all standby;
configure archivelog deletion policy clear;

configure db_unique_name 'hawka' connect identifier 'hawka';
configure db_unique_name 'hawka' clear;
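The commands above can also be run non-interactively by collecting them in an RMAN command file. A minimal sketch (the cmdfile and log paths are made up, and actually running rman requires an Oracle environment with ORACLE_HOME and ORACLE_SID set):

```shell
# Build a small RMAN command file from the listing commands above.
cat > /tmp/rman_report.rcv <<'EOF'
list backup summary completed after 'sysdate-1';
list backup of spfile summary;
EOF
# With an Oracle environment available, run it non-interactively:
# rman target / cmdfile=/tmp/rman_report.rcv log=/tmp/rman_report.log
```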

“Let’s Talk Database” coming to Wellington, Auckland, Brisbane and Perth in September.

Richard Foote - Thu, 2018-08-16 19:08
Due to popular demand, I’ve been asked by Oracle to again run some more “Let’s Talk Database” events in September. Dates and venues are as follows: Monday, 3 September – Wellington (Wellington Oracle Office): Registration Link. Tuesday, 4 September – Auckland (Auckland Oracle Office): Registration Link. Wednesday, 12 September – Brisbane (Brisbane Oracle Office): Registration Link. Thursday, 13 September – Perth (Perth Oracle Office): Registration Link. […]
Categories: DBA Blogs

ADF BC REST Query and SQL Nesting Control Solution

Andrejus Baranovski - Thu, 2018-08-16 15:04
I will talk about an expert mode View Object (with hand-written SQL); this View Object is created from a SQL join. So, that's my use case for today's example. I will describe an issue related to the generated SQL statement and give a hint on how to solve it. This is particularly useful if you want to expose a complex VO (SQL with joins and calculated totals) over an ADF BC REST service and then run queries against this REST resource.

Code is available on my GitHub repository.

Here is SQL join and expert mode VO (the one where you can modify SQL by hand):


This VO is exposed through ADF BC REST; I will not go through those details, you can find more info about it online. Once the application is running, the REST resource is accessible through GET. ADF BC REST syntax allows passing a query string along with the REST request; here I'm filtering based on StreetAddress='ABC':
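Such a filtered GET request could be sketched as below; the host, port, context root and resource name are hypothetical, only the q query parameter follows the ADF BC REST convention:

```shell
# Sketch of a filtered ADF BC REST call; base URL and resource name are made up.
base="http://localhost:7101/myapp/rest/v1/Employees"
query="StreetAddress='ABC'"
echo "GET ${base}?q=${query}"
# With the service actually running, the call would look like:
# curl -G "$base" --data-urlencode "q=$query"
```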


On the backend this works OK by default and generates a nested query (this is expected behaviour for expert mode VOs: all additional criteria clauses are added through SQL wrapping). While such a query executes just fine, this is not what we want in some use cases. If we calculate totals or average aggregated values in SQL, we don't want it to be wrapped:


To prevent SQL wrapping, we can call an ADF BC API method in the VO constructor:


While this probably works with regular ADF BC, it doesn't work with criteria coming from ADF BC REST. After query nesting is disabled, the SQL query is generated with two WHERE clauses:


My proposed solution: override the executeQueryForCollection method, do some parsing to change the second WHERE to AND, apply the changed query string and then call super:


This trick helps, and the query is generated as we would expect; the criteria added by the ADF BC REST query call is appended at the end of the WHERE clause:

The size of Oracle Home: from 9GB to 600MB

Yann Neuhaus - Thu, 2018-08-16 14:43

This is research only and totally unsupported. When building docker images to run Oracle Database in a container, we try to get the smallest image possible. One way is to remove some subdirectories that we know will not be used. For example, the patch history is not used anymore once we have the required version. The dbca templates can be removed as soon as we have created the database… In this post I take the opposite approach: run some workload on a normal Oracle Home, and keep only the files that were used.

I have Oracle Database 18c installed in /u00/app/oracle/product/18EE and it takes 9GB on my host:

[oracle@vmreforatun01 ~]$ du --human-readable --max-depth=1 $ORACLE_HOME | sort -h | tail -10
 
352M /u00/app/oracle/product/18EE/jdk
383M /u00/app/oracle/product/18EE/javavm
423M /u00/app/oracle/product/18EE/inventory
437M /u00/app/oracle/product/18EE/assistants
605M /u00/app/oracle/product/18EE/md
630M /u00/app/oracle/product/18EE/bin
673M /u00/app/oracle/product/18EE/apex
1.4G /u00/app/oracle/product/18EE/.patch_storage
2.3G /u00/app/oracle/product/18EE/lib
9.4G /u00/app/oracle/product/18EE

Gigabytes of libraries (most of them used only to link the executables), hundreds of megabytes of binaries, templates for new databases, applied patches, old object files, options, tools, command line and graphical interfaces,… Do we need all that?

For a full installation in production, yes, for sure. The more we have, the better it is. When you have to connect at 2 a.m. because you are on call and a critical alert wakes you up, you will appreciate having all the tools on the server. Especially if you connect through a few security obstacles such as remote VPN, desktop, Wallix, tunnels, to finally get a high-latency tty with no copy-paste possibilities. With a full Oracle Home, you can face any issue. You have efficient command line interfaces (sqlplus and hopefully sqlcl) or graphical ones (SQL Developer, asmca,…). For severe problems, you can even re-link, apply or roll back patches, or quickly create a new database to import something into,…

But what if you just want to provide a small container where a database is running, with no additional administration support? Where you will never re-install the software, apply patches, re-create the database, or troubleshoot weird issues. Just have users connect through the listener port and never log in to the container. Then, most of these 9.4 GB are useless.

But how to know which files are useful or not?

If you can rely on Linux ‘access time’ then you may look at the files accessed during the last days – after any installation or database creation is done:

[oracle@vmreforatun01 ~]$ find $ORACLE_HOME -atime -1 -exec stat -L -c "%x %y %z %F %n" {} \; | sort

But this is not reliable. Access time depends on the file type, filesystem, mount options,… and is usually bypassed as much as possible because writing something just to log that you read something is not a very good idea.

Here, I’ll trace all system calls related to file names (strace -e trace=file). I’ll trace them from the start of the database, so I run strace on dbstart with the -f argument to trace across forks. Then, I’ll trace the listener, the instance processes and any user process created through the listener.

I pipe the output to an awk script which extracts the file names (enclosed in double quotes in the strace output). Basically, the awk just sets the field separator with -F'"' and prints the $2 token for each line. There are many single and double quotes here because of shell interpretation.

[oracle@vmreforatun01 ~]$ dbshut $ORACLE_HOME ; strace -fe trace=file -o "|awk -F'"'"'"' '"'{print $2}'"'" sh -xc "dbstart $ORACLE_HOME >&2" | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt &
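The one-liner is hard to read because of the nested quoting; an equivalent, less dense version (assuming the same environment) writes the strace log to a file first and then extracts the quoted names. The extraction step itself can be checked on a sample strace line:

```shell
# Less quote-heavy equivalent of the pipeline above (requires the Oracle env):
# strace -f -e trace=file -o /tmp/strace.log sh -c "dbstart $ORACLE_HOME"
# awk -F'"' '{print $2}' /tmp/strace.log | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt

# The awk extraction, demonstrated on a sample strace line:
echo 'open("/u00/app/oracle/product/18EE/lib/libclntsh.so", O_RDONLY) = 3' \
  | awk -F'"' '{print $2}'
# prints: /u00/app/oracle/product/18EE/lib/libclntsh.so
```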

Then I run some activity. I did this on our Oracle Tuning training workshop lab, when reviewing all exercises after upgrading the lab VM to 18c. This runs some usual SQL for application (we use Swingbench) and monitoring. The idea is to run through all features that you want to be available on the container you will build.

When I’m done, I dbshut (remember this is for a lab only – strace is not for production) and then the strace output gets deduplicated (sort -u) and written to /tmp/files.txt.

This file contains all files referenced by system calls. Surprisingly, there is one that is not captured here, the ldap messages file, but if I do not include it then remote connections will fail with:

ORA-07445: exception encountered: core dump [gslumcCalloc()+41] [SIGSEGV] [ADDR:0x21520] [PC:0x60F92D9] [Address not mapped to object] []

I found it through a very empirical approach and will try to understand it later. For the moment, I just add it to the list:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/ldap/mesg/ldapus.msb >> /tmp/files.txt

I also add adrci and dbshut scripts as they are small and may be useful:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/bin/adrci $ORACLE_HOME/bin/dbshut >> /tmp/files.txt

From this list, I keep those entries which are not directories, and tar all regular files and symbolic links into /tmp/smalloh.tar:

[oracle@vmreforatun01 ~]$ stat -c "%F %n" $(cat /tmp/files.txt) | awk '!/^directory/{print $3}' | tar -cvf /tmp/smalloh.tar --dereference --files-from=-

This is a 600M tar:

[oracle@vmreforatun01 ~]$ du -h /tmp/smalloh.tar
 
598M /tmp/smalloh.tar
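The tar options used here (--dereference to follow symlinks, --files-from=- to read names from stdin) can be tried on a toy tree first; a self-contained sketch with made-up paths:

```shell
# Toy demonstration: archive only the files named in a list, not the whole tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/oh/bin" "$tmp/oh/lib"
echo used   > "$tmp/oh/bin/oracle"
echo unused > "$tmp/oh/lib/leftover.o"
# Feed the "used files" list on stdin, as the real pipeline above does:
printf '%s\n' "$tmp/oh/bin/oracle" \
  | tar -cf "$tmp/small.tar" --dereference --files-from=-
tar -tf "$tmp/small.tar"   # lists only oh/bin/oracle, not lib/leftover.o
rm -rf "$tmp"
```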

Then I can remove my Oracle Home

[oracle@vmreforatun01 ~]$ cd $ORACLE_HOME/..
[oracle@vmreforatun01 product]$ rm -rf 18EE
[oracle@vmreforatun01 product]$ mkdir 18EE

and extract the files from my tar:

[oracle@vmreforatun01 /]$ tar -xf /tmp/smalloh.tar

I forgot that there are some setuid executables, so I must extract them as root to restore the setuid bits:

[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwxr-x--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle
[oracle@vmreforatun01 /]$ su
Password:
[root@vmreforatun01 /]# tar -xf /tmp/smalloh.tar
[root@vmreforatun01 /]# exit
[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle

That’s a 600MB Oracle Home then. You can reduce it further by stripping the binaries:

[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
599M /u00/app/oracle/product/18EE
[oracle@vmreforatun01 18EE]$ strip $ORACLE_HOME/bin/* $ORACLE_HOME/lib/*
[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
570M /u00/app/oracle/product/18EE

but for only 30MB I really prefer to have all the symbols. As I'm doing something completely unsupported, I may have to do some troubleshooting.

Now I’m ready to start the database and the listener:

[oracle@vmreforatun01 18EE]$ dbstart $ORACLE_HOME
Processing Database instance "DB1": log file /u00/app/oracle/product/18EE/rdbms/log/startup.log

and I run some Swingbench workload to check that everything is fine:

[oracle@vmreforatun01 18EE]$ /home/oracle/swingbench/bin/charbench -cs //localhost:1521/APP -u soe -p soe -uc 10 -min 5 -max 20 -a -v
Author : Dominic Giles
Version : 2.5.0.932
 
Results will be written to results.xml.
 
Time Users TPM TPS
 
6:35:15 PM 0 0 0
...
6:35:44 PM 10 12 9
6:35:45 PM 10 16 4
6:35:46 PM 10 21 5
6:35:47 PM 10 31 10

The only errors in alert.log are about checking the patches at install:

QPI: OPATCH_INST_DIR not present:/u00/app/oracle/product/18EE/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 767
ORA-06512: at "SYS.DBMS_QOPATCH", line 547
ORA-06512: at "SYS.DBMS_QOPATCH", line 2124

Most of those 600MB are in the server executable (bin/oracle) and client shared library (lib/libclntsh.so):

[oracle@vmreforatun01 ~]$ size -td /u00/app/oracle/product/18EE/bin/* /u00/app/oracle/product/18EE/lib/* | sort -n
 
text data bss dec hex filename
2423 780 48 3251 cb3 /u00/app/oracle/product/18EE/lib/libofs.so
4684 644 48 5376 1500 /u00/app/oracle/product/18EE/lib/libskgxn2.so
5301 732 48 6081 17c1 /u00/app/oracle/product/18EE/lib/libodm18.so
10806 2304 1144 14254 37ae /u00/app/oracle/product/18EE/bin/sqlplus
13993 2800 1136 17929 4609 /u00/app/oracle/product/18EE/bin/adrci
46456 3008 160 49624 c1d8 /u00/app/oracle/product/18EE/lib/libnque18.so
74314 4824 1248 80386 13a02 /u00/app/oracle/product/18EE/bin/oradism
86396 23968 1144 111508 1b394 /u00/app/oracle/product/18EE/bin/lsnrctl
115523 2196 48 117767 1cc07 /u00/app/oracle/product/18EE/lib/libocrutl18.so
144591 3032 160 147783 24147 /u00/app/oracle/product/18EE/lib/libdbcfg18.so
216972 2564 48 219584 359c0 /u00/app/oracle/product/18EE/lib/libclsra18.so
270692 13008 160 283860 454d4 /u00/app/oracle/product/18EE/lib/libskjcx18.so
321701 5024 352 327077 4fda5 /u00/app/oracle/product/18EE/lib/libons.so
373988 7096 9536 390620 5f5dc /u00/app/oracle/product/18EE/lib/libmql1.so
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabaseconfig
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabasehome
878351 36800 1144 916295 dfb47 /u00/app/oracle/product/18EE/bin/tnslsnr
928382 108920 512 1037814 fd5f6 /u00/app/oracle/product/18EE/lib/libcell18.so
940122 56176 2376 998674 f3d12 /u00/app/oracle/product/18EE/lib/libsqlplus.so
1118019 16156 48 1134223 114e8f /u00/app/oracle/product/18EE/lib/libocr18.so
1128954 5936 160 1135050 1151ca /u00/app/oracle/product/18EE/lib/libskgxp18.so
1376814 18548 48 1395410 154ad2 /u00/app/oracle/product/18EE/lib/libocrb18.so
1685576 130464 160 1816200 1bb688 /u00/app/oracle/product/18EE/lib/libasmclntsh18.so
2517125 16496 15584 2549205 26e5d5 /u00/app/oracle/product/18EE/lib/libipc1.so
3916867 86504 111912 4115283 3ecb53 /u00/app/oracle/product/18EE/lib/libclntshcore.so.18.1
4160241 26320 69264 4255825 40f051 /u00/app/oracle/product/18EE/lib/libmkl_rt.so
5120001 459984 7784 5587769 554339 /u00/app/oracle/product/18EE/lib/libnnz18.so
10822468 302312 21752 11146532 aa1524 /u00/app/oracle/product/18EE/lib/libhasgen18.so
11747579 135320 160 11883059 b55233 /u00/app/oracle/product/18EE/lib/libshpkavx218.so
61758209 2520896 134808 64413913 3d6e0d9 /u00/app/oracle/product/18EE/lib/libclntsh.so.18.1
376147897 3067672 602776 379818345 16a39169 /u00/app/oracle/product/18EE/bin/oracle
487369241 7106932 1203944 495680117 1d8b7a75 (TOTALS)

Of course, this is probably not sufficient, especially if you want to run APEX, OJVM, or Oracle Text. The method is there: run a workload that covers everything you need, and build the Oracle Home from the files used by it. I used strace here, but auditd can also be a good idea. Ideally, this job will be done one day by Oracle itself in a supported way, so that we can build a core container for Oracle Database and add features as Dockerfile layers. This was done to release Oracle XE 11g, which is only 300MB. However, Oracle XE 18c, announced for October, will probably be larger as it includes nearly all options.

 

The article The size of Oracle Home: from 9GB to 600MB appeared first on the dbi services blog.

Unable to gather table stats in parallel

Tom Kyte - Thu, 2018-08-16 07:46
Hi, We're running gather_table_stats with the following arguments: dbms_stats.gather_table_stats(ownname=>'&owner', tabname=>'&table',estimate_percent=>0.1,block_sample=>true, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO',cascade=>fals...
Categories: DBA Blogs

Subscribe to Oracle FAQ aggregator