Feed aggregator

Can you shed some light on this ora error [session idle bit]

Tom Kyte - 11 hours 52 min ago
SYS@XYZ> select sum(ksmchsiz) ||' bytes' "ToSHRPOOLMem" from x$ksmsp; ^C ^C select sum(ksmchsiz) ||' bytes' "ToSHRPOOLMem" from x$ksmsp * ERROR at line 1: ORA-00603: ORACLE server session ...
Categories: DBA Blogs

How to restore packages from rman backups

Tom Kyte - 11 hours 52 min ago
Hello, Is it possible to restore packages from rman backups? I know the export method that can do it, but I want to know if extended rman can do such a thing?
Categories: DBA Blogs

Encryption

Tom Kyte - 11 hours 52 min ago
Hi Tom - facing this issue of incorrect string sizing while decrypting. Usage of trim doesn't solve the issue. Below is the code. 1. CREATE OR REPLACE FUNCTION CRYPT( P_STR IN VARCHAR2 ) RETURN VARCHAR2 AS ...
Categories: DBA Blogs

ODA 12.2.1.2.0: Some curious password management rules

Yann Neuhaus - 12 hours 25 min ago

While deploying an ODA based on the DCS stack (odacli), you must provide a “master” password at appliance creation. The web GUI displays a small tooltip describing the password management rules. However, it looks like there is some flexibility in how those rules are enforced. Let's check this out with some basic tests.

First of all, here are the rules as displayed by the ODA interface:

[Screenshot: 41-Web-CreateAppliance-PWDRules]

So basically it has to start with an alpha character and be at least 9 characters long. My first reaction was that 9 characters is not too bad, even if 10 would be a better minimum. Unfortunately, no additional complexity is required (mixing uppercase, lowercase, numbers…). My second reaction, like most IT guys, was to try breaking these rules and see what happens :-P
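Expressed as a quick shell check, the two documented rules look like this (a sketch only – the function name is mine and the real installer may apply checks not shown in the tooltip):

check_oda_pwd() {
  # Documented rules only: starts with a letter, at least 9 characters.
  local pwd="$1"
  if [[ ${#pwd} -ge 9 && $pwd =~ ^[A-Za-z] ]]; then
    echo "OK: '$pwd' satisfies the documented rules"
  else
    echo "KO: '$pwd' violates the documented rules"
  fi
}

check_oda_pwd test        # KO: far too short
check_oda_pwd manager     # KO: only 7 characters
check_oda_pwd welcome123  # OK: 10 characters, starts with a letter
check_oda_pwd welcome1    # KO: only 8 characters... in theory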

I started very basically, using a “highly secure” password: test

[Screenshot: 44-Web-CreateAppliance-PWD-test]

Perfect, the ODA reacted as expected and told me I should read the rules once again. The next step was to try something a bit more complicated: manager

..and don’t tell me you never used it in any Oracle environment ;-)

[Screenshot: 42-Web-CreateAppliance-PWD-manager]

Fine, manager is still not 9 characters long (only 7, indeed), and the installer is still complaining. So far, everything is okay.
The next step was to try a password respecting the 9-character rule: welcome123

[Screenshot: 43-Web-CreateAppliance-PWD-welcome123]

Still a faultless reaction from the ODA!

Then I had the strange idea to test the historical ODA password: welcome1

[Screenshot: 43-Web-CreateAppliance-PWD-welcome1]

Oops! The password starts with an alpha character, fine, but if I’m right welcome1 is only 8 characters long :-?
If you don’t believe me, try counting the dots in the picture above… and I swear I didn’t use Gimp to “adjust” it ;-)

Finally, just to be sure, I tried another 8-character password: welcome2

[Screenshot: 43-Web-CreateAppliance-PWD-welcome2]

Ah, that looks better. This time the installer sees that the password is not long enough and shows a warning.

…but does that mean welcome1 is hard-coded somewhere??

 

No matter, let’s continue and run the appliance creation with welcome123. Once done, I tried to log in over SSH to my brand new ODA using my new master password:

[Screenshot: 43-CreateAppliance-PWD-SSH-Login-]

It doesn’t work! 8-O

I tried multiple combinations of welcome123, welcome1, Welcome123 and much more. Unfortunately, none of them worked.
At this point there are only 2 ways to connect back to your ODA:

  1. There is still a shell connected as root to the ODA, in which case the root password can easily be changed using passwd (see the sketch below)
  2. No session is open on the ODA anymore, in which case you have to open the remote console and reboot the ODA in single-user mode :-(
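For option 1, a minimal sketch of the reset, run from the still-open root shell (passwd prompts interactively for each new password):

for user in root grid oracle; do
  passwd "$user"
done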

As the master password should be set for the root, grid and oracle users, I tried the password for grid and oracle too:

[Screenshot: 43-CreateAppliance-PWD-oracle-login]

Same thing there: the master password provided during the appliance creation hasn’t been set properly.

Hope it helps!

 

The article ODA 12.2.1.2.0: Some curious password management rules appeared first on Blog dbi services.

Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand

Oracle Press Releases - 14 hours 57 min ago
Press Release
Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand Delivers agile online charging for high-growth mobile, cloud and IoT services

Redwood Shores, Calif.—Feb 22, 2018

Oracle today announced the latest version of Oracle Communications Network Charging and Control (NCC), a key product in Oracle’s complete digital monetization portfolio which addresses communications, cloud and IoT services. A modern, standards-based online charging system for prepaid dominant mobile markets, Oracle Communications NCC expands the reach of Oracle’s digital monetization portfolio to help service providers, mobile virtual network enablers (MVNEs) and operators (MVNOs) in high growth markets, introduce innovative and interactive mobile offers to rapidly and cost effectively monetize their brands. Key capabilities introduced in this new release include 3GPP advanced data charging and policy integration together with support for contemporary, cost effective deployments on Oracle Linux.

The pre-paid market for consumer mobile broadband and Intelligent-Network (IN) voice services continues to grow globally. Ovum forecasts1 that the market for pre-paid mobile voice and data subscriptions will grow from 5.5B subscriptions in 2017 to 6.0B subscriptions in 2022 with highest net growth in developing markets. In addition, the GSMA estimates there to be almost 1,000 MVNOs globally with more than 250 mobile network operator (MNO) sub-brands, all seeking growth through brand differentiation.

For such operators, Oracle Communications NCC provides advanced mobile broadband and IN monetization, intuitive graphical service logic design and complete prepaid business management in a single solution. It supports flexible recharge and targeted real-time promotions, complete and secure voucher lifecycle management, and a large set of pre-built and configurable service templates for the rapid launch of new innovative offers. This is critical as competitive pressures and customer expectations mount, requiring service providers to rethink their services and how they can increase brand engagement and loyalty. With this evolution in services, it’s imperative that underlying charging systems evolve to meet these changing business requirements—across digital, cloud and IoT services.

“ASPIDER-NGI builds, supports and operates innovative MVNO and IoT platforms for Operator, Manufacturer and Enterprise sectors,” said David Traynor, Chief Marketing Officer, ASPIDER-NGI. “We use Oracle Communications Network Charging and Control as part of our MVNE infrastructure, allowing our clients to quickly deploy new mobile data and intelligent network services. Our clients demand the controls to deliver competitive offerings to specific customer segments and to support their own IoT business models. This release provides us the agility to accelerate our pace of innovation with an online charging platform that supports the latest 3GPP technologies.”

Oracle Communications NCC aligns with 3GPP Release 14 Policy and Charging Control (PCC) standards, including Diameter Gy data services charging, and supports comprehensive SS7 Service Control (CAP, INAP, and MAP) for IN services. In addition, it supports integration with Policy and Charging Rules Function (PCRF) deployments, including Oracle Communications Policy Management, via the Diameter Sy interface. Such integration provides support for a wide range of value added scenarios from on-demand bandwidth purchases for video or data intensive services to fair usage policies that gracefully reduce mobile bandwidth as threshold quotas are met to ensure an optimal customer experience. Oracle Communications NCC may be deployed in a virtualized or bare metal configuration on Oracle Linux using the Oracle Database to provide a highly cost effective, performant and scalable online charging solution.

“This major release of Oracle Communications Network Charging and Control reiterates Oracle’s continued commitment to provide a complete and cost effective online charging and business management platform for the pre-paid consumer mobile market,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “With new features including support for policy integration and deployment flexibility on a contemporary, open platform, we are offering our customers a modern alternative to traditional IN platforms, enabling them to differentiate and grow their brands, and in turn, delight their customers.”

In addition to Oracle Communications Network Charging and Control, Oracle’s digital monetization portfolio also includes Oracle Communications Billing and Revenue Management and Oracle Monetization Cloud, which collectively support the rapid introduction and monetization of subscription and consumption based offerings.

Oracle Communications provides the integrated communications and cloud solutions that enable users to accelerate their digital transformation journey—from customer experience to digital business to network evolution. See Oracle Communications NCC in action at Mobile World Congress, Barcelona, February 26–March 1, 2018, Hall 3, Booth 3B30.

1. Ovum, TMT Intelligence, Informa, Active Users, Prepaid and Postpaid Mobile Subscriptions, February 09, 2018

2. GSMA Intelligence—Segmenting the global MVNO footprint—https://www.gsmaintelligence.com/research/2015/03/infographic-segmenting-the-global-mvno-footprint/482/

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.925.787.6744
kreeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM

Yann Neuhaus - 15 hours 13 min ago

This is a step-by-step demonstration of how to migrate ASM disk groups from one cluster to another. It may be used with or without virtualization, and may be combined with storage-layer snapshots for fast environment provisioning.

Step 01 – Shut down the source database(s) on the VMware servers

Shut down all databases hosted in the targeted disk groups for which you want consistency, then dismount the disk groups:

$ORACLE_HOME/bin/srvctl stop database -db cdb001

$ORACLE_HOME/bin/asmcmd umount FRA

$ORACLE_HOME/bin/asmcmd umount DATA
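Optionally, before taking the storage snapshot, confirm on the source ASM instance that the disk groups are no longer mounted (asmcmd lsdg only lists mounted disk groups):

$ORACLE_HOME/bin/asmcmd lsdg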

 

Step 02 – Re-route the LUNs from the storage array to the new servers

Create a snapshot and make the snapshot LUNs visible to the Oracle VM Server (OVS), according to the third-party storage technology in use.

Step 03 – Add LUNs to DomUs (VMs)

Then we refresh the storage layer from OVM Manager to present the LUNs to each OVS:

OVM> refresh storagearray name=STORAGE_ARRAY_01

Step 04 – Tell OVM Manager to add the LUNs to the VMs into which we want to migrate our databases

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac001
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac001
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac001
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac001
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac001
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac001
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac001
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac001

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac002
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac002
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac002
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac002
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac002
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac002
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac002
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac002

At this stage, all the LUNs of both disk groups (DATA and FRA) are available on both nodes of the cluster.

Step 05 – Migrate the disks into AFD

We can rename the disk groups if required, for example if a disk group with the same name already exists:

renamedg phase=both dgname=DATA newdgname=DATAMIG verbose=true asm_diskstring='/dev/xvdr1','/dev/xvds1','/dev/xvdt1','/dev/xvdu1','/dev/xvdv1','/dev/xvdw1'
renamedg phase=both dgname=FRA  newdgname=FRAMIG  verbose=true asm_diskstring='/dev/xvdx1','/dev/xvdy1'

 

Then we migrate the disks into the AFD configuration:

$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdr1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvds1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdt1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdu1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdv1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdw1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdx1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdy1 --migrate
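Optionally, verify that AFD is loaded and that all eight devices now carry their new labels before mounting anything:

$ORACLE_HOME/bin/asmcmd afd_state
$ORACLE_HOME/bin/asmcmd afd_lsdsk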

 

Step 06 – Mount the disk groups on the new cluster and add the database(s) to the cluster

$ORACLE_HOME/bin/asmcmd mount DATAMIG
$ORACLE_HOME/bin/asmcmd mount FRAMIG

 

Then add the database(s) to the cluster (repeat for each database):

$ORACLE_HOME/bin/srvctl add database -db cdb001 \
-oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 \
-dbtype RAC \
-spfile +DATAMIG/CDB001/spfileCDB001.ora \
-diskgroup DATAMIG,FRAMIG
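For an administrator-managed RAC database, the individual instances may also need to be registered; a possible follow-up (the instance and node names below are only examples):

$ORACLE_HOME/bin/srvctl add instance -db cdb001 -instance cdb0011 -node rac001
$ORACLE_HOME/bin/srvctl add instance -db cdb001 -instance cdb0012 -node rac002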

 

Step 07 – Start up the database(s)

In this case we renamed the disk groups, so we need to modify the file locations and some parameter values:

create pfile='/tmp/initcdb001.ora' from spfile='+DATAMIG/<spfile_path>' ;
-- modify controlfiles, recovery area and any other relevant parameters
create spfile='+DATAMIG/CDB001/spfileCDB001.ora' from pfile='/tmp/initcdb001.ora' ;

ALTER DATABASE RENAME FILE '+DATA/<datafile_paths>' TO '+DATAMIG/<datafile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<tempfile_paths>' TO '+DATAMIG/<tempfile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<onlinelog_paths>' TO '+DATAMIG/<onlinelog_paths>';
ALTER DATABASE RENAME FILE '+FRA/<onlinelog_paths>' TO '+FRAMIG/<onlinelog_paths>';
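Renaming every file by hand is error-prone; a possible helper, shown here only as a sketch (it assumes the database is mounted and sqlplus is available), generates the RENAME statements from the controlfile:

sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0 linesize 400
spool /tmp/rename_files.sql
select 'ALTER DATABASE RENAME FILE ''' || name || ''' TO ''' ||
       replace(replace(name, '+DATA/', '+DATAMIG/'), '+FRA/', '+FRAMIG/') || ''';'
from   ( select name   from v$datafile
         union all
         select name   from v$tempfile
         union all
         select member from v$logfile );
spool off
EOF
# then review /tmp/rename_files.sql and run it from SQL*Plus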

 

Then start the database

$ORACLE_HOME/bin/srvctl start database -db cdb001

 

This method can be used to migrate terabytes of data easily and with almost no pain, keeping the downtime window as short as possible. For a near-zero-downtime migration, just add GoldenGate replication on top of it.

The method described here is also perfectly applicable to ASM snapshots, in order to duplicate huge volumes from one environment to another. This permits fast environment provisioning without having to copy data over the network or hit the storage layer with intensive I/O.

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

 

 

The article Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM appeared first on Blog dbi services.

Oracle Systems Partner Webcast-Series: SPARC value for Partners

  February...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Huge Pages

Jonathan Lewis - 18 hours 54 min ago

A useful quick summary from Neil Chandler replying to a thread on Oracle-L:

Topic: RAC install on Linux

You should always be using Hugepages.

They give a minor performance improvement and a significant memory saving in terms of the amount of memory needed to handle the pages – fewer Translation Lookaside Buffer (TLB) entries, which also means fewer TLB misses (which are expensive).

You are handling the memory chopped up into 2MB pieces instead of 4K. But you also have a single shared memory TLB for Hugepages.

The kernel has less work to do, bookkeeping fewer pointers in the TLB.

You also have contiguous memory allocation and it can’t be swapped.

If you are having problems with Hugepages, you have probably overallocated them (I’ve seen this several times at clients so it’s not uncommon). Hugepages can *only* be used for your SGA’s. All of your SGA’s should fit into the Hugepages and that should generally be no more than about 60% of the total server memory (but there are exceptions), leaving plenty of “normal” memory (small pages) for PGA, O/S and other stuff like monitoring agents.

As an added bonus, AMM can’t use Hugepages, so you are forced to use ASMM. AMM doesn’t work well and has been kind-of deprecated by Oracle anyway – dbca won’t let you set up AMM if the server has more than 4GB of memory.
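A quick way to sanity-check the sizing described above on Linux (illustrative only – the rough 60% guideline still applies):

grep -i ^huge /proc/meminfo
# HugePages_Total x Hugepagesize should sit just above the sum of all SGAs.
# A large HugePages_Free value that persists long after instance startup
# usually means the SGAs did not fit into the huge pages (or AMM is enabled)
# and the reserved memory is simply wasted.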

 

Oracle database backup

Tom Kyte - Wed, 2018-02-21 15:46
Hi Developers, I am using Oracle 10g. I need to take backup of my database. I can take a back-up of tables, triggers etc using sql developers' Database Backup option but there are multiple users created in that database. Can you please support ...
Categories: DBA Blogs

How do you purge stdout files generated by DBMS_SCHEDULER jobs?

Tom Kyte - Wed, 2018-02-21 15:46
When running scheduler jobs, logging is provided in USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS. And stdout is provided in $ORACLE_HOME/scheduler/log. The database log tables are purged either by default 30 days (log_history attribute)....
Categories: DBA Blogs

V$SQL history

Tom Kyte - Wed, 2018-02-21 15:46
How many records/entry are there in v$sql,v$ession. and how they flush like Weekly or Space pressure. Thanks
Categories: DBA Blogs

Dynamic SQL in regular SQL queries

Tom Kyte - Wed, 2018-02-21 15:46
Hi, pardon me for asking this question (I know I can do this with the help of a PL/SQL function) but would like to ask just in case. I'm wondering if this doable in regular SQL statement without using a function? I'm trying to see if I can write a ...
Categories: DBA Blogs

Adding hash partitions and spreading data across

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have a table with a certain number of range partitions and for each partitions I have eight hash subpartitions. Is there a way to increase the subpartitions number to ten and distributing evenly the number of rows? I have tried "alter tabl...
Categories: DBA Blogs

Bug when using 1 > 0 at "case when" clause

Tom Kyte - Wed, 2018-02-21 15:46
Hello, guys! Recently, I've found a peculiar situation when building a SQL query. The purpose was add a "where" clause using a "case" statement that was intented to verify if determined condition was greater than zero. I've reproduced using a "wit...
Categories: DBA Blogs

Difference between explain and execute plan and actual execute plan

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have often got questions around explain plan and execute plan. As per my knowledge, explain plan gives you the execute plan of the query. But I have also read that Execute plan is the plan which Oracle Optimizer intends to use for the query and...
Categories: DBA Blogs

Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data

Oracle Press Releases - Wed, 2018-02-21 12:55
Press Release
Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data Nine Leading Retail Automotive Marketing Agencies Are First to Complete Comprehensive Program, Receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) Designation

Redwood City, Calif.—Feb 21, 2018

Oracle Data Cloud today launched an advanced data training and marketing program to help savvy auto dealer agencies better use digital data. Oracle also announced the first nine leading Tier 3 auto marketing agencies to qualify for the rigorous program and receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) designation. Those companies included: C-4 Analytics, Dealer Inspire, Dealers United, Goodway Group, L2TMedia, SocialDealer, Stream Marketing, Team Velocity, and TurnKey Marketing. Oracle’s Auto Elite Data Marketer program will help agencies effectively allocate their marketing resources as advertising budgets shift from offline media to digital platforms.

“As the automotive industry goes through an era of transformational change, dealers are literally where the rubber meets the road, and they need cutting edge marketing tools to help maintain or grow market share,” said Joe Kyriakoza, VP and GM of Automotive for the Oracle Data Cloud. “Tier 3 marketers know that reaching the right audience drives measurable campaign results. By increasing the data skills of our marketing agency partners, Oracle can help them directly impact and improve their clients’ campaign results.”

Oracle Data Cloud’s Auto Elite Data Marketer Program includes:

  1. Education & training - Expert training for the marketing agency and their extended teams on advanced targeting strategies and audience planning techniques.

  2. Customized collateral - Co-branded collateral pieces to support client marketing efforts, including summary sheets, decks, activation guides, and other materials.

  3. Co-branded marketing - Co-branded marketing initiatives through thought leadership, speaking opportunities, and co-hosted webinars.

  4. Strategic sales support - Access to Oracle’s specialized Retail Solutions Team and the Oracle Data Hotline to support strategic pitches, events, and RFP inquiries.

“We are proud to have worked with Oracle Data Cloud since the beginning, shaping the program together to drive more business for dealers using audience data,” said Joe Chura, CEO of Dealer Inspire. “Our team is excited to continue this relationship as an Elite Data Marketer, empowering Dealer Inspire clients with the unique advantage of utilizing Oracle data for automotive retail targeting.”

“We are consumed with data that allows for hyper-personalization and better targeting of in-market consumers,” said David Boice, CEO and Chairman of Team Velocity Marketing. “Oracle is a new goldmine of data to drive excellent sales and service campaigns and a perfect complement to our Apollo Technology Platform.”  According to Joe Castle, Founder of SOCIALDEALER, “We are excited to be one of the few Auto Elite Data Marketers which provides us a deeper level of custom audience data access from Oracle. Our companies look forward to working closely to further deliver a superior ROI to all our dealership and OEM relationships.”

Through the Auto Elite Data Marketer program, retail marketers learn how to use Oracle’s expansive selection of automotive audiences, which cover the entire vehicle ownership lifecycle, like in-market car shoppers, existing owners, and individuals needing auto finance, credit assistance, or vehicle service. This comprehensive data set allows clients to precisely target the right prospects for any automotive retail campaign. Oracle has teamed up with industry leading data providers to build the robust dataset, like IHS Markit’s Polk for vehicle ownership and intent data, Edmunds.com for online car shopper data and TransUnion the trusted source for consumer finance audiences.

Oracle Data Cloud plans to expand the Auto Elite Data Marketer program to include additional dealer marketing agencies, as well as working directly with dealers and dealer groups and their media partners to use data effectively for advanced targeting and audience planning efforts. For more information about the Auto Elite Data Marketer program, please contact the Oracle Auto team at dealersolutions@oracle.com.

Oracle Data Cloud

Oracle Data Cloud operates the BlueKai Data Management Platform and the BlueKai Marketplace, the world’s largest audience data marketplace. Leveraging more than $5 trillion in consumer transaction data, more than five billion global IDs and 1,500+ data partners, Oracle Data Cloud connects more than two billion consumers around the world across their devices each month. Oracle Data Cloud is made up of AddThis, BlueKai, Crosswise, Datalogix and Moat.

Oracle Data Cloud helps the world’s leading marketers and publishers deliver better results by reaching the right audiences, measuring the impact of their campaigns and improving their digital strategies. For more information and free data consultation, contact The Data Hotline at www.oracle.com/thedatahotline

Contact Info
Simon Jones
Oracle
+1.650.506.0325
s.jones@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • +1.650.506.0325

Early morning RMOUG post

Bobby Durrett's DBA Blog - Wed, 2018-02-21 06:44

Well, it is early Wednesday morning here at the Westin hotel in Denver where the RMOUG Training Days conference is being held. I can’t sleep anyway, so I thought I would write up some of my impressions of yesterday’s presentations.

I appreciate all the effort people put in making their presentations. Since I have done Toastmasters I’ve learned to appreciate more what goes into being an effective speaker. But, the nature of my work is that I have to be critical of everything people say about technology. Maybe I should say “I have to think critically” instead of “be critical”. The problem with the type of work we do is that it involves a lot of money and that inevitably obscures the truth about the technical details of how things work. So, I want to just sit back and applaud but my technical side wants to tear apart every detail.

A nice perk of being a RMOUG presenter is that I got to attend the pre-conference workshops for free as well as the rest of the talks. In past conferences that I have spoken at that was not the case. So, I went to a four-hour Snowflake workshop. I have read a fair amount on Snowflake, so much of what the speaker presented was familiar. I wonder how people who had no Snowflake background perceived the talk? Being a nuts and bolts Oracle person I would have liked to dig in more to Snowflake internals and discuss its limitations. Surely any tool has things it does better and things that it does not do so well because of the choices that the developers made in its design. I’m interested in how Snowflake automatically partitions data across files on S3 and caches data in SSD and RAM at the compute level. At least, that is what the information on the web site suggests. But with cloud computing it seems that people frown upon looking under the covers. The goal is to spin up new systems quickly and Snowflake is fantastic at that. Also, it seems to get great performance with little effort. No tuning required! Anyway, it was a good presentation but didn’t get into the nuts and bolts of tuning and limitations which I would have liked to see.

I spent the rest of the day attending hour-long presentations on various topics. AWS offered a 3 hour session on setting up Oracle on RDS but since I’ve played with RDS at work I decided to skip it. Instead I went to mostly cloud and Devops sessions. I accidentally went to an Oracle performance session which was amusing. It was about tuning table scans in the cloud. The speaker claimed that in Oracle’s cloud you get sub-millisecond I/O which raised a bunch of questions in my mind. But the session was more about using Oracle database features to speed up a data warehouse query. It was fun but not what I expected.

I was really surprised by the Devops sessions. Apparently Oracle has some free Devops tools in their cloud that you can use for on-premise work. My office is working with a variety of similar tools already so it is not something we would likely use. But it could be helpful to someone who doesn’t want to install the tools themselves. I’m hopeful that today’s Devops session(s) will fill in more details about how people are using Devops with databases. I’m mostly interested in how to work with large amounts of data in Devops. It’s easy to store PL/SQL code in Git for versioning and push it out with Flywaydb or something like it. It is hard to make changes to large tables and have a good backout. Data seems to be Devops’s Achilles heel and I haven’t seen something that handles it well. I would love to hear about companies that have had success handling data changes with Devops tools.

Well, I’ve had one cup of coffee and Starbucks doesn’t open for another half hour but this is probably enough of a pre-dawn RMOUG data dump. Both of my talks are tomorrow so today is another day as a spectator. Likely it will be another day of cloud and Devops but I might sneak an Oracle performance talk in for one session.

Bobby

Categories: DBA Blogs

Interval Partition Problem

Jonathan Lewis - Wed, 2018-02-21 02:40

Assume you’ve got a huge temporary tablespace, there’s plenty of space in your favourite tablespace, you’ve got a very boring, simple table you want to copy and partition, and no-one and nothing is using the system. Would you really expect a (fairly) ordinary “create table t2 as select * from t1” to end with an Oracle error “ORA-1652: unable to extend temp segment by 128 in tablespace TEMP”? That’s the temporary tablespace that’s out of space, not the target tablespace for the copy.

Here’s a sample data set (tested on 11.2.0.4 and 12.1.0.2) to demonstrate the surprise – you’ll need about 900MB of space by the time the entire model has run to completion:

rem
rem     Script:         pt_interval_threat_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018
rem

column tomorrow new_value m_tomorrow
select to_char(sysdate,'dd-mon-yyyy') tomorrow from dual;

create table t1
as
with g as (
        select rownum id
        from dual
        connect by level <= 2e3
)
select
        rownum id,
        trunc(sysdate) + g2.id  created,
        rpad('x',50)            padding
from
        g g1,
        g g2
where
        rownum <= 4e6  -- comment to avoid WordPress format mess
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

I’ve created a table of 4 million rows, covering 2,000 dates out into the future starting from sysdate+1 (tomorrow). As you can see there’s nothing in the slightest bit interesting, unusual, or exciting about the data types and content of the table.

I said my “create table as select” was fairly ordinary – but it’s actually a little bit out of the way because it’s going to create a partitioned copy of this table.


execute snap_my_stats.start_snap

create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start       values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
)
storage(initial 1M)
nologging
as
select
        *
from
        t1
;

set serveroutput on
execute snap_my_stats.end_snap

I’ve created the table as a range-partitioned table with an interval() declared. Conveniently I need only mention the partitioning column by name in the declaration, rather than listing all the columns with their types, and I’ve only specified a single starting partition. Since the interval is 7 days and the data spans 2,000 days I’m going to end up with nearly 290 partitions added.

There’s no guarantee that you will see the ORA-01652 error when you run this test – the data size is rather small and your machine may have sufficient other resources to hide the problem even when you’re looking for it – but the person who reported the problem on the OTN/ODC database forum was copying a table of 2.5 Billion rows using about 200 GB of storage, so size is probably important, hence the 4 million rows as a starting point on my small system.

Of course, hitting an ORA-01652 on TEMP when doing a simple “create as select” is such an unlikely sounding error that you don’t necessarily have to see it actually happen; all you need to see (at least as a starting point in a small model) is TEMP being used unexpectedly so, for my first test (on 11.2.0.4), I’ve included some code to calculate and report changes in the session stats – that’s the calls to the package snap_my_stats. Here are some of the more interesting results:


---------------------------------
Session stats - 20-Feb 16:58:24
Interval:-  14 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
table scan rows gotten                                               4,000,004
table scan blocks gotten                                                38,741

session pga memory max                                             181,338,112

sorts (rows)                                                         2,238,833

physical reads direct temporary tablespace                              23,313
physical writes direct temporary tablespace                             23,313

The first couple of numbers show the 4,000,000 rows being scanned from 38,741 table blocks – and that’s not a surprise. But for a simple copy the 181MB of PGA memory we’ve acquired is a little surprising, though less so when we see that we’ve sorted 2.2M rows, and then ended up spilling 23,313 blocks to the temporary tablespace. But why are we sorting anything – what are those rows ?

My first thought was that there was a bug in some recursive SQL that was trying to define or identify dynamically created partitions, or maybe something in the space management code trying to find free space, so the obvious step was to enable extended tracing and look for any recursive statements that were running a large number of times or doing a lot of work. There weren’t any – and the trace file (particularly the detailed wait events) suggested the problem really was purely to do with the CTAS itself; so I ran the code again enabling events 10032 and 10033 (the sort traces) and found the following:


---- Sort Statistics ------------------------------
Initial runs                              1
Input records                             2140000
Output records                            2140000
Disk blocks 1st pass                      22292
Total disk blocks used                    22294
Total number of comparisons performed     0
Temp segments allocated                   1
Extents allocated                         175
Uses version 1 sort
Uses asynchronous IO

One single operation had resulted in Oracle sorting 2.14 million rows (but not making any comparisons!) – and the only table in the entire system with enough rows to do that was my source table! Oracle seems to be sorting a large fraction of the data for no obvious reason before inserting it.

  • Why, and why only 2.14M out of 4M ?
  • Does it do the same on 12.1.0.2 (yes), what about 12.2.0.1 (no – hurrah: unless it just needs a larger data set!).
  • Is there any clue about this on MoS (yes Bug 17655392 – though that one is erroneously, I think, flagged as “closed not a bug”)
  • Is there a workaround ? (Yes – I think so).

Playing around and trying to work out what’s happening the obvious pointers are the large memory allocation and the “incomplete” spill to disc – what would happen if I fiddled around with workarea sizing – switching it to manual, say, or setting the pga_aggregate_target to a low value. At one point I got results showing 19M rows (that’s not a typo, it really was close to 5 times the number of rows in the table) sorted with a couple of hundred thousand blocks of TEMP used – the 10033 trace showed 9 consecutive passes (that I can’t explain) as the code executed from which I’ve extract the row counts, temp blocks used, and number of comparisons made:


Input records                             3988000
Total disk blocks used                    41544
Total number of comparisons performed     0

Input records                             3554000
Total disk blocks used                    37023
Total number of comparisons performed     0

Input records                             3120000
Total disk blocks used                    32502
Total number of comparisons performed     0

Input records                             2672000
Total disk blocks used                    27836
Total number of comparisons performed     0

Input records                             2224000
Total disk blocks used                    23169
Total number of comparisons performed     0

Input records                             1762000
Total disk blocks used                    18357
Total number of comparisons performed     0

Input records                             1300000
Total disk blocks used                    13544
Total number of comparisons performed     0

Input records                             838000
Total disk blocks used                    8732
Total number of comparisons performed     0

Input records                             376000
Total disk blocks used                    3919
Total number of comparisons performed     0

There really doesn’t seem to be any good reason why Oracle should do any sorting of the data (and maybe it wasn’t given the total number of comparisons performed in this case) – except, perhaps, to allow it to do bulk inserts into each partition in turn or, possibly, to avoid creating an entire new partition at exactly the moment it finds just the first row that needs to go into a new partition. Thinking along these lines I decided to pre-create all the necessary partitions just in case this made any difference – the code is at the end of the blog note. Another idea was to create the table empty (with, and without, pre-created partitions), then do an “insert /*+ append */” of the data.

Nothing changed (much – though the number of rows sorted kept varying).

And then — it all started working perfectly with virtually no rows reported sorted and no I/O to the temporary tablespace !

Fortunately I thought of looking at v$memory_resize_ops and found that the automatic memory management had switched a lot of memory to the PGA, allowing Oracle to do whatever it needed to do completely in memory without reporting any sorting. A quick re-start of the instance fixed that “workaround”.

Still struggling with finding a reasonable workaround I decided to see if the same anomaly would appear if the table were range partitioned but didn’t have an interval clause. This meant I had to precreate all the necessary partitions, of course – which I did by starting with an interval partitioned table, letting Oracle figure out which partitions to create, then disabling the interval feature – again, see the code at the end of this note.

The results: no rows sorted on the insert, no writes to temp. Unless it’s just a question of needing even more data to reproduce the problem with simple range partitioned tables, it looks as if there’s a problem somewhere in the code for interval partitioned tables and all you have to do to work around it is precreate loads of partitions, disable intervals, load, then re-enable the intervals.

Footnote:

Here’s the “quick and dirty” code I used to generate the t2 table with precreated partitions:


create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
)
storage(initial 1M)
nologging
monitoring
as
select
        *
from
        t1
where
        rownum <= 0
;


declare
        m_max_date      date;
begin
        select  max(created)
        into    m_max_date
        from    t1
        ;

        for i in 1..m_max_date - trunc(sysdate) loop
                dbms_output.put(
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') || chr(9)
                );
                execute immediate
                        'lock table t2 partition for ('''  ||
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') ||
                        ''') in exclusive mode'
                ;
        end loop;
        dbms_output.new_line();
end;
/

prompt  ========================
prompt  How to disable intervals
prompt  ========================

alter table t2 set interval();

The code causes partitions to be created by locking the relevant partition for each date between the minimum and maximum in the t1 table; locking the partition is enough to create it if it doesn’t already exist. The code is a little wasteful since it locks each partition 7 times as we walk through the dates – but it’s only a quick demo for a model, and for copying a very large table the wastage would probably be very small compared to the work of doing the actual data copy. Obviously one could be more sophisticated and limit the code to locking and creating only the partitions needed, and locking each of them only once.

 

ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image

Yann Neuhaus - Wed, 2018-02-21 00:54

While playing with a brand new ODA X7-2M, I faced a strange behaviour after re-imaging the ODA with the latest version, 12.2.1.2.0. Basically, after re-imaging and running configure-firstnet, the next step is to import the GI clone into the repository before creating the appliance. Unfortunately this command fails with the error DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070. Let's have a look at how to fix it…

First of all, doing a re-image is really straightforward and works very well. I simply used the ILOM remote console to attach the ISO file for the ODA, in this case patch 23530609 from MOS, and restarted the box from the CD-ROM. After approximately 40 minutes you have a brand new ODA running the latest release.

Of course, instead of re-imaging, I could “simply” update/upgrade the DCS agent to the latest version. Let's say that I like to start from a “clean” situation when deploying a new environment, and patching a not-yet-installed system sounds a bit strange to me ;-)

So once re-imaged, the ODA is ready for deployment. The first step is to configure the network so that I can SSH to it and go ahead with the appliance creation. This takes only 2 minutes using the command configure-firstnet.

The last requirement before running the appliance creation is to import the GI Clone, here patch p27119393_122120, into the repository. Unfortunately that’s exactly where the problem starts…

[Screenshot: 2018-02-19 at 12.11.23]

Hmmm… I can’t get it into the repository due to a strange handshake error. So let's check whether at least the web interface is working (…of course using Chrome…).

[Screenshot: 2018-02-19 at 12.11.14]

Same thing here; it is not possible to reach the web interface at all.

While searching a bit for this error, we finally landed in the Known Issues chapter of the ODA 12.2.1.2.0 Release Notes, which sounded promising. Unfortunately, none of the listed errors really matched our case. However, a quick search of the page for the error message pointed us to the following case:

[Screenshot: 2018-02-19 at 12.12.28]

OK, the error is ODA X7-2HA related, but let's give it a try.

[Screenshot: Restart-DCS]

Once DCS is restarted, just re-try the update-repository
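For reference, this is roughly what the screenshots show (the DCS agent service name is an assumption for this release – check the Release Notes for the exact restart procedure on your version, and adjust the path to wherever you unzipped the GI clone):

initctl stop initdcsagent
initctl start initdcsagent
odacli update-repository -f /tmp/<gi_clone_zip_from_p27119393_122120>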

[Screenshot: Import-GIClone]

Here we go! The job has been submitted and the GI clone is imported into the repository :-)

After that the CREATE APPLIANCE will run like a charm.

Hope it helped!

 

 

The article ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image appeared first on Blog dbi services.

Strange dependency in user_dependency: view depends on unreferenced function

Tom Kyte - Tue, 2018-02-20 21:26
Dear Team, I will try to simplify the scenario we have, using a simple test case: <code> SQL> create table test_20 ( a number) 2 / Table created. SQL> SQL> create or replace function test_function (p_1 in number) 2 return num...
Categories: DBA Blogs
