Feed aggregator

Schema only account with Oracle 18.3

Yann Neuhaus - 6 hours 23 min ago

With Oracle 18.3, we have the possibility to create schemas without a password. In a perfect world, nobody should be able to connect directly to an application schema, so for security reasons it is a good thing that the schema owner cannot be used as a login account.

A good way to achieve this is to use proxy connections: we connect as app_user, but authenticate with the psi_user password, for example.

Let’s create a user named app_user:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.

SQL> create user app_user identified by app_user
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.

Let’s create a proxy user named psi_user:

SQL> create user psi_user identified by psi_user;

User created.

SQL> grant create session to psi_user;

Grant succeeded.

We allow connections to app_user through the proxy user psi_user:

SQL> alter user app_user grant connect through psi_user;

User altered.
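
As a quick cross-check (a sketch, not part of the original demo, relying on the standard PROXY_USERS dictionary view), you can list the proxy relationships now in place; it should report PSI_USER as a proxy for APP_USER:

SQL> select proxy, client from proxy_users;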

Now we can connect via the proxy user using the following syntax:

SQL> connect psi_user[app_user]/psi_user@pdb 
Connected.

We can see that we are connected as app_user, even though we authenticated with the psi_user password:

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
sys_context('USERENV','SESSION_SCHEMA') as session_schema,
sys_context('USERENV','PROXY_USER') as proxy,
user
from dual;

SESSION_USER	SESSION_SCHEMA	        PROXY		USER
APP_USER	APP_USER		PSI_USER	APP_USER

But there is a problem: if app_user is locked, the proxy connection no longer works:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user account lock;

User altered.

SQL> connect psi_user[app_user]/psi_user@pdb
ERROR:
ORA-28000: The account is locked.

Warning: You are no longer connected to ORACLE.

The right solution is to use the schema only account feature that is new in Oracle 18c.

We drop the old accounts:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> drop user psi_user cascade;

User dropped.

SQL> drop user app_user cascade;

User dropped.

And we recreate them in the following way. First, we create the schema owner with NO AUTHENTICATION:

SQL> create user app_user no authentication
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.

We create the proxy user as before:

SQL> create user psi_user identified by psi_user;

We allow connections to app_user through the proxy user psi_user, as before:

SQL> alter user app_user grant connect through psi_user;

User altered.

We now can connect via psi_user:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.

And because app_user has been created with NO AUTHENTICATION, you receive the classic ORA-01017 error when you try to connect directly with the app_user account:

SQL> connect app_user/app_user@pdb
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
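
Reconnected as a privileged user, you can also confirm from the data dictionary that app_user is a schema only account (a sketch based on the AUTHENTICATION_TYPE column of DBA_USERS, which reports NONE for such accounts):

SQL> select username, authentication_type from dba_users where username = 'APP_USER';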

Using NO AUTHENTICATION is a good protection, but you cannot grant administrative privileges such as SYSDBA to such users:

SQL> grant sysdba to app_user;
grant sysdba to app_user
*
ERROR at line 1:
ORA-40366: Administrative privilege cannot be granted to this user.

We can give app_user a password and then grant it SYSDBA, but the account cannot be switched back to NO AUTHENTICATION while it still holds the administrative privilege:

SQL> alter user app_user identified by password;

User altered.

SQL> grant sysdba to app_user;

Grant succeeded.

SQL> alter user app_user no authentication;
alter user app_user no authentication
*
ERROR at line 1:
ORA-40367: An Administrative user cannot be altered to have no authentication
type.

SQL> revoke sysdba from app_user;

Revoke succeeded.

SQL> alter user app_user no authentication;

User altered.

To understand the behavior correctly, I made the following test:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.

I replace NO AUTHENTICATION with a password:

SQL> alter user app_user identified by app_user;

User altered.

Now I can connect to the app_user schema directly, create a table and insert some values:

SQL> connect app_user/app_user@pdb
Connected.
SQL> create table employe (name varchar2(10));

Table created.

SQL> insert into employe values('Larry');

1 row created.

SQL> commit;

Commit complete.

I reset the app_user to no authentication:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user no authentication;

User altered.

I connect with the proxy user and can display the content of the employe table:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.
SQL> select * from employe;

NAME
----------
Larry

The table belongs to the app_user schema:

SQL> select object_name, object_type, owner from all_objects where object_name ='EMPLOYE';

OBJECT_NAME     OBJECT_TYPE     OWNER
EMPLOYE         TABLE           APP_USER

SQL> insert into employe values ('Bill');

1 row created.

SQL> commit; 

Commit complete.

SQL> select * from employe;

NAME
----------
Larry
Bill

What is the behavior in the audit trail?

We create an audit policy to detect any table creation by app_user:

SQL> create audit policy psi_user_audit_policy
  2  privileges create table
  3  when 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''APP_USER'''
  4  evaluate per session
  5  container=current;

Audit policy created.

SQL> audit policy psi_user_audit_policy whenever successful;

Audit succeeded.

If we now have a look at the unified_audit_trail view:

SQL> select event_timestamp, dbusername, dbproxy_username from unified_audit_trail where object_name = 'EMPLOYE' and action_name = 'CREATE TABLE';

EVENT_TIMESTAMP		DBUSERNAME	DBPROXY_USERNAME
16-OCT-18 03.40.49	APP_USER	PSI_USER

We can clearly identify the proxy user in the audit trail.

Conclusion:

Schema only accounts are an interesting new feature. In summary, we can create a schema named app_user with authentication set to NONE, with the consequence that nobody can log in to it directly. We can then create a proxy account named psi_user through which we connect to app_user, and create tables, views and so on in the app_user schema.

This article, Schema only account with Oracle 18.3, first appeared on Blog dbi services.

Things I still believe in

Rob Baillie - Fri, 2018-10-19 09:49
Over 10 years ago I wrote a blog post on things that I believe in - as a developer, and when I re-read it recently I was amazed at how little has changed.

I'm not sure if that's a good thing, or a bad thing - but it's certainly a thing.

Anyway - here's that list - slightly updated for 2018... if you've seen my talk on Unit Testing recently, you might recognise a few entries.

(opinions are my own, yada yada yada)
  • It's easier to re-build a system from its tests than to re-build the tests from their system.

  • You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.

  • Formal and flexible are not mutually exclusive.

  • The tests should pass, first time, every time (unless you're changing them or the code).

  • Test code is production code and it deserves the same level of care.

  • Prototypes should always be thrown away.

  • Documentation is good, self documenting code is better, code that doesn't need documentation is best.

  • If you're getting bogged down in the process then the process is wrong.

  • Agility without structure is just hacking.

  • Pair programming allows good practices to spread.

  • Pair programming allows bad practices to spread.

  • Team leaders should be inside the team, not outside it.

  • Project Managers are there to facilitate the practice of developing software, not to control it.

  • Your customers are not idiots; they always know their business far better than you ever will.

  • A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.

  • You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.

  • Software development is not complex by accident, it's complex by essence.

  • Always is never right, and never is always wrong.

  • Interesting is not the same as useful.

  • Clever is not the same as right.

  • The simplest thing that will work is not always the same as the easiest thing that will work.

  • It's easier to make readable code correct than it is to make clever code readable.

  • If you can't read your tests, then you can't read your documentation.

  • There's no better specification document than the customer's voice.

  • You can't make your brain bigger, so make your code simpler.

  • Sometimes multiple exit points are OK. The same is not true of multiple entry points.

  • Collective responsibility means that everyone involved is individually responsible for everything.

  • Sometimes it's complex because it needs to be; but you should never be afraid to double check.

  • If every time you step forward you get shot down you're fighting for the wrong army.

  • If you're always learning you're never bored.

  • There are no such things as "Best Practices". Every practice can be improved upon.

  • Nothing is exempt from testing. Not even database upgrades or declarative tools.

  • It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.

  • A long code freeze means a broken process.

  • A test hasn't passed until it has failed.

  • A test that can't fail isn't a test.

  • If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs, you can guarantee they'll do both badly.

  • Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.

add_colored_sql

Jonathan Lewis - Fri, 2018-10-19 09:08

The following request appeared recently on the Oracle-L mailing list:

I have one scenario related to capturing of sql statement in history table..  Like dba_hist_sqltext capture the queries that ran for 10 sec or more..  How do I get the sql stmt which took less time say in  millisecond..  Any idea pleae share.

An AWR snapshot captures statements that (a) meet some workload criteria such as “lots of executions” and (b) happen to be in the library cache when the snapshot takes place but if you have some statements which you think are important or interesting enough to keep an eye on that don’t do enough work to meet the normal workload requirements of the AWR snapshots it’s still possible to tell Oracle to capture them by “coloring” them.  (Apologies for the American spelling – it’s necessary to avoid error ‘PLS_00302: component %s must be declared’.)

Somewhere in the 11gR1 timeline the package dbms_workload_repository acquired the following two procedures:


PROCEDURE ADD_COLORED_SQL
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 DBID                           NUMBER                  IN     DEFAULT

PROCEDURE REMOVE_COLORED_SQL
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 DBID                           NUMBER                  IN     DEFAULT


You have to be licensed to use the workload repository, of course, but if you are you can call the first procedure to mark an SQL statement as interesting, after which its execution statistics will be captured whenever it’s still in the library cache at snapshot time. The second procedure lets you stop the capture – and you will probably want to use this procedure from time to time because there’s a limit (currently 100) to the number of statements you’re allowed to register as colored and if you try to exceed the limit your call will raise Oracle error ORA-13534.


ORA-13534: Current SQL count(100) reached maximum allowed (100)
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 751
ORA-06512: at line 3

If you want to see the list of statements currently marked as colored then you can query the table wrm$_colored_sql, exposed through the views dba_hist_colored_sql and (in 12c) cdb_hist_colored_sql. (Note: I haven't tested whether the limit of 100 statements is per PDB or summed across the entire CDB – and the answer may vary with the version of Oracle, of course).


SQL> select * from sys.wrm$_colored_sql;

      DBID SQL_ID             OWNER CREATE_TI
---------- ------------- ---------- ---------
3089296639 aedf339438ww3          1 28-SEP-18

1 row selected.
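
The entry shown above would have been registered with a call along the following lines (a minimal sketch; the sql_id is simply the one from the query output, and the dbid parameter is left to its default):

SQL> execute dbms_workload_repository.add_colored_sql(sql_id => 'aedf339438ww3')

and it can be de-registered again with:

SQL> execute dbms_workload_repository.remove_colored_sql(sql_id => 'aedf339438ww3')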

If you've had to color a statement to force the AWR snapshot to capture it, the statement probably won't appear in the standard AWR reports; but it will be available to the "AWR SQL" report (which I usually generate from SQL*Plus with a call to $ORACLE_HOME/rdbms/admin/awrsqrpt.sql).

Footnote

If the statement you’re interested in executes very infrequently and often drops out of the library cache before it can be captured in an AWR snapshot then an alternative strategy is to enable system-wide tracing for that statement so that you can capture every execution in a trace file.
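
For example (a sketch only, using the event syntax available from 11g onwards; &sql_id is a placeholder for the statement you want to trace, and you should check the impact of system-wide tracing before enabling it in production):

SQL> alter system set events 'sql_trace[sql:&sql_id]';

and, once the executions you care about have been captured, switch it off again:

SQL> alter system set events 'sql_trace[sql:&sql_id] off';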

 

 

ADF Task Flow Performance Boost with JET UI Shell Wrapper

Andrejus Baranovski - Fri, 2018-10-19 01:09
An ADF application with UI Shell and ADF Task Flows rendered in dynamic tabs does not offer an instant switch from one tab to another. That's because the tab switch request goes to the server, and the switch only happens when the browser gets the response. There is more to this - even if a tab in ADF is not currently active (but has been disclosed), its content (e.g. a region rendered from an ADF Task Flow) may still participate in request processing. If the user opens many tabs, this can result in slightly slower request processing overall.

ADF allows you to render an ADF Task Flow directly by accessing it through a URL, if it is configured with page support at the root level. Since an ADF Task Flow can be accessed by URL, we can include it in an iframe. Imagine using an iframe for each tab and rendering an ADF Task Flow inside. This enables independent processing for the ADF Task Flow in each tab, similar to opening them in separate browser tabs.

Iframes can be managed in Oracle JET using plain JavaScript and HTML code. My sample implements dynamic JET tabs with iframe support, where each iframe renders an ADF Task Flow. While navigating between tabs, I simply hide and show the iframes; this keeps the state of each ADF Task Flow and returns to the same state when the tab is opened again. The huge advantage in this case is that tab navigation and switching between tabs with ADF Task Flows works very fast - it takes only client-side processing time. Look at this recorded gif, where I navigate between tabs with ADF content:


Main functions are listed below.

1. Add a dynamic iframe. Here we check whether an iframe for the given ADF Task Flow has already been created; if not, we create it and append it to the HTML element.


2. Select the iframe when switching tabs. Hide all frames first, then show the frame that belongs to the selected tab.


3. Remove the iframe. Remove the frame when its tab is closed.


4. Select a frame after removal. This method sets focus to the next frame after the current tab has been removed.


We can control whether the iframe or a regular JET module is rendered by using a computed flag function assigned to the main div:


In this app I have defined static URLs for the displayed ADF Task Flows. The same could be loaded by fetching a menu, etc.:


To be able to load an ADF Task Flow by URL, make sure to use an ADF Task Flow with a page (you can include an ADF region with fragments in that page). Set the url-invoke-allowed property:


This is how it looks. By default, the JET dashboard module is displayed; select an item from the menu list to load a tab with an ADF Task Flow:


JET tab rendering iframe with ADF table:


You can monitor ADF content loading in iframe within JET application:


JET tab rendering iframe with ADF form:


Download sample app from GitHub repository.

How to extract XML data using extract function

Tom Kyte - Thu, 2018-10-18 16:06
inserted row by using below statemet :- <code>insert into xmlt values('<?xml version="1.0"?> <ROWSET> <ROW> <NAME>karthick</NAME> <SALARY>3400</SALARY> </ROW> <ROW> <NAME>c</NAME> <SALARY>1</SALARY> </ROW> <ROW> <NAME>mani</NAME> <SALARY>1</SAL...
Categories: DBA Blogs

Elaborate why 5 & same table used in below query

Tom Kyte - Thu, 2018-10-18 16:06
<code>select distinct * from t t1 where 5 >= ( select count ( distinct t2.sal ) from t t2 where t2.deptno = t1.deptno and t2.sal >= t1.sal );</code> I'll be grateful if you can explain. how the number work ,5, without var...
Categories: DBA Blogs

Firefox Quantum ESR 60 Certified with EBS 12.1 and 12.2 for macOS High Sierra 10.13

Steven Chan - Thu, 2018-10-18 13:04

Mozilla Firefox Quantum Extended Support Release 60 is certified as a macOS-based client browser on macOS High Sierra (macOS 10.13) for Oracle E-Business Suite 12.1 and 12.2.

What Use-cases Are Certified?

Oracle E-Business Suite (EBS) R12 has two interfaces: a web-based (OA Framework/HTML) model for modules like iProcurement and iStore, and Oracle Forms/Java based model for our professional services modules like Oracle Financials.

Firefox Quantum Extended Support Release (ESR) 60.x is now certified with macOS for both web-based and Oracle Forms/Java based models as outlined below.

  • Firefox ESR 60.x is certified for EBS users running web-based (HTML / OA Framework) screens.
  • Firefox ESR 60.x is certified for running Java content in EBS using Java Web Start (JWS) technology.
  • Firefox ESR 60.x is not certified for running Java content in EBS using Java Plug-in technology.

Certified Versions

Oracle E-Business Suite

  • Oracle E-Business Suite 12.2
  • Oracle E-Business Suite 12.1

Desktop Operating System

  • macOS High Sierra (macOS 10.13.3 or higher)

Java Web Start (JWS)

  • JRE 8 Update 171 or higher

While this is the minimum recommended Java release, users are encouraged to upgrade to the latest and therefore most secure Java 8 CPU release available.

Prerequisite Patch Requirements

Running Firefox on macOS using Java Web Start (JWS) with EBS requires additional patching.

For further information on patch and set up requirements see

Implications for Safari 12 on macOS

Customers have been asking about the compatibility of new versions of the following Apple products with Oracle E-Business Suite Releases 12.1 and 12.2:

  • Safari 12 (works with macOS 10.13.6 and 10.12.6)
  • macOS Mojave (macOS 10.14)

Neither of these two products has been certified with either EBS 12.1 or EBS 12.2 as of October 18, 2018.

Safari 12 is unable to launch Java in the way that prior Safari versions could. This will prevent E-Business Suite 12.1 and 12.2 customers from running Forms-based products. Therefore, customers should *NOT* upgrade to Safari 12 on macOS desktop platforms.

macOS Mojave (macOS 10.14) includes Safari 12. Customers should *NOT* upgrade to macOS Mojave.

What changed in Safari 12?

Safari 12 introduces an important change: it removes support for “legacy NPAPI plug-ins”. This affects all EBS releases. macOS Mojave includes Safari 12.

Some products within Oracle EBS 12.1 and 12.2 run via HTML in browsers. These products are sometimes called “self-service web applications”. They are expected to run without issue in Safari 12, but our certification testing is still underway.

Some products within Oracle EBS 12.1 and EBS 12.2 use Oracle Forms. Oracle Forms requires Java for desktop clients. On the macOS desktop platform, the only certified option today for launching Java is the JRE plugin, which uses the NPAPI approach.

This means that Safari 12 and macOS Mojave (macOS 10.14) will be unable to use the current JRE plugin-based launching technology for Java and Forms for EBS desktop users.

Recommendations for EBS customers on macOS platforms

As of today, the latest certified versions of Safari and macOS are:

  • Safari 11 (works with macOS 10.13)
  • macOS High Sierra (macOS 10.13)

EBS customers should use only certified configurations. EBS customers who use Forms-based products should avoid upgrading to Safari 12 or macOS Mojave today.

EBS customers who have upgraded to Safari 12 on macOS 10.13 can use Firefox ESR 60 to run Forms-based products via the Java Web Start technology.

What is Mozilla Firefox ESR?

Mozilla offers an Extended Support Release based on an official release of Firefox for organizations that are unable to mass-deploy new consumer-oriented versions of Firefox every six weeks.  For more details about Firefox ESR, see the Mozilla ESR FAQ.

E-Business Suite certified with Firefox Extended Support Releases Only

New personal versions of Firefox on the Rapid Release channel are released roughly every six weeks.  It is impractical for us to certify these new personal Rapid Release versions of Firefox with the Oracle E-Business Suite because a given Firefox release is generally obsolete by the time we complete the certification.

From Firefox 10 and onwards, Oracle E-Business Suite is certified only with selected Firefox Extended Support Release versions. Oracle has no current plans to certify new Firefox personal releases on the Rapid Release channel with the E-Business Suite.

Plug-in Support removed in Firefox ESR 60

Mozilla has removed plug-in support in Firefox ESR 60. This means Firefox ESR 60 cannot run Forms-based content in EBS using the Java plugin method. 

If your Firefox ESR 60 end-users run Forms-based content in EBS, you must switch from the JRE plugin to Java Web Start.

EBS patching policy for Firefox compatibility issues

Mozilla stresses their goal of ensuring that Firefox personal versions will continue to offer the same level of application compatibility as Firefox Extended Support Releases. 

Oracle E-Business Suite Development will issue new E-Business Suite patches or workarounds for issues that can be reproduced with Firefox Extended Support Releases.  If you report compatibility issues with Firefox personal releases that cannot be reproduced with Firefox Extended Support Releases, your options are:

  1. Deploy a certified Firefox Extended Support Release version instead of the Firefox personal version
  2. Report the incompatibility between Firefox ESR and Firefox personal to Mozilla
  3. Use Internet Explorer (on Windows) or Safari (on Mac OS X) until Mozilla resolves the issue

EBS Compatibility with Firefox ESR security updates

Mozilla may release new updates to Firefox ESR versions to address high-risk/high-impact security issues.  These updates are considered to be certified with the E-Business Suite on the day that they're released.  You do not need to wait for a certification from Oracle before deploying these new Firefox ESR security updates.


Categories: APPS Blogs

Monitoring Linux With Nmon

Yann Neuhaus - Thu, 2018-10-18 09:37

I was looking for tools to monitor Linux servers and I found an interesting one: nmon (short for Nigel's Monitor). I did some tests, and in this blog I describe how to install nmon and how we can use it.
I am using an Oracle Enterprise Linux system.

[root@condrong nmon]# cat /etc/issue
Oracle Linux Server release 6.8
Kernel \r on an \m

[root@condrong nmon]#

For the installation I used the repository epel

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm 
yum search nmon
yum install nmon.x86_64

Once installed, the tool is launched by just running the nmon command

[root@condrong nmon]# nmon

nmon1

If we type c we have CPU statistics
nmon2
If we type m we have memory statistics
nmon3
If we type t we can see Top Processes and so on
nmon4

nmon can also be scheduled. The data is collected in a file that can be analyzed later. For this we can use the following options:

OPTIONS
       nmon follows the usual GNU command line syntax, with long options starting
       with two dashes ('-').  nmon [-h] [-s <seconds>] [-c <count>] [-f -d <disks>
       -t -r <name>] [-x]  A summary of options is included below.

       -h            FULL help information

                     Interactive-Mode: read the startup banner and type "h" once it is
                     running. For Data-Collect-Mode (-f):

       -f            spreadsheet output format [note: default -s300 -c288]
                     optional

       -s <seconds>  between refreshing the screen [default 2]

       -c <count>    of refreshes [default millions]

       -d <disks>    to increase the number of disks [default 256]

       -t            spreadsheet includes top processes

       -x            capacity planning (15 min for 1 day = -fdt -s 900 -c 96)

In my example I just create a file my_nmon.sh and run the script:

[root@condrong nmon]# cat my_nmon.sh 
#! /bin/bash
nmon -f -s 60 -c 30

[root@condrong nmon]# chmod +x my_nmon.sh 
[root@condrong nmon]# ./my_nmon.sh

Once executed, the script will create a file in the current directory with an extension .nmon

[root@condrong nmon]# ls -l *.nmon
-rw-r--r--. 1 root root 55444 Oct 18 09:51 condrong_181018_0926.nmon
[root@condrong nmon]#

To analyze this file, we have many options. In my case I downloaded the nmon_analyzer.
This tool works with Excel 2003 onwards and supports 32-bit and 64-bit Windows.
After copying my nmon output file to my Windows workstation, I just have to open the Excel file and then use the "Analyze nmon data" button.
nmon5
And below I show some graphs made by the nmon_analyzer
nmon6

nmon7

nmon8

Conclusion
As we can see, nmon is a very useful tool that can help monitor our servers. It also works on AIX systems.

This article, Monitoring Linux With Nmon, first appeared on Blog dbi services.

[BLOG] Oracle WebLogic Administration: Machine and Node Manager

Online Apps DBA - Thu, 2018-10-18 07:21

Do you want to enhance your knowledge of Weblogic Administration and want to know what are machine and node managers in Weblogic Server? Visit: http://bit.ly/2EvGFEs to go through the blog which covers: ✔What is a Node Manager and what are its requirements ✔How to Start & Stop the Node Manager? ✔What is Machine in Weblogic […]

The post [BLOG] Oracle WebLogic Administration: Machine and Node Manager appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Delivers the Trifecta of Retail Insight with New Cloud Service

Oracle Press Releases - Thu, 2018-10-18 07:00
Press Release
Oracle Delivers the Trifecta of Retail Insight with New Cloud Service Oracle Retail Insights Cloud Service Suite Delivers Descriptive, Prescriptive and Predictive Analytics to the Retail Enterprise

Redwood Shores, Calif.—Oct 18, 2018

Oracle Retail has combined three cloud services into a new Oracle Retail Insights Cloud Service Suite. By combining existing science and insight cloud services, Oracle can provide a spectrum of analytics that align to key performance indicators for the retail community. These metrics render in a beautiful user experience with dashboards organized by persona and organizational responsibilities in Oracle Retail Home to encourage more strategic decisions that drive growth and operational efficiency. Oracle Retail customers including Gap Inc., Lojas Renner and Al Nahdi have already experienced the benefits of Oracle Retail Insights and Science solutions and continue to inform their strategic decisions with in-depth insights and science-enabled analytics.

"The Advanced and Predictive Analytics software market, which in 2017 reached $3.1 billion worldwide, is expected to grow at a five-year CAGR of 9.4%. Sophisticated analytical techniques are being embedded into more and more applications," said Chandana Gopal, Research Manager, Analytics and Information Management, IDC. "Forward-looking analytics is going to become much more mainstream, as enterprises are able to harness more and more data from a variety of sources."

“We are working with several retailers who are anxious to adopt cloud to bridge the gap between operations and innovations,” said Jeff Warren, Vice President, Oracle Retail. “To capitalize on the surge of unstructured and structured data in retail, we have applied advanced techniques for analyzing retail data from multiple perspectives into a single cloud services suite that integrates with retail-rich applications and cloud services. With these tools we can deliver analysis on what happened (descriptive), what is going to happen (predictive) and what a retailer should do about it going forward (prescriptive).”

The Trifecta: A Powerful Adaptive Intelligence Suite for the Entire Retail Enterprise

With the new Oracle Retail Insights Cloud Service Suite retail organizations can experience benefits including:

  • Enhanced User Experience and Relevance: The cloud suite leverages Oracle Retail Home to provide a single and modern access point to the data. The user experience streamlines and simplifies access to data and applications to provide relevant and actionable information based on roles and responsibilities. The federated user interfaces support integrated insights-to-action loops.

  • Speed to Value: With one rapidly-deployed cloud service, the solution represents the application of Oracle's analytical core to modern retailing: a comprehensive big data warehouse founded on industry best practices and the scalability, reliability, and economy of a complete Oracle analytic tech stack in the Oracle Cloud.

  • Better Understanding of Customer Context: Gain a better understanding of who your customers are, how they behave and why, so you can make the more intelligent product and promotion decisions. Leverage complete visibility into what motivates customers at each stage of their journey and how they are interacting with your brand across all touchpoints.

  • Uncover Merchandising Intelligence: Identify actionable merchandising opportunities across touchpoints, including backorder and returns, top/bottom seller, demand/fulfillment and price and promotion analysis.

  • Inspire Customer Loyalty: Leverage a highly visual, intuitive, end-to-end workflow to define and execute local market assortments, improve conversion of traffic into sales, and increase customer satisfaction.

  • Leverage Artificial Intelligence and Machine Learning: Retail business users can conduct advanced analyses to understand better and optimize affinity, store clustering, customer segmentation, consumer decision trees, demand transference, and attribute extraction.

  • Unleash the Power of Flexibility and Ad Hoc Reporting: Business analysts and data science teams can leverage innovation workbench for additional ad hoc analysis.

  • Leverage Common Foundational Data Architecture: The suite can exploit the logical value of the data generated by Oracle Retail's comprehensive application footprint, and surfaces properly-filtered and secured descriptive, predictive and prescriptive analytics to whomever, however, whenever and wherever desired.

  • Drive Retail Investment: Optimize assortments to available space to maximize planogram performance, return-on-space, sales, revenue, and profits, while improving customer satisfaction with the optimal variety for each store.

  • Improve Gross Margin: Drive optimal recommendations for promotions, markdowns, and targeted offers that maximize profits and sell through leveraging prescriptive analytics.

Contact Info
Matt Torres
Oracle
14155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 14155951584

Find if a string is Upper, Lower or Mixed Case, numeric, Alpha Numeric etc

Tom Kyte - Wed, 2018-10-17 21:46
Dear Experts, I populated a table with few rows of strings that are Upper/ Lower/ Mixed case, alpha-numeric, numeric etc. 1. Now I would like to evaluate they type of string using a case statement. I tried using regexp_like, but it fails when ...
Categories: DBA Blogs

Deadlock issue came while using set based sql

Tom Kyte - Wed, 2018-10-17 21:46
Hi Tom, We are using set based sql in my process, In that we are creating so many GTT tables in a package. And we are executing this package concurrently in more than ten sessions, these sessions will create temporary tables with different name a...
Categories: DBA Blogs

Foreign Keys with default values

Tom Kyte - Wed, 2018-10-17 21:46
Hello. I'm designing a database in Oracle 12.2 in Toad Data Modeler. It would get lots of inserts. I'm using identity columns as PK (basically I create a sequence and use it as default value in the column, sequence.nextval). When I connect the...
Categories: DBA Blogs

Migration of 6i Forms to APEX

Tom Kyte - Wed, 2018-10-17 21:46
Hi Team, I am trying to migrate forms 6i to APEX, but problem that i pose here is that i cannot completely migrate all the functionalities of my forms to Apex even after trying to correct Metadata it does not migrate forms completely. So, my q...
Categories: DBA Blogs

Critical Patch Update for October 2018 Now Available

Steven Chan - Wed, 2018-10-17 11:57

The Critical Patch Update (CPU) for October 2018 was released on 16 October 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • 15 January 2019
  • 16 April 2019
  • 16 July 2019
  • 15 October 2019
Categories: APPS Blogs

Problem Solving

Jonathan Lewis - Wed, 2018-10-17 10:11

Here’s a little question that popped up on the Oracle-L list server a few days ago:

I am facing this issue running this command in 11.2.0.4.0 (also in 12c R2 I got the same error)

SQL> SELECT TO_TIMESTAMP('1970-01-01 00:00:00.0','YYYY-MM-DD HH24:MI:SS.FF') + NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL;
SELECT TO_TIMESTAMP('1970-01-01 00:00:00.0','YYYY-MM-DD HH24:MI:SS.FF') + NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL
ORA-01873: a precisão precedente do intervalo é pequena demais

 

How do you go about finding out what's going on? In my case the first thing is to check the translation of the error message (two options):

SQL> execute dbms_output.put_line(sqlerrm(-1873))
ORA-01873: the leading precision of the interval is too small

SQL> SELECT TO_TIMESTAMP('1970-01-01 00:00:00.0','YYYY-MM-DD HH24:MI:SS.FF') + NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL;
SELECT TO_TIMESTAMP('1970-01-01 00:00:00.0','YYYY-MM-DD HH24:MI:SS.FF') + NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL
                                                                                                       *
ERROR at line 1:
ORA-01873: the leading precision of the interval is too small

That didn't quite match my guess, but it was similar; I had been guessing that it was saying something about precision – but it doesn't really strike me as an intuitively self-explanatory message, so maybe a quick check in $ORACLE_HOME/rdbms/mesg/oraus.msg to find the error number with its cause and action will help:


01873, 00000, "the leading precision of the interval is too small"
// *Cause: The leading precision of the interval is too small to store the
//  specified interval.
// *Action: Increase the leading precision of the interval or specify an
//  interval with a smaller leading precision.

Well, that doesn’t really add value – and I can’t help feeling that if the leading precision of the interval is too small it won’t help to make it smaller. So all I’m left to go on is that there’s a precision problem of some sort and it’s something to do with the interval, and probably NOT with adding the interval to the timestamp. So let’s check that bit alone:


SQL> SELECT NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL;
SELECT NUMTODSINTERVAL(2850166802000/1000, 'SECOND') FROM DUAL
                                    *
ERROR at line 1:
ORA-01873: the leading precision of the interval is too small


So the interval bit is the problem. Since the problem is about “precision”, let’s try messing about with the big number. First I’ll do a bit of cosmetic tidying by doing the division to knock off the trailing zeros, then I’ll see what happens when I divide by 10:

SQL> SELECT NUMTODSINTERVAL(285016680, 'SECOND') from dual;

NUMTODSINTERVAL(285016680,'SECOND')
---------------------------------------------------------------------------
+000003298 19:18:00.000000000

So 285 million works, but 2.85 billion doesn't. The value that works gives an interval of about 3,298 days, which is about 10 years, so maybe there's an undocumented limit of 100 years on the input value; on the other hand, the jump from 285 million to 2.85 billion does take you through a critical computer-oriented limit: 2^31 – 1, the maximum signed 32-bit integer (2147483647), so let's try using that value, and that value plus 1, in the expression:


SQL> SELECT NUMTODSINTERVAL(power(2,31), 'SECOND') from dual;
SELECT NUMTODSINTERVAL(power(2,31), 'SECOND') from dual
                       *
ERROR at line 1:
ORA-01873: the leading precision of the interval is too small


SQL> SELECT NUMTODSINTERVAL(power(2,31)-1, 'SECOND') from dual;

NUMTODSINTERVAL(POWER(2,31)-1,'SECOND')
---------------------------------------------------------------------------
+000024855 03:14:07.000000000

1 row selected.

Problem identified – it's a numeric limit of the numtodsinterval() function. Interestingly it's not documented in the Oracle manuals; in fact the SQL Reference manual suggests that this shouldn't be a limit, because it says that "any number value or anything that can be cast as a number is legal", and in Oracle-speak a number allows for roughly 38 digits of precision.

Whilst we’ve identified the problem we still need a way to turn the input number into the timestamp we need – the OP didn’t need help with that one: divide by sixty and convert using minutes instead of seconds:


SQL> SELECT TO_TIMESTAMP('1970-01-01 00:00:00.0','YYYY-MM-DD HH24:MI:SS.FF') + NUMTODSINTERVAL(2850166802000/1000/60, 'MINUTE') FROM DUAL;

TO_TIMESTAMP('1970-01-0100:00:00.0','YYYY-MM-DDHH24:MI:SS.FF')+NUMTODSINTER
---------------------------------------------------------------------------
26-APR-60 01.00.02.000000000 AM

1 row selected

Job done.

Oracle Buys goBalto

Oracle Press Releases - Wed, 2018-10-17 07:00
Press Release
Oracle Buys goBalto Adds Leading Solution for Accelerating Clinical Trial Site Selection and Activation to Oracle Health Sciences Cloud

Redwood Shores, Calif.—Oct 17, 2018

Oracle today announced that it has entered into an agreement to acquire goBalto, which delivers leading cloud solutions to accelerate clinical trials by streamlining and automating the selection and set up of the best performing clinical research sites to conduct trials.

goBalto’s study startup solutions are activated at over 90,000 research sites across 2,000+ studies in over 80 countries to deliver significant savings to customers with over 30 percent quantifiable reduction in study startup cycle times.

Today, Oracle Health Sciences offers customers the industry's most advanced cloud solution for clinical trial planning, data collection, trial execution and safety management. goBalto adds the leading industry cloud solution that significantly reduces clinical trial startup time.  Together, Oracle and goBalto will provide the most complete end-to-end cloud platform dedicated to unifying action and accelerating results for the Life Sciences industry. 

“Clinical trial site selection and activation is one of the most manual and time-consuming processes for our customers,” said Steve Rosenberg, Senior Vice President and General Manager of Oracle Health Sciences Global Business Unit. “Oracle Health Sciences is designed to provide the industry with the best end-to-end clinical trial experience and the addition of goBalto will further allow our customers to remove another barrier from delivering treatments to patients faster.”

“We set out on a mission to streamline the clinical trial study startup process ten years ago because we saw how untenable it was for pharmaceutical companies and contract research organizations to track 1,000+ sites by 1,000+ specialists on spreadsheets,” said Jae Chung, Founder and President of goBalto.  

“We are delighted to join forces with Oracle as the benefits offered to both our customers and employees as a broader clinical trial continuum are unparalleled in the industry,” said Sujay Jadhav, CEO of goBalto.

More information about this announcement is available at www.oracle.com/gobalto.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

Oracle is currently reviewing the existing goBalto product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of goBalto’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by goBalto or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.

Cautionary Statement Regarding Forward-Looking Statements
This document contains certain forward-looking statements about Oracle and goBalto, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of goBalto, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or goBalto, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or goBalto may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or goBalto. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor goBalto is under any duty to update any of the information in this document.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

[BLOG] Oracle Critical Patch Update October 2018 Now Available

Online Apps DBA - Wed, 2018-10-17 06:32

Do you know that Oracle has released Critical Patch Update (CPU) for October 2018 with wide-ranging security update? Visit: https://k21academy.com/appsdba36 to check: ✔Affected Products and Patch Information ✔Doc to Refer to Apply CPU October patches & much more…

The post [BLOG] Oracle Critical Patch Update October 2018 Now Available appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Connect to DV Datasets and explore many more new features in OAC / OAAC 18.3.3.0

Tim Dexter - Wed, 2018-10-17 05:26

Greetings!

Oracle Analytics Cloud (OAC) and Oracle Autonomous Analytics Cloud (OAAC) version 18.3.3.0 (also known as V5) were released last month. A rich set of new features has been introduced in this release across the different products (with product version 12.2.5.0.0) in the suite. You can check all the new features of OAC / OAAC in the video here.

The focus for BI Publisher on OAC / OAAC in this release has been to complement Data Visualization for pixel perfect reporting, to optimize performance, and to add self-service abilities. Here is the list of new features added in this release:

BI Publisher New Features in OAC V5.0

1. DV Datasets

Now you can leverage a variety of data sources covered by Data Visualization data sets, including cloud data sources such as Amazon Redshift and Autonomous Data Warehouse Cloud; big data sources such as Spark, Impala and Hive; and application data sources such as Salesforce, Oracle Applications, etc. BI Publisher is here to complement DV by creating pixel perfect reports from DV datasets.

Check the documentation for additional details. Also, check this video to see how this feature works.

2. Upload Center

Now you can upload all files for custom configuration, such as fonts, ICC profiles, private keys and digital signatures, from the Upload Center, a self-service feature available on the Administration page.

Additional details can be found in the documentation here.

3. Validate Data Model

Report authors can now validate a data model before deploying the report in a production environment. This helps during custom data model creation, where data sets, LOVs and bursting queries can be validated against standard guidelines to avoid any undesired performance impact on the report server.

Details available here.

4. Skip unused data sets

When a data model contains multiple data sets for different layouts, each layout might not use all the data sets defined in the data model. Report authors can now set a data model property to skip the execution of the data sets that a layout does not use. Setting this property reduces the data extraction time and memory usage and improves overall report performance.

Additional details can be found here.

5. Apply Digital Signature to PDF Documents

Digital Signature is a widely used feature in on-prem deployments and has now been added to OAC as well, so a digital signature can be applied to PDF output. Digital signatures can be uploaded from the Upload Center, the required signature can be selected under the Security Center, and it can then be applied to PDF outputs by configuring attributes under report properties or run-time properties.

You can find the documentation here. Also check this video for a quick demonstration.

6. Password protect MS Office Outputs - DocX, PPTX, XLSX

Now protect your MS Office output files with a password defined at report or server level.

Check the PPTX output properties, DocX output properties, Excel 2007 output properties

7. Deliver reports in compressed format

You can select this option to compress the output by including the file in a zip file before delivery via email, FTP, etc.

Additional details can be found here.

8. Request read-receipt and delivery confirmation notification 

You can opt to get delivery and read-receipt notification for scheduled job delivery via email.

Check documentation for additional details. 

9. Add scalability mode for Excel Template to handle large data size

Now you can set scalability mode for an Excel template. This can be done at system level, report level or template level. By setting this attribute to true, the engine flushes memory after a threshold value, and when the data exceeds 65K rows it rolls the data over into multiple sheets.

You can find the documentation here.

 

Stay tuned to hear more updates on features and functionality! Happy BIP'ing...

 

Categories: BI & Warehousing
