Feed aggregator

How to verify the parallel execution in a custom User-Defined Aggregate Function

Tom Kyte - Tue, 2017-12-12 15:06
I want to verify the parallel execution in my User-Defined Aggregate Function. So I put some DBMS_OUTPUT inside the code - but it seems not to work correctly... You can reproduce the behaviour by simply creating the example user-defined aggregate fu...
Categories: DBA Blogs

How to concatenate string having length greater than 4000 characters

Tom Kyte - Tue, 2017-12-12 15:06
I am using the xmlagg function to concatenate records with comma-separated values. But I am getting an error when the column count is more than 300. I get the error below <code>Error starting at line : 8 in command - select rtrim (xmlagg (xmlelement (e...
Categories: DBA Blogs

Shrink partition table

Tom Kyte - Tue, 2017-12-12 15:06
Hi, The shrink table operation doesn't defragment partitioned tables. I executed: alter table test_table_1 shrink space and checked the wasted space before and after with the query below, but the values are identical. <code>select table_name,rou...
Categories: DBA Blogs

Impdp xml schema date format issue ORA-01858

Tom Kyte - Tue, 2017-12-12 15:06
Hi, I exported a schema from Oracle 11.2.0.1 and am trying to import it into Oracle 12c. My Oracle schema contains an xml schema and xmltype columns in a table. My xml fragment is <code><Tag0> <Tag1> <Tag2 Id="10202" date1="2017-11-15T13:36:34.00000...
Categories: DBA Blogs

Removal of Archive Files when using OS to backup offline DB

Tom Kyte - Tue, 2017-12-12 15:06
I have been thrown in at the deep end and given an Oracle DB to look after. I have no prior experience of Oracle so everything I am doing is new and a massive learning curve. The current DB data set is approx 400GB and we are working with the appl...
Categories: DBA Blogs

cx_Oracle 6.1 with Oracle Database Sharding Support is now Production on PyPI

Christopher Jones - Tue, 2017-12-12 14:45

cx_Oracle 6.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI.


cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

In the words of the creator and maintainer, Anthony Tuininga: The cx_Oracle 6.1 release has a number of enhancements building upon the release of 6.0 made a few months ago. Topping the list is support for accessing sharded databases via new shardingkey and supershardingkey parameters for connections and session pools. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure and a number of bugs were squashed. The test suite has also been expanded. See the full release notes for more information.
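
As a rough, hedged sketch of the new sharding parameters (the credentials, DSN and key value below are placeholders, not taken from the announcement), a connection routed by sharding key could look like this:

import cx_Oracle

# Placeholder credentials/DSN for illustration only. shardingkey (and the
# optional supershardingkey) take a list of values; the connection is routed
# to the shard that stores the data matching those values.
connection = cx_Oracle.connect("user", "password", "dbhost/sharddb",
                               shardingkey=["customer_123"])

cursor = connection.cursor()
cursor.execute("select sys_context('userenv', 'instance_name') from dual")
print(cursor.fetchone())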

cx_Oracle References

Home page: https://oracle.github.io/python-cx_Oracle/index.html

Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: http://cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: https://github.com/oracle/python-cx_Oracle

Does pg_upgrade in check mode raise a failure when the old cluster is running?

Yann Neuhaus - Tue, 2017-12-12 14:31

Today I had the pleasure of having Bruce Momjian in my session about PostgreSQL Upgrade Best Practices at the IT Tage 2017 in Frankfurt. While browsing through the various options you have for upgrading, there was one slide where I claimed that the old cluster needs to be down before you run pg_upgrade in check mode, as you will hit a (non-critical) failure message otherwise. Let's see if that really is the case or whether I did something wrong…

To start with, let's initialize a new 9.6.2 cluster:

postgres@pgbox:/home/postgres/ [PG962] initdb --version
initdb (PostgreSQL) 9.6.2 dbi services build
postgres@pgbox:/home/postgres/ [PG962] initdb -D /tmp/aaa
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/aaa ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/aaa -l logfile start

Start that:

postgres@pgbox:/home/postgres/ [PG962] pg_ctl -D /tmp/aaa -l logfile start
postgres@pgbox:/home/postgres/ [PG962] psql -c "select version()" postgres
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 0.861 ms

To be able to upgrade we'll need a new cluster, so:

postgres@pgbox:/home/postgres/ [PG10] initdb --version
initdb (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] initdb -D /tmp/bbb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/bbb ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/bbb -l logfile start

We’ll not start that one but will just run pg_upgrade in check mode from the new binaries:

postgres@pgbox:/home/postgres/ [PG10] pg_upgrade --version
pg_upgrade (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] export PGDATAOLD=/tmp/aaa
postgres@pgbox:/home/postgres/ [PG10] export PGDATANEW=/tmp/bbb
postgres@pgbox:/home/postgres/ [PG10] export PGBINOLD=/u01/app/postgres/product/96/db_2/bin/
postgres@pgbox:/home/postgres/ [PG10] export PGBINNEW=/u01/app/postgres/product/10/db_0/bin/
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
...

… and here we go. From the log:

postgres@pgbox:/home/postgres/ [PG10] cat pg_upgrade_server.log

-----------------------------------------------------------------
  pg_upgrade run on Tue Dec 12 21:23:43 2017
-----------------------------------------------------------------

command: "/u01/app/postgres/product/96/db_2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/tmp/aaa" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700" start >> "pg_upgrade_server.log" 2>&1
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 2194) running in data directory "/tmp/aaa"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So, @Bruce: Something to improve :)
Again: It was a pleasure to have you there and I hope we’ll meet again at one of the conferences in 2018.

 

The post Does pg_upgrade in check mode raise a failure when the old cluster is running? appeared first on Blog dbi services.

ODPI-C 2.1 is now available for all your Oracle Database Access Needs

Christopher Jones - Tue, 2017-12-12 13:56

Anthony Tuininga has just released version 2.1 of the Oracle Database Programming Interface for C (ODPI-C) on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Key new feature: support for accessing sharded databases.

 

In the words of Anthony: This release has a number of small enhancements intended to build upon the release of 2.0 made a few months ago. Topping the list is support for accessing sharded databases, a new feature in Oracle Database 12.2. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure. The test suite has also been expanded considerably. A number of bugs were squashed to improve ODPI-C and in preparation for the upcoming releases of cx_Oracle 6.1 and node-oracledb 2.0, both of which use ODPI-C. See the full release notes for more information.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Notes on artificial intelligence, December 2017

DBMS2 - Tue, 2017-12-12 12:53

Most of my comments about artificial intelligence in December, 2015 still hold true. But there are a few points I’d like to add, reiterate or amplify.

1. As I wrote back then in a post about the connection between machine learning and the rest of AI,

It is my opinion that most things called “intelligence” — natural and artificial alike — have a great deal to do with pattern recognition and response.

2. Accordingly, it can be reasonable to equate machine learning and AI.

  • AI based on machine learning frequently works, on more than a toy level. (Examples: Various projects by Google)
  • AI based on knowledge representation usually doesn’t. (Examples: IBM Watson, 1980s expert systems)
  • “AI” can be the sexier marketing or fund-raising term.

3. Similarly, it can be reasonable to equate AI and pattern recognition. Glitzy applications of AI include:

  • Understanding or translation of language (written or spoken as the case may be).
  • Machine vision or autonomous vehicles.
  • Facial recognition.
  • Disease diagnosis via radiology interpretation.

4. The importance of AI and of recent AI advances differs greatly according to application or data category. 

  • Machine learning and AI have little relevance to most traditional transactional apps.
  • Predictive modeling is a huge deal in customer-relationship apps. The most advanced organizations developing and using those rely on machine learning. I don’t see an important distinction between machine learning and “artificial intelligence” in this area.
  • Voice interaction is already revolutionary in certain niches (e.g. smartphones — Siri et al.). The same will likely hold for other natural language or virtual/augmented reality interfaces if and when they go more mainstream. AI seems likely to make a huge impact on user interfaces.
  • AI also seems likely to have huge impact upon the understanding and reduction of machine-generated data.

5. Right now it seems as if large companies are the runaway leaders in AI commercialization. There are several reasons to think that could last.

  • They have deep pockets. Yes, but the same is true in any other area of technology. Small companies commonly out-innovate large ones even so.
  • They have access to lots of data for model training. I find this argument persuasive in some specific areas, most notably any kind of language recognition that can be informed by search engine uses.
  • AI technology is sometimes part of a much larger whole. That argument is not obviously persuasive. After all, software can often be developed by one company and included as a module in somebody else’s systems. Machine vision has worked that way for decades.

I’m sure there are many niches in which decision-making, decision implementation and feedback are so tightly integrated that they all need to be developed by the same organization. But every example that remotely comes to mind is indeed the kind of niche that smaller companies are commonly able to address.

6. China and Russia are both vowing to lead the world in artificial intelligence. From a privacy/surveillance standpoint, this is worrisome. China also has a reasonable path to doing so (Russia not so much), in line with the “Lots of data makes models strong” line of argument.

The fiasco of Japan’s 1980s “Fifth-Generation Computing” initiative is only partly reassuring.

7. It seems that “deep learning” and GPUs fit well for AI/machine learning uses. I see no natural barriers to that trend, assuming it holds up on its own merits.

  • Since silicon clock speeds stopped increasing, chip power improvements have mainly taken the form of increased on-chip parallelism.
  • The general move to the cloud is also not a barrier. I have little doubt major cloud providers could do a good job of providing GPU-based capacity, given that:
    • They build their own computer systems.
    • They showed similar flexibility when they adopted flash storage.
    • Several of them are AI research leaders themselves.

Maybe CPU vendors will co-opt GPU functionality. Maybe not. I haven’t looked into that issue. But either way, it should be OK to adopt software that calls for GPU-style parallel computation.

8. Computer chess is in the news, so of course I have to comment. The core claim is something like:

  • Google’s AlphaZero technology was trained for four hours playing against itself, with no human heuristic input.
  • It then decisively beat Stockfish, previously the strongest computer chess program in the world.

My thoughts on that start:

  • AlphaZero actually beat a very crippled version of Stockfish.
  • That’s still impressive.
  • Google only released a small fraction of the games. But in the ones it did release, about half had a common theme — AlphaZero seemed to place great value on what chess analysts call “space”.
  • This all fits my view that recent splashy AI accomplishments are focused on pattern recognition.
Categories: Other

Different Authentication Settings for Internal and External Users with EBS 12.2 Now Available

Steven Chan - Tue, 2017-12-12 11:05

I'm pleased to inform you of a recent enhancement delivered for Oracle E-Business Suite 12.2. If you are using Oracle E-Business Suite 12.2.6 or higher, you may now configure single sign-on and local authentication at the site and server level.

Note:  Local authentication simply means that native Oracle E-Business Suite authentication is used. 

With prior Oracle E-Business Suite releases, if you integrated with Oracle Access Manager for single sign-on, all users were configured for single sign-on authentication. Now, you may choose to register your Oracle E-Business Suite 12.2.6+ instance with Oracle Access Manager for single sign-on for all internal users, while external users use local authentication. This enhancement also eliminates the requirement to add external users to your corporate directory service.

 

 

For additional details regarding the configuration options, refer to Section 6.5, Configure Single Sign-on at Site or Server Level, in Integrating Oracle E-Business Suite Release 12.2 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1576425.1).

Categories: APPS Blogs

Reporting and Dashboard Automation for Marketing

Nilesh Jethwa - Tue, 2017-12-12 09:45

Marketing Reporting and Dashboard Automation

For anyone who works in the marketing sector, customer satisfaction and company progress are of utmost importance.

In the past, entrepreneurs lived by the adage "The customer is always right." Even then, customer satisfaction guided company owners in deciding which products or services to offer. But with technology automating almost every task today, these satisfaction indicators are further expounded and interpreted through analytics and data reports.

Reporting and Dashboard Automation in a Nutshell

In analytics, there is a systematic collection and analysis of data taken from several departments, usually interpreted to form a summative assessment of an office, performance, product, or service, or to help companies develop their plans and achieve their goals. Basically, it's all about numbers, but these data are relevant to a given office unit.

However, the task of collection and interpretation is a breeze with reporting and dashboard automation for marketing. What once took days to compile and encode can be accomplished in a matter of minutes, provided the system used is up to date.

Read more at http://www.infocaptor.com/dashboard/reporting-and-dashboard-automation-for-marketing

10 reasons NOT to implement Blockchain now

Amis Blog - Tue, 2017-12-12 06:35

A secure distributed ledger with smart contract capabilities, not requiring a bank as an intermediary! Also a single source of truth with complete traceability. Definitely something we want! Blockchain technology promises to make this possible. Blockchain became famous through cryptocurrencies like Bitcoin and Ethereum. The technology could also be considered to replace B2B functionality. With new technologies it is not a bad idea to look at the pros and cons before starting an implementation. Blockchain is the new kid on the block and there is not much experience yet on how well it will play with others and how it will mature. In this blog I summarize some of my concerns about blockchain, which I hope will be resolved in due time.

Regarding new/emerging technologies in the integration space, I'm quite open to investigating the potential value they can offer. I'm a great proponent of, for example, Kafka, the highly scalable streaming platform, and Docker to host microservices. However, I've been to several conferences and did some research online regarding blockchain, and I'm sceptical. I definitely don't claim to be an expert on this subject, so please correct me if I'm wrong! Also, this is my personal opinion. It might deviate from my employer's and customers' views.

Most of the issues discussed here are valid for public blockchains. Private blockchains are of course more flexible since they can be managed by companies themselves. You can for example more easily migrate private blockchains to a new blockchain technology or fix issues with broken smart contracts. These do require management tooling, scripts and enough developers / operations people around your private blockchain though. I don’t think it is a deploy and go solution just yet.

1 Immutable is really immutable!

A pure public blockchain (not taking into account sidechains and off-chain code) is an immutable chain. Every block uses a hashed value of the previous block in its encryption. You cannot alter a block which is already on the chain. This makes sure things you put on the chain cannot suddenly appear or disappear. There is traceability. Thus you cannot accidentally create money on a distributed ledger (unless you create immutable smart contracts to provide you with that functionality). Security and immutability are great things, but they require you to work in a certain way we are not that used to yet. For example, you cannot cancel a confirmed transaction. You have to do a new transaction counteracting the effects of the previous one you want to cancel. If you have an unconfirmed transaction, you can 'cancel' it by creating a new transaction with the same inputs and a higher transaction fee (at least on a public blockchain). See for example here. Also, if you put a smart contract on a public chain and it has a code flaw someone can abuse, you're basically screwed. If the issue is big enough, public blockchains can fork (if 'the community' agrees). See for example the DAO hack on Ethereum. In an enterprise environment with a private blockchain, you can fork the chain and replay the transactions after the issue you want corrected on the chain. This however needs to be performed for every serious enough issue and can be a time-consuming operation. In this case it helps (in your private blockchain) if you have a 'shadow administration' of transactions. You do have to take into account, however, that transactions can have different results based on what has changed since the fork. Being careful here is probably required.

2 Smart Contracts

Smart contracts! It is really cool you can also put a contract on the chain. Execution of the contract can be verified by nodes on the chain which have permission and the contract is immutable. This is a cool feature!

However there are some challenges when implementing smart contracts. A lot becomes possible and this freedom creates sometimes unwanted side-effects.

CryptoKitties

You can look up CryptoKitties, a game implemented using smart contracts on Ethereum. They can clog a public blockchain and cause transactions to take a really long time. This is not the first time blockchain congestion has occurred (see for example here). This is a clear sign there are scalability issues, especially with public blockchains. When using private blockchains, these scalability issues are also likely to occur eventually if the number of transactions increases (of course you can prevent CryptoKitties on a private blockchain). The Bitcoin/VISA comparison is an often-quoted one, although there is much discussion on the validity of the comparison.

Immutable software. HelloWorld forever!

Smart contracts are implemented in code, and code contains bugs. Depending on the implementation, those bugs sometimes cannot be fixed, since the code on the chain is immutable. Especially since blockchain is a new technology, many people will put buggy code on public blockchains and that code will remain there forever. If you create DAOs (decentralized autonomous organizations) on a blockchain, this becomes even more challenging since the codebase is larger. See for example the Ethereum DAO hack.

Because the code is immutable, it will remain on the chain forever. Every hello world tryout, every CryptoKitten from everyone will remain there. Downloading the chain and becoming a node will thus become more difficult as the amount of code on the chain increases, which it undoubtedly will.

Business people creating smart contracts?

A smart contract might give the impression that a business person or lawyer should be able to design/create them. If they can create deterministic, error-free contracts which will be on the blockchain forever, that is of course possible. It is a question, though, how realistic that is. It seems like a similar idea to letting business people create business rules in business rule engines ('citizen developers'). In my experience, technical people need to do that in a controlled, tested manner.

3 There is no intermediary and no guarantees

There is no bank between you and the (public) blockchain. This can be a good thing, since a bank eats money. However, if for example the blockchain loses popularity, steeply drops in value or is hacked (compare with a bank going bankrupt, e.g. Icesave), then you won't have any guarantees like the deposit guarantee schemes in the EU. Your money might be gone.

4 Updating the code of a blockchain

Updating the core code of a running blockchain is, due to its distributed nature, quite a challenge. This often leads to forks. See for example Bitcoin forks like Bitcoin Cash and Bitcoin Gold, and an Ethereum fork like Byzantium. The issue with forks is that they make the entire cryptocurrency landscape crowded. It is like Europe in the past, when every country had its own coin. You have to exchange coins if you want to spend in a certain country (using the intermediaries everyone wants to avoid) or keep a stack of each of them. Forks, especially hard forks, come with security challenges such as replay attacks (transactions which can be valid on different chains). Some reasons you might want to update the code are that transactions are slow, security becomes an issue in the future (quantum computing) or new features are required (e.g. related to smart contracts).

5 Blockchain and privacy legislation (GDPR)

Security

Security is one of the strong points of blockchain technology and helps with the GDPR requirements of security by design and by default. There are some other things to think about, though.

The right to be forgotten

Things put on a blockchain are permanent. You cannot delete them afterwards, although you might be able to make them inaccessible in certain cases. This conflicts with the GDPR right to be forgotten.

Data localization requirements

Every node has the entire blockchain and thus all the data. This might cause issues with legislation, for example requirements to keep data within the same country. This becomes more of a challenge when running blockchain in a cloud environment. In Europe, with its many relatively small countries, this will be more of an issue than in, for example, the US, Russia or China.

Blockchain in the cloud

It really depends on the types of services the blockchain cloud provider offers and how much they charge for them. It could be similar to using a bank, requiring you to pay per transaction. In that case, why not stick to a bank? Can you enforce that the nodes are located in your country? If you need to fix a broken smart contract, will there be a service request, and will the cloud provider fork and replay transactions for you? Will you get access to the blockchain itself? Will they provide a transaction manager? Will they guarantee a maximum number of transactions per second in their SLA? A lot of questions, for which there are probably answers (which differ per provider), and based on those answers you can make a cost calculation to see whether using the cloud blockchain will be worthwhile. In the cloud, the challenges of being GDPR compliant are even greater (especially for European governments and banks).

6 Lost your private key?

If you have lost your private key or lost access to your wallet (a more business-friendly name for a keystore) containing your private key, you might have lost your assets on the blockchain. Luckily a blockchain is secure and there is no easy way to fix this. If you have a wallet which is managed by a 3rd party, they might be able to help you recover it. Those 3rd parties however are hacked quite often (a lot of value can be obtained from such a hack). See for example here, here and here.

7 A blockchain transaction manager is required

A transaction is put on the blockchain. The transaction is usually verified by several nodes before it is distributed to all nodes and becomes part of the chain. Verification can fail or might take a while; this can be hours on some public blockchains. It could also be that the transaction has been overtaken by another transaction with higher priority. In the software which is integrated with a blockchain solution, you have to keep track of the state of transactions, since you want to know the up-to-date value of your assets. This causes an integration challenge, and you might have to introduce a product which has a blockchain transaction manager feature.

8 Resource inefficient; not good for the environment

Blockchain requires large amounts of resources when compared to classic integration. Every node has the complete chain, so every node can verify transactions. This is a good thing, since if a single node is hacked, other nodes will overrule the transactions which this node offers to the chain if they are invalid in whatever way. However, this means every transaction is distributed to all nodes (network traffic) and every verification is performed on every node (CPU). Also, when the chain becomes larger, every node has a complete copy and thus disk space is not used efficiently. See for example some research on blockchain electricity usage here. Another example is that a single Bitcoin transaction (4 can be processed per second) requires the same amount of electricity as 5000 VISA transactions (while VISA can do 4000 transactions per second, see here). Of course there is discussion on the validity of such a comparison, and in the future this will most likely change. It is also an indication that blockchains are still in the early stages.

9 Maturity

Blockchain is relatively new, and new implementations appear almost daily. There is little standardisation. The picture below was taken from a slide at the UKOUG Apps17 conference in Birmingham this year (the presentation was given by David Haimes).

Even with this many (partially open source) products, it seems every implementation requires a new product. For example, the Estonian government has implemented its own blockchain flavor: KSI Blockchain. It is likely that eventually there will be a most commonly used product, which will hopefully be the one that works best (unlike what happened in the videotape format wars).

Which product?

If you choose a product to implement now, you will most likely not choose the product which will be most popular in a couple of years' time. Improvements to the technology/products will quite quickly catch up with you. This will probably mean you will have to start migration projects.

Standards?

For webservice integrations, for example, there are many internationally acknowledged standards such as the WS-* standards and SOAP. For REST services there are JOSE, JWT and of course JSON. What is there for blockchain? Every product uses its own protocol. It is like the times when Microsoft conjured up its own HTML/JavaScript standards, causing cross-browser compatibility issues. Only this time there are hundreds of Microsofts.

10 Quantum computing

Most of the blockchain implementations are based on ECDSA signatures. Elliptic curve cryptography is vulnerable to a modified Shor’s algorithm for solving the discrete logarithm problem on elliptic curves. This potentially makes it possible to obtain a user’s private key from their public key when performing a transaction (see here and here). Of course this will be fixed, but how? By forking the public blockchains? By introducing new blockchains? As indicated before, updating the technology of a blockchain can be challenging.

How to deal with these challenges?

You can jump on the wagon and hope the ride will not carry you off a cliff. I would be a bit careful when implementing blockchain. I would not expect an enterprise to quickly get something into production which will actually be worthwhile, without requiring a lot of expertise to work on all the challenges.

Companies will gain experience with this technology, and architectures which mitigate these challenges will undoubtedly emerge. A new development could also be that the base assumptions blockchain technology is built on turn out to be impractical in enterprise environments, and another technology arises to fill the gap.

Alternatives

Not really

To be honest, a solid alternative which covers all the use cases of blockchain is not easily found. This might also help explain the popularity of blockchain. Although there are many technical challenges, in the absence of a solid alternative, where should you go to implement those use cases?

SWIFT?

Exchanging value internationally has long been done using the SWIFT network (usually with a B2B application providing a bridge). This however often requires manual interventions (at least in my experience) and there are security considerations; SWIFT has been hacked, for example.

Kafka?

The idea of having a consortium which guards a shared truth has been around for quite a while in the B2B world. The technology such a consortium uses could just as well be, for example, a collection of Kafka topics. It would require a per-use-case study to determine whether all the blockchain features can be implemented. It will perform far better, the order of messages (like in a blockchain) can be guaranteed, a message put on a topic is immutable, and you can use compacted topics to get the latest value of something. Kafka has been designed to be easily scalable. It might be worthwhile to explore whether you can create a blockchain-like setup which does not have its drawbacks; a sketch of the compacted-topic idea follows below.
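
As a rough sketch of that compacted-topic idea, using the confluent-kafka Python client (the broker address, topic name and sizing below are assumptions, not a worked-out design):

from confluent_kafka.admin import AdminClient, NewTopic

# Assumed broker address; in a B2B consortium each member could run and
# verify brokers rather than trusting a single intermediary.
admin = AdminClient({'bootstrap.servers': 'localhost:9092'})

# cleanup.policy=compact keeps the most recent message per key, giving a
# 'latest value' view of each business entity; replication adds durability.
ledger = NewTopic('shared-ledger', num_partitions=3, replication_factor=3,
                  config={'cleanup.policy': 'compact'})
admin.create_topics([ledger])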

Off-chain transactions and sidechains

Some blockchain issues can be mitigated by using so-called off-chain transactions and code. See for example here. Sidechains are extensions to existing blockchains, enhancing their privacy and functionality by adding features like smart contracts and confidential transactions.

Finally

It might not seem like it from the above, but I'm excited about this new technology. Currently, however, in my opinion it is still immature. It lacks standardization and performance, and the immutable nature of a blockchain might be difficult to deal with in an actual enterprise. With this blog I tried to create some awareness of things you might think about when considering an implementation. Currently, implementations (still?) require much consultancy on the development and operations side to make them work. This is of course a good thing in my line of work!

The post 10 reasons NOT to implement Blockchain now appeared first on AMIS Oracle and Java Blog.

ORA-00240: control file enqueue held for more than 120 seconds 12c

Pakistan's First Oracle Blog - Mon, 2017-12-11 22:30
Occasionally, in an Oracle 12c database, you may see the ORA-00240: control file enqueue held for more than 120 seconds error in the alert log file.


Well, this error mentions the control file, so it looks scary, but if you are getting it infrequently and the instance stays up and running, then there is no need to worry and it can be ignored as a fleeting glitch.

But if it starts happening too often, and in the worst case hangs or crashes the instance, then more than likely it's a bug, and you need to apply either the latest PSU or a one-off patch available from Oracle Support after raising an SR.

There have been occurrences where this error was caused by a high number of sessions conflicting with the ulimit of the operating system, resulting in ASM and the RDBMS hanging. Sometimes it can be due to shared pool latch contention, as per a few MOS documents.

So if it's rare, ignore it and move on with life, as there are plenty of other things to worry about. If it's frequent and a show-stopper, then by all means raise a SEV-1 SR with Oracle Support as soon as possible.
Categories: DBA Blogs

Using an "On Field Value Changes" Event in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Mon, 2017-12-11 16:51

This entry is based on previous entries from John and Shray that deal with the same topic and provide the same type of solution. John's entry was created before VBCS provided the UI id for components, and Shray's entry deals with a more complex scenario that also involves fetching new data. So I figured I'd write my version here - mostly for my own future reference in case I need to do this again.

The goal is to show how you can modify the UI shown in a VBCS page in response to data changes in fields - for example, how to hide or show a field based on the value of another field.

To do this, you need to hook into the HTML lifecycle of your VBCS page and subscribe to events in the UI. Then you code the changes you want to happen. Your gateway into manipulating/extending the HTML lifecycle in VBCS is the custom component available in the VBCS component palette. It provides a way to add your own HTML+JavaScript into an existing page.

The video below shows you the process (along with a couple of small mistakes along the route):

The basic steps to follow:

Find out the ID of the business object field whose value changes you want to listen to. You'll also need to know the ID of the UI component you want to manipulate - this is shown as the last piece of info in the property inspector when you click on a component.

Once you have those, you'll add a custom component into your page and look up the observable that relates to the business object used in the page. This can be picked up from the "Generated Page Model (read-only)" section of the custom component and it will look something like: EmpEntityDetailArchetype

Next you are going to add a listener to your custom component model. Add it after the lines 

//the page view model
this.pageViewModel = params.root;

your code would look similar to this:

this._listener = this.pageViewModel.Observables.EmpEntityDetailArchetype.item.ref2Job.currentIDSingle.subscribe(
  function (value) {
    if (value === "2") {
      $("#pair-currency-32717").show();
    } else {
      $("#pair-currency-32717").hide();
    }
  });

CustomComponentViewModel.prototype.dispose = function () {
  this._listener.dispose();
};

Where you will replace the following:

  • EmpEntityDetailArchetype  should be replaced with the observable for your page model.
  • ref2Job  should be replaced with the id of the column in the business object whose value you are monitoring.
  • pair-currency-32717 should be replaced with the id of the UI component you want to modify. (in our case show/hide the component).

You can of course do more than just show/hide a field with this approach.

Categories: Development

TensorFlow Linear Regression Model Access with Custom REST API using Flask

Andrejus Baranovski - Mon, 2017-12-11 13:28
In my previous post - TensorFlow - Getting Started with Docker Container and Jupyter Notebook - I described the basics of how to install and run TensorFlow using Docker. Today I will describe how to give access to a machine learning model from outside of TensorFlow with REST. This is particularly useful while building JS UIs on top of TensorFlow (for example with Oracle JET).

TensorFlow supports multiple languages, but the most common one is Python. I have implemented a linear regression model using Python and now would like to give access to this model from the outside. For this reason I'm going to use Flask, a micro-framework for Python, to allow simple REST annotations directly in Python code.

To install Flask, enter the TensorFlow container:

docker exec -it RedSamuraiTensorFlowUI bash

Run Flask install:

pip install flask

Run Flask CORS install:

pip install -U flask-cors

I'm going to call REST from JS, which means the service should support CORS; otherwise the request will be blocked. No worries, we can import Flask CORS support.

REST is enabled for the TensorFlow model with Flask in these two lines of code:
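
A minimal sketch of what those lines could look like (the names here are illustrative, since the original screenshot is not reproduced):

from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)   # create the Flask application
CORS(app)               # enable CORS so a JS UI can call the endpoints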


As soon as Flask is imported and enabled, we can annotate a method with REST operations and an endpoint URL:
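
Continuing the sketch above (the endpoint URL and function name are assumptions):

@app.route('/api/linearregression', methods=['GET', 'POST'])
def linear_regression():
    # the body would run or query the TensorFlow linear regression model
    return jsonify({'status': 'ok'})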


There is an option to check what kind of REST operation was executed and to read input parameters from the POST request. This is useful to control model learning steps, for example:
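
A sketch of that, with illustrative parameter names:

@app.route('/api/train', methods=['GET', 'POST'])
def train():
    if request.method == 'POST':                 # which REST operation was used?
        params = request.get_json()              # input parameters from the POST body
        steps = int(params.get('steps', 1000))   # e.g. number of learning steps to run
        return jsonify({'steps': steps})
    return jsonify({'usage': 'POST a JSON body with a steps field'})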


If we want to collect all x/y value pairs and return them in the REST response, we can do that by collecting the values into arrays:
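
For instance (the loop and values below are dummies standing in for the model's training loop):

x_values = []
y_values = []
for step in range(100):          # stand-in for the training/evaluation loop
    x_values.append(step)        # collect each x value...
    y_values.append(2 * step)    # ...and the matching y value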


And construct a JSON response directly out of the array structure using Flask's jsonify:
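
Continuing the sketch:

@app.route('/api/values', methods=['GET'])
def values():
    # jsonify builds the JSON response directly from the collected arrays
    return jsonify(x=x_values, y=y_values)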


After we run the TensorFlow model in Jupyter, it will print the URL endpoint for the REST service. For the URL to be accessible outside the TensorFlow Docker container, make sure to run the Flask app on 0.0.0.0 as in the screenshot below:
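
In code this corresponds to binding Flask to all interfaces (the port is an assumption):

app.run(host='0.0.0.0', port=5000)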


Here is an example of a TensorFlow model REST call from Postman. A POST operation is executed with a payload and returns a response:
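
The same call can be made from Python with the requests library (URL and payload are assumptions matching the sketch above):

import requests

response = requests.post('http://localhost:5000/api/train', json={'steps': 500})
print(response.status_code, response.json())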


All REST calls are logged in the TensorFlow console:


Download the TensorFlow model enabled with REST from my GitHub.

Top 12 Rank Tracking Software and services

Nilesh Jethwa - Mon, 2017-12-11 13:23

The role of Search Engine Optimization and keyword tracking tools is important in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically designed to measure rank and popularity.

Rank tracking software gauges the performance of a business. Using a rank tracker tool allows one to evaluate and track ranks in search engines as well as gauge the range of visibility to the prospective market. It can also observe the progress of competitors in the market. This knowledge can act as an edge and a chance for improvement.

Monitoring a business using a keyword rank checker can offer numerous benefits to a business enterprise, but before that can happen, you must first find the rank tracker tool that suits you.

Here is a list of the top rank tracking software and services:

  1. ADVANCE WEB RANKING (AWR)

Founded in 2002, this company offers an extensive variety of keyword tracking tools. It prides itself on providing worry-free reporting that allows a user to focus on website optimization rather than tracking. It also offers a cloud service and desktop software.

Features and benefits of this rank tracker:

  • Allows white label reporting
  • It offers unlimited observation of campaigns and websites competition
  • Allows easy assignment of read/write permissions
  • This keyword rank checker has an international customer base that supports special characters
  • Simple project setup and easy sharing of data online
  • User-intuitive interface and can be accessed by mobile devices
  • Because of its custom Google location feature, it can support international markets and gets results from more than 50 countries.

What makes it better?

Compared to its competitors, Advanced Web Ranking can give localized rankings with pinpoint accuracy; geolocation proxies in the company's system are the reason for this better accuracy. It also offers a free 30-day trial.

But what’s the price?

The least expensive option is $49 per month and the highest tier is $499. The price is a bit costly, but with its given features, it is understandable.

 

Read more at https://www.infocaptor.com/dashboard/top-12-rank-tracking-software-and-services-for-your-business-needs

Secure Configuration Guidelines for Oracle E-Business Suite 12.2 and 12.1

Steven Chan - Mon, 2017-12-11 11:32

We've been providing Oracle E-Business Suite secure configuration guidelines or best practices in our published MOS Notes and guides for some time now. Our secure configuration deployment guidelines include the following recommendations:

Categories: APPS Blogs

DBMS_COMPRESSION can be run in parallel, at least from 11.2.0.4

Jeff Moss - Mon, 2017-12-11 10:12

I was trying to use DBMS_COMPRESSION on an 11gR2 (11.2.0.4 on RHEL 6.3) database the other day and was helped by this article from Neil Johnson. I was having some trouble with permissions until I read that article which nicely summarises everything you need to know – thanks Neil!

One thing I did notice is that Neil stated that you can't parallelise the calls to the advisor since it uses the same named objects each time and this would then cause conflicts (and problems). Neil illustrated the example calls that the advisor makes, based on his tracing of the sessions…

create table "ACME".DBMS_TABCOMP_TEMP_UNCMP tablespace "SCRATCH" nologging
 as select /*+ DYNAMIC_SAMPLING(0) FULL("ACME"."ACCS") */ *
 from "ACME"."ACCS" sample block( 99) mytab
 
create table "ACME".DBMS_TABCOMP_TEMP_CMP organization heap 
 tablespace "SCRATCH" compress for all operations nologging
 as select /*+ DYNAMIC_SAMPLING(0) */ *
 from "ACME".DBMS_TABCOMP_TEMP_UNCMP mytab

Because I kept having permissions issues I was repeatedly running the advisor and I ended up with a situation where one of the transient objects (above, or so I thought) had been left in place and when I tried the next rerun it complained that the object existed. I can’t reproduce this as I can’t remember all the steps that I took and I wasn’t recording my session at the time – it’s not really the point of this blog in any case, rather the knowledge it led to. Because the error was that the object existed, I figured I just needed to find the object and drop it and I’d be good to carry on – obviously I looked at the above code fragments and started to search for the two objects in question (DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP) but found nothing. I started looking for DBMS_TABCOMP% and again found nothing.

Somewhat confused, I then looked for the latest object created and found that the objects were actually called something completely different, of the form CMPx$yyyyyyyy. I think this must have changed since Neil wrote his article (it is from 2013 after all).

I can’t work out what “x” is – at first I thought it was the RAC instance but that was just a coincidence that I saw a 3 and I was on instance 3 of a RAC cluster. In fact on a single instance database (test below) I saw numbers higher than 1 so it’s not the RAC instance number and I can’t work out what it is. “yyyyyyyy” is definitely the OBJECT_ID of the data object, confirmed by cross referencing the data dictionary.

Given that this naming standard is object specific, it suggests that you could execute these advisor calls in parallel.

Just to be clear, I’m not sure what version of 11g Neil was using but I am using 11.2.0.4:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

A little test using Neil’s code (and a bit more as I don’t have acme user on my database):

First create a script called dbms_comp.sql with the following content:

set serveroutput on
set feedback on
set verify off
 
declare
 blkcnt_cmp BINARY_integer;
 blkcnt_uncmp BINARY_integer;
 row_cmp BINARY_integer;
 row_uncmp BINARY_integer;
 cmp_ratio number;
 comptype_str varchar2(60);
begin
 dbms_compression.get_compression_ratio(
 scratchtbsname => upper('&3.')
 , ownname => upper('&1.')
 , tabname => upper('&2.')
 , partname => null
 , comptype => dbms_compression.comp_for_oltp
 , blkcnt_cmp => blkcnt_cmp
 , blkcnt_uncmp => blkcnt_uncmp
 , row_cmp => row_cmp
 , row_uncmp => row_uncmp
 , cmp_ratio => cmp_ratio
 , comptype_str => comptype_str
 , subset_numrows => &4.
 );
 DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
 DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
 --DBMS_OUTPUT.PUT_LINE('Compression type = ' ||comptype_str);
 DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,1)||' to 1');
 DBMS_OUTPUT.PUT_LINE('Compression % benefit = '||round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100,1));
 --DBMS_OUTPUT.PUT_LINE('Compression ratio org= '||cmp_ratio);
end;
/
set verify on

Then create another script called setup.sql with the following content – I’m using auditing (thanks Tim!) to see the statements rather than tracing in this instance:

conn sys as sysdba
drop user acme cascade;
drop user nj cascade;
drop tablespace acme including contents and datafiles;
drop tablespace scratch including contents and datafiles;
drop role nj_dba;
create user acme identified by acme;
grant create session,create table to acme;
create tablespace acme datafile '/u01/app/oracle/oradata/db11g/acme01.dbf' size 2G;
alter user acme quota unlimited on acme;
create tablespace scratch datafile '/u01/app/oracle/oradata/db11g/scratch01.dbf' size 2G;
create role nj_dba;
create user nj identified by nj;
REM Use auditing instead of tracing to identify the statements run:
audit all by nj by access;
audit create table by nj by access;
grant create session, create any table, drop any table, select any table to nj_dba;
grant execute on sys.dbms_monitor to nj_dba;
grant nj_dba to nj;
alter user acme quota unlimited on scratch;
alter user nj quota unlimited on scratch;
grant ANALYZE ANY to NJ_DBA;

Now login to sqlplus /nolog and run setup.sql which should show this:

SQL> @setup
Enter password:
Connected.

User dropped.

User dropped.

Tablespace dropped.

Tablespace dropped.

Role dropped.

User created.

Grant succeeded.

Tablespace created.

User altered.

Tablespace created.

Role created.

User created.

Audit succeeded.

Audit succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

User altered.

User altered.

Grant succeeded.

Now login to acme and create the subject table for the compression advisor:

conn acme/acme
create table test tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now check the compression using the advisor (ignore the actual compression results as we’re not interested in those at this time):

conn nj/nj
@dbms_comp acme test scratch 200000
Block count compressed = 2048
Block count uncompressed = 2048
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Now check the audit trail to find the statements run:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';

USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518

(Abridged to remove non relevant tests)

Now check the dictionary to see the OBJECT_ID:

select object_name from dba_objects where object_id=87401;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST

1 row selected.

OK, how about the parallelism? Let’s create a second table called TEST2 in ACME:

conn acme/acme
create table test2 tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now run two parallel sessions – I did it by firing off the calls manually in separate SQL*Plus sessions rather than being clever:

In session 1:

conn nj/nj
@dbms_comp acme test scratch 200000

In session 2:

conn nj/nj
@dbms_comp acme test2 scratch 200000

 

First one gives:

Block count compressed = 1920
Block count uncompressed = 1920
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Second one gives:

Block count compressed = 2432
Block count uncompressed = 2432
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Both ran at the same time and didn't fail.

Now check the audit trail:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';
USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518 
NJ CREATE TABLE CMP3$87408 10-DEC-2017 18:12:18.114321
NJ CREATE TABLE CMP3$87409 10-DEC-2017 18:12:18.114353
NJ CREATE TABLE CMP4$87408 10-DEC-2017 18:12:22.730715
NJ CREATE TABLE CMP4$87409 10-DEC-2017 18:12:22.735908
(Abridged to remove non relevant tests)

And from the dictionary:

select object_name from dba_objects where object_id IN(87408,87409);

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST
TEST2

2 rows selected.

So, it appears to allow parallel running without issue.

If anyone works out what the “x” part of the object names is (3 and 4 in the above example), please shout out in the comments…

Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats

Oracle Press Releases - Mon, 2017-12-11 07:00
Press Release
Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats New Identity Governance, A.I.-based Configuration Management, and Consumer Identity Management are features designed to help organizations protect their clouds and digital services

Redwood Shores, Calif.—Dec 11, 2017

Oracle today announced that it is expanding its security portfolio and unveiling a new partner program. The Oracle Identity SOC portfolio now includes new capabilities to help enterprises manage and certify user identities, applications, and confidential data more securely and through a richer, consumerized user experience. Additionally, the new partner program will help improve collaboration with security vendors and simplify customer adoption.

First unveiled at Oracle OpenWorld 2017, Oracle’s integrated suites—Oracle Identity Security Operations Center (SOC) portfolio of services and Oracle Management Cloud—are designed to help enterprises forecast, reduce, detect, and resolve security threats and assist in efforts to remediate application and infrastructure performance issues.  Leveraging artificial intelligence to analyze a unified data set consisting of the full breadth of security and operational telemetry, as well as provide automated remediation, Oracle’s integrated suite is designed to enable customers to quickly adapt their security and operational posture as their risk landscape changes. This application of machine learning can potentially help thwart attacks, reduce the detection window from months to minutes, and more quickly address security breaches and performance outages.

“Built on our advanced artificial intelligence and automation technologies, Oracle Identity SOC portfolio now offers cloud-native identity governance combined with adaptive security designed to protect our customer’s cloud and digital services,” said Rohit Gupta, group vice president, Cloud Security, Oracle. “In addition to strengthening internal security monitoring, Oracle’s comprehensive cross-stack threat detection capabilities provide broad threat detection range, less noise from false positives and a high level of accuracy in detecting and remediating threats against the enterprise’s most important IT asset–its data.”

The Oracle Identity SOC portfolio now includes new automated identity governance for hybrid clouds, expanded consumer identity management, machine learning-driven configuration management and enhanced artificial intelligence capabilities.

First Cloud-Native Identity Governance Offering

Oracle announced the first cloud-native identity governance service for hybrid cloud environments, which will be fully integrated and native to Oracle’s SaaS applications, Oracle Identity SOC portfolio (including Oracle Identity Cloud Service and Oracle CASB Cloud Service), as well as Oracle Management Cloud. This combination helps provide a better user experience for identity governance and automation of traditional processes and assessments through intelligent machine learning and cloud application risk feeds from Oracle CASB Cloud Service, all of which can be unified with security and operational telemetry from on-premises and hybrid environments. The new identity governance service features out-of-the-box connectors for integration with external systems and a consumer-grade interface for enterprise campaigns including application access requests, approvals and certifications.

Oracle CASB Cloud Service now also includes risk-based cloud access control to help enterprises mitigate risks to cloud applications. The new controls help manage configuration changes or application access based on rich context including device type, geo-location, and dynamic risk scores. Combined, these factors are designed to help prevent user access based on stolen credentials and mitigate the risk of cloud application administrator privileges misuse.

Industry-First Machine Learning-Driven Configuration Management

Additionally, Oracle introduced a game-changing feature in Oracle Configuration and Compliance Cloud Service, built on Oracle Management Cloud, which automatically discovers the configuration settings across a customer’s entire organization in real time and uses machine learning to find outlier configurations, which can then be automatically remediated. In addition, Oracle Management Cloud now supports rulesets to enforce DISA’s Security Technical Implementation Guide (STIG) against the Oracle Database. Oracle has also now extended its user and entity behavior analytics (UEBA) across the entire stack. Specifically, Oracle Security Monitoring and Analytics (SMA) Cloud Service continuously baselines data access patterns at the SQL level and then detects anomalies by user, database or application.

Expanded Consumer Identity Management

Recognizing the need for enterprises to correlate consumer data with marketing information, Oracle expanded its consumer identity management capabilities in Oracle Identity Cloud Service with integrations with Oracle Marketing Cloud and Oracle Data Cloud. By leveraging built-in consent management, social profiles, preference management and activity attributes from Oracle Identity Cloud Service with these solutions, enterprises can build more targeted marketing campaigns with better consumer insights and analytics. In addition, easy integration with third-party or custom applications via Oracle Self-Service Integration Cloud Service will allow marketers to incorporate additional services.

New Partner Programs Help Deliver Complete Security Solutions

Continuing its commitment to supporting the development of a heterogeneous security ecosystem, Oracle today launched the Identity SOC Security Network in Oracle Cloud Marketplace. This program offers technology integrations with leading security vendors across critical areas including Enterprise Mobility Management, Next Generation Firewalls, Endpoint Security and Threat Intelligence, enabling enterprises to enrich the dynamic context, intelligence and automation capabilities of Oracle Identity SOC. In addition, Oracle continues to enable hundreds of partners worldwide to achieve Oracle PartnerNetwork Specializations in Security and Systems Management Cloud Services.

"We're incredibly excited to work with the Oracle Identity SOC portfolio," said Prakash Linga, CTO and co-founder of Vera. "Identity is the single most critical element of all modern applications and systems. Enhancing Oracle’s portfolio with Vera’s data-centric security and encryption solution makes Oracle’s services incredibly compelling for businesses.”

“We are excited to work with Oracle to introduce Compromised Credentials as a new category in Oracle CASB Cloud Service for refining organization and identity-centric risk assessment,” said Steve Tout, CEO of Seattle based VeriClouds. “With VeriClouds-powered credential monitoring and detection, more than 7 billion compromised credentials are surfaced to enable the automation of risk insight from the dark web into actionable intelligence for the Oracle Identity SOC.”

“Our digital workspace platform, VMware Workspace ONE powered by AirWatch unified endpoint management technology, helps IT secure apps based on conditional access policies and provides a seamless end user experience for any app from any device,” said Ashish Jain, vice president of product management, End-User Computing, VMware. “Workspace ONE and Oracle’s security solutions enables us to strengthen application security and provide data-driven insights to help mutual customers make better informed decisions related to device and app management.”

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


OBIEE 12c (12.2.1.3) post upgrade challenges - unable to save HTML based dashboard reports

Syed Jaffar - Mon, 2017-12-11 03:18
Very recently, an OBIEE v12.2.1.1 installation was upgraded to v12.2.1.3 on an Exalytics machine. Though it wasn't a major upgrade, we had to do it to fix some bugs encountered in v12.2.1.1. For sure, it wasn't an easy walk in the park, either during the upgrade or afterwards.

I am here to discuss a particular issue we faced post upgrade. This should be interesting.

Post upgrade to v12.2.1.3, the application team reported that all reports were working perfectly fine, except dashboard reports containing HTML markup, which could no longer be saved. The following error was thrown when trying to save the existing HTML reports:

"You do not currently have sufficient privileges to save a report or dashboard page that contains HTML markup"

It appeared to be a very generic error, and we tried all the workarounds we found on various websites and in Oracle docs. Unfortunately, none of the solutions fixed the issue. One of the most popular pieces of advice is the following:

You need to check the HTML markup privileges under Administration:


1. Log in with an OBIEE Admin user.
2. Go to the Administration link - Manage Privileges.
3. Search for "Save Content with HTML Markup" and verify it carries the necessary privileges; otherwise, assign the required roles and re-check the issue.


After granting the additional privilege we could save the existing HTML reports. However, the issue still existed when creating a new dashboard report with an HTML tag. This time a different error message appeared.



While doing the research we also opened an SR with Oracle support. Luckily (I mean it), the engineer assigned to the SR was sharp enough to catch the issue and provide the solution.

The engineer tried to simulate the issue in his own 12.2.1.3 environment and, surprisingly, faced the same problem. He then contacted the OBIEE development team and raised the above concerns.

A few hours later, he provided the workaround below, which seems to be a generic solution to both the first and the second problem we faced. I believe this is a common issue on 12.2.1.3.


Please follow the steps below; a consolidated shell sketch of steps 1 and 2 is shown after step 4.

1. Take a backup of the instanceconfig.xml file, which is located at:

<obiee_home>/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml 

2. Add the following entry under Security:

<Security>
   <CheckUrlFreshness>false</CheckUrlFreshness>
   <EnableSavingContentWithHTML>true</EnableSavingContentWithHTML>
</Security>

3. Restart the presentation services with the commands below:

cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin

./stop.sh -i obips1 

./start.sh -i obips1 

4. Now you will see the "Contains HTML Markup" option in Answers; check it and try to save the report again.
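Putting steps 1 and 2 together, here is a minimal sketch of how the file change could look from the shell. The OBIEE home path below is an assumption for illustration only (adjust it to your environment), and the placement of the Security block directly under the ServerInstance root element follows the standard OBIPS instanceconfig.xml layout:

# Assumed install location -- adjust to your environment
OBIEE_HOME=/u01/app/oracle/obiee12c
OBIPS_CFG=$OBIEE_HOME/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS

# Step 1: back up the current configuration before editing
cp "$OBIPS_CFG/instanceconfig.xml" "$OBIPS_CFG/instanceconfig.xml.bak"

# Step 2: after editing, the relevant fragment of instanceconfig.xml should read:
#
#   <ServerInstance>
#      ...
#      <Security>
#         <CheckUrlFreshness>false</CheckUrlFreshness>
#         <EnableSavingContentWithHTML>true</EnableSavingContentWithHTML>
#      </Security>
#      ...
#   </ServerInstance>

# Sanity check: confirm the new flag is present before restarting in step 3
grep EnableSavingContentWithHTML "$OBIPS_CFG/instanceconfig.xml"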

The solution worked perfectly and all HTML dashboard reports could be saved.
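As a quick sanity check after the restart in step 3, the status script that ships alongside start.sh and stop.sh in the bitools directory can be used to confirm the presentation services component came back up (assuming the standard 12c layout, where status.sh reports the state of the BI components):

cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin
./status.sh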

If you are on OBIEE 12.2.1.3, I strongly recommend that you test your HTML reports, and if you encounter an issue similar to ours, apply the workaround.


