The Predictive Analytics World conference is finishing up today in New York. Over the past few days the conference has had some of the leading people in analytics presenting at it.
Twitter, as usual, has been busy and there have been some very interesting and important quotes.
The list of tweets (#pawcon) below are the ones I found most interesting:
Manu Sharma from LinkedIn: “Guru” job title is down, “Ninja” is up.
Despite the “data science” buzz, the biggest skill among #pawcon attendees is #DataMining
Andrea Medinaceli: Visualization is very powerful for making analytics results accessible to upper management (and for buy-in)
Social Network Analytics (SNA) with Zynga, 20M daily active users, 90M monthly active users; 10K nodes, 45K edges (big!)
Vertica: Zynga is an analytics company in the disguise of a gaming company; graph analytics find users/influencers
Colin Shearer: Find me something interesting in my data is a question from hell (analysis should be guided by business goals)
John Elder advocates ensemble methods – usually improve analytics results
Tom Davenport: to get real value, #analytics need to move from one-time craft to industrialized activity
10 years from now all Fortune 500 companies will have a Chief Analytics Officer at the level of COO or CFO
Must be a sign of the economy, so much of the focus on the value of predictive is on retaining customers. #PAWCON.
Tom Davenport: #Analytics is not about math, it is about relationships (with your business client) – says Intel Chief Mathematician
Karl Rexer: companies with higher analytic capabilities are doing better than their peers
If you have been using Oracle Data Miner to develop your data mining workflows and models, at some point you will want to move away from the tool and start using the ODM APIs.
Oracle Data Mining provides a PL/SQL API and a Java API for creating supervised and unsupervised data mining models. The two APIs are fully interoperable, so that a model can be created with one API and then modified or applied using the other API.
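To give a flavour of the PL/SQL API, the following sketch builds a classification model with DBMS_DATA_MINING.CREATE_MODEL. The view, column, and model names are taken from the ODM sample data, and the settings table shown is illustrative rather than required.

```sql
-- A settings table lets us override the defaults;
-- here we pick the Naive Bayes algorithm.
CREATE TABLE nb_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000));

BEGIN
  INSERT INTO nb_settings (setting_name, setting_value)
  VALUES (dbms_data_mining.algo_name, dbms_data_mining.algo_naive_bayes);
  COMMIT;

  -- Build a classification model on the sample build view
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'NB_AFFINITY_MODEL',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'MINING_DATA_BUILD_V',
    case_id_column_name => 'CUST_ID',
    target_column_name  => 'AFFINITY_CARD',
    settings_table_name => 'NB_SETTINGS');
END;
/
```

Once built, a model like this can be applied with DBMS_DATA_MINING.APPLY or scored inline with the SQL PREDICTION function, whichever suits the application.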
I will cover the Java APIs in a later post, so watch out for that.
To help you get started with using the APIs there are a number of demo PL/SQL programs available. These were available as part of the pre-11.2 version of the tool, but they don’t seem to be packaged up with the 11.2 (SQL Developer 3) application.
The following table gives a list of the PL/SQL demo programs that are available. Although these were part of the pre-11.2 tool, they still seem to work on an 11.2 database.
You can download a zip of these files from here.
The sample PL/SQL programs illustrate each of the algorithms supported by Oracle Data Mining. They include examples of data transformations appropriate for each algorithm.
I will be exploring the main APIs, how to set them up, the parameters, etc., over the next few weeks, so check back for these posts.
Today I received two boxes, containing 48 books of
The Performance Management Revolution by Howard Dresner
These books have been kindly donated by Duncan Fitter, UK Business Development Director at Oracle.
I will be distributing these books to my MSc Data Mining students over the next week.
Thanks Duncan and Oracle
The new/updated SQL Developer 3.1 Early Adopter has just been released.
For the Data Miner, there are no major changes; it appears that there have been some bug fixes and some minor enhancements to some parts.
The main ODM features, apart from bug fixes, in this release include:
- Globalization support, including translated error messages and GUI for all languages supported by SQL Developer
- Improved accessibility features including the addition of a Structure navigator that lists all the nodes and links displayed in a workflow
Bug / Feature
After unzipping the download I opened SQL Developer. With each new release you will have to upgrade the existing ODM repository. The easiest way of doing this is to open the ODM connections pane and double click on one of your ODM schemas. SQL Developer will then run the necessary scripts to upgrade the repository.
I discovered a bug/feature with the SQL Developer 3.1 EA1 upgrade script. The repository upgrade does not complete and an error is reported.
I logged this error on the ODM forum on OTN. Mark Kelly, who is the Development Manager for ODM and monitors the ODM forum, and his team were quickly onto investigating the error. Mark has posted an update on the ODM forum and has given a script that needs to be run before you upgrade your existing repository.
You can download the pre-upgrade script from here.
If you don’t have an existing repository then you don’t have to run the script.
Check out the message on the ODM forum.
How to Upgrade SQL Developer & ODM
You will have to download the new SQL Developer 3.1 EA install files.
- Unzip this into your SQL Developer directory
- Create a shortcut for sqldeveloper.exe on your desktop and relabel it SQL Developer 3.1 EA
- Double-click this shortcut
- You should be presented with the above window. Select the Yes button to migrate your previous install settings
- SQL Developer should now open and contains all your previous connections
If you have an existing ODM repository, you need to run the pre-upgrade script (see above) at this point
- You will now have to upgrade the ODM repository in the database. The simplest way of doing this is to allow SQL Developer to run the necessary scripts.
- From the View Menu, select Oracle Data Miner –> Connections
- In the ODM Connections pane double click one of your ODM schemas. Enter the username and password and click OK
- You will then be prompted to migrate/update the ODM repository to the new version. Click Yes.
- Enter the SYS username and Password
- Click the Start button to start the migrate/upgrade scripts
- On my laptop this migrate/upgrade step took less than 1 minute
- The upgrade is now finished and you can start using ODM.
ODM – SQL Developer 3.1 EA – Release Notes
The ODM release notes can be found at
The Oracle BIWA SIG, which is part of the IOUG, will be having a tech cast on Wednesday 14th September 12:00 PM – 1:00 PM CDT (between 6pm and 7pm in Ireland)
It is titled ‘Building Next-Generation Predictive Analytics Applications using Oracle Data Mining’.
You can register for this by visiting http://ow.ly/6s35C
This presentation will cover how the Oracle Database has become a predictive analytics (PA) platform for next-generation applications and will include several examples, including:
- Oracle Fusion Human Capital Management (HCM) Predictive Workforce,
- Oracle Adaptive Access Manager for fraud detection,
- Oracle Communications Industry Model,
- Oracle Complex Event Processing, and others,
and will be interspersed with Oracle Data Mining demos and PA examples where possible.
“Predictive analytics help you make better decisions by uncovering patterns and relationships hidden in the data. This new information generates competitive advantage. Oracle has invested heavily to “move the algorithms to the data” rather than current approaches. Oracle Data Mining provides 12 in-database algorithms that mine star schemas, structured, unstructured, transactional, and spatial data. Exadata, delivering 10x-100x faster performance, combined with OBIEE for dashboards and drill-down deliver an unbeatable in-database analytical platform that undergirds next-generation “predictive” analytics applications. This webcast will show you how to get started.”
Oracle Data Miner functionality is now well established and proven over the years, particularly with the release of the ODM 11gR2 version of the tool. But how will Oracle Data Miner develop in the future?
There are four main paths, or frontiers, for future developments of Oracle Data Miner:
Oracle Data Miner Tool
The new ODM 11gR2 tool is a major development over the previous version. With the introduction of workflows and some added functionality, the tool is now comparable with the likes of SAS Enterprise Miner and SPSS.
But the new tool is not complete and still needs some fine tuning of most of its features, in particular the usability and interactions. Some of the colour schemes need to be looked at, or users should be allowed to select their own colours.
Apart from the usability improvements, another major development that is needed is the ability to translate the workflow and the underlying database objects into usable code. This code could then be incorporated into our applications and other tools. The tool does allow you to produce shell code for the nodes, but a lot of effort is still needed to make this usable. The previous version of the tool had features in JDeveloper and SQL Developer that produced packaged code that was easy to include in our applications.
“A lot done – More to do”
Over the past couple of months there have been a few postings on how Oracle Data Miner (11gR2) has been, or will be, incorporated into various Oracle Applications, for example Oracle Fusion Human Capital Management and Oracle Real Time Decision (RTD). Watch out for other applications that will be including Oracle Data Miner.
“A bit done – Lots more to do”
Oracle Business Intelligence
One of the most common places where ODM can be used is with OBIEE. OBIEE is the core engine for the delivery of the BI needs of an organisation. OBIEE coordinates the gathering of data from various sources, the defining of the business measures, and then the delivery of this information in various forms to the users. Oracle Data Miner can be included in this process and can add significant value to the BI needs and reports.
“A lot done – Need to publicise more”
Most data mining projects are independent of the various Applications and BI requirements. They are projects that hope to achieve a competitive insight into organisational data. Over time, as the success of some pilot projects becomes known, the need for more data mining projects will increase. This will lead to organisations having a core data mining team to support these projects. With this, the team will need tools to support them in the delivery of their projects. This is where OBIEE and the Oracle Fusion Apps will become increasingly important.
“A lot done – more to do”
Before beginning any data mining task we need to perform some data investigation. This allows us to explore the data and to gain a better understanding of the data values. We can discover a lot by doing this, and it can help us to identify areas for improvement in the source applications, to identify data that does not contribute to our business problem (this is called feature reduction), and to identify data that needs reformatting into a number of additional features (feature creation). A simple example of this is a date of birth field: on its own it provides no real value, but by creating a number of additional attributes (features) from it we can determine what age group each customer fits into.
As with most of the interface in Oracle Data Miner 11gR2, there is a new Data Exploration interface. In this blog post I will talk you through how to set-up and use the new Data Exploration interface and show you how you can use the data exploration features to gain an understanding of the data before you begin using the data mining algorithms.
The examples given here are based on my previous blog posts and we will use the same sample data sets that were set up as part of the install and configuration.
See my other blog post and videos on installing and setting up Oracle Data Miner.
The next step is to create the Explore Data node on our workflow. From the Data tab in the Component Palette, select and drag the Explore Data node onto the workflow. Now we need to link the Data node to the Explore Data node.
Right-click on the Explore Data node and click Run. This will make the ODM tool go to the database and analyse the data that is specified in our Data node. The analysis results will be used in the Explore Data node.
Exploring the Data
When the Explore Data node has finished we can look at the data it has generated. Right-click the Explore Data node and select View Data.
A lot of statistical information has been generated for each of the attributes in our Data node. In addition to the statistical information we also get a histogram of the attribute distributions.
We can work through each attribute taking the statistical data and the histograms to build up a picture of the data.
The data we are using is for an Electronics Goods store.
A few interesting things in the data are:
- 90% of the data comes from the United States of America
- PRINTER_SUPPLIES attribute only has one value. We can eliminate this from our data set as it will not contribute to the data mining algorithms
- Similarly for OS_DOC_SET_KENJI, which also has only one value
The histograms are based on a predetermined number of bins. This is initially set to 10, but you may need to change this value up or down to see if a pattern exists in the data.
An example of this is if we select AGE and set the number of bins to 10. We get a nice histogram showing that most of our customers are in the 31 to 46 age range. So maybe we should be concentrating on these customers.
Now if we change the number of bins to 25 we get a completely different picture of what is going on in the data.
To change the number of bins we need to go to the Workflow pane and select the Property Inspector. Scroll down to the Histogram section and change the Numerical Bins to 25. You then need to rerun the Explore Data node.
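For comparison, the equal-width binning that the Explore Data node performs can be approximated directly in SQL with the WIDTH_BUCKET function. This is only a sketch; the MINING_DATA_BUILD_V view and its AGE column come from the ODM sample data.

```sql
-- Count customers in 25 equal-width AGE bins,
-- roughly what the Explore Data node computes for its histogram.
SELECT WIDTH_BUCKET(m.age, s.min_age, s.max_age + 1, 25) AS age_bin,
       COUNT(*)                                          AS num_customers
FROM   mining_data_build_v m,
       (SELECT MIN(age) AS min_age, MAX(age) AS max_age
        FROM   mining_data_build_v) s
GROUP  BY WIDTH_BUCKET(m.age, s.min_age, s.max_age + 1, 25)
ORDER  BY age_bin;
```

Changing the last argument of WIDTH_BUCKET changes the number of bins, which is handy for quickly experimenting with different bin counts.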
Now we can see that there are a number of important age groups that stand out more than others. If we look at the 31 to 46 age range in the first histogram, we can see that there is not much change between each of the age bins. But when we look at the second histogram, with 25 bins, for the same 31 to 46 age range, we get a very different view of the data. In this second histogram we see that the ages of the customers vary a lot. What does this mean? Well, it can mean lots of different things, and it all depends on the business scenario. In our example we are looking at an electronic goods store. What we can deduce from this second histogram is that there are a small number of customers up to about age 23. Then there is an increase. Is this due to people having obtained their first main job after school and having some disposable income? This peak is followed by a drop off in customers, followed by another peak, drop off, peak, drop off, etc. Maybe we can build a profile of our customers based on their age, just like financial organisations do to determine what products to sell to us based on our age and life stage.
Conclusions on the data
From this histogram we can maybe categorise the customers into the following groups:
• Early 20s – out of education, first job, disposable income
• Late 20s to early 30s – settling down, own home
• Late 30s – maybe kids, so have less disposable income
• 40s – maybe people are trading up and need new equipment. Or maybe the kids have now turned into teenagers and are encouraging their parents to buy up-to-date equipment.
• Late 50s – these could be empty nesters, whose children have left home and may be setting up homes of their own, with their parents buying things for those new homes. Or maybe the parents are treating themselves to new equipment as they have more disposable income.
• 60s + – parents and grand-parents buying equipment for their children and grand-children. Or maybe we have very techie people who have just retired
• 70+ – we have a drop off here.
As you can see, we can discover a lot in the data by changing the number of bins and examining the results. The important part of this examination is trying to relate what you are seeing in the graphical representation of the data on the screen back to the type of business we are examining. A lot can be discovered, but you will have to spend some time looking for it.
ODM 11gR2 Extra Data Exploration Functionality
In ODM 11gR2 we now have an extra feature for our data analysis. We can now produce histograms that are grouped by one of the other attributes. Typically this would be the Target or Class attribute, but you can also use it with the other attributes.
To set this extra feature, double-click on the Explore Data node. The Group By drop-down lets you select the attribute to group the other attributes by.
Using our example data, the target variable is AFFINITY_CARD. Select this in the drop-down and run the Explore Data node again. When you look at the newly generated histograms you will now see that each bin has two colours. If you hover the mouse over each coloured part you will be able to get the number of records in each group. You can use other attributes, such as CUST_GENDER, COUNTRY_NAME, etc. Only use attributes that it makes sense to analyse the data by.
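The grouped histogram can also be approximated in SQL, by adding the grouping attribute to the binned query. This hypothetical query splits each AGE bin by the AFFINITY_CARD target; the bin bounds are illustrative, and the view and columns come from the ODM sample data.

```sql
-- AGE histogram broken down by the AFFINITY_CARD target
SELECT WIDTH_BUCKET(age, 17, 91, 10) AS age_bin,
       affinity_card,
       COUNT(*)                      AS num_customers
FROM   mining_data_build_v
GROUP  BY WIDTH_BUCKET(age, 17, 91, 10), affinity_card
ORDER  BY age_bin, affinity_card;
```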
This is a powerful new feature that allows you to gain a deeper level of insight into the data you are analysing.
As with all development environments, there will be a need to move your code from one schema to another or from one database to another.
With Oracle Data Miner 11gR2, we have the same requirement. In our case it is not just individual procedures or packages, we have a workflow consisting of a number of nodes. With each node we may have a number of steps or functions that are applied to the data.
Exporting an ODM (11gR2) Workflow
In the Data Miner navigator, right-click the name of the workflow that you want to export.
The Save dialog opens. Specify a location on your computer where the workflow will be saved as an XML file.
The default name for the file is workflow_name.xml, where workflow_name is the name of the workflow. You can change the name and location of the file.
Importing an ODM (11gR2) Workflow
Before you import your ODM workflow, you need to make sure that you have access to the same data that is specified in the workflow.
All tables/views are prefixed with the schema where the table/view resides.
You may want to import the data into the new schema or ensure that the new schema has the necessary grants.
Open the connection in ODM.
Select the project under which you want to import the workflow, or create a new project.
Right click the Project and select Import Workflow.
Search for the XML export file of the workflow.
Preserve the objects during the import.
When you have all the data and the ODM workflow imported, you will need to run the entire workflow to ensure that you have everything setup correctly.
It will also create the models in the new schema.
Data encoding in Workflow
All of the tables and views used as data sources in the exported workflow must reside in the new account
The account from which the workflow was exported is encoded in the exported workflow. For example, say the workflow was exported from the account DMUSER and contains a data source node for MINING_DATA_BUILD_V. If you import the workflow into a different account (that is, an account that is not DMUSER) and try to run the workflow, the data source node fails because the workflow is looking for DMUSER.MINING_DATA_BUILD_V.
To solve this problem, right-click the data node (MINING_DATA_BUILD_V in this example) and select Define Data Wizard. A message appears indicating that DMUSER.MINING_DATA_BUILD_V does not exist in the available tables/views. Click OK and then select MINING_DATA_BUILD_V in the current account.
I have created a video for this blog post. It illustrates how you can export a workflow and import it into a new schema.
Make sure to check out my other Oracle Data Miner (11gR2) videos.
The next Irish Oracle BI SIG meeting will be on Thursday 23rd June starting at 6:30pm.
The format of this SIG meeting is a bit different from the previous ones.
This time the SIG meeting will be an informal networking event and there will be no demos or presentations.
The SIG event will be in the River View Bistro Bar, which is on the MV Cillairne boat that is moored beside the new convention centre on the quays. Check out its website
Before you can start using the Oracle Data Miner features that are now available in SQL Developer 3, there are a few steps you need to perform. This post will walk you through these steps and I have put together a video which goes into more detail. The video is available on my YouTube channel.
I will be posting more How To type videos over the coming weeks and months. Each video will focus on one particular feature within the new Oracle Data Mining tool.
So the following steps are necessary before you can start using the ODM tool.
Set up of Oracle Data Miner tabs
To get the ODM tabs to display in SQL Developer, you need to go to the View menu and select the following from the Data Miner submenu
- Data Miner Connections
- Workflow Jobs
- Property Inspector
Create an ODM Schema
There are two main ways to create a Schema. The first and simplest way is to use SQL Developer. To do this you need to create a connection to SYS. Right click on the Other Users option and select Create User.
The second option is to use SQL*Plus to create the user. Using both methods you need to grant Connect & Resource privileges to the user.
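In SQL*Plus, the schema creation might look like the following. The username, password, and tablespace names are only examples; adjust them to suit your database.

```sql
-- Run as a privileged user, e.g. SYS or SYSTEM
CREATE USER dmuser IDENTIFIED BY dmuser_pwd
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;

-- The minimum privileges mentioned above
GRANT CONNECT, RESOURCE TO dmuser;
```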
Create the Repository
Before you can start using Oracle Data Mining, you need to create an Oracle Data Miner Repository in the database. Again there are two ways to do this. The simplest is to use the inbuilt functionality in SQL Developer. In the Oracle Data Miner Connections tab, double click on the ODM schema you have just created. SQL Developer will check the database to see if the ODM Repository exists. If it does not, SQL Developer will create the repository for you, but you will need to provide the SYS password.
The other way to create the repository is to run the installodmr.sql script that is available in the ‘datamining’ directory.
example: @installodmr.sql USER TEMP
Create another ODM Schema
It is typical that you will need more than one schema for your data mining work. After creating the default Oracle schema, the next step is to grant the schema the privileges to use the Data Mining Repository. The script for this is usergrants.sql.
example: @usergrants.sql DMUSER
Hint: The schema name needs to be in upper case.
IMPORTANT: The last grant statement in the script may give an error. If this occurs then it is due to an invalid hidden character on the line. If you do a cut and paste of the grant statement and execute this statement, everything should run fine.
If you want demo data to be created for this new ODM schema then you need to run
example: @instdemodata.sql DMUSER
All of these scripts can be found in the SQL Developer directories.
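Putting the three scripts together, a complete set-up session might look like the following. This is only a sketch: it assumes you are connected as SYS in SQL*Plus from the directory containing the scripts, and the tablespace and schema names are examples.

```sql
-- 1. Create the ODM repository (default and temporary tablespaces)
@installodmr.sql USERS TEMP

-- 2. Grant an existing schema access to the repository
--    (remember: the schema name must be in upper case)
@usergrants.sql DMUSER

-- 3. Optionally load the demo data into that schema
@instdemodata.sql DMUSER
```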
I’ve recently had an article titled Oracle Data Miner Comes of Age accepted for the June edition of the UKOUG Oracle Scene magazine.
I’ve been thinking of ways to try to promote this article and I’ve decided I would create two videos and post them on YouTube.
The first video is a short 1 minute introduction to the article. A taster kind of video. I’ve learned from my initial attempts at producing the video that
- It is more difficult than it looks
- The camera on my laptop is not installed straight. That is why I’m looking to one side
- I need a better quality microphone
But perhaps the most interesting thing was that within a couple of hours of posting it up on YouTube (and not telling anyone about it), it was found and tweeted by Charlie Berger. Charlie is the Senior Director in charge of the Oracle Data Miner tool. He also very kindly tweeted about one of my blog postings on the New Features of Oracle Data Miner 11g R2.
You can find the introduction video to the article at
I will be posting a much longer video, which will be based on the full article, over the next couple of weeks.
Over the past couple of weeks I’ve been a little bit busy with some Oracle Data Miner 11gR2 related activities. These include
- Writing an article called Oracle Data Miner Comes of Age for submission to Oracle Scene, the UKOUG quarterly magazine. I was told on 20th April that my article was accepted and will be in the June edition
- The call for presentations opened for the annual UKOUG conference in Birmingham in December. I submitted a presentation which will be based on the article in Oracle Scene.
- I submitted 2 presentations to Oracle Open World in October. But funding might be a problem here. I’ve asked the ODM development group to see if they could sponsor some of the costs. One presentation is on Oracle Data Miner. The second is on
- I also submitted a presentation to an online (virtual) Oracle conference called VirtaThon, again on Oracle Data Miner.
Some other things that I have planned are
- Create two videos for the Oracle Scene article. The first video is a short intro to the article. The plan is to have this on the UKOUG website to promote the article. The second video will be based on the article, covering the material and the demo in the article
- Create a video on creating an ODM repository and getting started with ODM
- Create a video on removing the ODM repository
- Create a video on saving/exporting a DM model from ODM
- Write an article on what Oracle products can be used throughout the Data Mining LifeCycle (CRISP-DM). Hopefully I will submit this for the autumn edition of Oracle Scene.
- Get all the documentation available on the data manipulation stage in the new ODM tool and write an article based on this, produce a video of it, etc
All of this to be finished by the middle of June.
So I have a busy few weeks ahead of me.