Data Science

ODM: View Transformations generated by Automatic Data Preparation


A very powerful feature of Oracle Data Mining and one that I think does not get enough notice is called Automatic Data Preparation.

Data Preparation is one of the most time consuming, repetitive and boring parts of the work that a Data Miner or Data Scientist performs as part of their daily tasks. Apart from gathering the data, integrating the data and getting the data into the required format, the most interesting part of the work is feature engineering.

Then you have all the other boring data preparation tasks of how to handle missing data, type conversion, binning, normalization, outlier treatment etc.

With Automatic Data Preparation (ADP) in Oracle Data Mining you can let Oracle work all of these things out for you, perform all the necessary coding, and store all of this coding as part of the in-database data mining model.
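If you are building your model in SQL rather than through the Oracle Data Miner GUI, ADP is switched on by adding the PREP_AUTO setting to the model's settings table before calling CREATE_MODEL. The following is a minimal sketch; the settings table, model name and source data names are illustrative.

-- Illustrative settings table; the table, model and source data names are assumptions
CREATE TABLE glm_adp_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(30)
);

BEGIN
  -- Use the GLM algorithm and turn on Automatic Data Preparation
  INSERT INTO glm_adp_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.algo_name, dbms_data_mining.algo_generalized_linear_model);
  INSERT INTO glm_adp_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);

  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'CLAS_GLM_ADP',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'MINING_DATA_BUILD_V',
    case_id_column_name => 'CUST_ID',
    target_column_name  => 'AFFINITY_CARD',
    settings_table_name => 'GLM_ADP_SETTINGS');
END;
/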

This is fantastic. This ADP feature can save you hours, and in some cases days, of effort.

But (there is always a but 🙂 ) what if you are a bit unsure whether the transformations being performed are exactly what you wanted? Maybe you would like to see what Oracle is doing and, depending on that, do it a different way.

The first step is to examine the transformations that are generated and stored as part of the in-database data mining model. The DBMS_DATA_MINING package has a function called GET_MODEL_TRANSFORMATIONS. When you query this function, passing in the name of the data mining model, you get back the list of transformations that have been applied for that model.

In the following example a GLM model was created using the Oracle Data Miner tool (that is part of SQL Developer). When you use Oracle Data Miner, ADP is automatically turned on.

The following query calls the GET_MODEL_TRANSFORMATIONS function for the data mining model called CLAS_GLM_1_59.

SELECT * FROM TABLE(DBMS_DATA_MINING.GET_MODEL_TRANSFORMATIONS('CLAS_GLM_1_59'));

The following image contains the output generated by this query.

Blog dat trans 1

When you look at the data under the EXPRESSION column you get to see what ADP did to the data. In most cases there is just some simple data clean-up and formatting being performed to get the data ready for input into the algorithm.

If we now look at the Naive Bayes model for the same data set we get a very different set of transformations listed under the EXPRESSION column.

SELECT * FROM TABLE(DBMS_DATA_MINING.GET_MODEL_TRANSFORMATIONS('CLAS_NB_1_59'));

Blog dat trans 2

Now we get to see some of the data binning that ADP performs, which is required for input to the Naive Bayes algorithm. You will also notice that we have some transformations in the REVERSE_EXPRESSION column. These are the inverse or reverse of the transformations that were generated in the EXPRESSION column.

I will let you explore the data transformations that are produced by ADP for the SVM and Decision Tree algorithms.

I will show you how you can change the reverse expression in my next blog post, as there are times when you might want the data to be presented slightly differently after the model has been run to score your data.
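For anyone who cannot wait, the procedure involved is DBMS_DATA_MINING.ALTER_REVERSE_EXPRESSION. Here is a minimal sketch, where the model name, attribute and expression are purely illustrative; it changes how the scored target value is presented when the model is applied.

BEGIN
  -- Present the scored target as Yes/No text instead of 1/0
  -- (model and attribute names are assumptions)
  DBMS_DATA_MINING.ALTER_REVERSE_EXPRESSION(
    model_name     => 'CLAS_NB_1_59',
    expression     => 'decode(affinity_card, ''1'', ''Yes'', ''No'')',
    attribute_name => 'AFFINITY_CARD');
END;
/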

To get more details of the Automatic Data Preparation that is performed for each data mining algorithm you can check out this link in the 11g documentation. This section seems to be missing from the online 12c documentation.

Automatic Analytics is so mainstream. Not something new.


Everyone is doing advanced analytics. Right? Hmm

Everyone is talking about advanced analytics? Yes that is true.

Everyone is an expert in advanced analytics? This is so not true. Watch out for these Great Pretenders. You know what I mean! You know who I mean! Maybe you know some of them already? If not, watch out for these Great Pretenders!!!

Some people are going around talking about data mining, predictive analytics, advanced analytics, machine learning etc. as if this is some new topic. Well it isn't. It isn't anything new, and most of the techniques have been around for 10, 20, 30+ years.

Some people are saying you should only use language X or tool Y, and that everything else is basically rubbish.

What we do have is a wider understanding of how to use these techniques on our various data sources.

What we have is a lot more tools that allow us to perform these tasks a lot easier, at greater speed, with more functionality and without the need to fully understand the hard core maths that is going on behind the scenes.

What we have is a lot more languages to perform these tasks and to support the vast amount of work that goes into understanding the data and preparing the data.

Something for all of us to watch out for, when we read about these topics, is what kind of problem area they are addressing. The following table illustrates the three main types or categories of Analytics: Descriptive Analytics, Predictive Analytics and Prescriptive Analytics. I think most people would agree that the Descriptive and Predictive Analytics categories are very mature at this stage. With Prescriptive Analytics we are perhaps still evolving, and a lot more work needs to be done before this becomes widespread.

Blog 1

Some people talk as if Predictive Analytics is some new and exciting topic. But it isn't all that new. It has been around for the past 30+ years. If you go back over the Gartner Hype Cycle that comes out every September, Predictive Analytics is no longer shown on this graph. The last time it appeared on the Gartner Hype Cycle was back in 2013, and it was positioned on the far right of the graph in the section called the Plateau of Productivity.

So Predictive Analytics is very mature and mainstream. Part of the reason that it is mainstream is that Predictive Analytics has allowed a new category of Analytics to evolve: Automatic Analytics.

Automatic Analytics is where Advanced and Predictive Analytics have been built into the day-to-day applications that are used to run our business. We do not need the hard core type of data scientist to perform the various analytics on our data. Instead these tasks, once they have been defined, can be added to our applications to process, evaluate and make decisions all automatically. This is where we need data scientists who can communicate with the business and work with them to solve real world business projects. This is a different type of data scientist to the "hard core" data scientist who delves into the various statistical methods, machine learning methods, data management methods, etc.

The following table extends the table given above to include Automatic Analytics, and is my own take on how and where Automatic Analytics fits.

Blog 2

Every time we get an insurance quote or health insurance quote, get a “random” call from our Telco offering a free upgrade, get our loyalty card statements, get a loan from the bank, or look at or buy a book on Amazon (the list could go on and on), these are all examples of how predictive analytics has been automated into our everyday business applications.

But this is nothing new. When I first got into data mining/predictive analytics over 16 years ago, it was considered a common thing that certain types of companies did. What has happened in the time since and particularly in the past few years is that a lot more people are seeing the value in using it.

Before I finish off this post we can have a quick look at what Oracle has been doing in this area. They have their Advanced Analytics Option and Real-Time Decisions tools to allow data scientists to do their magic. But over the past X years (nobody can give me an exact number) they have been very, very active in building lots and lots of predictive analytics into their various business applications, particularly with Fusion Apps and BI Apps.

Blog 3

A recent quote from Oracle highlights their aim with this,

… products designed to close the gap between data scientists and businesses.

Now with Oracle making a big push to the cloud, they are busy adding more and more Automatic (Predictive) Analytics into their Cloud Applications. What we need from Oracle is a clearer identification of where they have done this. Plus, with the migration of their Apps to the cloud, their Advanced Analytics Option is a core part of their Cloud platform. As they upgrade or add new features into their Cloud Apps, you will be able to get the benefit of these Automatic (Predictive) Analytics as they become available.

Blog 5

US President talks about Data Science


Check out the video of the US President talking about Data Science, and of the first Chief Data Scientist of the USA talking about his mission.

Evaluating Classification Results


When you are working on building classification models you will need some way of measuring the effectiveness of each model that you build. This measurement/evaluation is performed during the model build process.

Typically the model build process consists of 2 steps (I'm assuming all data preparation etc. has been completed):

  • Build the model: During this step you will feed a portion of your data set into the data mining algorithm. Typically this will be a subset of your data set, consisting of 60% to 70% of the data. This data is used by the data mining algorithm to build the model.
  • Test the model: After the model has been built you will need to test it to see how effective it is at making predictions. This is where we use the data that was not used to build the model. For this data we already know the outcome, so after we have applied the model to this data subset we can measure the predicted values against the actual values.

Most data mining tools will automate these two steps, specifically the splitting of the data into the build and test data sets. But if you are using a language like R you will need to perform these steps manually.
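If you are doing the split yourself in the database, one simple approach is to hash the case id into buckets and divide the rows that way. A rough sketch, where the table and column names are assumptions:

-- Illustrative 60/40 split into build and test sets using ORA_HASH on the case id
-- (table and column names are assumptions)
CREATE TABLE customer_build_data AS
  SELECT * FROM customer_data WHERE ORA_HASH(cust_id, 99) < 60;

CREATE TABLE customer_test_data AS
  SELECT * FROM customer_data WHERE ORA_HASH(cust_id, 99) >= 60;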

The most common way of collating the test results is to use the Confusion Matrix. This allows us to lay out the correct predictions and the incorrect predictions, and to perform a number of other statistical measurements.

True Positives

True Negatives

False Positives

False Negatives

The last two of the above values are also commonly referred to in statistics as Type 1 (false positive) and Type 2 (false negative) errors.

Depending on your project you will concentrate on a combination of the true and false values of either the Positives or the negatives.

For example, in medical diagnostics for cancer, you will be looking to keep the False Negatives to a minimum. This is where you have predicted that someone does not have cancer, but they actually do. The consequence of this is that the person is not brought back for additional testing, and we all know what will happen. On the other hand, it is OK to have a high False Positive rate in this case. In that scenario you bring the person back for additional tests and discover that they are all clear 🙂

Precision = How many of the selected items are relevant? (as a percentage)

Recall = How many of the relevant items are selected? (as a percentage)

Accuracy = How many did we correctly predict? (as a percentage)

The following table illustrates these measurements and tests.

Confusion Matrix

There are lots of other statistical tests that can be performed on your results. Everyone will have their own preferences. What I have highlighted here are the main statistical tests for you to look at.
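As a rough illustration of how these counts and measures can be pulled together in SQL, the following sketch assumes a scored test set holding the actual and predicted target values side by side; the table and column names are assumptions, with 1 representing the positive class.

-- Confusion matrix counts plus precision, recall and accuracy from a scored test set
-- (TEST_RESULTS, ACTUAL_TARGET and PREDICTED_TARGET are assumed names)
WITH counts AS (
  SELECT SUM(CASE WHEN actual_target = 1 AND predicted_target = 1 THEN 1 ELSE 0 END) tp,
         SUM(CASE WHEN actual_target = 0 AND predicted_target = 0 THEN 1 ELSE 0 END) tn,
         SUM(CASE WHEN actual_target = 0 AND predicted_target = 1 THEN 1 ELSE 0 END) fp,
         SUM(CASE WHEN actual_target = 1 AND predicted_target = 0 THEN 1 ELSE 0 END) fn
  FROM   test_results)
SELECT tp, tn, fp, fn,
       ROUND(100 * tp / NULLIF(tp + fp, 0), 2)         AS precision_pct,
       ROUND(100 * tp / NULLIF(tp + fn, 0), 2)         AS recall_pct,
       ROUND(100 * (tp + tn) / (tp + tn + fp + fn), 2) AS accuracy_pct
FROM   counts;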

You cannot use just one or a few of these statistical tests on their own to decide which data mining model works best for your data. It is the combination of these statistical tests, your understanding of the data and your understanding of the business project that needs to be considered.

In my next 2 blog posts I will show you how you can perform these tests on the results generated by the Oracle Data Miner tool and then on the Oracle Data Miner models produced using PL/SQL.

My Oracle Data Miner Book


Some of you may be aware that I have been writing a book on Oracle Data Miner. Actually the book covers the Oracle Data Miner GUI that is part of SQL Developer, the SQL and PL/SQL functions, procedures and packages that form the Oracle Data Mining option in the database, and lots of other topics for the DBA, Developer and BI/DW people.
Today is a big day for this book as it is officially released and available for purchase. See below for some links to where you can buy the book in print and e-book formats. It has been published by McGraw-Hill/Oracle Press.
The book is aimed at a variety of people and its aim is to introduce them to using the Oracle Data Miner tool and to performing various data mining and predictive analytics tasks using SQL and PL/SQL.
The book will not teach you how each of the data mining algorithms works. There is a bit of an assumption that you know a bit about these already, and there are lots of books and resources that cover that material. You can look on my book as a getting-started / how-to type of book.
Below are the images of the front cover and the back cover.
Book Cover            Book Back Cover
For more details of the book and for some updates keep an eye on my ODM Book page. On this page I'm adding a FAQ section, based on questions that I receive about the book.
If you buy the book then I hope you will find it helpful. If you are going to attend one of my presentations at an Oracle User Group meeting then bring the book along and I can sign it for you. Alternatively, if you are at Oracle Open World 2014, come along to the Oracle Press Book Store, as I will be there to sign books on Wednesday 1st October between 13:00 and 13:30.
Where can you buy my Oracle Data Miner book (print and e-book)?
You can buy the book from the McGraw-Hill/Oracle Press website and from Amazon. Each site will offer discounts so check out which one is the best for you.
McGraw-Hill/Oracle Press
For USA locations (enter promo code Tierney to save 20% and free delivery) www.mhprofessional.com
For UK & Ireland locations (enter promo code Tierney to save 20% and free delivery) www.mcgraw-hill.co.uk/tpr
Amazon
Click here to buy it on www.amazon.com
Click here to buy it on www.amazon.co.uk

Oracle Advanced Analytics and Oracle Fusion Apps


At a recent Oracle User Group conference, I was part of a round table discussion on Apps and BI. Unfortunately most of the questions were focused on Apps and the new Fusion Applications from Oracle. When I mentioned that there is data mining functionality (using the Oracle Advanced Analytics Option) built into the Fusion Apps, it seemed to come as a surprise to the Apps people. They were not aware of this built-in functionality and these capabilities. Oracle Data Mining and Oracle Advanced Analytics have been built into the following Oracle Fusion Applications.

  • Oracle Fusion HCM Workforce Predictions
  • Oracle Fusion CRM Sales Prediction Engine
  • Oracle Spend Classification
  • Oracle Sales Prospector
  • Oracle Adaptive Access Manager

Oracle Data Mining and Oracle Advanced Analytics are also being used in the following applications:

  • Oracle Airline Data Model
  • Oracle Communications Data Model
  • Oracle Retail Data Model
  • Oracle Security Governor for Healthcare

I intend to submit a presentation on this topic to future Oracle User Group conferences as a way of spreading the Advanced Analytics message within the Oracle user community. If you would like me to present on this topic at your conference or SIG, drop me an email and we can make the necessary arrangements 🙂

The ORE Packages


If you are interested in using ORE, or just want to get an idea of what ORE gives you that does not already exist in one of the other R packages, then the table below lists the packages that come as part of ORE.

Before you can use them you will need to load these packages into your workspace. To do this you can issue the following command from the R prompt or from the prompt in RStudio.

> library(ORE)

RStudio is my preferred R interface and is widely used around the world.

ORE Installed Packages | Description
ORE | Oracle R Enterprise
OREbase | ORE – base
OREdm | The ORE functions that use the in-database Oracle Data Mining algorithms
OREeda | The ORE functions used for exploratory data analysis
OREgraphics | The ORE functions used for graphics
OREpredict | The ORE functions used for model predictions
OREstats | The ORE stats functions
ORExml | The ORE functions that convert R objects to XML
DBI | R Database Interface
ROracle | OCI-based Oracle database interface for R
XML | Tools for parsing and generating XML within R and S-Plus
bitops | Functions for bitwise operations
png | Read and write PNG images

In addition to these core ORE packages, ORE also uses a number of standard R packages. The following table lists the R packages that are used by the ORE packages, so make sure you have them installed. They should have come with your installation of R, but if something has happened to them you can download them again.


R Packages used by ORE | Description
base | The R Base Package
boot | Bootstrap Functions (originally by Angelo Canty for S)
class | Functions for Classification
cluster | Cluster Analysis Extended Rousseeuw et al.
codetools | Code Analysis Tools for R
compiler | The R Compiler Package
datasets | The R Datasets Package
foreign | Read Data Stored by Minitab, S, SAS, SPSS, Stata, Systat, dBase, ...
graphics | The R Graphics Package
grDevices | The R Graphics Devices and Support for Colours and Fonts
grid | The Grid Graphics Package
KernSmooth | Functions for kernel smoothing for Wand & Jones (1995)
lattice | Lattice Graphics
MASS | Support Functions and Datasets for Venables and Ripley's MASS
Matrix | Sparse and Dense Matrix Classes and Methods
methods | Formal Methods and Classes
mgcv | GAMs with GCV/AIC/REML smoothness estimation and GAMMs by PQL
nlme | Linear and Nonlinear Mixed Effects Models

I've been using R a lot over the past few years and I've had a number of projects involving R, particularly over the past 12 months. I've just found out that I will now have another short duration R project in May and June.

So watch out for lots more blog posts on R and ORE. Plus the usual blog posts on using Oracle Data Mining. ORE and Oracle Data Mining are very closely linked.

Gartner 2014 Advanced Analytics Quadrant


The Gartner 2014 Advanced Analytics Quadrant is out now. Well it is if you can find it.

Some of the companies have put it up on their websites to promote their position.

For some reason Oracle hasn’t and I wonder why?

Gartner Advanced Analytics MQ Feb2014

You can see that some typical technologies are missing from this, but this is to be expected. How much are companies really deploying these alternatives on real problems and in production? Perhaps the positioning of Revolution Analytics might be an indicator. At some point there might be a shift from investigative analysis into more mainstream projects and then into production.

What is still evident from this year's quadrant is that SAS and IBM (SPSS) still have very dominant positions, and perhaps will have for some time to come.

It will be interesting to see how this all plays out over the next few years.

Using the in-database ODM algorithms in ORE


Oracle R Enterprise is Oracle's version of R that runs in the database instead of on your laptop or desktop.

Oracle already has a significant number of data mining algorithms in the database. With ORE they have exposed these so that they can be easily called from your R (ORE) scripts.

To access these in-database data mining algorithms you will need to use the ore.odm functions, which come in the OREdm package.

ORE is continually being developed, with new functionality being added all the time. Over the past 2 years Oracle have released an updated version of ORE about every 6 months. ORE is generally not certified against the very latest version of R, but it is only slightly behind, by a point release or two. For example the current version of ORE, 1.4 (released only last week), is certified for R version 3.0.1, while the current release of R is 3.0.3.

Will ORE work with the latest version of R? The simple answer is maybe: in theory it should, but it is not certified.

Let's get back to ore.odm. The following table maps the ore.odm functions to the in-database Oracle Data Mining algorithms.

ORE Function | Oracle Data Mining Algorithm | What the Algorithm can be used for
ore.odmAI | Minimum Description Length | Attribute Importance
ore.odmAssocRules | Apriori | Association Rules
ore.odmDT | Decision Tree | Classification
ore.odmGLM | Generalized Linear Model | Classification and Regression
ore.odmKMeans | k-Means | Clustering
ore.odmNB | Naïve Bayes | Classification
ore.odmNMF | Non-Negative Matrix Factorization | Feature Extraction
ore.odmOC | O-Cluster | Clustering
ore.odmSVM | Support Vector Machines | Classification and Regression


As you can see, we only have a subset of the in-database Oracle Data Mining algorithms. This is a pity really, but I'm sure these will be added as we get newer releases of ORE.

ODM: Changing the bar chart format in Explore Node


In Oracle Data Miner you can use the Explore Node to gather an initial set of statistics for your dataset. As part of this you will also get a bar chart that shows the distributions of the values contained within each attribute. The following example shows the default layout of the bar charts.

Explore1

These graphs are very useful for presenting the initial data exploration results to your business users. In addition to these graphs you can also use the Graph node to give some additional graphical representations.

But the default bar chart that is produced by the Explore Node can appear to be a bit basic.

So what if we could change the layout to have a 3-D effect? People like 3-D bar charts.

Is this possible in Oracle Data Miner? If so then how can we do it?

Well it is possible and you can use the following steps to change your bar charts to 3-D.

To access the Explore Node settings, go to the Tools menu and then select Preferences from the drop down menu.

Explore2

When the Preferences window opens, scroll down to the Data Miner option and expand the available options.

Explore3

The Explorer Data Viewer allows you to change the Precision settings. The second option is the Graphical Settings. Here you can change the Depth Radius setting. By default this is set to zero; by increasing this value you can change the degree of the 3-D effect of the bar charts. You can also change the colour scheme.

Explore4

I'm not a fan of the other colour schemes that are available and my favourite is still the default Nautical. The following bar chart is the same as the one at the top of this post but has the 3-D effect.

Explore5

Adding Oracle Data Miner to OBIEE


Oracle Data Miner is a very powerful tool that provides advanced machine learning algorithms that are embedded in the Oracle database. By using Oracle Data Miner you do not have to use another tool, from another vendor, to do your data mining. You can do everything in the database, ensuring that the security of your data is maintained and use all the performance functionality that comes with the database.
To add to the advanced insights that you can get from using ODM, you can combine ODM with your OBIEE dashboards to gain a deeper level of insight into your data. This is the combining of data mining techniques and visualization techniques.
The purpose of this blog post is to show you the steps involved in adding an ODM model to your OBIEE dashboards. Lots of people have been asking for the details of how to do it, so here it is.
The following example is based on a presentation that I have given a few times (OUG Ireland, UKOUG, OOW) with Antony Heljula.
1. Export & Import the ODM model
If your data mining analysis and development was completed in a different database to where your OBIEE data resides then you will need to move the ODM model from ODM/development database to the OBIEE database.
ODM provides two PL/SQL procedures to allow you to easily move your ODM model. These procedures are part of the DBMS_DATA_MINING package. To export a model you will need to use the DBMS_DATA_MINING.EXPORT_MODEL procedure. Similarly to import your (exported) ODM model you will use the DBMS_DATA_MINING.IMPORT_MODEL procedure.
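A minimal sketch of the two calls is given below. The model name and the DATA_PUMP_DIR directory object are assumptions, and the dump file created by the export (typically with a numeric suffix added to the name you supply) is what you copy to the OBIEE database and reference in the import.

-- On the ODM/development database: export the model to a dump file
BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL(
    filename     => 'clas_dt_exp',
    directory    => 'DATA_PUMP_DIR',
    model_filter => 'name = ''CLAS_DECISION_TREE''');
END;
/

-- On the OBIEE database: import the model from the copied dump file
-- (check the directory for the exact file name produced by the export)
BEGIN
  DBMS_DATA_MINING.IMPORT_MODEL(
    filename     => 'clas_dt_exp01.dmp',
    directory    => 'DATA_PUMP_DIR',
    model_filter => 'name = ''CLAS_DECISION_TREE''');
END;
/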
2. Create a view that uses the ODM model
You can create a view that uses the PREDICTION and PREDICTION_PROBABILITY functions to apply the imported ODM model to your data. For example, the following view is used to score our customer data to make a prediction of whether they are going to churn, along with the probability that this prediction is correct.
-- The view name below is illustrative
CREATE OR REPLACE VIEW customer_withdraw_predictions_v AS
SELECT st_pk,
       prediction(clas_decision_tree using *) WITHDRAW_PREDICTION,
       prediction_probability(clas_decision_tree using *) WITHDRAW_PROBABILITY
FROM   CUSTOMER_DATA;

clip_image002
3. Import the view into the Physical layer of the BI Repository (RPD)
The view was then imported into the Physical layer of the BI Repository (RPD) where it was joined on the primary key to the other customer tables (we had one record per customer in the view). With the tables joined, we can use the prediction columns to filter the customer data, for example to filter all the customers who are likely to churn: WITHDRAW_PREDICTION = 'N'.
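In SQL terms, that kind of filter against the illustrative view above would look something like the following; the predicate values follow the example just given and the probability threshold used in the dashboard example further down.

-- Filter on the prediction columns, e.g. the churn prediction above
-- with a probability rating over 70% (view name is illustrative)
SELECT st_pk, withdraw_prediction, withdraw_probability
FROM   customer_withdraw_predictions_v
WHERE  withdraw_prediction = 'N'
AND    withdraw_probability > 0.7;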
clip_image002[11]
clip_image002[13]
4. Add the new columns to the Business Model layer
The new prediction columns were then mapped into the Business Model layer, where they could be incorporated into various relevant calculations, e.g. % Withdrawals Predicted, and then subsequently presented to the end users for reporting.
clip_image002[9]
5. Add to your Dashboards
The Withdraw prediction columns could then be published on the BI Dashboards, where they could be used to filter the data content. In the example below, the user has chosen to show data for only those customers who are predicted to Withdraw with a probability rating of >70%.
clip_image002[5]