Oracle Data Mining
Big Data videos by Oracle
Here are the links to the 2 different sets of Big Data videos that Oracle has produced over the past 12 months.
Oracle Big Data Videos – Version 1
Episode 2 – Gold Mine or Just Stuff
Episode 4 – Everything You Always Wanted to Know
Oracle Big Data Videos – Version 2
Episode 1 – Overview for the Boss
Episode 3 – Acquiring Big Data
Episode 4 – Organising Big Data
Episode 5 – Analysing Big Data
Other videos include
Analytics Sessions at Oracle Open World 2012
The content catalog for Oracle Open World 2012 was made public during the week. OOW runs from 30th September to 4th October.
The following table gives a list of most of the Data Analytics type sessions that are currently scheduled.
Why did I pick these sessions? If I were able to go to OOW, these are the sessions I would like to attend. Yes, there are many more sessions I would like to attend on the core DB technology and Development streams.
| Session Title | Presenters |
| --- | --- |
| CON6640 – Database Data Mining: Practical Enterprise R and Oracle Advanced Analytics | Husnu Sensoy |
| CON8688 – Customer Perspectives: Oracle Data Integrator | Gurcan Orhan – Software Architect & Senior Developer, Turkcell Technology R&D Julien Testut – Product Manager, Oracle |
| HOL10089 – Oracle Big Data Analytics and R | George Lumpkin – Vice President, Product Management, Oracle |
| CON8655 – Tackling Big Data Analytics with Oracle Data Integrator | Mala Narasimharajan – Senior Product Marketing Manager, Oracle Michael Eisterer – Principal Product Manager, Oracle |
| CON8436 – Data Warehousing and Big Data with the Latest Generation of Database Technology | George Lumpkin – Vice President, Product Management, Oracle |
| CON8424 – Oracle’s Big Data Platform: Settling the Debate | Martin Gubar – Director, Oracle Kuassi Mensah – Director Product Management, Oracle |
| CON8423 – Finding Gold in Your Data Warehouse: Oracle Advanced Analytics | Charles Berger – Senior Director, Product Management, Data Mining and Advanced Analytics, Oracle |
| CON8764 – Analytics for Oracle Fusion Applications: Overview and Strategy | Florian Schouten – Senior Director, Product Management/Strategy, Oracle |
| CON8330 – Implementing Big Data Solutions: From Theory to Practice | Josef Pugh, Oracle |
| CON8524 – Oracle TimesTen In-Memory Database for Oracle Exalytics: Overview | Tirthankar Lahiri – Senior Director, Oracle |
| CON9510 – Oracle BI Analytics and Reporting: Where to Start? | Mauricio Alvarado – Principal Product Manager, Oracle |
| CON8438 – Scalable Statistics and Advanced Analytics: Using R in the Enterprise | Marcos Arancibia Coddou – Product Manager, Oracle Advanced Analytics, Oracle |
| CON4951 – Southwestern Energy’s Creation of the Analytical Enterprise | Jim Vick, Southwestern Energy Richard Solari – Specialist Leader, Deloitte Consulting LLP |
| CON8311 – Mining Big Data with Semantic Web Technology: Discovering What You Didn’t Know | Zhe Wu – Consultant Member of Tech Staff, Oracle Xavier Lopez – Director, Product Management, Oracle |
| CON8428 – Analyze This! Analytical Power in SQL, More Than You Ever Dreamt Of | Hermann Baer – Director Product Management, Oracle Andrew Witkowski – Architect, Oracle |
| CON6143 – Big Data in Financial Services: Technologies, Use Cases, and Implications | Omer Trajman, Cloudera Ambreesh Khanna – Industry Vice President, Oracle Sunil Mathew – Senior Director, Financial Services Industry Technology, Oracle |
| CON8425 – Big Data: The Big Story | Jean-Pierre Dijcks – Sr. Principal Product Manager, Oracle |
| CON10327 – Recommendations in R: Scaling from Small to Big Data | Mark Hornick – Senior Manager, Oracle |
Part 2 of the Leaning Tower of Pisa problem in ODM
In the previous post I gave the details of how you can use Regression in Oracle Data Miner to predict/forecast the lean of the tower in future years. This was based on building a regression model in ODM using the known lean/tilt of the tower for a range of years.
In this post I will show you how you can do the same tasks using the Oracle Data Miner functions in SQL and PL/SQL.
Step 1 – Create the table and data
The easiest way to do this is to make a copy of the PISA table we created in the previous blog post. If you haven’t completed this, then go to the blog post and complete step 1 and step 2.
create table PISA_2
as select * from PISA;
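Steps 3 and 5 below query two views, pisa_2_build_v and pisa_2_apply_v, which are not defined in this post. Assuming they simply split PISA_2 on whether the TILT value is known (a sketch; the previous post may define them differently), they could be created as:

```sql
-- Training data: the rows where we know the lean/tilt
create view pisa_2_build_v as
select * from PISA_2 where tilt is not null;

-- Apply data: the future years where the lean/tilt is unknown
create view pisa_2_apply_v as
select * from PISA_2 where tilt is null;
```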
Step 2 – Create the ODM Settings table
We need to create a 'settings' table before we can use the ODM APIs in PL/SQL. The purpose of this table is to store all the configuration parameters needed for the algorithm to work. In our case we only need to set two parameters.
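The DDL for the settings table is not shown in this post. Following the two-column layout (a setting name and a setting value) used in the Attribute Importance post below, it could be created as follows; the VARCHAR2 sizes are my assumption:

```sql
-- Settings table for the ODM PL/SQL API: one row per configuration parameter
create table PISA_2_SETTINGS (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);
```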
BEGIN
delete from pisa_2_settings;
INSERT INTO PISA_2_settings (setting_name, setting_value) VALUES
(dbms_data_mining.algo_name, dbms_data_mining.ALGO_GENERALIZED_LINEAR_MODEL);
INSERT INTO PISA_2_settings (setting_name, setting_value) VALUES
(dbms_data_mining.prep_auto,dbms_data_mining.prep_auto_off );
COMMIT;
END;
Step 3 – Build the Regression Model
To build the regression model we need to use the CREATE_MODEL procedure that is part of the DBMS_DATA_MINING package. When calling this procedure we need to pass in the name of the model, the mining function, the source data, the case id, the settings table and the target column we are interested in. The algorithm itself is picked up from the settings table.
BEGIN
DBMS_DATA_MINING.CREATE_MODEL(
model_name => 'PISA_REG_2',
mining_function => dbms_data_mining.regression,
data_table_name => 'pisa_2_build_v',
case_id_column_name => null,
target_column_name => 'tilt',
settings_table_name => 'pisa_2_settings');
END;
After this we should have our regression model.
Step 4 – Query the Regression Model details
To find out what was produced in the previous step we can query the data dictionary.
SELECT model_name,
mining_function,
algorithm,
build_duration,
model_size
from USER_MINING_MODELS
where model_name like 'P%';
select setting_name,
setting_value,
setting_type
from all_mining_model_settings
where model_name like 'P%';
Step 5 – Apply the Regression Model to new data
Our final step is to apply the model to our new data, i.e. the years for which we want to know the lean/tilt.
SELECT year_measured, prediction(pisa_reg_2 using *)
FROM pisa_2_apply_v;
Using ODM Regression for the Leaning Tower of Pisa tilt problem
This blog post will look at how you can use the Regression feature in Oracle Data Miner (ODM) to predict the lean/tilt of the Leaning Tower of Pisa in the future.
This is a well-known regression exercise, and it typically comes with a set of known values and the year for each of these values. There are lots of websites that contain the details of the problem. A summary of it is:
The following table gives measurements for the years 1975-1985 of the “lean” of the Leaning Tower of Pisa. The variable “lean” represents the difference between where a point on the tower would be if the tower were straight and where it actually is. The data is coded as tenths of a millimetre in excess of 2.9 metres, so that the 1975 lean of 642 corresponds to 2.9642 metres.
Given the lean for the years 1975 to 1985, can you calculate the lean for a future date like 2000, 2009 or 2012?
Step 1 – Create the table
Connect to a schema that you have set up for use with Oracle Data Miner. Create a table (PISA) with 2 attributes, YEAR_MEASURED and TILT. Both of these attributes need to have the datatype NUMBER, as ODM will ignore any attribute that is a VARCHAR2, or you might get an error.
CREATE TABLE PISA
(
YEAR_MEASURED NUMBER(4,0),
TILT NUMBER(9,4)
);
Step 2 – Insert the data
There are 2 sets of data that need to be inserted into this table. The first is the data from 1975 to 1987 with the known values of the lean/tilt of the tower. The second set is the future years, where we do not know the lean/tilt and we want ODM to calculate the value based on the regression model we will create.
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1975,2.9642);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1976,2.9644);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1977,2.9656);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1978,2.9667);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1979,2.9673);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1980,2.9688);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1981,2.9696);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1982,2.9698);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1983,2.9713);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1984,2.9717);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1985,2.9725);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1986,2.9742);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1987,2.9757);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1988,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1989,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1990,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1995,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2000,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2005,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2010,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2009,null);
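As an aside (not part of the ODM workflow), Oracle's built-in REGR_* aggregate functions can fit the same simple straight-line model directly in SQL, which gives a quick sanity check on the data we have just loaded:

```sql
-- Fit tilt = intercept + slope * year on the known rows only
SELECT REGR_SLOPE(TILT, YEAR_MEASURED)     AS slope,
       REGR_INTERCEPT(TILT, YEAR_MEASURED) AS intercept
FROM   DMUSER.PISA
WHERE  TILT IS NOT NULL;
```

The slope should come out as a small positive number, reflecting the steady year-on-year increase in the lean.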
Step 3 – Start ODM and Prepare the data
Open SQL Developer and open the ODM Connections tab. Connect to the schema that you have created the PISA table in. Create a new Project or use an existing one and create a new Workflow for your PISA ODM work.
Create a Data Source node in the workspace and assign the PISA table to it. You can select all the attributes.
The table contains the data that we need to build our regression model (our training data set) and the data that we will use for predicting the future lean/tilt (our apply data set).
We need to apply a filter to the PISA data source to select only the training data set. Select the Filter Rows node and drag it to the workspace. Connect the PISA data source to the Filter Rows node. Double-click on the Filter Rows node and select the Expression Builder icon. Create the where clause to select only the rows where we know the lean/tilt (i.e. TILT IS NOT NULL).
Step 4 – Create the Regression model
Select the Regression Node from the Models component palette and drop it onto your workspace. Connect the Filter Rows node to the Regression Build Node.
Double-click on the Regression Build node and set the Target to the TILT variable. You can leave the Case ID blank. You can also select whether you want to build a GLM or SVM regression model, or both of them. Uncheck the AUTO check box. By doing this Oracle will not try to do any data processing or attribute elimination.
You are now ready to create your regression models.
To do this right-click the Regression Build node and select Run. When everything is finished you will get a little green tick on the top right-hand corner of each node.
Step 5 – Predict the Lean/Tilt for future years
The PISA table that we used above also contains our apply data set.
We need to create a new Filter Rows node on our workspace. This will be used to select only the rows in PISA where TILT is null. Connect the PISA data source node to the new filter node and edit the expression builder (TILT IS NULL).
Next we need to create the Apply Node. This allows us to run the Regression model(s) against our Apply data set. Connect the second Filter Rows node to the Apply Node and the Regression Build node to the Apply Node.
Double-click on the Apply Node. Under Apply Columns we can see that 4 attributes will be created in the output: 3 for the GLM model and 1 for the SVM model.
Click on the Data Columns tab and edit the data columns so that we get the YEAR_MEASURED attribute to appear in the final output.
Now run the Apply node by right clicking on it and selecting Run.
Step 6 – Viewing the results
When we get the little green tick on the Apply node we know that everything has run and completed successfully.
To view the predictions right click on the Apply Node and select View Data from the menu.
We can see that the GLM model gives the results we would expect but the SVM model does not.
Data Science Is Multidisciplinary
A few weeks ago I had a blog post called Domain Knowledge + Data Skills = Data Miner.
In that blog post I was saying that to be a Data Scientist all you needed was Domain Knowledge and some Data Skills, which included Data Mining.
The reality is that the skill set of a Data Scientist will be much larger. There is a saying ‘A jack of all trades and a master of none’. When it comes to being a data scientist you need to be a bit like this but perhaps a better saying would be ‘A jack of all trades and a master of some’.
I’ve put together the following diagram, which includes most of the skills, with an outer circle of more fundamental skills. It is this outer ring of skills that are fundamental to becoming a data scientist. The skills in the inner part of the diagram are skills that most people will have some experience of in one or more areas. The other skills can be developed and learned over time, all depending on the type of person you are.
Can we train someone to become a data scientist, or are they born to be one? It is a little bit of both really, but you need to have some of the fundamental skills and the right type of personality. The learning of the other skills should be easy(ish).
What do you think? Are there skills that I’m missing?
VM for Oracle Data Miner
Recently the OTN team have updated the ‘Database App Development’ Developer Day virtual machine to include Oracle 11.2.0.2 DB and SQL Developer 3.1. This is all you need to try out Oracle Data Miner.
So how do you get started with using Oracle Data Miner on your PC? The first step is to download and install the latest version of Oracle VirtualBox.
The next step is to download and install the OTN Developer Day appliance. Click on the above link to go to the webpage and follow the instructions to download and install the appliance. Download the first appliance on the page, the ‘Database App Development’ VM. This is a large download and, depending on your internet connection, it can take anything from 30 minutes to a few hours, so I wouldn’t recommend doing it over Wi-Fi.
When you start up the VM your OS username and password is oracle. Yes it is case sensitive.
Once you are logged into the VM you can close or minimise the host window.
There are two important icons, the SQL Developer and the ODDHandsOnLab.html icons.
The ODDHandsOnLab.html icon loads a webpage that contains a number of tutorials for you to follow.
The tutorial we are interested in is the Oracle Data Miner Tutorial. There are 4 tutorials given for ODM. The first two tutorials need to be followed in the order they are given. The second two can be done in any order.
If you have not used SQL Developer before then you should work through this tutorial before starting the Oracle Data Miner tutorials.
The first tutorial takes you through the steps needed to create your ODM schema and to create the ODM repository within the database. This tutorial will only take you 10 to 15 minutes to complete.
In the second tutorial you get to use ODM to build your first ODM model. This tutorial steps you through how to get started with an ODM project and workflow, the different ODM features, how to explore the data, how to create classification models, how to explore a model and then how to apply one of these models to new data. This second tutorial will take approx. 30 to 40 minutes to complete.
It is all very simple and easy to use.
Domain Knowledge + Data Skills = Data Miner
Over the past few weeks I have been talking to a lot of people who are looking at how data mining can be used in their organisation, for their projects, and to people who have been doing data mining for a long time.
What comes across from talking to the experienced people, and these people are not tied to a particular product, is that you need to concentrate on the business problem. Once you have this well defined then you can drill down to the deeper levels of the project. Some of these levels will include what data is needed (not what data you have), tools, algorithms, etc.
Statistics is only a very small part of a data mining project. Some people with PhDs in statistics who work in data mining say they rarely, if ever, use their statistics skills.
Some quotes that I like are:
“Focus hard on Business Question and the relevant target variable that captures the essence of the question.” Dean Abbott PAW Conf April 2012
“Find me something interesting in my data is a question from hell. Analysis should be guided by business goals.” Colin Shearer PAW Conf Oct 2011
There have been a lot of blog posts and articles on what the key skills are for a Data Miner and the more popular Data Scientist. What is very clear from all of these is that you will spend most of your time looking at, examining, integrating, manipulating, preparing, standardising and formatting the data. It has been quoted that these tasks can take up 70% to 85% of a Data Miner/Data Scientist’s time. All of these tasks are commonly performed by database developers, and in particular the developers and architects involved in Data Warehousing projects. The rest of the time goes on running the data mining algorithms, examining the results and, yes, some stats too.
Very little time is spent developing algorithms! Why is this? Could it be that the algorithms have already been developed (a long time ago now, and are well tuned) and are available in all the data mining tools? We can almost treat these algorithms as a black box. So one of the key abilities of a data miner/data scientist is to know what the algorithms can do, what kinds of problems they can be used for, what kinds of outputs they produce, etc.
Domain knowledge is important, no matter how little of it there is, in preparing for and being involved in a data mining project. As we define our business problem, the domain expert can bring their knowledge to the problem and allow us to separate the domain-related problems from the data-related problems. So domain expertise is critical at the start of a project, but it is also critical when we have the outputs from the data mining algorithms. We can use the domain knowledge to tie these outputs back to the original business problem and bring real meaning to it.
So what is the formula of skill sets for a data miner or data scientist? Well, it is a little like the title of this blog:
Domain Knowledge + Data Skills + Data Mining Skills + a little bit of Machine Learning + a little bit of Stats = a Data Miner / Data Scientist
2 Day Oracle Data Miner course material
Last week I managed to get my hands on the training material for the 2 Day Oracle Data Miner course. This course is run by Oracle University.
Many thanks to Michael O’Callaghan who is a BI Sales person here in Ireland and Oracle University, for arranging this.
The 2 days are pretty packed with a mixture of lecture-type material, lots of hands-on exercises and some time for open discussions. In particular, day 2 will be a very busy day.
Check out the course outline and published schedule – click here
You can have this course on site at your organisation. If this is something that interests you then contact your Oracle University account manager. There is also the traditional face-to-face delivery and the newer online delivery, where people from around the world come together for the online class.
Oracle Analytics Sessions at COLLABORATE12
There are a number of Oracle Advanced Analytics and related topics taking place this week at COLLABORATE12 in Las Vegas (http://collaborate12.com).
| Date | Time | Presentation | Presenter |
| --- | --- | --- | --- |
| Sun 22nd | 9:00-3pm | Oracle Business Intelligence Application Journey | |
| Mon 23rd | 9:45-10:45 | Managing Unstructured Data using Hadoop, Oracle 11g and Oracle Exadata Database Machine | Jim Steiner |
| Mon 23rd | 9:45-10:45 | Environmental Data Management and Analytics-a Real World Perspective | Angela Miller |
| Mon 23rd | 11-12 | Public Safety and Environmental Real-Time Analytics using Oracle Business Intelligence | Raghav Venkat Therese Arguelles |
| Mon 23rd | 11-12 | BI is more than slice and dice | Peter Scott |
| Mon 23rd | 14:30-15:30 | In-Database Analytics: Predictive Analytics, Data Mining, Exadata & Business Intelligence | Jacek Myczkowski |
| Mon 23rd | 15:45-16:45 | Big Data Analytics, R you ready | Mark Hornick Shyam Nath |
| Tues 24th | 10:45-11:45 | BI Analytics and Oracle NoSQL. The Future of Now | Manish Khera |
| Wed. 25th | 8:15-9:15 | Oracle Data Mining – A Component of the Oracle Advanced Analytics Option-Hands-on Lab | Charlie Berger |
| Wed 25th | 9:30-10:30 | Oracle R Enterprise – A Component of the Oracle Advanced Analytics Option-Hands-on Lab | Mark Hornick |
Here are the abstracts from the two main Oracle Advanced Analytics presentations by Charlie Berger and Mark Hornick
Oracle Data Mining – A Component of the Oracle Advanced Analytics Option
This Hands-on Lab provides an introduction to Oracle Data Mining and the Oracle Data Miner GUI.
Oracle Data Mining (ODM), now part of Oracle Advanced Analytics, provides an extensive set of in-database data mining algorithms that solve a wide range of business problems. It can predict customer behavior, detect fraud, analyze market baskets, segment customers, and mine text to extract sentiments. ODM provides powerful data mining algorithms that run as native SQL functions for in-database model building and model deployment. There is no need for the time delays and security risks of data movement.
The free Oracle Data Miner GUI is an extension to Oracle SQL Developer 3.1 that enables data analysts to work directly with data inside the database, explore the data graphically, build and evaluate multiple data mining models, apply ODM models to new data, and deploy ODM’s predictions and insights throughout the enterprise. Oracle Data Miner work flows capture and document the user’s analytical methodology and can be saved and shared with others to automate advanced analytical methodologies.
Oracle R – A component of the Oracle Advanced Analytics Option
This Hands-on Lab provides an introduction to Oracle R Enterprise.
Oracle R Enterprise, a part of the Oracle Advanced Analytics Option, makes the open source R statistical programming language and environment ready for the enterprise by integrating R with Oracle Database. R users can interactively and transparently execute R scripts for statistical and graphical analyses on data stored in Oracle Database. R scripts can be executed in Oracle Database using potentially multiple database-managed R engines – resulting in data parallel execution. ORE also provides a rich set of statistical functions and advanced analytics techniques.
In this lab, attendees will be introduced to Oracle’s strategy for R, including the Oracle R Distribution, Oracle R Enterprise (ORE), and Oracle R Connector for Hadoop (ORCH). We will focus on Oracle R Enterprise with hands-on exercises exploring the transparency layer, embedded R execution, and statistics engine.
Data Visualization Videos & Resources
Here is a selection of videos and websites on Data Visualisations.
Hans Rosling videos of his TED talks
- World Population Growth
- Global Population Growth (TED)
- Asia’s Rise – How and When
- HIV: New facts and stunning data visuals
- Video for the BBC
Other videos
Useful Websites
Oracle Advanced Analytics Video by Charlie Berger
Charlie Berger (Sr. Director Product Management, Data Mining & Advanced Analytics) has produced a video based on a recent presentation called ‘Oracle Advanced Analytics: Oracle R Enterprise & Oracle Data Mining’.
This is a 1-hour video, including some demos, covering product background, product features, recent developments and new additions, examples of how Oracle is including Oracle Data Mining in their Fusion Applications, etc.
Oracle has 2 data mining products: the main in-database Oracle Data Mining, and the more recent extensions to R that give us Oracle R Enterprise.
Check out the video – Click here.
Check out Charlie’s blog at https://blogs.oracle.com/datamining/
Oracle University : 2 Day Oracle Data Mining training course
ODM–Attribute Importance using PL/SQL API
In a previous blog post I explained what attribute importance is and how it can be used in the Oracle Data Miner tool (click here to see blog post).
In this post I want to show you how to perform the same task using the ODM PL/SQL API.
The ODM tool makes extensive use of the Automatic Data Preparation (ADP) function. ADP performs data transformations such as binning, normalisation and outlier treatment of the data, based on the requirements of each of the data mining algorithms. In addition to these transformations we can specify our own. We do this by creating a settings table which will contain the settings and transformations we want the data mining algorithm to perform on the data.
ADP is automatically turned on when using the ODM tool in SQL Developer. This is not the case when using the ODM PL/SQL API. So before we can run the Attribute Importance function we need to turn on ADP.
Step 1 – Create the setting table
CREATE TABLE Att_Import_Mode_Settings (
setting_name VARCHAR2(30),
setting_value VARCHAR2(30));
Step 2 – Turn on Automatic Data Preparation
BEGIN
INSERT INTO Att_Import_Mode_Settings (setting_name, setting_value)
VALUES (dbms_data_mining.prep_auto,dbms_data_mining.prep_auto_on);
COMMIT;
END;
Step 3 – Run Attribute Importance
BEGIN
DBMS_DATA_MINING.CREATE_MODEL(
model_name => 'Attribute_Importance_Test',
mining_function => DBMS_DATA_MINING.ATTRIBUTE_IMPORTANCE,
data_table_name => 'mining_data_build_v',
case_id_column_name => 'cust_id',
target_column_name => 'affinity_card',
settings_table_name => 'Att_Import_Mode_Settings');
END;
Step 4 – Select Attribute Importance results
SELECT *
FROM TABLE(DBMS_DATA_MINING.GET_MODEL_DETAILS_AI('Attribute_Importance_Test'))
ORDER BY RANK;
ATTRIBUTE_NAME IMPORTANCE_VALUE RANK
——————– —————- ———-
HOUSEHOLD_SIZE .158945397 1
CUST_MARITAL_STATUS .158165841 2
YRS_RESIDENCE .094052102 3
EDUCATION .086260794 4
AGE .084903512 5
OCCUPATION .075209339 6
Y_BOX_GAMES .063039952 7
HOME_THEATER_PACKAGE .056458722 8
CUST_GENDER .035264741 9
BOOKKEEPING_APPLICATION .019204751 10
CUST_INCOME_LEVEL 0 11
BULK_PACK_DISKETTES 0 11
OS_DOC_SET_KANJI 0 11
PRINTER_SUPPLIES 0 11
COUNTRY_NAME 0 11
FLAT_PANEL_MONITOR 0 11