OUG Ireland BI & Tech SIGs June 2013
On 11th and 12th June we will be having our next SIG meetings for BI and Tech. The BI SIG will be on 11th June in the Oracle offices in East Point. We then move to the Conrad Hotel on 12th June for the Tech SIG. Here are the agendas for the 2 days.
BI SIG
Tech SIG
These events are open to everyone; they are free for members, with a small fee for non-members.
To register for these events go to the following links
OUG Ireland 2013–Call for Presentations
The call for presentations at the OUG Ireland Conference is now open. The conference will be on Tuesday 12th March in Dublin city centre.
It is hoped to have a number of concurrent tracks covering all the main topic areas, including application development, database administration, business intelligence, applications, etc.
If you are interested in submitting a presentation then you need to fill in some of the details at
Follow the OUG Ireland conversation on twitter using the tag #oug_ire
Events for Oracle Users in Ireland–Winter 2012
Over the next couple of months there are a number of events happening for Oracle Users in Ireland.
There are a number of OUG Ireland SIG meetings happening. These are:
11th September : Joint SIG meeting of the BI & EPM SIG and the first Tech SIG meeting
12th September : HCM SIG meeting
20th November : BI & EPM SIG. This will be a full day SIG event.
3rd-5th December : The annual UKOUG Conference in Birmingham. There is usually a good attendance from Ireland at this conference. On the Monday night there will be the focus pubs. Join me at the Ireland table.
Oracle and the Oracle User Group have arranged to have Tom Kyte come to Dublin on 19th September, to give a number of presentations. Check out this link for more details and how to register.
Oracle will be having an Oracle Technology Day in Dublin on 15th November in the Croke Park Conference Centre. They will be talking about Cloud Computing, Mobile Computing, Social Media and Big Data. They are also hoping to include some of the updates and product news that will be announced at Oracle Open World in early October. You will need to register for this event.
Summary of Dates & Events to add to your diary.
11th September : OUG Ireland BI & EPM and Tech SIG meeting
12th September : OUG Ireland HCM SIG meeting
19th September : Tom Kyte in Dublin
15th November : Oracle Technology Day
3rd-5th December : UKOUG Conference in Birmingham.
My OUG Ireland Conference Presentations
Wednesday 21st March is an important day for OUG Ireland as it is the annual conference. This year we are in a new venue, the Dublin Convention Centre, on the river Liffey.
After many, many years of being an attendee at my local conference, this year I will be giving 2 presentations. Actually, I’ll be presenting one and co-presenting another.
My first presentation will be an introduction to Oracle Data Miner, which is now part of SQL Developer. I will be talking about the new features and some features that will be part of a future presentation. Most of the presentation time will be taken up with a demonstration of using Oracle Data Miner (ODMr). I will step through the main steps of data mining using the ODMr tool. The data set that I will be using is based on a university in the UK that wanted to look at how data mining could be used to help them manage student retention/churn.
The second presentation will be led by Antony Heljula, of Peak Indicators, with me co-presenting or butting in on some topics. This presentation will be at a much higher level, aimed at analysts and managers who are looking at data mining and what it can do for them. We will look at what it can be used for, who the main players are, some sample case studies/application areas, data quality issues, etc. There will be a demonstration of how you can incorporate the data mining model, developed in the first presentation, into OBIEE Dashboards. We will be using the same UK university scenario here and we will show how data mining has helped to identify specific types of students that could not be identified using other means.
Check out the full conference agenda – here
There are plenty of excellent presentations, with lots of Oracle ACEs and Oracle ACE Directors.
Some of my other activities on the day will be:
- Talking to people about writing articles for Oracle Scene, the user group magazine. I’m the deputy editor of Oracle Scene.
- I’m also deputy chair of the Irish BI & EPM SIG, so I’ll be trying to persuade people to take part in and present at future meetings.
- Finally, and perhaps most importantly, I will be meeting other people in the Oracle world here in Ireland. Some of these people I have known for 20+ years. Because of busy schedules, sometimes the only time we get to catch up is at the annual conference.
If you would like to talk to me about the topics covered in the presentations or about any of the above activities, look out for me during the day. I will be at the (free) drinks reception at the end of the day, so you can talk to me then. If that does not suit, then drop me an email and we can arrange to meet up.
ODM 11gR2–Real-time scoring of data
In my previous posts I gave sample code of how you can use your ODM model to score new data.
Applying an ODM Model to new data in Oracle – Part 2
Applying an ODM Model to new data in Oracle – Part 1
The examples given in these previous posts were based on the new data being in a table.
In some scenarios you may not have the data you want to score in a table. For example, you may want to score data as it is being recorded and before it gets committed to the database.
The format of the command to use is:
prediction(odm_model_name USING attribute_list)
prediction_probability(odm_model_name, target_value USING attribute_list)
So we can list the model attributes we want to use, instead of USING * as we did in the previous blog posts.
Using the same sample data that I used in my previous posts the command would be:
Select prediction(clas_decision_tree
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education,
1 as household_size,
2 as yrs_residence,
1 as y_box_games) as scored_value
from dual;
SCORED_VALUE
————
0
Select prediction_probability(clas_decision_tree, 0
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education,
1 as household_size,
2 as yrs_residence,
1 as y_box_games) as probability_value
from dual;
PROBABILITY_VALUE
—————–
1
So we get the same result as we got in our previous examples.
Depending on what data we have gathered, we may or may not have all the values for each of the attributes used in the model. In this case we can submit a subset of the values to the function and still get a result.
Select prediction(clas_decision_tree
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education) as scored_value2
from dual;
SCORED_VALUE2
————-
0
Select prediction_probability(clas_decision_tree, 0
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education) as probability_value2
from dual;
PROBABILITY_VALUE2
——————
1
Again we get the same results.
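This record-at-a-time style of call is what makes the pre-commit scenario mentioned above possible. For example, the PREDICTION function could be embedded in a BEFORE INSERT trigger so that each record is scored as it is being recorded. The following is a minimal sketch only; the CUSTOMER_STAGING table, its SCORED_VALUE column and the trigger name are all hypothetical:

CREATE OR REPLACE TRIGGER customer_staging_score_trg
BEFORE INSERT ON customer_staging
FOR EACH ROW
BEGIN
   -- Score the incoming record before it is committed, using a subset
   -- of the model attributes, and store the result on the row itself
   SELECT PREDICTION(clas_decision_tree
                     USING :NEW.age AS age,
                           :NEW.cust_marital_status AS cust_marital_status,
                           :NEW.education AS education)
   INTO   :NEW.scored_value
   FROM   dual;
END;
/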
Ireland table at Focus Pub tonight
Today (Monday 5th Dec) is the first day of the UKOUG Conference in Birmingham.
Tonight we have the Focus Pubs session starting at 8:45pm. This year we have an Ireland table for all of the Irish people at the conference to gather at and to meet.
I’ll be there so drop along and say hello.
Applying an ODM Model to new data in Oracle – Part 2
This is the second of a two part blog posting on using an Oracle Data Mining model to apply it to, or score, new data. The first part looked at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.
This second part looks at how you can apply or score the new data, using our ODM model, in a real-time mode, scoring a single record at a time.
PREDICTION Function
The PREDICTION SQL function can be used in many different ways. The following examples illustrate the main ways of using it. Again we will be using the same data set, with the data in our NEW_DATA_TO_SCORE table.
The syntax of the function is:
PREDICTION(model_name USING attribute_list)
Example 1 – Real-time Prediction Calculation
In this example we will select a record and calculate its predicted value. The function will return the predicted value with the highest probability.
SELECT cust_id, prediction(clas_decision_tree using *)
FROM NEW_DATA_TO_SCORE
WHERE cust_id = 103001;
CUST_ID PREDICTION(CLAS_DECISION_TREEUSING*)
———- ————————————
103001 0
So the predicted class value is 0 (zero), and this has a higher probability than a class value of 1.
We can compare and check this result with the result that was produced using the DBMS_DATA_MINING.APPLY function (see previous blog post).
SQL> select * from new_data_scored
2 where cust_id = 103001;
CUST_ID PREDICTION PROBABILITY
———- ———- ———–
103001 0 1
103001 1 0
Here we can see that the class value of 0 has a probability of 1 (100%) and the class value of 1 has a probability of 0 (0%).
Example 2 – Selecting top 10 Customers with Class value of 1
For this we are selecting from our NEW_DATA_TO_SCORE table. We want to find the records that have a class value of 1 and have the highest probability. We only want to return the first 10 of these.
SELECT cust_id
FROM NEW_DATA_TO_SCORE
WHERE PREDICTION(clas_decision_tree using *) = 1
AND rownum <=10;
CUST_ID
———-
103005
103007
103010
103014
103016
103018
103020
103029
103031
103036
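Note that ROWNUM is applied before any ordering, so the query above returns the first 10 matching records rather than the 10 strongest predictions. If you wanted the 10 with the highest probability, a sketch like the following would do it, using the PREDICTION_PROBABILITY function that is introduced in Example 3:

SELECT cust_id
FROM  (SELECT cust_id
       FROM   NEW_DATA_TO_SCORE
       WHERE  PREDICTION(clas_decision_tree USING *) = 1
       ORDER BY PREDICTION_PROBABILITY(clas_decision_tree, 1 USING *) DESC)
WHERE rownum <= 10;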
Example 3 – Selecting records based on Prediction value and Probability
For this example we want to find out what countries the customers come from, where the prediction is 0 (won’t take up the offer) and the probability of this occurring is 1 (100%). This example introduces the PREDICTION_PROBABILITY function. This function allows us to use the probability strength of the prediction.
select country_name, count(*)
from new_data_to_score
where prediction(clas_decision_tree using *) = 0
and prediction_probability (clas_decision_tree using *) = 1
group by country_name
order by count(*) asc;
COUNTRY_NAME COUNT(*)
—————————————- ———-
Brazil 1
China 1
Saudi Arabia 1
Australia 1
Turkey 1
New Zealand 1
Italy 5
Argentina 12
United States of America 293
The examples that I have given above are only the basic examples of using the PREDICTION function. There are a number of other related functions, including PREDICTION_COST, PREDICTION_SET and PREDICTION_DETAILS. Examples of these will be covered in a later blog post.
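As a quick taster ahead of that post, here is a sketch of PREDICTION_SET, which returns all of the target values and their probabilities for a record in a single call (using the same model and table as above):

SELECT T.cust_id, S.prediction, S.probability
FROM  (SELECT cust_id,
              PREDICTION_SET(clas_decision_tree USING *) pset
       FROM   NEW_DATA_TO_SCORE
       WHERE  cust_id = 103001) T,
      TABLE(T.pset) S;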
Oracle Ireland: Data Centre Transformation Event 7th December
Oracle in Ireland is hosting a session called Data Centre Transformation on 7th December (9:30-13:00), in the Guinness Storehouse, St James Gate, Dublin 8.
The agenda for this session is
9:00 : Registration & Coffee
10:00 : The 21st Century Data Centre, Delivered by Oracle Solaris – Mike Ramchand
10:30 : Oracle Enterprise Manager 12c – John Caulfield, Solutions Director
11:00 : Oracle Virtualised Systems (VM 3.0) – Dave Patterson, Oracle Hardware
11:30 : Coffee Break
12:00 : Transformative Oracle Storage Solutions – Neil Caughey, Oracle Storage Business Unit
12:30 : Extreme Performance with Oracle Exadata and Exalogic – Brian Grant, Oracle Exalogic Business Development Manager
To book your place at this event, email oracle.events@ketchumpleon.com
Or register by following this web link.
I won’t be at this event as I’ll be presenting in the afternoon at the UKOUG conference in Birmingham.
Applying an ODM Model to new data in Oracle – Part 1
This is the first of a two part blog posting on using an Oracle Data Mining model to apply it to, or score, new data. This first part looks at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.
The second part will be posted in a couple of days and will look at how you can apply or score the new data, using our ODM model, in a real-time mode, scoring a single record at a time.
DBMS_DATA_MINING.APPLY
Instead of applying the model to data as it is captured, you may need to apply a model to a large number of records at the same time. To perform this bulk processing we can use the APPLY procedure that is part of the DBMS_DATA_MINING package. The format of the procedure is
DBMS_DATA_MINING.APPLY (
model_name IN VARCHAR2,
data_table_name IN VARCHAR2,
case_id_column_name IN VARCHAR2,
result_table_name IN VARCHAR2,
data_schema_name IN VARCHAR2 DEFAULT NULL);
Parameter Name | Description
Model_Name | The name of your data mining model
Data_Table_Name | The source data for the model. This can be a table or view.
Case_Id_Column_Name | The attribute that gives uniqueness to each record. This could be the Primary Key, or if the PK contains more than one column then a new attribute is needed
Result_Table_Name | The name of the table where the results will be stored
Data_Schema_Name | The schema name for the source data
The main condition for applying the model is that the source table (DATA_TABLE_NAME) needs to have the same structure as the table that was used when creating the model.
Also the data needs to be pre-processed in the same way as the training data, to ensure that the data in each attribute/feature has the same formatting.
When you use the APPLY procedure it does not update the original data/table, but creates a new table (RESULT_TABLE_NAME) with a structure that is dependent on what the underlying DM algorithm is. The following gives the Result Table description for the main DM algorithms:
For Classification algorithms
case_id VARCHAR2/NUMBER
prediction NUMBER / VARCHAR2 -- depending on the target data type
probability NUMBER
For Regression
case_id VARCHAR2/NUMBER
prediction NUMBER
For Clustering
case_id VARCHAR2/NUMBER
cluster_id NUMBER
probability NUMBER
Example / Case Study
My last few blog posts on ODM have covered most of the APIs for building and transferring models. We will be using the same data set in these posts. The following code uses the same data and models to illustrate how we can use the DBMS_DATA_MINING.APPLY procedure to perform a bulk scoring of data.
In my previous post we used the EXPORT and IMPORT procedures to move a model from one database (Test) to another database (Production). The following example uses the model in Production to score new data. I have set up a sample of data (NEW_DATA_TO_SCORE) from the SH schema, using the same set of attributes as was used to create the model (MINING_DATA_BUILD_V). This data set contains 1500 records.
SQL> desc NEW_DATA_TO_SCORE
Name Null? Type
———————————— ——– ————
CUST_ID NOT NULL NUMBER
CUST_GENDER NOT NULL CHAR(1)
AGE NUMBER
CUST_MARITAL_STATUS VARCHAR2(20)
COUNTRY_NAME NOT NULL VARCHAR2(40)
CUST_INCOME_LEVEL VARCHAR2(30)
EDUCATION VARCHAR2(21)
OCCUPATION VARCHAR2(21)
HOUSEHOLD_SIZE VARCHAR2(21)
YRS_RESIDENCE NUMBER
AFFINITY_CARD NUMBER(10)
BULK_PACK_DISKETTES NUMBER(10)
FLAT_PANEL_MONITOR NUMBER(10)
HOME_THEATER_PACKAGE NUMBER(10)
BOOKKEEPING_APPLICATION NUMBER(10)
PRINTER_SUPPLIES NUMBER(10)
Y_BOX_GAMES NUMBER(10)
OS_DOC_SET_KANJI NUMBER(10)
SQL> select count(*) from new_data_to_score;
COUNT(*)
———-
1500
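If you want to set up a similar scoring table yourself, it can be built along these lines. This is a sketch that assumes the MINING_DATA_APPLY_V view, which comes with Oracle's data mining sample schemas, is available; it has the same set of attributes as MINING_DATA_BUILD_V:

-- Create a table of new data to score, with the same structure
-- as the view used to build the model
CREATE TABLE new_data_to_score AS
SELECT *
FROM   mining_data_apply_v
WHERE  rownum <= 1500;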
The next step is to run the DBMS_DATA_MINING.APPLY procedure. The parameters that we need to feed into this procedure are
Parameter Name | Value
Model_Name | CLAS_DECISION_TREE – we imported this model from our test database
Data_Table_Name | NEW_DATA_TO_SCORE
Case_Id_Column_Name | CUST_ID – this is the PK
Result_Table_Name | NEW_DATA_SCORED – a new table that will be created, containing the Prediction and Probability
The NEW_DATA_SCORED table will contain 2 records for each record in the source data (NEW_DATA_TO_SCORE). For each record in NEW_DATA_TO_SCORE we will have one record for each of the target values (0 or 1) and the probability for each target value. So for our NEW_DATA_TO_SCORE, which contains 1,500 records, we will get 3,000 records in the NEW_DATA_SCORED table.
To apply the model to the new data we run:
BEGIN
dbms_data_mining.apply(
model_name => 'CLAS_DECISION_TREE',
data_table_name => 'NEW_DATA_TO_SCORE',
case_id_column_name => 'CUST_ID',
result_table_name => 'NEW_DATA_SCORED');
END;
/
This takes 1 second to run on my laptop, so this apply/scoring of new data is really quick.
The new table NEW_DATA_SCORED has the following description
SQL> desc NEW_DATA_SCORED
Name Null? Type
——————————- ——– ——-
CUST_ID NOT NULL NUMBER
PREDICTION NUMBER
PROBABILITY NUMBER
SQL> select count(*) from NEW_DATA_SCORED;
COUNT(*)
———-
3000
We can now look at the prediction and the probabilities
SQL> select * from NEW_DATA_SCORED where rownum <=12;
CUST_ID PREDICTION PROBABILITY
———- ———- ———–
103001 0 1
103001 1 0
103002 0 .956521739
103002 1 .043478261
103003 0 .673387097
103003 1 .326612903
103004 0 .673387097
103004 1 .326612903
103005 1 .767241379
103005 0 .232758621
103006 0 1
103006 1 0
12 rows selected.
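In most cases you are only interested in the most probable target value for each customer. One way to get a single row per customer from NEW_DATA_SCORED is with an analytic function, as in this sketch:

SELECT cust_id, prediction, probability
FROM  (SELECT s.*,
              ROW_NUMBER() OVER (PARTITION BY cust_id
                                 ORDER BY probability DESC) prob_rank
       FROM   new_data_scored s)
WHERE prob_rank = 1;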
How Many Sleeps to Santa
select
to_date('25/12/2011','DD/MM/YYYY') - trunc(sysdate) "How Many Sleep to Santa"
from dual;
How Many Sleep to Santa
———————–
34
Call for Presentations : OUG Ireland Conference 2012
The call for presentations for the annual Oracle User Group Ireland conference has been posted in the last few days.
The conference is planned for March 2012 and the venue will be picked over the next few weeks.
I’m on the organising committee this year. It is hoped to have a number of parallel streams covering core Database Technology, BI (& EPM) and Development (including Fusion).
If you are interested in presenting a short presentation of approx. 45 minutes (including time for questions), then you will need to submit your Topic and Abstract using the following link : www.oug.org/Irelandpapers
The conference is not limited to presenters from Ireland and it is hoped to get a number of well known Oracle experts and Oracle ACEs to come to Dublin for the day.
What kind of topics are of interest? Well, pretty much anything Oracle. We have all come across something interesting in our jobs that we could share, be it using a particular technique, new features, sharing experiences, best practices, product demos, etc.
I’ve already submitted a presentation on Oracle Data Miner.
There is a Twitter hash tag for the Oracle Conference #oug_ire2012. So add this to your Twitter tool to follow developments and announcements about the conference.
If you have any questions about the conference, drop me an email.
My UKOUG Conference 2011 Schedule
The UKOUG conference will be in a couple of weeks. I have my flights and hotel booked, and I’ve just finished selecting my agenda of presentations. I really enjoy this conference as it serves many purposes, including finding out what new directions Oracle is taking, seeing new product features, some upskilling/training, confirming that the approaches I have been using on projects are valid, getting lots of hints and tips, etc.
One thing that I always try to do, and strongly encourage everyone (in particular first timers) to do, is to go to one session every day on a topic or product that you know (nearly) nothing about. You might discover that you know more than you think, or you may learn something new that can be fed into some project on your return or over the next 12 months.
My agenda for the conference currently looks very busy. In between these sessions there is the exhibition hall, meetings with old and new friends, meetings with product/business unit managers, asking people to write articles for Oracle Scene, checking out possible presenters to come to Ireland for our conference in March 2012, etc. Then there is my presentation on the Wednesday afternoon.
Sunday
I’ll miss most of the Oak Table event on the Sunday but I hope to make it in time for
16:40-17:30 : Performance & High Availability Panel Session
Monday
9:20-9:50 : Keynote by Mark Sunday, Oracle (H1)
10:00-10:45 : The Future of BI & Oracle roadmap, Mike Durran, Oracle (H5)
11:05-12:05 : Implementing Interactive Maps with OBIEE 11g, Antony Heljula, Peak Indicators (H10A)
12:15-13:15 : OBI 11g Analysis & Reporting New Features, Mark Rittman (8A)
14:30-15:15 : Master Data Management – What is it & how to make it work – Robert Barnett, Hub Solutions Designs (H10A)
16:20-17:35 : Dummies Guide to Oracle ADF, Grant Ronald, Oracle, (Media Suite)
16:35-18:30 : The DB Time Performance Method, Graham Wood, Oracle (H8A)
17:45-18:30 : Performance & Stability with Oracle 11g SQL Plan Management, Doug Burns (H1)
17:45-18:30 : Experiences in Virtualization, Michael Doherty (H10A)
19:45-20:45 : Exhibition Welcome Drinks
20:45-Late : Focus Pubs
Tuesday
9:00-11:00 : Next Generation BI Architectures Masterclass, Andrew Bond, Oracle (H10B)
10:10-10:55 : Who’s afraid of Analytic Functions, Alex Nuijten, Maxima (H5)
11:15-12:15 : Analysing Your Data with Analytic Functions, Carl Dudley, (H9)
11:25-13:25 : Using a Physical Standby to Minimize Downtime for DB Release or Server Change, Michael Abbey, Pythian (Media Suite)
14:40-15:25 : How not to make the headlines, Mark Clewett, Hitachi (H10A)
14:40-15:25 : APEX Back to Basics, Paul Broughton, APEX Evangelists (H9)
15:35-16:20 : Can People be identified in the database, Pete Finnigan (H1)
16:40-18:35 : OTN Hands-on Workshop, Todd Trichler, Oracle (H8A)
17:50-18:35 : SQL Developer Data Modeler as a replacement for Oracle Designer, Paul Bainbridge, Fujitsu, (H8B)
18:45-19:45 : Keynote : Future of Enterprise Software and Oracle, Ray Wang, Constellation Research (H1)
20:00-Late : Evening Social & Networking
Wednesday
9:00-10:00 : Oracle 11g Database: Automatic Parallelism, Joel Goodman, Oracle (H9)
9:00-10:00 : Big Data: Learn how to predict the future, Keith Laker, Oracle (H8B)
10:10-10:55 : All about indexes – What to index, when and how, Mark Bobak, ProQuest (H5)
11:20-12:30 : Using Application Express to Build Highly Accessible Products, Anthony Rayner, Oracle (H8A)
12:30-13:30 : Practical uses for APEX Dictionary, John Scott, APEX Evangelists (H8A)
15:20-16:05 : How to deploy your Oracle Data Miner 11g R2 Workflows in a Live Environment – Me (H7B)
16:15-17:00 : Next Generation Data Warehousing, Kulvinder Hari, Oracle (H8A)
16:15-17:00 : Beyond RTFM and WTF Message Moments. Introducing a new standard: Oracle Fusion Applications User Assistance, Ultan O’Broin (Executive Room 7)
I know I have some overlapping sessions, but I will decide on the day which of these I will attend.
As you can see I will be following the BI stream mainly, with a few sessions from the Database and Development streams too.
This year there is a smart phone app to help us organise our agenda, meetings, etc. The only downside is that the app does not import the agenda that I created on the website, so I have to do it again. Maybe for next year they will have an import agenda feature.