ODM 11g R2
My UKOUG Presentation on ODM PL/SQL API
On Wednesday 7th Dec I gave my presentation at the UKOUG conference in Birmingham. The main topic of the presentation was on using the Oracle Data Miner PL/SQL API to implement a model in a production environment.
There was a good turn out considering it was the afternoon of the last day of the conference.
I asked the attendees about their experience of using the current and previous versions of the Oracle Data Mining tool. Only one of the attendees had used the pre 11g R2 version of the tool.
From my discussions with the attendees, it looks like they would have preferred an introduction/overview type presentation of the new ODM tool. I had submitted a presentation on this, but sadly it was not accepted. Not enough people had voted for it.
For next year, I will submit an introduction/overview presentation again, but I need more people to vote for it. So watch out for the voting stage next June and vote for it.
Here are the links to the presentation and the demo scripts (which I didn’t get time to run)
Demo Script 1 – Exploring and Exporting model
Demo Script 2 – Importing, Dropping and Renaming the model, plus queries that use the model
ODM–PL/SQL API for Exporting & Importing Models
In a previous blog post I talked about how you can take a copy of a workflow developed in Oracle Data Miner, and load it into a new schema.
When your data mining project gets to a mature stage and you need to productionalise the data mining process and model updates, you will need to use a different set of tools.
As you gather more and more data and cases, you will be updating/refreshing your models to reflect this new data. The newly updated data mining model then needs to be moved from the development/test environment to the production environment. As with all things in IT, we would like to automate this updating of the model in production.
There are a number of database features and packages that we can use to automate the update and it involves the setting up of some scripts on the development/test database and also on the production database.
These steps include:
- Creation of a directory on the development/test database
- Exporting of the updated Data Mining model
- Copying of the exported Data Mining model to the production server
- Removing the existing Data Mining model from production
- Importing of the new Data Mining model.
- Renaming the imported model to the standard name
The DBMS_DATA_MINING PL/SQL package has two procedures that allow us to export a model and to import a model. These procedures are an API to Oracle Data Pump. The procedure to export a model is DBMS_DATA_MINING.EXPORT_MODEL and the procedure to import a model is DBMS_DATA_MINING.IMPORT_MODEL. The parameters to these procedures are what you would expect to use if you were using Data Pump directly, but they have been tailored for data mining models.
Let's start by listing the models that we have in our development/test schema:
SQL> connect dmuser2/dmuser2
Connected.
SQL> SELECT model_name FROM user_mining_models;
MODEL_NAME
------------------------------
CLAS_DT_1_6
CLAS_SVM_1_6
CLAS_NB_1_6
CLAS_GLM_1_6
Create/define the directory on the server where the models will be exported to.
CREATE OR REPLACE DIRECTORY DataMiningDir AS 'c:\app\Data_Mining_Exports';
The schema you are using will need to have the CREATE ANY DIRECTORY privilege.
Now we can export our model. In this example we are going to export the Decision Tree model (CLAS_DT_1_6).
DBMS_DATA_MINING.EXPORT_MODEL procedure
The procedure has the following structure:
DBMS_DATA_MINING.EXPORT_MODEL (
filename IN VARCHAR2,
directory IN VARCHAR2,
model_filter IN VARCHAR2 DEFAULT NULL,
filesize IN VARCHAR2 DEFAULT NULL,
operation IN VARCHAR2 DEFAULT NULL,
remote_link IN VARCHAR2 DEFAULT NULL,
jobname IN VARCHAR2 DEFAULT NULL);
If we wanted to export all the models into a file called Exported_DM_Models, we would run:
DBMS_DATA_MINING.EXPORT_MODEL('Exported_DM_Models', 'DataMiningDir');
If we just wanted to export our Decision Tree model to file Exported_CLASS_DT_Model, we would run:
DBMS_DATA_MINING.EXPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name IN (''CLAS_DT_1_6'')');
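For reference, here is one way the filtered export call above could be wrapped in an executable PL/SQL block. This is my own sketch, using the directory, file and model names from the examples above:

BEGIN
   -- export only the Decision Tree model to the DataMiningDir directory
   DBMS_DATA_MINING.EXPORT_MODEL(
      filename     => 'Exported_CLASS_DT_Model',
      directory    => 'DataMiningDir',
      model_filter => 'name IN (''CLAS_DT_1_6'')');
END;
/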
DBMS_DATA_MINING.DROP_MODEL procedure
Before you can load the newly updated data mining model into your production database, you need to drop the existing model. This should be done when the model is not in use, so it would be advisable to schedule the dropping of the model during a quiet time, such as before or after the nightly backups/processes.
DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE', TRUE);
DBMS_DATA_MINING.IMPORT_MODEL procedure
Warning: When importing the data mining model, you need to import into a tablespace that has the same name as the tablespace in the development/test database. If the USERS tablespace is used in the development/test database, then the model will be imported into the USERS tablespace in the production database.
Hint: Create a DATAMINING tablespace in your development/test and production databases. This tablespace can be used solely for data mining purposes.
To import the decision tree model we exported previously, we would run
DBMS_DATA_MINING.IMPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name = ''CLAS_DT_1_6''', 'IMPORT', null, null, 'dmuser2:dmuser3');
We now have the new updated data mining model loaded into the production database.
DBMS_DATA_MINING.RENAME_MODEL procedure
The final step before we can start using the new updated model in our production database is to rename the imported model to the standard name that is being used in the production database.
DBMS_DATA_MINING.RENAME_MODEL('CLAS_DT_1_6', 'CLAS_DECISION_TREE');
Scheduling of these steps
We can wrap most of this up into stored procedures and have it scheduled to run on a semi-regular basis, using the DBMS_JOB package. The following example schedules a procedure that controls the dropping, importing and renaming of the models.
VARIABLE jobnum NUMBER
EXEC DBMS_JOB.SUBMIT(:jobnum, 'import_new_data_mining_model;', TRUNC(SYSDATE), 'ADD_MONTHS(TRUNC(SYSDATE),1)');
This schedules the procedure that imports the new data mining models to run immediately and then every month.
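The controlling procedure itself could look something like the following rough sketch (my own, not from the original post). It assumes the exported dump file has already been copied into the DataMiningDir directory on the production server, and uses the model and file names from the examples above:

CREATE OR REPLACE PROCEDURE import_new_data_mining_model AS
BEGIN
   -- remove the existing production model; TRUE forces the drop
   DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE', TRUE);

   -- import the newly exported model from the dump file in DataMiningDir
   DBMS_DATA_MINING.IMPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir',
                                 'name = ''CLAS_DT_1_6''', 'IMPORT');

   -- rename the imported model to the standard production name
   DBMS_DATA_MINING.RENAME_MODEL('CLAS_DT_1_6', 'CLAS_DECISION_TREE');
END import_new_data_mining_model;
/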
ODM 11.2 Data Dictionary Views.
The Oracle 11.2 database contains the following Oracle Data Mining views. These allow you to query the database for the metadata relating to what Data Mining Models you have, what their configurations are and what data is involved.
ALL_MINING_MODELS
Describes the high level information about the data mining models in the database. Related views include DBA_MINING_MODELS and USER_MINING_MODELS.
| Attribute | Data Type | Description |
| --- | --- | --- |
| OWNER | Varchar2(30) NN | Owner of the mining model |
| MODEL_NAME | Varchar2(30) NN | Name of the mining model |
| MINING_FUNCTION | Varchar2(30) | Data mining function used: CLASSIFICATION, REGRESSION, CLUSTERING, FEATURE_EXTRACTION, ASSOCIATION_RULES, ATTRIBUTE_IMPORTANCE |
| ALGORITHM | Varchar2(30) | Algorithm used by the model: NAIVE_BAYES, ADAPTIVE_BAYES_NETWORK, DECISION_TREE, SUPPORT_VECTOR_MACHINES, KMEANS, O_CLUSTER, NONNEGATIVE_MATRIX_FACTOR, GENERALIZED_LINEAR_MODEL, APRIORI_ASSOCIATION_RULES, MINIMUM_DESCRIPTION_LENGTH |
| CREATION_DATE | Date NN | Date the model was created |
| BUILD_DURATION | Number | Time in seconds for the model build process |
| MODEL_SIZE | Number | Size of the model in MBytes |
| COMMENTS | Varchar2(4000) | |
SELECT model_name,
mining_function,
algorithm,
build_duration,
model_size
FROM ALL_MINING_MODELS;
MODEL_NAME MINING_FUNCTION ALGORITHM BUILD_DURATION MODEL_SIZE
------------- ---------------- -------------------------- -------------- ----------
CLAS_SVM_1_6 CLASSIFICATION SUPPORT_VECTOR_MACHINES 3 .1515
CLAS_DT_1_6 CLASSIFICATION DECISION_TREE 2 .0842
CLAS_GLM_1_6 CLASSIFICATION GENERALIZED_LINEAR_MODEL 3 .0877
CLAS_NB_1_6 CLASSIFICATION NAIVE_BAYES 2 .0459
ALL_MINING_MODEL_ATTRIBUTES
Describes the attributes of the data mining models. Related views are DBA_MINING_MODEL_ATTRIBUTES and USER_MINING_MODEL_ATTRIBUTES.
| Attribute | Data Type | Description |
| --- | --- | --- |
| OWNER | Varchar2(30) NN | Owner of the mining model |
| MODEL_NAME | Varchar2(30) NN | Name of the mining model |
| ATTRIBUTE_NAME | Varchar2(30) NN | Name of the attribute |
| ATTRIBUTE_TYPE | Varchar2(11) | Logical type of the attribute: NUMERICAL (numeric data) or CATEGORICAL (character data) |
| DATA_TYPE | Varchar2(12) | Data type of the attribute |
| DATA_LENGTH | Number | Length of the data type |
| DATA_PRECISION | Number | Precision of a fixed point number |
| DATA_SCALE | Number | Scale of the fixed point number |
| USAGE_TYPE | Varchar2(8) | Indicates if the attribute was used to create the model (ACTIVE) or not (INACTIVE) |
| TARGET | Varchar2(3) | Indicates if the attribute is the target |
If we take one of the data mining models that was listed above, we can select what attributes are used by that model:
SELECT attribute_name,
attribute_type,
usage_type,
target
from all_mining_model_attributes
where model_name = 'CLAS_DT_1_6';
ATTRIBUTE_NAME ATTRIBUTE_T USAGE_TY TAR
------------------------------ ----------- -------- ---
AGE NUMERICAL ACTIVE NO
CUST_MARITAL_STATUS CATEGORICAL ACTIVE NO
EDUCATION CATEGORICAL ACTIVE NO
HOUSEHOLD_SIZE CATEGORICAL ACTIVE NO
OCCUPATION CATEGORICAL ACTIVE NO
YRS_RESIDENCE NUMERICAL ACTIVE NO
Y_BOX_GAMES NUMERICAL ACTIVE NO
AFFINITY_CARD CATEGORICAL ACTIVE YES
The first thing to note here is that all the attributes are listed as ACTIVE. This is the default and will be the case for all attributes for all the algorithms, so we can ignore this attribute in our queries, but it is good to check just in case.
The second thing to note is that in the last row the AFFINITY_CARD attribute has a TARGET value of YES. This is the target attribute used by the classification algorithm.
ALL_MINING_MODEL_SETTINGS
Describes the settings of the data mining models. The settings associated with a model are algorithm dependent. The setting values can be provided as input to the model build process. Alternatively, a separate settings table can be used. If no setting values are defined or provided, then the algorithm will use its default settings.
| Attribute | Data Type | Description |
| --- | --- | --- |
| OWNER | Varchar2(30) NN | Owner of the mining model |
| MODEL_NAME | Varchar2(30) NN | Name of the mining model |
| SETTING_NAME | Varchar2(30) NN | Name of the setting |
| SETTING_VALUE | Varchar2(4000) | Value of the setting |
| SETTING_TYPE | Varchar2(7) | Indicates whether the default value (DEFAULT) or a user-specified value (INPUT) is used by the model |
Let's take our previous example of the CLAS_DT_1_6 model and query the database to see what the settings are.
column setting_value format a30
select setting_name,
setting_value,
setting_type
from all_mining_model_settings
where model_name = 'CLAS_DT_1_6';
SETTING_NAME SETTING_VALUE SETTING
----------------------- ---------------------------- -------
ALGO_NAME ALGO_DECISION_TREE INPUT
PREP_AUTO ON INPUT
TREE_TERM_MINPCT_NODE .05 INPUT
TREE_TERM_MINREC_SPLIT 20 INPUT
TREE_IMPURITY_METRIC TREE_IMPURITY_GINI INPUT
CLAS_COST_TABLE_NAME ODMR$15_42_50_762000JERWZYK INPUT
TREE_TERM_MINPCT_SPLIT .1 INPUT
TREE_TERM_MAX_DEPTH 7 INPUT
TREE_TERM_MINREC_NODE 10 INPUT
ODM 11.2–Data Mining PL/SQL Packages
The Oracle 11.2 database contains 3 PL/SQL packages that allow you to perform all (well almost all) of your data mining functions.
So instead of using the Oracle Data Miner tool you can write some PL/SQL code that will allow you to do the same things.
Before you can start using these PL/SQL packages you need to ensure that the schema that you are going to use has been setup with the following:
- Create a schema or use an existing one
- Grant the schema all the data mining privileges: see my earlier post on how to set up an Oracle schema for data mining – Click here and YouTube video
- Grant all necessary privileges to the data that you will be using for data mining
The first PL/SQL package that you will use is DBMS_DATA_MINING_TRANSFORM. This PL/SQL package allows you to transform the data to make it suitable for data mining. There are a number of functions in this package that allow you to transform the data, but depending on the data you may need to write your own code to perform the transformations. When you apply your model to the test or the apply data sets, ODM will automatically take the transformation functions defined using this package and apply them to the new data sets.
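As an illustration (this is my own sketch, not from the original post, and the table and view names are just examples), a min-max normalisation of the numeric attributes of the sample build data could be set up as follows:

BEGIN
   -- create a table to hold the normalisation definitions
   DBMS_DATA_MINING_TRANSFORM.CREATE_NORM_LIN('MINING_DATA_NORM');

   -- work out the min-max normalisation parameters, excluding the case id and target
   DBMS_DATA_MINING_TRANSFORM.INSERT_NORM_LIN_MINMAX(
      norm_table_name => 'MINING_DATA_NORM',
      data_table_name => 'MINING_DATA_BUILD_V',
      exclude_list    => DBMS_DATA_MINING_TRANSFORM.COLUMN_LIST('CUST_ID', 'AFFINITY_CARD'));

   -- create a view that applies the normalisation; this view can then be used as the build data
   DBMS_DATA_MINING_TRANSFORM.XFORM_NORM_LIN(
      norm_table_name => 'MINING_DATA_NORM',
      data_table_name => 'MINING_DATA_BUILD_V',
      xform_view_name => 'MINING_DATA_BUILD_NORM_V');
END;
/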
The second PL/SQL package is DBMS_DATA_MINING. This is the main data mining PL/SQL package. It contains routines that allow you to do the following (a short build example is sketched after the list):
- Create a Model
- Describe the Model
- Export and import Models
- Compute costs and test metrics for classification Models
- Apply the Model to new data
- Administer Models, like dropping, renaming, etc.
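To give a flavour of the package, here is a minimal sketch of my own (the settings table and model name are illustrative) for building a Decision Tree classification model on the sample data, using the same settings shown in the ALL_MINING_MODEL_SETTINGS output above:

CREATE TABLE dt_build_settings (
   setting_name  VARCHAR2(30),
   setting_value VARCHAR2(4000));

BEGIN
   -- pick the Decision Tree algorithm and let ODM do automatic data preparation
   INSERT INTO dt_build_settings VALUES ('ALGO_NAME', 'ALGO_DECISION_TREE');
   INSERT INTO dt_build_settings VALUES ('PREP_AUTO', 'ON');
   COMMIT;

   -- build the classification model on the sample build data
   DBMS_DATA_MINING.CREATE_MODEL(
      model_name          => 'CLAS_DT_MANUAL',
      mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
      data_table_name     => 'MINING_DATA_BUILD_V',
      case_id_column_name => 'CUST_ID',
      target_column_name  => 'AFFINITY_CARD',
      settings_table_name => 'DT_BUILD_SETTINGS');
END;
/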
The next (and last) PL/SQL package is DBMS_PREDICTIVE_ANALYTICS. The routines included in this package allow you to prepare data, build a model, score a model and return the results of model scoring. The routines include EXPLAIN, which ranks attributes in order of influence in explaining a target column; PREDICT, which predicts the value of a target attribute based on the values in the input data; and PROFILE, which generates rules that describe the cases from the input data.
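For example (a hedged sketch of my own; the results table name is illustrative), EXPLAIN can be run against the sample data to rank the attributes that best explain the AFFINITY_CARD column:

BEGIN
   -- rank the attributes of MINING_DATA_BUILD_V by how well they explain AFFINITY_CARD
   DBMS_PREDICTIVE_ANALYTICS.EXPLAIN(
      data_table_name     => 'MINING_DATA_BUILD_V',
      explain_column_name => 'AFFINITY_CARD',
      result_table_name   => 'PA_EXPLAIN_RESULTS');
END;
/

-- the results table lists each attribute with its explanatory value and rank
SELECT * FROM pa_explain_results ORDER BY rank;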
Over the coming weeks I will have separate blog posts on each of these PL/SQL packages. These will cover the functions that are part of each package and will include some examples of using the package and functions.
ODM PL/SQL API 11.2 New Features
The PL/SQL API for Oracle Data Mining has had a number of new features added in the 11.2 release. These are listed below, along with the API features added with the 11.1 release.
- Support for Native Transactional Data with Association Rules: you can build association rule models without first transforming the transactional data.
- SVM class weights specified with CLAS_WEIGHTS_TABLE_NAME: including the GLM class weights
- FORCE argument to DROP_MODEL: you can now force a drop model operation even if a serious system error has interrupted the model build process
- GET_MODEL_DETAILS_SVM has a new REVERSE_COEF parameter: you can obtain the transformed attribute coefficients used internally by an SVM model by setting the new REVERSE_COEF parameter to 1
11.1 API New Features
- Mining Model schema objects: in previous releases, DM models were implemented as a collection of tables and metadata within the DMSYS schema. In 11.1, models are implemented as data dictionary objects in the SYS schema. A new set of data dictionary views presents DM models and their properties
- Automatic and Embedded Data Preparation: previously data preparation was the responsibility of the user. Now it can be automated
- Scoping of Nested Data: supports nested data types for both categorical and numerical data. Most algorithms require multi-record case data to be presented as columns of nested rows, each containing an attribute name/value pair. ODM processes each nested row as a separate attribute.
- Standardised Handling of Sparse Data & Missing Values: standardised across all algorithms.
- Generalised Linear Models: a new algorithm that supports classification (logistic regression) and regression (linear regression)
- New SQL Data Mining Function: PREDICTION_BOUNDS has been introduced for Generalised Linear Models. This returns the confidence bounds on predicted values (regression models) or predicted probabilities (classification). A quick example follows this list.
- Enhanced Support for Cost-Sensitive Decision Making: cost matrices can be added or removed using DBMS_DATA_MINING.ADD_COST_MATRIX and DBMS_DATA_MINING.REMOVE_COST_MATRIX.
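A quick illustration of PREDICTION_BOUNDS (my own example, not from the original post), using the GLM classification model listed earlier against the sample apply data set:

-- predicted target plus the confidence bounds on its probability for the first 10 cases
SELECT cust_id,
       PREDICTION(CLAS_GLM_1_6 USING *)              AS predicted_target,
       PREDICTION_BOUNDS(CLAS_GLM_1_6 USING *).LOWER AS lower_bound,
       PREDICTION_BOUNDS(CLAS_GLM_1_6 USING *).UPPER AS upper_bound
FROM   mining_data_apply_v
WHERE  ROWNUM <= 10;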
ODM API Demos in PL/SQL (& Java)
If you have been using Oracle Data Miner to develop your data mining workflows and models, at some point you will want to move away from the tool and start using the ODM APIs.
Oracle Data Mining provides a PL/SQL API and a Java API for creating supervised and unsupervised data mining models. The two APIs are fully interoperable, so that a model can be created with one API and then modified or applied using the other API.
I will cover the Java APIs in a later post, so watch out for that.
To help you get started with using the APIs there are a number of demo PL/SQL programs available. These were available as part of the pre-11.2 version of the tool, but they don't seem to be packaged up with the 11.2 (SQL Developer 3) application.
The following table gives a list of the PL/SQL demo programs that are available. Although these were part of the pre-11.2 tool, they still seem to work on an 11.2 database.
You can download a zip of these files from here.
The sample PL/SQL programs illustrate each of the algorithms supported by Oracle Data Mining. They include examples of data transformations appropriate for each algorithm.
I will be exploring the main APIs, how to set them up, the parameters, etc., over the next few weeks, so check back for these posts.
OOW Focus on Sessions
Oracle Open World has a huge number of sessions, commencing on Sunday and running until Thursday. To help attendees and non-attendees work out what sessions are available, you can work your way through the schedule builder.
It can be a bit difficult to find the sessions that you might be interested in, so this year they have produced a set of Focus On documents that contain all the sessions related to particular areas.
The following are the available Focus On areas and documents:
Let me know if I have missed any Focus On documents and I will update the list.
Oh and don’t forget the Oracle Data Miner sessions.
If you are not able to attend OOW, you can check out the OOW Live channel on YouTube to watch the keynotes and main sessions.
http://www.youtube.com/Oracle?src=7308729&Act=99&pcode=WWMK11042185MPP039
Check out Oracle Data Miner at OOW 11
If you are at Oracle Open World (OOW11) and you have an interest in Oracle Data Miner, check out the following presentation sessions:
In addition to these sessions there are also the following Hands-On Labs, where you can get your hands dirty with the tool.
Do let me know if I have missed a session so that I can update the list.
I'm not attending OOW11, so let me know what the sessions are like.
And tell Charlie that I sent you
Next Generation Analytics–Oracle BIWA TechCast
The Oracle BIWA SIG, which is part of the IOUG, will be having a tech cast on Wednesday 14th September, 12:00 PM – 1:00 PM CDT (between 6pm and 7pm in Ireland).
It is titled ‘Building Next-Generation Predictive Analytics Applications using Oracle Data Mining’.
You can register for this by visiting http://ow.ly/6s35C
This presentation will cover how the Oracle Database has become a predictive analytics (PA) platform for next-generation applications and will include several examples including:
- Oracle Fusion Human Capital Management (HCM) Predictive Workforce
- Oracle Adaptive Access Manager for fraud detection
- Oracle Communications Industry Model
- Oracle Complex Event Processing
and others. The presentation will be interspersed with Oracle Data Mining demos and PA examples where possible.
“Predictive analytics help you make better decisions by uncovering patterns and relationships hidden in the data. This new information generates competitive advantage. Oracle has invested heavily to “move the algorithms to the data” rather than current approaches. Oracle Data Mining provides 12 in-database algorithms that mine star schemas, structured, unstructured, transactional, and spatial data. Exadata, delivering 10x-100x faster performance, combined with OBIEE for dashboards and drill-down deliver an unbeatable in-database analytical platform that undergirds next-generation “predictive” analytics applications. This webcast will show you how to get started.”
My Oracle Magazine Collection
Over the past couple of days I have been doing a bit of a reorganisation of my book case in my home office. On one of the shelves I keep my Oracle Magazines. My collection dates back to 1992. I began my working career as a graduate consultant with Oracle in Ireland. At that stage Oracle Magazine seemed to be published every 4 to 6 months, but around 1995 it moved to being published every 2 months.
I thought that I had a full collection of Oracle Magazine from 1993 onwards, but the table below shows that I have a number of missing editions. Perhaps these gaps are due to my good nature of lending them to other people, or maybe I just lost them somewhere.
What I'm looking to do is to complete my collection. If you have one of the missing editions, can you let me know? Assuming that you don't mind parting with it, we can arrange postage.
Looking back over the previous editions, it is interesting to see some of the topics that were discussed. Typically they were discussed a couple of years before they became commonly used.
An idea for Oracle Magazine is to have a new column that looks back at an article on a particular technique/technology/tool and reflects on how things have changed (or not) since the article was written.
New Frontiers for Oracle Data Miner
Oracle Data Miner functionality is now well established and has been proven over the years, in particular with the release of the ODM 11gR2 version of the tool. But how will Oracle Data Miner develop into the future?
There are 4 main paths or Frontiers for future developments for Oracle Data Miner:
Oracle Data Miner Tool
The new ODM 11gR2 tool is a major development over the previous version of the tool. With the introduction of workflows and some added functionality, the tool is now comparable with the likes of SAS Enterprise Miner and SPSS.
But the new tool is not complete and still needs a bit of fine tuning of most of the features, in particular the usability and interactions. Some of the colour schemes need to be looked at, or users should be allowed to select their own colours.
Apart from the usability improvements, another major development that is needed is the ability to translate the workflow and the underlying database objects into usable code. This code can then be incorporated into our applications and other tools. The tool does allow you to produce shell code for the nodes, but there is still a lot of effort needed to make this usable. In the previous version of the tool there were features available in JDeveloper and SQL Developer to produce packaged code that was easy to include in our applications.
“A lot done – More to do”
Oracle Applications
Over the past couple of months there have been a few postings on how Oracle Data Miner (11gR2) has been, or will be, incorporated into various Oracle Applications, for example Oracle Fusion Human Capital Management and Oracle Real-Time Decisions (RTD). Watch out for other applications that will be including Oracle Data Miner.
“A bit done – Lots more to do”
Oracle Business Intelligence
One of the most common places where ODM can be used is with OBIEE. OBIEE is the core engine for the delivery of the BI needs of an organisation. OBIEE coordinates the gathering of data from various sources, the defining of the business measures and then the delivery of this information in various forms to the users. Oracle Data Miner can be included in this process and can add significant value to the BI needs and reports.
“A lot done – Need to publicise more”
Customized Projects
Most data mining projects are independent of the various Applications and BI requirements. They are projects that hope to achieve a competitive insight into organisational data. Over time, as the success of some pilot projects becomes known, the need for more data mining projects will increase. This will lead to organisations having a core data mining team to support these projects. With this, the team will need tools to support them in the delivery of their projects. This is where OBIEE and Oracle Fusion Apps will become increasingly important.
“A lot done – more to do”
Data Exploration using Oracle Data Miner 11gR2
Before beginning any data mining task we need to perform some data investigation. This will allow us to explore the data and to gain a better understanding of the data values. We can discover a lot by doing this, and it can help us to identify areas for improvement in the source applications, as well as identifying data that does not contribute to our business problem (this is called feature reduction) and data that needs reformatting into a number of additional features (feature creation). A simple example of this is a date of birth field, which provides no real value on its own, but by creating a number of additional attributes (features) we can use the date of birth to determine what age group a customer fits into.
As with most of the interface in Oracle Data Miner 11gR2, there is a new Data Exploration interface. In this blog post I will talk you through how to set up and use the new Data Exploration interface and show you how you can use the data exploration features to gain an understanding of the data before you begin using the data mining algorithms.
The examples given here are based on my previous blog posts and we will use the same sample data sets that were set up as part of the install and configuration.
See my other blog post and videos on installing and setting up Oracle Data Miner.
Data Set-up
Before we can begin the data exploration we need to identify the data we are going to use. To do this we need to select the Data tab from the Component Palette, and then select Data Source.
To create the Data node on our workflow we need to click and drag the Data Source onto the workflow. Select MINING_DATA_BUILD_V and select all the data.
The next step is to create the Explore Data node on our workflow. From the Data tab in the Component Palette, select and drag the Explore Data node onto the workflow. Now we need to link the Data node to the Explore Data node.
Right-click on the Explore Data node and click Run. This will make the ODM tool go to the database and analyse the data that is specified in our Data node. The analysis results will be used in the Explore Data node.
Exploring the Data
When the Explore Data node has finished we can look at the data it has generated. Right-click the Explore Data node and select View Data.
A lot of statistical information has been generated for each of the attributes in our Data node. In addition to the statistical information we also get a histogram of the attribute distributions.
We can work through each attribute taking the statistical data and the histograms to build up a picture of the data.
The data we are using is for an Electronics Goods store.
A few interesting things in the data are:
- 90% of the data comes from the United States of America
- The PRINTER_SUPPLIES attribute only has one value. We can eliminate this from our data set as it will not contribute to the data mining algorithms
- Similarly for OS_DOC_SET_KENJI, which also only has one value
The histograms are based on a predetermined number of bins. This is initially set to 10, but you may need to change this value up or down to see if a pattern exists in the data.
An example of this is if we select AGE and keep the number of bins at 10, we get a nice histogram showing that most of our customers are in the 31 to 46 age range. So maybe we should be concentrating on these.
Now if we change the number of bins to 25, we get a completely different picture of what is going on in the data.
To change the number of bins we need to go to the Workflow pane and select the Property Inspector. Scroll down to the Histogram section and change the Numerical Bins to 25. You then need to rerun the Explore Data node.
Now we can see that there are a number of important age groups that stand out more than others. If we look at the 31 to 46 age range, in the first histogram we can see that there is not much change between each of the age bins. But when we look at the second histogram, with 25 bins, for the same 31 to 46 age range, we get a very different view of the data. In this second histogram we see that the ages of the customers vary a lot. What does this mean? Well, it can mean lots of different things and it all depends on the business scenario. In our example we are looking at an electronic goods store. What we can deduce from this second histogram is that there are a small number of customers up to about age 23. Then there is an increase. Is this due to people having obtained their first main job after school and having some disposable income? This peak is followed by a drop off in customers, followed by another peak, drop off, peak, drop off, etc. Maybe we can build a profile of our customers based on their age, just like our financial organisations do to determine what products to sell to us based on our age and life stage.
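If you want to try a similar binning exercise directly in SQL, outside the ODM tool (this is my own sketch), WIDTH_BUCKET can be used to split AGE into equal-width bins and count the customers in each:

-- split AGE into 10 equal-width bins (change 10 to 25 to mimic the finer histogram)
WITH bounds AS (
   SELECT MIN(age) AS min_age, MAX(age) AS max_age
   FROM   mining_data_build_v)
SELECT WIDTH_BUCKET(m.age, b.min_age, b.max_age + 1, 10) AS age_bin,
       COUNT(*) AS num_customers
FROM   mining_data_build_v m,
       bounds b
GROUP  BY WIDTH_BUCKET(m.age, b.min_age, b.max_age + 1, 10)
ORDER  BY age_bin;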
Conclusions on the data
From this histogram we can maybe categorise the customers into the following:
• Early 20s – out of education, first job, disposable income
• Late 20s to early 30s – settling down, own home
• Late 30s – maybe kids, so have less disposable income
• 40s – maybe people are trading up and need new equipment. Or maybe the kids have now turned into teenagers and are encouraging their parents to buy up-to-date equipment.
• Late 50s – These could be empty nesters whose children have left home, maybe setting up home by themselves, with their parents buying things for their new homes. Or maybe the parents are treating themselves to new equipment as they have more disposable income
• 60s + – parents and grand-parents buying equipment for their children and grand-children. Or maybe we have very techie people who have just retired
• 70+ – we have a drop off here.
As you can see, we can discover a lot in the data by changing the number of bins and examining the data. The important part of this examination is trying to relate what you are seeing in the graphical representation of the data on the screen back to the type of business we are examining. A lot can be discovered, but you will have to spend some time looking for it.
ODM 11gR2 Extra Data Exploration Functionality
In ODM 11gR2 we now have an extra data exploration feature: we can produce histograms that are grouped by one of the other attributes. Typically this would be the Target or Class attribute, but you can also use it with the other attributes.
To use this extra feature, double click on the Explore Data node. The Group By drop down lets you select the attribute you want to group the other attributes by.
Using our example data, the target variable is AFFINITY_CARD. Select this in the drop down and run the Explore Data node again. When you look at the newly generated histograms you will now see that each bin has two colours. If you hover the mouse over each coloured part you will be able to get the number of records in each group. You can also use other attributes, such as CUST_GENDER, COUNTRY_NAME, etc. Only use attributes that it would make sense to analyse the data by.
This is a powerful new feature that allows you to gain a deeper level of insight into the data you are analysing.
Brendan Tierney