
Transforming Outliers in Oracle Data Mining


In previous posts I’ve shown how to use the DBMS_DATA_MINING_TRANSFORM package to transform data in various ways, including normalization and handling missing data. In this post I’ll build on these to show how outliers can be handled.

The following example shows how you can identify outliers and transform them. In this example a Winsorizing transformation is performed, where the outlier values are replaced by the nearest value that is not an outlier.

The transformation process takes place in three stages. In the first stage a table is created to contain the outlier transformation data. The second stage calculates the outlier transformation data and stores it in the table created in stage 1. One of the parameters to the outlier procedure requires you to list the attributes you do not want the transformation applied to (this is instead of listing the attributes you do want it applied to). The third stage is to create a view (MINING_DATA_V_2) that contains the data set with the outlier transformation rules applied. The input data set to this stage can be the output from a previous transformation process (e.g. MINING_DATA_V).

BEGIN
   -- Clean-up : Drop the previously created tables
   BEGIN
      execute immediate 'drop table TRANSFORM_OUTLIER';
   EXCEPTION
      WHEN others THEN
         null;
   END;

   -- Stage 1 : Create the table for the transformations
   -- Perform outlier treatment for: AGE and YRS_RESIDENCE
   --
   DBMS_DATA_MINING_TRANSFORM.CREATE_CLIP (
      clip_table_name => 'TRANSFORM_OUTLIER');

   -- Stage 2 : Calculate the Winsorizing transformation values
   --   Exclude the attributes you do not want transformed
   DBMS_DATA_MINING_TRANSFORM.INSERT_CLIP_WINSOR_TAIL (
      clip_table_name => 'TRANSFORM_OUTLIER',
      data_table_name => 'MINING_DATA_V',
      tail_frac       => 0.025,
      exclude_list    => DBMS_DATA_MINING_TRANSFORM.COLUMN_LIST (
                          'affinity_card',
                          'bookkeeping_application',
                          'bulk_pack_diskettes',
                          'cust_id',
                          'flat_panel_monitor',
                          'home_theater_package',
                          'os_doc_set_kanji',
                          'printer_supplies',
                          'y_box_games'));

   -- Stage 3 : Create the view with the transformed data
   DBMS_DATA_MINING_TRANSFORM.XFORM_CLIP(
      clip_table_name => 'TRANSFORM_OUTLIER',
      data_table_name => 'MINING_DATA_V',
      xform_view_name => 'MINING_DATA_V_2');
END;

The view MINING_DATA_V_2 will now contain the data from the original data set, transformed to process missing data for numeric and categorical attributes (from the previous blog post), and with outlier treatment applied to the AGE and YRS_RESIDENCE attributes.
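As a quick sanity check (a sketch, using the attributes named in the code comments above), you can compare the value ranges in the new view with the original data. After the Winsorizing transformation the extreme values should have been clipped:

-- Compare the AGE range before and after the outlier treatment
SELECT min(age), max(age) FROM mining_data_v;
SELECT min(age), max(age) FROM mining_data_v_2;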

 

 


Transforming Missing Data using Oracle Data Mining


In a previous post I showed how you can normalize data using the in-database machine learning features via the DBMS_DATA_MINING_TRANSFORM package. This same package can be used to perform many more data transformations with standardized routines. When it comes to missing data, where some case records have no value for an attribute, you have a number of options open to you. The first is to evaluate the degree of missing values for the attribute across the data set as a whole. If it is very high, you may want to remove that attribute from the data set. But when you have a small number or percentage of missing values you will want to find an appropriate or approximate replacement value, typically calculated using the mean (for numeric attributes) or the mode (for categorical attributes).
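For example, to evaluate the degree of missing values for an attribute you can run a simple count. The following sketch uses the MINING_DATA_BUILD_V data set from the example below, with CUST_MARITAL_STATUS as an illustrative attribute:

SELECT count(*) num_rows,
       count(*) - count(cust_marital_status) num_missing,
       round((count(*) - count(cust_marital_status)) * 100 / count(*), 2) pct_missing
FROM   mining_data_build_v;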

To build this up using the DBMS_DATA_MINING_TRANSFORM package, we follow a simple three stage process. The first stage creates a table that will contain the details of the transformations. The second stage defines and runs the transformation routines that calculate the replacement values and insert the necessary records into the table created in the first stage. These two stages need to be followed for both numerical and categorical attributes. In the third stage you create a new view that contains the data from the original table with the missing data rules generated in the second stage applied to it. The following example illustrates the first two stages for numerical and categorical attributes in the MINING_DATA_BUILD_V data set.

-- Transform missing data for numeric and categorical attributes
-- Stage 1 : Clean up the transformation tables from any previous run
BEGIN
   --
   -- Clean-up : Drop the previously created tables
   --
   BEGIN
      execute immediate 'drop table TRANSFORM_MISSING_NUMERIC';
   EXCEPTION
      WHEN others THEN
         null;
   END;

   BEGIN
      execute immediate 'drop table TRANSFORM_MISSING_CATEGORICAL';
   EXCEPTION
      WHEN others THEN
         null;
   END;

Now for stage 2, which defines the routines that calculate the replacement values for the numerical and categorical attributes. Note that this code continues, and runs inside, the PL/SQL block started above.

-- Stage 2 : Perform the transformations
--    Exclude any attributes you don't want transformed
--      e.g. the case id and the target attribute

   --
   -- Transform the numeric attributes
   --
   dbms_data_mining_transform.CREATE_MISS_NUM (
      miss_table_name => 'TRANSFORM_MISSING_NUMERIC');

   dbms_data_mining_transform.INSERT_MISS_NUM_MEAN (
    miss_table_name => 'TRANSFORM_MISSING_NUMERIC',
    data_table_name => 'MINING_DATA_BUILD_V',
    exclude_list    => DBMS_DATA_MINING_TRANSFORM.COLUMN_LIST (
                       'affinity_card',
                       'cust_id'));

   --
   -- Transform the categorical attributes
   --
   dbms_data_mining_transform.CREATE_MISS_CAT (
      miss_table_name => 'TRANSFORM_MISSING_CATEGORICAL');

   dbms_data_mining_transform.INSERT_MISS_CAT_MODE (
      miss_table_name => 'TRANSFORM_MISSING_CATEGORICAL',
      data_table_name => 'MINING_DATA_BUILD_V',
      exclude_list    => DBMS_DATA_MINING_TRANSFORM.COLUMN_LIST (
                         'affinity_card',
                         'cust_id'));
END;

When the above code completes, the two transformation tables, TRANSFORM_MISSING_NUMERIC and TRANSFORM_MISSING_CATEGORICAL, will exist in your schema.

Querying these two tables shows the attributes along with the value that will be used to replace a missing value. For example, the following illustrates the missing data transformations for the categorical data.

SELECT col, 
       val 
FROM transform_missing_categorical;

For the sample data set used in these examples we get the following.

COL                       VAL
------------------------- -------------------------
CUST_GENDER               M
CUST_MARITAL_STATUS       Married
COUNTRY_NAME              United States of America
CUST_INCOME_LEVEL         J: 190,000 - 249,999
EDUCATION                 HS-grad
OCCUPATION                Exec.
HOUSEHOLD_SIZE            3
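You can run the same query against the numeric transformation table, where the VAL column contains the calculated mean for each numeric attribute (the values returned will depend on your data set):

SELECT col, 
       val 
FROM transform_missing_numeric;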

For stage three you will need to create a new view (MINING_DATA_V). This combines the data from the original table with the missing data rules generated in the second stage. It is built in stages, with an initial view (MINING_DATA_MISS_V) created that merges the data source and the transformations for the missing numeric attributes. This view (MINING_DATA_MISS_V) then has the transformations for the missing categorical attributes applied, creating a new view called MINING_DATA_V that contains all the missing data transformations.

BEGIN
   -- xform input data to replace missing values
   -- The data source is MINING_DATA_BUILD_V
   -- The output is MINING_DATA_MISS_V

   DBMS_DATA_MINING_TRANSFORM.XFORM_MISS_NUM(
      miss_table_name => 'TRANSFORM_MISSING_NUMERIC',
      data_table_name => 'MINING_DATA_BUILD_V',
      xform_view_name => 'MINING_DATA_MISS_V');

   -- xform input data to replace missing values
   -- The data source is MINING_DATA_MISS_V
   -- The output is MINING_DATA_V
   DBMS_DATA_MINING_TRANSFORM.XFORM_MISS_CAT(
      miss_table_name => 'TRANSFORM_MISSING_CATEGORICAL',
      data_table_name => 'MINING_DATA_MISS_V',
      xform_view_name => 'MINING_DATA_V');
END;

You can now query the MINING_DATA_V view and see that the data displayed will not contain any null values for any of the attributes.
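For example, a quick check on one of the attributes listed earlier (assuming CUST_MARITAL_STATUS had missing values in the original data) should now return zero:

SELECT count(*) 
FROM   mining_data_v 
WHERE  cust_marital_status IS NULL;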

 

Connecting Go Lang to Oracle Database


It seems like more and more people are using Go, and with that comes the need to access a database or databases. This blog post will show you how to get connected to an Oracle Database and to perform some basic operations using Go.

The first thing you need is to have Go installed. There are a couple of options for you. The first is to download it from the Go website, or if you are an Oracle purist, they provide some repositories that can be installed using yum.

Next you need to install Oracle Instant Client.

[Image: Oracle Instant Client downloads page]

Unzip the contents of the downloaded file. Copy the extracted directory (and its contents) to your preferred location and then add this directory to the search PATH. Depending on your configuration and setup, you may need to configure some environment variables. Some people report having to create a ‘.pc’ file and having to change the symlinks for libraries. I didn’t have to do any of this.

The final preparation step, after installing Go and Oracle Instant Client, is to download the ‘goracle’ package. This package provides a Go database/sql driver for connecting to an Oracle Database using ODPI-C. To install the ‘goracle’ package run:

go get gopkg.in/goracle.v2

This takes a few seconds to run, and there are no display or progress updates while the command is running.

See below for the full Go code for connecting to an Oracle Database, executing a query and gathering some database information. To run this code, saved in a file named ‘ora_db.go’, use:

go run ora_db.go

I’ll now break this code down into steps.

Import the packages that will be used in your application. In this example I’m importing four packages. ‘fmt’ is the formatting package and allows us to write to standard output. The ‘time’ package allows us to capture and print dates, times and how long things take. The ‘database/sql’ package enables SQL execution, and the final package, ‘goracle’, provides the Oracle driver.

import (
    "fmt"
    "time"
    "database/sql"
    goracle "gopkg.in/goracle.v2"
)

Next we can define the values needed for connecting the Oracle Database. These include the username, password, the host string and the database name.

    username := "odm_user"
    password := "odm_user"
    host := "....."
    database := "...."

Now set up the database connection. This doesn’t actually create a connection; that is deferred until you run the first command against the database.

db, err := sql.Open("goracle", username+"/"+password+"@"+host+"/"+database)
if err != nil {
    fmt.Println("... DB Setup Failed") 
    fmt.Println(err)
    return
}
defer db.Close()

If an error is detected, the error message will be printed and the application will exit (return). The ‘defer db.Close()’ statement sets up the closing of the connection, but defers it until the end of the application. This allows you to keep related code together and avoids having to remember to add the close command at the end of your code.

Now force the connection to open using a Ping command.

if err = db.Ping(); err != nil {
    fmt.Println("Error connecting to the database: %s\n", err)
    return
}

Our database connection is now open!

The ‘goracle’ package allows us to get some metadata about the connection, such as details of the client and server configuration. Here we just gather the details of what version of the database we are connected to. The Oracle Database I’m using is 18c Extreme Edition hosted on Oracle Cloud.

var serverVersion goracle.VersionInfo 
serverVersion, err = goracle.ServerVersion(db)
if err != nil {
    fmt.Printf("Error getting Database Information: %s\n", err)
    return
}
fmt.Println("DB Version : ",serverVersion)

First we define the variable used to store the server details, with the data type specified in the ‘goracle’ package. Then we gather the server version details, check for an error and finally print out the details.

To execute a query, we define the query (dbQuery) and then use the connection (db) to run it (db.Query). The variable ‘rows’ points to the result set from the query. We then defer the closing of the result set; it needs to stay open for now, as we will parse it in the next code segment.

dbQuery := "select table_name from user_tables where table_name not like 'DM$%' and table_name not like 'ODMR$%'"
rows, err := db.Query(dbQuery)
if err != nil {
   fmt.Println(".....Error processing query")
   fmt.Println(err)
   return
}
defer rows.Close()

To parse the result set, we can use a for loop. Before the loop we define a variable to hold the value returned from the result set (tableName). The loop extracts each row returned and assigns the value to tableName, which is then printed.

var tableName string
for rows.Next() {
   rows.Scan(&tableName)
   fmt.Println(tableName)
}
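Note that rows.Scan returns an error, and the result set itself can encounter an error during iteration. A slightly more defensive version of this loop (a sketch) checks both:

var tableName string
for rows.Next() {
    // Check the error returned from Scan for each row
    if err := rows.Scan(&tableName); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(tableName)
}
// Check for any error encountered during the iteration itself
if err := rows.Err(); err != nil {
    fmt.Println(err)
}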

That’s it.

We have connected to Oracle Database, retrieved the version of the database, executed a query and processed the result set.

Here is the full code and the output from running it.

package main

import (
    "fmt"
    "time"
    "database/sql"
    goracle "gopkg.in/goracle.v2"
)

func main(){
    username := "odm_user"
    password := "odm_user"
    host := "....."
    database := "...."

    currentTime := time.Now()
    fmt.Println("Starting at : ", currentTime.Format("03:04:05:06 PM"))

    fmt.Println("... Setting up Database Connection") 
    db, err := sql.Open("goracle", username+"/"+password+"@"+host+"/"+database)
    if err != nil {
        fmt.Println("... DB Setup Failed") 
        fmt.Println(err)
        return
    }
    defer db.Close()

    fmt.Println("... Opening Database Connection") 
    if err = db.Ping(); err != nil {
        fmt.Println("Error connecting to the database: %s\n", err)
        return
    }
    fmt.Println("... Connected to Database")

    var serverVersion goracle.VersionInfo 
    serverVersion, err = goracle.ServerVersion(db)
    if err != nil {
        fmt.Printf("Error getting Database Information: %s\n", err)
        return
    }
    fmt.Println("DB Version : ",serverVersion)

    dbQuery := "select table_name from user_tables where table_name not like 'DM$%' and table_name not like 'ODMR$%'"
    rows, err := db.Query(dbQuery)
    if err != nil {
        fmt.Println(".....Error processing query")
        fmt.Println(err)
        return
    }
    defer rows.Close()

    fmt.Println("... Parsing query results") 
    var tableName string
    for rows.Next() {
        rows.Scan(&tableName)
        fmt.Println(tableName)
    }

    fmt.Println("... Closing connection") 
    finishTime := time.Now()
    fmt.Println("Finished at ", finishTime.Format("03:04:05:06 PM"))
}

[Image: output from running ora_db.go]

Time Series Forecasting in Oracle – Part 1


Time-series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. In this blog post I’ll introduce what time-series analysis is, the different types of time-series analysis and introduce how you can do this using SQL and PL/SQL in Oracle Database. I’ll have additional blog posts giving more detailed examples of Oracle functions and how they can be used for different time-series data problems.

Time-series forecasting is the use of a model to predict future values based on previously observed/historical values. It is a form of regression analysis with additions to facilitate trends, seasonal effects and various other combinations.


Time-series forecasting is not an exact science; rather it consists of a set of statistical tools and techniques that support human judgment and intuition, and it only forms part of a solution. It can be used to automate the monitoring and control of data flows, indicating trends, raising alerts, supporting rescheduling, etc. In most business scenarios it is used to predict some future customer demand and/or product or service needs.

Typical application areas of Time-series forecasting include:

  • Operations management: forecast of product sales; demand for services
  • Marketing: forecast of sales response to advertising, new promotions, etc.
  • Finance & Risk management: forecast returns from investments
  • Economics: forecast of major economic variables, e.g. GDP, population growth, unemployment rates, inflation; useful for monetary & fiscal policy; budgeting plans & decisions
  • Industrial Process Control: forecasts of the quality characteristics of a production process
  • Demography: forecast of population; of demographic events (deaths, births, migration); useful for policy planning

When working with time-series data we are looking for a pattern or trend in the data. What we want to achieve is to find a way to model this pattern/trend and to then project it onto our data and into the future.

Most time-series data sets will have one or more of the following components:

  • Seasonal: Regularly occurring, systematic variation in a time series according to the time of year.
  • Trend: The tendency of a variable to grow over time, either positively or negatively.
  • Cycle: Cyclical patterns in a time series which are generally irregular in depth and duration. Such cycles often correspond to periods of economic expansion or contraction. Also known as the business cycle.
  • Irregular: The unexplained variation in a time series.
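These components are typically assumed to combine either additively or multiplicatively. For example, in an additive decomposition the observed value at time t is modeled as:

Y(t) = T(t) + S(t) + C(t) + I(t)

where T is the trend, S the seasonal component, C the cycle and I the irregular component. A multiplicative decomposition uses the product of these terms instead.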

When approaching time-series problems you will use a combination of visualizations and time-series forecasting methods to examine the data and to build a suitable model. This is where the skills and experience of the data scientist become very important.

Oracle introduced an algorithm to support time-series analysis in Oracle 18c, called Exponential Smoothing. This algorithm allows for a number of different types of time-series data and patterns, and provides a wide range of statistical measures to support the analysis and predictions, in a similar way to Holt-Winters.


The first parameter for the Exponential Smoothing function is the name of the model to use. Oracle provides a comprehensive list of models and these are listed in the following table.

[Image: table of Exponential Smoothing models]
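To give a feel for how this algorithm is used, the following is a minimal sketch of building a time-series model with DBMS_DATA_MINING.CREATE_MODEL. The table MY_SALES_DATA and its columns SALE_DATE and SALES_AMOUNT are hypothetical names; the settings shown select the Holt-Winters (triple exponential smoothing) model on monthly data, forecasting 12 steps ahead.

CREATE TABLE demo_esm_settings (
   setting_name  VARCHAR2(30),
   setting_value VARCHAR2(128));

BEGIN
   -- Select the Exponential Smoothing algorithm and configure it
   INSERT INTO demo_esm_settings VALUES ('ALGO_NAME', 'ALGO_EXPONENTIAL_SMOOTHING');
   INSERT INTO demo_esm_settings VALUES ('EXSM_MODEL', 'EXSM_WINTERS');
   INSERT INTO demo_esm_settings VALUES ('EXSM_INTERVAL', 'EXSM_INTERVAL_MONTH');
   INSERT INTO demo_esm_settings VALUES ('EXSM_PREDICTION_STEP', '12');

   DBMS_DATA_MINING.CREATE_MODEL(
      model_name          => 'DEMO_TS_MODEL',
      mining_function     => 'TIME_SERIES',
      data_table_name     => 'MY_SALES_DATA',      -- hypothetical table name
      case_id_column_name => 'SALE_DATE',          -- the date/time column
      target_column_name  => 'SALES_AMOUNT',       -- the value to forecast
      settings_table_name => 'DEMO_ESM_SETTINGS');
END;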

Check out my other blog posts on performing time-series analysis using the Exponential Smoothing function in Oracle Database. These will give more detailed examples of how the Oracle time-series functions, using the Exponential Smoothing algorithm, can be used for different time-series data problems. I’ll also look at examples of the different configurations.

OCI – Making DBaaS Accessible using port 1521


When setting up a Database on Oracle Cloud Infrastructure (OCI) for the first time there are a few pre and post steps to complete before you can access the database using a JDBC type of connection, just like you have in SQL Developer, or using Python or other similar tools and/or languages.

1. Setup Virtual Cloud Network (VCN)

The first step, when starting off with OCI, is to create a Virtual Cloud Network.


Create a VCN, taking all the defaults, but change the radio button shown in the following image.

[Image: VCN creation options, showing the radio button to change]

That’s it. We will come back to this later.

2. Create the Oracle Database

To create the database select ‘Bare Metal, VM and Exadata’ from the menu.


Click on the ‘Launch DB System’ button.


Fill in the details of the Database you want to create and select from the various options from the drop-downs.


Fill in the details of the VCN you created in the previous step, and give the name of the DB and the Admin password.


When you have entered everything that is needed, the ‘Launch DB System’ button at the bottom of the page will be enabled. After clicking this button, the VM will be built and should be ready in a few minutes. When finished you should see something like this.

[Image: DB System details after provisioning]

3. SSH to the Database server

When the DB VM has been created you can now SSH to it. You will need to use the SSH key file specified when creating the DB VM. Connect as the opc operating system user, and from there sudo to the oracle user. For example:

ssh -i <ssh file> opc@<public IP address>

The public IP address can be found in the Database VM details.

[Image: Database VM details showing the public IP address]

[opc@tudublins1 ~]$ sudo su - oracle
[oracle@tudublins1 ~]$ . oraenv
ORACLE_SID = [cdb1] ? 
The Oracle base has been set to /u01/app/oracle
[oracle@tudublins1 ~]$ 
[oracle@tudublins1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Mar 13 11:28:05 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> alter session set container = pdb1;

Session altered.

SQL> create user demo_user identified by DEMO_user123##;

User created.

SQL> grant create session to demo_user;

Grant succeeded.

SQL>

4. Open port 1521

To be able to access this with a Basic connection in SQL Developer and most programming languages, we will need to open port 1521 to allow these tools and languages to connect to the database.

To do this go back to the Virtual Cloud Networks section from the menu.


Click into the VCN that you created earlier. You should see something like the following.

[Image: VCN details]

Click on the Security Lists menu option on the left-hand side.

From that screen, click on Default Security List, and then click on the ‘Edit All Rules’ button at the top of the next screen.

Add a new rule with the ‘Destination Port Range’ set to 1521.


That’s it.

5. Connect to the Database from anywhere

Now you can connect to the OCI Database using a basic SQL Developer Connection.

[Image: SQL Developer Basic connection settings]
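If you prefer a programmatic connection, the same details map onto an easy-connect string of the form host:port/service. For example, using the ‘goracle’ driver covered in an earlier post (the user is the demo_user created above; the IP address and service name are placeholders for your own values):

package main

import (
    "database/sql"
    "fmt"

    _ "gopkg.in/goracle.v2" // registers the "goracle" driver
)

func main() {
    // <public IP address> and <service name> are placeholders
    db, err := sql.Open("goracle", "demo_user/DEMO_user123##@<public IP address>:1521/<service name>")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer db.Close()

    if err = db.Ping(); err != nil {
        fmt.Println("Error connecting to the database:", err)
        return
    }
    fmt.Println("Connected")
}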

Creating an Autonomous Data Warehouse Cloud Service


The following outlines the steps to create an Autonomous Data Warehouse Cloud Service.

Log into your Oracle Cloud account and then follow these steps.

1. Select Autonomous Data Warehouse Cloud service from the side menu


2. Select Create Autonomous Data Warehouse button


3. Enter the Compartment details (Display Name, Database Name, CPU Core Count & Storage)


4. Enter a Password for Administrator, and then click ‘Create Autonomous Data Warehouse’


5. Wait until the ADWC is provisioned

The status shown for the service will change from provisioning to available as the ADWC is created, and you should receive an email telling you that the service is ready.

6. Click on the name of the ADWC you created


7. Click on the Service Console button


8. Then click on Administration and then Download a Connection Wallet

Specify the password


You can now use this to connect to the ADWC using SQL Developer.

All done.

 

Oracle Code Presentation March 2018


Last week I was presenting at Oracle Code in New York. I’ve presented at a few Oracle Code events over the past 12 months and it is always interesting to meet and talk with developers from around the world.

The title of my presentation this time was ‘SQL: The one language to rule all your data’.

I’ve given this presentation a few times at different events (POUG, OOW, Oracle Code). I had taken the contents of this presentation for granted, assuming that most people know these things. But the opposite is true. Well, a lot of people do know these things, but many more do not seem to.

For example, at last week’s Oracle Code event I had about 100 people in the room. I started out by asking the attendees ‘How many of you write SQL every day?’. About 90% put up their hands. A few minutes later, after I started talking about the various statistical functions in the database, I asked them to count how many statistical functions they had used. I then asked them to raise their hands if they had used over five statistical functions. About eight people put up their hands. Then I asked how many people had used over ten functions. To my surprise, only one (yes, one) person put up their hand.

The first half of the presentation talks about statistical, analytical and machine learning in the database.

The second half covers some (not all) of the various data types and locations of data that can be accessed from the database.

The presentation then concludes by returning to its title: SQL being the one language to rule all your data.

Based on last week’s experience, it looks like a lot more people need to hear it!

Hopefully I’ll get the chance to share this presentation with other events and Oracle User Group conferences.

Two of the key takeaway messages are:

  • Google makes us stupid
  • We need to RTFM more often

Here is a link to the slides on SlideShare.

And I recorded a short video about the presentation with Bob from OTN/ODC.