Oracle Technology Network

Oracle Magazine–March 1995

Posted on

In 1995 there was a change to the frequency of publication of Oracle Magazine. It is now published every two months, with six editions each year, as is still the case.

The headline articles in the March/April 1995 edition of Oracle Magazine included Integrating Unstructured Information, Minimizing Client/Server Network Traffic with Oracle Forms 4.0, Relational Objects, and how the Canadian Postal Service was using Oracle technology to deliver mail on time.

image

Other articles include:

Dynamic SQL Comes to PL/SQL: Introduces us to the DBMS_SQL package. A useful one-page quick reference gives the details of each of the 15 procedures and functions, including BIND_VARIABLE, CLOSE_CURSOR, COLUMN_VALUE, DEFINE_COLUMN, EXECUTE, EXECUTE_AND_FETCH, FETCH_ROWS, IS_OPEN, LAST_ERROR_POSITION, LAST_ROW_COUNT, LAST_ROW_ID, LAST_SQL_FUNCTION_CODE, OPEN_CURSOR, PARSE and VARIABLE_VALUE.
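As a sketch of how these DBMS_SQL calls fit together, the following anonymous block queries a hypothetical EMP table using a bind variable (the table and column names are made up for illustration):

```sql
DECLARE
   cur      INTEGER;
   ignore   INTEGER;
   v_ename  VARCHAR2(30);
BEGIN
   cur := DBMS_SQL.OPEN_CURSOR;
   DBMS_SQL.PARSE(cur, 'SELECT ename FROM emp WHERE deptno = :dno',
                  DBMS_SQL.NATIVE);
   DBMS_SQL.BIND_VARIABLE(cur, ':dno', 10);
   DBMS_SQL.DEFINE_COLUMN(cur, 1, v_ename, 30);
   ignore := DBMS_SQL.EXECUTE(cur);
   WHILE DBMS_SQL.FETCH_ROWS(cur) > 0 LOOP
      DBMS_SQL.COLUMN_VALUE(cur, 1, v_ename);
      DBMS_OUTPUT.PUT_LINE(v_ename);
   END LOOP;
   DBMS_SQL.CLOSE_CURSOR(cur);
END;
/
```

The open/parse/bind/define/execute/fetch/close sequence is the same pattern the quick reference describes.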

Personal Oracle 7 for Windows was released, and it now has a graphical administration tool that allows us to manage the database, including users, backup and recovery managers, along with export/import, SQL*Loader and Oracle Objects for OLE. The hardware requirements included 8MB of RAM and less than 30MB of hard-disk space. How things have changed! It was capable of running on Windows 3.1 and was Windows 95 ready.

Rollback segments in Oracle 7 allow us to effectively manage our transactions. The article gave the following recommendations:

  • Create a separate tablespace for rollback segments for administrative and monitoring purposes
  • Set the size of the tablespace according to the number and storage parameters of the rollback segments
  • Place your rollback segment tablespace on a separate disk device, if possible. Consider total I/O activities and spread I/O over available disk devices
  • Create one rollback segment for every four concurrent transactions
  • Don’t create more than 50 rollback segments
  • Create smaller rollback segments for OLTP applications. This increases the likelihood that rollback segments will be cached in the SGA according to the least recently used algorithm, resulting in a performance gain
  • Create larger rollback segments for decision support or batch processing applications
  • Set INITIAL to your average transaction size, and set OPTIMAL to four to eight times larger than INITIAL
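A sketch of what these recommendations might look like in practice (the tablespace name, file path and sizes are illustrative only):

```sql
-- A dedicated tablespace for rollback segments
CREATE TABLESPACE rbs_ts
   DATAFILE '/u02/oradata/rbs01.dbf' SIZE 100M;

-- An OLTP-sized rollback segment: INITIAL near the average
-- transaction size, OPTIMAL four to eight times INITIAL
CREATE ROLLBACK SEGMENT rbs01
   TABLESPACE rbs_ts
   STORAGE (INITIAL 50K NEXT 50K OPTIMAL 250K MINEXTENTS 2);

ALTER ROLLBACK SEGMENT rbs01 ONLINE;
```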

Oracle Magazine goes digital and interactive with a CD-ROM containing every issue published in the previous two years, along with a map of Oracle offices worldwide.

To view the cover page and the table of contents click on the above image or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions.

OTN Workshop Days in Dublin 17-20 April

Posted on

Oracle in Ireland have arranged a number of FREE Oracle Technology Network Hands on Workshops.

17th April : Database Firewall

18th April : Oracle Real Application Testing

19th April : Database 11g R2 New Features

20th April : Business Integration using Oracle SOA Suite 11g

All the workshops take place in the Oracle offices in East Point, Dublin.

To register for these events, visit:

http://www.oracle.com/us/dm/34862-splashpage-1438215.html

Oracle Magazine–Fall 1993

Posted on

The headline articles of the Fall 1993 edition of Oracle Magazine included topics on Multimedia Servers, Free Space Defragmentation and Geographic Information Systems.

image

Apart from the headline articles, most of the other articles were about product announcements/updates and about how certain types of companies were using IT.

The main technical article, by Kevin Loney, was on reclaiming free space in an Oracle 6 database using defragmentation. This was a popular topic for Oracle versions 6 and 7, when disk space was expensive. It is less of an issue today.

There was an article on the Digital Highway and on Multimedia Servers. These were early indicators of Larry Ellison's investment in Video On-Demand servers.

The first maintenance release of Oracle 7 database was announced (Oracle 7.1), with enhancements centred around:

  • Parallel Operations : performance increases by splitting query execution, data loading and index creation tasks and executing them concurrently on multiple CPUs. Permits multiple sessions to run SQL*Loader using the fast bulk load, i.e. the direct path
  • Application Development : Users can now embed PL/SQL functions in SQL statements and reference those functions in SQL expressions as if they were built-in.
  • Administration : Tablespaces can be placed in read-only mode. Provides a parallel recovery mechanism.
  • Standards Compliance : Oracle 7.1 is compliant with the SQL92 Entry Level standard. The ORDER BY clause can now reference SELECT list items named with a column alias.
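The application development enhancement above can be sketched as follows; the function and table names are invented for illustration:

```sql
-- A user-defined PL/SQL function...
CREATE OR REPLACE FUNCTION net_pay (p_salary IN NUMBER) RETURN NUMBER IS
BEGIN
   RETURN p_salary * 0.8;   -- simplistic 20% deduction
END net_pay;
/

-- ...referenced in a SQL expression as if it were built-in.
-- The ORDER BY also references a column alias, as allowed
-- under SQL92 Entry Level.
SELECT ename, net_pay(sal) AS take_home
FROM   emp
ORDER  BY take_home;
```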

Some of the products discussed in articles included Pro*C version 2.0, SQL*Module version 1.0, OracleWare, Oracle Graphics version 2.0, SQL*Net version 2 on MVS, SQL*Connect to SQL/400 and Oracle Card 2.0.

Apart from these product-related articles, some others of interest included an advertisement for ERwin, which was a product owned at that time by Logic Works.

There was a short article on the IOUG 12th Annual User Group Conference in September.

To view the cover page and the table of contents click on the above image or click here.

My Oracle Magazine Collection can be found here.

My previous posts on Oracle Magazine
  – Winter 1993
  – Fall 1992

Rewards received in the Post today

Posted on

Today I arrived home from work to find that I had received 2 parcels.

In the first parcel was

image

Thank you Oracle ACE Programme and OTN.

In the second parcel was an IOUG Select Contributors T-shirt. I received this because I had an article in the Q1 2012 edition of IOUG Select, on Oracle Data Miner.  Thank you IOUG.

Oracle Magazine–Winter 1993

Posted on

The lead articles in the first Oracle Magazine of 1993 (winter) were about how the Oracle database was being used in the Toronto Stock Exchange and in the Municipio de Panama of Panama City.

image

Other articles of interest included

  • Cary Millsap on the Optimal Flexible Architecture. Was this Cary's first article in Oracle Magazine? OFA provided a set of guidelines on how to install Oracle so that performance is not affected as your database environment grows. Cary gives three basic rules for creating an OFA-compliant Oracle database (the full set is in the article).
    1. Establish an orderly operating system directory structure in which any database file can be stored on any disk resource (applicable to operating systems with hierarchical directory structures only).
    2. Separate groups of segments with different behavior into different tablespaces.
    3. Maximize database reliability and performance by separating database components across disk resources.
  • There were two articles that looked at OODBMSs. Oracle made their first purchase of an OODBMS company in 1994 and were aiming to release their first OODBMS product in 1995. This never happened, and they reverted to integrating some OODBMS features into the database instead.
  • Oracle Glue is released. This product has gone through a number of name changes since. What is it called now?
  • There were a couple of articles discussing using UNIX or Alpha server machines, when VLDBs were considered extremely large at 32GB. I remember there was one bank that had the largest database in Ireland in 1993, and it was 4GB. I wonder what it is now.
  • Business Objects announced the release of their DOS and Windows end-user data access tool for relational databases.
  • There were a number of adverts from companies providing tools and support for converting your Oracle Forms version 2 applications to Forms 3.

 

Click on the image above to view the PDF of the table of contents page.

My Oracle Magazine Collection can be found here.

Oracle Magazine-Fall 1992

Posted on

I’ve been collecting Oracle Magazine for almost 20 years now. I have almost the entire collection, but I’m still missing some of the editions. Some people have donated editions I was missing, but I’m still missing some. Can you help me? Check out my Oracle Magazine Collection.

Every 2 to 3 weeks I intend to write a blog post on each of the Oracle Magazines that I have, starting with the earliest edition that is from Fall 1992.

image

The main theme of the Fall 1992 edition was the new Oracle 7 database. I didn’t get to use an Oracle 7 database until mid-1994. There are a few customer case studies of Oracle 7 implementations. In the article Migrating to Oracle 7, they list some of the new features. The following is an extract from the new features section.

Standard Oracle 7 provides the functionality and performance to tackle most mission critical applications:

– Multithreaded server
– Shared SQL
– Cost Based optimiser
– Row level locking
– Declarative integrity
– Role based security
– 100% ANSI/ISO standard SQL
– Enhanced national language support

The procedural option for Oracle 7 provides additional capabilities that enhance the database server in the application environment:

– Stored Procedures and Functions
– Procedure packages
– Database Triggers
– Lock manager package
– Database alerts

The distributed option for Oracle 7 makes a physically distributed database appear as a single, logical database. Features of this option include:

– Distributed updates
– TP monitor (XA) interface
– Transparent two-phase commit
– Remote procedure calls
– Table replication (snapshots)
– Oracle Mail Interface

Another article that stands out is by Richard Barker (do you remember him?), who was responsible for the CASE*Method and Oracle’s CASE tools.

There were several articles on the new Oracle Forms 3 and Oracle Menu 5, talking about some of the new features like Lists of Values (LOVs), the pop-up field editor and pop-up pages.

There were also the first articles on using Oracle in a Microsoft Windows environment. Oh, how I remember the frequent blue screens when developing and compiling my forms, and in particular my reports, with those early releases on Windows.

 

The editorial staff of Oracle Magazine have kindly given me permission to make a PDF of the front cover and the table of contents available for each edition. To get this PDF click on the above image or follow this link to see what Oracle Magazine used to look like 20 years ago.

My next blog post on Oracle Magazine, will look at the Winter 1993 edition.

If you have any of the editions that I’m missing from my collection and you would like to donate them, then drop me an email and we can arrange delivery. You won’t see any of them on eBay, I promise.

New additions to my Oracle Magazine Collection

Posted on

I put out a call a few months ago looking for help in completing my Oracle Magazine collection. I also had a letter in the Nov/Dec 2011 edition of Oracle Magazine looking for help.

The following people contacted me in December offering to send me some of my missing editions.

Lisa Dobson – Oracle DBA at Durham University and Vice President of UKOUG
Debra Lilley – President of UKOUG
Christian Antognini – Trivadis AG, Switzerland
Dan Vlamis – Vlamis Software Solutions, MO, USA

Many thanks for your donations.

It seems that people don’t keep their Oracle Magazines!

I’m now just missing a very small number of editions since 1992.  Can anyone else help ?

Does anyone have any Oracle Magazines from before 1992 ?

The following table gives the current collection (printed editions). The grey boxes are the editions that I’m still missing. Keep an eye on my Oracle Magazine page for updates.

oracle mag3

ODM 11gR2–Real-time scoring of data

Posted on

In my previous posts I gave sample code showing how you can use your ODM model to score new data.

Applying an ODM Model to new data in Oracle – Part 2

Applying an ODM Model to new data in Oracle – Part 1

The examples given in those previous posts were based on the new data being in a table.

In some scenarios you may not have the data you want to score in a table. For example, you may want to score data as it is being recorded and before it gets committed to the database.

The format of the functions to use is:

prediction(ODM_MODEL_NAME USING <attribute list>)

prediction_probability(ODM_MODEL_NAME, target_value USING <attribute list>)

So we can list the model attributes we want to use, instead of using USING * as we did in the previous blog posts.

Using the same sample data that I used in my previous posts the command would be:

Select prediction(clas_decision_tree
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education,
1 as household_size,
2 as yrs_residence,
1 as y_box_games) as scored_value
from dual;

SCORED_VALUE
------------
           0

Select prediction_probability(clas_decision_tree, 0
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education,
1 as household_size,
2 as yrs_residence,
1 as y_box_games) as probability_value
from dual;

PROBABILITY_VALUE
-----------------
                1

So we get the same result as we got in our previous examples.

Depending on what data we have gathered, we may or may not have all the values for each of the attributes used in the model. In this case we can submit a subset of the values to the function and still get a result.

Select prediction(clas_decision_tree
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education) as scored_value2
from dual;

SCORED_VALUE2
-------------
            0

Select prediction_probability(clas_decision_tree, 0
USING
20 as age,
'NeverM' as cust_marital_status,
'HS-grad' as education) as probability_value2
from dual;

PROBABILITY_VALUE2
------------------
                 1

Again we get the same results.

My first set of Oracle Products

Posted on

I started working with Oracle back in 1993, and my first project involved working with Oracle 5, Forms 2.3 and, for reports, RPT.

The Oracle Database and tools were very simple back then, but there was lots of “features” to work around.

Check out this video for a short demo of Oracle 5 and Forms 2.

I’m an Oracle ACE

Posted on Updated on

At 5:20pm today (Friday 2nd December), I received an email from the Oracle ACE program.  I had been nominated for the award of Oracle ACE.

“You have been chosen based on your significant contribution and activity in the Oracle technical community.  Like your fellow Oracle ACEs, you have demonstrated a proficiency in Oracle technology as well as a willingness to share your knowledge and experiences with the community.”

I am so honoured, considering the experts from around the world that are members of the Oracle ACE program.

The Oracle ACE Award is issued by Oracle Corporation, and the award is made to people who are known for their strong credentials in the Oracle community as enthusiasts and advocates, and for their technical knowledge.

ODM–PL/SQL API for Exporting & Importing Models

Posted on

In a previous blog post I talked about how you can take a copy of a workflow developed in Oracle Data Miner and load it into a new schema.
When your data mining project gets to a mature stage and you need to productionalise the data mining process and model updates, you will need to use a different set of tools.

As you gather more data and cases, you will be updating/refreshing your models to reflect this new data. The newly updated data mining model needs to be moved from the development/test environment to the production environment. As with all things in IT, we would like to automate this updating of the model in production.
There are a number of database features and packages that we can use to automate the update and it involves the setting up of some scripts on the development/test database and also on the production database.

These steps include:

  • Creation of a directory on the development/test database
  • Exporting of the updated Data Mining model
  • Copying of the exported Data Mining model to the production server
  • Removing the existing Data Mining model from production
  • Importing of the new Data Mining model
  • Renaming of the imported model to the standard name

The DBMS_DATA_MINING PL/SQL package has two functions that allow us to export a model and to import a model. These functions are an API to Oracle Data Pump. The function to export a model is DBMS_DATA_MINING.EXPORT_MODEL and the function to import a model is DBMS_DATA_MINING.IMPORT_MODEL. The parameters to these functions are what you would expect if you were to use Data Pump directly, but they have been tailored for data mining models.

Let’s start by listing the models that we have in our development/test schema:

SQL> connect dmuser2/dmuser2
Connected.
SQL> SELECT model_name FROM user_mining_models;

MODEL_NAME
------------------------------
CLAS_DT_1_6
CLAS_SVM_1_6
CLAS_NB_1_6
CLAS_GLM_1_6

Create/define the directory on the server where the models will be exported to.

CREATE OR REPLACE DIRECTORY DataMiningDir AS 'c:\app\Data_Mining_Exports';

The schema you are using will need to have the CREATE ANY DIRECTORY privilege.

Now we can export our model. In this example we are going to export the Decision Tree model (CLAS_DT_1_6).

DBMS_DATA_MINING.EXPORT_MODEL function
The function has the following structure

DBMS_DATA_MINING.EXPORT_MODEL (
     filename IN VARCHAR2,
     directory IN VARCHAR2,
     model_filter IN VARCHAR2 DEFAULT NULL,
     filesize IN VARCHAR2 DEFAULT NULL,
     operation IN VARCHAR2 DEFAULT NULL,
     remote_link IN VARCHAR2 DEFAULT NULL,
     jobname IN VARCHAR2 DEFAULT NULL);

If we wanted to export all the models into a file called Exported_DM_Models, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_DM_Models', 'DataMiningDir');

If we just wanted to export our Decision Tree model to file Exported_CLASS_DT_Model, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name IN (''CLAS_DT_1_6'')');

DBMS_DATA_MINING.DROP_MODEL function
Before you can load the newly updated data mining model into your production database, we need to drop the existing model. Before we do this, we need to ensure that the model is not in use, so it would be advisable to schedule the dropping of the model during a quiet time, such as before or after the nightly backups/processes.

DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE', TRUE);

DBMS_DATA_MINING.IMPORT_MODEL function
Warning : When importing the data mining model, you need to import into a tablespace that has the same name as the tablespace in the development/test database.  If the USERS tablespace is used in the development/test database, then the model will be imported into the USERS tablespace in the production database.

Hint : Create a DATAMINING tablespace in your development/test and production databases. This tablespace can be used solely for data mining purposes.

To import the decision tree model we exported previously, we would run

DBMS_DATA_MINING.IMPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name = ''CLAS_DT_1_6''', 'IMPORT', null, null, 'dmuser2:dmuser3');

We now have the new updated data mining model loaded into the production database.

DBMS_DATA_MINING.RENAME_MODEL function
The final step before we can start using the new updated model in our production database is to rename the imported model to the standard name that is being used in the production database.

DBMS_DATA_MINING.RENAME_MODEL('CLAS_DT_1_6', 'CLAS_DECISION_TREE');

Scheduling of these steps
We can wrap most of this up into stored procedures and schedule it to run on a semi-regular basis using the DBMS_JOB package. The following example schedules a procedure that controls the importing, dropping and renaming of the models.

DBMS_JOB.SUBMIT(jobnum, 'import_new_data_mining_model;', trunc(sysdate), 'add_months(trunc(sysdate), 1)');

This schedules the running of the procedure that imports the new data mining models: it runs immediately and then every month.

ODM 11.2–Data Mining PL/SQL Packages

Posted on

The Oracle 11.2 database contains three PL/SQL packages that allow you to perform all (well, almost all) of your data mining functions.

So instead of using the Oracle Data Miner tool, you can write some PL/SQL code that will allow you to do the same things.

Before you can start using these PL/SQL packages you need to ensure that the schema that you are going to use has been setup with the following:

  • Create a schema or use an existing one
  • Grant the schema all the data mining privileges: see my earlier posting on how to setup an Oracle schema for data mining – Click here and YouTube video
  • Grant all necessary privileges to the data that you will be using for data mining

The first PL/SQL package that you will use is DBMS_DATA_MINING_TRANSFORM. This package allows you to transform the data to make it suitable for data mining. There are a number of functions in this package that allow you to transform the data, but depending on the data you may need to write your own code to perform the transformations. When you apply your model to the test or apply data sets, ODM will automatically take the transformation functions defined using this package and apply them to the new data sets.
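As a sketch of how this package is used (the table, view and column names are assumptions based on the Oracle sample data sets), the following creates min-max normalization definitions for the numeric attributes and exposes the transformed data through a view:

```sql
BEGIN
   -- Table to hold the normalization definitions
   DBMS_DATA_MINING_TRANSFORM.CREATE_NORM_LIN('norm_tbl');

   -- Compute min-max normalization for the numeric columns,
   -- excluding the case identifier
   DBMS_DATA_MINING_TRANSFORM.INSERT_NORM_LIN_MINMAX(
      norm_table_name => 'norm_tbl',
      data_table_name => 'mining_data_build_v',
      exclude_list    => DBMS_DATA_MINING_TRANSFORM.COLUMN_LIST('cust_id'));

   -- Apply the definitions by creating a view over the source data
   DBMS_DATA_MINING_TRANSFORM.XFORM_NORM_LIN(
      norm_table_name => 'norm_tbl',
      data_table_name => 'mining_data_build_v',
      xform_view_name => 'mining_data_norm_v');
END;
/
```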

The second PL/SQL package is DBMS_DATA_MINING. This is the main data mining PL/SQL package. It contains functions to allow you to:

  • To create a Model
  • Describe the Model
  • Exporting and importing of Models
  • Computing costs and text metrics for classification Models
  • Applying the Model to new data
  • Administration of Models, like dropping, renaming, etc

The next (and last) PL/SQL package is DBMS_PREDICTIVE_ANALYTICS. The routines included in this package allow you to prepare data, build a model, score a model and return the results of model scoring. The routines include EXPLAIN, which ranks attributes in order of influence in explaining a target column; PREDICT, which predicts the value of a target attribute based on values in the input data; and PROFILE, which generates rules that describe the cases from the input data.
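A sketch of the EXPLAIN routine (the table and column names are assumptions based on the Oracle sample data sets):

```sql
BEGIN
   -- Rank the attributes by how well they explain AFFINITY_CARD
   DBMS_PREDICTIVE_ANALYTICS.EXPLAIN(
      data_table_name     => 'mining_data_build_v',
      explain_column_name => 'affinity_card',
      result_table_name   => 'explain_results');
END;
/

-- The results land in a table we can query directly
SELECT attribute_name, explanatory_value, rank
FROM   explain_results
ORDER  BY rank;
```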

Over the coming weeks I will have separate blog posts on each of these PL/SQL packages. These will cover the functions that are part of each package and will include some examples of using the packages and functions.