Oracle ACEs at OUG Ireland 2015
The annual OUG Ireland Conference will be on Thursday 19th March. This year the conference will be held in the Croke Park conference centre, which is only a short taxi ride from Dublin Airport and Dublin city centre.
If you are planning a hotel stay for the conference I would recommend staying in a hotel in the city centre and getting a taxi to/from the conference venue.
We have a large number of Oracle ACEs presenting at the conference. The following table lists the ACEs, their twitter handles and their websites.
Make sure you check out the full agenda for the conference by clicking on the following image. Plus there is a full-day session on Friday 20th March with Maria Colgan on the Oracle In-Memory option.
People from Ireland Presenting at OOW14
Oracle Open World is coming up in a few days’ time. This is a huge event that also incorporates JavaOne and various other smaller conferences for specific product areas and for partners.
I will be presenting at Oracle Open World this year and I’ll also be taking part in a number of other sessions/events, including the Oracle ACE Directors briefing. Check out my blog post that lists these sessions/events.
In addition to myself, there are a few other people from Ireland presenting at OOW14. Their sessions (including mine, for completeness) are listed below. If you are attending OOW14 then do try to drop along to these sessions.
Sunday 28th September, 9:00-9:45
Brendan Tierney & Roel Hartman
Moscone South, Room 304
What are they Thinking? With Oracle Application Express and Oracle Data Miner

Sunday 28th September, 14:30-15:15
Debra Lilley
Moscone South, Room 304
12 Looks at Oracle Database 12c: EOUC Short Talks, Part 1 [UGF8949]

Sunday 28th September, 15:30-16:15
Debra Lilley
Moscone South, Room 304
12 Looks at Oracle Database 12c: EOUC Short Talks, Part 2 [UGF9221]

Tuesday 30th September, 17:00-17:45
Mina Sagha Zadesh (Oracle Ireland)
InterContinental, Grand Ballroom A
Unique Advantages of Oracle Solaris for Oracle Database Systems [CON4259]

Wednesday 1st October, 10:15-11:00
Simon Holt (ESB)
Marriott Marquis, Golden Gate C1/C2
An Oracle SuperCluster Engineered System for Oracle Utilities Network Management System [CON5388]

Wednesday 1st October
Debra Lilley
Moscone West, Room 3018
Deliver Business Innovation while Reducing Upgrades’ Risk [CON8534]

Wednesday 1st October, 11:30-12:45
Kevin Callanan (AIB)
Moscone South, Room 301
DBA’s New Best Friend for Mistake-Free Administration: Oracle Real Application Testing [CON8247]
I’ll be at these sessions to support my fellow Irish. I hope to see you there too 🙂
OUG Ireland Super SIG Day : Sept. 24 2014
The next set of Ireland Oracle User Group SIG meetings will be on Wednesday 24th September. This will be a super SIG event with the following SIGs being run in parallel and all in one venue (Jurys Inn, Customs House, Dublin).
OUG Ireland BI & EPM SIG Meeting
OUG Ireland Technology SIG Meeting
Click on the links above for each SIG to get the details of the agenda. There is a great line-up of presentations for each SIG, with some true experts presenting.
From what I’ve been hearing, there have been a lot of registrations already for these events. So if you are interested in attending then sign up now to reserve your place. Click on the SIG you want to attend and there will be a registration button on the webpage, just below the agenda.
Unfortunately I won’t be at these events 😦 The 24th September is the day that I travel out to Oracle Open World. Yes, I know OOW doesn’t start until the 29th, but I will be attending the Oracle ACE Director briefing at Oracle HQ in Redwood Shores on the 25th and 26th.
Hopefully there will be lots of twitter action during these SIGs. Don’t forget to use the OUG Ireland twitter hash tag #oug_ire, so that I and others can follow what is happening during the day.
OUG Ireland
The annual OUG Ireland Conference (or special event) will be on Tuesday 11th March. Actually, this year there are sessions spread over 2 days, for the first time ever in the 10+ year history of OUG Ireland. In addition to the 2 days of sessions, there are 7 streams of presentations on the Tuesday, and RAC Attack comes to Ireland for the first time.
The main conference event is on Tuesday 11th March in the DCC in Dublin. Things kick off at 9:20 with Debra Lilley welcoming everyone to the event. Then Jon Paul from Oracle in Ireland will give the opening keynote. After that we break into the 7 streams, with lots of local case studies and some well-known speakers from around the world, including many Oracle ACEs and ACE Directors (my presentation is at 12:15).
The day ends with 2 keynote presentations: one focused on the Apps streams (Nadia Bendjedou, Oracle) and a separate keynote for the Tech streams (by Tom Kyte).
Throughout the day there will be a RAC Attack event. Look out for their tables in the exhibition hall. Again there will be some well-known experts from around the world on hand to help you get RAC set up and running on your own laptop, answer your questions and engage in lots of discussions about all things Oracle. The RAC Attack Ninjas will include Osama Mustafa, Philippe Fierens, Marcin Przepiorowski, Martin Bach and Tim Hall. Some of these are giving presentations throughout the day, so when they are not presenting you will find them at the RAC Attack table. Even if you are not going to install RAC, drop by and have a chat with them.
On Wednesday 12th March the OUG Ireland Conference ventures into a second day of sessions: a full day of topics by Tom Kyte. This is certainly a day not to be missed. As they say, places are limited, so book your place today.
Click on the following image to view the agenda for the 2 days and to book your place on the 11th and 12th March.
OUG Ireland BI & Tech SIGs June 2013
On 11th and 12th June we will be having our next SIG meetings for BI and Tech. The BI SIG will be on 11th June in the Oracle offices in East Point. We then move to the Conrad Hotel on 12th June for the Tech SIG. Here are the agendas for the 2 days.
BI SIG
Tech SIG
These events are open to everyone: they are free for members, with a small fee for non-members.
To register for these events go to the following links:
Oracle Magazine–March/April 1999
The headline articles for the March/April 1999 edition of Oracle Magazine were on the evolving world of the DBA. With so much new technology available in the database, the role of the DBA is moving from a back office type role to one having a significant strategic influence in the organisation.
Other articles included:
- Oracle releases a web based version of their Oracle Strategic Procurement application that includes three key parts: Strategic Sourcing, Internet Procurement and Process Automation.
- Sun and Oracle announce a strategic agreement that allows both companies to enhance their product offerings by exchanging key technologies. Oracle will use the core of the Sun Solaris operating environment to deliver the industry’s first database server appliances.
- Oracle Data Mart Suite version 2.5 was released. It includes Oracle Data Mart Builder, Oracle Data Mart Designer, Oracle 8 Enterprise Edition, Oracle Discoverer, Oracle Application Server, and Oracle Reports and Reports Server.
- New integration between Oracle Reports release 6.0 and Oracle Express Server release 6.2 to give users the ability to distribute high quality reports of information held in a multi-dimensional database across the enterprise.
- The need for the DBA to know and understand the V$ views has been increasing through the later releases of 7.3 and 8i. These can be used for a variety of purposes, including understanding locked users, system resources, licensing and parameter settings (the first sketch after this list gives a flavour).
- One thing that all DBAs need to plan for is a database recovery. Planning it is one thing, but practicing it is another. A typical recovery drill will include: choosing a data file, creating a backup, taking the damaged tablespace offline, restoring the damaged data file, recovering the tablespace, bringing the tablespace back online, and testing it (see the second sketch after this list).
- Avoiding trigger errors, including mutating and constraining table errors (the third sketch after this list shows the classic mutating-table case).
- There is an article by Bryan Laplante on using Histograms to Optimize Data Mart Performance.
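To give a flavour of the V$ views piece, here are a couple of one-liners of the kind the article had in mind; a minimal sketch, and these views still exist in current releases:

```sql
-- Parameter settings from the V$ views.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('sessions', 'processes');

-- Licensing/session usage.
SELECT sessions_max, sessions_current, sessions_highwater
FROM   v$license;
```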
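And here is a sketch of the recovery drill described above, assuming the damaged data file belongs to a tablespace called USERS and the database runs in ARCHIVELOG mode:

```sql
-- Take the damaged tablespace offline.
ALTER TABLESPACE users OFFLINE IMMEDIATE;

-- Restore the damaged data file from your backup at the OS level, then
-- apply the archived redo (RECOVER is a SQL*Plus command):
RECOVER TABLESPACE users;

-- Bring the tablespace back online, then test it with a query.
ALTER TABLESPACE users ONLINE;
```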
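Finally, the classic mutating-table case from the trigger article; the table and trigger names here are made up for illustration:

```sql
-- A row-level trigger on EMP that queries EMP: updating a row raises
-- ORA-04091 (table EMP is mutating, trigger/function may not see it).
CREATE OR REPLACE TRIGGER emp_sal_check
  BEFORE UPDATE OF sal ON emp
  FOR EACH ROW
DECLARE
  v_avg NUMBER;
BEGIN
  SELECT AVG(sal) INTO v_avg FROM emp;  -- the offending query
  IF :NEW.sal > v_avg * 2 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Salary too far above the average');
  END IF;
END;
/
```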
To view the cover page and the table of contents click on the image at the top of this post or click here.
My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.
Clustering in Oracle Data Miner–Part 3
This is the third part of a five (5) part blog post on building and using Clustering in Oracle Data Miner. The following outlines the contents of each post in this series on Clustering.
- In the first part we looked at what clustering features exist in ODM and how to set up the data that we will be using in the examples.
- The second part focused on how to build Clusters in ODM.
- The third post (this one) will focus on examining the clusters produced by ODM and how to use the Clusters to apply to new data using ODM.
- The fourth post will look at how you can build and evaluate a Clustering model using the ODM SQL and PL/SQL functions.
- The fifth and final post will look at how you can apply your Clustering model to new data using the ODM SQL and PL/SQL functions.
In my previous posts on Clustering in ODM we have setup our data, we have explored it, we have taken a sample of the data and used this sample as input to the Cluster Build node. Oracle Data Miner has two clustering algorithms and our Cluster Build node created a clustering model for each.
In this post we will look at the next steps. First we will examine the clustering models that ODM produced. Then we will look at how we can use one of these clustering models to apply to and label new data.
Step 1 – View the Cluster Models
To view the cluster models we need to right click the Cluster Build node and select View Models from the drop down list. We get an additional drop down menu that gives the names of the two cluster models that were developed.
In my case these are called CLUS_KM_1_25 and CLUS_OC_1_25. You may get different numbers on your model names; these numbers are generated internally in ODM.
The first model that we will look at is the K-Means Cluster Model (CLUS_KM_1_25). Select this from the menu.
Step 2 – View the Cluster Rules
The hierarchical K-Means cluster model will be displayed. You might need to readjust/resize some of the worksheets/message panes etc. in ODM to get a good portion of the diagram to display.
With ODM you cannot change, alter, merge or split any of the clusters that were generated. Oracle takes the view: this is what we have found, and it is up to you now to decide how you are going to use it.
To see what the cluster rules are for each cluster you can click on a cluster. When you do this you should get a pane (under the cluster diagram) that contains two tabs, Centroid and Cluster Rule.
The Centroid tab provides a list of the attributes that best define the selected cluster, along with the average value for each attribute and some basic statistical information.
The Cluster Rules tab contains a set of rules that define the cluster in an IF/THEN statement format.
For each cluster in the tree we can see the number of cases in the cluster and the percentage of overall cases that the cluster represents.
Work your way down the tree exploring each of the clusters produced.
The further down the tree you go, the smaller the percentage of cases that will fall into each cluster. In some tools you can merge these clusters. Not so in ODM. What you have to do is use an IF statement in your code. Something like IF cluster_num IN (16, 17, 18, 19) THEN …..
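As a taster of the SQL route (parts 4 and 5 cover it properly), here is a minimal sketch of that IF logic using the in-database CLUSTER_ID function; the model name is the one from Step 1 and the cluster numbers are just the illustrative ones above:

```sql
-- Merge several K-Means leaf clusters into one segment in plain SQL.
SELECT customer_id,
       CASE
         WHEN CLUSTER_ID(clus_km_1_25 USING *) IN (16, 17, 18, 19)
           THEN 'MERGED_SEGMENT'
         ELSE TO_CHAR(CLUSTER_ID(clus_km_1_25 USING *))
       END AS segment
FROM   insurance_cust_ltv;
```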
Step 3 – Compare Clusters
In addition to the cluster tree, ODM also has two additional tabs to allow us to explore the clusters: the Detail and Compare tabs.
Click on the Detail tab. We now get a detailed screen that contains various statistical information for each attribute. For each attribute we can get a histogram of its values within the cluster.
We can use this information to start building up a picture of what each cluster might represent, based on the values (and their distribution) for each cluster.
Try this out for a few clusters.
Step 4 – Multi-Cluster – Multi-variable Comparison of Clusters
The next level of comparison and evaluation of the clusters can be found under the Compare tab.
This lets us compare two clusters against each other at an attribute level. For example, let us compare clusters 4 and 9. The attribute and graphics section gets updated to reflect the data for each cluster, colour coded to distinguish the two clusters.
We can work our way down through each attribute and again we can use this information to help us to understand what each cluster might represent.
An additional feature here is that we can do a multi-variable (attribute) comparison. Holding down the control key, select LTV_BIN, SEX and AGE. With each selection we get a new graph appearing at the bottom of the screen. These show the distribution of the values by attribute for each cluster. We can learn a lot from this.
So one possible conclusion we could draw from this data would be that Cluster 4 could be ‘Short Term Value Customers’ and Cluster 9 could be ‘Long Term Value Customers’.
Step 5 – Renaming Clusters
When you have discovered a possible meaning for a Cluster, you can give it a meaningful name instead of it having a number. In our example, we would like to re-label Cluster 4 to ‘Short Term Value Customers’. To do this click on the Edit button that is beside the drop down that has cluster 4. Enter the new label and click OK.
In the drop down we will now get the new label appearing instead of the cluster number.
Similarly we can do this for the other cluster e.g. ‘Long Term Value Customer’.
We have just looked at how to explore our K-Means model. You can do similar exploration of the O-Cluster model. I’ll leave that for you to do.
We have now explored our clusters and we have decided which of our Clustering Models best suits our needs. In our scenario we are going to select the K-Means model to apply to and label our new data.
Step 1 – Create the Apply Node
We have already set up the sample of data that we are going to use as our Apply Data Set. We did this when we set up the two different Sample nodes.
We are going to use the Sample node that was set to 40%.
The first step requires us to create an Apply node. This can be found in the Component Palette, under the Evaluate and Apply tab. Click on the Apply node, then move the mouse to the workflow worksheet and click near the Sample Apply node.
To connect the two nodes, move the mouse to the Sample Apply node and right click. Select Connect from the drop down menu and then move the mouse to the Apply node and click again. A connection arrow will be created joining these nodes.
Step 2 – Specify which Clustering Model to use & Output Data
Next we need to specify which of the clustering models we want to apply to our new data.
We need to connect the Cluster Build node to the Apply node. Move the mouse to the Cluster Build node, right click and select Connect from the drop down menu. Move the mouse to the Apply node and click. We get the connection arrow between the two nodes.
We have now joined the Data and the Cluster Build node to the Apply node.
The final step is to specify which clustering model we would like to use. In our scenario we are going to specify the K-Means model.
(Single) click the Cluster Build node. We now need to use the Property Inspector to select the K-Means model for the apply set. In the Models tab of the Property Inspector we should have our two cluster models listed. Under the Output column, click in the box for the O-Cluster model. We should now get a little red X mark appearing. The K-Means model should still have the green arrow under the Output column.
Step 3 – Run the Apply Node
We have one last piece of data setup to do on the Apply node. We need to specify what data from the apply data set we want to include in the output from the Apply node. For simplicity we want to just include the primary key, but you could include all the attributes. In addition to the attributes from the apply data source, the Apply node will also create some attributes based on the Cluster model we selected. In our scenario, the K-Means model will create two additional attributes: one will contain the Cluster ID and the other the probability of that cluster assignment.
To include the attributes from the source data, double click on the Apply node. This will open the Edit Apply Node window. You will see that it already contains the two attributes that will be created by the K-Means model.
To add the attributes from the source data, click on the Data Columns tab and then click on the green ‘+’ symbol. For simplicity we are going to just select the CUSTOMER_ID. Click the OK button to finish.
Now we are ready to run the Apply node. To do this right click on the Apply Node and select Run from the drop down menu. When everything is finished you will get the little green tick mark on the top right hand corner of the Apply node.
Step 4 – View the Results
To view the results and the output produced by the Apply node, right click on the Apply node and select View Data from the drop down menu.
We get a new tab opened in SQL Developer that will contain the data. This will consist of the CUSTOMER_ID, the K-Means Cluster ID and the Cluster Probability. You will see that some of the clusters assigned will have a number and some will have the cluster labels that we assigned in a previous step.
It is now up to you to decide how you are going to use this clustering information in an operational or strategic way in your organisation.
In my next (fourth) blog post in the series on Clustering in Oracle Data Miner, I will show how you can perform similar steps, of building and evaluating clustering models, using the SQL and PL/SQL functions in the database. So we will not be using the ODM tool; we will be doing everything in SQL and PL/SQL.
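As a small preview, here is a minimal sketch of building a clustering model with the PL/SQL API; the model name is made up, the settings are left at their defaults, and k-Means is the default clustering algorithm:

```sql
BEGIN
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'CLUS_KM_SQL',   -- illustrative name
    mining_function     => DBMS_DATA_MINING.CLUSTERING,
    data_table_name     => 'INSURANCE_CUST_LTV',
    case_id_column_name => 'CUSTOMER_ID');
END;
/
```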
Clustering in Oracle Data Miner–Part 2
This is the second part of a five (5) part blog post on building and using Clustering in Oracle Data Miner. The following outlines the contents of each post in this series on Clustering.
- In the first part we looked at what clustering features exist in ODM and how to set up the data that we will be using in the examples.
- The second part (this one) will focus on how to build Clusters in ODM.
- The third post will focus on examining the clusters produced by ODM and how to use the Clusters to apply to new data using ODM.
- The fourth post will look at how you can build and evaluate a Clustering model using the ODM SQL and PL/SQL functions.
- The fifth and final post will look at how you can apply your Clustering model to new data using the ODM SQL and PL/SQL functions.
With Clustering we are trying to find hidden patterns in our data. Unlike classification, we are not directing the algorithms on what areas/attributes to focus on.
In our scenario we want to see what distinct groupings or Segments our Customer data naturally fits into. For each of these Segments, Oracle Data Miner will tell us which attributes, and which values of those attributes, determine whether a customer belongs to one segment or another.
Step 1 – Define the Data Source
The first step involves us creating a Data Source Node for the table that we created and loaded in the previous blog post. We called this table INSURANCE_CUST_LTV.
To create the Data Source Node go to the Component Palette. Under the Data tab you will find the Data Source option. Click on this and then go to the workflow worksheet and click. The Data Source node will be created and the wizard to specify the name of the table/view will open. Select INSURANCE_CUST_LTV from the list.
Click on the Next and then the Finish button to take in all the attributes.
Our data is now ready to use.
Step 2 – Explore the Data
We can use the Explore Node to gather some statistics on the data and to produce some graphs.
To create the Explore Node, go to the Component Palette and under the Data tab you will find the Explore Data node. Click on this and then click again on the workflow worksheet, near the Data node.
You need to connect the Data node to the Explore Data node. Move your mouse to the Data node, right-click it and select Connect from the drop down menu. Then move the mouse to the Explore Data node and click on it. You will now have an arrowed line joining these two nodes.
The next step we need to do is to right click on the Explore Data node and select Run from the drop down menu. ODM will go off to the database and gather various statistics and create a number of graphs based on the data in the table.
NB. If you click on the Explore Data node and then look in the Property Inspector you will see that ODM will take a sample of 2,000 records to produce the statistics and graphs. If you would like ODM to use all the records then you need to click the ‘Use All Data’ check box. Or you can change the sample size.
For your initial data investigation you might use the default of sampling 2,000 records before you increase the size of the sample.
In scenarios like this you may want to explore the data in more detail and look at how the data is distributed in relation to certain attributes. In our data we have an attribute called LTV_BIN, which has four values: Very High, High, Medium and Low.
In our scenario, it might be more interesting to explore the data based on this attribute and its values. To do this we need to tell the Explore Data node to group the data analysis based on the values in this attribute.
Double-click the Explore Data node. In the Group By drop down select LTV_BIN. Click the OK button. You are now ready to run the Explore Data Node. To do this, right click on Explore Data node and select Run from the drop down list.
To view the statistics gathered and the graphs produced on the default sample of 2,000 records, right click the Explore Data node and select View Data from the drop down menu. You will get a new tab/window opening in SQL Developer with all the results.
This kind of data analysis only works with an attribute that has a low number of possible values.
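Under the covers this grouped analysis is ordinary SQL aggregation. Here is a rough sketch of the kind of thing the Explore Data node gathers; the aggregates shown are just a sample, and AGE is one of the attributes in our table:

```sql
SELECT ltv_bin,
       COUNT(*)           AS num_cases,
       ROUND(AVG(age), 1) AS avg_age,
       MIN(age)           AS min_age,
       MAX(age)           AS max_age
FROM   insurance_cust_ltv
GROUP  BY ltv_bin
ORDER  BY num_cases DESC;
```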
Step 3 – Defining the data we will use to Build our Cluster models
We are going to divide the data in our INSURANCE_CUST_LTV table into two data sets. The first data set will be used to build the Cluster models. The second data set will be used as part of the Apply node in my next blog post (part 3).
To divide the data we are going to use the Sample Node that can be found under the Transformation tab of the Component Palette.
Create your first Sample Node. In the Settings tab of the Property Inspector set the sample size to 60% and in the Details tab rename the node to Sample Build.
Create a second Sample node and give it a sample size of 40%. Rename this node to Sample Apply.
Right click on each of these Sample nodes to run them and have them ready for the next step of building the Clustering models.
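If you wanted to create the same 60/40 split outside the tool, a hedged SQL sketch using ORA_HASH (the view names are made up) could look like this:

```sql
-- ORA_HASH(customer_id, 99) returns a bucket from 0 to 99, giving a
-- repeatable 60/40 split of the customers.
CREATE OR REPLACE VIEW cust_sample_build AS
SELECT * FROM insurance_cust_ltv
WHERE  ORA_HASH(customer_id, 99) < 60;

CREATE OR REPLACE VIEW cust_sample_apply AS
SELECT * FROM insurance_cust_ltv
WHERE  ORA_HASH(customer_id, 99) >= 60;
```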
Step 4 – Creating the Clustering Build Node
When you have finished exploring the data you are now ready to move on to creating the Clustering models. As ODM has two clustering algorithms, ODM will default to creating two Clustering models.
To create the Clustering models, go to the Component Palette. Under the Models tab, select Clustering.
Move the mouse to the workflow worksheet, near the Sample Build node and click the worksheet. The Clustering node will be created. Now we need to connect the data with the Clustering node. To do this right click on the Sample Build node and select Connect from the drop down list. Then move the mouse to the Clustering node and click. An arrowed line will be created connecting the two nodes.
At this point we can run the Clustering Build node or we can have a look at the setting for each algorithm.
Step 5 – The Clustering Algorithm settings
To setup the Cluster Build node you will need to double click on the node to open the properties window. The first thing that you need to do is to specify the Case ID (i.e. the primary key). In our example this is the CUSTOMER_ID.
Oracle Data Miner has two clustering algorithms: the first is the well-known k-Means (ODM has an enhanced version of it) and the second is O-Cluster. To look at the settings for each algorithm, click on the model listed under Model Settings and then click on the Advanced button.
A new window will open that lists all the attributes in the data source. The CUSTOMER_ID is unchecked, as we said that this was the Case ID.
Click on the Algorithm Settings tab to see the internal settings for the k-Means algorithm. All of these settings have a default value; Oracle has worked out what the optimal settings are for you. The main setting that you might want to play with is the Number of Clusters to build. The default is 10, but you might want to try numbers between 5 and 15, depending on the number of clusters or segments you want to see in your data.
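For what it is worth, the same setting exists on the PL/SQL side (part 4 territory): if you were building the model with DBMS_DATA_MINING you would put it in a settings table. A minimal sketch, with a made-up table name:

```sql
CREATE TABLE km_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);

BEGIN
  -- Ask for 8 clusters instead of the default 10.
  INSERT INTO km_settings
  VALUES (DBMS_DATA_MINING.CLUS_NUM_CLUSTERS, '8');
  COMMIT;
END;
/
```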
To view the algorithm settings for O-Cluster, click on it under Model Settings. We have fewer internal settings to worry about here, but again we can determine how many clusters we want to produce.
For our scenario we are going to take the default settings.
Step 6 – Run/Generate the Clustering models
At this stage we have the data set up, the Cluster Build node created and the algorithm settings all set to what we want.
Now we are ready to run the Cluster Build node.
To do this, right click on the Cluster Build node and click Run. ODM will create a job containing PL/SQL code that will generate one cluster model based on k-Means and a second based on O-Cluster. This job will be submitted to the database and when it is complete we will get the little green tick mark on the top right hand corner of the Cluster Build node.
In the next blog post we will look at how to examine what clusters were produced by ODM and how we can take one of these and apply them to new data.
OUG Ireland 2013 Agenda is now live
The agenda for the OUG Ireland 2013 event is now live. The event will be in the Dublin Convention Centre on the 12th March. There are lots of excellent sessions, across 7 tracks!! So there will be something (or lots of things) for everyone who works in the Oracle world here in Ireland.
I’m sure the Oracle Database track will be very popular. I wonder why!!!
Agenda : http://www.ukoug.org/2013-events/oug-ireland-2013/agenda/
Remember registration is FREE. You don’t have to be a member of the User Group to come to this event. It is open to everyone and did I mention that it is FREE. Registration is now open.
I’ll be there. Well I suppose I have to, as I’ll be presenting.
I hope to see you there.
Oracle Magazine-Nov/Dec. 1998
The headline articles for the Nov/Dec 1998 edition of Oracle Magazine were on building web based applications and thin client computing. A large part of the magazine was dedicated to these topics. This was a bumper edition with a total of 152 pages of content.
Other articles included:
- There were a few articles on using Oracle 8i, including how to use Java in the Database, the Internet File System, interMedia and Data Warehousing. Oracle 8i comes with over 150 new features.
- There were a couple of articles on the Millennium Bug and how to approach such projects. There was also some advice for organisations that would have to deal with the introduction of the Euro currency in Europe.
- There was a section for articles on new product announcements from Oracle partners, including Quest, Nextek, Maxager, ObjectShare, Constellar (Warehouse Builder), Prism, DataMetrics, IQ Software, Eventus, DataMirror, Precise, Saville, DataShark, J-Database Exchange, Andataco, GeoMedia
- Oracle makes available Oracle 8i and the Application Server on a Linux platform for the first time.
- With Oracle 8i we have a number of ways of managing our constraints (the first sketch after this list shows a couple of them), including:
- Deferrable integrity constraints
- Non unique indexes for primary key and unique constraints
- Immediate constraint enabling
- Detecting locked and waiting transactions was always a task that consumed a lot of a DBA’s time. A number of scripts were given to help you identify and resolve these problems (the second sketch after this list gives the flavour).
- For all the aspiring Oracle Certified DBAs out there, there was an article promoting the OCP DBA program and exam. Some hints and tips about the exam were given, along with some practice questions.
- Plus there were 12 pages of adverts at the back of the magazine.
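A quick sketch of two of the constraint features in the list above; the table and constraint names are made up. A deferrable primary key, which Oracle enforces with a non-unique index, and deferring its check until commit:

```sql
ALTER TABLE orders ADD CONSTRAINT orders_pk
  PRIMARY KEY (order_id) DEFERRABLE INITIALLY IMMEDIATE;

-- Within a transaction, postpone the constraint check until COMMIT.
SET CONSTRAINT orders_pk DEFERRED;
```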
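And in the spirit of those lock-detection scripts, here is a classic V$LOCK query that pairs each blocking session with the sessions waiting on it:

```sql
SELECT l1.sid AS blocking_sid,
       l2.sid AS waiting_sid
FROM   v$lock l1,
       v$lock l2
WHERE  l1.block   = 1    -- l1 holds a lock that someone wants
AND    l2.request > 0    -- l2 is waiting on a lock
AND    l1.id1 = l2.id1
AND    l1.id2 = l2.id2;
```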
To view the cover page and the table of contents click on the image at the top of this post or click here.
My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.
The ‘Oh No You Don’t’ of (Oracle) Data Science
Over the past couple of weeks I’ve had conversations with a large number of people about Data Science in the Oracle arena.
A few things have stood out. The first, and perhaps the most important, is that there is confusion over what Data Science actually means. Some think it is just another name for Statistics or Advanced Statistics, some for Predictive Analytics or Data Mining, or Data Analysis, Data Architecture, etc. The reality is that it is not. It is more than what these terms mean, and that is a topic for discussion for another day.
During these conversations the same questions or topics keep coming up and the simplest answer to all of these is taken from a Pantomime (Panto).
We need to have lots of statisticians
‘Oh No You Don’t !’
We can only do Data Science if we have Big Data
‘Oh No You Don’t !’
We can only do data mining/data science if we have 10s or 100s of millions of records
‘Oh No You Don’t !’
We need to have an Exadata machine
‘Oh No You Don’t !’
We need to have an Exalytics machine
‘Oh No You Don’t !’
We need extra servers to process the data
‘Oh No You Don’t !’
We need to buy lots of Statistical and Predictive Analytics software
‘Oh No You Don’t !’
We need to spend weeks statistically analysing a predictive model
‘Oh No You Don’t !’
We need to have unstructured data to do Data Science
‘Oh No You Don’t !’
Data Science is only for large companies
‘Oh No You Don’t !’
Data Science is very complex, I can not do it
‘Oh No You Don’t !’
Let us all say it together for one last time ‘Oh No You Don’t’
In its simplest form, performing Data Science using the Oracle stack just involves learning and using some simple SQL and PL/SQL functions in the database.
Maybe we (in the Oracle Data Science world and those looking to get into it) need to adopt a phrase used by Barack Obama, ‘Yes We Can’, or as he said it in Irish when he visited Ireland back in 2011, ‘Is Féidir Linn’.
Remember it is just SQL.
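To show what I mean, here is a one-statement sketch that scores customers with an in-database model; the model and table names are invented for the example:

```sql
SELECT cust_id,
       PREDICTION(churn_model USING *)             AS churn_flag,
       PREDICTION_PROBABILITY(churn_model USING *) AS churn_prob
FROM   customers
ORDER  BY churn_prob DESC;
```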
My Blog Stats for 2012
Here are the stats from my blog for 2012.
In total I’ve had almost 28,000 blog post views. This is a 7-fold increase on the number of blog post views I had in 2011.
I had 92 blog posts in 2012 and the most popular blog posts were:
- Celtic Knot Mirror Wood Carving
- Oracle Magazine Volume 1 Number 1
- Oracle Database next release (12c) new features
- Oracle Advanced Analytics Option in Oracle 12c
- Exalytics: How much will it cost me?
- Update on Exalytics Pricing
- Data Science is Multidisciplinary

Top search keywords used to find my blog:
- exalytics pricing
- oracle data mining
- oracle data miner
- data science
- brendan tierney

Top Countries:
- United States 52%
- Ireland 8%
- United Kingdom 8%
- India 4%
- Russia 4%
- Germany 3%
- France 3%
- Netherlands 1%
- Canada 1%
- Turkey 1%

Top OS:
- Windows 59%
- Macintosh 28%
- Linux 5%
- iPhone 2%
- iPad 1%

Top Browsers:
- Firefox 47%
- Internet Explorer 26%
- Chrome 15%
- Safari 4%