
Wednesday, December 28, 2016

2016: A review of the year

As 2016 draws to a close, I like to look back at what I have achieved over the year. Most of the following achievements are based on my work with the Oracle User Group community. I have some other achievements that are related to the day jobs (yes, I have multiple day jobs), but I won't go into those here.

As you can see from the following, 2016 was another busy year. There was lots of writing, which I really enjoy and which I'll be continuing with in 2017. As they say, watch this space for writing news in 2017.

Books

Yes, 2016 was a busy year for writing, and most of the latter half of 2015 and the first half of 2016 was taken up writing two books. Yes, two books. One of the books was on Oracle R Enterprise, and it complements my previously published book on Oracle Data Mining. I now have books that cover both components of the Oracle Advanced Analytics Option.

I also co-wrote a book with legends of the Oracle community: Arup Nanda, Martin Widlake, Heli Helskyaho and Alex Nuijten.


More news coming in 2017.

Blog Posts

One of the things I really enjoy doing is playing with various features of Oracle and then writing some blog posts about them. When writing the books I had to cut back on writing blog posts. I was lucky to be part of the 12.2 Database beta this year, and over the past few weeks I've been playing with 12.2 in the cloud. I've already written a blog post or two on this, and I also have an OTN article on the topic coming out soon. There will be more 12.2 analytics related blog posts in 2017.

In 2016 I wrote 55 blog posts (including this one). This number is a little lower than in previous years; I'll blame the book writing for that. But more posts are in the works for 2017.

Articles

In 2016 I've written articles for OTN and for Toad World. These included:

OTN
  1. Oracle Advanced Analytics : Kicking the Tires/Tyres
  2. Kicking the Tyres of Oracle Advanced Analytics Option - Using SQL and PL/SQL to Build an Oracle Data Mining Classification Model
  3. Kicking the Tyres of Oracle Advanced Analytics Option - Overview of Oracle Data Miner and Build your First Workflow
  4. Kicking the Tyres of Oracle Advanced Analytics Option - Using SQL to score/label new data using Oracle Data Mining Models
  5. Setting up and configuring RStudio on the Oracle 12.2 Database Cloud Service
ToadWorld
  1. Introduction to Oracle R Enterprise
  2. ORE 1.5 - User Defined R Scripts

Conferences

  1. January - Yes SQL Summit, NoCOUG Winter Conference, Redwood City, CA, USA **
  2. January - BIWA Summit, Oracle HQ, Redwood City, CA, USA **
  3. March - OUG Ireland, Dublin, Ireland
  4. June - KScope, Chicago, USA (3 presentations)
  5. September - Oracle Open World (part of EMEA ACEs session) **
  6. December - UKOUG Tech16 & APPs16

** for these conferences the Oracle ACE Director programme funded the flights and hotels. All other expenses and other conferences I paid for out of my own pocket.

OUG Activities

I'm involved in many different roles in the user group. The UKOUG also covers Ireland (incorporating OUG Ireland), and my activities within the UKOUG included the following during 2016:

  • Editor of Oracle Scene: We produced 4 editions in 2016. Thank you to all who contributed and wrote articles.
  • Created the OUG Ireland Meetup. We had our first meeting in October. Our next meetup will be in January.
  • OUG Ireland Committee member of TECH SIG and BI & BA SIG.
  • Committee member of the OUG Ireland 2 day Conference 2016.
  • Committee member of the OUG Ireland conference 2017.
  • KScope17 committee member for the Data Visualization & Advanced Analytics track.

I'm sure I've forgotten a few things; I usually do. But this gives you a taste of some of what I got up to in 2016.

Tuesday, April 2, 2013

OTN has links to two of my blog posts

Over the past couple of weeks I’ve noticed a bit of a spike in my blog stats (I don’t check them often). In particular, there were two groups of blog posts that were getting a lot of the hits.

After a bit of investigation I found out that it was due to referrals from one particular website. It was OTN, or the Oracle Technology Network, and more specifically it was from their webpage dedicated to Database Admins and Developers.

Yes OTN had links to my blog posts on Clustering in Oracle Data Miner and to my blog post on Are you a Type I and Type II Data Scientists.


What a surprise this was to discover, and what an honour!

I don’t know how long they will be on the OTN webpage, but hopefully lots of people in the Oracle community will find them useful.

I’m working on my next set of Oracle Data Miner blog posts, so watch this space. Plus I’ve started work on two technical articles that I’ll be submitting to OTN over the next few weeks. So hopefully you will see these up on OTN soon.

Monday, March 11, 2013

Oracle Magazine–March/April 1999

The headline articles for the March/April 1999 edition of Oracle Magazine were on the evolving world of the DBA. With so much new technology available in the database, the role of the DBA is moving from a back office type role to one having a significant strategic influence in the organisation.
[Oracle Magazine cover image]
Other articles included:
  • Oracle releases a web based version of their Oracle Strategic Procurement application that includes three key parts: Strategic Sourcing, Internet Procurement and Process Automation.
  • Sun and Oracle announce a strategic agreement that allows both companies to enhance their product offerings by exchanging key technologies. Oracle will use the core of the Sun Solaris operating environment to deliver the industry’s first database server appliances.
  • Oracle Data Mart Suite releases version 2.5. It includes, Oracle Data Mart Builder, Oracle Data Mart Designer, Oracle 8 Enterprise Edition, Oracle Discoverer, Oracle Application Server and Oracle Reports and Reports Server.
  • New integration between Oracle Reports release 6.0 and Oracle Express Server release 6.2 to give users the ability to distribute high quality reports of information held in a multi-dimensional database across the enterprise.
  • The need for the DBA to know and understand the V$ views has been increasing during the later releases of 7.3 and 8i. These views can be used for a variety of purposes, including understanding locked users, system resources, licensing and parameter settings (see the example queries after this list).
  • One thing that all DBAs need to plan for is a database recovery. Planning it is one thing, but practising it is another. A typical recovery exercise will include: choosing a data file, creating a backup, taking the damaged tablespace offline, restoring the damaged data file, recovering the tablespace, bringing the tablespace back online, and testing it.
  • Avoiding trigger errors, including Mutating and constraining table errors.
  • There is an article by Bryan Laplante on using Histograms to Optimize Data Mart Performance.
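As a side note (not from the magazine itself), here is a minimal sketch of the kind of V$ queries that article describes; these views still exist in current Oracle releases:

-- Who is connected and what state is each session in?
SELECT username, status, machine
FROM   v$session
WHERE  username IS NOT NULL;

-- Initialisation parameters that have been changed from their default values
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE';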

To view the cover page and the table of contents click on the image at the top of this post or click here.
My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Wednesday, January 30, 2013

Oracle Magazine-Nov/Dec. 1998

The headline articles for the Nov/Dec 1998 edition of Oracle Magazine were on building web based applications and thin client computing. A large part of the magazine was dedicated to these topics.  This was a bumper edition with a total of 152 pages of content.

[Oracle Magazine cover image]

Other articles included:

  • There were a few articles on using Oracle 8i, including how to use Java in the Database, the Internet File System, interMedia and Data Warehousing. Oracle 8i comes with over 150 new features.
  • There were a couple of articles on the Millennium Bug and how to approach such projects. There was also some advice for organisations that would have to look at how to deal with the introduction of the Euro currency in Europe.
  • There was a section for articles on new product announcements from Oracle partners, including Quest, Nextek, Maxager, ObjectShare, Constellar (Warehouse Builder), Prism, DataMetrics, IQ Software, Eventus, DataMirror, Precise, Saville, DataShark, J-Database Exchange, Andataco, GeoMedia
  • Oracle makes available Oracle 8i and the Application Server on a Linux platform for the first time.
  • With Oracle 8i we have a number of ways of managing our constraints (a short example follows this list), including:
    • Deferrable integrity constraints
    • Non unique indexes for primary key and unique constraints
    • Immediate constraint enabling
  • Detecting locked and waiting transactions was always a task that consumed a lot of a DBA's time. A number of scripts were given to help you identify and resolve these problems.
  • For all of the Oracle Certified DBAs out there, there was an article promoting the OCP DBA program and exam. Some hints and tips about the exam were given, along with some practice questions.
  • Plus there were 12 pages of adverts at the back of the magazine.
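As a side note (not from the magazine), here is a minimal sketch of the deferrable integrity constraint feature mentioned above; the ORDERS and CUSTOMERS tables are made up for illustration, and the syntax is shown as it works in later Oracle releases:

ALTER TABLE orders
  ADD CONSTRAINT orders_cust_fk
  FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
  DEFERRABLE INITIALLY IMMEDIATE;

-- Within a transaction the constraint check can be postponed until COMMIT
SET CONSTRAINTS orders_cust_fk DEFERRED;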

To view the cover page and the table of contents click on the image at the top of this post or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Friday, January 25, 2013

OUG Norway Agenda is now live

The OUG Norway spring conference (17th April – 19th April) agenda is now live and is open for registrations.

Click here for the Conference Agenda

Click here for the Conference Registration

This is a 3 day conference. The first day (17th April) will be held in the Radisson BLU Scandinavia ( Holbergsplass ) and the next two (and a bit) days will be on the Color Magic boat that will be travelling between Oslo and Kiel in Germany and back to Oslo. The boat will be arriving back in Oslo on the Saturday morning (20th April).

There will be some presentations in Norwegian, but it looks like most of the presentations will be in English. There will also be some well known names from the Oracle world presenting at this conference.

In addition to these people, I will be giving two presentations on using Predictive Analytics in Oracle using the Oracle Data Miner tool and in-database functionality.

My first presentation will be an overview of the advanced analytics option and a demonstration of what you can do using the Oracle Data Miner tool (part of SQL Developer). This presentation is currently scheduled for Thursday (18th April) at 5pm.

My second presentation will be at 9:30am on the Friday morning (19th April). In this presentation we will look at the in-database features, what we can do in SQL and PL/SQL, and what you need to do to deploy your Oracle Data Mining models in a production environment.

If possible we might be able to review some new 12c features for Oracle Data Miner.

Friday, January 4, 2013

My Blog Stats for 2012

Here are the stats from my blog for 2012.

In total I’ve had almost 28,000 blog post views. This is a 7 fold increase on the number of blog post views I had in 2011.

I had 92 blog posts in 2012 and the most popular blog posts were

Top search keywords used to find my blog

  • exalytics pricing
  • oracle data mining
  • oracle data miner
  • data science
  • brendan tierney

Top Countries

  • United States  52%
  • Ireland  8%
  • United Kingdom  8%
  • India  4%
  • Russia  4%
  • Germany  3%
  • France  3%
  • Netherlands  1%
  • Canada  1%
  • Turkey  1%

Top OS

  • Windows  59%
  • Macintosh  28%
  • Linux  5%
  • iPhone  2%
  • iPad  1%

Top Browsers

  • Firefox  47%
  • Internet Explorer  26%
  • Chrome  15%
  • Safari  4%

Wednesday, January 2, 2013

OUG Norway April 2013 - New Year’s News

I received an email at 23:24 on the 1st January from the OUG in Norway telling me that I’ve had two presentations accepted for the annual OUG Norway seminar event. This will run from the 17th to the 19th of April.

The first day of this event (17th April) will be held in a hotel in Oslo. Then on the morning of 18th April we board the Color Magic cruise for the next two days of the conference. The ferry/cruise will go from Oslo to Kiel in Germany and then back again to Oslo, returning around 10am on Saturday 20th April.

I will be giving two presentations on the Oracle Advanced Analytics Option. The first presentation, ‘Using Predictive Analytics in Oracle’, will give an overview of the Oracle Advanced Analytics Option and will then focus on the Oracle Data Miner work-flow tool. This presentation will include a live demo of using Oracle Data Miner to create some data mining models.

The second presentation, ‘How to Deploy and Use your Oracle Data Miner Models in Production’, builds on the examples given in the first presentation and will show how you can migrate, use and update your Oracle Data Miner models using the features available in SQL and PL/SQL. Again a demo will be given.

Thursday, December 20, 2012

Articles wanted for Oracle Scene–Spring 2013

The Call for Articles is now open for the Spring edition of Oracle Scene magazine. This is a publication of the UKOUG.

We are looking for technical articles covering all product offerings from Oracle. 

Typically articles will range from 3 pages to 8 pages (MS Word format). These will convert into 2 to 5 page articles in Oracle Scene.

  • Check out the Article Formatting Guidelines before submitting.
  • All pictures and images should be 300dpi.
  • Include a bio of 100 words (max) and your photo.
  • Email your article and images to articles@ukoug.org.uk.

For more details about submitting an article, check out
http://www.ukoug.org/what-we-offer/oracle-scene/article-submissions/

Wednesday, December 19, 2012

Association Rules in ODM-Part 4

This is the final part of a four part blog post on building and using Association Rules in the Oracle Database using Oracle Data Miner. The following outlines the contents of each post in the series on Association Rules:

  1. This first part will focus on how to build an Association Rule model
  2. The second post will be on examining the Association Rules produced by ODM
  3. The third post will focus on using the Association Rules on your data.
  4. The final post will look at how you can do some of the above steps using the ODM SQL and PL/SQL functions – This blog post

In my previous posts I showed how you can go about setting up for Association Rule analysis in Oracle Data Miner and how to examine the rules that are generated.

This post will focus on how we build and use association rules using the functionality that is available in SQL and PL/SQL.

Step 1 – Build the Settings Table

As with all Oracle Data Mining functions in SQL and PL/SQL, you will need to set up or build a settings table. This table contains all the settings necessary to run the model build functions. It is a good idea to create a separate settings table for each model build that you complete.

CREATE TABLE assoc_sample_settings (
setting_name VARCHAR2(30),
setting_value VARCHAR2(4000));

Step 2 – Define the Settings for the Model

Before you generate your model you need to set some of the parameters for the algorithm. To start with, you need to define that we are going to generate an Association Rules model and turn off Automatic Data Preparation.

We can also set 3 additional settings for Association Rules.





The ASSO_MIN_SUPPORT setting has a default of 0.1, or 10%. That means that only rules that exist in 10% or more of the cases will be generated. This figure is really too high. In the code below we will set this to 1%. This matches the settings that we used in SQL Developer in my previous posts.


BEGIN
  INSERT INTO assoc_sample_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.algo_name, dbms_data_mining.ALGO_APRIORI_ASSOCIATION_RULES);

  INSERT INTO assoc_sample_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);

  INSERT INTO assoc_sample_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.ODMS_ITEM_ID_COLUMN_NAME, 'PROD_ID');

  INSERT INTO assoc_sample_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.ASSO_MIN_SUPPORT, 0.01);

  COMMIT;
END;
/



Step 3 – Prepare the Data



In our example scenario we are using the SALES data that is part of the SH schema. The CREATE_MODEL function needs an attribute (CASE_ID) that identifies the key of the shopping basket. In our case we have two attributes, so we will need to use a combined key. This combined key consists of the CUST_ID and the TIME_ID. This links all the transaction records related to the one shopping event together.



We also just need the attribute that contains the information we want to analyse. In our Association Rules (Market Basket Analysis) scenario, we will need to include the PROD_ID attribute. This contains the product key of each product that was included in the basket.



CREATE VIEW ASSOC_DATA_V AS (
  SELECT RANK() OVER (ORDER BY CUST_ID, TIME_ID) CASE_ID,
         t.PROD_ID
  FROM   SH.SALES t );



Step 4 – Create the Model



We will need to use the DBMS_DATA_MINING.CREATE_MODEL function. This will use the settings in our ASSOC_SAMPLE_SETTINGS table. We will use the view created in Step 3 above and use the CASE_ID attribute we created as the Case ID in the function call.



BEGIN
   DBMS_DATA_MINING.CREATE_MODEL(
      model_name          => 'ASSOC_MODEL_2',
      mining_function     => DBMS_DATA_MINING.ASSOCIATION,
      data_table_name     => 'ASSOC_DATA_V',
      case_id_column_name => 'CASE_ID',
      target_column_name  => null,
      settings_table_name => 'assoc_sample_settings');
END;
/



On my laptop this took approximately 5 seconds to run on just over 918K records, involving just over 143K cases or baskets.



Now that is quick!!!



Step 5 – View the Model Outputs



There are a couple of functions that can be used to extract the rules produced in our previous step. These include:



GET_ASSOCIATION_RULES : This returns the rules from an association model.



SELECT rule_id,
       antecedent,
       consequent,
       rule_support,
       rule_confidence
FROM   TABLE(DBMS_DATA_MINING.GET_ASSOCIATION_RULES('assoc_model_2', 10));



The 10 here returns the top 10 records or rules.

GET_FREQUENT_ITEMSETS : returns a set of rows that represent the frequent item sets from an association model. In the following code we want the top 30 item sets to be returned, but filtered to only display item sets that contain 2 or more items.



SELECT itemset_id,
       items,
       support,
       number_of_items
FROM   TABLE(DBMS_DATA_MINING.GET_FREQUENT_ITEMSETS('assoc_model_2', 30))
WHERE  number_of_items >= 2;



Tuesday, December 18, 2012

BIWA Summit–9th & 10th January, 2013

The BIWA Summit will be on the 9th and 10th January, 2013. It is being held in the Sofitel Hotel beside the Oracle HQ at Redwood Shores, just outside of San Francisco.

The BIWA Summit looks to be the leading event in 2013 focused on Analytics, Data Warehousing, Big Data and BI. If you are a data architect or a data scientist, this is certainly one event that you should consider attending in 2013.

All the big names (in the Oracle world) will be there: Tom Kyte, Mark Rittman, Maria Colgan, Balaji Yelmanchili, Vaishnavi Sashikanth, Charlie Berger, Mark Hornick, Karl Rexer, Tim and Dan Vlamis.

Oh and then there is me. I’ll be giving a presentation on the Oracle Data Scientist. This will be on the first day of the event (9th) at 11:20am.

For anyone interested in the Oracle Data Scientist world there are lots of presentations to help you get started and up to speed in this area. Here is a list of presentations and hands-on labs that I can recommend.

[Image: list of recommended presentations and hands-on labs]

As is typical with all good conferences there are many presentations on at the same time that I would like to attend. If only I could time travel.

This is a great event to start off the new year, and for everyone who is thinking of moving into or commencing a project in the area. So get asking your manager to see if there is any training budget left for 2012, or get first dibs on the training budget for 2013.

Registration is open and at the moment the early bird discount still seems to be available. You can also book a room in the hotel using the registration page.

To view the full agenda – click here

Tom Kyte in Dublin 21st January 2013

Tom Kyte will be back in Dublin on the 21st January, 2013. He will be giving a number of presentations covering some of his popular Oracle Open World sessions and will also include an AskTom session.

It will be a full day, kicking off at 9am and finishing around 3:30pm.

There is no better way to kick off the new year than with a full day of FREE Oracle training and upskilling with Tom Kyte.

To register for the event send an email to marketing-ie_ie@oracle.com

As they say, places are limited, so book early. I have, so I’ll see you there.

Friday, December 14, 2012

OUG Ireland 2013–Call for Presentations

The call for presentations at the OUG Ireland Conference is now open. The conference will be on Tuesday 12th March in Dublin city centre.

It is hoped to have a number of concurrent tracks covering all the main topic areas, including application development, database administration, business intelligence, applications, etc.

If you are interested in submitting a presentation then you need to fill in some of the details at

OUG Ireland – Submit a Paper

Follow the OUG Ireland conversation on twitter using the tag  #oug_ire


Thursday, November 29, 2012

Association Rules in ODM-Part 3

This is the third part of a four part blog post on building and using Association Rules in Oracle Data Miner. The following outlines the contents of each post in the series on Association Rules:

  1. This first part will focus on how to build an Association Rule model
  2. The second post will be on examining the Association Rules produced by ODM
  3. The third post will focus on using the Association Rules on your data – This blog post
  4. The final post will look at how you can do some of the above steps using the ODM SQL and PL/SQL functions.

In my previous posts I showed how you can go about setting up for Association Rule analysis in Oracle Data Miner and how to examine the rules that are generated.

This post will focus on how we can extract and use these rules in Oracle Data Miner.

Step 1 – Model Details

Association Rules are an unsupervised method of data mining. In Oracle Data Miner we cannot use the Apply node to score new data. What we have to do is generate the Model Details. These in turn can then be used.

The Model Details node is used when we do unsupervised learning to extract the rules that are generated.

To do this we need to click on the Model Details node in the Models section of the Component Palette and then click on our workspace, just to the right of the Association Rule node.

The Edit Model Selection window will open. Connect the Association Rule node to the Model Details node, then run the node. This will generate the Association Rules in a format that we can reuse.


When you get the small green tick on the Model Details node you can then view what was generated.

Right click on the Model Details node and click on View Details from the menu.


The output is similar to what we would have seen under the Association Rule node with the addition of a few more attributes that include the schema name and model name.

We can order the rules based on the Confidence level by double clicking on the Confidence column header. You might need to do this twice to get the rules appearing in descending order of confidence.

At this point we can now look at persisting the Association Rules. See Step 2 below.

We can also view the SQL that was used to generate the Association Rules that we see in the Model Details node. While still viewing the rules, click on the SQL tab.


Step 2 – Persisting the Association Rules

To make the rules persist and be usable outside of ODM we can persist the Association Rules in a table. The first step to do this is to create a new Table Node. This can be found under the Data section of the Component Palette. Click on this Create Table or View node in the component palette and then click on the workspace, just to the right of the Model Details node.

Connect the Model Details node to the Output node, by right clicking on the Model Details node, select Connect from the menu and then click on the Output Node.

We can now edit the format of the Output i.e. specify what attributes are to be in our Output table. Double click on the Output node or right click and select Edit from the menu. We now get the Edit Create Table or View Node.


We can give the output a meaningful name, e.g. AR_OUTPUT_RULES. We can also specify what rule properties we want to export as attributes in our table.

We will need to un-tick the Auto Input Columns Selection tick box before we can remove any of the output attributes. In my case I only want to have ANTECEDENT_ITEMS, CONSEQUENT_ITEMS, ID, LENGTH, CONFIDENCE and SUPPORT in my output. So I need to select and highlight all the other attributes (holding the control button). After selecting all the attributes I do not want included in the final output table, I need to click on the red X icon.


When complete click on the OK button to go back to the workflow.

To generate the table, right click on the AR_OUTPUT_RULES node and select Run from the menu. When you get the green tick mark on the AR_OUTPUT_RULES node, the table has been created with records containing the details of each rule.


To view the contents of the AR_OUTPUT_RULES table we can right click on this node and select view data from the menu.


We can now use these rules in our applications.

 

Check out the next post in the series (Part 4) where we will look at the functionality available in the ODM SQL & PL/SQL functions to perform Association Rule analysis.

Tuesday, November 27, 2012

Association Rules in ODM–Part 2

This is the second part of a four part blog post on building and using Association Rules in Oracle Data Miner. The following outlines the contents of each post in the series on Association Rules:

  1. This first part will focus on how to build an Association Rule model
  2. The second post will be on examining the Association Rules produced by ODM – This blog post
  3. The third post will focus on using the Association Rules on your data.
  4. The final post will look at how you can do some of the above steps using the ODM SQL and PL/SQL functions. 

In the previous post I looked at the steps needed to setup a data source and to setup the Association Rule node. When everything was setup we ran the workflow.

Step 1 – Viewing the Model

When the workflow has finished running we will have the green tick marks on each node. This is where we left things at the end of the previous post (Part 1). To view the model details, right click on the Association Rule Node and select View Models from the menu.


There are 3 main concepts that are important in relation to Association Rules:

  • Support: is the proportion of transactions in the data set that contain the item set i.e. the number of times the rule occurs
  • Confidence: is the proportion of the occurrences of the antecedent that result in the consequent e.g. how many times do we get C when we have A and B  {A, B} => C
  • Lift: indicates the strength of a rule over the random co-occurrence of the antecedent and the consequent

Support and Confidence are the primary measures that are used to assess the usefulness of an association rule.
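As an aside (this is my own sketch, not something from the ODM tool), Support and Confidence for a single rule {A} => {B} could be computed by hand with a query like the following; the BASKET_ITEMS table and the product ids 1 and 2 are made up for illustration:

SELECT SUM(has_a * has_b) / COUNT(*)              AS rule_support,
       SUM(has_a * has_b) / NULLIF(SUM(has_a), 0) AS rule_confidence
FROM  (SELECT basket_id,
              MAX(CASE WHEN prod_id = 1 THEN 1 ELSE 0 END) AS has_a,
              MAX(CASE WHEN prod_id = 2 THEN 1 ELSE 0 END) AS has_b
       FROM   basket_items
       GROUP  BY basket_id);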

In our example we can see that the antecedent and the consequent have numbers separated by the word AND. These numbers correspond to the product numbers.

Step 2 – Examining the Model Rules

To read the antecedent and the consequent for the first rule in our example we have:

Antecedent: 137 AND 143 AND 128

Consequent: 144

To read this association rule we would say that if a customer bought product 137, product 143 and product 128, then they will also buy product 144, with a Confidence value of almost 71%. This is a strong association.

We can check the ordering of the rules by changing the Sort By criteria. As Confidence and Support are the main ways to evaluate the rules, we can change the Sort By criteria to be Confidence. Then click on the Query button to refresh the rules section.


Here we get a list of the strongest rules, listed in descending order.

Below the section of the screen that has the Rules, we have the Rule Details section.


Here we can see that the rule gets formatted into an IF statement. The first rule in the list has a confidence of almost 97%. As it is a simple IF statement it can be easily implemented in our applications.

We can use the information that these rules provide in a number of ways. One such use is that we can look at improving the ordering and distribution of these products to ensure that we have sufficient numbers of each. Another is that we can enhance the front end selling mechanism to make sure that if a customer is buying products 114, 118 and 115, then we can remind the customer of product 119. We can also ensure that all these products are not located beside each other, so that the customer will have to walk past many other products in order to find them. That is why we never see milk and bread beside each other in a grocery store.

Step 3 – Applying Filters to the Model Rules

In the previous step we were able to sort our rules based on some of the measures of our Association Rules and to see how these rules are structured.

Association Rule Analysis can generate many thousands of possible rules for a small data set. In some cases similar rules can appear, and we can have lots of rules that occur so infrequently that they are perhaps meaningless.

ODM provides us with a number of filters that we can apply to the rules, which enable us to look for the rules that are of most interest to us. We can access these filters by clicking on the More button, which is located just under the Query button.

We can refine our query on the rules based on the various measures and the number of items in the rule. In addition to this we can also filter based on the values of the items. This is particularly useful if we want to concentrate on specific items (in our example, Products). To illustrate this, let us focus on the rules that involve Product 115. Click on the green + symbol on the right hand side of the window. Select 115 from the list provided. Next we need to decide if we want Product 115 involved in the Antecedent or the Consequent. In our example select the Consequent; this is located to the bottom right of the window. Then click the OK button, and then click on the Query button to update the list of rules that correspond with the new filter.


We can see that we only have rules that have Product 115 in the Consequent column.

We can also see that we have 134 rules for this scenario, out of a total of 20,988 (your results might differ slightly from mine and that’s OK; it really depends on what version of the sample data you are using).

 

Check out the next post in the series (Part 3) where we will look at how you can use the Association Rules produced by ODM.

Friday, November 23, 2012

UKOUG 2012-What is it all about

The annual showcase conference of the UKOUG is coming up soon, actually it is just a week away now.

I attended my first UKOUG conference back in 1998 and I’ve been attending it on and off ever since. Over the past few years I’ve been very lucky to be able to attend it every year, and at most of those I’ve presented too.

This year will be no different, as I will be co-presenting with Antony Heljula on using Oracle Data Miner and OBIEE to get some insight on your data.

The core part of the conference runs over 3 days, Monday to Wednesday, with many, many streams of excellent content for each area that the user group covers. The real problem I have when I attend this conference is that my interests typically span two if not three of the streams, so I’m typically seen running between presentations.

This year there are two pre-conference events. The first of these is the OakTable Sunday, where there are two streams of presentations from members of the OakTable network, who are a bunch of hardcore DBA types. So if you are one of those, or would like to be, then try to get along to those sessions on the Sunday.

Also on the Sunday there is a Middleware event. This is being organised by Simon Haslam and Jacco Landlust. Like the OakTable event, you will find some hardcore middleware experts sharing some of their work, their discoveries and offering lots of useful advice.

During the main conference look out for the Un-conference sessions, where you will have the opportunity to give a short 10 minute, no-PowerPoint talk on some topic or issue you want to discuss. Whether you want to lead a discussion or just sit in on one, these are not to be missed. Check out the schedule near the exhibition hall.

The Oracle User Experience team will be at the conference this year to talk to you about your experiences of using the products. In addition they will be setting up a UX lab to get some feedback on their designs.

There will be more talks by members of the OakTable network during the lunch times. These will be shorter than the Sunday talks and again there will be no powerpoint.

You will have the opportunity to get hands-on experience building an Oracle RAC database using virtual machines on your own laptop with help from the experts from the IOUG RAC SIG and Pythian.

In addition to all of this there will be roundtable discussions, master classes and hands on labs.

You may have noticed that I haven’t talked about the actual streams that are running over the 3 days of the conference. There is so much going on at this conference, and you will learn so much, that your boss will be sending you back next year. This conference is the most cost effective training you will ever receive.

I better get packing for a busy few days of learning, networking and having fun at the same time.

I’m going to make another blog post on what what my plans are for the conference, what presentations I will be going to, what events, etc.

Oh and don’t forget my advice in the latest Oracle Scene online or have a read of your free copy if you are attending the conference.

Oh, I forgot to mention the social events. Tut tut. There are lots of social and networking events too, so when you are finished with the presentations for the day, you can enjoy a free drink.

Association Rules in ODM–Part 1

This is the first part of a four part blog post on building and using Association Rules in Oracle Data Miner. The following outlines the contents of each post in the series on Association Rules:

  1. This first part will focus on how to build an Association Rule model – This blog post
  2. The second post will be on examining the Association Rules produced by ODM
  3. The third post will focus on using the Association Rules on your data.
  4. The final post will look at how you can do some of the above steps using the ODM SQL and PL/SQL functions.


The data set we will be using for Association Rule Analysis will be the sample data that comes with the SH schema in the database. Access to this schema and its data was set up when we created our data mining schema and ODM Repository.

Step 1 – Getting setup

As with all data mining projects you will need a workspace that will contain your workflows. Based on my previous ODM blog posts you will have already created a Project and some workflows. You can either reuse an existing workflow you have used for one of the other ODM modeling algorithms or you can create a new Workflow called Association Rules.

Step 2 – Define your Data Set

Assuming that your database has been setup to have the Sample schemas and their corresponding data, we will be using the data that is in the SH schema. In a previous post, I gave some instructions on setting up your database to use ODM and part of that involved a step to give your ODM schema access to the sample schema data.

We will start off by creating a Data Source Node. Click on the Data Source Node under the Component Palette. Then move your mouse to your workspace area and click. A Data Source Node will be created and a window will open. Scroll down the list of Available Tables until you find the SH.SALES table. Click on this table and then click on the Next button. We want to include all the data, so we can now click the Finish button.


Our Data Source Node will now be renamed to SALES.

Step 3 – Setup the Association Build Node

Under the Model section of the Component Palette select Association. Move the mouse to your work area (perhaps just to the right of the SALES node) and click. Our Association Node will be created.


For the next step we need to join our data source (SALES) to the Association Build Node. Right click on the SALES data node and select Connect from the drop down menu. Then move the mouse to the Association Build node and click. You should now have the two nodes connected.

We will now get the Edit Association Build Node property window opening for us. We will need to enter the following information:

  • Transaction ID: This is the attribute(s) that can be used to uniquely identify each transaction. In our example the Customer ID and the Time ID of the transaction allow us to identify what we want to analyse by, i.e. the basket. This will group all the related transactions together.
  • Item ID: This is the attribute of the thing you want to analyse. In our case we want to analyse the Products purchased, so select PROD_ID.
  • Value: This is an identifier used to specify another column within the transaction data to combine with the Item ID. <Existence> means that you want to see if there is any type of common bundling among all values of the selected Item ID. Use this.


Like all data mining products, Oracle has just one Algorithm to use for Association Rule Analysis, the Apriori Algorithm.

Click the OK button. You are now ready to run the Association Build Node. Right click on the node and select Run from the menu. After a short time everything should finish and we will have the little green tick marks on each of the nodes.


 

Check out the next post in the series (Part 2) where we will look at how you can examine the rules produced by our model in ODM.

Monday, November 19, 2012

Review Oracle Magazine- July/August 1998

The headline articles for the July/August 1998 edition of Oracle Magazine were on designing, developing and delivering Data Marts using the Oracle Database and related tools. The main article looks at the different phases of developing a data mart in 90 days.

One of the key messages in these articles is to keep focused on the business problem; the technology part is very minor in this. This message is still very relevant to the analytics and big data world: keep focused on the business problem.

[Oracle Magazine cover image]

Other articles included:

  • Oracle ships JDeveloper Suite. It included App Builder for Java, Oracle Application Server 4.0, Oracle 8 Database Server, Symantec’s Visual Page HTML editor, and a one year developer’s membership in the Oracle Technology Network. Yes, there used to be a cost to be a member of OTN!!!
  • Oracle Web Developer Suite wins the PC Magazine Editors’ Choice award. The suite comes with full development licences for Designer/2000 Release 2.1, including object extensions, Developer/2000 Release 2.1, Oracle App Builder for Java, Oracle Application Server 3.0, Oracle Database Server (releases 7 and 8) and the Oracle Web Development Kit.
  • Oracle Support announce plans to give read only access, via the web, to its Bug database.
  • There was an advert for TOAD when it was still freeware and provided by ToadSoft.
  • Security management for Oracle 8 has been increasing in importance over the past few years. For all those people who have some security responsibilities, here are some key elements of database security: system security, data security, user security, password management and system auditing. Security is more than just providing a firewall.
  • Building message-based apps with Oracle 8’s Advanced Queuing involves 5 main steps (a short sketch follows this list), including:
    • Start the server’s AQ background process
    • Create a database user account to manage queues
    • Create a user-defined type for application messages
    • Create a queue table and a corresponding queue of the user defined message type
    • Build the application to enqueue and dequeue messages of the user defined message type
  • For the DBAs there was an article on the Fast Full Index Scan, how to enable it, and a number of examples of the hints involved, including INDEX_FFS.
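As a side note (not from the magazine), here is a minimal sketch of those queuing steps using the DBMS_AQADM package; the message type, queue table and queue names are made up for illustration:

CREATE TYPE app_message_t AS OBJECT (msg_id NUMBER, msg_text VARCHAR2(4000));
/

BEGIN
   -- Create the queue table, a queue of that payload type, and start the queue
   DBMS_AQADM.CREATE_QUEUE_TABLE(
      queue_table        => 'app_msg_qt',
      queue_payload_type => 'app_message_t');
   DBMS_AQADM.CREATE_QUEUE(
      queue_name  => 'app_msg_q',
      queue_table => 'app_msg_qt');
   DBMS_AQADM.START_QUEUE(queue_name => 'app_msg_q');
END;
/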

To view the cover page and the table of contents click on the image at the top of this post or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Friday, November 16, 2012

Accepted for BIWA Summit–9th to 10th January

I received an email today to say that I had a presentation accepted for the BIWA Summit. This conference will be in the Sofitel Hotel beside the Oracle HQ in Redwood City.

The title of the presentation is “The Oracle Data Scientist” and the abstract is

Over the past 18 months we have seen a significant increase in the demand for Data Scientists. But how does someone become a data scientist? If we examine the requirements and job descriptions of this role we can see that being able to understand and process data are fundamental skills. So an Oracle developer is ideally suited to being a Data Scientist. The presentation will show how an Oracle developer can evolve into a data scientist through a number of stages, including BI developer, OBIEE developer, statistical analysis, data miner and data scientist. The tasks and tools will be discussed and explored through each of these roles. The second half of the presentation will focus on the data mining functionality available in SQL and PL/SQL. This will consist of a demonstration of an Analytics Development environment and how you can migrate (and use) your models in a Production environment.

For some reason Simon Cowell of XFactor fame kept on popping into my head and it now looks like he will be making an appearance in the presentation too. You will have to wait until the conference to find out what Simon Cowell and Being an Oracle Data Scientist have in common.

Check out the BIWA Summit website for more details and to register for the event.

I’ll see you there Smile

Friday, November 9, 2012

Update on : Adding numbers between

Over the past few days I’ve had a number of emails and comments based on my previous post. My previous post was called ‘Adding numbers between two values’. I included some PL/SQL code that can be used to add up the numbers between two values. I mentioned that this was a question that my pre-teen son (a few years pre-teen) had asked me.

There are two main solutions to the same problem. One involves just using a SELECT and the other involves using recursion. I will come back the these alternative solutions below.

But let me start off with a bit more detail and background on why I approached the problem the way that I did. The main reason is that my son is a pre-teen. Over the past couple of years he has expressed an interest in what his daddy does. We even have matching ORACLENERD t-shirts.

When I was working through the problem with my son I wanted to show him how to take a problem and by breaking it down into its different parts we can work out an overall solution. We can then take each of these parts and translate them into code. In this case some PL/SQL, yes it is a bit nerdy and we do have the t-shirt. The code that I gave illustrates many different parts of the language and hopefully he will use some of these features as we continue on our learning experience.

It is good sometimes to break a problem down into smaller parts. That way we can understand it better and see what works and what does not work; if something does not work then we will know which bit, and it also leads to easier maintenance. At a later point, as you develop an in-depth knowledge of certain features of a language, you can then rewrite what you have to be more efficient.

All part of the learning experience.

OK, let’s take a look at the other ways to answer this problem. The first approach is to just use a single SELECT statement.

SELECT sum(rownum + &&Start_Number - 1)
FROM    dual
CONNECT by level <= &End_Number - &&Start_Number + 1;

An even simpler way is

SELECT sum(level)
FROM    dual
CONNECT BY level between &Start_Number and &End_Number;

These queries create a hierarchical query that produces all the numbers between the Start_Number parameter and the End_Number parameter. The SUM is needed to add all the numbers/rows produced. This is nice and simple (but not that easy for my son at this point).
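For completeness, the recursive approach I mentioned above could look something like the following. This is just a sketch of one possible recursive PL/SQL function (the name AddNumsRecursive is my own), not the exact solution that was sent in to me:

CREATE or REPLACE function AddNumsRecursive
       ( pStartNum IN NUMBER,
         pEndNum IN NUMBER)
   RETURN NUMBER
IS
BEGIN
   -- Base case: nothing left to add
   IF pStartNum > pEndNum THEN
      RETURN 0;
   END IF;
   -- Add the current number to the sum of the remaining numbers
   RETURN pStartNum + AddNumsRecursive(pStartNum + 1, pEndNum);
END;
/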

Thank you to everyone who contacted me about this. I really appreciated your feedback and please keep your comments coming for all my posts.

Tuesday, November 6, 2012

Adding numbers between two values

My son asked me the other day, what is the total you get if you add all the numbers between zero and 100?
We could have sat down to work it out with some paper and a pen, but instead I decided to introduce him to the world of Oracle, SQL and PL/SQL.
The first step we took was to work out how you would do it on paper for some of the numbers. Then we translated this into some PL/SQL code. OK, I did a lot of this, but he did seem to understand and follow what I was doing.
So the following function is what we ended up with to add all the numbers between two numbers and return the answer.
CREATE or REPLACE function AddNumsBetween
       ( pStartNum IN NUMBER,
         pEndNum IN NUMBER)
   RETURN NUMBER
IS
   vSum   Number := 0;
BEGIN
   FOR i IN pStartNum .. pEndNum LOOP
      vSum := vSum + i;
   END LOOP;
   return vSum;
END;
/

The next step was to write some code to call this function. The code prompts the user to enter the Start number and End number.
set serveroutput on
DECLARE
   vStartNum  NUMBER := 0;
   vEndNum   NUMBER := 100;
   vAnswer    NUMBER := 0;
BEGIN
   vStartNum := &Start_Number;
   vEndNum := &End_Number;
   vAnswer := AddNumsBetween(vStartNum, vEndNum);
   dbms_output.put_line('The sum of numbers between '||vStartNum||' and '||vEndNum||' is '||vAnswer||'.');
END;
/

To answer my son’s original query, we used zero and 100 as our inputs.
The answer to the question is 5,050.
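As a footnote (not part of what we worked through that day), the same answer comes from the well known arithmetic series formula, sum = n(n+1)/2 when starting from zero, which is easy to check with a one line query:

SELECT (100 * (100 + 1)) / 2 AS answer FROM dual;   -- returns 5050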