Saturday, December 31, 2011

My first set of Oracle Products

I started working with Oracle back in 1993 and my first project involved working with Oracle 5, Forms 2.3 and RPT for reports.

The Oracle Database and tools were very simple back then, but there were lots of “features” to work around.

Check out this video for a short demo of Oracle 5 and Forms 2.

What version of the Oracle Database do I need ?

Oracle has recently made available a very useful webpage that lists the functionality available for each version of the 11g Database. So before you decide which version of the database to purchase, check out this webpage.

http://www.oracle.com/us/products/database/product-editions-066501.html

Friday, December 23, 2011

i-BI : A new name for real BI

I’ve been working in BI and related fields since the mid 90s. Over the past number of years I’ve become a little bit confused about what Business Intelligence (BI) really means.  Maybe it’s just a bit of old age kicking in way too early.

It seems to me that the term Business Intelligence has been hijacked by a large number of companies and software vendors. Every “reporting tool” has been re-labelled as a Business Intelligence tool, without providing any real intelligence features. These are still just reporting tools. Yes, they have some nice graphics that can be used instead of just listing numbers. But that is not Business Intelligence.

Business Intelligence goes beyond what these tools are capable of. Most of the skills and abilities for BI come from the people who are doing it, not the tools. In reality you will need to use a number of tools, or write some custom code, to gain that extra bit of insight into your data. The “reporting tools” can then deliver the results.

Also, Ralph Kimball said a long time ago that someone working in the DW/BI area needs to be half-DBA and half-MBA.

A quote that I heard recently from the Predictive Analytics World Conference was “You need to be able to ask the right question”. This is to ensure that you can frame your analytics projects correctly and be able to measure the results.

I think this question was key back in the mid 90s when I started out in the BI field, and I still think it applies to all areas of BI.  The thing that we have lost in BI is the real intelligence part of it.

So I’m proposing a new name for real BI. It is intelligent-Business Intelligence (i-BI).

Let’s differentiate between BI and the real intelligent BI work.

What do I mean by intelligent BI (i-BI)?  What I mean are skills in Data Warehousing, Time Series Analysis, Advanced Analytics, Data Mining, Predictive Analysis, solving or addressing real business problems, etc.

Or maybe I’m just wrong and have missed some developments in BI over the past 16+ years. Or maybe I’m becoming a bit too cynical.

What do you think ?

Wednesday, December 21, 2011

Article for Oracle Scene – Due 13th Jan

As we approach Christmas, many of us will be looking forward to a few days of holidays/vacation. During this period we may start thinking about techniques or methods that we discovered over the past 12 months, or about things we need to find out more about over the coming months.

One thing to consider is writing an article on these techniques or methods for Oracle Scene.  The next due date for submitting articles is 13th January.

http://www.ukoug.org/what-we-offer/oracle-scene/editorial-calendar/

For more details and ideas check out my webpage Oracle Scene

Make sure you check out the Article Guidelines and Submission Details

http://www.ukoug.org/what-we-offer/oracle-scene/article-submissions/

I hope to write an article based on the presentation I gave at the UKOUG Conference in Birmingham.

The most common question that I get asked is ‘how long should it be?’.  The length of an article can be anything from half a page up to 4 or 5 pages long.

Tuesday, December 20, 2011

Updating your ODM (11g R2) model in production

In my previous blog posts on creating an ODM model, I gave the details of how you can do this using the ODM PL/SQL API.

But at some point you will have a fairly stable environment. This means that you will know what type of algorithm, and its corresponding settings, work best for your data.

At this point you should be able to re-create your ODM model in the production database. The frequency of this update depends on the number of new cases that you have, so updating your ODM model could be a daily, weekly or monthly task.


To update your model you will need to:

- Create a settings table for your model
- Create a new ODM model
- Rename the new ODM model to the production name

The following examples are based on the example data, model names, etc that I’ve used in my previous post.

Creating a Settings Table

The first step is to create a settings table for your algorithm. This will contain all the parameter settings needed to create the new model. You will have worked out these settings from your previous attempts at creating your models, and you will know what parameters and values work best.

-- Create the settings table
CREATE TABLE decision_tree_model_settings (
    setting_name VARCHAR2(30),
    setting_value VARCHAR2(30));

-- Populate the settings table
-- Specify DT. By default, Naive Bayes is used for classification.
-- Specify ADP. By default, ADP is not used.
BEGIN
    INSERT INTO decision_tree_model_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.algo_name,       
           dbms_data_mining.algo_decision_tree);
   
    INSERT INTO decision_tree_model_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.prep_auto,dbms_data_mining.prep_auto_on);
  
    COMMIT;
END;
/

Create a new ODM Model

We will need to use the DBMS_DATA_MINING.CREATE_MODEL procedure. In our example we will want to create a Decision Tree based on our sample data, which contains the previously generated cases and the new cases since the last model rebuild.

BEGIN
    DBMS_DATA_MINING.CREATE_MODEL(
        model_name          => 'Decision_Tree_Method2',
        mining_function     => dbms_data_mining.classification,
        data_table_name     => 'mining_data_build_v',
        case_id_column_name => 'cust_id',
        target_column_name  => 'affinity_card',
        settings_table_name => 'decision_tree_model_settings');
END;
/

Rename your ODM model to production name

The model we have created above does not have the name that is used by our production software. So we will need to rename it to our production name.

But we need to be careful about when we do this. If you drop or rename a model while it is being used, then you can end up with indeterminate results.

What I suggest you do is pick a time of the day when your production software is not doing any data mining. You should drop the existing model (or rename it) and then rename the new model to the production model name.

DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE');

and then

DBMS_DATA_MINING.RENAME_MODEL('Decision_Tree_Method2', 'CLAS_DECISION_TREE');
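
If you want to keep this switch-over window as small as possible, the drop and rename can be done in a single anonymous block. A minimal sketch, using the model names from above, and assuming no sessions are scoring with the model at that moment:

BEGIN
   -- remove the old production model
   DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE');
   -- put the newly built model live under the production name
   DBMS_DATA_MINING.RENAME_MODEL('Decision_Tree_Method2', 'CLAS_DECISION_TREE');
END;
/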

Monday, December 19, 2011

Oracle Analytics Update & Plan for 2012

On Friday 16th December, Charlie Berger (Sr. Director, Product Management, Data Mining & Advanced Analytics) posted the following on the Oracle Data Mining forum on OTN.

“… soon you'll be able to use the new Oracle R Enterprise (ORE) functionality. ORE is currently in beta and is targeted to go General Availability in the near future. ORE brings additional functionality to the ODM Option, which will then be renamed to the Oracle Advanced Analytics Option to reflect the significant adv. analytical functionality enhancements. ORE will allow R users to write R scripts and run them inside the database and eliminate and/or minimize data movement in/out of the DB. ORE will provide R to SQL transparency for SQL push-down to in-DB SQL and an expanding library of Oracle in-DB statistical functions. Packages that cannot be pushed down will be run in embedded R mode while the DB manages all data flows to the multiple R engines running inside the DB.


In January, we'll open up a new OTN discussion forum specifically for Oracle R Enterprise focused technical discussions. Stay tuned.

I’m looking forward to getting my hands on the new Oracle R Enterprise, in 2012. In particular I’m keen to see what additional functionality will be added to the Oracle Data Mining option in the DB.

So watch out for the rebranding to Oracle Advanced Analytics.

Charlie – Any chance of an advance copy of ORE and related DB bits and bobs?

Sunday, December 18, 2011

Recent Wood Carvings

I’ve managed to get enough time over the past couple of days to finish some wood carvings that I started a couple of months ago.

An Angel for the Christmas Tree (beech)

A name plate for the house (beech)

A Sun face for the shed door (ash)

Tuesday, December 13, 2011

Oracle Big Data Videos

Mark Townsend, Database Product Manager at Oracle, gave a presentation on Big Data at the UKOUG conference and used the following videos to illustrate how a company can evolve its Big Data into useful and meaningful information.

Big Data – The Challenge

Big Data – Gold Mine or just Stuff

Big Data – Big Data Speaks

Big Data – Everything You Always Wanted to Know

Big Data – Little Data

Monday, December 12, 2011

My UKOUG Presentation on ODM PL/SQL API

On Wednesday 7th Dec I gave my presentation at the UKOUG conference in Birmingham. The main topic of the presentation was using the Oracle Data Miner PL/SQL API to implement a model in a production environment.

There was a good turn out considering it was the afternoon of the last day of the conference.

I asked the attendees about their experience of using the current and previous versions of the Oracle Data Mining tool. Only one of the attendees had used the pre 11g R2 version of the tool.

From my discussions with the attendees, it looks like they would have preferred an introduction/overview type presentation of the new ODM tool. I had submitted a presentation on this, but sadly it was not accepted.  Not enough people had voted for it.

For next year, I will submit an introduction/overview presentation again, but I need more people to vote for it. So watch out for the voting stage next June and vote for it.

Here are the links to the presentation and the demo scripts (which I didn’t get time to run)

My Presentation

Demo Script 1 – Exploring and Exporting model

Demo Script 2 – Import, Dropping and Renaming the model. Plus Queries that use the model

Monday, December 5, 2011

Ireland table at Focus Pub tonight

Today (Monday 5th Dec) is the first day of the UKOUG Conference in Birmingham.

Tonight we have the Focus Pubs session starting at 8:45pm. This year we have an Ireland table for all of the Irish people at the conference to gather at and to meet.

I’ll be there so drop along and say hello.

Friday, December 2, 2011

I’m an Oracle ACE

At 5:20pm today (Friday 2nd December), I received an email from the Oracle ACE program.  I had been nominated for the award of Oracle ACE.

“You have been chosen based on your significant contribution and activity in the Oracle technical community.  Like your fellow Oracle ACEs, you have demonstrated a proficiency in Oracle technology as well as a willingness to share your knowledge and experiences with the community.”

I am so honoured, considering the experts from around the world that are members of the Oracle ACE program.

The Oracle ACE Award is issued by Oracle Corporation and is made to people who are known in the Oracle community for their strong credentials as enthusiasts and advocates, and for their technical knowledge.

Thursday, December 1, 2011

Oracle Big Data & Analytics Sessions at UKOUG Conference

There are a number of BIG Data and Analytics presentations at the UKOUG Conference in Birmingham (4th Dec – 7th Dec).

I’ve worked my way through the agenda grids for each day of the conference and I’ve come up with the following list.  If you are interested in BIG Data and Analytics, these presentations are a must-see.

Monday 5th Dec
  12:15-13:15 : Exadata Live – Graham Wood – Hall 7A

Tuesday 6th Dec
  9:00-10:00 : Big Data - Are you ready? – Mark Townsend – Hall 1
  10:10-10:55 : Who’s afraid of Analytic Functions – Alex Nuijten – Hall 5
  11:15-12:15 : Analysing Your Data with Analytic Functions – Carl Dudley – Hall 9
  16:40-17:40 : Mobile Analytics using OBIEE 11g – Jon Mead – Exec Room 1

Wednesday 7th Dec
  9:00-10:00 : Oracle 11g Database Automatic Parallelism – Joel Goodman – Hall 9
  15:20-16:05 : How to Deploy your Oracle Data Miner in a Live Environment – me

Monday, November 28, 2011

Exalytics Events over the next week

The BIWA SIG is hosting a TechCast called “Using Oracle R Enterprise” on Wednesday 30th November 2011, at noon EST (approx 5pm GMT).

The TechCast is being presented by Mark Hornick, Senior Manager, Oracle Advanced Analytics Development

URL for TechCast: https://stbeehive.oracle.com/bconf/confDetails?confID=334B:3BF0:owch:38893C00F42F38A1E0404498C8A6612B0004075AECF7&guest=true&confKey=608880
-- Web Conference ID: 303397
-- Web Conference Key: 608880
-- Dialup: 1-866-682-4770, ID 5548204, passcode 1234

Several analytic tool vendors have added R-integration to their software. However, Oracle is the largest company to throw their weight behind R. On October 3, Oracle unveiled their integration of R: Oracle R Enterprise (http://www.oracle.com/us/corporate/features/features-oracle-r-enterprise-498732.html) as part of their Oracle Big Data Appliance announcement (http://www.oracle.com/us/corporate/press/512001).


Oracle R Enterprise allows users to perform statistical analysis with advanced visualization on data stored in Oracle Database. Oracle R Enterprise enables scalable R solutions, while facilitating production deployment of R scripts and Hadoop based solutions, as well as integration of R results with Oracle BI Publisher and OBIEE dashboards.

Check out the Oracle YouTube video (5min) that demos how an Exalytics application can analyse almost a billion records instantly.

If you are attending the UKOUG Conference in Birmingham, Jon Mead (RittmanMead) is giving a presentation called “What can Exalytics do for me?” on Tuesday 6th December @15:35, in the area above the box office.

Thursday, November 24, 2011

Applying an ODM Model to new data in Oracle – Part 2

This is the second of a two-part blog posting on using an Oracle Data Mining model to apply it to, or score, new data. The first part looked at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.

This second part looks at how you can apply or score the new data, using our ODM model, in real-time mode, scoring a single record at a time.

PREDICTION Function

The PREDICTION SQL function can be used in many different ways. The following examples illustrate the main ones. Again we will be using the same data set, with the data in our NEW_DATA_TO_SCORE table.

The syntax of the function is

PREDICTION (model_name USING attribute_list)

Example 1 – Real-time Prediction Calculation

In this example we will select a record and calculate its predicted value. The function will return the predicted value with the highest probability.

SELECT cust_id, prediction(clas_decision_tree using *)
FROM   NEW_DATA_TO_SCORE
WHERE cust_id = 103001;

   CUST_ID PREDICTION(CLAS_DECISION_TREEUSING*)
---------- ------------------------------------
    103001                                    0

So the predicted class value is 0 (zero), and this has a higher probability than a class value of 1.

We can compare and check this result with the result that was produced using the DBMS_DATA_MINING.APPLY procedure (see the previous blog post).

SQL> select * from new_data_scored
  2  where cust_id = 103001;

   CUST_ID PREDICTION PROBABILITY
---------- ---------- -----------
    103001          0           1
    103001          1           0

Here we can see that the class value of 0 has a probability of 1 (100%) and the class value of 1 has a probability of 0 (0%).

Example 2 – Selecting top 10 Customers with Class value of 1

For this we are selecting from our NEW_DATA_TO_SCORE table. We want to find the records that are predicted to have a class value of 1, and we only want to return the first 10 of these.

SELECT cust_id
FROM    NEW_DATA_TO_SCORE
WHERE PREDICTION(clas_decision_tree using *) = 1
AND       rownum <=10;

   CUST_ID
----------
    103005
    103007
    103010
    103014
    103016
    103018
    103020
    103029
    103031
    103036
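
The ROWNUM predicate above simply returns the first 10 matching records that Oracle finds, in no particular order. If you want the 10 records where the model is most confident of a class value of 1, one option is to order by the PREDICTION_PROBABILITY function (introduced in the next example) before applying ROWNUM. A sketch:

SELECT cust_id
FROM  (SELECT cust_id
       FROM   new_data_to_score
       WHERE  PREDICTION(clas_decision_tree USING *) = 1
       ORDER BY PREDICTION_PROBABILITY(clas_decision_tree USING *) DESC)
WHERE  rownum <= 10;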

Example 3 – Selecting records based on Prediction value and Probability

For this example we want to find out what countries the customers come from, where the prediction is 0 (won’t take up the offer) and the probability of this prediction is 1 (100%). This example introduces the PREDICTION_PROBABILITY function, which allows us to use the probability strength of the prediction.

select country_name, count(*)
from   new_data_to_score
where  prediction(clas_decision_tree using *) = 0
and    prediction_probability (clas_decision_tree using *) = 1
group by country_name
order by count(*) asc;

COUNTRY_NAME                               COUNT(*)
---------------------------------------- ----------
Brazil                                            1
China                                             1
Saudi Arabia                                      1
Australia                                         1
Turkey                                            1
New Zealand                                       1
Italy                                             5
Argentina                                        12
United States of America                        293

The examples that I have given above are only the basic examples of using the PREDICTION function. Other uses include the PREDICTION_COST, PREDICTION_SET and PREDICTION_DETAILS functions. Examples of these will be covered in a later blog post.

Tuesday, November 22, 2011

Oracle Ireland: Data Centre Transformation Event 7th December

Oracle in Ireland is hosting a session called Data Centre Transformation on 7th December (9:30-13:00), in the Guinness Storehouse, St James Gate, Dublin 8.

The agenda for this session is

9:00 Registration & Coffee
10:00 The 21st Century Data Centre, Delivered by Oracle Solaris – Mike Ramchand
10:30 Oracle Enterprise Manager 12c – John Caulfield, Solutions Director
11:00 Oracle Virtualised Systems (VM 3.0) – Dave Patterson, Oracle Hardware
11:30 Coffee Break
12:00 Transformative Oracle Storage Solutions – Neil Caughey, Oracle Storage Business Unit
12:30 Extreme Performance with Oracle Exadata and Exalogic – Brian Grant, Oracle Exalogic Business Development Manager

To book your place on this event email oracle.events@ketchumpleon.com

Or register by following this web link.

I won’t be at this event as I’ll be presenting in the afternoon at the UKOUG conference in Birmingham.

Monday, November 21, 2011

Applying an ODM Model to new data in Oracle – Part 1

This is the first of a two-part blog posting on using an Oracle Data Mining model to apply it to, or score, new data.  This first part looks at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.

The second part will be posted in a couple of days and will look at how you can apply or score the new data, using our ODM model, in real-time mode, scoring a single record at a time.

DBMS_DATA_MINING.APPLY

Instead of applying the model to data as it is captured, you may need to apply a model to a large number of records at the same time. To perform this bulk processing we can use the APPLY procedure that is part of the DBMS_DATA_MINING package. The format of the procedure is

DBMS_DATA_MINING.APPLY (
      model_name           IN VARCHAR2,
      data_table_name      IN VARCHAR2,
      case_id_column_name  IN VARCHAR2,
      result_table_name    IN VARCHAR2,
      data_schema_name     IN VARCHAR2 DEFAULT NULL);

Parameter Name       Description
MODEL_NAME           The name of your data mining model
DATA_TABLE_NAME      The source data for the model. This can be a table or a view.
CASE_ID_COLUMN_NAME  The attribute that gives uniqueness to each record. This could be the Primary Key, or, if the PK contains more than one column, a new attribute is needed
RESULT_TABLE_NAME    The name of the table where the results will be stored
DATA_SCHEMA_NAME     The schema name for the source data

The main condition for applying the model is that the source table (DATA_TABLE_NAME) needs to have the same structure as the table that was used when creating the model.

Also, the data needs to be preprocessed in the same way as the training data, to ensure that the data in each attribute/feature has the same formatting.

When you use the APPLY procedure it does not update the original data/table, but creates a new table (RESULT_TABLE_NAME) with a structure that depends on the underlying DM algorithm. The following gives the result table description for the main DM algorithms:

For Classification algorithms

case_id      VARCHAR2/NUMBER
prediction   NUMBER / VARCHAR2  -- depending on the target data type
probability  NUMBER

For Regression

case_id     VARCHAR2/NUMBER
prediction  NUMBER

For Clustering

case_id      VARCHAR2/NUMBER
cluster_id   NUMBER
probability  NUMBER

Example / Case Study

My last few blog posts on ODM have covered most of the APIs for building and transferring models. We will be using the same data set in these posts. The following code uses the same data and models to illustrate how we can use the DBMS_DATA_MINING.APPLY procedure to perform a bulk scoring of data.

In my previous post we used the EXPORT and IMPORT procedures to move a model from one database (Test) to another database (Production). The following example uses the model in Production to score new data. I have set up a sample of data (NEW_DATA_TO_SCORE) from the SH schema, using the same set of attributes as was used to create the model (MINING_DATA_BUILD_V). This data set contains 1500 records.

SQL> desc NEW_DATA_TO_SCORE
Name                                 Null?    Type
------------------------------------ -------- ------------
CUST_ID                              NOT NULL NUMBER
CUST_GENDER                          NOT NULL CHAR(1)
AGE                                           NUMBER
CUST_MARITAL_STATUS                           VARCHAR2(20)
COUNTRY_NAME                         NOT NULL VARCHAR2(40)
CUST_INCOME_LEVEL                             VARCHAR2(30)
EDUCATION                                     VARCHAR2(21)
OCCUPATION                                    VARCHAR2(21)
HOUSEHOLD_SIZE                                VARCHAR2(21)
YRS_RESIDENCE                                 NUMBER
AFFINITY_CARD                                 NUMBER(10)
BULK_PACK_DISKETTES                           NUMBER(10)
FLAT_PANEL_MONITOR                            NUMBER(10)
HOME_THEATER_PACKAGE                          NUMBER(10)
BOOKKEEPING_APPLICATION                       NUMBER(10)
PRINTER_SUPPLIES                              NUMBER(10)
Y_BOX_GAMES                                   NUMBER(10)
OS_DOC_SET_KANJI                              NUMBER(10)

SQL> select count(*) from new_data_to_score;

  COUNT(*)
----------
      1500

The next step is to run the DBMS_DATA_MINING.APPLY procedure. The parameters that we need to feed into this procedure are:

Parameter Name       Value
MODEL_NAME           CLAS_DECISION_TREE (we imported this model from our test database)
DATA_TABLE_NAME      NEW_DATA_TO_SCORE
CASE_ID_COLUMN_NAME  CUST_ID (this is the PK)
RESULT_TABLE_NAME    NEW_DATA_SCORED (a new table that will be created, containing the Prediction and Probability)

The NEW_DATA_SCORED table will contain 2 records for each record in the source data (NEW_DATA_TO_SCORE): one record for each of the target values (0 or 1), along with the probability of each. So for our NEW_DATA_TO_SCORE, which contains 1,500 records, we will get 3,000 records in the NEW_DATA_SCORED table.

To apply the model to the new data we run:

BEGIN
  dbms_data_mining.apply(
  model_name => 'CLAS_DECISION_TREE',
  data_table_name => 'NEW_DATA_TO_SCORE',
  case_id_column_name => 'CUST_ID',
  result_table_name => 'NEW_DATA_SCORED');
END;
/

This takes 1 second to run on my laptop, so this apply/scoring of new data is really quick.

The new table NEW_DATA_SCORED has the following description

SQL> desc NEW_DATA_SCORED
Name                            Null?    Type
------------------------------- -------- -------
CUST_ID                         NOT NULL NUMBER
PREDICTION                               NUMBER
PROBABILITY                              NUMBER

SQL> select count(*) from NEW_DATA_SCORED;

  COUNT(*)
----------
      3000

We can now look at the prediction and the probabilities

SQL> select * from NEW_DATA_SCORED where rownum <=12;

   CUST_ID PREDICTION PROBABILITY
---------- ---------- -----------
    103001          0           1
    103001          1           0
    103002          0  .956521739
    103002          1  .043478261
    103003          0  .673387097
    103003          1  .326612903
    103004          0  .673387097
    103004          1  .326612903
    103005          1  .767241379
    103005          0  .232758621
    103006          0           1
    103006          1           0

12 rows selected.
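
If you only want the most likely outcome for each customer, rather than one row per target value, you can keep just the row with the highest probability for each case. A minimal sketch using an analytic function over the NEW_DATA_SCORED table described above:

SELECT cust_id, prediction, probability
FROM  (SELECT cust_id, prediction, probability,
              ROW_NUMBER() OVER (PARTITION BY cust_id
                                 ORDER BY probability DESC) rn
       FROM   new_data_scored)
WHERE  rn = 1;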

How Many Sleeps to Santa


select
to_date('25/12/2011','DD/MM/YYYY') - trunc(sysdate) "How Many Sleeps to Santa"
from dual;

How Many Sleeps to Santa
------------------------
                      34

Thursday, November 17, 2011

Call for Presentations : OUG Ireland Conference 2012

The call for presentations for the annual Oracle User Group Ireland conference has been posted in the last few days.
The conference is planned for March 2012 and the venue will be picked over the next few weeks.
I’m on the organising committee this year. It is hoped to have a number of parallel streams covering core Database Technology, BI (& EPM) and Development (including Fusion).
If you are interested in giving a short presentation of approx. 45 minutes (including time for questions), then you will need to submit your topic and abstract using the following link:  www.oug.org/Irelandpapers
The conference is not limited to presenters from Ireland and it is hoped to get a number of well known Oracle experts and Oracle ACEs to come to Dublin for the day.
What kind of topics are of interest? Well, pretty much anything Oracle. We have all come across something interesting in our jobs that we could share, be it using a particular technique, new features, sharing experiences, best practices, product demos, etc.
I’ve already submitted a presentation on Oracle Data Miner.
There is a Twitter hash tag for the conference, #oug_ire2012.  So add this to your Twitter tool to follow developments and announcements about the conference.
If you have any questions about the conference, drop me an email.

Wednesday, November 16, 2011

My UKOUG Conference 2011 Schedule

UK Oracle User Group Conference 2011

The UKOUG conference will be in a couple of weeks. I have my flights and hotel booked, and I’ve just finished selecting my agenda of presentations. I really enjoy this conference as it serves many purposes, including finding out the new directions Oracle is taking, learning about new product features, some upskilling/training, confirming that the approaches I have been using on projects are valid, and getting lots of hints and tips.

One thing that I always try to do, and I strongly encourage everyone (in particular first timers) to do, is to go to one session every day that is on a topic or product that you know (nearly) nothing about.  You might discover that you know more than you think, or you may learn something new that can be fed into some project on your return or over the next 12 months.

My agenda for the conference currently looks very busy, and in between these sessions there is the exhibition hall, meetings with old and new friends, meetings with product/business unit managers, asking people to write articles for Oracle Scene, checking out possible presenters to come to Ireland for our conference in March 2012, etc.  Then there is my presentation on the Wednesday afternoon.

Sunday

I’ll miss most of the Oak Table event on the Sunday but I hope to make it in time for

16:40-17:30 : Performance & High Availability Panel Session

Monday

9:20-9:50 : Keynote by Mark Sunday, Oracle (H1)
10:00-10:45 : The Future of BI & Oracle roadmap, Mike Durran, Oracle (H5)
11:05-12:05 : Implementing Interactive Maps with OBIEE 11g, Antony Heljula, Peak Indicators (H10A)
12:15-13:15 : OBI 11g Analysis & Reporting New Features, Mark Rittman (8A)
14:30-15:15 : Master Data Management – What is it & how to make it work – Robert Barnett, Hub Solutions Designs (H10A)
16:20-17:35 : Dummies Guide to Oracle ADF, Grant Ronald, Oracle, (Media Suite)
16:35-18:30 : The DB Time Performance Method, Graham Wood, Oracle (H8A)
17:45-18:30 : Performance & Stability with Oracle 11g SQL Plan Management, Doug Burns (H1)
17:45-18:30 : Experiences in Virtualization, Michael Doherty (H10A)
19:45-20:45 : Exhibition Welcome Drinks
20:45-Late : Focus Pubs

Tuesday

9:00-11:00 : Next Generation BI Architectures Masterclass, Andrew Bond, Oracle (H10B)
10:10-10:55 : Who’s afraid of Analytic Functions, Alex Nuijten, Maxima (H5)
11:15-12:15 : Analysing Your Data with Analytic Functions, Carl Dudley, (H9)
11:25-13:25 : Using a Physical Standby to Minimize Downtime for DB Release or Server Change, Michael Abbey, Pythian (Media Suite)
14:40-15:25 : How not to make the headlines, Mark Clewett, Hitachi (H10A)
14:40-15:25 : APEX Back to Basics, Paul Broughton, APEX Evangelists (H9)
15:35-16:20 : Can People be identified in the database, Pete Finnigan (H1)
16:40-18:35 : OTN Hands-on Workshop, Todd Trichler, Oracle (H8A)
17:50-18:35 : SQL Developer Data Modeler as a replacement for Oracle Designer, Paul Bainbridge, Fujitsu, (H8B)
18:45-19:45 : Keynote : Future of Enterprise Software and Oracle, Ray Wang, Constellation Research (H1)
20:00-Late : Evening Social & Networking

Wednesday

9:00-10:00 : Oracle 11g Database: Automatic Parallelism, Joel Goodman, Oracle (H9)
9:00-10:00 : Big Data: Learn how to predict the future, Keith Laker, Oracle (H8B)
10:10-10:55 : All about indexes – What to index, when and how, Mark Bobak, ProQuest (H5)
11:20-12:30 : Using Application Express to Build Highly Accessible Products, Anthony Rayner, Oracle (H8A)
12:30-13:30 : Practical uses for APEX Dictionary, John Scott, APEX Evangelists (H8A)
15:20-16:05 : How to deploy your Oracle Data Miner 11g R2 Workflows in a Live Environment – Me (H7B)
16:15-17:00 : Next Generation Data Warehousing, Kulvinder Hari, Oracle (H8A)
16:15-17:00 : Beyond RTFM and WTF Message Moments. Introducing a new standard: Oracle Fusion Applications User Assistance, Ultan O’Broin (Executive Room 7)

I know I have some overlapping sessions, but I will decide on the day which of these I will attend.

As you can see, I will be following the BI stream mainly, with a few sessions from the Database and Development streams too.

This year there is a smartphone app to help us organise our agenda, meetings, etc. The only downside is that the app does not import the agenda that I created on the website, so I have to do it again. Maybe for next year they will have an import agenda feature.

New UKOUG mobile app – Launched October 2011

Wednesday, November 9, 2011

ODM–PL/SQL API for Exporting & Importing Models

In a previous blog post I talked about how you can take a copy of a workflow developed in Oracle Data Miner, and load it into a new schema.
When your data mining project gets to a mature stage and you need to productionalise the data mining process and model updates, you will need to use a different set of tools.

As you gather more and more data and cases, you will be updating/refreshing your models to reflect this new data. The newly updated data mining model then needs to be moved from the development/test environment to the production environment. As with all things in IT, we would like to automate this updating of the model in production.
There are a number of database features and packages that we can use to automate the update. It involves setting up some scripts on the development/test database and also on the production database.

These steps include:

  • Create a directory on the development/test database server
  • Export the updated Data Mining model
  • Copy the exported Data Mining model to the production server
  • Remove the existing Data Mining model from production
  • Import the new Data Mining model
  • Rename the imported model to the standard name

The DBMS_DATA_MINING PL/SQL package has 2 functions that allow us to export a model and to import a model. These functions are an API to Oracle Data Pump. The function to export a model is DBMS_DATA_MINING.EXPORT_MODEL and the function to import a model is DBMS_DATA_MINING.IMPORT_MODEL. The parameters to these functions are what you would expect if you were using Data Pump directly, but they have been tailored for the data mining models.

Let’s start by listing the models that we have in our development/test schema:

SQL> connect dmuser2/dmuser2
Connected.
SQL> SELECT model_name FROM user_mining_models;

MODEL_NAME
------------------------------
CLAS_DT_1_6
CLAS_SVM_1_6
CLAS_NB_1_6
CLAS_GLM_1_6

Create/define the directory on the server where the models will be exported to.

CREATE OR REPLACE DIRECTORY DataMiningDir AS 'c:\app\Data_Mining_Exports';

The schema you are using will need to have the CREATE ANY DIRECTORY privilege.
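
If the schema does not already have this privilege, a DBA can grant it. A one-line sketch, assuming our DMUSER2 schema:

GRANT CREATE ANY DIRECTORY TO dmuser2;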

Now we can export our model. In this example we are going to export the Decision Tree model (CLAS_DT_1_6).

DBMS_DATA_MINING.EXPORT_MODEL function
The function has the following structure

DBMS_DATA_MINING.EXPORT_MODEL (
     filename IN VARCHAR2,
     directory IN VARCHAR2,
     model_filter IN VARCHAR2 DEFAULT NULL,
     filesize IN VARCHAR2 DEFAULT NULL,
     operation IN VARCHAR2 DEFAULT NULL,
     remote_link IN VARCHAR2 DEFAULT NULL,
     jobname IN VARCHAR2 DEFAULT NULL);

If we wanted to export all the models into a file called Exported_DM_Models, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_DM_Models', 'DataMiningDir');

If we just wanted to export our Decision Tree model to file Exported_CLASS_DT_Model, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name in (''CLAS_DT_1_6'')');

DBMS_DATA_MINING.DROP_MODEL function
Before you can load the newly updated data mining model into your production database, you need to drop the existing model. Before you do this, you need to ensure that the model is not in use, so it would be advisable to schedule the dropping of the model during a quiet time, like before or after the nightly backups/processes.

DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE', TRUE)

DBMS_DATA_MINING.IMPORT_MODEL function
Warning: When importing the data mining model, you need to import it into a tablespace that has the same name as the tablespace in the development/test database.  If the USERS tablespace is used in the development/test database, then the model will be imported into the USERS tablespace in the production database.

Hint : Create a DATAMINING tablespace in your development/test and production databases. This tablespace can be used solely for data mining purposes.

To import the decision tree model we exported previously, we would run

DBMS_DATA_MINING.IMPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name=''CLAS_DT_1_6''', 'IMPORT', null, null, 'dmuser2:dmuser3');

We now have the new updated data mining model loaded into the production database.

DBMS_DATA_MINING.RENAME_MODEL function
The final step before we can start using the new updated model in our production database is to rename the imported model to the standard name that is being used in the production database.

DBMS_DATA_MINING.RENAME_MODEL('CLAS_DT_1_6', 'CLAS_DECISION_TREE');

Scheduling of these steps
We can wrap most of this up into stored procedures and schedule it to run on a semi-regular basis, using the DBMS_JOB package. The following example schedules a procedure that controls the importing, dropping and renaming of the models.

DBMS_JOB.SUBMIT(jobnum, 'import_new_data_mining_model;', trunc(sysdate), 'ADD_MONTHS(TRUNC(SYSDATE),1)');

This schedules the procedure that imports the new data mining models to run immediately, and then to run every month.
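
Note that the first parameter of DBMS_JOB.SUBMIT is an OUT parameter that receives the job number, so in practice the call needs to sit inside a block with a variable to receive it. A fuller sketch, assuming a stored procedure called import_new_data_mining_model that wraps the import, drop and rename steps:

DECLARE
   v_jobnum BINARY_INTEGER;
BEGIN
   DBMS_JOB.SUBMIT(
      job       => v_jobnum,                         -- job number assigned by the database
      what      => 'import_new_data_mining_model;',  -- PL/SQL to run
      next_date => TRUNC(SYSDATE),                   -- first run
      interval  => 'ADD_MONTHS(TRUNC(SYSDATE),1)');  -- then run monthly
   COMMIT;
END;
/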

Saturday, November 5, 2011

What Conference? If I had the time and money

If I had lots of free time and enough money, what conferences would I go to around the world? I regularly get asked for recommendations on what conferences a person should attend. It all depends on what you want to get out of your conference trip, be it training, education, information gathering, networking, etc., or to enjoy the local attractions.

The table below is my preferred list of conferences to attend. All of the conferences are focused on two main areas: the first is Oracle and the second is Data Mining/Predictive Analytics.

I hope you find the list useful. If you can recommend some others let me know.

Month       Conference
January     (none)
February    (none)
March       Annual Ireland Oracle Conference – Dublin, Ireland
            Predictive Analytics World – USA (San Francisco)
            Text Analytics World
            Hotsos Symposium
April       Collaborate (IOUG Conference USA)
            Enterprise Data World (USA)
            Miracle OpenWorld (Denmark)
May         OUG Harmony (Finland)
June        Oracle Development Tools User Group Kaleidoscope (Kscope)
            Data Governance – Summer Conference
            Oracle Benelux User Group Conference
July        VirtaThon – Online Oracle Conference
August      ACM SIGKDD Conference on KDD & Data Mining
September   (none)
October     Oracle Open World – San Francisco, USA
            Predictive Analytics World – USA (New York)
            SAS Analytics Conference
November    TDWI World Conference
            Data Governance – Winter Conference (USA)
            Predictive Analytics World – UK
            International Conference on Data Mining & Knowledge Engineering (ICDMKE)
            Australia Oracle User Group Conference
            Germany Oracle User Group Conference (DOAG)
December    Annual UKOUG Conference – Birmingham, UK
            IEEE International Conference on Data Mining (ICDM)
            Oracle Open World Latin America

There are a lot of conferences in the October, November and December months. Some of these are on overlapping dates, which is a pity; perhaps the organisers of some of these conferences could coordinate their dates. Also, during the January and February months there do not seem to be any conferences in these areas.

If you would like to sponsor a trip to one or more of these then drop me an email.

Thursday, November 3, 2011

ODM 11.2 Data Dictionary Views.

The Oracle 11.2 database contains the following Oracle Data Mining views. These allow you to query the database for the metadata relating to what data mining models you have, what their configurations are, and what data is involved.

ALL_MINING_MODELS

Describes the high level information about the data mining models in the database.  Related views include DBA_MINING_MODELS and USER_MINING_MODELS.

Attribute        Data Type          Description
OWNER            VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME       VARCHAR2(30) NN    Name of the mining model
MINING_FUNCTION  VARCHAR2(30)       Data mining function used: CLASSIFICATION, REGRESSION, CLUSTERING, FEATURE_EXTRACTION, ASSOCIATION_RULES or ATTRIBUTE_IMPORTANCE
ALGORITHM        VARCHAR2(30)       Algorithm used by the model: NAIVE_BAYES, ADAPTIVE_BAYES_NETWORK, DECISION_TREE, SUPPORT_VECTOR_MACHINES, KMEANS, O_CLUSTER, NONNEGATIVE_MATRIX_FACTOR, GENERALIZED_LINEAR_MODEL, APRIORI_ASSOCIATION_RULES or MINIMUM_DESCRIPTION_LENGTH
CREATION_DATE    DATE NN            Date the model was created
BUILD_DURATION   NUMBER             Time in seconds for the model build process
MODEL_SIZE       NUMBER             Size of the model in MBytes
COMMENTS         VARCHAR2(4000)

Let’s query my DMUSER2 data mining schema. This schema was created during a previous post, where we exported some ODM models from one schema and loaded them into the DMUSER2 schema.

SELECT model_name, 
       mining_function,
       algorithm,
       build_duration,
       model_size
FROM ALL_MINING_MODELS;

MODEL_NAME     MINING_FUNCTION   ALGORITHM                  BUILD_DURATION MODEL_SIZE
-------------- ----------------- -------------------------- -------------- ----------
CLAS_SVM_1_6   CLASSIFICATION    SUPPORT_VECTOR_MACHINES                 3      .1515
CLAS_DT_1_6    CLASSIFICATION    DECISION_TREE                           2      .0842
CLAS_GLM_1_6   CLASSIFICATION    GENERALIZED_LINEAR_MODEL                3      .0877
CLAS_NB_1_6    CLASSIFICATION    NAIVE_BAYES                             2      .0459

ALL_MINING_MODEL_ATTRIBUTES

Describes the attributes of the data mining models.  Related views are DBA_MINING_MODEL_ATTRIBUTES and USER_MINING_MODEL_ATTRIBUTES.

Attribute        Data Type          Description
OWNER            VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME       VARCHAR2(30) NN    Name of the mining model
ATTRIBUTE_NAME   VARCHAR2(30) NN    Name of the attribute
ATTRIBUTE_TYPE   VARCHAR2(11)       Logical type of the attribute: NUMERICAL (numeric data) or CATEGORICAL (character data)
DATA_TYPE        VARCHAR2(12)       Data type of the attribute
DATA_LENGTH      NUMBER             Length of the data type
DATA_PRECISION   NUMBER             Precision of a fixed point number
DATA_SCALE       NUMBER             Scale of the fixed point number
USAGE_TYPE       VARCHAR2(8)        Indicates whether the attribute was used to create the model (ACTIVE) or not (INACTIVE)
TARGET           VARCHAR2(3)        Indicates whether the attribute is the target

If we take one of the data mining models listed above, we can select what attributes are used by that model:

SELECT attribute_name,
       attribute_type,
       usage_type,
       target
from all_mining_model_attributes
where model_name = 'CLAS_DT_1_6';

ATTRIBUTE_NAME                 ATTRIBUTE_T USAGE_TY TAR
------------------------------ ----------- -------- ---
AGE                            NUMERICAL   ACTIVE   NO
CUST_MARITAL_STATUS            CATEGORICAL ACTIVE   NO
EDUCATION                      CATEGORICAL ACTIVE   NO
HOUSEHOLD_SIZE                 CATEGORICAL ACTIVE   NO
OCCUPATION                     CATEGORICAL ACTIVE   NO
YRS_RESIDENCE                  NUMERICAL   ACTIVE   NO
Y_BOX_GAMES                    NUMERICAL   ACTIVE   NO
AFFINITY_CARD                  CATEGORICAL ACTIVE   YES

The first thing to note here is that all the attributes are listed as ACTIVE. This is the default and will be the case for all attributes for all the algorithms, so we can ignore this column in our queries, but it is good to check just in case.

The second thing to note is that the last row shows AFFINITY_CARD with a target value of YES. This is the target attribute used by the classification algorithm.
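
As an aside, the TARGET column gives us a quick way of listing the target attribute for every model in the schema. A small sketch:

SELECT model_name,
       attribute_name
FROM   all_mining_model_attributes
WHERE  target = 'YES';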

ALL_MINING_MODEL_SETTINGS

Describes the settings of the data mining models. The settings associated with a model are algorithm dependent. The setting values can be provided as input to the model build process; alternatively, a separate settings table can be used. If no setting values are defined or provided, then the algorithm will use its default settings.

Attribute      Data Type          Description
OWNER          VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME     VARCHAR2(30) NN    Name of the mining model
SETTING_NAME   VARCHAR2(30) NN    Name of the setting
SETTING_VALUE  VARCHAR2(4000)     Value of the setting
SETTING_TYPE   VARCHAR2(7)        Indicates whether the default value (DEFAULT) or a user-specified value (INPUT) is used by the model

Let’s take our previous example of the CLAS_DT_1_6 model and query the database to see what the settings are.

column setting_value format a30
select setting_name, 
       setting_value,
       setting_type
from all_mining_model_settings
where model_name = 'CLAS_DT_1_6';

SETTING_NAME            SETTING_VALUE                SETTING
----------------------- ---------------------------- -------
ALGO_NAME               ALGO_DECISION_TREE           INPUT
PREP_AUTO               ON                           INPUT
TREE_TERM_MINPCT_NODE   .05                          INPUT
TREE_TERM_MINREC_SPLIT  20                           INPUT
TREE_IMPURITY_METRIC    TREE_IMPURITY_GINI           INPUT
CLAS_COST_TABLE_NAME    ODMR$15_42_50_762000JERWZYK  INPUT
TREE_TERM_MINPCT_SPLIT  .1                           INPUT
TREE_TERM_MAX_DEPTH     7                            INPUT
TREE_TERM_MINREC_NODE   10                           INPUT

Wednesday, November 2, 2011

Tom Kyte Seminar Day–Dublin

On Wednesday 2nd November, I attended a full day of presentations given by Tom Kyte of Oracle (asktom.oracle.com). Tom covered a number of topics, including some of his Oracle Open World presentations.

The topics that were covered included

  • 5 things about SQL (OOW11)
  • Database Option Packs
  • 5 things about PL/SQL (OOW11)
  • Q&A Ask Tom Session

All of these presentations can be downloaded from Tom’s website www.asktom.com.

Tom won’t be presenting at the annual UKOUG conference in December, but he is hoping to be there next year (2012).


Monday, October 31, 2011

ODM 11.2–Data Mining PL/SQL Packages

The Oracle 11.2 database contains 3 PL/SQL packages that allow you to perform all (well almost all) of your data mining functions.

So instead of using the Oracle Data Miner tool, you can write some PL/SQL code that will allow you to do the same things.

Before you can start using these PL/SQL packages you need to ensure that the schema that you are going to use has been setup with the following:

  • Create a schema or use an existing one
  • Grant the schema all the data mining privileges: see my earlier posting on how to setup an Oracle schema for data mining – Click here and YouTube video. A minimal sketch of the typical grants is shown after this list
  • Grant all necessary privileges to the data that you will be using for data mining
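
As a flavour of what that setup involves, here is a minimal sketch of the core grants, assuming a schema called DMUSER (see the linked post and video for the full set):

-- core privileges for a data mining schema (illustrative, not the full list)
GRANT CREATE SESSION TO dmuser;
GRANT CREATE TABLE TO dmuser;
GRANT CREATE VIEW TO dmuser;
GRANT CREATE MINING MODEL TO dmuser;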

The first PL/SQL package that you will use is DBMS_DATA_MINING_TRANSFORM. This PL/SQL package allows you to transform the data to make it suitable for data mining. There are a number of functions in this package that allow you to transform the data, but depending on the data you may need to write your own code to perform the transformations. When you apply your model to the test or apply data sets, ODM will automatically take the transformation functions defined using this package and apply them to the new data sets.
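
As a flavour of the package, the sketch below builds a min-max normalization definition and a view that applies it. The table and view names are illustrative, and the transforms you actually need will depend on your data and the chosen algorithm:

BEGIN
   -- create a table to hold linear normalization definitions
   dbms_data_mining_transform.create_norm_lin('mining_data_norm');

   -- populate it with min-max normalization for the numeric columns,
   -- excluding the case id and the target
   dbms_data_mining_transform.insert_norm_lin_minmax(
      norm_table_name => 'mining_data_norm',
      data_table_name => 'mining_data_build_v',
      exclude_list    => dbms_data_mining_transform.column_list(
                            'cust_id', 'affinity_card'));

   -- create a view that presents the normalized data
   dbms_data_mining_transform.xform_norm_lin(
      norm_table_name => 'mining_data_norm',
      data_table_name => 'mining_data_build_v',
      xform_view_name => 'mining_data_norm_v');
END;
/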

The second PL/SQL package is DBMS_DATA_MINING. This is the main data mining PL/SQL package. It contains functions to allow you to:

  • Create a model
  • Describe the model
  • Export and import models
  • Compute costs and test metrics for classification models
  • Apply the model to new data
  • Administer models, like dropping, renaming, etc.

The next (and last) PL/SQL package is DBMS_PREDICTIVE_ANALYTICS. The routines in this package allow you to prepare data, build a model, score a model and return the results of model scoring. The routines include EXPLAIN, which ranks attributes in order of influence in explaining a target column; PREDICT, which predicts the value of a target attribute based on values in the input data; and PROFILE, which generates rules that describe the cases from the input data.
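
For example, EXPLAIN needs only a source table, the column to explain and a name for the results table it will create. A minimal sketch, reusing the sample view from earlier posts:

BEGIN
   dbms_predictive_analytics.explain(
      data_table_name     => 'mining_data_build_v',
      explain_column_name => 'affinity_card',
      result_table_name   => 'pa_explain_results');  -- created by the call
END;
/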

Over the coming weeks I will have separate blog posts on each of these PL/SQL packages. These will cover the functions that are part of each package and will include some examples of using the package and functions.

Saturday, October 29, 2011

ODM PL/SQL API 11.2 New Features

The PL/SQL API interface for Oracle Data Miner has had a number of new features added. These are listed below, along with the new API features added with the 11.1 release.

  • Support for Native Transactional Data with Association Rules: you can build association rule models without first transforming the transactional data.
  • SVM class weights specified with CLAS_WEIGHTS_TABLE_NAME: including the GLM class weights
  • FORCE argument to DROP_MODEL: you can now force a drop model operation even if a serious system error has interrupted the model build process
  • GET_MODEL_DETAILS_SVM has a new REVERSE_COEF parameter: you can obtain the transformed attribute coefficients used internally by an SVM model by setting the new REVERSE_COEF parameter to 1

11.1g API New Features

  • Mining Model schema objects: in previous releases, DM models were implemented as a collection of tables and metadata within the DMSYS schema. In 11.1, models are implemented as data dictionary objects in the SYS schema. A new set of data dictionary views present DM models and their properties
  • Automatic and Embedded Data Preparation: previously data preparation was the responsibility of the user. Now it can be automated
  • Scoping of Nested Data: supports nested data types for both categorical and numerical data. Most algorithms require multi-record case data to be presented as columns of nested rows, each containing an attribute name/value pair. ODM processes each nested row as a separate attribute.
  • Standardised Handling of Sparse Data & Missing Values: standardised across all algorithms.
  • Generalised Linear Models: has a new algorithm and supports classification (logistic regression) and regression (linear regression)
  • New SQL Data Mining Function: PREDICTION_BOUNDS has been introduced for Generalised Linear Models. This returns the confidence bounds on predicted values (regression models) or predicted probabilities (classification)
  • Enhanced Support for Cost-Sensitive Decision Making: cost matrices can be added or removed using DBMS_DATA_MINING.ADD_COST_MATRIX and DBMS_DATA_MINING.REMOVE_COST_MATRIX.

Friday, October 21, 2011

Interesting quotes from Predictive Analytics World

The Predictive Analytics World conference is finishing up today in New York. Over the past few days the conference has had some of the leading analytics people presenting at it.

Twitter, as usual, has been busy and there have been some very interesting and important quotes.

The tweets (#pawcon) below are the ones I found most interesting:

Manu Sharma from LinkedIn: "Guru" job title is down, "Ninja" is up.

Despite the "data science" buzz, the biggest skill among #pawcon attendees is " #DataMining

Andrea Medinaceli: Visualization is very powerful for making analytics results accessible to upper management (and for buy-in)

Social Network Analytics (SNA) with Zynga, 20M daily active users, 90M monthly active users; 10K nodes, 45K edges (big!)

Vertica: Zynga is an analytics company in the disguise of a gaming company; graph analytics find users/influencers

Colin Shearer: Find me something interesting in my data is a question from hell (analysis should be guided by business goals)

John Elder advocates ensemble methods - usually improve analytics results

Tom Davenport: to get real value, #analytics need to move from one-time craft to industrialized activity

10 years from now all Fortune 500 companies will have a Chief Analytics Officer at the level of COO or CFO

Must be a sign of the economy, so much of the focus on the value of predictive is on retaining customers. #PAWCON.

Tom Davenport: #Analytics is not about math, it is about relationships (with your business client) - says Intel Chief Mathematician

Karl Rexer: companies with higher analytic capabilities are doing better than their peers

Wednesday, October 19, 2011

ODM API Demos in PL/SQL (& Java)

If you have been using Oracle Data Miner to develop your data mining workflows and models, at some point you will want to move away from the tool and start using the ODM APIs.

Oracle Data Mining provides a PL/SQL API and a Java API for creating supervised and unsupervised data mining models. The two APIs are fully interoperable, so that a model can be created with one API and then modified or applied using the other API.

I will cover the Java APIs in a later post, so watch out for that.

To help you get started with using the APIs there are a number of demo PL/SQL programs available. These were available as part of the pre-11.2g version of the tool, but they don’t seem to be packaged up with the 11.2 (SQL Developer 3) application.

A number of PL/SQL demo programs are available, covering the main algorithms. Although these were part of the pre-11.2g tool, they still seem to work on your 11.2g database.

You can download a zip of these files from here.

The sample PL/SQL programs illustrate each of the algorithms supported by Oracle Data Mining. They include examples of data transformations appropriate for each algorithm.


I will be exploring the main APIs, how to set them up, the parameters, etc.,  over the next few weeks, so check back for these posts.

Tuesday, October 18, 2011

Book Donation by Oracle

Today I received two boxes, containing 48 books of

The Performance Management Revolution by Howard Dresner

image

These books have been kindly donated by Duncan Fitter, UK Business Development Director at Oracle.

I will be distributing these books to my MSc Data Mining students over the next week.

Thanks Duncan and Oracle

Wednesday, October 12, 2011

SQL Developer 3.1 EA & Bug

The new/updated SQL Developer 3.1 Early Adopter has just been released.

For the Data Miner there are no major changes; it appears that there have been some bug fixes and some minor enhancements to some parts.

The main ODM features, apart from bug fixes, in this release include:

  • Globalization support, including translated error messages and GUI for all languages supported by SQL Developer
  • Improved accessibility features including the addition of a Structure navigator that lists all the nodes and links displayed in a workflow

Bug / Feature

After unzipping the download I opened SQL Developer. With each new release you will have to upgrade the existing ODM repository. The easiest way of doing this is to open the ODM connections pane and double click on one of your ODM schemas. SQL Developer will then run the necessary scripts to upgrade the repository.

I discovered a bug/feature with the SQL Developer 3.1 EA1 upgrade script. The repository upgrade does not complete and an error is reported.

I logged this error on the ODM forum on OTN. Mark Kelly, who is the Development Manager for ODM and monitors the ODM forum, and his team were quickly onto investigating the error. Mark has posted an update on the ODM forum and has provided a script that needs to be run before you upgrade your existing repository.

You can download the pre-upgrade script from here.

If you don’t have an existing repository then you don’t have to run the script.

Check out the message on the ODM forum.

https://forums.oracle.com/forums/ann.jspa?annID=1678

https://forums.oracle.com/forums/thread.jspa?threadID=2296374&tstart=0

 

How to Upgrade SQL Developer & ODM

You will have to download the new SQL Developer 3.1 EA install files.

http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev-ea-download-486950.html

  • Unzip this into your SQL Developer directory
  • Create a shortcut for  sqldeveloper.exe on your desktop and relabel it SQL Developer 3.1 EA
  • Double-click this short cut

  • You will be presented with a window asking whether you want to migrate your previous install settings. Select the Yes button
  • SQL Developer should now open and contain all your previous connections

If you have an existing ODM repository, you need to run the pre-upgrade script (see above) at this point 

  • You will now have to upgrade the ODM repository in the database. The simplest way of doing this is to allow SQL Developer to run the necessary scripts.
  • From the View Menu, select Oracle Data Miner –> Connections
  • In the ODM Connections pane double click one of your ODM schemas. Enter the username and password and click OK


  • You will then be prompted to migrate/update the ODM repository to the new version. Click Yes.
  • Enter the SYS username and Password


  • Click Start button, to start the migrate/upgrade scripts
  • On my laptop this migrate/upgrade step took less than 1 minute
  • The upgrade is now finished and you can start using ODM.

ODM – SQL Developer 3.1 EA – Release Notes

The ODM release notes can be found at

http://www.oracle.com/technetwork/database/options/odm/dataminer-31-relnotes-489144.html