Tuesday, August 21, 2012

Review of Oracle Magazine–November/December 1996

The headline articles for the November/December 1996 edition of Oracle Magazine focused on VLDB, with articles on scaling to petabyte-sized databases, the latest and best hardware to use, the new features in 7.3 and 8 for VLDBs, and the new tools available to assist administrators with databases of this scale.

image

Other articles included:

  • There was an article on what an Operational Data Store (ODS) is, and it also highlights how an ODS differs from a Data Warehouse. Despite this article, and many more like it in the wider press since 1996, there is still a lot of confusion in the IT world about what they are and how they differ.
  • A new Database Design tool has been added to the Oracle Designer/2000 suite. This new tool was supposed to be a lightweight alternative. Oracle Data Modeler is a much better tool.
  • Oracle outlines their roadmap for making their database and certain tools available on Windows NT.
  • IKEA has implemented an Oracle 7 DB on multiple platforms, including IBM MVS, Digital VMS, IBM AIX and other UNIX variants. Other tools used by IKEA included Developer/2000 and Designer/2000.
  • How to manage multi-table joins to reduce the amount of processing. The article looked at how to use Nested Loops, Merge Joins and Hash Joins. The article also suggests that in some cases you may need to consider redesigning your tables/data model.
  • Motorola implements multi-lingual Oracle Human Resources 10SC in 14 offices in 8 countries. There was a lot of use of NLS functionality in the database including NLS_LANG, NLS_NUMERIC_CHARACTERS, NLS_SORT and the translated _TL tables in Oracle Applications.
  • We have the first Y2K-related article, and much of the discussion focused on how Oracle stores dates in the database. Most of the fuss was about whether you captured and stored a two-digit or a four-digit year. Oracle provided the RR format mask to minimise the amount of recoding that needed to be done to many applications around the world (see the short example after this list).
  • There were 6 pages of job adverts from Oracle Australia, Database Consultants Inc, ACT1, BPA, Profound Consulting, RHI Consulting, Ernst & Young, TransTech, Wilco, Information Alliance, Exor Technologies, The Consulting Team, InTimeSystems, May&Speh, Price Waterhouse. I wonder where some of those companies are now.
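
To illustrate the difference the RR format mask makes, here is a quick sketch of my own (not from the article); it assumes the statements are run in the current year, 2012, and relies on the documented 00-49/50-99 century rule for RR:

-- With the YY mask the two-digit year is placed in the current century
SELECT TO_CHAR(TO_DATE('01-JAN-85', 'DD-MON-YY'), 'DD-MON-YYYY') AS yy_year
FROM   dual;
-- returns 01-JAN-2085 when run in 2012

-- With the RR mask a two-digit year of 50-99 is placed in the previous century
SELECT TO_CHAR(TO_DATE('01-JAN-85', 'DD-MON-RR'), 'DD-MON-YYYY') AS rr_year
FROM   dual;
-- returns 01-JAN-1985 when run in 2012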

To view the cover page and the table of contents click on the image at the top of this post or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Tuesday, August 14, 2012

Tom Kyte Seminar–Dublin 19th September 2012

Calling all Oracle users in Ireland.

Tom Kyte will be back in Dublin on Wednesday 19th September for a half day seminar.

image

The event is being organised by the Ireland OUG and Oracle.

It will be in the Gibson Hotel beside the Point village.

This is a FREE event for everyone, so share the news and get to see Tom Kyte present for 4 hours.

As they say, spaces are limited, so book your place today. I have.

To register for the event – click here.

Friday, August 3, 2012

Call for Papers for Oracle Scene-Winter 2012 Edition

The call for papers is now open for Oracle technical papers for publication in the UKOUG Oracle Scene magazine.

The submission date for completed papers is 24th August. 

To get more information on paper guidelines and submission details go to:

http://www.ukoug.org/what-we-offer/oracle-scene/

The Winter edition will be published online and in print format around the end of October. This will be in time for the EPM & Hyperion, JDE and UKOUG 2012 conferences. So this is a chance to get your message across to these communities.

Did you get a presentation accepted for the UKOUG annual conference, or were you disappointed? Maybe you could consider writing a paper based on your presentation and submitting it for consideration.

How about advertising in Oracle Scene? Over the past couple of editions we have seen a significant increase in readership, with readers from countries around the world.

Over the past few years Oracle Scene has moved from being a regional User Group magazine to having a readership in 30+ countries around the world.

Why am I writing this post? I’m a deputy editor of Oracle Scene :-)

Monday, July 30, 2012

Review of Oracle Magazine-September/October 1996

The headline articles for the September/October 1996 edition of Oracle Magazine were on Putting the Web to Work and focused on how to build web-based applications. Topics covered included the Web Server, intranet vs client/server applications, and what (Oracle) tools to use.

image

Oracle articles included:

  • There was an interesting advertisement from Sun. It consisted of one page that contained the following text: “when your intranet is protected with Solstice by Sun, unauthorized users see your information quite differently. For a free demonstration, turn the page”. The next two pages are blank!
  • Oracle publishing will be launching Oracle Applications Magazine in November 1996. The new magazine will be targeted at top line-of-business managers and will offer executives and other qualified Oracle Applications users in-depth industry analysis and technology and business overviews of topics critical to managers looking for technology solutions to business problems.
  • Eurostar, the train service that connects the UK and France and has trains running under the English Channel, has implemented the Oracle Financials Application suite. One of the main features is its ability to handle multiple currencies and companies, and the flexibility of running processes and period-end routines.
  • Oracle announces that Wells Fargo has negotiated the largest enterprise database licence agreement in the financial industry and will be implementing Oracle Universal Server and Oracle DB 7.3, as well as DB options such as data warehousing and electronic commerce. This new environment will need to support 25,000 users and the gathering of 80 gigabytes of data each month.
  • Oracle has released a number of its applications for the web.
  • Using partitioning for a Data Warehouse and how it compares to using Clustering.
  • How to build business rules using triggers in Oracle 7 and how to ensure consistency in the data.
  • A summary of a number of SQL functions was given with examples. These included Numeric, Character, Date, Conversion and Group By functions (a short illustration follows after this list).
  • A listing of a procedure and some other scripts was given for sizing tables and indexes in Oracle 7.
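
By way of a short illustration of the function categories mentioned above (my own quick sketch, not taken from the article; the GROUP BY example assumes the SCOTT demo schema's EMP table):

-- Numeric, Character, Date and Conversion functions in a single query
SELECT ROUND(123.456, 2)               AS numeric_fn,     -- 123.46
       UPPER('oracle magazine')        AS character_fn,   -- ORACLE MAGAZINE
       ADD_MONTHS(SYSDATE, 6)          AS date_fn,        -- six months from today
       TO_CHAR(SYSDATE, 'DD-MON-YYYY') AS conversion_fn   -- today as a string
FROM   dual;

-- Group functions summarise over sets of rows
SELECT deptno, COUNT(*) AS num_emps, AVG(sal) AS avg_sal
FROM   emp
GROUP  BY deptno;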

To view the cover page and the table of contents click on the image at the top of this post or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Tuesday, July 24, 2012

Oracle Technology Day in Dublin 15th November

Calling all Oracle users in Ireland.

Oracle is hosting a Technology day in Dublin on Thursday 15th November. This one day event will be held in the Croke Park Conference Centre.

They will be talking about Cloud Computing, Mobile Computing, Social Media and Big Data.

They are also hoping to include some of the updates and product news that will be announced at Oracle Open World in early October.

As this is not an Oracle User Group event, you should expect a certain amount of marketing and sales pitches at this event.

To register for this event go to – Register Now

Tuesday, July 10, 2012

Review of Oracle Magazine–July/August 1996

The headline articles for the July/August 1996 edition of Oracle Magazine were on how to balance security and communication in a distributed world, extending Oracle Power Objects applications, and automating Oracle tuning.

image

Oracle articles included:

  • Oracle released three of its products on the web. These included Oracle Web Customers, Oracle Web Suppliers and Oracle Web Employees. These aimed to make it possible for companies to conduct secure business transactions over the internet and corporate intranets. Oracle also shipped Oracle Workflow to help support the implementation of these new products.
  • Oracle Express Analyzer, an object-oriented reporting and analysis tool, had its second release.
  • UBS Bank implements an Oracle-based operational accounting system, with over 800,000 input records daily and over 3,000 cost centre reports that needed different levels of summarisation. The new application allows executives to view information in virtually any format, choosing from 120,000 multi-level, multi-view reports.
  • The Egyptian Stock Exchange and Capital Market Authority implement a new trading system built on Oracle.
  • Don Burleson, in his article on Automating Oracle Tuning, gives a number of scripts that would assist the DBA in finding out what is going on in the database. So instead of purchasing some expensive tools, all you needed were these scripts, UTLBSTAT/UTLESTAT (see the example below).
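
For anyone who never used them, the typical way those scripts were run from SQL*Plus was along these lines (a sketch from memory; the scripts live under $ORACLE_HOME/rdbms/admin and the end script writes its report to report.txt):

-- take the beginning snapshot of the performance statistics
@?/rdbms/admin/utlbstat.sql

-- ... let the workload you want to measure run for a period of time ...

-- take the ending snapshot and produce the difference report (report.txt)
@?/rdbms/admin/utlestat.sql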

To view the cover page and the table of contents click on the image at the top of this post or click here.

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions and a PDF for the very first Oracle Magazine from June 1987.

Tuesday, June 26, 2012

Analytics Sessions at Oracle Open World 2012

The content catalog for Oracle Open World 2012 was made public during the week. OOW runs from 30th September to 4th October.

The following gives a list of most of the Data Analytics type sessions that are currently scheduled, along with their presenters.

Why did I pick these sessions? If I were able to go to OOW, these are the sessions I would like to attend. Yes, there are many more sessions I would like to attend in the core DB technology and development streams.

  • CON6640 - Database Data Mining: Practical Enterprise R and Oracle Advanced Analytics. Presenter: Husnu Sensoy
  • CON8688 - Customer Perspectives: Oracle Data Integrator. Presenters: Gurcan Orhan (Software Architect & Senior Developer, Turkcell Technology R&D); Julien Testut (Product Manager, Oracle)
  • HOL10089 - Oracle Big Data Analytics and R. Presenter: George Lumpkin (Vice President, Product Management, Oracle)
  • CON8655 - Tackling Big Data Analytics with Oracle Data Integrator. Presenters: Mala Narasimharajan (Senior Product Marketing Manager, Oracle); Michael Eisterer (Principal Product Manager, Oracle)
  • CON8436 - Data Warehousing and Big Data with the Latest Generation of Database Technology. Presenter: George Lumpkin (Vice President, Product Management, Oracle)
  • CON8424 - Oracle’s Big Data Platform: Settling the Debate. Presenters: Martin Gubar (Director, Oracle); Kuassi Mensah (Director Product Management, Oracle)
  • CON8423 - Finding Gold in Your Data Warehouse: Oracle Advanced Analytics. Presenter: Charles Berger (Senior Director, Product Management, Data Mining and Advanced Analytics, Oracle)
  • CON8764 - Analytics for Oracle Fusion Applications: Overview and Strategy. Presenter: Florian Schouten (Senior Director, Product Management/Strategy, Oracle)
  • CON8330 - Implementing Big Data Solutions: From Theory to Practice. Presenter: Josef Pugh (Oracle)
  • CON8524 - Oracle TimesTen In-Memory Database for Oracle Exalytics: Overview. Presenter: Tirthankar Lahiri (Senior Director, Oracle)
  • CON9510 - Oracle BI Analytics and Reporting: Where to Start? Presenter: Mauricio Alvarado (Principal Product Manager, Oracle)
  • CON8438 - Scalable Statistics and Advanced Analytics: Using R in the Enterprise. Presenter: Marcos Arancibia Coddou (Product Manager, Oracle Advanced Analytics, Oracle)
  • CON4951 - Southwestern Energy’s Creation of the Analytical Enterprise. Presenters: Jim Vick (Southwestern Energy); Richard Solari (Specialist Leader, Deloitte Consulting LLP)
  • CON8311 - Mining Big Data with Semantic Web Technology: Discovering What You Didn’t Know. Presenters: Zhe Wu (Consultant Member of Tech Staff, Oracle); Xavier Lopez (Director, Product Management, Oracle)
  • CON8428 - Analyze This! Analytical Power in SQL, More Than You Ever Dreamt Of. Presenters: Hermann Baer (Director Product Management, Oracle); Andrew Witkowski (Architect, Oracle)
  • CON6143 - Big Data in Financial Services: Technologies, Use Cases, and Implications. Presenters: Omer Trajman (Cloudera); Ambreesh Khanna (Industry Vice President, Oracle); Sunil Mathew (Senior Director, Financial Services Industry Technology, Oracle)
  • CON8425 - Big Data: The Big Story. Presenter: Jean-Pierre Dijcks (Sr. Principal Product Manager, Oracle)
  • CON10327 - Recommendations in R: Scaling from Small to Big Data. Presenter: Mark Hornick (Senior Manager, Oracle)

Wednesday, June 20, 2012

Part 2 of the Leaning Tower of Pisa problem in ODM

In a previous post I gave the details of how you can use Regression in Oracle Data Miner to predict/forecast the lean of the tower in future years. This was based on building a regression model in ODM using the known lean/tilt of the tower for a range of years.

In this post I will show you how you can do the same tasks using the Oracle Data Miner functions in SQL and PL/SQL.

Step 1 – Create the table and data

The easiest way to do this is to make a copy of the PISA table we created in the previous blog post. If you haven’t completed this, then go to the blog post and complete step 1 and step 2.

create table PISA_2
as select * from PISA;
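
The model build and apply steps below reference two views, pisa_2_build_v and pisa_2_apply_v, which are not listed in this post. Assuming they simply split the data into the known and unknown years, they could be created along these lines:

-- build (training) data: the years where the tilt is known
create or replace view pisa_2_build_v as
select * from pisa_2 where tilt is not null;

-- apply data: the years where we want to predict the tilt
create or replace view pisa_2_apply_v as
select * from pisa_2 where tilt is null;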

image

Step 2 – Create the ODM Settings table

We need to create a ‘settings’ table before we can use the ODM APIs in PL/SQL. The purpose of this table is to store all the configuration parameters needed for the algorithm to work. In our case we only need to set two parameters.
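
If the settings table does not already exist it needs to be created first. A minimal sketch, using the two-column structure described for settings tables in the DBMS_DATA_MINING documentation:

CREATE TABLE pisa_2_settings
( setting_name   VARCHAR2(30),
  setting_value  VARCHAR2(4000)
);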

BEGIN
  -- clear out any settings left over from a previous run
  delete from pisa_2_settings;
  -- use the Generalized Linear Model algorithm for the regression
  INSERT INTO pisa_2_settings (setting_name, setting_value)
  VALUES (dbms_data_mining.algo_name, dbms_data_mining.ALGO_GENERALIZED_LINEAR_MODEL);
  -- turn off Automatic Data Preparation
  INSERT INTO pisa_2_settings (setting_name, setting_value)
  VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
  COMMIT;
END;

Step 3 – Build the Regression Model

To build the regression model we use the CREATE_MODEL procedure that is part of the DBMS_DATA_MINING package. When calling this procedure we need to pass in the name of the model, the mining function to use (regression in this case), the source data, the settings table and the target column we are interested in.

BEGIN
      DBMS_DATA_MINING.CREATE_MODEL(
        model_name          => 'PISA_REG_2',
        mining_function     => dbms_data_mining.regression,
        data_table_name     => 'pisa_2_build_v',
        case_id_column_name => null,
        target_column_name  => 'tilt',
        settings_table_name => 'pisa_2_settings');
END;

After this we should have our regression model.

Step 4 – Query the Regression Model details

To find out what was produced in the previous step we can query the data dictionary.

SELECT model_name, 
       mining_function,
       algorithm,
       build_duration,
       model_size
from USER_MINING_MODELS
where model_name like 'P%';

image

select setting_name, 
       setting_value,
       setting_type
from all_mining_model_settings
where model_name like 'P%';

image

Step 5 – Apply the Regression Model to new data

Our final step is to apply the model to our new data, i.e. the years for which we want to know what the lean/tilt will be.

SELECT year_measured, prediction(pisa_reg_2 using *)
FROM   pisa_2_apply_v;

image

Tuesday, June 19, 2012

Using ODM Regression for the Leaning Tower of Pisa tilt problem

This blog post will look at how you can use the Regression feature in Oracle Data Miner (ODM) to predict the lean/tilt of the Leaning Tower of Pisa in the future.

This is a well-known regression exercise, and it typically comes with a set of known values and the years for those values. There are lots of websites that contain the details of the problem. A summary of it is:

The following table gives measurements for the years 1975-1985 of the "lean" of the Leaning Tower of Pisa. The variable "lean" represents the difference between where a point on the tower would be if the tower were straight and where it actually is. The data is usually coded as tenths of a millimetre in excess of 2.9 meters, so that the 1975 lean, which was 2.9642 meters, would be coded as 642. In this example we will store the full measurement (e.g. 2.9642).

Given the lean for the years 1975 to 1985, can you calculate the lean for a future year such as 2000, 2009 or 2012?

Step 1 – Create the table

Connect to a schema that you have set up for use with Oracle Data Miner. Create a table (PISA) with 2 attributes, YEAR_MEASURED and TILT. Both of these attributes need to have the datatype of NUMBER, as ODM will ignore attributes with a VARCHAR2 datatype, or you might get an error.

CREATE TABLE PISA
  (
    YEAR_MEASURED NUMBER(4,0),
    TILT          NUMBER(9,4)
);

Step 2 – Insert the data

There are 2 sets of data that need to be inserted into this table. The first is the data from 1975 to 1985 with the known values of the lean/tilt of the tower. The second set of data is the future years where we do not know the lean/tilt and we want ODM to calculate the value based on the Regression model we want to create.

Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1975,2.9642);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1976,2.9644);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1977,2.9656);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1978,2.9667);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1979,2.9673);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1980,2.9688);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1981,2.9696);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1982,2.9698);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1983,2.9713);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1984,2.9717);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1985,2.9725);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1986,2.9742);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1987,2.9757);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1988,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1989,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1990,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (1995,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2000,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2005,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2010,null);
Insert into DMUSER.PISA (YEAR_MEASURED,TILT) values (2009,null);

Step 3 – Start ODM and Prepare the data

Open SQL Developer and open the ODM Connections tab. Connect to the schema that you have created the PISA table in. Create a new Project or use an existing one and create a new Workflow for your PISA ODM work.

Create a Data Source node in the workspace and assign the PISA table to it. You can select all the attributes.

The table contains the data that we need to build our regression model (our training data set) and the data that we will use for predicting the future lean/tilt (our apply data set).

We need to apply a filter to the PISA data source to only look at the training data set. Select the Filter Rows node and drag it to the workspace. Connect the PISA data source to the Filter Rows node. Double click on the Filter Rows node and select the Expression Builder icon. Create the where clause to select only the rows where we know the lean/tilt (see below).
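
The expression only needs to keep the rows with a known value, for example:

TILT IS NOT NULL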

image

image

Step 4 – Create the Regression model

Select the Regression Node from the Models component palette and drop it onto your workspace. Connect the Filter Rows node to the Regression Build Node.

image

Double click on the Regression Build node and set the Target to the TILT variable. You can leave the Case ID at <None>. You can also select whether you want to build a GLM or SVM regression model, or both of them. Set the AUTO check box to unchecked. By doing this, Oracle will not try to do any automatic data preparation or attribute elimination.

image

You are now ready to create your regression models.

To do this right click the Regression Build node and select Run. When everything is finished you will get a little green tick on the top right hand corner of each node.

image

Step 5 – Predict the Lean/Tilt for future years

The PISA table that we used above also contains our apply data set.

image

We need to create a new Filter Rows node on our workspace. This will be used to only look at the rows in PISA where TILT is null.  Connect the PISA data source node to the new filter node and edit the expression builder.
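
This time the expression is the opposite null check, for example:

TILT IS NULL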

image

Next we need to create the Apply Node. This allows us to run the Regression model(s) against our Apply data set. Connect the second Filter Rows node to the Apply Node and the Regression Build node to the Apply Node.

image

Double click on the Apply Node.  Under the Apply Columns we can see that we will have 4 attributes created in the output. 3 of these attributes will be for the GLM model and 1 will be for the SVM model.

Click on the Data Columns tab and edit the data columns so that we get the YEAR_MEASURED attribute to appear in the final output.

Now run the Apply node by right clicking on it and selecting Run.

Step 6 – Viewing the results

When we get the little green tick on the Apply node we know that everything has run and completed successfully.

image

To view the predictions right click on the Apply Node and select View Data from the menu.

image

We can see that the GLM model gives the results we would expect but the SVM model does not.

Monday, June 18, 2012

Oracle Magazine–Volume 1 Number 1

A few weeks ago I sent a few emails to some well-known names in the Oracle world looking to see if they had a copy of the very first Oracle Magazine (Volume 1 Number 1).

Many thanks to Oracle ACE Director Cary Millsap of Method R, who responded to say that he had the very first Oracle Magazine. He kindly arranged to have it scanned into PDF. 

To view the 12 page Oracle Magazine (Volume 1 Number 1) click on the following image.  Read and Enjoy!

image

Some people have said that this, published in June 1987, is not the first Oracle Magazine. Although this edition is labelled as Volume 1 Number 1, an Oracle Newsletter existed for a few years prior to this edition.

Do you know of anyone who has these newsletters ?

My Oracle Magazine Collection can be found here. You will find links to my blog posts on previous editions.

Wednesday, June 13, 2012

Data Science Is Multidisciplinary

[Update: October 2016. There appears to be some discussion about the Venn diagram I've proposed below. The central part of this diagram is not anything I came up with; it was a commonly used Venn diagram for Data Mining. Thanks to Polly Michell-Guthrie for providing the original reference for the Venn. I just added the outer ring of additional skills needed for the new area of Data Science. This was just my view of things back in 2012. Things have moved on a bit since then.]

A few weeks ago I had a blog post called Domain Knowledge + Data Skills = Data Miner.
In that blog post I was saying that to be a Data Scientist all you needed was Domain Knowledge and some Data Skills, which included Data Mining.
The reality is that the skill set of a Data Scientist will be much larger. There is a saying, ‘A jack of all trades and a master of none’. When it comes to being a data scientist you need to be a bit like this, but perhaps a better saying would be ‘A jack of all trades and a master of some’.
I’ve put together the following diagram, which includes most of the skills, with an outer circle of more fundamental skills. It is this outer ring of skills that is fundamental to becoming a data scientist. The skills in the inner part of the diagram are ones that most people will already have some experience of in one or more areas. The other skills can be developed and learned over time, all depending on the type of person you are.
image
Can we train someone to become a data scientist, or are they born to be a data scientist? It is a little bit of both really, but you need to have some of the fundamental skills and the right type of personality. The learning of the other skills should be easy(ish).
What do you think? Are there skills that I’m missing?