Tuesday, March 8, 2016

OUG Ireland 2016: a roaring success

Last week we had our annual OUG Ireland conference. Traditionally it has been a 1 day conference, and in the past few years we have had an extra day where we had the likes of Tom Kyte (we miss you), Maria Colgan, etc. give a full day seminar.

But over the past few years a few of the committee members have been constantly asked when we would have a second day of presentations (to go with the existing one day event).

After lots of discussions, looking at budgets and relooking at budgets, and then looking at possible alternative venues, we worked out a way to have a 2 day conference.

So last week we had our new 2 day conference in the Gresham Hotel in the centre of Dublin. Check out the agenda here.

The day before the conference (on the Wednesday) I had arranged a speakers' drinks meet-up. There were lots of people travelling in from all across Europe to present, so this was a good way to get everyone together over a few beers. We met in a micro brewery in Dublin. It was a fun evening, with people coming and going throughout, and some staying for the entire evening. (Oh my poor credit card!)

On Thursday we all got together with the 275+ attendees in the Gresham Hotel.


We had 4 tracks that covered DBA, Developer, Business Analytics and Apps. The rooms were just the right size for these tracks, with most of the rooms being full for each of the presentations. For some presentations there were people standing around the sides and down the back. All of this helped add to the atmosphere. The exhibitors were only there for the first day; this is something we need to look at for next year, to get them to attend for the 2 days.

On the Wednesday evening we had an Oracle ACE Dinner that was sponsored by OTN (THANK YOU) and they also provided the ACEs with conference ribbons. Also thank you to the OTN ACE programme for providing some travel support to the various ACE Directors.

On the second day (Friday) we got off to a slow start for the first sessions of the day, but this must have been due to the snow we had in Dublin that morning. From the second sessions onwards we were back to packed rooms. This great turnout on the second day helps us justify having a 2 day conference next year.

I got a chance to talk to lots and lots of people over the course of the 2 days, and every single person was talking about the conference being 2 days and asking us to please do this next year, and the year after, and the year after, etc.

One of the highlights at the end of the second day was the Q&A panel discussion on the Oracle Optimiser. On this panel we had Jonathan Lewis, Joze Senegacnik and Martin Widlake. I managed to rope in a friend, Alan Kelly, to be the quiz master for the session. The Q&A panel session was scheduled for 45 minutes, but despite the best efforts to encourage the attendees to finish up, they kept on asking questions and the panel kept on answering. The session ran for an additional 45 minutes. Yes, that was a 90 minute session.


All in all the 2 day OUG Conference was a roaring success and I would like to thank all the speakers, the attendees and the various sponsors for making this event happen.

Check out Debra's blog post about the conference.

Check out Martin Widlake's blog post about the conference.

Oh, and there was a Yeast and Sourdough Bread Making symposium during OUG Ireland, where a number of people got to talk about and share their experiences of making bread.

Thursday, February 25, 2016

Recoding variable values using ore.recode

Oracle R Enterprise comes with a vast array of features that are not really documented anywhere. One of these features that I've recently found useful is the ore.recode() function.

The following code illustrates how you can recode the values in an existing attribute or (more specifically in this example) how you can create a new attribute based on the values in another attribute.

The data set that I'm using is the White Wine data set that can be found on the UCI Machine Learning Repository Archive website. You can download this data set and load it into a table in your Oracle schema using just two commands.

> WhiteWine = read.table("http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv",
                        sep=";", header=TRUE)
> ore.create(WhiteWine, table="WHITE_WINE")

This data set has an attribute called "quality". This "quality" attribute contains values ranging from 3 to 9, and indicates the quality of the wine.

A typical task you may want to do is to relabel the values in an attribute to something a bit more meaningful, or to group some values into a more standardised value.

To demonstrate this I want to create a new attribute that contains a description of the type of wine (and who I might share it with).

In this case, and to allow for other values in future versions of the data set, I've coded up the following:

quality  grade
-------  ----------------
1        Paint Stripper
2        Vinegar
3        Barely Drinkable
4        For the in-laws
5        For my family
6        To share with friends
7        For cooking
8        To share with my wife
9        Mine all Mine

The next step we need to perform is to gather some information about the values in the "quality" attribute. We can use the table function to quickly perform the aggregations, and then use the barplot function to graph the distribution.

> WHITE_WINE2 <- WHITE_WINE
> table(WHITE_WINE2$quality)
> barplot(table(WHITE_WINE2$quality), xlab="Wine Quality Ranking")

[Barplot showing the distribution of the wine quality rankings]

Now we are ready to perform the recoding of the values using the ore.recode() function.

> WHITE_WINE2$grade <- ore.recode(WHITE_WINE2$quality, old=c(1, 2, 3, 4, 5, 6, 7, 8, 9), 
                     new=c("1-Paint Stripper", "2-Vinegar", "3-Barely Drinkable",
                           "4-For the in-laws", "5-For my family", "6-To share with friends", 
                           "7-For cooking", "8-To share with my wife", 
                           "9-Mine all Mine"))

You can now go and inspect the data, perform a frequency count and compare the values with what we had previously.

> head(WHITE_WINE2[,c("quality", "grade")]) 
> table(WHITE_WINE2$grade) 
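
If you want a visual check as well, the following short sketch (my own addition, not in the original post, and assuming the ore.recode call above has run) plots the recoded grade distribution so you can compare it against the earlier barplot of the raw quality values.

> # Plot the frequency of the new grade labels
> barplot(table(WHITE_WINE2$grade), xlab="Wine Grade", las=2)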

The final step is to write the newly modified data set back to your Oracle schema into a new table. This ensures that the original data is not modified, and that the new version can be used or reused later.

> ore.create(WHITE_WINE2, "WHITE_WINE2")
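
To confirm the new table was created in your schema, a quick check (my own addition, assuming the ore.create call above completed without error):

> ore.exists("WHITE_WINE2")
> ore.ls()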

Thursday, February 18, 2016

OUG Ireland 2016: APPs Track highlights

Today I was joined by Debra Lilly, who is the APPs track lead and conference chair for OUG Ireland. Debra lets us know what we can look forward to on the APPs track at this year's conference.

Check out Debra's video.

Click on the image below to get more details of the agenda and to register for this 2 day conference.

[Image: OUG Ireland 2016 agenda and registration link]

Follow the conference and OUG Ireland on twitter using #oug_ire

Monday, February 15, 2016

OUG Ireland 2016: DBA and Developer track highlights

This morning I recorded a short video (1:18) with Simon Holt giving a quick overview and some highlights of the DBA and Developer tracks for the upcoming OUG Ireland 2016 conference.

Click on the image below to get more details of the agenda and to register for this 2 day conference.

[Image: OUG Ireland 2016 agenda and registration link]

Follow the conference and OUG Ireland on twitter using #oug_ire

Friday, February 12, 2016

Spark versus Flink

Spark is an open source Apache project that provides a framework for multi-stage in-memory analytics. Spark is based on the Hadoop platform and can interface with Cassandra, OpenStack Swift, Amazon S3, Kudu and HDFS. Spark comes with a suite of analytic and machine learning algorithms, allowing you to perform a wide variety of analytics on your distributed Hadoop platform. This allows you to generate data insights, data enrichment and data aggregations for storage on Hadoop, and for use in other more mainstream analytics as part of your traditional infrastructure. Spark is primarily aimed at batch-type analytics, but it does come with capabilities for streaming data. When data needs to be analysed it is loaded into memory and the results are then written back to Hadoop.


Flink is another open source Apache project that provides a platform for analysing and processing data in distributed stream and/or batch data processing. Similarly to Spark, Flink comes with a set of APIs that allow for easy integration with Java, Scala and Python. The machine learning algorithms have been specifically tuned to work with streaming data, but can also work on batch-oriented data. As Flink is focused on being able to process streaming data, it runs on YARN, works with HDFS, can be easily integrated with Kafka and can connect to various other data storage systems.


Although both Spark and Flink can process streaming data, when you examine the underlying architecture of these tools you will find that Flink is more specifically focused on streaming data and can process this data in a more efficient manner.

There have been some suggestions in recent weeks and months that Spark is no longer the tool of choice for analytics on Hadoop, and that instead everyone should be using Flink or something else. Perhaps it is too early to say this. You need to consider the number of companies that have invested a significant amount of time and resources building and releasing products on top of Spark. These two products provide similar-ish functionality, but each product is designed to process data in a different manner. So it really depends on what kind of data you need to process: whether it is bulk or streaming will determine which of these products you should use. In some environments it may be suitable to use both.

Will these tools replace the more traditional advanced analytics tools in organisations? The simple answer is no, they won't replace them. Instead they will complement each other, and if you have a Hadoop environment you will probably end up using Spark to process the data on Hadoop. For all other advanced analytics that are part of your more traditional environments, you will use the traditional advanced analytics tools from the more mainstream vendors.

Tuesday, February 2, 2016

ORE video : Demo Code Part 5

The following is the fifth and final set of demo code from my video on using R in the Oracle Database. Check out the video before using the following code. The blog post for the video will be updated to contain links to all blog posts that have the various demo code.

The following examples illustrate how you can use the Oracle R Enterprise capabilities within SQL and PL/SQL. They show building a GLM model using the glm algorithm that comes with the R language, and then using this model to score or label new data that is stored in a table. The last part of the example illustrates how you can perform What-If analysis using this ORE model.

-- Build & save the R script, called Demo_GLM in the DB
--  This builds a GLM  DM model in the DB
--
Begin
   sys.rqScriptDrop('Demo_GLM');
   sys.rqScriptCreate('Demo_GLM',
      'function(dat,datastore_name) {
          mod <- glm(AFFINITY_CARD ~ CUST_GENDER + AGE + CUST_MARITAL_STATUS + COUNTRY_NAME + CUST_INCOME_LEVEL + EDUCATION + HOUSEHOLD_SIZE + YRS_RESIDENCE, dat, family = binomial())
       ore.save(mod, name=datastore_name, overwrite=TRUE)   }');
end;
/

--
-- After creating the script you need to run it to create the GLM model
--
select * 	
from table(rqTableEval(
             cursor(select CUST_GENDER,
                           AGE,
                           CUST_MARITAL_STATUS,
                           COUNTRY_NAME,
                           CUST_INCOME_LEVEL,
                           EDUCATION,
                           HOUSEHOLD_SIZE,
                           YRS_RESIDENCE,
                           AFFINITY_CARD
                    from mining_data_build_v),
             cursor(select 1 as "ore.connect", 'myDatastore' as "datastore_name" from dual),
                  'XML', 'Demo_GLM' ));


--
-- There are 2 ways to use the GLM model : in Batch and in Real-Time mode
--
-- First Step : Build the in-database R script to score your new data
--
Begin
   sys.rqScriptDrop('Demo_GLM_Batch');
   sys.rqScriptCreate('Demo_GLM_Batch',
      'function(dat, datastore_name) {
      ore.load(datastore_name)
      prd <- predict(mod, newdata=dat)
      prd[as.integer(rownames(prd))] <- prd
      res <- cbind(dat, PRED = prd)
      res}');
end;
/

-- 
-- Now you can run the script to score the new data in Batch mode
--   The data is located in the table MINING_DATA_APPLY
--
select * from table(rqTableEval(
              cursor(select CUST_GENDER, AGE, CUST_MARITAL_STATUS, COUNTRY_NAME, CUST_INCOME_LEVEL, EDUCATION, HOUSEHOLD_SIZE, YRS_RESIDENCE
                     from   MINING_DATA_APPLY_V
                     where rownum <= 10),
             cursor(select 1 as "ore.connect", 'myDatastore' as "datastore_name" from dual),
             'select CUST_GENDER, AGE, CUST_MARITAL_STATUS, COUNTRY_NAME, CUST_INCOME_LEVEL, EDUCATION, HOUSEHOLD_SIZE, YRS_RESIDENCE, 1 PRED from MINING_DATA_APPLY_V','Demo_GLM_Batch'))
order by 1, 2, 3;

--
-- Now let us use the Demo_GLM_Batch script to score data in Real-Time
--  The data values are passed to the GLM model
--
select * from table(rqTableEval(
              cursor(select 'M' CUST_GENDER,
                            23 AGE,
                            'Married' CUST_MARITAL_STATUS, 
                            'United States of America' COUNTRY_NAME,
                            'B: 30,000 - 49,999' CUST_INCOME_LEVEL, 
                            'Assoc-A' EDUCATION,
                            '3' HOUSEHOLD_SIZE, 
                            5 YRS_RESIDENCE
                     from dual),
              cursor(select 'myDatastore' "datastore_name", 1 "ore.connect" from dual),
                    'select CUST_GENDER, AGE, CUST_MARITAL_STATUS, COUNTRY_NAME, CUST_INCOME_LEVEL, EDUCATION, HOUSEHOLD_SIZE, YRS_RESIDENCE, 1 PRED from MINING_DATA_APPLY',
              'Demo_GLM_Batch')) order by 1, 2, 3; 

Wednesday, January 27, 2016

BIWA Summit : HOL notes for ODM using SQL

The following notes and documents are for the BIWA Summit 2016 attendees who are taking the HOL on using ODM with SQL and PL/SQL.

Getting connected to the Cloud

The following document outlines the steps you need to perform to get connected to the Cloud Database we are using.

If you have attended the HOL (yesterday) then you can reuse the same connection details (and number).

If you didn't attend the HOL yesterday then you will need to be assigned a number. Ask me or Charlie for this. After you get assigned a number, follow the instructions.

[Document: Getting connected to the Cloud Database]

It might be a good idea to download this document to your local machine.

HOL Notes

The following links are for the HOL documents. The first is the slides we will work through; this document also contains the exercises that you will complete.

[Document: HOL slides and exercises]

The second document is the demonstration script and contains all the code that is in the HOL slides document. Download this file and open it in a Worksheet in SQL Developer that is connected to your schema in the database. (Do not use the ODM connections, and there is no need to open the ODM GUI.)

[Document: HOL demonstration script]

Wednesday, January 20, 2016

BIWA Summit 2016

The annual BIWA Summit 2016 will run next week, from 26-28 January, and it is back in Oracle HQ at Redwood Shores. If you are into the Oracle Database, Business Intelligence, Big Data, Advanced Analytics, etc. then this is the conference for you.


Over the 3 days there is an action-packed agenda of 5 parallel tracks, plus a full 3 days of Hands-on Labs. The agenda is filled with the who's who of the Oracle BI and Analytics world, so if this is your area then BIWA Summit is the conference for you and your training budget. (I'm sure it is not too late to book your place.)

I've been lucky this year in that I will have 2 Hands-on Labs and 1 presentation to give. Yes, that is 5 hours of presenting/hosting to do. The presentation I will be giving is 'Is Oracle SQL the best language for Statistics?' (on Tuesday 26th). This presentation is listed for the BIWA Summit and also for the NoCOUG Yes SQL conference that is running at the same time in the same venue (i.e. co-located). I've also written a brand new 2-hour Hands-on Lab titled 'Predictive Analytics using SQL and PL/SQL'. The first outing for this will be on Wednesday 27th. I will also be co-hosting, with Charlie Berger, the 'Learn Predictive Analytics in 2 hours with Oracle Data Miner' Hands-on Lab on Tuesday 26th.

Come to my Hands-on Labs to be in with a chance to win a copy of my book on Oracle Data Mining.


Hopefully I'll see you there!

Check out the full agenda by clicking on the image below.

[Image: BIWA Summit 2016 agenda link]

Tuesday, January 19, 2016

ORE video : Demo Code Part 4

The following is the fourth set of demo code from my video on using R in the Oracle Database. Check out the video before using the following code. The blog post for the video will be updated to contain links to all blog posts that have the various demo code.

The following code example illustrates how you can build a Data Mining model using the in-database data mining algorithms. In this example a Decision Tree model is created. This model is then applied to new data, scoring this data with the predicted values.

> #
> # Build an in-database ODM Decision Tree
> #
> dtData <- ore.get("MINING_DATA_BUILD_V")
> # Create an ODM DT model in the DB : Only a temporary model. It is deleted when you logout
> dtModel <- ore.odmDT(AFFINITY_CARD ~ ., dtData)
> # View the details of the ODM model
> #summary(dtModel)
> names(dtModel)
 [1] "name"          "settings"      "attributes"    "costs"         "distributions”
 [6] "nodes"         "formula"       "extRef"        "call"         
> dtModel$name
 [1] "ORE$208_210”
> dtModel$settings
                          value
prep.auto                    on
impurity.metric   impurity.gini
term.max.depth                7
term.minpct.node           0.05
term.minpct.split           0.1
term.minrec.node             10
term.minrec.split            20
> dtModel$attributes
                 name        type data.type data.length precision scale is.target
1       AFFINITY_CARD categorical    number          22         0     0      TRUE
2                 AGE   numerical    number          22        NA    NA     FALSE
3 CUST_MARITAL_STATUS categorical  varchar2          20        NA    NA     FALSE
4           EDUCATION categorical  varchar2          21        NA    NA     FALSE
5      HOUSEHOLD_SIZE categorical  varchar2          21        NA    NA     FALSE
6          OCCUPATION categorical  varchar2          21        NA    NA     FALSE
7       YRS_RESIDENCE   numerical    number          22        NA    NA     FALSE
>

> ## Compute the Confusion Matrix
> dtResults <- predict(dtModel, dtData, "AFFINITY_CARD")
> with(dtResults, table(AFFINITY_CARD, PREDICTION))
             PREDICTION
AFFINITY_CARD    0    1
            0 1056   64
            1  201  179
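> ## (Added aside, not in the original transcript) Overall accuracy from the
> ## confusion matrix above: (1056 + 179) correct predictions out of 1500 scored rows
> (1056 + 179) / (1056 + 64 + 201 + 179)
[1] 0.8233333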
> ## How do you persist the model in the DB
> ##     Rename and save the model in the database
> dtModel$name
 [1] "ORE$208_210"

> ## Save the ODM model in the in-database R datastore
> ore.save(dtModel, name = "ORE_MODELS", overwrite=TRUE)
> ore.load(name = "ORE_MODELS")
 [1] "dtModel"

> ## Score new data using the DM Model
> ore.sync(table = c("MINING_DATA_APPLY"))
> ore.ls()
 [1] "DEMO_R_APPLY_RESULT"   "DEMO_R_TABLE"          "DEMO_SUBSET_TABLE"    
 [4] "INSUR_CUST_LTV_SAMPLE" "MINING_DATA_APPLY"     "MINING_DATA_BUILD_V"  
 [7] "MINING_DATA_TEST_V"   > dtApply <- ore.get("MINING_DATA_APPLY")
> dim(dtApply)
 [1] 1500   18
> class(dtApply)
 [1] "ore.frame”
 attr(,"package")
 [1] "OREbase”
> DTAPPLY <- ore.push(dtApply)
> dtApplyResult <- predict(dtModel, DTAPPLY)
> head(dtApplyResult)
             '0'        '1' PREDICTION
100001 0.9521912 0.04780876          0
100002 0.9521912 0.04780876          0
100003 0.9521912 0.04780876          0
100004 0.9521912 0.04780876          0
100005 0.2633745 0.73662551          1
100006 0.9521912 0.04780876          0
> dim(dtApplyResult)
 [1] 1500    3
> dim(dtApply)
 [1] 1500   18
> dtResults <- cbind(dtApply, dtApplyResult)
> dim(dtResults)
 [1] 1500   21
> ore.drop(table = "DEMO_R_APPLY_RESULT")
> ore.create(dtApplyResult, table="DEMO_R_APPLY_RESULT")
> ## Run the following the first time, to rename the model
> # ore.exec(paste("BEGIN
> #                  DBMS_DATA_MINING.RENAME_MODEL(model_name => '", dtModel$name, "',
> #                  new_model_name => 'DEMO_R_DT_MODEL'); END;", sep=""))
> ## Run the following to refresh an existing model
> ore.exec(paste("BEGIN
+ DBMS_DATA_MINING.DROP_MODEL('DEMO_R_DT_MODEL');
+ DBMS_DATA_MINING.RENAME_MODEL(model_name => '", dtModel$name,"',
+ new_model_name => 'DEMO_R_DT_MODEL');
+ END;",sep=""))

Tuesday, January 12, 2016

ORE video : Demo Code Part 3

The following is the third set of demo code from my video on using R in the Oracle Database. Check out the video before using the following code. The blog post for the video will be updated to contain links to all blog posts that have the various demo code.

The following code illustrates some simple examples of using Oracle R Enterprise. In these examples you will see how to connect to the Oracle Database, how to query and process some of the tables and views in the Oracle Database, how to check that you are working with objects in the database, and how to move data to the database and query it.

> library(ORE)
> # ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", port=1521, all=TRUE);
> ore.connect(user="dmuser", sid="orcl", host="localhost", password="dmuser", port=1521, all=FALSE);
> # Test the connection
> ore.is.connected()
 [1] TRUE
> # List all the tables and views
> ore.ls()
character(0)
> # Use ore.sync to only include the tables and views listed
> ore.sync()
> ore.ls()
 [1] "DEMO_R_APPLY_RESULT"      "DEMO_R_TABLE"             "INSUR_CUST_LTV_SAMPLE"    "MINING_DATA_APPLY"       
 [5] "MINING_DATA_APPLY_V"      "MINING_DATA_BUILD_V"      "MINING_DATA_TEST_V"       "MINING_DATA_TEXT_APPLY_V”
 [9] "MINING_DATA_TEXT_BUILD_V" "MINING_DATA_TEXT_TEST_V" 
> # Disconnect and reconnect with no meta-data sync
> ore.disconnect()
> ore.connect(user="dmuser", sid="orcl", host="localhost", password="dmuser", port=1521, all=FALSE);
> ore.sync(table = c("MINING_DATA_BUILD_V", "MINING_DATA_TEST_V", "INSUR_CUST_LTV_SAMPLE"))
> ore.ls()
 [1] "INSUR_CUST_LTV_SAMPLE" "MINING_DATA_BUILD_V"   "MINING_DATA_TEST_V"   
> # Check for the existence of a table or view
> ore.exists("MINING_DATA_BUILD_V")
 [1] TRUE
> # list the objects in the DMUSER schema
> ore.ls("DMUSER")
 [1] "INSUR_CUST_LTV_SAMPLE" "MINING_DATA_BUILD_V"   "MINING_DATA_TEST_V" 
> #
> # Load data from a file into a new table
> ore.exists("DEMO_R_TABLE")
 [1] TRUE
> ore.drop(table='DEMO_R_TABLE')
> ore.ls()
 [1] "INSUR_CUST_LTV_SAMPLE" "MINING_DATA_BUILD_V"   "MINING_DATA_TEST_V"   
> titanic <- read.table("c:/R/titanic2.txt", header=T, sep="\t")
> ore.create(titanic, table="DEMO_R_TABLE")
> tData <- ore.get("DEMO_R_TABLE")
> head(tData)
                 NAME PCLASS AGE    SEX SURVIVED
1 Fynney, Mr Joseph J    2nd  35   male        0
2      Gale, Mr Harry    2nd  35   male        0
3   Gale, Mr Shadrach    2nd  38   male        0
4 Garside, Miss Ethel    2nd  24 female        1
5  Gaskell, Mr Alfred    2nd  16   male        0
6  Gavey, Mr Lawrence    2nd  26   male        0
> # Use ORE to pull data from the Database to local R
> # ore.pull  -- United States of America
> mdbv <- ore.get("MINING_DATA_BUILD_V")
> mdbv_data <- ore.pull(mdbv)
Warning message:ORE object has no unique key - using random order 
> head(mdbv_data,3)
  CUST_ID CUST_GENDER AGE CUST_MARITAL_STATUS             COUNTRY_NAME    CUST_INCOME_LEVEL EDUCATION OCCUPATION
1  101501           F  41              NeverM United States of America J: 190,000 - 249,999   Masters      Prof.
2  101502           M  27              NeverM United States of America I: 170,000 - 189,999     Bach.      Sales
3  101503           F  20              NeverM United States of America H: 150,000 - 169,999   HS-grad    Cleric.
  HOUSEHOLD_SIZE YRS_RESIDENCE AFFINITY_CARD BULK_PACK_DISKETTES FLAT_PANEL_MONITOR HOME_THEATER_PACKAGE
1              2             4             0                   1                  1                    1
2              2             3             0                   1                  1                    0
3              2             2             0                   1                  0                    0
  BOOKKEEPING_APPLICATION PRINTER_SUPPLIES Y_BOX_GAMES OS_DOC_SET_KANJI
1                       1                1           0                0
2                       1                1           1                0
3                       1                1           1                0
> class(mdbv_data)
[1] "data.frame”
> summary(mdbv_data)
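
As a complement to ore.pull, you can also push a local data frame back into the database as a temporary ore.frame. The following is a minimal sketch of this (my own addition, not part of the original demo; it reuses the mdbv_data data frame created above):

> # Push the local data frame into the database as a temporary ore.frame
> mdbv_ore <- ore.push(mdbv_data)
> class(mdbv_ore)
> head(mdbv_ore, 3)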

Wednesday, January 6, 2016

ORE video : Demo Code Part 2

The following is the second set of demo code from my video on using R in the Oracle Database. Check out the video before using the following code. The blog post for the video will be updated to contain links to all blog posts that have the various demo code.

The following code gives a very quick demonstration of using the ROracle R package to access the data in your Oracle schema. ROracle has a number of advantages over using RJDBC, and most of these advantages are performance improvements. Typically when using ROracle you will see a many-fold improvement when selecting data and moving it to your R client, processing data in the database, and also writing data back to the Oracle Database. In some tests you can see a 7 times improvement in performance over RJDBC. Now that is a big difference.

But the problem with ROracle is that it is only available on certain platforms/OSs. For example, it is not officially available for the Mac. But if you google this issue carefully you will find unofficial ways of overcoming this problem.

ROracle is dependent on the Oracle Client. So you will need to have the Oracle Client installed on your machine and have it available on the search path.
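
As a rough illustration of this (my own addition, and the exact variable names and paths will vary by platform and by how you installed the Oracle Client), you can inspect and adjust the search path from within R before loading ROracle:

> # Check whether the Oracle Client location appears on the search path
> Sys.getenv("PATH")
> # If not, prepend it for the current R session (the path below is just an example)
> Sys.setenv(PATH = paste("c:/oracle/instantclient_12_1", Sys.getenv("PATH"), sep=";"))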

When you have Oracle Client installed and the ROracle R package installed you are ready to start using it.

So here is the demo code from the video.

> library(ROracle)
> drv <- dbDriver("Oracle")
> # Create the connection string
> host <- "localhost"
> port <- 1521
> sid <- "orcl"
> connect.string <- paste("(DESCRIPTION=", "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
+                         "(CONNECT_DATA=(SID=", sid, ")))", sep = "")

> con <- dbConnect(drv, username = "dmuser", password = "dmuser",dbname=connect.string)

> rs <- dbSendQuery(con, "select view_name from user_views")
> # fetch records from the resultSet into a data.frame
> data <- fetch(rs)
> # extract all rows
> dim(data)
[1] 6 1
> data
                  VIEW_NAME
1       MINING_DATA_APPLY_V
2       MINING_DATA_BUILD_V
3        MINING_DATA_TEST_V
4  MINING_DATA_TEXT_APPLY_V
5  MINING_DATA_TEXT_BUILD_V
6   MINING_DATA_TEXT_TEST_V
> dbCommit(con)
> dbClearResult(rs)
> dbDisconnect(con)
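
The demo above only reads data, but since writing data back to the Oracle Database was mentioned earlier, here is a minimal sketch of that too (my own addition; the table name DEMO_RORACLE_WRITE and the reuse of the data data frame fetched above are just examples):

> # Re-connect and write a local data frame back to a new table in the schema
> con <- dbConnect(drv, username = "dmuser", password = "dmuser", dbname = connect.string)
> dbWriteTable(con, "DEMO_RORACLE_WRITE", data, overwrite = TRUE)
> dbReadTable(con, "DEMO_RORACLE_WRITE")
> dbDisconnect(con)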


Wednesday, December 30, 2015

ORE Video : Demo Code part 1

In a previous blog post I posted a video on using R with the Oracle Database and using Oracle R Enterprise. This is a part 1 extension of that blog post that gives the first set of demo code.

This first set of demonstration code is for using RJDBC to connect to the Oracle Database. Using RJDBC relies on using the JDBC jar file for Oracle. It is easily found in various installations of Oracle products and will be called something like ojdbc.jar. I like to take a copy of this file and place it in the root/home directory.

> library(RJDBC)
> # Create connection driver and open connection
> jdbcDriver <- JDBC(driverClass="oracle.jdbc.OracleDriver", classPath="c:/ojdbc6.jar")
> jdbcConnection <- dbConnect(jdbcDriver, "jdbc:oracle:thin:@//localhost:1521/orcl", "dmuser", "dmuser")
> # list the tables in the schema
> # dbListTables(jdbcConnection)
> # get the DB connection details - it gets LOTS of info - Do not run unless it is really needed
> dbGetInfo(jdbcConnection)
> # Query the Oracle instance name.
> # instanceName <- dbGetQuery(jdbcConnection, "SELECT instance_name FROM v$instance")
> # print(instanceName)
> tableNames <- dbGetQuery(jdbcConnection, "SELECT table_name from user_tables where  
                                            table_name not like 'DM$%' and table_name not like 'ODMR$%'")
> print(tableNames)
              TABLE_NAME
1  INSUR_CUST_LTV_SAMPLE
2             OUTPUT_1_2
> viewNames <- dbGetQuery(jdbcConnection, "SELECT view_name from user_views")
> print(viewNames)
                  VIEW_NAME
1       MINING_DATA_APPLY_V
2       MINING_DATA_BUILD_V
3        MINING_DATA_TEST_V
4  MINING_DATA_TEXT_APPLY_V
5  MINING_DATA_TEXT_BUILD_V
6   MINING_DATA_TEXT_TEST_V

> v <- dbReadTable(jdbcConnection, "MINING_DATA_BUILD_V")
> names(v)
[1] "CUST_ID"                 "CUST_GENDER"             "AGE"                     
[4] "CUST_MARITAL_STATUS"     "COUNTRY_NAME"            "CUST_INCOME_LEVEL"       
[7] "EDUCATION"               "OCCUPATION"              "HOUSEHOLD_SIZE"         
[10] "YRS_RESIDENCE"           "AFFINITY_CARD"           "BULK_PACK_DISKETTES"    
[13] "FLAT_PANEL_MONITOR"      "HOME_THEATER_PACKAGE"    "BOOKKEEPING_APPLICATION”
[16] "PRINTER_SUPPLIES"        "Y_BOX_GAMES"             "OS_DOC_SET_KANJI" 
> dim(v)
[1] 1500   18
> summary(v)
    CUST_ID       CUST_GENDER             AGE        CUST_MARITAL_STATUS COUNTRY_NAME       
Min.   :101501   Length:1500        Min.   :17.00   Length:1500         Length:1500        
1st Qu.:101876   Class :character   1st Qu.:28.00   Class :character    Class :character   
Median :102251   Mode  :character   Median :37.00   Mode  :character    Mode  :character   
Mean   :102251                      Mean   :38.89                                          
3rd Qu.:102625                      3rd Qu.:47.00                                          
Max.   :103000                      Max.   :90.00                                          
CUST_INCOME_LEVEL   EDUCATION          OCCUPATION        HOUSEHOLD_SIZE     YRS_RESIDENCE    
Length:1500        Length:1500        Length:1500        Length:1500        Min.   : 0.000   
Class :character   Class :character   Class :character   Class :character   1st Qu.: 3.000   
Mode  :character   Mode  :character   Mode  :character   Mode  :character   Median : 4.000                                                                               
                                                                            Mean   : 4.089                                                                               
                                                                            3rd Qu.: 5.000                                                                               
                                                                            Max.   :14.000 
> hist(v$YRS_RESIDENCE)
> hist(v$AGE)
> dbDisconnect(jdbcConnection)

Make sure to check out the other demonstration scripts that are shown in the video.