Tuesday, October 11, 2016

OTN Appreciation Day : My favourite thing from OTN #ThanksOTN

This blog post is my contribution to the OTN Appreciation Day, the brainchild of Tim Hall (read his blog post here).

For my contribution, I'm going to write about something a bit different from what most people will be covering, which will typically be some feature of the Oracle Database or maybe their favourite tool.

I'm not going to do that. What I'm going to write about is something that OTN does for us Developers, DBAs, etc.

Basically OTN has done so much over the years to help developers in a multitude of different ways.

Apart from the support that OTN gives me as an Oracle ACE Director (Thank you!), one of my favourite things that OTN makes available to us are the VirtualBox Pre-built Developer VMs.


These pre-built VMs allow us developers to go play with the technology, to learn how to use it, to follow tutorials, to see how various software applications work together, etc., all within a virtual machine.

I bet that (almost) everyone reading this blog and taking part in the OTN Appreciation Day will have used one or more of these VirtualBox pre-built VMs.

Why is this a good thing? How would you like trying to install all this software from scratch? Not me. Typically when performing an install I mess something up. If this happens often enough you may just get frustrated with what you are trying to do and give up on it. The result will probably be you giving a negative review of the software to your employer.

But the pre-built VMs take away the pain of installing (sometimes) large and complex software, and allow you to dive straight into using it. I also really love that the VMs come with tutorials, decent data sets, example applications built using the software, and demonstrations of how to get each of these working together.

If you mess anything up, then you can just re-import the VM and start all over again. When you are finished using the VM and testing the software, all you need to do is delete the VM. Your laptop, desktop or wherever you have installed the VM is left clean, with no partially uninstalled files, etc.

Each of us will have our favourite VMs. For most people the Developer Day VM is fantastic. Whether you are a beginner or an experienced developer, I would bet you have a copy of this VM and are probably using it as your personal Oracle Database server.

For me, I'm also a regular user of the Oracle Big Data Lite VM and the OBIEE Sample Application VM.

For OTN Appreciation Day, I haven't talked about a Database feature. Instead I've talked about something that OTN has done for us, the developer, DBA, etc. community. I'd like to thank OTN for supporting the community by providing these VirtualBox pre-built VMs for us to use. You have saved me/us many, many, many hours/days/weeks/months over the years.

BTW. I'm looking forward to the VM with the 12.2 Database.

Monday, October 10, 2016

OUG Ireland Meet-up 20th October

Come along to the first OUG Ireland meet-up on the 20th October, in Bank of Ireland, Grand Canal Dock, Dublin, between 18:15 and 20:00.

Over the years the OUG Ireland SIG committee have organised one day SIG events once or twice a year. This is in addition to the annual OUG Ireland conference (typically held in March). Sometimes it has been a challenge to get people to attend, sometimes it has been a challenge to get enough speakers, sometimes it was a challenge to get a good venue, etc.

So we have decided to try something a little bit different. In keeping with the current trend of smaller scale events we have organised our first Meet-up. This will be a short 2 hour event held after work on the 20th October. So come along and join us.

This is a free and open event. You do not need to be a member of the user group to come to this meet-up.

Here are the details:

Theme for Meet-up

Updates from Oracle Open World 2016

Agenda

18:00-18:20 : Sign-in, meet and greet, and setup of space with seats etc

18:20-18:30 : Introductions & Welcome, Agenda, what is OUG Ireland

18:30-18:45 : The Oracle 12.2 Database new features (Simon Holt)

18:45-19:00 : What's new in the BI, BA, Big Data world from Oracle (Brendan Tierney)

19:00-19:15 : What's happening with Cloud (Tony Cassidy)

19:15-19:30 : Other updates from Oracle (John Caulfield, Oracle)

19:30-19:45 : Q&A session and Open discussion

Location

Bank Of Ireland

1 Grand Canal Square

Dublin

Please sign up, so that we know who is coming

There are 2 places where you can sign up. It doesn't matter which one you use but please use one of them to let us know you will be there.

Sign up on EventBrite.com

Sign up on Meetup.com


We will be looking to set up more Meet-up events, so let us know what you think of the new format, and particularly if you would like to get involved by talking about a topic, project, new feature, whatever, for 15-20 minutes (a short demo would be good).

Wednesday, October 5, 2016

Oracle Data Miner 4.2 EA : New Features

A couple of weeks ago, during the madness of Oracle Open World, there were some new product releases and lots of updates to existing products.

One such product was SQL Developer. They released an Early Adopter version (EA1). This is where you can try out the new version of the product, but you need to be careful as it is not the GA/Production version. So it may have some "features".

One component of SQL Developer is the Oracle Data Miner tool. This is a GUI workflow-based tool built on the Oracle Advanced Analytics option. At OOW we got to hear about the various new Oracle Data Mining features that are coming with the Oracle 12.2 Database. For Oracle Data Miner (ODMr) 4.2 (EA) there are a lot of new features, but most of these are hidden and will only become available when you are using the Oracle 12.2 DB.

But if you are using a 12.1 (or earlier) Database then there are still some new features. I've been having a bit of a look around the EA1 release to see what is new and available to us now (while we wait for 12.2).

If you are on an Oracle 12.1 DB or earlier there are two main new features: a new Workflow Scheduler, and being able to specify in-memory options for ODMr objects. These can be easily found on the ODMr menu bar, and are highlighted in the following image.


Let us now have a quick look at these.

ODMr Workflow Scheduler

The Workflow Scheduler allows us to take an ODMr workflow and schedule it to run in the Oracle Database at a defined time or on a defined schedule. Previously we would have had to write the SQL and PL/SQL code to enable the scheduling, plus the ODMr workflow had to be exported as a number of SQL scripts. So it was a little bit of a challenge to get a workflow running on a regular basis.

Now with the new in-built ODMr Scheduler we can quickly and easily do this without having to write a line of SQL or PL/SQL. The tool will look after the hard bit for us. We can schedule the entire workflow or certain parts of the workflow.
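
For anyone curious, here is a minimal sketch of the manual approach that the Scheduler now replaces, using the DBMS_SCHEDULER package. It assumes the workflow has already been deployed as a PL/SQL procedure; the job name, procedure name and schedule below are illustrative assumptions only.

BEGIN
   -- Illustrative only: run a deployed ODMr workflow nightly at 2am.
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'ODMR_WORKFLOW_JOB',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN my_workflow_run; END;',  -- hypothetical procedure
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY; BYHOUR=2',
      enabled         => TRUE);
END;
/

The new Scheduler generates and manages the equivalent of this for you.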


When setting up your schedule you can pick the Start Date, how frequently you would like it to run (daily, weekly, monthly or some other custom frequency), and when it should end (never, after X number of runs, or on a specific date). You can also re-use an existing schedule.
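
If you are wondering what these frequency options look like underneath, they correspond naturally to Oracle's scheduler calendar syntax. A few hedged examples, assuming the tool uses standard DBMS_SCHEDULER repeat_interval strings:

-- Illustrative repeat_interval strings for the frequencies mentioned above
'FREQ=DAILY; BYHOUR=2'        -- every day at 2am
'FREQ=WEEKLY; BYDAY=MON'      -- every Monday
'FREQ=MONTHLY; BYMONTHDAY=1'  -- first day of each month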


For the advanced settings you can set up email notifications, the job priority level, maximum run duration and limits, and the timezone to use.


ODMr In-memory Options

To access the in-memory options you can click on the 'Performance Options' button on the ODMr menu or you can access it via the menu (Tools -> Preferences) to get the complete list of in-memory settings.


When you use ODMr to build your data mining workflows, ODMr will create a number of objects for each of the nodes of the workflow. These are typically created as tables in your schema. The previous version of ODMr introduced the Performance Options, where you could set the degree of parallelism to use for some nodes and for the underlying SQL and PL/SQL code that is generated.

Now we can specify whether the tables created should be in-memory, and avail of the significantly faster response times when you are using the data in these tables. This is particularly useful as we work with larger and larger data sets and want lightning-fast responses from some of our data mining tasks.

In addition to turning on the in-memory option for certain nodes, we can also specify the in-memory configuration settings, such as the level of Columnar Compression to use and the Priority Level.
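
Under the covers these settings correspond to the standard Database In-Memory SQL syntax. As a hedged sketch (the table name is a made-up example, not one ODMr actually generates), the equivalent statement for one of these tables would look something like:

-- Illustrative only: place a workflow output table in the In-Memory
-- column store, with low query compression and high population priority.
ALTER TABLE odmr_node_output
  INMEMORY MEMCOMPRESS FOR QUERY LOW
  PRIORITY HIGH;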



(I've been on the 12.2 beta so I've had a chance to try out many of the new features. There is some good stuff coming and I'll have blog posts about these when 12.2 goes GA.)

Monday, September 26, 2016

Machine Learning notebooks (and Oracle)

Over the past 12 months there has been an increase in the number of Machine Learning notebooks becoming available.
What is a Machine Learning notebook?
As the name implies it can be used to perform machine learning using one or more languages and allows you to organise your code, scripts and other details in one application.
The ML notebooks provide an interactive environment (sometimes browser based) that allows you to write, run, view results, share/collaborate code and results, visualise data, etc.
Some of these ML notebooks come with one language and others come with two or more languages, plus the ability to add other ML related languages. The most common languages are Spark, Python and R.
Based on these languages ML notebooks are typically used in the big data world and on Hadoop.
Examples of Machine Learning notebooks include: (Starting with the more common ones)
  • Apache Zeppelin
  • Jupyter Notebook (formerly known as IPython Notebook)
  • Azure ML R Notebook
  • Beaker Notebook
  • SageMath
At Oracle Open World (2016), Oracle announced that they are currently working on creating their own ML notebook, based on Apache Zeppelin. They seemed to indicate that a beta version might be available in 2017. Here are some photos from that presentation, but as with all things that Oracle talk about you have to remember and take into account their Safe Harbor.
I'm looking forward to getting my hands on this new product when it is available.

Friday, September 16, 2016

Oracle Text, Oracle R Enterprise and Oracle Data Mining - Part 4

This is the fourth blog post of a series on using Oracle Text, Oracle R Enterprise and Oracle Data Mining. Make sure to check out the previous blog posts, as each one builds upon the last.

In this blog post, I will have an initial look at how you can use Oracle Text to perform document classification. In the next blog post in the series, I will look at how you can use Oracle Data Mining with Oracle Text to perform classification.

The area of document classification using Oracle Text is a well-trodden field and there is lots and lots of material out there to assist you. This blog post will look at the core steps you need to follow and how Oracle Text can help you with classifying your documents or text objects in a table.

When you use Oracle Text for document classification, the simplest approach is to use 'Rule-based Classification'. With this approach you define a set of rules which, when applied to a document, determine the classification that will be assigned to that document.

There is a little bit of setup and configuration needed to make this happen. This includes the following.

  • Create a table that will store your documents. See my previous blog posts in the series for an example of one that is used to store the text from webpages.
  • Create a rules table. This will contain the classification label and a set of rules that will be used by Oracle Text to determine which classification to assign to each document. These are in a format similar to what you might see in the WHERE clause of a SELECT statement. You will need to follow the rules and syntax of CTXRULE to make sure your rules fire correctly.
  • Create a CTXRULE index on the rules table you created in the previous step.
  • Create a table that will be a link table between the table that contains your documents and the table that contains your categories.

When you have these steps completed you can start classifying your documents. The following example illustrates these steps using the text documents I set up in my previous blog posts.

Here is the structure of my documents table. I had also created an Oracle Text CTXSYS.CONTEXT index on the DOC_TEXT attribute.

create table MY_DOCUMENTS (
 doc_pk         NUMBER(10) PRIMARY KEY,
 doc_title      VARCHAR2(100),
 doc_extracted  DATE,
 data_source    VARCHAR2(200),
 doc_text       CLOB );

The next step is to create a table that contains our categories and rules. The structure of this table is very simple, and the following is an example.
create table DOCUMENT_CATEGORIES (
 doc_cat_pk  	NUMBER(10) PRIMARY KEY, 
 doc_category 	VARCHAR2(40),
 doc_cat_query  VARCHAR2(2000) );

create sequence doc_cat_seq;

Now we can create the table that will store the identified document categories/classifications for each of our documents. This is a link table that contains the primary keys from the MY_DOCUMENTS and DOCUMENT_CATEGORIES tables.

create table MY_DOC_CAT (
 doc_pk 	NUMBER(10), 
 doc_cat_pk NUMBER(10) );

Queries for CTXRULE are similar to those of CONTAINS queries. Basic phrasing within quotes is supported, as are the following CONTAINS operators: ABOUT, AND, NEAR, NOT, OR, STEM, WITHIN, and THESAURUS. The following statements contain my rules.

insert into document_categories values
  (doc_cat_seq.nextval, 'OAA','Oracle Advanced Analytics');

insert into document_categories values
  (doc_cat_seq.nextval, 'Oracle Data Mining','ODM or Oracle Data Mining');

insert into document_categories values
  (doc_cat_seq.nextval, 'Oracle Data Miner','ODMr or Oracle Data Miner or SQL Developer');

insert into document_categories values
  (doc_cat_seq.nextval, 'R Technologies','Oracle R Enterprise or ROracle or ORAAH or R');

We are now ready to create the Oracle Text CTXRULE index.

create index doc_cat_idx on document_categories(doc_cat_query) indextype is ctxsys.ctxrule;

Our next step is to apply the rules and generate the categories/classifications. We have two scenarios to deal with here. The first is how to do this for our existing records, and the second is how to do this on an ongoing basis as new documents get loaded into the MY_DOCUMENTS table.

For the first scenario, where the documents already exist in our table, we can use an anonymous PL/SQL block, just like the following.

DECLARE
   v_document    MY_DOCUMENTS.DOC_TEXT%TYPE;
   v_doc         MY_DOCUMENTS.DOC_PK%TYPE;
BEGIN
   -- loop over all the existing documents
   for doc in (select doc_pk, doc_text from my_documents) loop
      v_document := doc.doc_text;
      v_doc      := doc.doc_pk;
      -- find every category whose rule matches this document
      for c in (select doc_cat_pk from document_categories
                where matches(doc_cat_query, v_document) > 0) loop
         insert into my_doc_cat values (v_doc, c.doc_cat_pk);
      end loop;
   end loop;
END;
/

Let us have a look at the categories/classifications that were generated.

select a.doc_title, c.doc_cat_pk, b.doc_category
from my_documents a,
     document_categories b,
     my_doc_cat c
where a.doc_pk = c.doc_pk
and c.doc_cat_pk = b.doc_cat_pk
order by a.doc_pk, c.doc_cat_pk;


We can see that the categorisation/classification actually gives us the results we would have expected for these documents/web pages.

Now we can look at how to generate these categories/classifications on an ongoing basis. For this we will need a database trigger on the MY_DOCUMENTS table. Something like the following should do the trick.

CREATE or REPLACE TRIGGER t_cat_doc
  before insert on MY_DOCUMENTS
  for each row
BEGIN
  for c in (select doc_cat_pk from document_categories
            where  matches(doc_cat_query, :new.doc_text)>0)
  loop
        insert into my_doc_cat values (:new.doc_pk, c.doc_cat_pk);
  end loop;
END;
/

At this point we have worked through how to build and use Oracle Text to perform rule-based document categorisation/classification.

In addition to this type of classification, Oracle Text also has some machine learning algorithms to classify documents. These include Decision Trees, Support Vector Machines and clustering. It is important to note that these are not the machine learning algorithms that come as part of Oracle Data Mining. Look out for my other blog posts that cover these topics.
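
As a small taster of that supervised route, here is a minimal sketch of the CTX_CLS.TRAIN procedure, which generates classification rules from a set of pre-categorised training documents. The index, training and results table names here are illustrative assumptions, not objects from this example.

BEGIN
   -- Illustrative only: derive rules from documents with known categories.
   CTX_CLS.TRAIN (
      index_name => 'my_doc_idx',     -- a CONTEXT index on the document table
      docid      => 'doc_pk',         -- document id column
      cattab     => 'doc_train_cat',  -- hypothetical training table (doc_pk, cat_id)
      catdocid   => 'doc_pk',
      catid      => 'cat_id',
      restab     => 'doc_rules');     -- hypothetical table to hold the generated rules
END;
/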

Monday, September 12, 2016

My 3rd Book is now officially released

Today 12th September (2016) is the official release date of my 3rd book.

The title of the book is 'Oracle R Enterprise'. Make sure to check it out on Amazon.

It has been a busy 17 months; as you may have noticed, I had another book released a few weeks ago. Check it out here.

Yes, I was working on two books at the same time.

Yes, that was a lot of work, and looking back on it was a lot of fun too.

This new book (Oracle R Enterprise) is a good companion for my first book (Predictive Analytics using Oracle Data Miner), as I now have a book for each of the components of the Oracle Advanced Analytics option.


Here is what is on the back cover of the book.

"Effectively manage your enterprise’s big data and keep complex processes running smoothly using the hands-on information contained in this Oracle Press guide. Oracle R Enterprise: Harnessing the Power of R in Oracle Database shows, step-by-step, how to create and execute large-scale predictive analytics and maintain superior performance. Discover how to explore and prepare your data, accurately model business processes, generate sophisticated graphics, and write and deploy powerful scripts. You will also find out how to effectively incorporate Oracle R Enterprise features in APEX applications, OBIEE dashboards, and Apache Hadoop systems. Learn to: • Install, configure, and administer Oracle R Enterprise • Establish connections and move data to the database • Create Oracle R Enterprise packages and functions • Use the R language to work with data in Oracle Database • Build models using ODM, ORE, and other algorithms • Develop and deploy R scripts and use the R script repository • Execute embedded R scripts and employ ORE SQL API functions • Map and manipulate data using Oracle R Advanced Analytics for Hadoop • Use ORE in Oracle Data Miner, OBIEE, and other applications ... "

This book is ideally suited to people who are starting out with Oracle R Enterprise (ORE), or who have some experience with using it and want to see what you can do with it and how it can be used with other products like APEX, OBIEE, Hadoop and Spark. Yes, I touch on these in the book. This book may also be of interest to those who are working with the products I've just listed and want to see how to use ORE with them.

If you are at Oracle Open World (OOW) next week make sure to check out the book in the Oracle Book Store, and if you buy a copy try to track me down to get me to sign it. The best way to do this is to contact me on Twitter, leave a message at the Oracle Press stand, or you will find me hanging out at the OTN Lounge.

A special thanks to my technical editor, Mark Hornick, who is a Director of Oracle Advanced Analytics Product Management, for Oracle's R Technologies.

Here are quotes from some people about the book.

The book ‘Oracle R Enterprise’, written by Brendan Tierney, is a valuable resource for any data scientist who wants to use the R language with the Oracle Database. It demonstrates very well the many features of Oracle R Enterprise, from performing simple analytics to utilising the many performance features of the Oracle Database, allowing you to work with all your datasets - Big or small. Additionally the book demonstrates how you can use the power of the R language with the SQL language as well as with other Oracle products including APEX and OBIEE, as well as Hadoop and Spark.

- John Donnelly - Regional Director, Oracle Ireland

The new book by Brendan Tierney, Oracle ACE Director, on Oracle R Enterprise details how users can gain maximal value out of the Oracle Database’s tight integration with the popular open source R statistical programming language. The author guides the R community into how they can, through the ease and familiarity of R, tap into the power of the Oracle Database Enterprise Edition with its Oracle Advanced Analytics Option or the Oracle Database Cloud Service. Brendan, an expert in this field, clearly articulates how to get quickly started and provides extensive “how to” examples and R scripts. Readers of the book can learn how they can access data directly in the Database, eliminate data movement while exploiting the openness and flexibility of R. Readers can then tap into the scalability and security of SQL of the Oracle Database and leverage Oracle’s proprietary, parallelized in-database machine learning algorithms and Oracle R Enterprise’s R “push down” to SQL functions. Read this book and learn how to leverage R and reduce model development and enterprise model deployment from days/weeks to minutes/hours!

-Charlie Berger

Sr. Director Product Management, Oracle Advanced Analytics and Machine Learning

"Brendan Tierney conveys very clearly all the aspects required for a successful Data Scientist that wants to work with large Databases and large Big Data clusters. It contains a great articulation of all aspects related to building and deploying Machine Learning algorithms in an Oracle Database environment with an overview on the algorithms on Hadoop clusters, as well as the integration with Business Intelligence dashboards and Applications. This is an essential reference for anyone in the Data Science field today working with Oracle Databases.

- Marcos Arancibia, Product Manager, Oracle Data Science

Tuesday, September 6, 2016

Change the size of ORE PNG graphics using in-database R functions

In a previous blog post I showed you how to create and display a ggplot2 R graphic using SQL. Make sure to check it out before reading the rest of this blog post.
In my previous blog post, I showed and mentioned that the PNG graphic returned by the embedded R execution SQL statement was not the same as what was produced if you created the graphic in an R session.
Here is the same ggplot2 graphic. The first one is what is produced in an R session and the second is what is produced by the SQL query and the embedded R execution in Oracle.
As you can see the second image (produced using the embedded R execution) gives a very square image.
The reason for this is that Oracle R Enterprise (ORE) creates the graphic image in PNG format. The default setting for this is 480 x 480. You will find this information when you go digging in the R documentation, not in the Oracle documentation.
So, how can I get my ORE produced graphic to appear like what is produced in R?
What you need to do is to change the height and width of the PNG image produced by ORE. You can do this by passing parameters in the SQL statement used to call the user defined R function, that in turn produces the ggplot2 image.
In my previous post, I gave the SQL statement to call and produce the graphic (shown above). One of the parameters to the rqTableEval function was set to null. This was because we didn't have any parameters to pass, apart from the data set.
We can replace this null with any parameters we want to pass to the user defined R function (demo_ggpplot). To pass the parameters we need to define them using a SELECT statement.
cursor(select 500 as "ore.png.height", 850 as "ore.png.width" from dual),
The full SELECT statement now becomes
select *
from table(rqTableEval( cursor(select * from claims),
                        cursor(select 500 as "ore.png.height", 850 as "ore.png.width" from dual),
                        'PNG',
                        'demo_ggpplot'));
When you view the graphic in SQL Developer, you will get something that looks a bit more like what you would expect or want to see.
For each graphic image you want to produce using ORE you will need to figure out the best PNG height and width settings to use. It also depends on what tool or application you are going to use to display the images (e.g. APEX).

Thursday, September 1, 2016

How to Display a BLOB image in an APEX Report

Do you want to display an image on a report in APEX ?

Is the image stored as a BLOB data type in your schema, or is the BLOB returned by some function?

If so, then displaying the BLOB is not a simple or straightforward task.

Actually it is a simple and straightforward task, as long as you know "the trick" you need to create/define in your APEX report.

The following steps outline what you need to do to create a report with a BLOB image. Most of these are the standard steps, except for Step 4. That is the important one.

1. Create the Report using the APEX wizard

Create a new report. In my example here I'm going to create a classic report.

Enter a title for the report, and accept the default settings.

Create a new navigation menu entry.


2. Define the Table or Query for the Report

Select the table or view that contains the data, or define the SQL Query to return the results. It might be best to select this latter option as it will make things clearer and easier to change in Step 4.


Click next on the next 2 screens of the wizard and then click the Create button.

3. Set the BLOB attribute settings

When you run the report you will get something like the following being displayed. As you can see it clearly does not display the BLOB image.


Next we need to set up the BLOB attribute settings, as shown in the following.


When we run the report now, we get an error message.


4. Change the report query to return the length of the BLOB

Now this is the magic bit.

To get the image to display you need to go back to the report level and change the query in the SQL Query box to use the dbms_lob.getlength() function, shown below, to return the length of the image in the BLOB attribute (in my example this attribute is called IMAGE).

select ID,
       dbms_lob.getlength(image)  image
from V_DOCUMENT_TM_IMAGE

5. The BLOB object now appears :-)

That's it. Now when you run your report the image will be displayed.


So now you know how to display a BLOB image in an APEX Report.

(Thanks to Roel and Joel for the help in working out how to do this)

Wednesday, August 24, 2016

How to get ORE to work with APEX

This blog post will bring you through the steps of how to get Oracle R Enterprise (ORE) to work with APEX.

The reason for this blog post is that since ORE 1.4+ the security model has changed for how you access and run in-database user defined R scripts using the ORE SQL API functions.

I have a series of blog posts going out on using Oracle Text, Oracle R Enterprise and Oracle Data Mining. It was during one of these posts I wanted to show how easy it was to display an R chart using ORE in APEX. Up to now my APEX environment consisted of APEX 4 and ORE 1.3. Everything worked, nice and easy. But in my new APEX environment (APEX 5 and ORE 1.5), it didn't work. That is, calling an in-database user defined R script using the SQL API functions didn't work. Here is the error message that is displayed.


So something extra was needed with ORE 1.5. The security model around the use of in-database user defined R scripts has changed. Extra functions are now available to allow you to control who can run these scripts. For example, we have an ore.grant function where you can grant another user the privilege to run a script.

But the problem was, when I was in APEX, the application was defined on the same schema that the R script was created in (this was the RQUSER schema). When I connected to the RQUSER schema using ORE and SQL, I was able to see and run this R script (see my previous blog post for the details). But when I was in APEX I wasn't able to see the R script. For example, when using the SQL Workshop in APEX, I just couldn't see the R script.


Something strange is going on. It turns out that the view definitions for the in-database ORE scripts are defined with

owner=SYS_CONTEXT('USERENV', 'SESSION_USER');

(Thanks to the Oracle ORE team and the Oracle APEX team for their help in working out what needed to be done)

This means when I'm connected to APEX, using my schema (RQUSER), I'm not able to see any of my ORE objects.

How do you overcome this problem?

To fix this problem, I needed to grant the APEX_PUBLIC_USER access to my ORE script.

ore.grant(name = "prepare_tm_data_2", type = "rqscript", user = "APEX_PUBLIC_USER")

Now when I query the ALL_RQ_SCRIPTS view again, using the APEX SQL Workshop, I now get the following.
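
For reference, that check is just a simple query against the view.

-- list the in-database R scripts visible to the current session
select * from all_rq_scripts;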


Great. Now I can see the ORE script in my schema.

Now when I run my APEX application I get the graphic produced by R, running on my DB server, delivered to my APEX application using SQL (via a BLOB object), and displayed on my screen.


Monday, August 22, 2016

My 2nd Book: is now available: Real World SQL and PL/SQL

It has been a busy 12 months. In addition to the day jobs, I've also been busy writing. (More news on this in a couple of weeks!)

Today is a major milestone as my second book is officially released and available in print and ebook formats.

The title of the book is 'Real World SQL and PL/SQL: Advice from the Experts'. Check it out on Amazon.

Now that sounds like a very fancy title, but it isn't meant to be. This book is written by 5 people (including me), who are all Oracle ACE Directors, who each have 20+ years of experience of working with the Oracle Database, and who all love sharing our knowledge. My co-authors are Arup Nanda, Heli Helskyaho, Martin Widlake and Alex Nuijten. It was a pleasure working with you.

I haven't seen a physical copy of the book yet!!! Yes, the book is released and I haven't held it in my hands. Although I have seen pictures of it that other people have taken. There was a delay in sending out the author copies of the book, but as of this morning my books are sitting in Stansted Airport and should be making their way to Ireland today. So fingers crossed I'll have them tomorrow. I'll update this blog post with a picture when I have them. UPDATE: They finally arrived at 13:25 on the 22nd August.


In addition to the 5 authors we also had Chet Justice (Oraclenerd), an Oracle ACE Director, as the technical editor. Tim Hall, Oracle ACE Director, wrote the foreword for us.


To give you some background to the book and why we wrote it, here is an extract from the start of the book, where I describe how the idea for this book came about and the aim of the book.

"While attempting to give you an idea into our original thinking behind the need for this book and why we wanted to write it, . the words of Rod Stewart’s song ‘Sailing’ keeps popping into my mind. These are ‘We are sailing, we are sailing, home again 'cross the sea’. This is because the idea for this book was born on a boat. Some call it a ship. Some call it a cruise ship. Whatever you want to call it, this book was born at the OUG Norway conference in March 2015. What makes the OUG Norway conference special is that it is held on a cruise ship that goes between Oslo in Norway to Kiel in Germany and back again. This means as a speaker and conference attendee you are ‘trapped’ on the cruise ship for 2 days filled with presentations, workshops, discussions and idea sharing for the Oracle community.

It was during this conference that Heli and Brendan got talking about their books. Heli had just published her Oracle SQL Developer Data Modeler book and Brendan had published his book on Oracle Data Miner the previous year. Whilst they were discussing their experiences of writing and sharing their knowledge and how much they enjoyed this, they both recognized that there are a lot of books for the people starting out in their Oracle career and then there are lots of books on specialized topics. What was missing were books that covered the middle group. A question they kept on asking but struggled to answer was, ‘after reading the introductory books, what book would they read next before getting onto the specialized books?’ This was particularly true of SQL and PL/SQL.

They also felt that something that was missing from many books, especially introductory ones, was the “Why and How” of doing things in certain ways that comes from experience. It is all well and good knowing the syntax of commands and the options, but what takes people from understanding a language to being productive in using it is that real-world derived knowledge that comes from using it for real tasks. It would be great to share some of that experience.

Then over breakfast on the final day of the OUG Norway conference, as the cruise ship was sailing through the fjord and around the islands that lead back to Oslo, Heli and Brendan finally agreed that this book should happen. They then listed the type of content they thought would be in such a book and who are the recognized experts (or super heroes) for these topics. This list of experts was very easy to come up with and the writing team of Oracle ACE Directors was formed, consisting of Arup Nanda, Martin Widlake and Alex Nuijten, along with Heli Helskyaho and Brendan Tierney. The author team then got to work defining the chapters and their contents. Using their combined 120+ years of SQL and PL/SQL experience they finally came up with scope and content for the book at Oracle Open World.

..."

As you can see, this book was 17 months in the making. This consisted of 4 months of proposal writing, research and refinement, 8 months of writing, 3 months of editing and 2 months for production of the book.

Yes it takes a lot of time and commitment. We all finished our last tasks and final edits on the book back in early June. Since then the book has been sent for printing, converted into an ebook, books shipped to Oracle Press warehouse, then shipped to Amazon and other book sellers. Today it is finally available officially.

(when I say officially, it seems that Amazon has shipped some pre-ordered books a week ago)

If you are at Oracle Open World (OOW) in September make sure to check out the book in the Oracle Book Store, and if you buy a copy try to track us down to get us to sign it. The best way to do this is to contact us on Twitter, leave a message at the Oracle Press stand, or you will find us hanging out at the OTN Lounge.

Wednesday, August 17, 2016

Oracle Text, Oracle R Enterprise and Oracle Data Mining - Part 3

This is the third blog post of a series on using Oracle Text, Oracle R Enterprise and Oracle Data Mining. Check out the first and second blog posts of the series, as the data used in this blog post was extracted, processed and stored in a database table.

This blog post is divided into 3 parts. The first part will build on what was covered in the previous blog post and will expand the in-database ORE R script to include more data processing. The second part of this blog post will look at how you can use SQL to call our in-database ORE R scripts, so that we can include them in our custom applications, for example using APEX (part 3).

Part 1 - Expanding our in-database ORE R script for Text Mining

In my previous blog post we created an ORE user defined R script, that is stored in the database, and this script was used to perform text mining and to create a word cloud. But the data/text to be mined was processed beforehand and passed into this procedure.

But what if we wanted a scenario where we just say: here is the table that contains the data, go ahead and process it. To do this we need to expand our user defined R script to include the loop that merges the webpage text into one variable. The following is a new version of our ORE user defined R script.

> ore.scriptCreate("prepare_tm_data_2", function (local_data) { 
  library(tm)
  library(SnowballC)
  library(wordcloud)
  
  tm_data <-""
  for(i in 1:nrow(local_data)) {
    tm_data <- paste(tm_data, local_data[i,]$DOC_TEXT, sep=" ")
  }
    
  txt_corpus <- Corpus (VectorSource (tm_data))
  
  # data clean up
  tm_map <- tm_map (txt_corpus, stripWhitespace) # remove white space
  tm_map <- tm_map (tm_map, removePunctuation) # remove punctuations
  tm_map <- tm_map (tm_map, removeNumbers) # to remove numbers
  tm_map <- tm_map (tm_map, removeWords, stopwords("english")) # to remove stop words
  tm_map <- tm_map (tm_map, removeWords, c("work", "use", "java", "new", "support"))

  # prepare matrix of words and frequency counts
  Matrix <- TermDocumentMatrix(tm_map) # terms in rows
  matrix_c <- as.matrix (Matrix)
  freq <- sort (rowSums (matrix_c)) # frequency data
  
  res <- data.frame(words=names(freq), freq)
  wordcloud (res$words, res$freq, max.words=100, min.freq=3, scale=c(7,.5), random.order=FALSE, colors=brewer.pal(8, "Dark2"))
} ) 

To call this R script using embedded R execution we can use the ore.tableApply function. The parameter to our new R script will now be an ORE data frame. This can be a table in the database, or we can create a subset of a table and pass it as the parameter. This means all the data processing will occur on the Oracle Database server. No data is passed to the client, nor is any processing performed on the client. All work is done on the database server. The only data that is passed back to the client is the result from the function, and that is the word cloud image.

> res <- ore.tableApply(MY_DOCUMENTS, FUN.NAME="prepare_tm_data_2")
> res

Part 2 - Using SQL to perform R Text Mining

Another way you can call this ORE user defined R function is using SQL. Yes, we can use SQL to call R code and to produce an R graphic. When doing this the R graphic will be returned as a BLOB. That makes it easy to view and to include in your applications, just like APEX.

To call our ORE user defined R function, we can use the rqTableEval SQL function. You only really need to set two of the parameters to this function. The first parameter is a SELECT statement that defines the data set to be passed to the function. This is similar to what I showed above using the ore.tableApply R function, except we have easier control over what records to pass in as the data set. The fourth parameter gives the name of the ORE user defined R script.

select *
from table(rqTableEval( cursor(select * from MY_DOCUMENTS),
                        null,
                        'PNG',
                        'prepare_tm_data_2'));

This is the image that is produced by this SQL statement and viewed in SQL Developer.


Part 3 - Adding our R Text Mining to APEX

Adding the SQL to call an ORE user defined script is very simple in APEX. You can create a form or a report based on a query, and this query can be the same query that is given above.

Something that I like to do is to create a view for the ORE SELECT statement. This gives me some flexibility for potential future modifications. This could be as simple as just changing the name of the script. Also if I discover a new graphic that I want to use, all I need to do is change the R code in my user defined R script and it will automatically be picked up and displayed in APEX. See the images below.
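
As a minimal sketch of that approach (the view name is an illustrative assumption):

-- Wrap the embedded R execution call in a view, so the APEX report
-- only ever references the view. Illustrative view name only.
create or replace view v_tm_wordcloud as
select *
from table(rqTableEval( cursor(select * from MY_DOCUMENTS),
                        null,
                        'PNG',
                        'prepare_tm_data_2'));

If the script name or the graphic changes, only the view or the underlying R script needs updating; the APEX application itself is untouched.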

WARNING: Yes, I do have a slight warning. Since ORE 1.4 and higher there is a slightly different security model around the use of user defined R scripts. Instead of going into the details of this and what you need to do in this blog post, I will have a separate blog post that describes the behaviour and what you need to do to allow APEX to use ORE and to call the user defined R scripts in your schema. So look out for this blog post, coming really soon.

In this blog post I showed you how you can use Oracle R Enterprise and the embedded R execution features of ORE to take the text from the webpages and create a word cloud. This is a useful tool to be able to see visually what words stand out most on your webpage and whether the correct message is being put across to your customers.

Monday, August 8, 2016

Oracle Text, Oracle R Enterprise and Oracle Data Mining - Part 2

This is the second blog post of a series on using Oracle Text, Oracle R Enterprise and Oracle Data Mining. Check out the first blog post of the series, as the data used in this blog post was extracted, processed and stored in a database table.

In this blog post I will show you how to use Oracle R Enterprise and the embedded R execution features of ORE to take the text from the webpages and create a word cloud. This is a useful tool to be able to see visually what words stand out most on your webpage and whether the correct message is being put across to your customers.

Prerequisites: You will need to load the following R packages into your R environment: 'tm', 'wordcloud' and 'SnowballC'. These are required to process the following R code segments.

install.packages (c( "tm", "wordcloud", "SnowballC"))
library (tm)
library (wordcloud)
library (SnowballC)

Select data from table and prepare: We need to select the data from the table in our schema and to merge it into one variable.

local_data <- ore.pull(MY_DOCUMENTS)

tm_data <-""
for(i in 1:nrow(local_data)) {
  tm_data <- paste(tm_data, local_data[i,]$DOC_TEXT, sep=" ")
}
tm_data

Create function to perform Text Mining: In my previous blog post on creating a word cloud I gave the R code. In order to allow this R code to be run on the database server (using the embedded R execution of ORE) we need to package this text mining R code up into an ORE user defined R script. This is stored in the database.

ore.scriptDrop("prepare_tm_data")
ore.scriptCreate("prepare_tm_data", function (tm_data) { 
  library(tm)
  library(SnowballC)
  library(wordcloud)
  
  txt_corpus <- Corpus (VectorSource (tm_data))
  
  # data clean up
  tm_map <- tm_map (txt_corpus, stripWhitespace) # remove white space
  tm_map <- tm_map (tm_map, removePunctuation) # remove punctuations
  tm_map <- tm_map (tm_map, removeNumbers) # to remove numbers
  tm_map <- tm_map (tm_map, removeWords, stopwords("english")) # to remove stop words
  tm_map <- tm_map (tm_map, removeWords, c("work", "use", "java", "new", "support"))
  
  # prepare matrix of words and frequency counts
  Matrix <- TermDocumentMatrix(tm_map) # terms in rows
  matrix_c <- as.matrix (Matrix)
  freq <- sort (rowSums (matrix_c)) # frequency data
  
  res <- data.frame(words=names(freq), freq)
  wordcloud (res$words, res$freq, max.words=100, min.freq=3, scale=c(7,.5), random.order=FALSE, colors=brewer.pal(8, "Dark2"))
} ) 

Before we can run this user defined R script, we need to ensure that we have the 'tm', 'SnowballC' and 'wordcloud' R packages installed on the Oracle Database server. On the Oracle Database server you need to start R and load ORE.

> library(ORE)

Then run the following command to install these R packages

> install.packages(c('tm', 'wordcloud', 'SnowballC'))

Run the function on the DB Server: You are now ready to run the function. In an earlier step we had gathered the data. Now we can pass this data to the in-database R script.

> res <- ore.doEval(FUN.NAME="prepare_tm_data", tm_data=tm_data)

The ore.doEval function is a general purpose ORE function. In this case we pass it two parameters. The first parameter is the name of the user defined R script stored in the database, and the second parameter is the data. The function returns an ORE object that contains the word cloud graphic.

Display the results: You can very easily display the results.

> res

This gives us the following graphic.


In my next blog post of this series, I will show you how you can use the function created above, and some other bits and pieces, using other features of ORE and also SQL.