Friday, October 9, 2015

From Zero to Dashboards in 10min with BICS

Unless you have been going around with your head in the clouds, all you can hear from the big names in the IT world is talk about moving to the Cloud. (Yes, that was a poor attempt at a joke. It is Friday afternoon after all.)

The title of this post is 'From Zero to Dashboards in 10 min with BICS'.

BICS stands for Oracle BI Cloud Service.

Over the past few months I've been working on rolling out BICS at a number of different sites, and it is surprising how quickly you can get up and running with it.

There is no install and only the minimum of setup and configuration. All you need to do is create a user and give them some privileges. All of that takes 2 minutes.

Next, get them to log in to BICS (2 minutes the first time they log in) and load up some data using the Data Load feature (another 2 minutes).

Then you can move on to the fun part: creating some Dashboards using Visual Analyzer. See the example below of one that we created in 4 minutes.

Bics 1

So all of the above took 10 minutes. How quick and easy is that!

Give it a go yourself and see how quick it is for you.

Let me know if you have any questions on using BICS and if you need any help.

Tuesday, September 8, 2015

24th Sept: Oracle UG Ireland: BA, Big Data & Tech SIG (Free) event

This is a Free Event.

Register today to reserve your place, as places are limited.

Our next BA, Big Data & Tech SIG will be on Thursday 24th September. This event will be held in Jurys Inn, Customs House, in Dublin.

We have decided to merge the two SIGs for this event to offer everyone even more content and learning opportunities. The day is filled with presentations from End Users and from Product Experts. We will also have some of the latest updates from Oracle.

It is the ideal opportunity for you to learn, share and network with other users.

Here is the agenda for this full day event.

NewImage

As you can see this is a really good agenda, so I hope to see you there.

You need to book your place for this SIG. To book your place, go here.


If you have ideas of topics you would like covered at the next SIG, just let me know.

I'll see you there.

Tuesday, September 1, 2015

My presentations at OOW 2015

Oracle Open World is coming at the end of October. Lots of people are starting to get excited, as Oracle Open World is the number one place for Oracle geeks from around the world to hang out for a week.

This year I will be giving 2 presentations, both of them on the Sunday. That means I get to really enjoy the rest of the conference. These presentations are:

Sunday 13:30-14:15 & 14:30-15:15  (a double session)
Session Title : 12 more things about Oracle 12c 
Description: These sessions are being organised by the EMEA Oracle User Group and follow on from the successful sessions at last year's Oracle Open World. These sessions/presentations will consist of 12 presenters each giving a 5-7 minute talk on a topic related to the Oracle Database. Many thanks to Debra Lilley for arranging these sessions.

For me, I will be talking about running R in the Oracle Database using Oracle R Enterprise. 5-7 minutes is not enough time for this topic, but the whole idea of these sessions and talks is to give people a taster of what is possible in the Oracle Database. You can then use your new knowledge to check out other sessions at OOW and/or to try it out when you get home.

I'll put up another blog post when I have the full list and schedule of presenters.

Sunday 15:30-16:15  (yes this is right after the previous session/presentation)
Session Title : Real Business Value from Big Data and Advanced Analytics
Description : I will be co-presenting with Antony Heljula and we will be talking about some of the advanced analytics and big data projects we have worked on. We have had some amazing results with these projects and we will be giving you a taster of what is possible. Also, some of these projects didn't involve big data, so even with small data you can get amazing results using Advanced Analytics / Predictive Analytics / Data Mining / Data Science.

After these presentations I get to enjoy the rest of Oracle Open World without having to worry about giving another presentation.

My diary for the conference is filling up fast and it looks like I will be busy each day from 8am until late into the evening. Oracle Open World has lots and lots of social and networking events, and these are great for meeting up with people and making new contacts.

Wednesday, August 26, 2015

Enabling Autotrace in SQL Developer

This is mainly a note to myself, to save me from looking up the details every time I need them. Now I can just go to my own post.

To use Autotrace in SQL*Plus you need to grant the schema the PLUSTRACE role (it is created by the $ORACLE_HOME/sqlplus/admin/plustrce.sql script if it does not already exist).

But for SQL Developer the PLUSTRACE role needs a few additional privileges.

Connect to SYS as sysdba and run the following.

-- give the PLUSTRACE role the extra privileges SQL Developer needs
grant select on v_$session to plustrace;
grant select on v_$sql_plan to plustrace;
grant select on v_$sql_plan_statistics to plustrace;
grant select on v_$sql to plustrace;

-- then grant the role to the schema that will use Autotrace, e.g.
-- grant plustrace to brendan;

Now you will be able to run Autotrace in SQL Developer.

(Make sure to reconnect your SQL Developer connection so that the new privileges are picked up.)

Friday, August 14, 2015

Managing ORE in-database Data Stores using SQL

When working with ORE you will end up creating a number of different data stores in the database. Also, as your data science team grows, the number of data stores can become very large.

When you install Oracle R Enterprise you get a number of views that allow ORE users to see, using SQL, what ORE Data stores they have and what objects exist in them.

Some of the time the ORE developers and data analysts will use the set of ORE functions to manage the in-database ORE Data stores. These include:

NewImage

Using these ORE functions, the schema user/data scientist can see what ORE Data stores they have, and ore.delete can be used to drop an ORE Data store when it is no longer needed.

But the problem here is that over time your schemas can get a bit clogged up with ORE Data stores, particularly when a data scientist is no longer working on the project or there is no need to maintain the ORE Data stores. This is common on data science projects where you might have a number of data scientists working in/sharing the one database schema.

For a DBA, whose role will be to clean up the ORE Data stores that are no longer needed, there are 4 options.

The first of these: if all the ORE Data stores exist in the data scientist's schema and nothing else in the schema is needed, then you can just go ahead and drop the schema.

The second option is to log into the schema using SQL and drop the ORE Data stores. See an example of this below.

The third option is to connect to the Oracle schema using R and ORE and then use the ore.delete function to drop the ORE Data stores.
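A minimal R sketch of this third option might look like the following (the connection details here are assumptions; adjust them for your own environment):

library(ORE)

# Connect to the schema that owns the ORE Data stores
# (hypothetical connection details)
ore.connect(user = "ore_user", password = "ore_user",
            host = "localhost", service_name = "pdb12c")

# List the Data stores in this schema
ore.datastore()

# Drop the Data store that is no longer needed
ore.delete(name = "ORE_FOR_DELETION")

ore.disconnect()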

The fourth option is to connect to the RQSYS schema. This schema is the owner of the views used to query the ORE Data stores in each schema. The RQSYS schema was locked as part of the ORE installation, so as the DBA you will need to unlock it before you can connect.

The following SQL lists the ORE Data stores that were created for that schema.

column dsname format a20
column description format a35

SELECT * FROM rquser_DataStoreList;

DSNAME                     NOBJ     DSSIZE CDATE     DESCRIPTION
-------------------- ---------- ---------- --------- -----------------------------------
ORE_DS                        2       5104 04-AUG-15 Example of ORE Datastore
ORE_FOR_DELETION              1       1675 14-AUG-15 Need to Delete this ORE Datastore
ORE_DS2                       5   51466509 04-AUG-15 DS for all R Env Data

You can also view what objects have been saved in the ORE Data stores.

column objname format a15
column class format a15
SELECT * FROM rquser_DataStoreContents;
 
DSNAME               OBJNAME         CLASS              OBJSIZE     LENGTH       NROW       NCOL
-------------------- --------------- --------------- ---------- ---------- ---------- ----------
ORE_DS               CARS_DATA       ore.frame             1306         11         32         11
ORE_DS               cars_ds         data.frame            3798         11         32         11
ORE_DS2              cars_ds         data.frame            3798         11         32         11
ORE_DS2              cars_ore_ds     ore.frame             1675         11         32         11
ORE_DS2              sales_ds        data.frame        51455575          7     918843          7
ORE_DS2              usa_ds          ore.frame             2749         23      18520         23
ORE_DS2              usa_ds2         ore.frame             2712         23      18520         23
ORE_FOR_DELETION     cars_ore_ds     ore.frame             1675         11         32         11

To drop an ORE Data store in your current schema you can use the rqDropDataStore SQL procedure.

BEGIN
   rqDropDataStore('ORE_FOR_DELETION');
END;
/

For the DBA: when you unlock and connect to the RQSYS schema you will be able to see all the ORE Data stores in the database. The views will contain an additional column.

But if you use the above SQL procedure to delete an ORE Data store it will not work. This is because the procedure will only drop an ORE Data store if it exists in your schema, and when connected to the RQSYS schema we will not have any ORE Data stores.

We can create a procedure that will allow us to delete/drop any ORE Data store in any schema.

create or replace PROCEDURE my_ORE_Datastore_Drop(
  ds_owner  in VARCHAR2,
  ds_name  IN VARCHAR2
  )
IS
  del_objIds rqNumericSet;
BEGIN
  del_objIds := rq$DropDataStoreImpl(ds_owner, ds_name);
  IF del_objIds IS NULL THEN
    raise_application_error(-20101, 'DataStore ' ||
                            ds_name || ' does not exist');
  END IF;

  -- remove from rq$datastoreinventory
  BEGIN
    execute immediate
       'delete from RQ$DATASTOREINVENTORY c where c.objID IN (' ||
       'select column_value from table(:del_objIds))' using del_objIds;
     COMMIT;
  EXCEPTION WHEN others THEN null;
  END;
END;
/

As the DBA, logged into the RQSYS schema, we can now delete any ORE Data store in the database using the following.

BEGIN
   my_ORE_Datastore_Drop('ORE_USER', 'ORE_FOR_DELETION');
END;
/

Thursday, July 30, 2015

Check out What Sauron is saying about Oracle

Over the past year we have (hopefully) been hearing about Oracle Big Data SQL.

This is a new(-ish) option from Oracle that allows us to run our SQL queries not just on the data in our Oracle Database but also against NoSQL databases and Hadoop. No extra coding is needed, no extra formatting is needed, etc.

All the hard work of connecting to the data in these systems, translating the query into code that can execute on them, running it, and capturing the results is done for us, with the results presented back to us in our schema in our Oracle Database.

How cool is that.

To learn more about Oracle Big Data SQL check out their webpage.

But let us get back to the title of this blog post, 'What Sauron is saying about Oracle'. I used these images at one of my presentations at the BIWA Summit in January 2015 and I've been meaning to post them ever since.

If you have read the books or watched the movies you will remember the phrase.

NewImage

We can apply this phrase to Oracle SQL now.

NewImage

Or maybe my alternative version might be better.

NewImage

Tuesday, July 28, 2015

Charting Number of R Packages over time (Part 3)

This is the third and final blog post on analysing the number of new R packages that have been submitted over time.

Check out the previous blog posts:

In this blog post I will show you how you can perform Forecasting on our data to make some predictions about the possible number of new packages over the next 12 months.

There are 2 important points to note here:

  1. Only time will tell if these predictions are correct or nearly correct. Just like with any other prediction technique.
  2. You cannot use just one of the Forecasting techniques in isolation to make a prediction. You need to use a number of functions/algorithms to see which one suits your data best.

The second point above is very important for all prediction techniques. Sometimes you see people/articles talking about using only algorithm X, without having considered any of the other techniques/algorithms. It may be their favourite or preferred method, but that does not mean it works or is suitable for all data sets and all scenarios.

In this blog post I'm going to use 3 different forecasting functions: the in-built forecast function in R, HoltWinters, and finally ARIMA. Yes, there are many more (it is R after all) and I'll leave those for you to explore.

1. Convert data set to Time Series data format

The first thing I need to do is convert the data I want to analyze into Time Series format (ts). This expects one record or instance for each data point.

So you cannot have any missing data, or in my case any missing dates. Yes (for my data set) we could have some months with no submissions. What I could do is work out mean values (or things like that) and fill in the missing months. But I'm feeling a bit lazy, and after examining the data I see that we have a continuous set of data from August 2009 onwards. This is fine as most of the growth up to that point is flat.
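For completeness, here is a minimal sketch of how you might fill in missing months if you had to (an approach I'm not using here; the date range is an assumption and data.sum comes from the previous post):

# Build a complete sequence of months covering the data
all_months <- data.frame(Group.date = seq(as.Date("2005-01-01"),
                                          as.Date("2015-06-01"), by = "month"))

# Left join so that months with no submissions appear as NA
filled <- merge(all_months, data.sum, by = "Group.date", all.x = TRUE)

# Replace the NAs with zero (or a mean/interpolated value)
filled$R_NUM[is.na(filled$R_NUM)] <- 0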

So I need to subset the data to only include cases greater than or equal to August 2009 and less than or equal to June 2015. I wanted to exclude July 2015 as the number for that month is incomplete.

The following code builds on the work we did in the second blog post in this series.

library(forecast)
library(ggplot2)

# Subset the data
sub_data <- subset(data.sum, Group.date >= as.Date("2009-08-01", "%Y-%m-%d"))
sub_data <- subset(sub_data, Group.date <= as.Date("2015-06-01", "%Y-%m-%d"))

# Subset again to only take out the data we want to use in the time series
sub_data2 <- sub_data[,c("R_NUM")]

# Create the time series data, stating that it is monthly (12) and giving the start and end dates
ts_data <- ts(sub_data2, frequency=12, start=c(2009, 8), end=c(2015, 6))

# View the time series data
ts_data
     Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2009                               2   3   4   3   4
2010   5   5   1  11   5   2   5   4   4   5   1   3
2011  11   4   3   6   6   5   9  15   5   8  23  18
2012  33  17  51  28  37  33  50  71  41 231  51  60
2013  75  67  76  81  76  74  77  89 111  96 111 200
2014 155 129 175 140 145 133 155 207 232 162 229 310
2015 308 343 332 378 418 558    

We now have the data prepared for input to our Forecasting functions in R.

2. Using Forecast in R

For the forecast function all you need to do is pass in the Time Series data set and tell the function how many steps into the future you want it to predict. In all my examples I'll ask the functions to predict for the next 12 months.

ts_forecast <- forecast(ts_data, h=12)
ts_forecast

         Point Forecast     Lo 80     Hi 80 Lo 95     Hi 95
Jul 2015       447.1965  95.67066  784.3163     0  958.6669
Aug 2015       499.4344 115.94329  873.1822     0 1069.8689
Sep 2015       551.7875 123.88426  952.0773     0 1230.4707
Oct 2015       603.7212 156.89486 1078.0395     0 1370.2069
Nov 2015       654.7647 143.29718 1179.4903     0 1603.8335
Dec 2015       704.5162 135.76829 1352.8925     0 1844.7230
Jan 2016       752.6447 151.09936 1502.6088     0 2100.9708
Feb 2016       798.8877 156.37383 1652.0847     0 2575.3715
Mar 2016       843.0474 159.67095 1848.1703     0 2888.1738
Apr 2016       884.9849 154.59456 2061.7990     0 3281.6062
May 2016       924.6136 148.04651 2325.9060     0 3891.5064
Jun 2016       961.8922 138.67935 2531.7578     0 4395.6033

plot(ts_forecast)

NewImage

For this we get a very large range of values and very wide prediction intervals. If we limit the y axis we can get a better picture of the actual predictions.

plot(ts_forecast, ylim=range(0:1000))
NewImage

3. Using HoltWinters

For HoltWinters we can use the in-built R function. All we need to do is pass in the Time Series data set. First we can plot the HoltWinters fit for the existing data set.

?HoltWinters
hw <- HoltWinters(ts_data)
plot(hw)
NewImage

Now we want to predict for the next 12 months.

forecast <- predict(hw, n.ahead = 12, prediction.interval = T, level = 0.95)
forecast

              fit       upr      lwr
Jul 2015 519.9304  599.8097 440.0512
Aug 2015 560.1083  648.4183 471.7983
Sep 2015 601.4528  701.0163 501.8892
Oct 2015 643.9639  757.3750 530.5529
Nov 2015 681.5168  811.0727 551.9608
Dec 2015 724.7363  872.4508 577.0218
Jan 2016 773.8308  941.4768 606.1848
Feb 2016 809.8836  999.0401 620.7272
Mar 2016 847.1448 1059.2371 635.0525
Apr 2016 898.4476 1134.7795 662.1158
May 2016 933.8755 1195.6532 672.0977
Jun 2016 972.3866 1260.7376 684.0356

plot(hw, forecast)
NewImage

4. Using ARIMA

For ARIMA we first fit an ARIMA model to the Time Series data using auto.arima, and then perform the forecast on that model.

fc_arima <- auto.arima(ts_data)
fc_fc_arima  <- forecast(fc_arima, h=12)
fc_fc_arima

         Point Forecast    Lo 80     Hi 80    Lo 95     Hi 95
Jul 2015       524.4758 476.2203  572.7314 450.6753  598.2764
Aug 2015       567.1156 513.2301  621.0012 484.7048  649.5265
Sep 2015       609.7554 548.3239  671.1869 515.8041  703.7068
Oct 2015       652.3952 581.6843  723.1062 544.2522  760.5383
Nov 2015       695.0350 613.5293  776.5408 570.3828  819.6873
Dec 2015       737.6748 644.0577  831.2920 594.4998  880.8499
Jan 2016       780.3147 673.4319  887.1974 616.8516  943.7777
Feb 2016       822.9545 701.7797  944.1292 637.6337 1008.2752
Mar 2016       865.5943 729.2004 1001.9881 656.9978 1074.1907
Apr 2016       908.2341 755.7718 1060.6963 675.0631 1141.4050
May 2016       950.8739 781.5559 1120.1918 691.9244 1209.8233
Jun 2016       993.5137 806.6027 1180.4246 707.6581 1279.3693

plot(fc_fc_arima, ylim=range(0:800))
NewImage

As you can see, there are very different results from each of these forecasting techniques. If this was a real-life project on real data then we would go on to explore a lot more of the forecasting functions available in R, in order to identify which R function and forecasting algorithm works best for our data.
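One way to compare the models numerically, rather than just visually, is the accuracy function in the forecast package, which reports training-set error measures such as RMSE and MAE. A minimal sketch, assuming the ts_forecast, fc_fc_arima and hw objects created above:

# In-sample error measures; lower RMSE/MAE suggests a better fit to
# the historic data (though not a guarantee of better forecasts)
accuracy(ts_forecast)
accuracy(fc_fc_arima)

# HoltWinters does not produce a forecast object, so compute its RMSE directly
hw_resid <- residuals(hw)
sqrt(mean(hw_resid^2))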

Which Forecasting technique would you choose from the selection above?

But will this function and algorithm always work with our data? The answer is NO. As our data evolves, so may the algorithm that works best for it. This is why the data science/analytics world is iterative: we need to recheck/revalidate the functions/algorithms to see whether we need to start using something else. When we do need to use another function/algorithm, you should ask yourself why this has happened, what has changed in the data, what has changed in the business, etc.

Wednesday, July 22, 2015

Charting Number of R Packages over time (Part 2)

This is the second blog post on charting the number of new R Packages over time.

Check out the first blog post that looked at getting the data, performing some simple graphing and then researching some issues that were identified using the graph.

In this blog post I will look at how you can aggregate the data, plot it, get a regression line, then plot it using ggplot2 and we will include a trend line using the geom_smooth.

1. Prepare the data

In my previous post we extracted and aggregated the data on a daily basis, and plotted it. That gives us a very low level graph, and perhaps we might get something a little more useable if we aggregated the data further. I have the data in an Oracle Database, so it would be easy for me to write another query to perform the necessary aggregation. But let's make things a little bit trickier: I'm going to use R to do the aggregation.

Our data set is in the data frame called data. What I want to do is aggregate it up to monthly level. The first thing I did was create a new column that contains the values of the new aggregate level.

# Create a new column that maps each date to the first day of its month
data$R_MONTH <- format(rdate2, "%Y%m01")
data$R_MONTH <- as.Date(data$R_MONTH, "%Y%m%d")

# Aggregate the daily counts up to monthly level
data.sum <- aggregate(x = data[c("R_NUM")],
                      FUN = sum,
                      by = list(Group.date = data$R_MONTH))

2. Plot the Data

We now have the data aggregated at monthly level and can now plot the graph. Ignore the last data point on the chart: this is for July 2015 and I extracted the data on the 9th of July, so we do not have a full month of data there.

plot(as.Date(data.sum$Group.date), data.sum$R_NUM, type="b", xaxt="n", cex=0.75 , ylab="Num New Packages", main="Number of New Packages by Month")
axis(1, as.Date(data.sum$Group.date, "%Y-%d"), as.Date(data.sum$Group.date, "%Y-%d"), cex.axis=0.5, las=1)

This gives us the following graph.

NewImage

3. Plot the data using ggplot2

The basic plot function of R is great and allows us to quickly and easily produce some good graphs. But it is a bit limited, and perhaps we want to create something a bit more elaborate. ggplot2 is a very popular package that lets us build a graph up in a number of steps and layers to give something that is a lot more professional.

In the following example I've kept things simple, and yes, I could have done so much more. I'll leave that as an exercise for you to go off and do.

The first step is to use the qplot function to produce a basic plot using ggplot2. This gives us something similar to what we got from the plot function.

library(ggplot2)
qplot(x=factor(data.sum$Group.date), y=data.sum$R_NUM, data=data.sum, 
       xlab="Year/Month", ylab='Num of New Packages', asp=0.5)

This gives us the following graph.

NewImage

Now if we use the full ggplot2 interface we need to specify a lot more information. Here is the equivalent plot using ggplot2 (with a line plot).
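A minimal sketch of what that equivalent ggplot call might look like (using the same data.sum data frame as before):

library(ggplot2)

# The same monthly counts as a line plot, built with the full ggplot() interface
plt <- ggplot(data.sum, aes(x=factor(data.sum$Group.date), y=data.sum$R_NUM)) +
  geom_line(aes(group=1)) +
  xlab("Year / Month") + ylab("Num of New Packages")
plt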

NewImage

4. Include a trend line

We can very easily include a trend line in a ggplot2 graph using the geom_smooth function. In the following example we have the same chart and include a linear regression line.

plt <- ggplot(data.sum, aes(x=factor(data.sum$Group.date), y=data.sum$R_NUM)) + geom_line(aes(group=1)) +
  theme(text = element_text(size=7),
        axis.text.x = element_text(angle=90, vjust=1)) + xlab("Year / Month") + ylab("Num of New Packages") +
  geom_smooth(method='lm', se=TRUE, size = 0.75, fullrange=TRUE, aes(group=20))
plt

NewImage

We can tell a lot from this regression plot.

But perhaps we would like to see a smoother trend line on the chart, something like a moving-averages plot (here I've used a loess smoother). Plus I've added in a bit of scaling to help with representing the data at monthly level.

library(scales)
plt <- ggplot(data.sum, aes(x=as.POSIXct(data.sum$Group.date), y=data.sum$R_NUM)) + geom_line() + geom_point() +
  theme(text = element_text(size=12),
        axis.text.x = element_text(angle=90, vjust=1)) + xlab("Year / Month") + ylab("Num of New Packages") +
  geom_smooth(method='loess', se=TRUE, size = 0.75, fullrange=TRUE) +
  scale_x_datetime(breaks = date_breaks("months"), labels = date_format("%b"))
plt
NewImage

In the third blog post on this topic I will look at how we can use some of the forecasting and prediction functions available in R. We can use these to help us visualize what the future growth patterns for this data might be. I have some interesting things to show.

Monday, July 20, 2015

the R Consortium

On the 30th June 2015 a number of companies came together to form the R Consortium. The aim of the R Consortium is to support the R community and to help it evolve.

NewImage

In a way the formation of this group is not surprising, as there is a growing list of companies who have their own supported distributions of R that provide a number of additional and very important features. Most of these features revolve around making R more useable within applications and easier to deploy to production.

Founding companies and organizations of the R Consortium include The R Foundation, Platinum members Microsoft and RStudio; Gold member TIBCO Software Inc.; and Silver members Alteryx, Google, HP, Mango Solutions, Ketchum Trading and Oracle.

It is important to note that this group won't be looking at the R language itself, but at the infrastructure that supports R.

The big players are now on board and promoting R, and it will be interesting to see what directions they come up with.

This is something worth keeping an eye on.

Friday, July 17, 2015

UKOUG Tech 15 : Acceptances & Agenda

Just over a week ago the acceptance emails started to go out for the UKOUG TECH15 and APPS15 conferences. At the same time the rejection emails started to go out too :-(

I was on the receiving end of both of these types of emails.

But I was delighted with the news that I received.

My topic areas cross the TECH and APPS topics and also the Business Analytics stream, which looks to bridge this divide in the world of Business Intelligence.

So in December I will be giving the following presentations.

Sunday 6th : Super Sunday event : Business Analytics Stream

12:30 - 14:30 : (2 hours) : Predictive Analytics in Oracle: Mining the Gold & Creating Valuable Products


I'll be giving away a copy of my book on Oracle Data Mining to one lucky attendee at this Super Sunday session.


Monday 7th : 9:00-9:50 : Analytics Stream

Is Oracle the Best Language for Statistics


Wednesday 9th : 14:30-15:20 : Big Data & Data Warehousing Stream (TECH 15 & APPS15)

Automating Predictive Analytics in Your Applications


Check out the full agendas for TECH15 and APPS15 by clicking on the appropriate image below.

NewImage

NewImage

NewImage


Go Register for these events now!

Wednesday, July 15, 2015

Charting Number of R Packages over time (Part 1)

This is the first of a three part blog post on charting and analysing the number of R package submissions.

(I will update this blog post with links to the other two posts as they become available)

I'm sure most of you have heard of the R programming language. If not, then perhaps it is something you might want to go off and learn a bit about. Why? Well, it is one of the most popular languages for performing various types of statistics, advanced statistical techniques and machine learning, and for generating lots of cool looking graphs.

If this is not something that interests you, then it is time to move on to another website/blog.

In this blog post I'm going to chart the number of packages that have been submitted to CRAN (the R package repository) and are available for download and installation.

Why am I doing this? I got bored one day after coming back from my vacation and I thought it would be a useful thing to do. Then after doing this I decided to use these graphs somewhere else, but you will have to wait until 2016 to find out!

The R website has a listing of all the packages and the dates they were submitted.

NewImage

There are a variety of tools available that you can use to extract the information on this webpage, and there are lots of examples of R code too. I'll mostly leave that as a little exercise for you to do.
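As one hedged example, the rvest package can pull the table straight from CRAN's packages-by-date page (the URL and column names are assumptions based on the current page layout):

library(rvest)

# CRAN lists all packages with their publication dates on this page
url <- "https://cran.r-project.org/web/packages/available_packages_by_date.html"
page <- read_html(url)

# The page contains a single table: Date, Package, Title
pkgs <- html_table(html_node(page, "table"))
head(pkgs)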

I extracted all of this information and stored it in a table in my Oracle Database (of course I did, as I work with Oracle Databases day in, day out). This allows me to easily reuse the data whenever I need it, plus I can update the table with new packages from time to time.
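A quick sketch of how that might be done with ROracle, assuming the pkgs data frame from the sketch above (the table name and connection details are hypothetical):

library(ROracle)

drv <- dbDriver("Oracle")
# Hypothetical connection details; adjust for your own environment
con <- dbConnect(drv, username = "brendan", password = "brendan",
                 dbname = "pdb12c")

# Write the scraped data frame to a table in my schema
dbWriteTable(con, "R_PACKAGES", pkgs, overwrite = TRUE)
dbDisconnect(con)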

NewImage

The following R code:

  1. Sets up an ROracle connection to my schema in my database
  2. Connects to the database
  3. Sets up a query to extract the data from the table
  4. Fetches this data into an R data frame called data
  5. Reformats the date column to remove the time element from it
  6. Plots the data

library(ROracle)

drv <- dbDriver("Oracle")
# Create the connection string
host <- "localhost"
port <- 1521
service <- "pdb12c"
connect.string <- paste(
  "(DESCRIPTION=",
  "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
  "(CONNECT_DATA=(SERVICE_NAME=", service, ")))", sep = "")

con <- dbConnect(drv, username = "brendan", password = "brendan",dbname=connect.string)

# Query the daily counts of new package submissions
res <- dbSendQuery(con, "select r_date, count(*) r_num from r_packages
                         group by r_date order by 1 asc")
data <- fetch(res)

# Strip the time element from the date column
rdate <- data$R_DATE
rdate2 <- as.Date(rdate, "%d/%m/%y")

# Plot the daily number of new packages
plot(data$R_NUM~rdate2, data, type="l", xaxt="n")
axis(1, rdate2, format(rdate2, "%b %y"), cex.axis=.7, las=1)

After I run the above code I get the following plot.

NewImage

(Yes, I could have done a better job of laying out the chart, with all sorts of labels and colors etc.)

This chart gives us a plot of the number of new submissions by day.

There are 2 very obvious things that stand out from this graph. The easier one to deal with is that there has been substantial growth in new submissions over the past 3 years. Perhaps we need to examine these a bit closer, and when you do you will find that a lot of them are existing packages that have been resubmitted with updates.

There is a very obvious peak just over halfway along the chart. We really need to investigate this to understand what happened, as it is clearly an anomaly compared with the rest of the data. The peak occurs on the 29th October 2012: on this date R version 2.15.2 was released and a lot of updated packages were resubmitted.
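If you want to locate that peak directly rather than reading it off the chart, a one-liner against the data frame we fetched earlier does it:

# Find the day with the largest number of new submissions
data[which.max(data$R_NUM), ]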

Check out my next two blog posts where I will explore this data in a bit more detail.

Part 2 blog post

Part 3 blog post

Monday, July 13, 2015

V506 of Oracle OBIEE SampleApp Virtual Machine

A few days ago Oracle released the latest version of the Virtual Machine for the OBIEE SampleApp. This version has a number of new features and new product versions (see below).

To get the latest version, go to the following link to download the VM files and install them. As always this is a beast of a VM, and you should only consider the install and setup if you have the disk space and, in particular, 16GB of RAM.

Oracle Business Intelligence Enterprise Edition Samples on OTN.

NewImage

v506 New Features

  • DB 12.1.0.2 with the In-Memory option
  • Load and process JSON data
  • Integrates with Big Data SQL
  • Connects to Impala
  • Session tracking in UT
  • Exalytics Aggregation Functions
  • Lots of new Visualizations
  • Custom Style Features
  • Hierarchical Session Variables
  • etc.

Software on V506

  • Oracle Enterprise Linux 6.5 x64
  • OBIEE 11.1.1.9 GA (two distinct OBIEE instances), Essbase 11.1.2.4, updated BIMAD
  • Oracle MapViewer 11.1.1.9.1
  • Oracle BICS Data Sync v1
  • Oracle Database 12c IMDB 12.1.0.2, PDB Install, AWM 12.1.0.2a, APEX 4.2.6 & ORDS 2.0.1, ODM, Oracle Spatial and Graph
  • ORE 1.4.1 & R 3.1.1
  • ENDECA 3.1, Server 7.6.1, Studio 3.1, Provisioning Services
  • Cloudera CDH 5.1.2, Oracle BigData SQL, Oracle BigData Connectors
  • Plug and Play Companions : EPM 11.1.2.3, BIApps Demos
  • Utils: Start scripts, MapBuilder, SQLDev 4.1
NewImage