Saturday, December 12, 2015
KScope 2016 Acceptances
I've never been to KScope. Yes never.
I've always wanted to. Each year you hear all these stories about how much people enjoy KScope and how much they learn.
So back in October I decided to submit 5 presentations to KScope. 4 of these were solo presentations and 1 was a joint presentation.
This week I have received the happy news that 2 of my solo presentations have been accepted, plus my joint presentation with Kim Berg Hansen.
So at the end of June 2016 I will be making my way to Chicago for a week of Oracle geeky fun at KScope.
My presentations will be:
- Is Oracle SQL the best language for Statistics?
- Running R in your Oracle Database using Oracle R Enterprise
and my joint presentation is called
Forecasting in Oracle using the Power of SQL (this will talk about ROracle, Forecasting in R, Using Oracle R Enterprise and SQL)
I was really hoping that one of my rejected presentations would have been accepted. I really enjoy this presentation and I get to share stories about some of my predictive analytics projects. Ah well, maybe in 2017.
The last time I was in Chicago was over 15 years ago, when I spent 5 days at Cellular One (the brand was sold to Trilogy Partners by AT&T in 2008, shortly after AT&T had completed its acquisition of Dobson Communications). I was there to kick off a project to build them a data warehouse and to build their first customer churn predictive model. I stayed in a hotel across the road from their office which was famous because a certain person had stayed in it while on the run. Unfortunately I didn't get time to visit downtown Chicago.
Tuesday, July 28, 2015
Charting Number of R Packages over time (Part 3)
This is the third and final blog post on analysing the number of new R packages that have been submitted over time.
Check out the previous blog posts:
- Blog Post 1 : Getting the data, some basic analysis and a graph.
- Blog Post 2 : Aggregating the data and creating a number of more useful graphs.
In this blog post I will show you how you can perform Forecasting on our data to make some predictions on the possible number of new packages over the next 12 months.
There are 2 important points to note here:
- Only time will tell if these predictions are correct or nearly correct. Just like with any other prediction techniques.
- You cannot use just one of the Forecasting techniques in isolation to make a prediction. You need to use a number of functions/algorithms to see which one suits your data best.
The second point above is very important with all prediction techniques. Sometimes you see people/articles talking about only using algorithm X, without considering any of the other techniques/algorithms; it is simply their favourite or preferred method. But that does not mean it works best, or is even suitable, for all data sets and all scenarios.
In this blog post I'm going to use 3 different forecasting functions: the in-built forecast function in R, HoltWinters, and finally ARIMA. Yes there are many more (it is R after all) and I'll leave these for you to explore.
1. Convert data set to Time Series data format
The first thing I need to do is to convert the data I want to analyze into Time Series format (ts). This format expects one record or instance for each data point.
So you cannot have any missing data, or in my case any missing dates. Yes (for my data set) we could have some months where we do not have any submissions. What I could do is work out mean values (or something like that) and fill in the missing months, as sketched below. But I'm feeling a bit lazy, and after examining the data I see that we have a continuous set of data from August 2009 onwards. This is fine as most of the growth up to that point is flat.
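If you did want to fill in missing months, one rough approach (a sketch only, not needed for this data set; the all_months and filled names are my own) is to build a complete monthly sequence, join our aggregated data onto it, and replace the resulting NAs with the overall monthly mean.

# Build the full monthly sequence and outer join the aggregated data onto it
all_months <- data.frame(Group.date = seq(as.Date("2009-08-01"),
                                          as.Date("2015-06-01"), by = "month"))
filled <- merge(all_months, data.sum, by = "Group.date", all.x = TRUE)

# Replace any missing months with the overall monthly mean
filled$R_NUM[is.na(filled$R_NUM)] <- round(mean(filled$R_NUM, na.rm = TRUE))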
So I need to subset the data to only include cases greater than or equal to August 2009 and less than or equal to June 2015. I wanted to exclude July 2015 as the number for this month is incomplete.
The following code builds on the work we did in the second blog post in this series.
library(forecast)
library(ggplot2)

# Subset the data
sub_data <- subset(data.sum, Group.date >= as.Date("2009-08-01", "%Y-%m-%d"))
sub_data <- subset(sub_data, Group.date <= as.Date("2015-06-01", "%Y-%m-%d"))

# Subset again to only take out the data we want to use in the time series
sub_data2 <- sub_data[,c("R_NUM")]

# Create the time series data, stating that it is monthly (12) and giving the start and end dates
ts_data <- ts(sub_data2, frequency=12, start=c(2009, 8), end=c(2015, 6))

# View the time series data
ts_data

     Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
2009                               2   3   4   3   4
2010   5   5   1  11   5   2   5   4   4   5   1   3
2011  11   4   3   6   6   5   9  15   5   8  23  18
2012  33  17  51  28  37  33  50  71  41 231  51  60
2013  75  67  76  81  76  74  77  89 111  96 111 200
2014 155 129 175 140 145 133 155 207 232 162 229 310
2015 308 343 332 378 418 558
We now have the data prepared for input to our Forecasting functions in R.
2. Using Forecast in R
For the Forecast function all you need to do is pass in the Time Series data set and tell the function how many steps into the future you want it to predict. In all my examples I'll ask the functions to predict for the next 12 months.
ts_forecast <- forecast(ts_data, h=12)
ts_forecast

         Point Forecast     Lo 80     Hi 80 Lo 95     Hi 95
Jul 2015       447.1965  95.67066  784.3163     0  958.6669
Aug 2015       499.4344 115.94329  873.1822     0 1069.8689
Sep 2015       551.7875 123.88426  952.0773     0 1230.4707
Oct 2015       603.7212 156.89486 1078.0395     0 1370.2069
Nov 2015       654.7647 143.29718 1179.4903     0 1603.8335
Dec 2015       704.5162 135.76829 1352.8925     0 1844.7230
Jan 2016       752.6447 151.09936 1502.6088     0 2100.9708
Feb 2016       798.8877 156.37383 1652.0847     0 2575.3715
Mar 2016       843.0474 159.67095 1848.1703     0 2888.1738
Apr 2016       884.9849 154.59456 2061.7990     0 3281.6062
May 2016       924.6136 148.04651 2325.9060     0 3891.5064
Jun 2016       961.8922 138.67935 2531.7578     0 4395.6033

plot(ts_forecast)
For this we get a very large range of values with very wide prediction intervals. If we limit the y-axis we can get a better picture of the actual predictions.
plot(ts_forecast, ylim=range(0:1000))
3. Using HoltWinters
For HoltWinters we can use the in-built R function. All we need to do is pass in the Time Series data set. First we can plot the HoltWinters fit for the existing data set.
?HoltWinters
hw <- HoltWinters(ts_data)
plot(hw)
Now we want to predict for the next 12 months
forecast <- predict(hw, n.ahead = 12, prediction.interval = T, level = 0.95)
forecast

              fit       upr      lwr
Jul 2015 519.9304  599.8097 440.0512
Aug 2015 560.1083  648.4183 471.7983
Sep 2015 601.4528  701.0163 501.8892
Oct 2015 643.9639  757.3750 530.5529
Nov 2015 681.5168  811.0727 551.9608
Dec 2015 724.7363  872.4508 577.0218
Jan 2016 773.8308  941.4768 606.1848
Feb 2016 809.8836  999.0401 620.7272
Mar 2016 847.1448 1059.2371 635.0525
Apr 2016 898.4476 1134.7795 662.1158
May 2016 933.8755 1195.6532 672.0977
Jun 2016 972.3866 1260.7376 684.0356

plot(hw, forecast)
4. Using ARIMA
For ARIMA we need to fit an ARIMA model to the Time Series data (auto.arima selects the model for us) and then perform the forecast.
fc_arima <- auto.arima(ts_data)
fc_fc_arima <- forecast(fc_arima, h=12)
fc_fc_arima

         Point Forecast    Lo 80     Hi 80    Lo 95     Hi 95
Jul 2015       524.4758 476.2203  572.7314 450.6753  598.2764
Aug 2015       567.1156 513.2301  621.0012 484.7048  649.5265
Sep 2015       609.7554 548.3239  671.1869 515.8041  703.7068
Oct 2015       652.3952 581.6843  723.1062 544.2522  760.5383
Nov 2015       695.0350 613.5293  776.5408 570.3828  819.6873
Dec 2015       737.6748 644.0577  831.2920 594.4998  880.8499
Jan 2016       780.3147 673.4319  887.1974 616.8516  943.7777
Feb 2016       822.9545 701.7797  944.1292 637.6337 1008.2752
Mar 2016       865.5943 729.2004 1001.9881 656.9978 1074.1907
Apr 2016       908.2341 755.7718 1060.6963 675.0631 1141.4050
May 2016       950.8739 781.5559 1120.1918 691.9244 1209.8233
Jun 2016       993.5137 806.6027 1180.4246 707.6581 1279.3693

plot(fc_fc_arima, ylim=range(0:800))
As you can see there are very different results from each of these forecasting techniques. If this was a real life project on real data then we would go about exploring a lot more of the Forecasting functions available in R. The reason for this is to identify which R function and Forecasting algorithm works best for our data.
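One way to start comparing them is the accuracy() function from the forecast package, which reports in-sample error measures (ME, RMSE, MAE, MAPE, etc.) for each fitted model. This is only a rough sketch using the objects created in the sections above; on a real project you would also check the models against a hold-out sample.

library(forecast)
accuracy(ts_forecast)            # the forecast() result from section 2
accuracy(forecast(hw, h = 12))   # the HoltWinters fit from section 3
accuracy(fc_fc_arima)            # the auto.arima fit from section 4

The model with the lowest errors on these measures is a good starting candidate, but not an automatic winner.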
Which Forecasting technique would you choose from the selection above?
But will this function and algorithm always work with our data? The answer is NO. As our data evolves, so may the algorithm that works best for our data. This is why the data science/analytics world is iterative. We need to recheck/revalidate the functions/algorithms to see if we need to start using something else or not. When we do need to use another function/algorithm, we need to ask ourselves why this has happened: what has changed in the data, what has changed in the business, etc.
Wednesday, July 22, 2015
Charting Number of R Packages over time (Part 2)
This is the second blog post on charting the number of new R Packages over time.
Check out the first blog post that looked at getting the data, performing some simple graphing and then researching some issues that were identified using the graph.
In this blog post I will look at how you can aggregate the data, plot it, get a regression line, then plot it using ggplot2 and we will include a trend line using the geom_smooth.
1. Prepare the data
In my previous post we extracted and aggregated the data on a daily basis. This is the plot that was shown in my previous post. This gives us a very low level graph and perhaps we might get something a little bit more usable if we aggregated the data. I have the data in an Oracle Database, so it would be easy for me to write another query to perform the necessary aggregation. But let's make things a little bit trickier. I'm going to use R to do the aggregation.
Our data set is in the data frame called data. What I want to do is to aggregate it up to monthly level. The first thing I did was to create a new column that contains the values of the new aggregate level.
# Create a new column holding the first day of each month, then convert it to a Date
data$R_MONTH <- format(rdate2, "%Y%m01")
data$R_MONTH <- as.Date(data$R_MONTH, "%Y%m%d")

# Aggregate the daily counts up to monthly level
data.sum <- aggregate(x = data[c("R_NUM")],
                      FUN = sum,
                      by = list(Group.date = data$R_MONTH))
2. Plot the Data
We now have the data aggregated at monthly level. We can now plot the graph. Ignore the last data point on the chart. This is for July 2015 and I extracted the data on the 9th of July, so we do not have a full month of data here.
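If you wanted to drop that incomplete point rather than just ignore it, a one-line sketch would do it (the data.plot name is my own):

# Keep only the complete months, up to and including June 2015
data.plot <- subset(data.sum, Group.date < as.Date("2015-07-01"))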
plot(as.Date(data.sum$Group.date), data.sum$R_NUM, type="b", xaxt="n", cex=0.75,
     ylab="Num New Packages", main="Number of New Packages by Month")
axis(1, as.Date(data.sum$Group.date, "%Y-%d"), as.Date(data.sum$Group.date, "%Y-%d"),
     cex.axis=0.5, las=1)
This gives us the following graph.
3. Plot the data using ggplot2
The basic plot function of R is great and allows us to quickly and easily get some good graphs produced. But it is a bit limited and perhaps we want to create something that is a bit more elaborate. ggplot2 is a very popular package that can allow us to create a graph, building it up in a number of steps and layers to give something that is a lot more professional.
In the following example I've kept things simple and yes, I could have done so much more. I'll leave that as an exercise for you to go off and do.
The first step is to use the qplot function to produce a basic plot using ggplot2. This gives us something similar to what we got from the plot function.
library(ggplot2)
qplot(x=factor(data.sum$Group.date), y=data.sum$R_NUM, data=data.sum,
      xlab="Year/Month", ylab='Num of New Packages', asp=0.5)
This gives us the following graph.
Now if we use ggplot2 directly then we need to specify a lot more information. Here is the equivalent plot using ggplot2 (with a line plot).
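The code for that plot isn't shown here, but a sketch of what it could look like is the trend line example from the next section, minus the geom_smooth layer:

plt <- ggplot(data.sum, aes(x=factor(data.sum$Group.date), y=data.sum$R_NUM)) +
       geom_line(aes(group=1)) +
       theme(text = element_text(size=7),
             axis.text.x = element_text(angle=90, vjust=1)) +
       xlab("Year / Month") +
       ylab("Num of New Packages")
plt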
4. Include a trend line
We can very easily include a trend line in a ggplot2 graph using the geom_smooth command. In the following example we have the same chart and include a linear regression line.
plt <- ggplot(data.sum, aes(x=factor(data.sum$Group.date), y=data.sum$R_NUM)) +
       geom_line(aes(group=1)) +
       theme(text = element_text(size=7),
             axis.text.x = element_text(angle=90, vjust=1)) +
       xlab("Year / Month") +
       ylab("Num of New Packages") +
       geom_smooth(method='lm', se=TRUE, size = 0.75, fullrange=TRUE, aes(group=20))
plt
We can tell a lot from this regression plot.
But perhaps we would like to see a trend line on the chart, with something like a moving averages plot. Plus I've added in a bit of scaling to help with representing the data at a monthly level.
library(scales)
plt <- ggplot(data.sum, aes(x=as.POSIXct(data.sum$Group.date), y=data.sum$R_NUM)) +
       geom_line() +
       geom_point() +
       theme(text = element_text(size=12),
             axis.text.x = element_text(angle=90, vjust=1)) +
       xlab("Year / Month") +
       ylab("Num of New Packages") +
       geom_smooth(method='loess', se=TRUE, size = 0.75, fullrange=TRUE) +
       scale_x_datetime(breaks = date_breaks("months"), labels = date_format("%b"))
plt
In the third blog post on this topic I will look at how we can use some of the forecasting and prediction functions available in R. We can use these to help us visualize what the future growth patterns might be for this data. I have some interesting things to show.
Wednesday, July 15, 2015
Charting Number of R Packages over time (Part 1)
This is the first of a three part blog post on charting and analysing the number of R package submissions.
(I will update this blog post with links to the other two posts as they come available)
I'm sure most of you have heard of the R programming language. If not then perhaps it is something that you might want to go off and learn a bit about. Why? Well, it is one of the most popular languages for performing various types of statistics, advanced statistical topics and machine learning, and for generating lots of cool looking graphs.
If this is not something you might be interested in, then it is time to go to another website/blog.
In this blog post I'm going to chart the number of packages that have been submitted to R and are available for download and installation.
Why am I doing this? I got bored one day after coming back from my vacation and I thought it would be a useful thing to do. Then after doing this I decided to use these graphs somewhere else, but you will have to wait until 2016 to find out!
The R website has a listing of all the packages and the dates that they were submitted.
There are a variety of tools available that you can use to extract the information on this webpage, and there are lots of examples of R code too. I'll leave that as a little exercise for you to do.
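As a sketch of one possible approach (the rvest package is just one option, and the exact table layout on the CRAN page is an assumption on my part):

# Read the CRAN 'available packages by date' page and pull out its table
library(rvest)
page <- read_html("https://cran.r-project.org/web/packages/available_packages_by_date.html")
pkgs <- html_table(page)[[1]]   # columns along the lines of Date, Package, Title
head(pkgs)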
I extracted all of this information and stored it in a table in my Oracle Database (of course I did as I work with Oracle databases day in day out). This will allow me to easily reuse this data whenever I need it plus I can update this table with new packages from time to time.
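Something like the following would do the loading (a sketch, assuming the pkgs data frame from the sketch above and the ROracle connection con that is created in the code below, writing into the r_packages table used in the query):

# Write the scraped data frame to a table in the Oracle schema
dbWriteTable(con, "R_PACKAGES", pkgs, overwrite = TRUE)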
The following R code:
- Sets up an ROracle connection to my schema in my database
- Connects to the database
- Sets up a query to extract the data from the table
- Fetches this data into an R data frame called data
- Reformats the date column to remove the time element
- Plots the data
library(ROracle)
drv <- dbDriver("Oracle")

# Create the connection string
host <- "localhost"
port <- 1521
service <- "pdb12c"
connect.string <- paste(
    "(DESCRIPTION=",
    "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
    "(CONNECT_DATA=(SERVICE_NAME=", service, ")))", sep = "")

con <- dbConnect(drv, username = "brendan", password = "brendan", dbname = connect.string)

# Query the number of new packages per day
res <- dbSendQuery(con, "select r_date, count(*) r_num
                           from r_packages
                          group by r_date
                          order by 1 asc")
data <- fetch(res)

# Reformat the date column to remove the time element
rdate <- data$R_DATE
rdate2 <- as.Date(rdate, "%d/%m/%y")

# Plot the data
plot(data$R_NUM~rdate2, data, type="l", xaxt="n")
axis(1, rdate2, format(rdate2, "%b %y"), cex.axis=.7, las=1)
After I run the above code I get the following plot.
(Yes I could have done a better job on laying out the chart with all sorts of labels and colors etc)
This chart gives us a plot of the number of new submissions by day.
There are 2 very obvious things that stand out from this graph. The easiest one to deal with is that we can see that there has been substantial growth in new submissions over the past 3 years. Perhaps we need to examine these a bit closer, and when you do you will find that a lot of these are existing packages that have been resubmitted with updates.
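If I wanted to check this against my table, something like the following query would do it, assuming the r_packages table also stores the package name in a column called package_name (an assumption, as I have not shown the table structure):

# Count how many times each package appears in the table
res <- dbSendQuery(con, "select package_name, count(*) num_submissions
                           from r_packages
                          group by package_name
                         having count(*) > 1
                          order by 2 desc")
resubs <- fetch(res)
head(resubs)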
There is a very obvious peak just over halfway along the chart. We really need to investigate this to understand what has happened. This peak occurs on the 29th October 2012. What happened on the 29th October 2012, as this is clearly an anomaly with the rest of the data? Well, on this date R version 2.15.2 was released and a lot of updated packages got resubmitted.
Check out my next two blog posts where I will explore this data in a bit more detail.