Friday, December 28, 2018

Data Science, Machine Learning (and AI) 2019 watch list/predictions

Data Science and Machine Learning have been headline topics for many years now, even before the Harvard Business Review article, 'Sexiest Job of the 21st Century', was published back in 2012. The basics of Data Science, Machine Learning and even AI have existed for many decades, but over recent years we have seen many advances and many more examples of application areas.

There are many people (futurists) giving predictions of where things might be heading over the next decade or more. But what about the issues that will affect those of us who are new to the area, or those who have been around doing it for way too long?

The list below covers some of the things I believe will become more important and/or that we will hear a lot more about during 2019. (There is no particular order or priority to these topics, except for the point about Ethics.)

Ethics & privacy : With the introduction of EU GDPR there has been a renewed focus on data privacy and the ethics surrounding it. This doesn't just affect the EU but every country around the world that processes data about people in the EU. Lots of other countries are now looking at introducing laws similar to GDPR. This is all good, right? It has helped raise awareness of the value of personal data and what companies might be doing with it. We have seen lots of examples over the past 18 months where personal data has been used in ways that we are not happy about. Ethics on data usage is vital for all companies, and greater focus will be placed on this going forward to ensure that data is used in an ethical manner, as not doing so can result in a backlash from your customers, who will just go elsewhere. Just because you have certain customer data doesn't mean you should use it to exploit them. Expect to see some new job roles in this area.

Clearer distinction between different types of roles for Data Science : Everyone is a Data Scientist, and if you aren't one then you probably want to be one. Data Scientists are the cool kids at the moment, but with this comes confusion about what a data scientist actually is. Are these the people building machine learning algorithms? Or the people who were called Business Intelligence experts a few years ago? Or are they people who build data pipelines? Or are they problem solvers? Or something else? A few years ago I wrote a blog post about Type I and Type II Data Scientists, and it still holds true today. 'Data Scientist' is a confusing term and doesn't really describe one particular job role. A Data Scientist can come in many different flavours, and it is impossible for any one person to be all flavours. Companies no longer have one or a small handful of data scientists; they now have teams of people performing data science tasks. Yes, most of these tasks have been around for a long time and will continue to be, and now we have others joining them. Today and going forward we will see a clearer distinction between each of these flavours of data scientist, moving away from a generalist role to specialist roles including Data Engineer, Business Analyst, Business Intelligence Solution Architect/Specialist, Data Visualization, Analytics Manager, Data Manager, Big Data/Cloud Engineer, Statistician, Machine Learning Engineer, and a Data Scientist Manager (who plugs all the other roles together).

Data Governance : Do you remember when Data Governance was the big trend, back five to eight years ago? Well, it's going to come back in 2019. With the increased demands of managing data, in all its shapes and locations, knowing what we have, where it is, and what people are doing with it is vital. As highlighted in the previous point, without good controls on our data and good controls over what we can do (in an ethical way) with our data, we will just end up in a mess and potentially annoy our customers. With the expansion of ML and AI, the role of data governance will gain greater attention, as we need to manage all the ML and AI to ensure we have efficient delivery of these solutions. As more companies embrace the cloud, there will be a gradual shifting of data from on-premises to the cloud, and in many instances there will be a hybrid existence. But what data should be stored where, based on requirements, security, laws, privacy concerns, etc.? Good data governance is vital.

GDPR and ML : In 2018 we saw the introduction of the EU GDPR. This has had a bit of an impact on IT in general, and there has been lots of work and training on this for everyone. Within the GDPR there are a number of articles (22, 13, 14, etc.) that impact upon the use of ML outputs. Some of this is about removing any biases from the data and process, and some is about the explainability of the predictions. The ability to explain a ML prediction is proving very challenging for most companies. This could mean huge rework in how their ML predictions work to ensure they are compliant with EU GDPR. In 2019 (and beyond) we will start to see the impact of this and the work being done to address it. This also relates to the point on Ethics and Privacy mentioned above.

More intelligent use of Data (let's call it AI for now) : We have grown to know and understand the importance of data within our organisations, even more so over the past few years with lots of articles from Harvard Business Review, The Economist, and many others. The importance of data, and being able to use it efficiently and effectively, has risen to boardroom level. In 2019 we will continue to see an increase in the intelligent use of data. Perhaps a better term for this is AI-driven development. AI can mean lots of things, from a simple IF statement to more complex ML and other algorithms or data processing techniques. Every application from now on needs to look at being more intelligent, smarter than before. All processing needs to be more tightly integrated, with more automation of processing (see below for more on this). This allows us to build smarter applications and, with that, smarter organisations.

Auto ML : The actual steps of doing the core ML tasks are really boring. I mean really boring. It typically involves running a few lines of R, Python, etc. code or creating some nodes in a workflow tool. It isn't difficult or complicated. It's boring. What makes it even more boring is the tuning of the (hyper) parameters. It's boring! I wish all of this could be automated! Most of us have scripts that automate this for us, but in 2019 we will see more of this automated in the various languages, libraries and tools. A number of vendors will bring out new or upgraded ML solutions that will 'Automate the Boring Stuff' for ML. Gartner says that by 2020 over 40% of data science tasks will be automated.
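To give a flavour of what this kind of automation looks like, here is a minimal sketch using scikit-learn's GridSearchCV to take care of the boring hyperparameter loop. The dataset, model and parameter grid are purely illustrative choices on my part, not a recommendation of any particular tool.

# a minimal sketch of automating the (hyper) parameter tuning with scikit-learn
# the dataset, model and parameter grid below are illustrative choices only
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# the search space for the (hyper) parameters
param_grid = {'n_estimators': [50, 100, 200],
              'max_depth': [3, 5, None]}

# let the library do the boring looping and cross-validation for us
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)

print('Best parameters : ', search.best_params_)
print('Best CV score   : ', search.best_score_)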

Automation : Building upon Auto ML (or Automated ML), we will see more automation of the entire ML process, from start to end: more automation of the data capture, data harvesting, data enrichment, data transformations, etc. Again, automating the boring stuff. Additionally, we will see more automation of getting ML into production systems. Most discussion of ML covers up to creating and (poorly) evaluating a model, but what happens after that? We can automate the usage of the ML model (see next point), and not only that, we can also automate the whole iterative process of updating the models. There are many examples of this already, and some are called Adaptive Intelligent applications.

Moving from back office to front of house : Unfortunately, when most people talk about ML they limit themselves to creating a model for a particular scenario. But when you want to take such models out of the back room (where the data scientists live) and move them into production, there are a number of challenges. Production can mean back-end processing as well as front-end applications. A lot has been covered on the use of ML for large bulk processing (back-end applications). But we will see more and more integration of ML models into the everyday applications our companies use. These ML models will allow us to develop augmented analytic applications. This is similar to the re-emergence of AI applications, whereby ML and other AI methods (e.g. using an IF statement) can be used to develop more functionally rich applications. Developers will move beyond providing the required functionality to asking how they can make their applications more intelligent using AI and ML.

ML Micro-Services : To facilitate the automation tasks of putting ML into more production front-end applications, an efficient approach is needed. With most solutions to date, this has required a lot of development effort or complicated plumbing to make it work. We are now in the age of containerisation. This allows the efficient rollout of new technology and new features for applications without the need for lots of development work. In a similar way, for ML we will see more efficient delivery of ML using ML Scoring Engines. These can take an input data set and return the scored data. The data set can consist of an individual record or many thousands. For ML to score or label new data, it is performing a simple mathematical calculation, and computers can perform these really quickly. Setting up and using ML micro-services allows many applications to use the same ML model for scoring.
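As a rough sketch of what one of these scoring micro-services might look like, the following uses Flask to wrap a pre-trained scikit-learn model behind a single scoring endpoint. The endpoint name, the model file and the JSON payload format are all assumptions for illustration, not a prescribed design.

# a minimal sketch of an ML scoring micro-service (illustrative only)
# assumes a scikit-learn model has already been trained and saved to 'model.pkl'
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load('model.pkl')   # hypothetical pre-trained model file

@app.route('/score', methods=['POST'])
def score():
    # expects JSON of the form {"records": [[...feature values...], ...]}
    payload = request.get_json()
    predictions = model.predict(payload['records'])
    return jsonify({'scores': predictions.tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Any application, front end or back end, can then POST a single record or many thousands of records to the same endpoint and get the scored data back.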

Renewed interest in Citizen Data Scientist : The citizen data scientist was a popular topic/role 3-5 years ago. In 2019 we will see a renewed interest in Citizen Data Scientists, although there might be a new phrase used for it. Following on from the points above on the automation of ML, the point near the beginning about clearer distinctions between roles, and with greater education on core ML topics for everyone, we will see a lot more employees using ML and/or AI in their everyday jobs. Add to this the integration of ML and AI into all applications (and not just front-end applications), including greater use in reporting and analytic tools. We are already seeing elements of this with chatbots, analytics tools, trend applications, etc.

Slight disillusionment with Deep Learning & renewed interest in solving business problems : It seemed that every day throughout 2018 there were hundreds of articles about the use of Deep Learning and Neural Networks. These are really great tools, but are they suitable for everyone and for every type of problem? The simple answer is no, they aren't. Most examples given seemed to be finding a cat or a dog in an image, or other noddy examples. Yes, deep learning and neural networks can give greater accuracy for predictions, but this level of accuracy comes at a price. In 2019 we will see a tail-off in the use of 'real' deep learning and neural networks for noddy examples, and see some real use cases coming through. For example, I'm working on two projects that use these technologies to try to save lives. There will be a renewed focus on solving real business problems, and sometimes the most accurate solution or tool may not be the best or most efficient one to use.

Big Data diminishes and (Semi-)Autonomous takes hold : Big Data! What's big data? Does Big Data really matter? Big data was the trendy topic for the past few years, everyone was claiming to be an expert, and if you weren't doing big data then you felt left behind. With big data we had lots of technologies like Hadoop, Map-Reduce, Spark, HBase, Hive, etc., and the list goes on and on. During 2018 there was a definite shift away from using many of these technologies and towards the use of cloud solutions. Many of the vendors have data storage solutions for your "Big Data" problem, but most of these are using PostgreSQL or some columnar type of data storage engine. What the cloud gave us was a flexible and scalable architecture for our data storage problem. Notice the way I've dropped the "Big" from that. Data is data and it comes in many different formats, and most databases can store, process and query data in these formats. We've also seen the drive towards serverless and autonomous environments. For the majority of cases this is fine, but for others a more semi-autonomous environment would suit them better. Again, some of the boring work has been automated. We will see more of this, or perhaps more correctly we will be hearing that everyone is using autonomous and that if you aren't you should be! It isn't for everyone. Additionally, we will be hearing more about ML Cloud Services, and these have many issues that the vendors will not talk about! (See the first point on data privacy.)

Monday, November 19, 2018

Oracle Machine Learning notebooks

In this blog post I'll have a look at Oracle Machine Learning notebooks, some of the example notebooks and then how to create a new one.

Check out my previous blog posts on ADWC.

- Create an Autonomous Data Warehouse Cloud Service

- Creating and Managing OML user on ADWC

On entering Oracle Machine Learning on your ADWC service, you will get the following.


Our starting point is to examine what is listed in the Examples section. Click on the Examples link. The following lists the example notebooks.


Here we have examples that demonstrate how to build Anomaly Detection, Association Rules, Attribute Importance, Classification, Regression and Clustering models, and one that contains examples of various statistical functions.

Click on one of these to see the notebook. The following is the notebook demoing the Statistical Functions. When you select a notebook it might take a few seconds to set up and open, as some setup is needed in the background to make sure you have access to the demo data; it then runs the notebook, generating the results. Most of the demo data is based on the SH schema.


Now let us create our first notebook.

From the screen shown above, click on the menu icon on the top left of the screen.


And then click on Notebooks from the pop-out menu.


In the Notebooks screen click on the Create button to create your first notebook.


And give it a meaningful name.


The Notebook shell will be created and then opened for you.

In the grey box, just under the name of your Notebook, is where you can enter your first SQL statement. Over on the right-hand side of this cell you will see a triangle on its side. This is the run button.


For now you can only run SQL statements, but you also have other notebook features, such as the different charting options listed under the grey cell where your SQL is located.


Here you can create Bar, Pie, Area, Line and Scatter charts. Here is an example of a Bar chart.


Warning: You do need to be careful with your syntax, as minimal details are given on what is wrong with your code, not even the error numbers.

Go give it a go and see how far you can take these OML Notebooks.

Wednesday, November 14, 2018

Reading Data from Oracle Table into Python Pandas - How long & Different arraysize

Here are some results from a little testing I recently did on extracting data from an Oracle database, looking at what effect the arraysize has and which method might be the quickest.

The arraysize determines how many records will be retrieved in each batch. When a query is issued to the database, the results are returned to the calling programme in batches of a certain size. The nature of the application and the number of records being retrieved will determine the appropriate arraysize value. The value of this can have a dramatic effect on your query and application response times. Sometimes a small value works very well, but sometimes you might need a larger value.

My test involved using an Oracle Database Cloud instance, using Python and the following values for the arraysize.

arraysize = (5, 50, 500, 1000, 2000, 3000, 4000, 5000) 

The first test was to see what effect these arraysizes have on retrieving all the data from a table. The table in question has 73,668 records, so it is not a large table. The test loops through this list of values and fetches all the data, using the fetchall function (part of cx_Oracle), and then displays the time taken to retrieve the results.

# import the Oracle Python library
import cx_Oracle
import datetime
import pandas as pd
import numpy as np

# setting display width for outputs in PyCharm
desired_width = 280
pd.set_option('display.width', desired_width)
np.set_printoptions(linewidth=desired_width)
pd.set_option('display.max_columns',30)

# define the login details
p_username = "************"
p_password = "************"
p_host = "************"
p_service = "************"
p_port = "1521"

print('--------------------------------------------------------------------------')
print(' Testing the time to extract data from an Oracle Database.')
print('    using different approaches.')
print('---')
# create the connection
con = cx_Oracle.connect(user=p_username, password=p_password, dsn=p_host+"/"+p_service+":"+p_port)

print('')
print(' Test 1: Extracting data using Cursor for different Array sizes')
print('    Array Size = 5, 50, 500, 1000, 2000, 3000, 4000, 5000')
print('')
print('   Starting test at : ', datetime.datetime.now())

beginTime = datetime.datetime.now()
cur_array_size = (5, 50, 500, 1000, 2000, 3000, 4000, 5000)
sql = 'select * from banking_marketing_data_balance_v'

for size in cur_array_size:
    startTime = datetime.datetime.now()
    cur = con.cursor()
    cur.arraysize = size
    results = cur.execute(sql).fetchall()
    print('      Time taken : array size = ', size, ' = ', datetime.datetime.now()-startTime, ' seconds,  num of records = ', len(results))
    cur.close()

print('')
print('   Test 1: Time take = ', datetime.datetime.now()-beginTime)
print('')

And here are the results from this first test.

Starting test at :  2018-11-14 15:51:15.530002
      Time taken : array size =  5  =  0:36:31.855690  seconds,  num of records =  73668
      Time taken : array size =  50  =  0:05:32.444967  seconds,  num of records =  73668
      Time taken : array size =  500  =  0:00:40.757931  seconds,  num of records =  73668
      Time taken : array size =  1000  =  0:00:14.306910  seconds,  num of records =  73668
      Time taken : array size =  2000  =  0:00:10.182356  seconds,  num of records =  73668
      Time taken : array size =  3000  =  0:00:20.894687  seconds,  num of records =  73668
      Time taken : array size =  4000  =  0:00:07.843796  seconds,  num of records =  73668
      Time taken : array size =  5000  =  0:00:06.242697  seconds,  num of records =  73668

As you can see, there is a lot of variation in the results.

You may get different performance results based on your location, network connectivity and proximity to the database. I was at home (Ireland) using wifi and my database was located somewhere in the USA. I ran the test a number of times and the timings varied by +/- 15%, which is a lot!

When the data is retrieved in this manner you can process the data set from the returned result set. More typically, though, you will want to work with the data set as a pandas DataFrame. The next two tests look at a couple of methods of querying the data and storing the result sets in a pandas DataFrame.

For these two tests, I'll set the arraysize = 3000. Let's see what happens.

For the second test I'll again use the fetchall() function to retrieve the data set. From that I extract the names of the columns and then create a pandas DataFrame combining the results data set and the column names.

cur_array_size = 3000   # as noted above, use an arraysize of 3000 for this test

startTime = datetime.datetime.now()
print('   Starting test at : ', startTime)
cur = con.cursor()
cur.arraysize = cur_array_size
results = cur.execute(sql).fetchall()
print('   Fetched ', len(results), ' in ', datetime.datetime.now()-startTime, ' seconds at ', datetime.datetime.now())
startTime2 = datetime.datetime.now()
col_names = []
for i in range(0, len(cur.description)):
    col_names.append(cur.description[i][0])
print('   Fetched data & Created the list of Column names in ', datetime.datetime.now()-startTime, ' seconds at ', datetime.datetime.now())

# create the pandas DataFrame from the results set and the column names
df = pd.DataFrame(results, columns=col_names)
print('   Finished creating Dataframe in ', datetime.datetime.now()-startTime, ' seconds at ', datetime.datetime.now())
cur.close()

print('')
print('   Test 2: Total Time take = ', datetime.datetime.now()-startTime)

The results from this are.

      Fetched  73668  in  0:00:07.778850  seconds at  2018-11-14 16:35:07.840910
      Fetched data & Created the list of Column names in  0:00:07.779043  seconds at  2018-11-14 16:35:07.841093
      Finished creating Dataframe in  0:00:07.975074  seconds at  2018-11-14 16:35:08.037134

Test 2: Total Time take =  0:00:07.975614

Now that was quick. Fetching the data set took just over 7.77 seconds, creating the column names took fractions of a millisecond, and the final creation of the pandas DataFrame took approx 0.2 seconds.

For the third test I used the pandas library function called read_sql(). This function takes two inputs: the first is the query to be processed and the second is the database connection.

print(' Test 3: Test timing for read_sql into a dataframe')
cur_array_size = 3000
print('   will use arraysize = ', cur_array_size)
print('')
startTime = datetime.datetime.now()
print('   Starting test at : ', startTime)

df2 = pd.read_sql(sql, con)

print('      Finished creating Dataframe in ', datetime.datetime.now()-startTime, ' seconds at ', datetime.datetime.now())
# close the connection at end of experiments
con.close()

and the results from this are.

   Test 3: Test timing for read_sql into a dataframe will use arraysize =  3000

   Starting test at :  2018-11-14 16:35:08.095189
      Finished creating Dataframe in  0:02:03.200411  seconds at  2018-11-14 16:37:11.295611

You can see that it took just over 2 minutes to create the pandas DataFrame using the read_sql() function, compared to just under 8 seconds using the previous method.

It is important to test the various options for processing your data and find the one that works best in your environment. As with most languages there can be many ways to do the same thing. The challenge is to work out which one you should use.
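If you want to repeat this kind of comparison in your own environment, the test code above can be wrapped into a small helper function. This is just a sketch based on the code already shown in this post; pass in whatever connection, query and arraysize you are testing.

# sketch of a reusable timing helper, based on the test code shown above
import datetime
import pandas as pd

def timed_fetch(con, sql, arraysize):
    # fetch all rows for the query using the given arraysize and
    # return a pandas DataFrame plus the time taken
    startTime = datetime.datetime.now()
    cur = con.cursor()
    cur.arraysize = arraysize
    results = cur.execute(sql).fetchall()
    col_names = [col[0] for col in cur.description]
    cur.close()
    df = pd.DataFrame(results, columns=col_names)
    return df, datetime.datetime.now() - startTime

# example usage (using the connection and query from the tests above)
# df, elapsed = timed_fetch(con, sql, 3000)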

Tuesday, November 6, 2018

Installing and configuring Oracle 18c XE

The following are the simple steps required to install Oracle 18c XE (Express Edition) on Oracle Linux. Check out my previous blog post on Oracle 18c XE. Also check out the product webpage for more details and updates. There is a very important word on that webpage. That word is 'FREE' and it is something you don't see too often. Go get and use the (almost) full enterprise version of the Oracle Database.

I've created a VM using Oracle Linux for the OS.

After setting up the VM, login as root and download the RPM file.


Run the following as root to perform dependency checks and configurations.

yum install -y oracle-database-preinstall-18c 

You can now run the install using the following command.

yum -y localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm 

When the install has completed, the next step is to create and configure the database. This is done using the following command.

/etc/init.d/oracle-xe-18c configure 

You will be prompted to enter a common password for the SYS, SYSTEM and PDBADMIN users. You will need to change these at a later time.

Then to start the database, run

systemctl start oracle-xe-18c 

The next time you restart the VM, you might find that the database hasn't started or loaded. You will need to do this manually. This is a bit of a pain in the behind.

To avoid having to do this each time, run the following commands as root.

systemctl daemon-reload
systemctl enable oracle-xe-18c

These commands allow the database to be shut down when the machine or VM is shut down, and will automatically start up the database when the machine/VM starts up again.

The final step is to connect to the database

sqlplus sys/<password>@//localhost:1521/XE as sysdba 

You can then go and perform all your typical admin tasks, set up SQLDeveloper, and create additional users.
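If you prefer to connect from Python rather than SQL*Plus (as done elsewhere on this blog with cx_Oracle), a minimal sketch looks like the following. The username, password and the XEPDB1 pluggable database service name are assumptions; adjust them for your own setup.

# sketch: connecting to the new 18c XE database from Python using cx_Oracle
# the username, password and service name below are assumptions for your setup
import cx_Oracle

con = cx_Oracle.connect(user='my_user', password='my_password',
                        dsn='localhost:1521/XEPDB1')
print('Database version:', con.version)
con.close()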

Bingo! All is good now.

Docker

Putting Oracle 18c XE on docker is an excellent way to make it easily deployable and to build out solutions that require a DB.

Check out these links for instructions on how to set up a Docker container with Oracle 18c XE.

https://github.com/fuzziebrain/docker-oracle-xe

Sunday, October 28, 2018

RandomForests in R, Python and SQL

I recently wrote a two part article explaining how Random Forests work and how to use them in R, Python and SQL.

These were posted on ToadWorld webpages. Check them out.

Part 1 of article


Part 2 of article

Tuesday, October 23, 2018

Creating and Managing OML users on Oracle ADWS

(Check out my recent blog post on getting ADWS up and running. You will need to have followed those steps before you can perform the following.)

In this post I'll look at how to set up and manage users specifically for the Oracle Machine Learning (OML) tool. This tool is only available on ADWS and is a Zeppelin-based notebook for analytics, data science and machine learning.

1. Open the service console for ADWS and click on Administration. Administration can be found on the small menu list on the left-hand side of the screen.

2. Click on Manage Oracle ML Users. As we are only interested in OML and users for OML, just click on the section titled 'Oracle ML Users'.

3. Sign in as the Admin user. This user was created in my previous blog post. Hopefully you can remember the password.

4. Create a New User. The only user currently enabled for OML is the Admin user. To create a new OML user, click on the Create button.


5. Enter the OML User details. Enter an email address and the person will receive an email with their login details. You have the choice of having a system-generated password, or you can uncheck the tick box and add in a password. Click the Create button, and hopefully the user will receive the email. The email may take a little bit of time to arrive in the user's inbox!

6. Log into Oracle Machine Learning

You have two options: follow the link in the email, or click on the Home button on the top right-hand side of the screen.

You will then be logged into Oracle Machine Learning. Look out for my blog posts on using this product and how to run the demos.

Slides from my OOW Presentation

Here are the slides from my presentation (with Neil Chandler) at Oracle Open World and Oracle Code One.

1 - Code1-Nnets_REST-joint-ver2

Saturday, October 20, 2018

Oracle 18c XE - Comes with in-database and R machine learning

As of today, 20th October, Oracle has finally released Oracle 18c XE, aka Express Edition.

A very important word associated with Oracle 18c XE is the word 'FREE'

Yes it is FREE

This FREE product is packed full of features. Think of all the features that come with the Enterprise Edition of the Database. It comes with most of those features, including some of the extra add-on features.

I said it comes with most features. There are a few features that don't come with XE, so go check out the full list here.


There are a few restrictions:

  • Up to 12 GB of user data
  • Up to 2 GB of database RAM
  • Up to 2 CPU threads
  • Up to 3 Pluggable Databases

I know of so many companies and applications that easily meet the above restrictions.

For the Data Scientists and Machine Learning people, the Advanced Analytics option is now available with Oracle 18c XE. That means you can use the in-memory features for super fast analytics, use the in-database machine learning algorithms, and also use the embedded R feature called Oracle R Enterprise.

Yes you are limited to 12G of user data. That might be OK for most people but for those whose data is BIG then this isn't an option for you.

There is a phrase, "Your data isn't as big as you think", so maybe your data might fit within the 12G.

Either way this can be a great tool to allow you to try out machine learning for Free in a test lab environment.

Go download it and give it a try.

Thursday, October 18, 2018

Creating an Autonomous Data Warehouse Cloud Service

The following outlines the steps to create an Autonomous Data Warehouse Cloud Service.
Log into your Oracle Cloud account and then follow these steps.
1. Select Autonomous Data Warehouse Cloud service from the side menu
2. Select Create Autonomous Data Warehouse button
3. Enter the Compartment details (Display Name, Database Name, CPU Core Count & Storage)
4. Enter a Password for Administrator, and then click ‘Create Autonomous Data Warehouse’
5. Wait until the ADWC is provisioned
Going from this
to this
And you should receive an email that looks like this
6. Click on the name of the ADWS you created
7. Click on the Service Console button
8. Then click on Administration and then Download a Connection Wallet
Specify the password
You can now use this to connect to the ADWS using SQL Developer
All done.

Monday, October 15, 2018

R vs Python vs SQL for Machine Learning (Infographic)

Next week I'll be giving several presentations on machine learning at Oracle Open World and Oracle Code One. In one of these presentations an evaluation of using R vs Python vs SQL will be given and discussed.

Check out the infographic containing the comparisons.

Click here to download the PDF version.

Info Graphic

Wednesday, October 10, 2018

OOW 2018 Chocolate Tasting

Calling all Oracle ACEs, Developer Champions and Oracle Product Managers from around the World.

Are you going to Oracle Open World or Oracle Code One?

If you are, bring some of your favourite chocolates from where you live and share them with other Oracle ACEs, Developer Champions and Oracle PMs.

Location : The Hub (Moscone West).

Date : Wednesday 24th October

Time : 3pm-4pm

All you have to do is to bring some of the best chocolate from your country or your favourite chocolate, meet with other people, talk about Oracle technologies and what you have learned during your time at Oracle Open World and Oracle Code One.

Please don't bring your typical high street, mass market type of chocolate. Bring the good stuff. Pick it up at your local chocolate shop or in the airport as you begin your travels.

Last year (2017) we had chocolate from 14 different countries. They were all very different and very tasty.

I'll have some Butlers Chocolates with me for the tasting. What chocolates will you bring?

Friday, September 7, 2018

OOW18 and Code One agendas with Date and Times

I've just received an email from the organisers of Oracle Open World (18) and Oracle Code One (formerly JavaOne) with details of when I will be presenting.

It's going to be a busy presenting schedule this year, with 4 sessions, 3 of them on the Monday.

Check out my sessions, dates and times.

Screenshot 2018 09 07 09 10 11

In addition to these sessions I'll also be helping out in the Demo area in the Developer Lounge. I'll be there on Wednesday afternoon handing out FREE beer.

Wednesday, August 29, 2018

Bringing Neural Networks to Production using GraphPipe

Machine learning is a fascinating topic. It has so much potential, yet very few people talk about using machine learning in production. I've been highlighting the need for this for over 20 years now, and only a very small number of machine learning languages and solutions are suitable for production use. Why? Maybe it is due to the commercial aspects: as many of the languages and tools are driven by the open source community, one of the last things they get round to focusing on is production deployment. Rightly, they are focused on developing more and more machine learning algorithms and features for building models, but where the real value comes is in being able to embed machine learning model scoring in production systems. Maybe this is why the dominant players for machine learning in enterprises are still the big old analytics companies.

Yes, that was a bit of a rant, but it is true. Over the summer and the past few months there have been a number of articles about production deployment.

But this is not a new topic. For example, we have had the Predictive Model Markup Language (PMML) around for a long time. The aim of this was to allow the interchange of models between different languages. This would mean that the data scientist could develop their models using one language and then transfer or translate the model into another language that offers the same machine learning algorithms.

But the problem with this approach is that you may end up with different results being generated by the model in the development or lab environment versus the model being used in production. Why does this happen? Well the algorithms are developed by different people/companies and everyone has their preferences for how these algorithms are implemented.

To overcome this, some companies would rewrite their machine learning algorithms and models to ensure that development/lab results matched the results in production. But there is a very large cost associated with this development and the ongoing maintenance as the models evolve. This would occur, maybe, every 3, 6, 9 or 12 months. Sometimes the time to write or rewrite each new version of the model would be longer than its lifespan.

These kinds of problems have been very common and have impacted model deployment in production.

In the era of cloud we are now seeing some machine learning cloud solutions making machine learning models available using REST services. These can, very easily, allow for machine learning models to be included in production applications. You are going to hear more about this topic over the coming year.
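As a simple illustration of what calling one of these REST-based scoring services might look like from an application, here is a sketch using the Python requests library. The URL and the payload structure are purely hypothetical; each vendor or service will define its own.

# sketch of calling a (hypothetical) REST scoring service from an application
import requests

# the endpoint URL and payload structure below are illustrative only
payload = {'records': [[42, 'M', 12000.0], [29, 'F', 54000.0]]}
response = requests.post('https://example.com/api/v1/score', json=payload)

if response.ok:
    print('Predicted labels :', response.json())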

But, despite all the claims and wonders and benefits of cloud solutions, it isn't for everyone. Maybe at some time in the future but it mightn't be for some months or years to come.

So, how can we easily add machine learning model scoring/labeling to our production systems? Well we need some sort of middleware solutions.

The current enthusiasm for neural networks, and their need for GPUs, means that these cannot (easily) be deployed into production applications.

There have been some frameworks put forward to enable this. One such framework is called Graphpipe, which has recently been made open source by Oracle.

Graphpipe

Graphpipe is a framework for accessing and using machine learning models developed and running on different platforms. The framework allows you to perform model scoring across multiple neural network models and create ensemble solutions based on these. Graphpipe development has been focused on performance (unlike most other frameworks). It uses flatbuffers for the efficient transfer of data and currently has integrations with TensorFlow, PyTorch, MXNet, CNTK and, via ONNX, caffe2.

Expect to have more extensions added to the framework.

Graphpipe website

Graphpipe getting started

Graphpipe blogpost

Graphpipe download

Monday, August 13, 2018

Spark docker images

Spark is a very popular environment for processing data and doing machine learning in a distributed environment.

When working in a development environment you might work on a single node. This could be your local PC or laptop, as not everyone will have access to a multi-node distributed environment.

But what if you could spin up some Docker images, thereby creating additional nodes for you to test out the scalability of your Spark code?

Here are links to some Docker images that may help you to do this.

Or simply create a cloud account on the Databricks Community website to create your own Spark environment to play and learn.
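For quick local testing before moving to a multi-node setup, you can also create a single-node Spark session directly in Python. This is a minimal sketch and assumes the pyspark package is installed.

# minimal sketch: a local, single-node Spark session for development/testing
# assumes the pyspark package is installed (e.g. pip install pyspark)
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master('local[*]')        # run locally, using all available cores
         .appName('dev-testing')
         .getOrCreate())

df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'label'])
df.show()

spark.stop()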

Thursday, August 2, 2018

A selection of Hadoop Docker Images

When it comes to big data platforms, one of the biggest challenges is getting a test environment set up where you can try out the various components. There are a few approaches to doing this. The first is to set up your own virtual machine or some other container with the software. But it can be challenging to get even a handful of big data applications/software to work together on one machine.

But there is an alternative approach. You can use one of the preconfigured environments from the likes of AWS, Google, Azure, Oracle, etc. But in most cases these come with a cost. Maybe not in the beginning, but after a little use you will need to start handing over some dollars. These also require you to have access to the cloud, i.e. wifi, to run them. Again, not always possible!

So what if you want to have a local big data and Hadoop environment on your own PC or laptop, or in your home or office test lab? There are a lot of Virtual Machines available, but most of these have a sizeable hardware requirement, particularly for memory, with many requiring 16+GB of RAM! In more recent times this might not be a problem, but for many it still is: your machine may not have that amount, or it may not allow you to upgrade.

What can you do?

Have you considered using Docker? There are many different Hadoop Docker images available and these are not as resource or hardware hungry as the Virtual Machines.

Here is a list of some that I've tried out and you might find them useful.

Cloudera QuickStart image

You may have tried their VM; now go try the Cloudera QuickStart Docker image.

Read about it here.

Check out Docker Hub for lots and lots of images.

Docker Hub is not the only place to get Hadoop Docker images. There are lots on GitHub. Just do a quick Google search to find the many, many, many images.

These Docker Hadoop images are a great way for you to try out these Big Data platforms and environments with the minimum of resources.

Monday, July 23, 2018

Lesser known Apache Machine Learning languages

Machine learning is a very popular topic in recent times, and we keep hearing about languages such as R, Python and Spark. In addition to these we have commercially available machine learning languages and tools from SAS, IBM, Microsoft, Oracle, Google, Amazon, etc. Everyone wants a slice of the machine learning market!

The Apache Foundation supports the development of new open source projects in a number of areas. One such area is machine learning. If you have read anything about machine learning you will have come across Spark, and you might believe that everyone is using it. Sadly this isn't true, for lots of reasons, but it is very popular. Spark is one of the projects supported by the Apache Foundation.

But are there any other machine learning projects being supported by the Apache Foundation that are an alternative to Spark? The following lists the alternatives and lesser-known projects (most of these are incubator/retired/graduated Apache projects):

Flink Flink is an open source system for expressive, declarative, fast, and efficient data analysis. It combines the scalability and programming flexibility of distributed MapReduce-like platforms with the efficiency, out-of-core execution, and query optimization capabilities found in parallel databases. Flink was originally known as Stratosphere when it entered the Incubator.

Documentation

(graduated)

HORN HORN is a neuron-centric programming API and execution framework for large-scale deep learning, built on top of Apache Hama.

Wiki Page

(Retired)

Hivemall Hivemall is a library for machine learning implemented as Hive UDFs/UDAFs/UDTFs.

Apache Hivemall offers a variety of functionalities: regression, classification, recommendation, anomaly detection, k-nearest neighbor, and feature engineering. It also supports state-of-the-art machine learning algorithms such as Soft Confidence Weighted, Adaptive Regularization of Weight Vectors, Factorization Machines, and AdaDelta.

Documentation

(incubator)

MADlib Apache MADlib is an open-source library for scalable in-database analytics. It provides data-parallel implementations of mathematical, statistical and machine learning methods for structured and unstructured data. Key features include: Operate on the data locally in-database. Do not move data between multiple runtime environments unnecessarily; Utilize best of breed database engines, but separate the machine learning logic from database specific implementation details; Leverage MPP shared nothing technology, such as the Greenplum Database and Apache HAWQ (incubating), to provide parallelism and scalability.

Documentation

(graduated)

MXNet A Flexible and Efficient Library for Deep Learning. MXNet provides optimized numerical computation for GPUs and distributed ecosystems, from the comfort of high-level environments like Python and R. MXNet automates common workflows, so standard neural networks can be expressed concisely in just a few lines of code.

Webpage

(incubator)

OpenNLP OpenNLP is a machine learning based toolkit for the processing of natural language text. OpenNLP supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, language detection and coreference resolution.

Documentation

(graduated)

PredictionIO PredictionIO is an open source Machine Learning Server built on top of a state-of-the-art open source stack that enables developers to manage and deploy production-ready predictive services for various kinds of machine learning tasks.

Documentation

(graduated)

SAMOA SAMOA provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms that run on top of distributed stream processing engines (DSPEs). It features a pluggable architecture that allows it to run on several DSPEs such as Apache Storm, Apache S4, and Apache Samza.

Documentation

(incubator)

SINGA SINGA is a distributed deep learning platform. An intuitive programming model based on the layer abstraction is provided, which supports a variety of popular deep learning models. SINGA architecture supports both synchronous and asynchronous training frameworks. Hybrid training frameworks can also be customized to achieve good scalability. SINGA provides different neural net partitioning schemes for training large models.

Documentation

(incubator)

Storm Storm is a distributed, fault-tolerant, and high-performance realtime computation system that provides strong guarantees on the processing of data. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple and can be used with any programming language.

Documentation

(graduated)

SystemML SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations such as Apache Hadoop MapReduce and Apache Spark.

Documentation

(graduated)

Big data ml

I will have a closer look at the following SQL-based machine learning languages in a later blog post:

- MADlib

- Storm

Thursday, July 12, 2018

Oracle Developer Champion

Yesterday evening I received an email titled 'Invitation to Developer Champion Program'.

What a surprise!
Oracle dev champion
The Oracle Developer Champion program was set up just a year ago and is aimed at people who are active in generating content and sharing their knowledge of new technologies including cloud, microservices, containers, Java, open source technologies, machine learning and various types of databases.
For me, I fit into the machine learning, cloud, open source technologies, a bit on chatbots and various types of databases areas. Well I think I do!

This made me look back over my activities for the past 12-18 months. As an Oracle ACE Director, we have to record all our activities. I'd been aware that the past 12-18 months had been a bit quieter than previous years. But when I looked back at all the blog posts, articles for numerous publications, books, code contributions, etc., even I was impressed with what I had achieved, even though it was a quiet period for me.

Membership of the Oracle Developer Champion program is for one year, and the good people in the Oracle Developer Community (ODC) will re-evaluate what I, and the others in the program, have been up to and will determine if we can continue for another year.

In addition to writing, contributing to projects, presenting, etc., Oracle Developer Champions typically have leadership roles in user groups, answer questions on forums and provide feedback to product managers.

The list of existing Oracle Developer Champions is very impressive. I'm honoured to be joining these people.

Click on the image to go to the Oracle Developer Champion website to find out more.
Screen Shot 2018 07 12 at 17 21 32

And check out the list of existing Oracle Developer Champions.

Thursday, June 28, 2018

My book on Oracle R Enterprise translated into Chinese

A couple of days ago the postman knocked on my door with a package. I hadn't ordered anything, so it was puzzling what it might be.

When I opened the package I found 3 copies of a book in Chinese.

It was one of my books !

One of my books was translated into Chinese !

What a surprise, as I wasn't aware this was happening.

At this time I'm not sure where you can purchase the book, but I'll update this blog post when I find out.

Monday, June 18, 2018

Twitter Analytics using Python - Part 3

This is my third (of five) post on using Python to process Twitter data.

Check out all the posts in the series.

In this post I'll have a quick look at how to save the tweets you have downloaded. Doing this allows you to access them at a later point and to perform more analysis. You have a few options for saving the tweets: the first is to save them to files and the second is to save them to a table in a database.

Saving Tweets to files

In the previous blog post (in this series) I had converted the tweets to a pandas DataFrame and then used that structure to perform some analysis on the data and create some charts. We have a very simple command to save to CSV.

# save tweets to a file
tweets_pd.to_csv('/Users/brendan.tierney/Dropbox/tweets.csv', sep=',')

We can inspect this file using a spreadsheet or some other app that can read CSV files and get the following.

Twitter app8

When you want to read these tweets back into your Python environment, all you need to do is the following.

# and if we want to reuse these tweets at a later time we can reload them
old_tweets = pd.read_csv('/Users/brendan.tierney/Dropbox/tweets.csv')

old_tweets

Tweet app9

That's all very easy!


Saving Tweets to a Database

There are two ways to add tweets to a table in the database: the slow way (row-by-row) or the fast way using a bulk insert.

Before we get started with inserting data, let's get our database connection set up and create the table to store the tweets for our date. To do this we need to use the cx_Oracle Python library. The following code shows the setting up of the connection details (without my actual login details), establishes the connection and then retrieves some basic connection details to prove we are connected.

# import the Oracle Python library
import cx_Oracle

# define the login details
p_username = "..."
p_password = "..."
p_host = "..."
p_service = "..."
p_port = "1521"

# create the connection
con = cx_Oracle.connect(user=p_username, password=p_password, dsn=p_host+"/"+p_service+":"+p_port)
cur = con.cursor()

# print some details about the connection and the library
print("Database version:", con.version)
print("Oracle Python version:", cx_Oracle.version)


Database version: 12.1.0.1.0
Oracle Python version: 6.3.1

Now we can create a table based on the current date.

# drop the table if it already exists
#drop_table = "DROP TABLE TWEETS_" + cur_date
#cur.execute(drop_table)

cre_table = "CREATE TABLE TWEETS_" + cur_date + " (tweet_id number, screen_name varchar2(100), place varchar2(2000), lang varchar2(20), date_created varchar2(40), fav_count number, retweet_count number, tweet_text varchar2(200))"

cur.execute(cre_table)

Now let's start with the slow (row-by-row) approach. To do this we need to take our pandas DataFrame and convert it to lists that can be indexed individually.

lst_tweet_id = [item[0] for item in rows3]
lst_screen_name = [item[1] for item in rows3]
lst_lang = [item[3] for item in rows3]
lst_date_created = [item[4] for item in rows3]
lst_fav_count = [item[5] for item in rows3]
lst_retweet_count = [item[6] for item in rows3]
lst_tweet_text = [item[7] for item in rows3]

#define a cursor to use for the inserts
cur = con.cursor()
for i in range(len(rows3)):
    #do the insert using the index. This can be very slow and should not be used on big data
    cur.execute("insert into TWEETS_2018_06_12 (tweet_id, screen_name, lang, date_created, fav_count, retweet_count, tweet_text) values (:arg_1, :arg_2, :arg_3, :arg_4, :arg_5, :arg_6, :arg_7)",
                {'arg_1':lst_tweet_id[i], 'arg_2':lst_screen_name[i], 'arg_3':lst_lang[i], 'arg_4':lst_date_created[i],
                 'arg_5':lst_fav_count[i], 'arg_6':lst_retweet_count[i], 'arg_7':lst_tweet_text[i]})

#commit the records to the database and close the cursor
con.commit()
cur.close()

Tweet app10

Now let us look at a quicker way of doing this.

WARNING: This depends on the version of the cx_Oracle library you are using. You may encounter some errors relating to the use of floats, etc. You might need to play around with different versions of the library until you get one that works for you, or these issues might be fixed in the most recent versions (one possible workaround is also sketched a little further below).

The first step is to convert the pandas DataFrame into a list of tuples.

rows = [tuple(x) for x in tweets_pd.values]
rows

Tweet app11
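If you do hit the float-related errors mentioned in the warning above, one possible workaround (not from the original post) is to convert any numpy values in the tuples to plain Python types before the bulk insert, for example:

# possible workaround (illustrative): convert numpy values to plain Python types,
# as some cx_Oracle versions reject numpy floats during executemany
import numpy as np

def to_plain(value):
    # numpy scalars expose .item() to return the native Python equivalent
    return value.item() if isinstance(value, np.generic) else value

rows = [tuple(to_plain(v) for v in x) for x in tweets_pd.values]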

Now we can do some cursor setup, like setting the bind array size. This determines how many records are sent to the database in each batch. It is better to have a larger number than a single-digit number.

cur = con.cursor()

cur.bindarraysize = 100

cur.executemany("insert into TWEETS_2018_06_12 (tweet_id, screen_name, place, lang, date_created, fav_count, retweet_count, tweet_text) values (:1, :2, :3, :4, :5, :6, :7, :8)", rows)

# commit the records to the database and close the cursor
con.commit()
cur.close()

Check out the other blog posts in this series of Twitter Analytics using Python.

Monday, June 4, 2018

Twitter Analytics using Python - Part 2

This is my second (of five) post on using Python to process Twitter data.

Check out all the posts in the series.

In this post I was going to look at two particular aspects. The first is converting Tweets to pandas, which will allow you to do additional analysis of the tweets. The second part was going to look at how to set up and process streaming of tweets. The first part turned out longer than expected, so I'm going to hold the second part over for a later post.

Step 6 - Convert Tweets to Pandas

In my previous blog post I showed you how to connect and download tweets. Sometimes you may want to convert these tweets into a structured format to allow you to do further analysis. A very popular way of analysing data is to use pandas. Using pandas to store your data is like having data stored in a spreadsheet, with columns and rows. There are also lots of analytic functions available to use with pandas.

In my previous blog post I showed how you could extract tweets using the Twitter API and do selective pulls using the Tweepy Python library. Now that we have these tweets, how do we go about converting them into pandas for additional analysis? But before we do that we need to understand a bit more about the structure of the Tweet object that is returned by the Twitter API. We can examine the structure of the User object and the Tweet object using the following commands.

dir(user)

['__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__getstate__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_api',
 '_json',
 'contributors_enabled',
 'created_at',
 'default_profile',
 'default_profile_image',
 'description',
 'entities',
 'favourites_count',
 'follow',
 'follow_request_sent',
 'followers',
 'followers_count',
 'followers_ids',
 'following',
 'friends',
 'friends_count',
 'geo_enabled',
 'has_extended_profile',
 'id',
 'id_str',
 'is_translation_enabled',
 'is_translator',
 'lang',
 'listed_count',
 'lists',
 'lists_memberships',
 'lists_subscriptions',
 'location',
 'name',
 'needs_phone_verification',
 'notifications',
 'parse',
 'parse_list',
 'profile_background_color',
 'profile_background_image_url',
 'profile_background_image_url_https',
 'profile_background_tile',
 'profile_banner_url',
 'profile_image_url',
 'profile_image_url_https',
 'profile_link_color',
 'profile_location',
 'profile_sidebar_border_color',
 'profile_sidebar_fill_color',
 'profile_text_color',
 'profile_use_background_image',
 'protected',
 'screen_name',
 'status',
 'statuses_count',
 'suspended',
 'time_zone',
 'timeline',
 'translator_type',
 'unfollow',
 'url',
 'utc_offset',
 'verified']

dir(tweets)

['__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__getstate__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_api',
 '_json',
 'author',
 'contributors',
 'coordinates',
 'created_at',
 'destroy',
 'entities',
 'favorite',
 'favorite_count',
 'favorited',
 'geo',
 'id',
 'id_str',
 'in_reply_to_screen_name',
 'in_reply_to_status_id',
 'in_reply_to_status_id_str',
 'in_reply_to_user_id',
 'in_reply_to_user_id_str',
 'is_quote_status',
 'lang',
 'parse',
 'parse_list',
 'place',
 'retweet',
 'retweet_count',
 'retweeted',
 'retweets',
 'source',
 'source_url',
 'text',
 'truncated',
 'user']

We can see all this additional information, which helps us decide what data we really want to extract.

The following example illustrates the searching for tweets containing a certain word and then extracting a subset of the metadata associated with those tweets.

oracleace_tweets = tweepy.Cursor(api.search,q="oracleace").items()
tweets_data = []
for t in oracleace_tweets:
   tweets_data.append((t.author.screen_name,
                       t.place,
                       t.lang,
                       t.created_at,
                       t.favorite_count,
                       t.retweet_count,
                       t.text.encode('utf8')))

We can print the contents of the tweets_data object.

print(tweets_data)

[('jpraulji', None, 'en', datetime.datetime(2018, 5, 28, 13, 41, 59), 0, 5, 'RT @tanwanichandan: Hello Friends,\n\nODevC Yatra is schedule now for all seven location.\nThis time we have four parallel tracks i.e. Databas…'), ('opal_EPM', None, 'en', datetime.datetime(2018, 5, 28, 13, 15, 30), 0, 6, "RT @odtug: Oracle #ACE Director @CaryMillsap is presenting 2 #Kscope18 sessions you don't want to miss! \n- Hands-On Lab: How to Write Bette…"), ('msjsr', None, 'en', datetime.datetime(2018, 5, 28, 12, 32, 8), 0, 5, 'RT @tanwanichandan: Hello Friends,\n\nODevC Yatra is schedule now for all seven location.\nThis time we have four parallel tracks i.e. Databas…'), ('cmvithlani', None, 'en', datetime.datetime(2018, 5, 28, 12, 24, 10), 0, 5, 'RT @tanwanichandan: Hel ......

I've only shown a subset of the tweets_data above.

Now we want to convert the tweets_data object to a pandas DataFrame. This is a relatively trivial task, but an important step is to define the column names, otherwise you will end up with columns labelled 0,1,2,3...

import pandas as pd

tweets_pd = pd.DataFrame(tweets_data,
                         columns=['screen_name', 'place', 'lang', 'created_at', 'fav_count', 'retweet_count', 'text'])

Now we have a pandas DataFrame that we can use for additional analysis. This can be easily examined as follows.

tweets_pd

   screen_name  place  lang  created_at           fav_count  retweet_count  text
0  jpraulji     None   en    2018-05-28 13:41:59  0          5              RT @tanwanichandan: Hello Friends,\n\nODevC Ya...
1  opal_EPM     None   en    2018-05-28 13:15:30  0          6              RT @odtug: Oracle #ACE Director @CaryMillsap i...
2  msjsr        None   en    2018-05-28 12:32:08  0          5              RT @tanwanichandan: Hello Friends,\n\nODevC Ya...

Now we can use the analytic features of pandas. For example, in the following we do a count of the number of times each language has been used in our tweets data set, and then plot it.

import matplotlib.pyplot as plt

# count the number of tweets per language
tweets_by_lang = tweets_pd['lang'].value_counts()
print(tweets_by_lang)

# plot the counts as a bar chart
lang_plot = tweets_by_lang.plot(kind='bar')
lang_plot.set_xlabel("Languages")
lang_plot.set_ylabel("Num. Tweets")
lang_plot.set_title("Language Frequency")
plt.show()  # needed when running as a script; in a notebook the plot displays inline

en    182
fr      7
es      2
ca      2
et      1
in      1

[Pandas1: bar chart of language frequency]

Similarly we can analyse the number of times a Twitter screen name appears, limiting the output to the 20 most commonly occurring screen names.

tweets_by_screen_name = tweets_pd['screen_name'].value_counts()
#print(tweets_by_screen_name)

top_twitter_screen_name = tweets_by_screen_name[:20]
print(top_twitter_screen_name)

name_plot = top_twitter_screen_name.plot(kind='bar')
name_plot.set_xlabel("Users")
name_plot.set_ylabel("Num. Tweets")
name_plot.set_title("Frequency Twitter users using oracleace")

oraesque           7
DBoriented         5
Addidici           5
odtug              5
RonEkins           5
opal_EPM           5
fritshoogland      4
svilmune           4
FranckPachot       4
hariprasathdba     3
oraclemagazine     3
ritan2000          3
yvrk1973           3
...

[Pandas2: bar chart of the top 20 screen names tweeting about oracleace]

There you go, this post has shown you how to take Twitter objects, convert them into a pandas DataFrame and then use the analytic features of pandas to aggregate the data and create some plots.


Check out the other blog posts in this series of Twitter Analytics using Python.

Wednesday, May 30, 2018

Call for Papers : UKOUG Annual Conferences : Closes 4th June at 9am (UK)

The Call for Papers (presentations) for the UKOUG Annual Conferences is open until 9am (UK time) on Monday 4th June.

[Ukoug18: UKOUG 2018 conference banner]

Me: What are you waiting for? Go and submit a topic! Why not!

You: Humm, well..., (excuse, excuse, ...)

Me: What?

You: I couldn't do that! Present at a conference?

Me: Why not?

You: That is only for experts and I'm not one.

Me: Wrong! If you have a story to tell, then you can present.

You: But I've never presented before, it scares me, but one day I'd like to try.

Me: Go for it, do it. If you want you can co-present with me.

You: But, But, But .....


I'm sure you have experienced something like the above conversation before. You don't have to be an expert to present, you don't have to know everything about a product to present, you don't have to be using the latest and brightest technologies to present, you don't have to present about something complex, etc. (and the list goes on and on)

The main thing to remember is, if you have a story to tell then that is your presentation. Be it simple, complex, only you might be interested in it, it involves making lots of bits of technology work, you use a particular application in a certain way, you found something interesting, you used a new process, etc (and the list goes on and on)

I've talked to people who "ranted" for two hours about a certain topic (it was about Dates in Oracle), but when I said they should give a presentation on that, they said "NO, I couldn't do that!". (If you are that person and you are reading this, then go on and submit that presentation).

If you don't want to present alone, then reach out to someone else and ask them if they are interested in co-presenting. Most experienced presenters would be very happy to do this.

You: But the topic area I'll talk about is not listed on the submission page?

Me: Good point, just submit it and pick the topic area that is closest.

You: But my topic would be of interest to the APPs and Tech conference, what do I do?

Me: Submit it to both, and let the agenda planners work out where it will fit.

I've presented at both APPs and Tech over the years and sometimes my Tech submission has been moved and accepted for the APPs conf, and vice versa.

Just do it!


Monday, May 28, 2018

Twitter Analytics using Python - Part 1

(This is the first part of what will probably be a five-part blog series on Twitter analytics using Python. Make sure to check out the other posts, and I'll post a wrap-up blog post that will point to all the posts in the series.)

(Yes there are lots of other examples out there, but I've put these notes together as a reminder for myself and a particular project I'm testing)

In this first blog post I will look at what you need to do to get yourself set up for analysing tweets, to harvest tweets and to do some basic processing. These are covered in the following five steps.

Step 1 - Setup your Twitter Developer Account & Codes

Before you can start writing code you need to get yourself set up with Twitter to allow you to download their data using the Twitter API.

To do this you need to register with Twitter: go to apps.twitter.com and log in using your Twitter account if you have one. If not, then you need to create an account first.

Next click on the Create New App button.

[Twitter app1: the Create New App screen on apps.twitter.com]

Then give the name of your app (Twitter Analytics using Python), a description, a webpage link (e.g. your blog or something else), click on the 'add a Callback URL' button and finally tick the check box to agree with the Developer Agreement. Then click the 'Create your Twitter Application' button.

You will then get a web page like the following that contains lots of very important information. Keep the information on this page safe as you will need it later when creating your connection to Twitter.

[Twitter app2: the application page listing your keys and access tokens]

The details contained on this web page (and below what is shown in the above image) will allow you to use the Twitter REST APIs to interact with the Twitter service.

Step 2 - Install libraries for processing Twitter Data

As with most languages, there are plenty of libraries available for you to use, and the same is true for Python and Twitter. The Tweepy library is very popular. Make sure to check out the Tweepy web site for full details of what it will allow you to do.

To install Tweepy, run the following.

pip3 install tweepy

It will download and install tweepy and any dependencies.
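
A quick way to confirm the install worked is to import the library and print its version (the version number you see will depend on when you install it):

import tweepy

# print the installed Tweepy version to confirm the install
print(tweepy.__version__)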

Step 3 - Initial Python code and connecting to Twitter

You are all set to start writing Python code to access, process and analyse Tweets.

The first thing you need to do is import the tweepy library. After that you will need to use the consumer keys and access tokens that were given on the Twitter webpage produced in Step 1 above, to create an authorised connection to the Twitter API.

[Twitter app3: code creating the authorised Tweepy connection]
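
A minimal sketch of that connection code, using the standard Tweepy OAuth setup; the key and token strings are placeholders that you replace with the values from your own app page in Step 1.

import tweepy

# placeholder values - replace with the keys and tokens from your Twitter app page
consumer_key = 'YOUR_CONSUMER_KEY'
consumer_secret = 'YOUR_CONSUMER_SECRET'
access_token = 'YOUR_ACCESS_TOKEN'
access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'

# create an OAuth handler and an authorised connection to the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)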

After you have filled in your consumer and access token values and run this code, you will not get any response; no output here simply means the connection object was created.

Step 4 - Get User Twitter information

The easiest way to start exploring Twitter is to find out information about your own Twitter account. There is an API function called 'me' that gathers the user object details from Twitter, and from there you can print these out to screen or do some other things with them. The following is an example using my Twitter account.

#Get twitter information about my twitter account
user = api.me()

print('Name: ' + user.name)
print('Twitter Name: ' + user.screen_name)
print('Location: ' + user.location)
print('Friends: ' + str(user.friends_count))
print('Followers: ' + str(user.followers_count))
print('Listed: ' + str(user.listed_count))

[Twitter app4: output of the user account details]

You can also start listing the last X number of tweets from your timeline. The following will take the last 10 tweets.

for tweets in tweepy.Cursor(api.home_timeline).items(10):
    # Process a single status
    print(tweets.text)

[Twitter app5: sample timeline output]

An alternative is the following, which returns only the default 20 records, whereas the example above can return however many tweets you specify.

public_tweets = api.home_timeline()
for tweet in public_tweets:
    print(tweet.text)
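
The underlying timeline API also accepts a count parameter, so you can ask for more than the default 20 tweets in a single call. A minimal sketch, assuming the parameter is passed straight through by Tweepy (the value 50 is just illustrative):

# ask for up to 50 tweets from the home timeline in one call
public_tweets = api.home_timeline(count=50)
for tweet in public_tweets:
    print(tweet.text)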

Step 5 - Get Tweets based on a condition

Tweepy comes with a search function that allows you to specify some text you want to search for. This can be hashtags, particular phrases, users, etc. The following is an example of searching for a hashtag.

for tweet in tweepy.Cursor(api.search,q="#machinelearning",
                           lang="en",
                           since="2018-05-01").items(10):
    print(tweet.created_at, tweet.text)

[Twitter app7: sample search results for #machinelearning]

You can apply additional search criteria, including restricting to a date range, the number of tweets to return, etc., as the sketch below shows.
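
For example, a minimal sketch assuming the Twitter search API's 'until' and 'result_type' parameters are passed through by Tweepy (the dates and the count of 25 are just illustrative):

# search within a date range and limit the number of tweets returned
for tweet in tweepy.Cursor(api.search, q="#machinelearning",
                           lang="en",
                           since="2018-05-01",      # start date (inclusive)
                           until="2018-05-31",      # end date (exclusive)
                           result_type="recent").items(25):
    print(tweet.created_at, tweet.text)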


Check out the other blog posts in this series of Twitter Analytics using Python.