Collecting / Storing Tweets with Python and MySQL

A few days ago, I published Collecting / Storing Tweets with Python and MongoDB. In that post, I describe the steps needed to collect and store tweets gathered via the Twitter Streaming API.

I received a comment on that post asking how to store data into MySQL instead of MongoDB. Here’s what you’d need to do to make that change.

Collecting / Storing Tweets with Python and MySQL

In the previous post, we stored the Twitter data in MongoDB with one line of code:
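That line looked roughly like this (a sketch, assuming `db` is your PyMongo database handle, `datajson` holds the decoded tweet, and the collection name is a placeholder of mine):

```python
# store the entire decoded tweet document in a single call
db.twitter_search.insert_one(datajson)
```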

Unlike MongoDB, where we can insert the entire JSON object as-is, MySQL requires a bit of additional work on the ‘datajson’ object before storing it.

Let’s assume that we are interested in capturing just the username, date, Tweet and Tweet ID from Twitter.  This is most likely the bare minimum information you’d want to capture from the API…there are many (many) more fields available, but for now we’ll go with these four.

Note: I’ll hold off on the MySQL specific changes for now and touch on them shortly.

Once you capture the tweet (line 38 in my original script) and have it stored in your datajson object, create a few variables to store the date, username, Tweet and ID.
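A sketch of those assignments, assuming `datajson` is the decoded tweet dictionary from the stream (the field names come from Twitter’s tweet object):

```python
from dateutil import parser

text = datajson['text']                             # the tweet body
screen_name = datajson['user']['screen_name']       # the username
tweet_id = datajson['id']                           # the numeric tweet ID
created_at = parser.parse(datajson['created_at'])   # parse the date string for MySQL
```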

Note in the above that we are using the parser.parse() command from the dateutil  module to parse the created_at date for storage into MySQL.

Now that we have our variables ready to go, we’ll need to store those variables into MySQL.  Before we can do that, we need to set up a MySQL connection. I use the python-mysql connector, but you are free to use whatever you prefer.  You’ll need to do an import MySQLdb  to get the connector imported into your script (assuming you installed the connector with pip install mysql-python).

You’ll need to create a table to store this data. You can use the SQL statement below to do so if you need assistance / guidance.
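Something like the following should work as a starting point (the table and column names here are my own; adjust the types and lengths to suit your needs; utf8mb4 keeps emoji from breaking inserts):

```sql
CREATE TABLE `twitter` (
  `id` INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `tweet_id` VARCHAR(250),
  `screen_name` VARCHAR(128),
  `created_at` DATETIME,
  `text` TEXT
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```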

Now, let’s set up our MySQL connection, query and execute/commit for the script. I’ll use a function for this to be able to re-use it for each tweet captured.
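A sketch of that function, with placeholder connection details and the table defined above:

```python
import MySQLdb

def store_data(created_at, text, screen_name, tweet_id):
    """Insert a single tweet into the 'twitter' table."""
    db = MySQLdb.connect(host="localhost", user="your_user",
                         passwd="your_password", db="your_database",
                         charset="utf8mb4")
    cursor = db.cursor()
    insert_query = ("INSERT INTO twitter (tweet_id, screen_name, created_at, text) "
                    "VALUES (%s, %s, %s, %s)")
    cursor.execute(insert_query, (tweet_id, screen_name, created_at, text))
    db.commit()
    cursor.close()
    db.close()
```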

That’s it.  You are now collecting tweets and storing those tweets into a MySQL database.

Full Script
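Here’s a rough sketch of the full MySQL version, stitched together from the pieces above. It assumes the older Tweepy 3.x streaming interface (StreamListener / Stream), the table defined earlier, and placeholder credentials; swap in your own keys and connection details.

```python
import json

import MySQLdb
import tweepy
from dateutil import parser

# words / phrases to 'listen' for on the stream
WORDS = ['#machinelearning', '#datascience', '#bigdata']

# Twitter API credentials -- fill these in with your own keys
CONSUMER_KEY = "XXXXX"
CONSUMER_SECRET = "XXXXX"
ACCESS_TOKEN = "XXXXX"
ACCESS_TOKEN_SECRET = "XXXXX"

# MySQL connection details -- placeholders, change to match your setup
MYSQL_HOST = "localhost"
MYSQL_USER = "your_user"
MYSQL_PASS = "your_password"
MYSQL_DB = "your_database"


def store_data(created_at, text, screen_name, tweet_id):
    """Insert a single tweet into the 'twitter' table."""
    db = MySQLdb.connect(host=MYSQL_HOST, user=MYSQL_USER, passwd=MYSQL_PASS,
                         db=MYSQL_DB, charset="utf8mb4")
    cursor = db.cursor()
    insert_query = ("INSERT INTO twitter (tweet_id, screen_name, created_at, text) "
                    "VALUES (%s, %s, %s, %s)")
    cursor.execute(insert_query, (tweet_id, screen_name, created_at, text))
    db.commit()
    cursor.close()
    db.close()


class StreamListener(tweepy.StreamListener):
    """Listens to the Streaming API and hands each tweet to store_data()."""

    def on_connect(self):
        print("Connected to the Twitter Streaming API.")

    def on_error(self, status_code):
        print("Error received: " + repr(status_code))
        return False  # disconnect the stream on error

    def on_data(self, data):
        try:
            datajson = json.loads(data)
            text = datajson['text']
            screen_name = datajson['user']['screen_name']
            tweet_id = datajson['id']
            created_at = parser.parse(datajson['created_at'])
            print("Tweet collected at " + str(created_at))
            store_data(created_at, text, screen_name, tweet_id)
        except Exception as e:
            print(e)


auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True))
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
```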



Want more information on working with MySQL and Python? Check out the book titled MySQL for Python by Albert Lukaszewski.


Collecting / Storing Tweets with Python and MongoDB

A good amount of the work that I do involves using social media content for analyzing networks, sentiment, influencers and other various types of analysis.

In order to do this type of analysis, you first need to have some data to analyze.  You can scrape websites like Twitter or Facebook using simple web scrapers, but I’ve always found it easier to use the APIs that these companies / websites provide to pull down data.

The Twitter Streaming API is ideal for grabbing data in real-time and storing it for analysis. Twitter also has a search API that lets you pull down a certain number of historical tweets (I think I read it was the last 1,000 tweets…but it’s been a while since I’ve looked at the Search API).   I’m a fan of the Streaming API because it lets me grab a much larger set of data than the Search API, but it requires you to build a script that ‘listens’ to the API for your required keywords and then stores those tweets somewhere for later analysis.

There are tons of ways to connect up to the Streaming API. There are also quite a few Twitter API wrappers for Python (and most of them work very well).   I tend to use Tweepy more than others due to its ease of use and simple structure. Additionally, if I’m working on a small / short-term project, I tend to reach for MongoDB to store the tweets using the PyMongo module. For larger / longer-term projects I usually connect the streaming API script to MySQL instead of MongoDB, simply because MySQL fits into my ecosystem of backup scripts, etc. better than MongoDB does.  MongoDB is perfectly suited for this type of work on larger projects too…I just tend to swing toward MySQL for those projects.

For this post, I wanted to share my script for collecting Tweets from the Twitter API and storing them into MongoDB.

Note: This script is a mashup of many other scripts I’ve found on the web over the years. I don’t recall where I found the pieces / parts of this script, but I don’t want to discount the help I had from other people / sites in building it.

Collecting / Storing Tweets with Python and MongoDB

Let’s set up our imports:
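Assuming Tweepy and PyMongo are installed (e.g. pip install tweepy pymongo), the imports look something like this:

```python
import json

import tweepy
from pymongo import MongoClient
```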

Next, set up your MongoDB path:
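For a MongoDB instance running locally, this can be as simple as the following (the ‘twitterdb’ database name is just my placeholder):

```python
MONGO_HOST = 'mongodb://localhost/twitterdb'
```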

Next, set up the words that you want to ‘listen’ for on Twitter. You can use words or phrases separated by commas.
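For example (these particular hashtags are just the ones I happen to track; use whatever you like):

```python
# terms to 'listen' for on the stream, separated by commas
WORDS = ['#machinelearning', '#datascience', '#bigdata', '#ai', '#deeplearning']
```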

Here, I’m listening for words related to machine learning, data science, etc.

Next, let’s set up our Twitter API access information.  You can set these up on Twitter’s developer site.
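Placeholders for the four credentials Tweepy needs:

```python
CONSUMER_KEY = "XXXXX"         # your consumer key
CONSUMER_SECRET = "XXXXX"      # your consumer secret
ACCESS_TOKEN = "XXXXX"         # your access token
ACCESS_TOKEN_SECRET = "XXXXX"  # your access token secret
```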

Time to build the listener class.
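A sketch of the listener, using Tweepy 3.x’s StreamListener (the ‘twitterdb’ database and ‘twitter_search’ collection names are my own placeholders; older PyMongo versions used insert() instead of insert_one()):

```python
class StreamListener(tweepy.StreamListener):
    """Handles tweets as they arrive from the Streaming API."""

    def on_connect(self):
        print("Connected to the Twitter Streaming API.")

    def on_error(self, status_code):
        print("Error received: " + repr(status_code))
        return False  # returning False disconnects the stream

    def on_data(self, data):
        try:
            client = MongoClient(MONGO_HOST)
            db = client.twitterdb                    # use (or create) the 'twitterdb' database
            datajson = json.loads(data)              # decode the raw tweet into a dict
            created_at = datajson['created_at']
            print("Tweet collected at " + str(created_at))
            db.twitter_search.insert_one(datajson)   # store the entire tweet document
        except Exception as e:
            print(e)
```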

Now that we have the listener class, let’s set everything up to start listening.
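Something along these lines, again using the Tweepy 3.x interface:

```python
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True))
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
```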

Now you are ready to go. The full script is below. You can save this script as “streaming_API.py” and run it as “python streaming_API.py” and, assuming you set up MongoDB and your Twitter API keys correctly, you should start collecting Tweets.

The Full Script:
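Putting the pieces above together, a sketch of the whole script (credentials, database name, and hashtags are placeholders to swap for your own):

```python
import json

import tweepy
from pymongo import MongoClient

MONGO_HOST = 'mongodb://localhost/twitterdb'   # assumes a local MongoDB instance

WORDS = ['#machinelearning', '#datascience', '#bigdata', '#ai', '#deeplearning']

CONSUMER_KEY = "XXXXX"
CONSUMER_SECRET = "XXXXX"
ACCESS_TOKEN = "XXXXX"
ACCESS_TOKEN_SECRET = "XXXXX"


class StreamListener(tweepy.StreamListener):
    """Handles tweets as they arrive from the Streaming API."""

    def on_connect(self):
        print("Connected to the Twitter Streaming API.")

    def on_error(self, status_code):
        print("Error received: " + repr(status_code))
        return False  # returning False disconnects the stream

    def on_data(self, data):
        try:
            client = MongoClient(MONGO_HOST)
            db = client.twitterdb                    # use (or create) the 'twitterdb' database
            datajson = json.loads(data)              # decode the raw tweet into a dict
            created_at = datajson['created_at']
            print("Tweet collected at " + str(created_at))
            db.twitter_search.insert_one(datajson)   # store the entire tweet document
        except Exception as e:
            print(e)


auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True))
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
```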


Jupyter with Vagrant

I’ve written before about using vagrant for 99.9% of my python work (see here and here for examples).   In addition to vagrant, I use jupyter notebooks for 99.9% of the work that I do, so I figured I’d spend a little time describing how I use jupyter with vagrant.

First off, you’ll need to have vagrant set up and running (descriptions for linux, MacOS, Windows).   Once you have vagrant installed, we need to make a few changes to the VagrantFile to allow port forwarding from the vagrant virtual machine to the browser on your computer. If you followed the Vagrant on Windows post, you’ll have already set up the configuration that you need for vagrant to forward the necessary port for jupyter.   For those that haven’t read that post, below are the tweaks you need to make.

My default VagrantFile is shown in figure 1 below.

Figure 1: VagrantFile Example
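For reference, a trimmed-down default VagrantFile looks something like this (the box name will vary depending on what you chose when running ‘vagrant init’):

```ruby
Vagrant.configure("2") do |config|
  # the base box you chose when running 'vagrant init'
  config.vm.box = "ubuntu/trusty64"

  # port forwarding is commented out by default
  # config.vm.network "forwarded_port", guest: 80, host: 8080
end
```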

You’ll only need to change one line to get port forwarding working.   You’ll need to change the line that reads:
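```ruby
# config.vm.network "forwarded_port", guest: 80, host: 8080
```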

to the following:
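```ruby
config.vm.network "forwarded_port", guest: 8888, host: 8888
```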

This line will forward port 8888 on the guest to port 8888 on the host. If you aren’t using the default port of 8888 for jupyter, you’ll need to change ‘8888’ to the port you wish to use.

Now that the VagrantFile is ready to go, do a quick ‘vagrant up’ and ‘vagrant ssh’ to start your vagrant VM and log into it. Next, set up any virtual environments that you want / need (I use virtualenv to set up a virtual environment for every project).  You can skip this step if you wish, but it is recommended.
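That sequence looks roughly like this (the virtual environment name is arbitrary):

```bash
vagrant up                       # boot the VM (run on the host)
vagrant ssh                      # log into the VM (run on the host)
virtualenv jupyter-env           # optional: create a project virtual environment
source jupyter-env/bin/activate  # activate it
```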

If you set up a virtual environment, go ahead and source into it so that you are using a clean environment, and then run the command below to install jupyter. If you didn’t set up a virtual environment, you can just run the same command as-is.
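```bash
pip install jupyter
```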

You are all set.  Jupyter should be installed and ready to go. To run it so it is accessible from your browser, just run the following command:
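```bash
jupyter notebook --ip=0.0.0.0
```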

This command tells jupyter to listen on any IP address.

In your browser, you should be able to visit your newfangled jupyter (via vagrant) instance by visiting the following url:
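```
http://localhost:8888
```

(If you forwarded a port other than the default 8888 above, use that port instead.)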

Now you’re ready to go with jupyter with vagrant.


Note: If you want / need to learn Jupyter, I highly recommend Learning IPython for Interactive Computing and Data Visualization (amazon affiliate link). I recommend it to all my clients who are just getting started with jupyter and ipython.