Month: September 2017

Text Analytics with Python – A book review

This is a book review of Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data by Dipanjan Sarkar.

One of my go-to books for natural language processing with Python has been Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit by Steven Bird, Ewan Klein, and Edward Loper.  This has been the book for me and was one of my dissertation references.  I used it so much that I had to buy a second copy because I wore the first one out.  I’ve read many other NLP books but haven’t found any that could match it – until now.

Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data by Dipanjan Sarkar is a fantastic book and has now taken a permanent place on my bookshelf.

Unlike many books that I run across, this book spends plenty of time on the theory behind things rather than doing some hand-waving and then showing some code. In fact, there isn’t any code (that I saw) until page 41. That’s impressive these days. Here’s a quick overview of the book’s layout:

  • Chapter 1 provides the baseline for natural language processing. This is a very good overview for anyone who’s never worked much with NLP.
  • Chapter 2 is a Python ‘refresher’. If you don’t know Python at all but know some other language, this should get you started enough to use the rest of the book.
  • Chapters 3–7 are where the real fun begins. These chapters cover text classification, summarization, similarity/clustering, and semantic/sentiment analysis.

If you have some familiarity with Python and NLP, you can jump to Chapter 3 and dive into the details.

What I really like about this book is that it places theory first.  I’m a big fan of ‘learning by doing’, but I think before you can ‘do’ you need to know ‘why’ you are doing what you are doing.  The code in the book is also really well done and uses the NLTK, scikit-learn, and gensim libraries for most of the work. Additionally, there are multiple ‘build your own’ sections where the author provides a very good overview (and walk-through) of what it takes to build functionality for your own NLP work.

This book is highly recommended.


Links in this post:

Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit by Steven Bird, Ewan Klein, and Edward Loper.

Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data by Dipanjan Sarkar

 

Eric D. Brown, D.Sc. has a doctorate in Information Systems with a specialization in Data Sciences, Decision Support and Knowledge Management. He writes about using Python for data analytics at pythondata.com and about the crossroads of technology and strategy at ericbrown.com.

Python and AWS Lambda – A match made in heaven

In recent months, I’ve begun moving some of my analytics functions to the cloud. Specifically, I’ve been moving many of my Python scripts and APIs to AWS’ Lambda platform using the Zappa framework.  In this post, I’ll share some basic information about Python and AWS Lambda…hopefully it will get everyone out there thinking about new ways to use platforms like Lambda.

Before we dive into an example of what I’m moving to Lambda, let’s spend some time talking about Lambda. When I first heard about it, I was confused…but once I ‘got’ it, I saw the value. Here’s the description of Lambda from AWS’ website:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Once I realized how easy it is to move code to Lambda to use whenever/wherever I needed it, I jumped at the opportunity.  But…it took a while to get a good workflow in place to simplify deploying to Lambda. I stumbled across Zappa and couldn’t be happier…it makes deploying to Lambda simple (very simple).

OK.  So. Why would you want to move your code to Lambda?

Lots of reasons. Here are a few:

  • Rather than host your own server to handle some API endpoints, move them to Lambda.
  • Rather than build out a complex development environment to support your complex system, move some of that complexity to Lambda and make a call to an API endpoint.
  • If you travel and want to downsize your travel laptop but still need access to your Python data analytics stack, move the stack to Lambda.
  • If you have a script that you run very irregularly and don’t want to pay $5 a month at DigitalOcean, move it to Lambda.

There are many other more sophisticated reasons of course, but these’ll do for now.

Let’s get started looking at Python and AWS Lambda.  You’ll need an AWS account for this.

First, I’m going to talk a bit about building an API endpoint using Flask. You don’t have to use Flask, but it’s an easy framework to use, and you can quickly build an API endpoint with very little fuss.  In this example, I’m going to use Lambda to host an API endpoint that uses the Newspaper library to scrape a website, pull down the text, and return that text to my local script.

Writing your first Flask + Lambda API

To get started, install Flask, Flask-RESTful, and Zappa.  You’ll want to do this in a fresh environment using virtualenv (see my previous posts about virtualenv and vagrant) because we’ll be moving this up to Lambda using Zappa.

Our Flask-driven API is going to be extremely simple and exist in less than 20 lines of code:
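A minimal sketch of what that api.py might look like, assuming a single Flask-RESTful resource serving /hello (matching the endpoint we’ll call at the end of this post):

```python
# api.py - a minimal Flask + Flask-RESTful app (a sketch; your version
# may differ, but the app object name matters for Zappa later)
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class Hello(Resource):
    def get(self):
        # Placeholder response; swap in your Newspaper scraping logic here
        return "hello world"

api.add_resource(Hello, '/hello')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)
```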

Note: The ‘host=0.0.0.0’ and ‘port=5001’ arguments are extraneous for Lambda and are just how I use Flask with vagrant. If you keep them in and run the app locally, you’d need to visit http://0.0.0.0:5001 to view your app.

The last thing you need to do is build your requirements.txt file for Zappa to use when building your application files to send to Lambda. For a quick/dirty requirements file, I used the following:
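The exact contents will depend on your environment, but at a minimum it should list the libraries used here, something like:

```
flask
flask-restful
zappa
```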

Now…let’s get this up to Lambda.  With Zappa, it’s as easy as a couple of command line instructions.

First, run the init command from the command line in your virtualenv:
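```
zappa init
```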

You should see something similar to this:

[Screenshot: output of zappa init]

You’ll be asked a few questions; you can hit ‘enter’ to take the defaults or enter your own. For this example, I used ‘dev’ for the environment name (you can set up multiple environments for dev, staging, production, etc.) and made an S3 bucket for use with this application.

Zappa should realize you are working with a Flask app and automatically set things up for you. It will ask you for the name of your Flask app’s main function (in this case it is api.app). Lastly, Zappa will ask if you want to deploy to all AWS regions…I chose not to for this example. Once complete, you’ll have a zappa_settings.json file in your directory that will look something like the following:
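Something along these lines (the bucket name below is a placeholder generated during init; yours will differ):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-code"
    }
}
```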

I’ve found that I need to add more information to this JSON file before I can successfully deploy. For some reason, Zappa doesn’t add the “region” to the settings file. I also like to add the “runtime” as well. Edit your JSON file to read something like the following (feel free to use whatever region you want):
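For example (the region and runtime values here are just examples; adjust them to your setup):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-code",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    }
}
```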

Now…you are ready to deploy. You can do that with the following command:
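```
zappa deploy dev
```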

Zappa will set up all the necessary configurations and systems on AWS AND zip up your libraries and code and push it all to Lambda.   I’ve not found another framework as easy to use as Zappa when it comes to deploying…if you know of one, feel free to leave a comment.

After a minute or two, you should see a “Deployment Complete: …” message that includes the endpoint for your new API. In this case, Zappa built the following endpoint for me:

If you make some changes to your code and need to update Lambda, Zappa makes it easy to do that with the following command:
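```
zappa update dev
```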

Additionally, if you want to add a ‘production’ Lambda environment, all you need to do is add that new environment to your settings JSON file and deploy it. For this example, our settings file would change to:
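Roughly the following (again, the bucket, region, and runtime values are placeholders):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-code",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    },
    "production": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-code",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    }
}
```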

Next, run the deploy command for the new environment and your production environment is ready to go at a new endpoint:
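```
zappa deploy production
```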

Interfacing with the API

Our code is pushed to Lambda and ready to start accepting requests.  In this example’s case, all we are doing is returning “hello world”, but you can see the power in this for other functionality.  To check out the results, just open a browser, enter your Zappa deployment URL, and append /hello to the end of it, like this:
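(The URL below is made up for illustration; substitute your own deployment URL.)

```
https://abc123xyz.execute-api.us-east-1.amazonaws.com/dev/hello
```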

You should see the standard “Hello World” response in your browser window.

You can find the code for the Lambda api.py function here.

Note: At some point, I’ll pull this endpoint down…but will leave it up for a bit for users to play around with.


 

If you want to learn more about Lambda, there are two fairly good books on the topic – check them out on Amazon.
 

Eric D. Brown, D.Sc. has a doctorate in Information Systems with a specialization in Data Sciences, Decision Support and Knowledge Management. He writes about using Python for data analytics at pythondata.com and about the crossroads of technology and strategy at ericbrown.com.

Stock market forecasting with prophet

In a previous post, I used stock market data to show how prophet detects changepoints in a signal (http://pythondata.wpengine.com/forecasting-time-series-data-prophet-trend-changepoints/). After publishing that article, I’ve received a few questions asking how well (or poorly) prophet can forecast the stock market, so I wanted to provide a quick write-up looking at stock market forecasting with prophet.

This article highlights using prophet for forecasting the markets.  You can find a Jupyter notebook with the full code used in this post here.

For this article, we’ll be using S&P 500 data from FRED. You can download this data in CSV format yourself or just grab a copy from my github ‘examples’ directory here.  Let’s load our data and plot it.
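A minimal load-and-plot sketch (the filename is assumed; the DATE/SP500 column names and the ‘.’ missing-value marker follow FRED’s CSV format):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the S&P 500 data downloaded from FRED; FRED marks missing
# values (market holidays) with '.'
market_df = pd.read_csv('SP500.csv', index_col='DATE',
                        parse_dates=True, na_values='.')
market_df = market_df.dropna()

market_df.plot(figsize=(12, 6))
plt.show()
```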

S&P 500 Plot


Now, let’s run this data through prophet. Take a look at http://pythondata.wpengine.com/forecasting-time-series-data-prophet-jupyter-notebook/ for more information on the basics of Prophet.
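A sketch of the basic workflow, continuing from the dataframe above (this assumes the fbprophet package; the two-year horizon is just an example):

```python
from fbprophet import Prophet

# Prophet expects a dataframe with columns 'ds' (date) and 'y' (value)
df = market_df.reset_index().rename(columns={'DATE': 'ds', 'SP500': 'y'})

model = Prophet()
model.fit(df)

# Extend the dataframe ~2 years into the future and forecast
future = model.make_future_dataframe(periods=365 * 2)
forecast = model.predict(future)
```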

And, let’s take a look at our forecast.
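```python
# Prophet's built-in plot: actuals as black dots, forecast (yhat) as
# a blue line with shaded uncertainty
model.plot(forecast)
plt.show()
```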

S&P 500 Forecast Plot

With the data that we have, it is hard to see how good/bad the forecast (blue line) is compared to the actual data (black dots). Let’s take a look at the last 800 data points (~2 years) of forecast vs actual, without the future portion of the forecast (because we are just interested in getting a visual of the error between actual and forecast).
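One way to do that (joining on the actual dates so the future rows drop out):

```python
# Join the forecast to the actuals; dropna() discards the future
# rows where we have no actual 'y' value
combined_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds')['y']).dropna()

# Last ~800 points: yhat plots in blue, y in orange
combined_df[-800:].plot(figsize=(12, 6))
plt.show()
```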

S&P 500 Forecast Plot – last two years of actuals (orange) vs forecast (blue, listed as yhat)

You can see from the above chart that our forecast follows the trend quite well but doesn’t seem that great at catching the ‘volatility’ of the market. Don’t fret…this may be a very good thing for us if we are interested in ‘riding the trend’ rather than trying to catch peaks and dips perfectly.

Let’s take a look at a few measures of accuracy.  First, we’ll use a basic pandas dataframe describe function to see how things look, then we’ll look at R-squared, Mean Squared Error (MSE) and Mean Absolute Error (MAE).
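Continuing with the combined dataframe built above:

```python
# Summary statistics for the forecast vs the actuals
combined_df[['yhat', 'y']].describe()
```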

Those really aren’t bad numbers, but they don’t tell the whole story. Let’s take a look at a few more measures of accuracy.

Now, let’s look at R-squared using sklearn’s r2_score function:
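```python
from sklearn.metrics import r2_score

r2_score(combined_df['y'], combined_df['yhat'])
```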

We get a value of 0.91, which isn’t bad at all. I’ll take a 0.9 value in any first-go-round modeling approach.

Now, let’s look at mean squared error using sklearn’s mean_squared_error function:
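```python
from sklearn.metrics import mean_squared_error

mean_squared_error(combined_df['y'], combined_df['yhat'])
```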

We get a value of 2260.27.

And there we have it…the real pointer to this modeling technique being a bit wonky.

An MSE of 2260.27 for a model that is trying to predict the S&P 500 with values between 1900 and 2500 isn’t that good (remember…for MSE, closer to zero is better) if you are trying to predict exact changes and movements up/down.

Now, let’s look at the mean absolute error (MAE) using sklearn’s mean_absolute_error function. The MAE measures the absolute error between two continuous variables and, since it stays in the original units, can give us a much more interpretable look at error rates than MSE.
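```python
from sklearn.metrics import mean_absolute_error

mean_absolute_error(combined_df['y'], combined_df['yhat'])
```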

For the MAE, we get 36.18.

The MAE continues to tell us that prophet’s forecast isn’t ideal for use in trading.

Another way to look at the usefulness of this forecast is to plot the upper and lower confidence bands of the forecast against the actuals. You can do that by plotting yhat_upper and yhat_lower.
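A sketch of that plot, pulling the bands out of the forecast dataframe (colors chosen to match the chart below):

```python
# Rebuild the combined dataframe, this time keeping the confidence bands
bands_df = (forecast.set_index('ds')[['yhat', 'yhat_lower', 'yhat_upper']]
            .join(df.set_index('ds')['y'])
            .dropna())

fig, ax = plt.subplots(figsize=(12, 6))
bands_df['yhat'].plot(ax=ax, color='blue', label='forecast (yhat)')
bands_df['y'].plot(ax=ax, color='orange', label='actual')
ax.fill_between(bands_df.index, bands_df['yhat_lower'],
                bands_df['yhat_upper'], color='gray', alpha=0.3)
ax.legend()
plt.show()
```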

S&P 500 Forecast with confidence bands

In the above chart, we can see the forecast (in blue) vs the actuals (in orange) with the upper and lower confidence bands in gray.

You can’t really tell anything quantifiable from this chart, but you can make a judgement on the value of the forecast. If you are trying to trade short-term (1 day to a few weeks), this forecast is almost useless, but if you are investing with a timeframe of months to years, this forecast might provide some value for better understanding the trend of the market and the forecasted trend.

Let’s go back and look at the actual forecast to see if it might tell us anything different than the forecast vs the actual data.

S&P 500 Forecast with confidence bands

This chart is a bit easier to understand than the default prophet chart (in my opinion at least). We can see throughout the history of actuals vs forecast that prophet does an OK job forecasting but has trouble with the areas where the market becomes very volatile.

Looking specifically at the future forecast, prophet is telling us that the market is going to continue rising and should be around 2750 at the end of the forecast period, with confidence bands stretching from 2000-ish to 4000-ish.  If you show this forecast to any serious trader/investor, they’d quickly shrug it off as a terrible forecast. Anything that has a 2000-point confidence interval is worthless in the short- and long-term investing world.

That said, is there some value in prophet’s forecasting for the markets? Maybe. Perhaps a forecast looking only a few days/weeks into the future would be much better than one that looks a year into the future.  Maybe we can use the forecast on weekly or monthly data with better accuracy. Or…maybe we can use the forecast combined with other forecasts to make a better forecast. I may dig into that a bit more at some point in the future. Stay tuned.

Eric D. Brown, D.Sc. has a doctorate in Information Systems with a specialization in Data Sciences, Decision Support and Knowledge Management. He writes about using Python for data analytics at pythondata.com and about the crossroads of technology and strategy at ericbrown.com.