Python and AWS Lambda – A match made in heaven

In recent months, I’ve begun moving some of my analytics functions to the cloud. Specifically, I’ve been moving many of my python scripts and APIs to AWS’ Lambda platform using the Zappa framework. In this post, I’ll share some basic information about Python and AWS Lambda…hopefully it will get everyone out there thinking about new ways to use platforms like Lambda.

Before we dive into an example of what I’m moving to Lambda, let’s spend some time talking about Lambda. When I first heard about it, I was confused…but once I ‘got’ it, I saw the value. Here’s the description of Lambda from AWS’ website:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Once I realized how easy it is to move code to Lambda to use whenever/wherever I needed it, I jumped at the opportunity. But…it took a while to get a good workflow in place to simplify deploying to Lambda. I stumbled across Zappa and couldn’t be happier…it makes deploying to Lambda simple (very simple).

OK.  So. Why would you want to move your code to Lambda?

Lots of reasons. Here are a few:

  • Rather than host your own server to handle some API endpoints — move to Lambda.
  • Rather than build out a complex development environment to support your complex system, move some of that complexity to Lambda and make a call to an API endpoint.
  • If you travel and want to downsize your travel laptop but still need to access your python data analytics stack, move the stack to Lambda.
  • If you have a script that you run only occasionally and don’t want to pay $5 a month at Digital Ocean — move it to Lambda.

There are many other more sophisticated reasons of course, but these’ll do for now.

Let’s get started looking at python and AWS Lambda.  You’ll need an AWS account for this.

First – I’m going to talk a bit about building an API endpoint using Flask. You don’t have to use Flask, but it’s an easy framework to use, and you can quickly build an API endpoint with it with very little fuss. With this example, I’m going to use Lambda to host an API endpoint that uses the Newspaper library to scrape a website, pull down the text and return that text to my local script.

Writing your first Flask + Lambda API

To get started, install Flask, Flask-RESTful and Zappa. You’ll want to do this in a fresh environment using virtualenv (see my previous posts about virtualenv and vagrant) because we’ll be moving this up to Lambda using Zappa.
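
If you’re installing from PyPI, something like this should do it:

```bash
pip install flask flask-restful zappa
```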

Our flask driven API is going to be extremely simple and exist in less than 20 lines of code:
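
Here’s a minimal sketch of what that API might look like. I’m assuming the file is named api.py (which matches the api.app function name that Zappa picks up later); the exact code may differ slightly from what was originally posted:

```python
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)


class Hello(Resource):
    def get(self):
        # Return a simple response so we can verify the endpoint works
        return "Hello World"


api.add_resource(Hello, '/hello')

if __name__ == '__main__':
    # host/port are only used when running locally (see the note below)
    app.run(host='0.0.0.0', port=5001)
```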

Note: The ‘host = 0.0.0.0’ and ‘port=5001’ are extraneous and are how I use Flask with vagrant. If you keep this in and run it locally, you’d need to visit http://0.0.0.0:5001 to view your app.

The last thing you need to do is build your requirements.txt file for Zappa to use when building your application files to send to Lambda. For a quick/dirty requirements file, I used the following:
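
A minimal requirements.txt along these lines should work (unpinned here for simplicity; pin versions if you want reproducible builds):

```
flask
flask-restful
zappa
```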

Now…let’s get this up to Lambda. With Zappa, it’s as easy as a couple of command-line instructions.

First, run the init command from the command line in your virtualenv:
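
That is:

```bash
zappa init
```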

You should see something similar to this:

[Screenshot: zappa init prompts]

You’ll be asked a few questions, you can hit ‘enter’ to take the defaults or enter your own. For this example, I used ‘dev’ for the environment name (you can set up multiple environments for dev, staging, production, etc) and made an S3 bucket for use with this application.

Zappa should realize you are working with a Flask app and automatically set things up for you. It will ask you what the name of your Flask app’s main function is (in this case it is api.app). Lastly, Zappa will ask if you want to deploy to all AWS regions…I chose not to for this example. Once complete, you’ll have a zappa_settings.json file in your directory that will look something like the following:
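
Roughly like this (your bucket name will be whatever you chose; the one below is a placeholder):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-bucket"
    }
}
```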

I’ve found that I need to add more information to this json file before I can successfully deploy. For some reason, Zappa doesn’t add the “region” to the settings file. I also like to add the “runtime” as well. Edit your json file to read (feel free to use whatever region you want):
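
Something along these lines (the region, runtime, and bucket name below are placeholders; use whatever matches your setup):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-bucket",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    }
}
```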

Now…you are ready to deploy. You can do that with the following command:
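
Using the ‘dev’ environment name from above:

```bash
zappa deploy dev
```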

Zappa will set up all the necessary configurations and systems on AWS AND zip up your libraries and code and push it to Lambda.   I’ve not found another framework as easy to use as Zappa when it comes to deploying…if you know of one feel free to leave a comment.

After a minute or two, you should see a “Deployment Complete: …” message that includes the endpoint for your new API. In this case, Zappa built the following endpoint for me:
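
The actual URL is unique to each deployment, but since Zappa fronts your function with API Gateway it will have this general shape (the ID below is a placeholder, not my real endpoint):

```
https://abc123xyz.execute-api.us-east-1.amazonaws.com/dev
```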

If you make some changes to your code and need to update Lambda, Zappa makes it easy to do that with the following command:
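
Again for the ‘dev’ environment:

```bash
zappa update dev
```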

Additionally, if you want to add a ‘production’ lambda environment, all you need to do is add that new environment to your settings json file and deploy it. For this example, our settings file would change to:
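
For example (again with placeholder bucket/region/runtime values):

```json
{
    "dev": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-bucket",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    },
    "production": {
        "app_function": "api.app",
        "s3_bucket": "zappa-example-bucket",
        "aws_region": "us-east-1",
        "runtime": "python3.6"
    }
}
```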

Next, do a zappa deploy production and your production environment is ready to go at a new endpoint.

Interfacing with the API

Our code is pushed to Lambda and ready to start accepting requests.  In this example’s case, all we are doing is returning “hello world” but you can see the power in this for other functionality.  To check out the results, just open a browser and enter your Zappa Deployment URL and append /hello to the end of it like this:
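
For example (placeholder ID again):

```
https://abc123xyz.execute-api.us-east-1.amazonaws.com/dev/hello
```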

You should see the standard “Hello World” response in your browser window.

You can find the code for the Lambda api.py function here.

Note: At some point, I’ll pull this endpoint down…but will leave it up for a bit for users to play around with.

If you want to learn more about Lambda, there are two fairly good books on the topic – check them out on Amazon.

Stock market forecasting with prophet

In a previous post, I used stock market data to show how prophet detects changepoints in a signal (http://pythondata.wpengine.com/forecasting-time-series-data-prophet-trend-changepoints/). After publishing that article, I’ve received a few questions asking how well (or poorly) prophet can forecast the stock market so I wanted to provide a quick write-up to look at stock market forecasting with prophet.

This article highlights using prophet for forecasting the markets.  You can find a jupyter notebook with the full code used in this post here.

For this article, we’ll be using S&P 500 data from FRED. You can download this data into CSV format yourself or just grab a copy from my github ‘examples’ directory here. Let’s load our data and plot it.
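
A sketch of the loading/plotting step, assuming the CSV is the standard FRED export with DATE and SP500 columns:

```python
import pandas as pd
import matplotlib.pyplot as plt

# FRED marks market holidays with '.', so treat those as missing and drop them
df = pd.read_csv('SP500.csv', na_values='.').dropna()
df['DATE'] = pd.to_datetime(df['DATE'])

df.plot(x='DATE', y='SP500', figsize=(12, 6))
plt.show()
```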

S&P 500 Plot


Now, let’s run this data through prophet. Take a look at http://pythondata.wpengine.com/forecasting-time-series-data-prophet-jupyter-notebook/ for more information on the basics of Prophet.
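
The basic steps are the same as in the earlier posts; a sketch, using the fbprophet package name of the time:

```python
from fbprophet import Prophet

# Prophet expects two columns: ds (the date) and y (the value)
df = df.rename(columns={'DATE': 'ds', 'SP500': 'y'})

model = Prophet()
model.fit(df)

# Extend the forecast one year past the end of the data
future = model.make_future_dataframe(periods=365)
forecast = model.predict(future)
```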

And, let’s take a look at our forecast.
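
Using Prophet’s built-in plotting:

```python
fig = model.plot(forecast)
plt.show()
```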

S&P 500 Forecast Plot

With the data that we have, it is hard to see how good/bad the forecast (blue line) is compared to the actual data (black dots). Let’s take a look at the last 800 data points (~2 years) of forecast vs actual without looking at the future forecast (because we are just interested in getting a visual of the error between actual vs forecast).
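
One way to do that (a sketch; the yhat columns come from Prophet’s forecast dataframe):

```python
# Join the forecast to the actuals on the date column
combined = forecast.set_index('ds').join(df.set_index('ds'))

# Drop the future rows (which have no actuals) and keep roughly the last two years
recent = combined.dropna(subset=['y']).iloc[-800:]

recent[['yhat', 'y']].plot(figsize=(12, 6))
plt.show()
```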

S&P 500 Forecast Plot – Last two years of Actuals (orange) vs Forecast (blue – listed as yhat)

You can see from the above chart, our forecast follows the trend quite well but doesn’t seem to be that great at catching the ‘volatility’ of the market. Don’t fret though…this may be a very good thing for us if we are interested in ‘riding the trend’ rather than trying to catch peaks and dips perfectly.

Let’s take a look at a few measures of accuracy. First, we’ll look at a basic pandas dataframe describe function to see how things look, then we’ll look at R-squared, Mean Squared Error (MSE) and Mean Absolute Error (MAE).
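
Using the joined dataframe from above:

```python
recent[['y', 'yhat']].describe()
```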

Those really aren’t bad numbers but they don’t really tell all of the story. Let’s take a look at a few more measures of accuracy.

Now, let’s look at R-squared using sklearn’s r2_score function:
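
A sketch of that call:

```python
from sklearn.metrics import r2_score

r2_score(recent['y'], recent['yhat'])
```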

We get a value of 0.91, which isn’t bad at all. I’ll take a 0.9 value in any first-go-round modeling approach.

Now, let’s look at mean squared error using sklearn’s mean_squared_error function:
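
Same pattern:

```python
from sklearn.metrics import mean_squared_error

mean_squared_error(recent['y'], recent['yhat'])
```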

We get a value of 2260.27.

And there we have it…the real pointer to this modeling technique being a bit wonky.

An MSE of 2260.27 for a model that is trying to predict the S&P 500 with values between 1900 and 2500 isn’t that good (remember…for MSE, closer to zero is better) if you are trying to predict exact changes and movements up/down.

Now, let’s look at the mean absolute error (MAE) using sklearn’s mean_absolute_error function. The MAE is the measurement of absolute error between two continuous variables and can give us a much better look at error rates than the standard mean.
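
And the call:

```python
from sklearn.metrics import mean_absolute_error

mean_absolute_error(recent['y'], recent['yhat'])
```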

For the MAE, we get 36.18.

The MAE continues to tell us that prophet’s forecast isn’t ideal for use in trading.

Another way to look at the usefulness of this forecast is to plot the upper and lower confidence bands of the forecast against the actuals. You can do that by plotting yhat_upper and yhat_lower.
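
A sketch, again using the joined dataframe (yhat_lower and yhat_upper are standard columns in Prophet’s forecast output):

```python
ax = recent[['yhat', 'y']].plot(figsize=(12, 6))
ax.fill_between(recent.index, recent['yhat_lower'], recent['yhat_upper'],
                color='gray', alpha=0.3)
plt.show()
```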

S&P 500 Forecast with confidence bands

In the above chart, we can see the forecast (in blue) vs the actuals (in orange) with the upper and lower confidence bands in gray.

You can’t really tell anything quantifiable from this chart, but you can make a judgement on the value of the forecast. If you are trying to trade short-term (1 day to a few weeks), this forecast is almost useless, but if you are investing with a timeframe of months to years, this forecast might provide some value to better understand the trend of the market and the forecasted trend.

Let’s go back and look at the actual forecast to see if it might tell us anything different than the forecast vs the actual data.

S&P 500 Forecast with confidence Bands

This chart is a bit easier to understand than the default prophet chart (in my opinion at least). We can see throughout the history of the actuals vs forecast that prophet does an OK job forecasting, but has trouble in the areas where the market becomes very volatile.

Looking specifically at the future forecast, prophet is telling us that the market is going to continue rising and should be around 2750 at the end of the forecast period, with confidence bands stretching from 2000-ish to 4000-ish.  If you show this forecast to any serious trader / investor, they’d quickly shrug it off as a terrible forecast. Anything that has a 2000 point confidence interval is worthless in the short- and long-term investing world.

That said, is there some value in prophet’s forecasting for the markets? Maybe. Perhaps a forecast looking only a few days/weeks into the future would be much better than one that looks a year into the future. Maybe we can use the forecast on weekly or monthly data with better accuracy. Or…maybe we can use the forecast combined with other forecasts to make a better forecast. I may dig into that a bit more at some point in the future. Stay tuned.

Forecasting Time Series data with Prophet – Trend Changepoints

In previous posts, I’ve been looking at using Prophet to forecast time series data at a monthly level using sales revenue data. In this post, I want to look at a very interesting aspect of Prophet (and time series analysis) that most people overlook – that of trend changepoints. This is the fourth in a series of posts about using Prophet to forecast time series data; the other parts of the series can be found on this blog.

Trend changepoint detection isn’t an easy thing to do. You could take the naive approach and just find local maxima and minima, but those may or may not be changes in the overall trend of your signal. There are many different methods for changepoint detection (a good paper looking at four methods can be found here – Trend analysis and change point techniques: a survey), but thankfully Prophet does trend changepoint detection behind the scenes for us (and it does a pretty good job of it).

For most people / tasks, the automatic detection performed by Prophet is good enough, but it never hurts to know how to tweak Prophet in case it misses some changepoints.

I’ve uploaded a jupyter notebook here and the sample data that I’m using here. Rather than use the monthly sales data I’ve been using, I wanted to use something that has a bit more of a noticeable trend, so I grabbed the S&P 500 index from FRED.

The jupyter notebook has a bit more detail about loading data and running prophet for this data, so I’ll just throw the commands here and you can jump over there to see more detail. These first steps are no different than the standard data loading / prep for prophet discussed in previous posts.
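
In short, something like this (assuming the same FRED-style CSV as before):

```python
import pandas as pd
import matplotlib.pyplot as plt
from fbprophet import Prophet

# Load the FRED export; '.' marks market holidays, so treat it as missing
df = pd.read_csv('SP500.csv', na_values='.').dropna()
df['DATE'] = pd.to_datetime(df['DATE'])
df = df.rename(columns={'DATE': 'ds', 'SP500': 'y'})

df.plot(x='ds', y='y', figsize=(12, 6))
plt.show()
```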

SP500 Daily Data Plotted

Now, let’s run prophet. Again, this is no different than the steps we’ve taken in previous posts for prophet.
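
The standard fit/predict steps:

```python
model = Prophet()
model.fit(df)

future = model.make_future_dataframe(periods=365)
forecast = model.predict(future)
```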

Prophet has created our model and fit the data. It has also (behind the scenes) created some potential changepoints. We can access these changepoints with .changepoints. By default, Prophet adds 25 changepoints into the initial 80% of the data set. The number of changepoints can be set by using the n_changepoints parameter when initializing prophet (e.g., model = Prophet(n_changepoints=30)).

You can view the changepoints by typing the following:
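
Just inspect the attribute:

```python
model.changepoints
```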


In addition to viewing the dates of the changepoints, we can also view a chart with changepoints added.
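
One way to do that is to draw a vertical line at each changepoint on top of Prophet’s own plot (a sketch):

```python
fig = model.plot(forecast)
for changepoint in model.changepoints:
    # Mark each potential changepoint on the forecast chart
    plt.axvline(changepoint, ls='--', lw=1, color='red')
```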

S&P 500 Prophet Model with Changepoints Added (in orange)

Taking a look at the possible changepoints (drawn in orange/red) in the above chart, we can see they fit pretty well with some of the highs and lows.

Prophet will also let us take a look at the magnitudes of these possible changepoints. You can look at this visualization with the following code:
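
A sketch, pulling the fitted rate-change parameters out of the model (this mirrors the approach in Prophet’s own documentation; the params attribute holds the fitted model parameters):

```python
# Each column of 'delta' is the rate change at one potential changepoint;
# average over the parameter samples to get one value per changepoint
deltas = model.params['delta'].mean(0)

plt.figure(figsize=(12, 6))
plt.bar(range(len(deltas)), deltas)
plt.xlabel('Potential changepoint')
plt.ylabel('Rate change')
plt.show()
```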

SP500 Prophet Model Changepoint Magnitudes

We can see from the above chart that there are quite a few of these changepoints (found between 10 and 20 on the chart) that are very minimal in magnitude and are most likely ignored by prophet during forecasting.

Now, if we know where trends changed in the past, we can hand these known changepoints to Prophet when we build the model. For this data, I’m going to use the FRED website to find some of the low points and high points to use as trend changepoints. Note: In actuality, just because there is a low or high doesn’t mean it’s a real changepoint or trend change, but let’s assume it does.
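
A sketch using Prophet’s changepoints parameter (the dates below are illustrative picks eyeballed from the FRED chart, not a carefully chosen set):

```python
# Hypothetical example dates; 2009-03-09 is the financial-crisis low,
# the others are rough highs/lows read off the FRED chart
m = Prophet(changepoints=['2009-03-09', '2011-10-03', '2015-05-21'])
m.fit(df)

future = m.make_future_dataframe(periods=365)
forecast = m.predict(future)
fig = m.plot(forecast)
```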

S&P500 Prophet Model with Manually Set Changepoints

We can see that by manually setting our changepoints (and only using a few points), we drastically changed the model compared to the model that prophet built for us using the automatic detection of changepoints. Unless you are very sure about your trend changepoints in the past, it’s probably good to keep the defaults that prophet provides.

Conclusion

Prophet’s use (and accessibility) of trend changepoints is wonderful, especially for those signals / datasets that have significant changes in trend during the lifetime of the signal. That said, unless you are certain about your changepoints, it might be best to let prophet do its thing automatically.

Note: Please don’t think that because prophet does an OK job of forecasting the SP500 chart historically in this example, you should use it to ‘predict’ the markets. The markets are awfully tough to forecast…I used this market data because I knew there were some very clear changepoints in the data.