When it comes to forecasting data (time series or other types of series), people reach for approaches like basic regression, ARIMA, ARMA, GARCH, or even Prophet, but don’t discount the use of Random Forests for forecasting data. Random Forests are generally considered a classification technique, but regression is definitely something they can handle.
For this post, I am going to use a dataset found here called Sales Prices of Houses in the City of Windsor (CSV here, description here). For the purposes of this post, I’ll only use the price and lotsize columns. Note: In a future post, I’m planning to revisit this data and perform multivariate regression with Random Forests.
To get started, let’s import all the necessary libraries. As always, you can grab a Jupyter notebook to run through this analysis yourself here.
import pandas as pd
import matplotlib.pyplot as plt
# let's set the figure size and color scheme for plots
# (personal preference and not needed)
plt.rcParams['figure.figsize'] = (20, 10)
plt.style.use('ggplot')
Now, let’s load the data:
df = pd.read_csv('../examples/Housing.csv')
df = df[['price', 'lotsize']]
Again, we are only using two columns from the data set – price and lotsize. Let’s plot this data to take a look at it visually and see if it makes sense to use lotsize as a predictor of price.
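That visual check can be sketched as a quick scatter plot – the tiny inline DataFrame below is a made-up stand-in for the real df loaded from the CSV above (the values are mine, not from the dataset):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; not needed in a notebook
import matplotlib.pyplot as plt
import pandas as pd

# a few made-up rows standing in for the Housing data loaded above
df = pd.DataFrame({'price': [42000, 38500, 49500, 60500],
                   'lotsize': [5850, 4000, 3060, 6650]})

# scatter plot of lotsize vs. price; pandas labels the axes for us
ax = df.plot(x='lotsize', y='price', kind='scatter')
ax.set_title('price vs. lotsize')
```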
Looking at the data, it looks like a decent guess to think lotsize might forecast price.
Now, let’s set up our dataset to get our training and testing data ready.
X = df['lotsize']
y = df['price']
X_train = X[X.index < 400]
y_train = y[y.index < 400]
X_test = X[X.index >= 400]
y_test = y[y.index >= 400]
In the above, we set X and y for the random forest regressor and then set our training and test data. For training data, we are going to take the first 400 data points to train the random forest and then test it on the last 146 data points.
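Note that this split leans on the default RangeIndex that read_csv assigns; a quick sanity check with a stand-in frame of the same length (546 rows, like the Housing data) confirms the 400/146 sizes:

```python
import pandas as pd

# stand-in frame with the same 546-row RangeIndex as the Housing data
df = pd.DataFrame({'lotsize': range(546), 'price': range(546)})
X = df['lotsize']
y = df['price']

# index-based split: first 400 rows for training, the rest for testing
X_train = X[X.index < 400]
y_train = y[y.index < 400]
X_test = X[X.index >= 400]
y_test = y[y.index >= 400]
```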
Now, let’s run our random forest regression model. First, we need to import the Random Forest Regressor from sklearn:
from sklearn.ensemble import RandomForestRegressor
And now… let’s run our Random Forest Regression and see what we get.
# build our RF model (random_state pinned here for reproducibility)
RF_Model = RandomForestRegressor(n_estimators=100, random_state=0)
# let's get the labels and features in order to run our
# model fitting; scikit-learn wants a 2-D feature array
labels = y_train
features = X_train.values[:, None]
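The `[:, None]` is there because scikit-learn expects a 2-D feature array (samples × features); note that on recent pandas versions you need to go through `.values` (or `.to_numpy()`) first, since indexing a Series directly with `[:, None]` no longer works. A tiny sketch:

```python
import pandas as pd

# a small Series standing in for X_train
s = pd.Series([5850, 4000, 3060])

# add a second axis: shape (3,) -> (3, 1), one row per sample, one column per feature
features = s.values[:, None]
```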
# Fit the RF model with features and labels.
RF_Model.fit(features, labels)
# Now that we've run our model and fit it, let's create
# dataframes to look at the results
X_train_predict = pd.DataFrame(RF_Model.predict(features),
                               index=y_train.index, columns=['predicted_price'])
X_test_predict = pd.DataFrame(RF_Model.predict(X_test.values[:, None]),
                              index=y_test.index, columns=['predicted_price'])
# combine the training and testing predictions to visualize
# and compare.
RF_predict = pd.concat([X_train_predict, X_test_predict])
Let’s visualize the predicted price and the actual price.
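A sketch of that comparison plot – the `results` frame here is a made-up stand-in for the actual prices and the model’s predictions (values are mine, not the post’s output):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; not needed in a notebook
import matplotlib.pyplot as plt
import pandas as pd

# made-up stand-ins for the actual prices and the RF predictions
results = pd.DataFrame({'price': [42000, 38500, 49500, 60500],
                        'predicted_price': [43100, 39900, 48200, 58800]})

# one line per column: actual vs. predicted
ax = results.plot(title='actual vs. predicted price')
```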
That’s really not a bad outcome for a wild guess that lotsize might forecast price. Visually, it looks pretty good (although there are definitely errors).
Let’s look at the base level error. First, a quick plot of the ‘difference’ between the two.
df['predicted_price'] = RF_predict['predicted_price']
df['diff'] = df['predicted_price'] - df['price']
There are some very large errors in there. Let’s look at some values like R-Squared and Mean Squared Error. First, let’s import the appropriate functions from sklearn:
from sklearn.metrics import r2_score, mean_squared_error
Now, let’s look at R-Squared on the training predictions:
RSquared = r2_score(y_train, X_train_predict['predicted_price'])
R-Squared is 0.6976…or basically 0.7. That’s not great but not terribly bad either for a random guess. A value of 0.7 (or 70%) tells you that roughly 70% of the variation of the ‘signal’ is explained by the variable used as a predictor. That’s really not bad in the grand scheme of things.
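Mean Squared Error, mentioned above, comes from the same sklearn.metrics module; here is a minimal sketch with toy numbers standing in for the real actuals and predictions:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# toy values standing in for y_train and the model's predictions
actual = np.array([42000.0, 38500.0, 49500.0])
predicted = np.array([41000.0, 40000.0, 50000.0])

# average of the squared errors
mse = mean_squared_error(actual, predicted)

# RMSE puts the error back into price units
rmse = np.sqrt(mse)
```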
I could go on with other calculations for error, but the point of this post isn’t to show ‘accuracy’ but to show the ‘process’ of using Random Forests for forecasting.
Look for more posts on using random forests for forecasting.
If you want a very good deep-dive into using Random Forest and other statistical methods for prediction, take a look at The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Amazon Affiliate link)