When it comes to forecasting data (time series or other types of series), people look to methods like basic regression, ARIMA, ARMA, GARCH, or even Prophet, but don’t discount the use of Random Forests for forecasting.

Random Forests are generally considered a classification technique, but regression is definitely something Random Forests can handle.
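As a quick illustration (a minimal sketch with made-up data, not the housing dataset used below), sklearn’s `RandomForestRegressor` can fit a simple numeric relationship:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# toy 1-D regression problem: learn y = 2x from noiseless samples
X = np.arange(100, dtype=float).reshape(-1, 1)  # single feature column
y = 2.0 * X.ravel()

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

# predict at a point inside the training range; should be close to 100
pred = model.predict(np.array([[50.0]]))
```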

For this post, I am going to use a dataset found here called Sales Prices of Houses in the City of Windsor (CSV here, description here). For the purposes of this post, I’ll only use the `price` and `lotsize` columns. *Note: In a future post, I’m planning to revisit this data and perform multivariate regression with Random Forests.*

To get started, let’s import all the necessary libraries. As always, you can grab a jupyter notebook to run through this analysis yourself here.

```python
import pandas as pd
import matplotlib.pyplot as plt

# let's set the figure size and color scheme for plots
# (personal preference and not needed)
plt.rcParams['figure.figsize'] = (20, 10)
plt.style.use('ggplot')
```

Now, let’s load the data:

```python
df = pd.read_csv('../examples/Housing.csv')
df = df[['price', 'lotsize']]
```

Again, we are only using two columns from the data set: `price` and `lotsize`. Let’s plot this data to take a look at it visually and see if it makes sense to use `lotsize` as a predictor of `price`.

```python
df.plot(subplots=True)
```

Looking at the data, it seems like a decent guess that `lotsize` might forecast `price`.

Now, let’s set up our dataset to get our training and testing data ready.

```python
X = df['lotsize']
y = df['price']

X_train = X[X.index < 400]
y_train = y[y.index < 400]

X_test = X[X.index >= 400]
y_test = y[y.index >= 400]
```

In the above, we set X and y for the random forest regressor and then set our training and test data. For training data, we are going to take the first 400 data points to train the random forest and then test it on the last 146 data points.
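The index-based split above can be sketched on a toy frame (the numbers here are illustrative, not the housing data):

```python
import pandas as pd

# toy frame with a default RangeIndex, mimicking the split logic above
toy = pd.DataFrame({'lotsize': range(10), 'price': range(10)})

train = toy[toy.index < 7]   # first 7 rows become training data
test = toy[toy.index >= 7]   # remaining 3 rows become test data
```

This only works as a chronological-style split because the data has a default integer index; if the index were shuffled or non-numeric, you’d want `iloc` instead.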

Now, let’s run our random forest regression model. First, we need to import the Random Forest Regressor from sklearn:

```python
from sklearn.ensemble import RandomForestRegressor
```

And now, let’s run our Random Forest Regression and see what we get.

```python
# build our RF model
RF_Model = RandomForestRegressor(n_estimators=100,
                                 max_features=1,
                                 oob_score=True)

# get the labels and features in order to run our model fitting;
# sklearn expects a 2-D feature array, hence the reshape
labels = y_train
features = X_train.values.reshape(-1, 1)

# fit the RF model with features and labels
rgr = RF_Model.fit(features, labels)

# now that we've fit the model, create dataframes to look at the
# results, keeping the original row index so predictions line up
# with actual prices
X_train_predict = pd.DataFrame(
    rgr.predict(X_train.values.reshape(-1, 1)),
    index=X_train.index, columns=['predicted_price'])
X_test_predict = pd.DataFrame(
    rgr.predict(X_test.values.reshape(-1, 1)),
    index=X_test.index, columns=['predicted_price'])

# combine the training and testing predictions to visualize and
# compare, and attach them to the original dataframe
RF_predict = pd.concat([X_train_predict, X_test_predict])
df['predicted_price'] = RF_predict['predicted_price']
```

Let’s visualize the `price` and the `predicted_price`.

```python
df[['price', 'predicted_price']].plot()
```

That’s really not a bad outcome for a wild guess that `lotsize` predicts `price`. Visually, it looks pretty good (although there are definitely errors).

Let’s look at the base level error. First, a quick plot of the ‘difference’ between the two.

```python
df['diff'] = df.predicted_price - df.price
df['diff'].plot(kind='bar')
```

There are some very large errors in there. Let’s look at some values like R-Squared and Mean Squared Error. First, let’s import the appropriate functions from `sklearn`.

```python
from sklearn.metrics import r2_score
```

Now, let’s look at R-Squared:

```python
RSquared = r2_score(y_train, X_train_predict['predicted_price'])
```

R-Squared is 0.6976…or basically 0.7. That’s not great but not terribly bad either for a random guess. A value of 0.7 (or 70%) tells you that roughly 70% of the variation of the ‘signal’ is explained by the variable used as a predictor. That’s really not bad in the grand scheme of things.
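Since Mean Squared Error was mentioned above, here is a minimal sketch of computing it with sklearn (the actual and predicted prices below are made up for illustration, not taken from the housing data):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# hypothetical actual vs. predicted prices, just to show the calls
actual = np.array([42000.0, 38500.0, 49500.0, 60500.0])
predicted = np.array([43000.0, 40000.0, 48000.0, 61000.0])

mse = mean_squared_error(actual, predicted)
rmse = np.sqrt(mse)  # RMSE is back in the same units as price
```

RMSE is often easier to interpret than MSE for something like house prices, since it’s in dollars rather than squared dollars.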

I could go on with other calculations for error, but the point of this post isn’t to show ‘accuracy’ but to show the ‘process’ of using Random Forests for forecasting.

Look for more posts on using random forests for forecasting.

*If you want a very good deep-dive into using Random Forest and other statistical methods for prediction, take a look at The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Amazon Affiliate link)*