
Data Analytics & Python

So you want (or need) to analyze some data. You’ve got some data in an Excel spreadsheet or database somewhere and you’ve been asked to take that data and do something useful with it. Maybe it’s time for data analytics & Python?

Maybe you’ve been asked to build some models for predictive analytics. Maybe you’ve been asked to better understand your customer base based on their previous purchases and activity.  Perhaps you’ve been asked to build a new business model to generate new revenue.

Where do you start?

You could go out and spend a great deal of money on systems to help you in your analytics efforts, or you could start with tools that are already available to you. You could open up Excel, which is very much overlooked for data analytics these days. Or…you could install open source tools (for free!) and begin hacking away.

When I was in your shoes in my first days playing around with data, I started with Excel. I quickly moved on to other tools because the things I needed to do seemed difficult to accomplish in Excel. I then installed R and began to learn ‘real’ data analytics (or so I thought).

I liked (and still do like) R, but it never felt like ‘home’ to me.  After a few months poking around in R, I ran across python and fell in love. Python felt like home to me.

With python, I could quickly cobble together a script to do just about anything I needed to do. In the 5+ years I’ve been working with python now, I’ve not found anything that I cannot do with python and freely available modules.

Need to do some time series analysis and/or forecasting? Python and statsmodels (along with others).

Need to do some natural language processing?  Python and NLTK (along with others).

Need to do some machine learning work? Python and sklearn (along with others).
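
Take the time series case as an example. Here’s a minimal, hypothetical sketch (synthetic data, made-up parameters) of what a statsmodels forecast can look like:

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# toy monthly series with an upward trend (synthetic data)
idx = pd.date_range('2015-01-01', periods=36, freq='MS')
y = pd.Series(np.linspace(100, 200, 36) + np.random.normal(0, 5, 36), index=idx)

# fit a simple additive-trend exponential smoothing model and forecast 6 months ahead
model = ExponentialSmoothing(y, trend='add').fit()
print(model.forecast(6))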

You don’t HAVE to use python for data analysis. R is perfectly capable of doing the same things python is – and in some cases, R has more capabilities than python does because it’s been used as an analytics tool for much longer than python has.

That said, I prefer python and use python in everything I do. Data analytics & python go together quite well.

About Eric D. Brown, D.Sc.

Big data. Data analytics. Data science. Data processing. Predictive analytics.

Regardless of what needs to be done or what you call the activity, the first thing you need to know is “how” to analyze data. You also need to have a tool set for analyzing data.

If you work for a large company, you may have a full blown big data suite of tools and systems to assist in your analytics work. Otherwise, you may have nothing but excel and open source tools to perform your analytics activities.

This post and this site are for those of you who don’t have the ‘big data’ systems and suites available to you. On this site, we’ll be talking about using python for data analytics. I started this blog as a place for me to write about working with python on my various data analytics projects.

I hold a Doctorate in Information Systems. The title of my dissertation was ‘Analysis of Twitter Messages for Sentiment and Insight for use in Stock Market Decision Making’. During this research, I analyzed Twitter messages for sentiment and then analyzed that sentiment to determine whether there was any ‘signal’ that could be generated to make better decisions for investing in the stock market.

For my dissertation work, I used python and multiple python modules (NLTK, pandas, numpy, scikit-learn, scipy and many others) for the analysis.  During this time, I spent a lot of time (and money) learning python and the various modules. I read a lot of books and websites, asked plenty of questions on StackOverflow, took quite a few courses in python development and machine learning and set up mentoring relationships with a number of people around the world to help me get to where I needed to be for my research.

Don’t get me wrong…I learned A LOT of things during this time, but there didn’t seem to be any one website / book / person that brought everything together for someone trying to learn data analytics using python. There were great snippets all over the place, but not a single place to go to learn python for data analytics.

I want this website to be that.  I want people to land on this site and find all the material they need to do everything they need to do.  If you can’t find what you need here, let me know and I’ll help you out personally.

Thanks for checking the site out.

Eric D. Brown, D.Sc.

2016

Market Basket Analysis with Python and Pandas

If you’ve ever worked with retail data, you’ll most likely have run across the need to perform some market basket analysis (also called Cross-Sell recommendations).  If you aren’t sure what market basket analysis is, I’ve provided a quick overview below.

What is Market Basket Analysis?

In the simplest of terms, market basket analysis looks at retail sales data and determines what products are purchased together. For example, if you sell widgets and want to recommend similar products and/or products that are purchased together, you can perform this type of analysis to understand what products should be recommended when a user views a widget.

You can think of this type of analysis as generating the following ‘rules’:

  • If widget A, then recommend widget B, C and F
  • If widget L, then recommend widget X, Y and R

With these rules, you can then build your recommendation engines for your website, store and salespeople to use when selling products to customers. Market basket analysis requires a large amount of transaction data to work well. If you have a large amount of transactional data, you should be able to run a market basket analysis with ease. If you want to learn more about market basket analysis, here’s some additional reading.

In the remainder of this article, I show you how to do this type of analysis using python and pandas.

Market Basket Analysis with Python and Pandas

There are a few approaches you can take for this type of analysis. You can use a pre-built library like MLxtend or you can build your own algorithm. I prefer the MLxtend library myself, but recently there have been some memory issues when using pandas and large datasets with MLxtend, so there have been times that I’ve needed to roll my own.

Below, I provide an example of using MLxtend as well as an example of how to roll your own analysis.

Market Basket Analysis with MLxtend

For this example, we’ll use the dataset found here. This dataset contains enough data to be useful in understanding market basket analysis, but isn’t so large that we can’t use MLxtend (very large datasets can’t be unstacked, which is required to use MLxtend).

To get started, you’ll need to have pandas and MLxtend installed:

pip install pandas mlxtend

Then, import your libraries:

import pandas as pd
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules

Now, let’s read in the data and then drop any rows that don’t have an invoice number. Lastly, we’ll convert the `InvoiceNo` column to a string. NOTE: I downloaded the data file from here and stored it in a subdirectory named `data`.

df = pd.read_excel('data/Online Retail.xlsx')
df.dropna(axis=0, subset=['InvoiceNo'], inplace=True)
df['InvoiceNo'] = df['InvoiceNo'].astype('str')

In this data, there are some invoices that are ‘credits’ instead of ‘debits’, so we want to remove those. They are identified with a “C” in the `InvoiceNo` field. We can see an example of these types of invoices with the following:

df[df.InvoiceNo.str.contains('C', na=False)].head()

To remove these credit invoices, we can find all invoices with ‘C’ in them, and take the inverse of the results. That can be accomplished with the following line of code:

df = df[~df['InvoiceNo'].str.contains('C')]

Now, we are ready to start our market basket analysis. First, we’ll group by the columns we want to consider. For the purposes of this analysis, we’ll only look at the United Kingdom orders.

market_basket = df[df['Country'] == "United Kingdom"].groupby(
                ['InvoiceNo', 'Description'])['Quantity']

Next, we want to one-hot encode the data and get one transaction per row to prepare to run our MLxtend analysis.

market_basket = market_basket.sum().unstack().reset_index().fillna(0).set_index('InvoiceNo')

Let’s take a look at the output:

market_basket.head()

[Image: market basket analysis example, the output of market_basket.head()]

Looks like a bunch of zeros. What good is that? Well…it’s exactly what we want to see. We’ve encoded our data to show when a product is sold with another product. If there is a zero, those products haven’t sold together. Before we continue, we want to convert all of our numbers to either a `1` or a `0` (zero and negative numbers are converted to 0, positive numbers are converted to 1). We can do this encoding step with the following function:

def encode_data(datapoint):
    # quantities are whole numbers: anything 0 or below becomes 0, anything 1 or above becomes 1
    return 1 if datapoint >= 1 else 0

And now, we do our final encoding step:

market_basket = market_basket.applymap(encode_data)

Now, let’s find out which items are frequently purchased together. We do this by applying the mlxtend `apriori` function to our dataset.

There’s one thing we need to think about first: the `apriori` function requires us to provide a minimum level of ‘support’. Support is defined as the percentage of transactions in which an itemset appears. If you set support to 50%, you’ll only get itemsets that appear in 50% of transactions. I like to set support to around 5% when starting out so I can see some data/results, and then adjust from there. Setting the support level too high could lead to very few (or no) results, and setting it too low could require an enormous amount of memory to process the data.
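
To make the support arithmetic concrete, here’s a toy illustration (the numbers are made up, not from the retail dataset):

# support = (transactions containing the itemset) / (total transactions)
n_transactions = 3000
itemset_occurrences = 150
support = itemset_occurrences / n_transactions
print(support)  # 0.05, i.e. 5% -- the starting level mentioned above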

In the case of this data, I originally set the `min_support` to 0.05 but didn’t receive any results, so I changed it to 0.03.

itemsets = apriori(market_basket, min_support=0.03, use_colnames=True)

The final step is to build your association rules using the mlxtend `association_rules` function. You can set the metric that you are most interested in (either `lift` or `confidence`) and set the minimum threshold for that metric (called `min_threshold`). With the `confidence` metric, `min_threshold` can be thought of as the minimum confidence level you want a rule to have. For example, if you set `min_threshold` to 1, you will only see rules with 100% confidence. I usually set this to 0.7 to start with.
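
As a quick, made-up illustration of how confidence relates to support:

# confidence(A -> C) = support(A and C together) / support(A)
support_A = 0.10    # A appears in 10% of transactions
support_AC = 0.07   # A and C appear together in 7% of transactions
confidence = support_AC / support_A
print(confidence)   # about 0.7 -- the starting threshold mentioned above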

rules = association_rules(itemsets, metric="lift", min_threshold=0.5)

With this, we generate 16 rules for our market basket analysis.

[Image: MLxtend rules for market basket analysis]

This gives us a good number of data points to look at for this analysis. Now, what does this tell us?

If you look in the `antecedents` column and the `consequents` column, you’ll see names of products. Each rule tells us that the `antecedents` itemset is sold along with the `consequents` itemset. You can use this information to build a cross-sell recommendation system that promotes these products with each other on your website (or in person when doing in-person sales).
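
As one possible sketch of that idea (this isn’t part of the original analysis), you could collapse the rules into a simple lookup from each antecedent itemset to its recommended items:

# build a cross-sell lookup: antecedent itemset -> set of recommended items
# (mlxtend stores antecedents and consequents as frozensets, which are hashable)
recommendations = {}
for _, row in rules.iterrows():
    recommendations.setdefault(row['antecedents'], set()).update(row['consequents'])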

Without knowing much more about the business that generated this data, we can’t really do much more with it. If you were using your own data, you’d be able to dig a bit deeper to find those rules with higher confidence and/or lift to help you understand the items that are sold together most often and start building strategies to promote those items (or other items if you are trying to grow sales in other areas of your business).

When can you not use MLxtend?

MLxtend can be used anytime you want and it is my preferred approach for market basket analysis. That said, there’s an issue (as of the date of this article) with using pandas with large datasets when performing the step of unstacking the data with this line:

market_basket = market_basket.sum().unstack().reset_index().fillna(0).set_index('InvoiceNo')

You can see the issue here.

When you run across this issue, you’ll need to find another approach to running a market basket analysis. You can probably find ways to work around the pandas unstack problem, but what I’ve done recently is just roll my own analysis (it’s actually pretty simple to do). That’s what I’ll show you below.

To get started, we need to import a few more libraries:

from itertools import combinations, groupby
from collections import Counter
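
If you haven’t used `combinations` before, it generates every unique pair from a list, which is exactly what we’ll need for item pairs on an invoice. A quick toy example:

# every unique 2-item pair from a toy invoice
print(list(combinations(['A', 'B', 'C'], 2)))
# [('A', 'B'), ('A', 'C'), ('B', 'C')]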

Let’s use our original dataframe and assign it to a new one so we know we are working with a completely new dataset vs the one above. We’ll use the same United Kingdom filter that we did before.

df_manual = df[df['Country'] == "United Kingdom"]

Now, let’s grab just the order data. For this, we’ll get the `InvoiceNo` and `StockCode` columns, since all we care about is whether an item exists on an invoice. Remember, we’ve already removed the ‘credit’ invoices in the steps above, so all we have are regular invoices. NOTE: There *will* be differences in the output of this approach vs MLxtend’s approach, just like there will be differences in other approaches you might use for market basket analysis.

orders = df_manual.set_index('InvoiceNo')['StockCode']

Now that we have a pandas series of items, let’s calculate the item frequency and support values.

statistics = orders.value_counts().to_frame("frequency")
statistics['support'] = statistics['frequency'] / len(set(orders.index)) * 100  # support as a percentage of invoices

Let’s filter out any rows of data that don’t have support above our `min_support` level.

min_support = 0.03  # same number as above, but note: support here is a percentage, so this is 0.03%, not 3%
items_above_support = statistics[statistics['support'] >= min_support].index
orders_above_support = orders[orders.isin(items_above_support)]

We next need to filter out invoices with only one item ordered, since single-item orders won’t provide any insight into our market basket analysis.

# use the support-filtered orders from the previous step
order_counts = orders_above_support.index.value_counts()
orders_over_two_index = order_counts[order_counts >= 2].index
orders_over_two = orders_above_support[orders_above_support.index.isin(orders_over_two_index)]

Now, let’s calculate our stats dataframe again with this new order data-set.

statistics = orders_over_two.value_counts().to_frame("frequency")
statistics['support'] = statistics['frequency'] / len(set(orders_over_two.index)) * 100

Time to do the fun stuff: calculating the itemsets / item pairs. We’ll create a function that generates our itemsets and then send our new order dataset through the generator. Then, we calculate the frequency of each item pair (named `frequencyAC`) as well as its support (named `supportAC`). Finally, we filter out the itemsets that fall below our `min_support` level.

def itemset_generator(orders):
    # yield every 2-item combination that appears on the same invoice
    # (itertools.groupby requires rows for the same invoice to be adjacent)
    orders = orders.reset_index().values
    for order_id, order_object in groupby(orders, lambda x: x[0]):
        item_list = [item[1] for item in order_object]
        for item_pair in combinations(item_list, 2):
            yield item_pair

itemsets_gen = itemset_generator(orders_over_two)
itemsets = pd.Series(Counter(itemsets_gen)).to_frame("frequencyAC")
itemsets['supportAC'] = itemsets['frequencyAC'] / len(orders_over_two_index) * 100
itemsets = itemsets[itemsets['supportAC'] >= min_support]

Finally, we can calculate our association rules. First, let’s unstack our itemsets and create the necessary data columns for support, lift, etc.

# Create table of association rules and compute relevant metrics
itemsets = itemsets.reset_index().rename(columns={'level_0': 'antecedents', 'level_1': 'consequents'})
# the statistics frame's columns are named 'frequency' and 'support', so rename on those names
itemsets = (itemsets
     .merge(statistics.rename(columns={'frequency': 'freqA', 'support': 'antecedent support'}), left_on='antecedents', right_index=True)
     .merge(statistics.rename(columns={'frequency': 'freqC', 'support': 'consequents support'}), left_on='consequents', right_index=True))

itemsets['confidenceAtoC'] = itemsets['supportAC'] / itemsets['antecedent support']
itemsets['confidenceCtoA'] = itemsets['supportAC'] / itemsets['consequents support']
# supports are stored as percentages, so convert back to fractions when computing lift
itemsets['lift'] = (itemsets['supportAC'] / 100) / ((itemsets['antecedent support'] / 100) * (itemsets['consequents support'] / 100))
itemsets = itemsets[['antecedents', 'consequents', 'antecedent support', 'consequents support', 'confidenceAtoC', 'lift']]

Finally, let’s look at our final rules. We want to look at only those items that have confidence > 0.5.

rules = itemsets
rules_over_50 = rules[rules.confidenceAtoC > 0.50]
rules_over_50 = rules_over_50.sort_values('lift', ascending=False)

Looking at the rules_over_50 data, we see our final set of rules using our ‘roll your own’ approach.

[Image: final rules for market basket analysis]

These rules are going to be a bit different than what we get with MLxtend, but that’s OK as it gives us another set of data to look at – and the only set of data to look at when your data is too large to use MLxtend. One extension to this approach would be to add a step that replaces the `StockCode` numbers with the item descriptions. I’ll leave it to you to do that work.
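
If you want a head start on that extension, here’s a rough, untested sketch (assuming the `df_manual` dataframe from above, which still has the `Description` column):

# build a StockCode -> Description lookup and map it onto the rules
lookup = df_manual.drop_duplicates('StockCode').set_index('StockCode')['Description']
rules_named = rules_over_50.copy()
rules_named['antecedents'] = rules_named['antecedents'].map(lookup)
rules_named['consequents'] = rules_named['consequents'].map(lookup)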