pandas Cheat Sheet (via yhat)

The folks over at yhat just released a cheat sheet for pandas. You can download the cheat sheet in PDF form here.

There are a couple of important functions that I use all the time missing from their cheat sheet (actually…there are a lot of things missing, but it's a great starter cheat sheet).

Below are a few things I use all the time with pandas dataframes that are worth collecting in one place.

Renaming columns in a pandas dataframe:
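A minimal sketch, assuming a dataframe named df with a column 'old_name' that you want to call 'new_name':

    import pandas as pd

    df = pd.DataFrame({'old_name': [1, 2, 3]})
    # rename the column in place; any columns not listed are left alone
    df.rename(columns={'old_name': 'new_name'}, inplace=True)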

Iterating over a pandas dataframe:
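A quick sketch, again assuming a dataframe named df:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
    # iterrows() yields an (index, row) pair for every row in the dataframe
    for index, row in df.iterrows():
        print(index, row['a'], row['b'])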

Splitting pandas dataframe into chunks:
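Something along these lines, with the helper name and the NUM_CHUNKS value as placeholders:

    import numpy as np
    import pandas as pd

    NUM_CHUNKS = 4

    def split_into_chunks(data, num_chunks):
        # np.array_split handles a dataframe or a plain list and copes
        # with lengths that don't divide evenly into num_chunks
        return np.array_split(data, num_chunks)

    df = pd.DataFrame({'a': range(10)})
    chunks = split_into_chunks(df, NUM_CHUNKS)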

The function plus the function call will split a pandas dataframe (or list, for that matter) into NUM_CHUNKS chunks. I use this often when working with the multiprocessing library.

Accessing the value of a specific cell:
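One way to do it, with COLUMN standing in for your column name:

    # grab the value sitting in the 'COLUMN' cell of the last row
    value = df['COLUMN'].iloc[-1]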

This will give you the value of the last row's "COLUMN" cell. This may not be the 'best' way to do it, but it gets the value.

Getting rows matching a condition:

The below will get all rows in a pandas dataframe that match the criteria. In addition to testing equality, you can use any of the comparison operators.
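A sketch with a placeholder column name and value:

    # keep only the rows where COLUMN equals some value;
    # !=, >, <, >= and <= work the same way
    matches = df[df['COLUMN'] == 'some_value']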

Getting rows matching multiple conditions:
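A sketch, again with placeholder column names and criteria:

    # the parentheses around each condition are required,
    # and & (not 'and') combines the two boolean masks
    matches = df[(df['COLUMN1'] == 'some_value') & (df['COLUMN2'] > 10)]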

This gets rows that match one criterion in COLUMN1 and another criterion in COLUMN2.

Getting the ‘next’ row of data in a pandas dataframe

I'm currently working with stock market trade data that is output from a backtesting engine (backtrader, at the moment) in a pandas dataframe. The format of the 'transactions' data provided by the backtesting engine is shown below.

                               amount   price          value
date
2016-01-07 00:00:00+00:00   79.017119  195.33  -15434.413883
2016-09-07 00:00:00+00:00  -79.017119  218.84   17292.106354
2016-09-20 00:00:00+00:00   82.217609  214.41  -17628.277649
2016-11-16 00:00:00+00:00  -82.217609  217.56   17887.263119

The data provided gives four crucial pieces of information:

  • Date – the date of the transaction.
  • Amount – the number of shares purchased (positive number) or sold (negative number) during the transaction.
  • Price – the price received or paid at the time of the transaction.
  • Value – the cash value of the transaction.

Backtrader's transactions dataframe is made up of pairs of rows, where two rows together make one transaction (the first is the 'buy', the second is the 'sell'). For example, in the data above, the first two rows (Jan 7 2016 and Sept 7 2016) are the 'buy' data and 'sell' data for one transaction. What I need to do with this data is transform it (using that term loosely) into one row of data per transaction to store in a database for use in another analysis.

I could leave it in its current form, but I prefer to store transactions in one row when dealing with market backtests.

There are a few ways to attack this particular problem.  You could iterate over the dataframe and manually pick each row. That would be pretty straightforward, but not necessarily the best way.

While looking around the web for some pointers, I stumbled across this answer that does exactly what I need to do.   I added the following code to my script and — voila — I have my transactions transformed from two rows per transaction to one row.
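In spirit, the code does something like the following sketch: shift(-1) lines each row up with the row that comes after it, and the field names in the combined records are just placeholders.

    # pull the next row's values up alongside each row
    transactions['next_date'] = transactions.index.to_series().shift(-1)
    transactions['next_price'] = transactions['price'].shift(-1)
    transactions['next_value'] = transactions['value'].shift(-1)

    trades = []
    # only the 'buy' rows (positive amount) start a transaction, so
    # each one gets combined with the 'sell' row that follows it
    for date, row in transactions[transactions['amount'] > 0].iterrows():
        trades.append({
            'open_date': date,
            'close_date': row['next_date'],
            'amount': row['amount'],
            'open_price': row['price'],
            'close_price': row['next_price'],
            'open_value': row['value'],
            'close_value': row['next_value'],
        })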

Note: In the above, I only want to build rows that start with a positive amount in the 'amount' column, because the amount in the first row of a transaction is always positive. I then append each transformed transaction into an array to be used for more analysis later in the script.

Dask – A better way to work with large CSV files in Python

In a recent post titled Working with Large CSV files in Python, I shared an approach I use when I have very large CSV files (and other file types) that are too large to load into memory. While the approach I previously highlighted works well, it can be tedious to first load data into SQLite (or any other database) and then access that database to analyze data. I just found a better approach using Dask.

While looking around the web to learn about some parallel processing capabilities, I ran across a python module named Dask, which describes itself as:

…is a flexible parallel computing library for analytic computing.

When I saw that, I was intrigued. There's a lot packed into that statement, and I've got plans to introduce Dask into my various tool sets for data analytics.

While reading the docs, I ran across the 'dataframe' concept and immediately knew I'd found a new tool for working with large CSV files. With Dask's dataframe concept, you can do out-of-core analysis (e.g., analyze data in the CSV without loading the entire CSV file into memory). Other than out-of-core manipulation, dask's dataframe uses the pandas API, which makes things extremely easy for those of us who use and love pandas.

With Dask and its dataframe construct, you set up the dataframe much like you would in pandas, but rather than loading the data into pandas, this approach keeps the dataframe as a sort of 'pointer' to the data file and doesn't load anything until you specifically tell it to do so.

One note (that I always have to share): If you are planning on working with your data set over time, it's probably best to get the data into a database of some type.

An example using Dask and the Dataframe

First, let’s get everything installed. The documentation claims that you just need to install dask, but I had to install ‘toolz’ and ‘cloudpickle’ to get dask’s dataframe to import.  To install dask and its requirements, open a terminal and type (you need pip for this):
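Something along these lines should cover it (the exact package list may vary by dask version):

    pip install dask toolz cloudpickle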

Now, let's write some code to load csv data and start analyzing it. For this example, I'm using the 311 Service Requests dataset from NYC's Open Data portal. You can download the dataset here: 311 Service Requests – 7Gb+ CSV

Set up your dataframe so you can analyze the 311_Service_Requests.csv file. This file is assumed to be stored in the directory that you are working in.
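A minimal sketch, assuming the dataframe is named df and the file sits in the working directory:

    import dask.dataframe as dd

    # this just points dask at the file; no data is read into memory yet
    df = dd.read_csv('311_Service_Requests.csv', dtype='str')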

Unlike pandas, the data isn’t read into memory…we’ve just set up the dataframe to be ready to do some compute functions on the data in the csv file using familiar functions from pandas. Note: I used “dtype=’str'” in the read_csv to get around some strange formatting issues in this particular file.

Let's take a look at the first few rows of the file using pandas' head() call. When you run this, dask reads the first X rows (however many rows you ask for with head(X)) and then displays those rows.
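For example, to look at the first five rows:

    # only enough of the file is read to produce these five rows
    df.head(5)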

Note: a small subset of the columns is shown below for simplicity.

Unique Key  Created Date            Closed Date             Agency
25513481    05/09/2013 12:00:00 AM  05/14/2013 12:00:00 AM  HPD
25513482    05/09/2013 12:00:00 AM  05/13/2013 12:00:00 AM  HPD
25513483    05/09/2013 12:00:00 AM  05/22/2013 12:00:00 AM  HPD
25513484    05/09/2013 12:00:00 AM  05/12/2013 12:00:00 AM  HPD
25513485    05/09/2013 12:00:00 AM  05/11/2013 12:00:00 AM  HPD

We see that there are some spaces in the column names. Let's remove those spaces to make things easier to work with.
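One way to do it, swapping the spaces for underscores across all of the columns:

    # build an {old: new} mapping and rename; like most dask operations,
    # this is lazy and doesn't touch the data on disk
    df = df.rename(columns={col: col.replace(' ', '_') for col in df.columns})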

The cool thing about dask is that you can do things like renaming columns without loading all the data into memory.

There's a column in this data called 'Descriptor' that holds the problem types, and "radiator" is one of those problem types. Let's take a look at how many service requests were due to some problem with a radiator. To do this, you can filter the dataframe using standard pandas filtering (see below) to create a new dataframe.
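A sketch of the filter; the radiator_df name and the exact casing of the 'RADIATOR' string are assumptions that depend on the file:

    # a new (still lazy) dataframe holding only the radiator complaints
    radiator_df = df[df.Descriptor == 'RADIATOR']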

Let's see how many rows we have using the 'count' command.
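Counting one column of the filtered dataframe, for example:

    radiator_df.Descriptor.count()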

You'll notice that when you run the above command, you don't actually get the count returned. Instead, you get a lazy placeholder back that looks something like "dd.Scalar<series-…, dtype=int64>".

To actually compute the count, you have to call “compute” to get dask to run through the dataframe and count the number of records.
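Continuing the example above:

    # this actually runs the work against the csv and returns a plain number
    radiator_df.Descriptor.count().compute()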

When you run this command, dask churns through the file and hands back the actual row count as a plain integer.

The above are just a few examples of using dask's dataframe construct. Remember, we built a new dataframe using pandas-style filters without loading the entire original data set into memory. These examples may not seem like much, but when working with a 7Gb+ file, you can save a great deal of time and effort using dask compared to the approach I previously mentioned.

Dask seems to have a ton of other great features that I’ll be diving into at some point in the near future, but for now, the dataframe construct has been an awesome find.