read_pickle much slower in v0.13 (not using cPickle when compat=False) #6899

Closed · patricksurry opened this issue Apr 17, 2014 · 11 comments · Fixed by #6983
Labels: Performance (memory or execution speed performance)
Milestone: 0.14.0

@patricksurry (Author) commented Apr 17, 2014

See the discussion here: http://stackoverflow.com/questions/23122180/is-pandas-read-pickle-performance-crippled-in-version-0-13

My test dataset has a pickled file size of 1.6 GB and contains about 13 million records.

With 0.12 the file takes 146 s to load; with 0.13 it takes 982 s (about 6.7x longer).

Using cProfile, you can see that v0.13 always uses the native Python pickle module to load, even when compat=False. In 0.12 it uses cPickle to load. It seems like something is wrong with the logic in pandas/compat/pickle_compat.py.
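
For reference, a minimal profiling sketch of that check (the file name 'foo.pickle' is just a placeholder):

import cProfile
import pstats
import pandas

# Profile read_pickle and print only the entries whose names mention "pickle";
# this shows whether the pure-Python pickle module or cPickle is doing the work.
cProfile.run("pandas.read_pickle('foo.pickle')", 'read_pickle.prof')
stats = pstats.Stats('read_pickle.prof')
stats.sort_stats('cumulative').print_stats('pickle')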

A workaround (if you know you don't need compatibility mode) is to use cPickle.load(open('foo.pickle')) instead of pandas.read_pickle('foo.pickle').
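
If you take that route, here is a sketch of the same workaround with the file opened in binary mode and closed via a context manager (Python 2, same placeholder file name):

import cPickle

# Bypass pandas.read_pickle and unpickle directly with cPickle.
with open('foo.pickle', 'rb') as fh:
    df = cPickle.load(fh)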

@jtratner (Contributor)

Thanks for the report! I'll try and take a look. There were some weird unicode issues to handle, but we definitely don't want to sacrifice pickle performance.

jreback added this to the 0.14.0 milestone Apr 18, 2014
@jreback (Contributor) commented Apr 27, 2014

@patricksurry I just tried this and explicitly called cPickle; it's actually about 30% SLOWER than the current pickle. Not exactly sure why. (I was using a test frame of 100k rows, 5 columns, with a datetime index.) It's only a 48 MB file, so that could be the reason (relative to what you are seeing).

try timing your file with:

import pickle
import cPickle

def f(file, m):
    with open(file) as fh:
        m.load(fh)

%timeit f('foo.pkl', pickle)
%timeit f('foo.pkl', cPickle)
%timeit pd.read_pickle('foo.pkl')

under both 0.12 and 0.13.1

@jreback (Contributor) commented Apr 27, 2014

500 MB file

In [18]: N=10000000

In [19]: index = date_range('20000101',periods=N,freq='H')

In [20]: df2 = DataFrame(dict(dict([ ("float{0}".format(i),randn(N)) for i in range(C) ])),
               index=index)

In [21]: df2.to_pickle('foo.pkl')

In [22]: !ls -ltr *.pkl
-rw-rw-r-- 1 jreback users 480000636 Apr 27  2014 foo.pkl

In [23]: %timeit f(cPickle)
1 loops, best of 3: 610 ms per loop

In [24]: %timeit f(pickle)
1 loops, best of 3: 392 ms per loop

# this is using cPickle
In [25]: %timeit pd.read_pickle('foo.pkl')
1 loops, best of 3: 609 ms per loop

@patricksurry (Author)

I wonder if that's because you're using a big array of floats? <1 s to load 10M rows sounds ridiculously fast :) (You didn't specify C, but I tried C=10 and saw similar speed.)

If you put some strings in there, things get more sluggish. Here's an example with 10 × 1M rows and 2 columns (one string and one float), which is about 230 MB on disk and takes almost 4x longer to load with read_pickle than with cPickle.

import pandas
import cPickle
from random import random, randrange
from timeit import Timer
N=1000000
t = pandas.DataFrame([[random() for _ in range(N)],['%08x'%randrange(16**8) for _ in range(N)]]).T
t2 = pandas.concat([t for _ in range(10)])
t2.to_pickle('foo.pickle')
Timer("pandas.read_pickle('foo.pickle')","import pandas").timeit(5)
==> 234.10312604904175 (~ 47s per execution)
Timer("cPickle.load(open('foo.pickle'))","import cPickle").timeit(5)
==> 65.66286087036133 (~ 13s per execution)

@jreback (Contributor) commented Apr 27, 2014

OK... can you show df.info()? Then we can have a realistic comparison of dtypes.

I agree my test is a bit contrived.

@patricksurry (Author)

I'm not sure why both cols are object, given that the first is all floats?

>>> t2.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000000 entries, 0 to 999999
Data columns (total 2 columns):
0    object
1    object
dtypes: object(2)

>>> t2[:10]
            0         1
0   0.7668373  c235317a
1   0.9184971  3fffd302
2   0.5608013  ba31311d
3  0.06004251  066232b7
4   0.7065527  9c2cbf76
5   0.1694165  0e034615
6   0.7740064  b96c7494
7   0.0637828  53de9642
8   0.8438448  4e0596b6
9  0.00191028  5dc8cd8e

[10 rows x 2 columns]

@jreback (Contributor) commented Apr 27, 2014

I was about to say: you should NOT be using object dtype on non-strings, very bad idea.

It's how you constructed it, with the .T: building the frame from a list of mixed-type rows and then transposing forces every column to object dtype (I'll show you my example in a sec).
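
For illustration, a small sketch of the two construction paths and the dtypes they produce (hypothetical column names, small N just to show the dtypes):

import pandas
from random import random, randrange

N = 1000
floats = [random() for _ in range(N)]
strings = ['%08x' % randrange(16**8) for _ in range(N)]

# Row-wise construction plus .T: both columns come out as object dtype.
t_rows = pandas.DataFrame([floats, strings]).T
print(t_rows.dtypes)    # 0    object
                        # 1    object

# Column-wise construction: the float column keeps float64.
t_cols = pandas.DataFrame({'val': floats, 'key': strings})
print(t_cols.dtypes)    # key     object
                        # val    float64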

@jreback (Contributor) commented Apr 27, 2014

In [10]: N=5000000

In [11]: index = date_range('20000101',periods=N,freq='H')

In [12]: df2 = DataFrame(dict([ ("float{0}".format(i),randn(N)) for i in range(C) ]),
                index=index)

In [13]: df2['object'] = ['%08x'%randrange(16**8) for _ in range(N)]

In [14]: df2.to_pickle('foo.pkl')

In [16]: def f(m):
   ....:     with open('foo.pkl') as fh:
   ....:         m.load(fh)
   ....:         

In [17]: import pickle

In [18]: import cPickle

In [19]: %timeit f(pickle)
1 loops, best of 3: 12.3 s per loop

In [20]: %timeit pd.read_pickle('foo.pkl')
1 loops, best of 3: 2.36 s per loop

FYI (never a big fan of storing big pickles)

In [22]: df2.to_hdf('foo.h5','data')

In [23]: %timeit pd.read_hdf('foo.h5','data')
1 loops, best of 3: 1.42 s per loop

In [24]: !ls -ltr foo.*
-rw-rw-r-- 1 jreback users 315010064 Apr 27 17:55 foo.pkl
-rw-rw-r-- 1 jreback users 316068376 Apr 27 17:58 foo.h5

@jreback (Contributor) commented Apr 27, 2014

OK... back to the original timing (this is the vbench; ratio < 1 is better).
It's a slightly different example than yours...

-------------------------------------------------------------------------------
Test name                                    | head[ms] | base[ms] |  ratio   |
-------------------------------------------------------------------------------
packers_read_pickle                          | 446.7856 | 2399.6547 |   0.1862 |

@jreback (Contributor) commented Apr 27, 2014

closed via #6983

@patricksurry (Author)

Thanks - I'll have a look at the HDF stuff.
