read_pickle much slower in v0.13 (not using cPickle when compat=False) #6899
See the discussion here: http://stackoverflow.com/questions/23122180/is-pandas-read-pickle-performance-crippled-in-version-0-13

My test dataset has a pickled file size of 1.6 GB and contains about 13 million records. With 0.12 the file takes 146s to load; with 0.13 it takes 982s (about 6.7x longer).

Using cProfile, you can see that v0.13 always uses the native Python pickle module to load, even when compat=False, whereas 0.12 uses cPickle. Something seems wrong with the logic in pandas/compat/pickle_compat.py.

A workaround, if you know you don't need compatibility mode, is to use cPickle.load(open('foo.pickle')) instead of pandas.read_pickle('foo.pickle').
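A minimal sketch of that workaround, assuming Python 2 (so cPickle is available, as in the pandas 0.12/0.13 era) and with 'foo.pickle' as a placeholder path:

```python
import cPickle
import pandas as pd

# Fast path: call cPickle directly, bypassing the pure-python unpickler
# that 0.13's read_pickle falls back to. Only safe when the pickle was
# written by a compatible pandas version (no compat shims needed).
with open('foo.pickle', 'rb') as fh:
    df = cPickle.load(fh)

# The equivalent (slow in 0.13) call would be:
# df = pd.read_pickle('foo.pickle')
```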
Comments
Thanks for the report! I'll try and take a look. There were some weird …
@patricksurry I just tried this and explicitly called cPickle; it's actually about 30% SLOWER than the current pickle, not exactly sure why (I was using a test frame of 100k rows, 5 columns, with a datetime index). It's only a 48 MB file, so that could be the reason (relative to what you are seeing). Try timing your file with:
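(The snippet itself didn't survive in this thread; a minimal timing sketch along these lines, with 'foo.pickle' as a placeholder path:)

```python
import time
import pandas as pd

start = time.time()
df = pd.read_pickle('foo.pickle')  # placeholder path
print('read_pickle: %.2fs' % (time.time() - start))
```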
under both 0.12 and 0.13.1.
500 MB file …
I wonder if that's because you're using a big array of float? <1s to load 10M rows sounds ridiculously fast :) (You didn't specify C, but I tried C=10 and saw similar speed.) If you put some strings in there, things get more sluggish. Here's a 10 x 1M rows x 2 cols frame (one string and one float), which is about 230 MB on disk and takes almost 4x longer to load with read_pickle compared to cPickle:
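(Only the opening `import pandas` of the snippet survived the scrape; a sketch of the benchmark being described, with sizes matching the comment and illustrative column names and file path:)

```python
import time
import cPickle
import numpy as np
import pandas as pd

# ~10M rows x 2 cols: one string column, one float column
n = 1000000
chunk = pd.DataFrame({'s': ['some string value'] * n,
                      'x': np.random.randn(n)})
df = pd.concat([chunk] * 10, ignore_index=True)
df.to_pickle('test.pickle')

start = time.time()
pd.read_pickle('test.pickle')
print('read_pickle:  %.1fs' % (time.time() - start))

start = time.time()
with open('test.pickle', 'rb') as fh:
    cPickle.load(fh)
print('cPickle.load: %.1fs' % (time.time() - start))
```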
ok... can you show your example? I agree my test is a bit contrived.
I'm not sure why both cols are object, given that the first is all float?
[10 rows x 2 columns]
Was about to say: you should NOT be using object dtype on non-strings, very bad idea. It's how you constructed it, with the .T (I'll show you my example in a sec).
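(A small illustration of the .T point — not the example from the thread, which wasn't captured; the data here is made up. Building a mixed-type frame row-wise and then transposing forces everything through a single object-dtype block, whereas constructing column-by-column keeps native dtypes:)

```python
import numpy as np
import pandas as pd

# Row-wise construction + transpose: every column ends up object,
# because each intermediate column mixed strings and floats.
df_t = pd.DataFrame([['a', 1.0], ['b', 2.0]]).T
print(df_t.dtypes)   # object, object

# Column-by-column construction: the float column keeps float64.
df = pd.DataFrame({'s': ['a', 'b'], 'x': np.arange(2.0)})
print(df.dtypes)     # s: object, x: float64
```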
FYI (never a big fan of storing big pickles)
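(The body of that FYI wasn't captured; given the later reply about "the HDF stuff", it presumably pointed at HDF5 storage. A minimal HDFStore sketch — the filename, key, and sample frame are placeholders, and PyTables is required:)

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': np.random.randn(1000)})  # stand-in for a big frame

# Write/read via HDF5 (PyTables) instead of a big pickle
store = pd.HDFStore('store.h5')
store['df'] = df
df2 = store['df']
store.close()
```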
ok.... back to original time (this is the vbench; a ratio < 1 is better)
closed via #6899
Thanks - I'll have a look at the HDF stuff.