BUG: read_parquet does not respect index for arrow dtype backend #51726
Conversation
Hey, thanks for tackling this so quickly, I appreciate it. One additional request: RangeIndex can have a name, which would be inside the dict-based index description. If it helps, there's an implementation here already: https://github.com/apache/arrow/blob/45918a90a6ca1cf3fd67c256a7d6a240249e555a/python/pyarrow/pandas_compat.py#L943 and Wes introduced the dict-based index description here. It feels like an under-defined format, so it could be hard to write tests for this.
thx, adjusted
)
index_columns = pa_table.schema.pandas_metadata.get("index_columns", [])
result_dc = {
    col_name: arrays.ArrowExtensionArray(pa_col)
Is there a reason not to use types_mapper here (as is done for our own masked nullable arrays)? All of this index handling is already done by pyarrow, and using to_pandas() as in the other code paths would ensure it is handled consistently.
This is a good idea, but it looks like arrow is not applying the types_mapper to the index either. Not sure why we didn't do it like this initially, though.
I tried:
types_mapper = lambda x: pd.ArrowDtype(x)
Yeah, that's a bug in pyarrow (well, when this keyword was implemented the Index did not yet support EAs, so at that point it wasn't needed to consider the index as well). This was recently reported (apache/arrow#34283), and we can make sure to fix it for the next release in April.
Another reason to go the types_mapper way is that users can define a custom ExtensionArray that has its own pyarrow->pandas conversion defined, and the current code here would ignore that.
Short term, an option to overcome the Index bug could be to convert the Index manually back to an Arrow-backed array. That's of course a bit wasteful in case it was not a zero-copy conversion... But for people following the latest pyarrow releases it should only be a short-term issue.
I switched to types_mapper=pd.ArrowDtype in #51766.
If fixing the Index conversion is on your agenda, I'd rather avoid implementing this ourselves and just wait a couple of weeks.
Thoughts?
If you agree, then we can just close here.
From my limited understanding of https://github.com/apache/arrow/blob/main/python/pyarrow/src/arrow/python/arrow_to_pandas.cc, I used the manual conversion to avoid a conversion through NumPy(?); the file's header comment reads:
// Functions for pandas conversion via NumPy
They are going through pandas_dtype.__from_arrow__(arr), which receives an arrow array, so we should be good?
Yes, whenever pyarrow detects an extension dtype for a column (either via the metadata, the pyarrow type itself, or the types_mapper keyword), we don't actually convert to numpy but directly pass the pyarrow array to dtype.__from_arrow__.
Got it, thanks for the confirmation. I think #51766 should be sufficient then.
Was this supposed to be addressed in #51766?
Not directly, but I added types_mapper support for the index in pyarrow a couple of days ago. We will get this out of the box with pyarrow 12.0, so I wouldn't do anything on our side.