Lower `intrinsics::offset` to `mir::BinOp::Offset` #110822
Conversation
(rustbot has picked a reviewer for you, use r? to override)

Some changes occurred to MIR optimizations cc @rust-lang/wg-mir-opt
Some changes occurred in compiler/rustc_codegen_cranelift cc @bjorn3

Oh, looks like ctfe doesn't handle the mir version? Will go deal with that. @rustbot author
They're semantically the same, so this means the backends don't need to handle the intrinsic and means fewer MIR basic blocks in pointer arithmetic code.
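To make the effect concrete, here is a small, self-contained illustration (my own example, not code from this PR) of the kind of pointer arithmetic that bottoms out in `intrinsics::offset`: `<*const T>::offset` and friends are implemented in terms of that intrinsic, and as I understand the MIR, an intrinsic call is a `Call` terminator that ends a basic block, while `BinOp::Offset` is an ordinary statement, which is where the "fewer MIR basic blocks" comes from.

```rust
// Illustrative only: user code never calls intrinsics::offset directly, but
// <*const T>::offset is built on it, so this is the kind of function whose
// MIR gets simpler once the operation is expressed as BinOp::Offset.
fn third_element(xs: &[u32]) -> u32 {
    assert!(xs.len() >= 3);
    let p: *const u32 = xs.as_ptr();
    // SAFETY: we just checked that index 2 is in bounds.
    unsafe { *p.offset(2) }
}

fn main() {
    assert_eq!(third_element(&[10, 20, 30, 40]), 30);
}
```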
Force-pushed from f9bb4a0 to 05a665f
Some changes occurred to the CTFE / Miri engine cc @rust-lang/miri

Ok, moved the @rustbot ready
> Oh, looks like ctfe doesn't handle the mir version? Will go deal with that.

Yeah, `LowerIntrinsics` gets run on runtime MIR, so `offset` wouldn't show up in `emulate_intrinsic`.
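For context on why the interpreter has to evaluate this operation at all: pointer `add`/`offset` are usable in const contexts on recent stable toolchains, so CTFE ends up evaluating the same pointer arithmetic that runtime MIR now expresses as `BinOp::Offset`. A minimal const-eval sketch, assuming a toolchain where const pointer offsetting and raw-pointer reads in const are stable:

```rust
// Assumes const pointer `add` and const raw-pointer reads are available;
// the initializer below is evaluated entirely by CTFE.
const THIRD: u32 = {
    let xs = [10u32, 20, 30, 40];
    let p = xs.as_ptr();
    // SAFETY: index 2 is in bounds of the 4-element array.
    unsafe { *p.add(2) }
};

fn main() {
    assert_eq!(THIRD, 30);
}
```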
Implementation looks fine, r=me nit or not
Force-pushed from afc7450 to 05a665f
I liked the nit, but not with tidy forcing it to take a couple more lines, so I'll leave it with the nice clean diff. We'll change it to a

@bors r=compiler-errors
☀️ Test successful - checks-actions
Finished benchmarking commit (9c044d7): comparison URL.

Overall result: no relevant changes - no action needed

@rustbot label: -perf-regression

Instruction count
This benchmark run did not return any relevant results for this metric.

Max RSS (memory usage)
Results
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
This benchmark run did not return any relevant results for this metric.
```rust
// Fragment from the interpreter's handling of BinOp::Offset discussed in this
// review thread: the pointee type is taken from the pointer operand's type,
// the offset is computed with an inbounds check, and the new pointer is
// returned as (value, overflowed = false, result type).
let pointee_ty = left.layout.ty.builtin_deref(true).unwrap().ty;
// ...
let offset_ptr = ecx.ptr_offset_inbounds(ptr, pointee_ty, offset_count)?;
return Ok((Scalar::from_maybe_pointer(offset_ptr, ecx), false, left.layout.ty));
```
This indicates some opportunity for removing code duplication between CTFE and Miri.
Use MIR's `Offset` for pointer `add` too

~~Status: draft while waiting for rust-lang#110822 to land, since this is built atop that.~~
~~r? @ghost~~

Canonical Rust code has mostly moved to `add`/`sub` on pointers, which take `usize`, instead of `offset` which takes `isize`. (And, relatedly, when `sub_ptr` was added it turned out it replaced every single in-tree use of `offset_from`, because `usize` is just so much more useful than `isize` in Rust.)

Unfortunately, `intrinsics::offset` could only accept `*const` and `isize`, so there's a *huge* amount of type conversions back and forth being done. They're identity conversions in the backend, but still end up producing quite a lot of unhelpful MIR.

This PR changes `intrinsics::offset` to accept `*const` *and* `*mut` along with `isize` *and* `usize`. Conveniently, the backends and CTFE already handle this, since MIR's `BinOp::Offset` [already supports all four combinations](https://github.com/rust-lang/rust/blob/adaac6b166df57ea5a20d56e4cce503b55aca927/compiler/rustc_const_eval/src/transform/validate.rs#L523-L528).

To demonstrate the difference, I added some `mir-opt/pre-codegen/` tests around slice indexing. Here's the difference to `[T]::get_mut`, since it uses `<*mut _>::add` internally:

```diff
@@ -79,30 +70,21 @@ fn slice_get_mut_usize(_1: &mut [u32], _2: usize) -> Option<&mut u32> {
     StorageLive(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
     StorageLive(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
     _9 = _8 as *mut u32 (PtrToPtr); // scope 11 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageLive(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    _13 = _2 as isize (IntToInt); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageLive(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageLive(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    _15 = _9 as *const u32 (Pointer(MutToConstPointer)); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    _14 = Offset(move _15, _13); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageDead(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    _7 = move _14 as *mut u32 (PtrToPtr); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageDead(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
-    StorageDead(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
+    _7 = Offset(_9, _2); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
     StorageDead(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
     StorageDead(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
     StorageDead(_11); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
```

rust-lang@1c1c8e4#diff-a841b6a4538657add3f39bc895744331453d0625e7aace128b1f604f0b63c8fdR80
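As a source-level companion to the MIR diff above, here is a small example of my own (not from the PR) showing the `usize`/`isize` mismatch being described: `offset` takes `isize`, so call sites that naturally have a `usize` index need an extra cast, while `add` takes the `usize` directly; per the description above, the follow-up change lets both map onto the same `BinOp::Offset` without the back-and-forth conversions.

```rust
// Both functions compute the same pointer; the only difference is the cast
// that `offset` (which takes isize) forces on a usize index.
fn nth_via_offset(p: *const u8, n: usize) -> *const u8 {
    // Safety contract aside (the result must stay within the same allocation),
    // note the extra `as isize` the caller has to write.
    unsafe { p.offset(n as isize) }
}

fn nth_via_add(p: *const u8, n: usize) -> *const u8 {
    // `add(n)` is documented as equivalent to `offset(n as isize)`.
    unsafe { p.add(n) }
}

fn main() {
    let xs = [1u8, 2, 3, 4];
    let base = xs.as_ptr();
    assert_eq!(nth_via_offset(base, 2), nth_via_add(base, 2));
}
```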