Add LLVM realtime sanitizer
This is a new attempt at adding the [LLVM real-time sanitizer](https://clang.llvm.org/docs/RealtimeSanitizer.html) to Rust.
Previously this was attempted in https://github.com/rust-lang/rfcs/pull/3766.
Since then the `sanitize` attribute was introduced in https://github.com/rust-lang/rust/pull/142681, and it is a lot more flexible than the old `no_sanitize` attribute. This allows adding the real-time sanitizer without the need for a new attribute, as was proposed in the RFC. Because this only adds a new value to an existing command-line flag and to an existing attribute, I don't think an MCP is necessary.
Currently the real-time sanitizer is usable in Rust code via the [rtsan-standalone](https://crates.io/crates/rtsan-standalone) crate, which downloads or builds the sanitizer runtime and then links it into the Rust binary.
The first commit adds support for more detailed sanitizer information.
The second commit then actually adds real-time sanitizer.
The third adds a warning against using the real-time sanitizer with async functions, closures, and blocks, because it doesn't behave as expected there. I am not sure if this is actually wanted, so I kept it in a separate commit.
The fourth commit adds the documentation for the real-time sanitizer.
Add -Zannotate-moves for profiler visibility of move/copy operations (codegen)
**Note:** this is an alternative implementation of https://github.com/rust-lang/rust/pull/147206; rather than being a MIR transform, it adds the annotations closer to codegen. It's functionally the same but the implementation is lower impact and it could be more correct.
---
This implements a new unstable compiler flag `-Zannotate-moves` that makes move and copy operations visible in profilers by creating synthetic debug information. This is achieved with zero runtime cost by manipulating debug info scopes to make moves/copies appear as calls to `compiler_move<T, SIZE>` and `compiler_copy<T, SIZE>` marker functions in profiling tools.
This allows developers to identify expensive move/copy operations in their code using standard profiling tools, without requiring specialized tooling or runtime instrumentation.
The implementation works at codegen time. When processing MIR operands (`Operand::Move` and `Operand::Copy`), the codegen creates an `OperandRef` with an optional `move_annotation` field containing an `Instance` of the appropriate profiling marker function. When storing the operand, `store_with_annotation()` wraps the store operation in a synthetic debug scope that makes it appear inlined from the marker.
Two marker functions (`compiler_move` and `compiler_copy`) are defined in `library/core/src/profiling.rs`. These are never actually called - they exist solely as debug info anchors.
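Roughly, the markers could look like this (my own sketch; the actual definitions in `library/core/src/profiling.rs` may differ in attributes and visibility):

```rust
// Never called at runtime: these exist only so that the synthetic
// "inlined at" debug scopes have a symbol, with generic arguments naming
// the moved/copied type and its size, for profilers to display.
pub fn compiler_move<T, const SIZE: usize>() {}
pub fn compiler_copy<T, const SIZE: usize>() {}
```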
Operations are only annotated if:
- Debug info is being generated and the feature is enabled
- The type meets the size threshold (default: 65 bytes, configurable via `-Zannotate-moves=SIZE`) and is not zero-sized
- The type has an in-memory (non-scalar) backend representation (scalars are moved in registers, not via memcpy)
This has a very small impact on object file size. With the default limit it's well under 0.1%, and even with a very small limit of 8 bytes it's still only ~1.5%. This could be enabled by default.
Offload device
LLVM's offload functionality usually expects an extra `dyn_ptr` argument. We could avoid it, but we will likely need it very soon in one of the follow-up PRs (e.g. to request shared memory), so we might as well add it already.
This PR adds a `%dyn_ptr` ptr argument to GPUKernel ABI functions if the offload feature is enabled.
WIP
r? ```@ghost```
Add default sanitizers to TargetOptions
Some sanitizers are part of a system's ABI, like the shadow call stack on AArch64 and RISC-V Fuchsia. Typically ABI options have other spellings, but LLVM has, for historical reasons, marked this as a sanitizer instead of an alternate ABI option. As a result, Fuchsia targets may not be compiled against the correct ABI unless this option is set. This hasn't caused correctness problems, since the backend reserves the SCS register and thus preserves its value. But it is an issue for unwinding, as the SCS will not be an array of PCs describing the complete call chain, and will have gaps from callers that don't use the correct ABI.
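A rough sketch of the shape this takes in a target spec (the `default_sanitizers` field name is my assumption, not necessarily the PR's exact spelling):

```rust
use rustc_target::spec::{SanitizerSet, TargetOptions};

// Hypothetical snippet from a Fuchsia target definition (sketch only):
// shadow call stack is declared part of the default ABI, so it is enabled
// even when the user passes no -Zsanitizer flag.
fn apply_abi_sanitizers(opts: &mut TargetOptions) {
    opts.default_sanitizers = SanitizerSet::SHADOWCALLSTACK; // assumed new field
}
```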
In the long term, I'd like to see all the sanitizer configs that all frontends copy from clang moved into LLVM's libFrontend, and exposed so that frontend consumers can use a small set of simple APIs to use sanitizers in a consistent way across the LLVM ecosystem, but that work is not ready today.
Add alignment parameter to `simd_masked_{load,store}`
This PR adds an alignment parameter to `simd_masked_load` and `simd_masked_store`, in the form of a const-generic enum `core::intrinsics::simd::SimdAlign`. This represents the alignment of the `ptr` argument in these intrinsics as follows:
- `SimdAlign::Unaligned` - `ptr` is unaligned/1-byte aligned
- `SimdAlign::Element` - `ptr` is aligned to the element type of the SIMD vector (default behavior in the old signature)
- `SimdAlign::Vector` - `ptr` is aligned to the SIMD vector type
The main motivation for this is stdarch: most vector loads are either fully aligned (to the vector size) or unaligned (byte-aligned), so the previous signature doesn't cut it.
Now, stdarch will mostly use `SimdAlign::Unaligned` and `SimdAlign::Vector`, whereas portable-simd will use `SimdAlign::Element`.
- [x] `cg_llvm`
- [x] `cg_clif`
- [x] `miri`/`const_eval`
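For illustration, here is a rough, self-contained sketch of the calling shape (the real intrinsic lives in `core::intrinsics::simd` and its exact parameter order is not reproduced here; the stand-in function and its body are mine):

```rust
#![feature(adt_const_params)]

use std::marker::ConstParamTy;

// Sketch mirroring the idea of `SimdAlign`, not its actual definition:
// a const-generic enum parameter needs ConstParamTy.
#[allow(dead_code)]
#[derive(ConstParamTy, PartialEq, Eq, Clone, Copy, Debug)]
enum SimdAlign {
    Unaligned,
    Element,
    Vector,
}

// Hypothetical stand-in for the intrinsic, to show how a caller names the
// alignment contract in the turbofish instead of it being fixed to the
// element alignment.
unsafe fn masked_load_sketch<T, const ALIGN: SimdAlign>(ptr: *const T) -> T {
    // A real implementation would pick an aligned or unaligned load based
    // on ALIGN; the sketch always does an unaligned read.
    unsafe { ptr.read_unaligned() }
}

fn main() {
    let xs = [1u32, 2, 3, 4];
    // stdarch-style call: no alignment assumption on the pointer.
    let first = unsafe { masked_load_sketch::<u32, { SimdAlign::Unaligned }>(xs.as_ptr()) };
    assert_eq!(first, 1);
}
```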
## Alternatives
Using a const-generic/"const" `u32` parameter as the alignment (and erroring during codegen if this argument is not a power of two). This, although more flexible than the enum, has a few drawbacks:
- If we use a const-generic argument, then portable-simd somehow needs to pass `align_of::<T>()` as the alignment, which isn't possible without GCE (generic const expressions)
- "const" function parameters are just an ugly hack, and a pain to deal with in non-LLVM backends
We can remedy the problem with the const-generic `u32` parameter by adding a special rule for the element alignment case (e.g. `0` could mean "use the alignment of the element type"), but I feel this is not as expressive as the enum approach, although I am open to suggestions.
cc `@workingjubilee` `@RalfJung` `@BoxyUwU`
Stabilize -Zno-jump-tables into -Cjump-tables=bool
I propose stabilizing the `-Zno-jump-tables` option as `-Cjump-tables=<bool>`.
# `-Zno-jump-tables` stabilization report
## What is the RFC for this feature and what changes have occurred to the user-facing design since the RFC was finalized?
No RFC was created for this option. This was a narrowly scoped option introduced in rust-lang/rust#105812 to support code generation requirements of the x86-64 Linux kernel, and eventually other targets as Rust for Linux grows.
The tracking issue is rust-lang/rust#116592.
## What behavior are we committing to that has been controversial? Summarize the major arguments pro/con.
The behavior of this flag is well defined, and mimics the existing `-fno-jump-tables` option currently available with LLVM and GCC with some caveats:
* Unlike clang or gcc, this option may be ignored by the code generation backend. Rust can support multiple code-generation backends. For stabilization, only the LLVM backend honors this option.
* The usage of this option will not guarantee a library or binary is free of jump tables. To ensure a jump-table free binary, all crates in the build graph must be compiled with this option. This includes implicitly linked crates such as std or core.
* This option only ensures that the crate being compiled is free of jump tables.
* No verification is done to ensure other crates are compiled with this option. Enforcing that code generation options are applied across the crate graph is out of scope for this option.
## What should the flag name be?
* As introduced, this option was named `-Zno-jump-tables`. However, other major toolchains allow both positive and negative variants of this option to toggle this feature. Renaming the option to `-Cjump-tables=<bool>` makes this option consistent and, if needed, expandable to other arguments in the future. Notably, many LLVM targets have configurable, differing thresholds for when to lower into a jump table.
## Are there extensions to this feature that remain unstable? How do we know that we are not accidentally committing to those?
No. This option is used exclusively to gate a very specific class of optimization.
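For context (my own illustration, not part of the report), this is the class of code generation the flag gates:

```rust
// A dense integer match like this is a typical jump-table candidate.
// Building it with the stabilized `-Cjump-tables=false` asks LLVM to lower
// it as compare-and-branch sequences instead of an indirect jump through a
// table.
pub fn classify(x: u8) -> u32 {
    match x {
        0 => 10,
        1 => 20,
        2 => 30,
        3 => 40,
        4 => 50,
        5 => 60,
        6 => 70,
        7 => 80,
        _ => 0,
    }
}
```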
## Summarize the major parts of the implementation and provide links into the code (or to PRs)
* The original PR rust-lang/rust#105812 by ```@ojeda```
* The stabilized CLI option is parsed as a bool:
68bfda9025/compiler/rustc_session/src/options.rs (L2025-L2026)
* This option adds an attribute to each LLVM function via:
68bfda9025/compiler/rustc_codegen_llvm/src/attributes.rs (L210-L215)
* Finally, the rustc book is updated with the new option:
68bfda9025/src/doc/rustc/src/codegen-options/index.md (L212-L223)
## Has a call-for-testing period been conducted? If so, what feedback was received?
No. The option as originally created is being used by Rust for Linux to build the x86-64 kernel without issue.
## What outstanding bugs in the issue tracker involve this feature? Are they stabilization-blocking?
There are no outstanding issues.
## Summarize contributors to the feature by name for recognition and assuredness that people involved in the feature agree with stabilization
* ```@ojeda``` implemented this feature in rust-lang/rust#105815 as `-Zno-jump-tables`.
* ```@tgross35``` created and maintained the tracking issue rust-lang/rust#116592, and provided feedback about the naming of the cli option.
## What FIXMEs are still in the code for that feature and why is it ok to leave them there?
There are none.
## What static checks are done that are needed to prevent undefined behavior?
This option cannot cause undefined behavior. It is a boolean option with well defined behavior in both cases.
## In what way does this feature interact with the reference/specification, and are those edits prepared?
This adds a new cli option to `rustc`. The documentation is updated, and the unstable documentation cleaned up in this PR.
## Does this feature introduce new expressions and can they produce temporaries? What are the lifetimes of those temporaries?
No.
## What other unstable features may be exposed by this feature?
None.
## What is tooling support like for this feature, w.r.t. rustdoc, clippy, rust-analyzer, rustfmt, etc.?
No support is required from other rust tooling.
## Open Items
- [x] Are there objections renaming `-Zno-jump-tables` to `-Cjump-tables=<bool>`? The consensus is no.
- [x] Is it desirable to keep `-Zno-jump-tables` for a period of time? The consensus is no.
---
Closes rust-lang/rust#116592
Add LLVM range attributes to slice length parameters
It'll take a bunch more work to do this in layout -- because of cycles in `struct Foo<'a>(&'a Foo<'a>);` -- so until we figure out how to do that well, just look for slices specifically and add the proper range for the length.
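For illustration (my own example, not one of the PR's tests), the length half of a slice's (ptr, len) pair has a statically known upper bound, which is what the `range` attribute encodes:

```rust
// A `&[u32]` argument is lowered as a (pointer, length) pair. A slice can
// never span more than isize::MAX bytes, so the length is bounded by
// isize::MAX / size_of::<u32>(); attaching an LLVM `range` attribute to
// that parameter lets the optimizer use the bound, e.g. when reasoning
// about `len * 4` not overflowing.
pub fn total(xs: &[u32]) -> u64 {
    xs.iter().map(|&x| u64::from(x)).sum()
}
```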
Rollup of 5 pull requests
Successful merges:
- rust-lang/rust#144194 (Provide additional context to errors involving const traits)
- rust-lang/rust#148232 (ci: add runners for vanilla LLVM 21)
- rust-lang/rust#148240 (rustc_codegen: fix musttail returns for cast/indirect ABIs)
- rust-lang/rust#148247 (Remove a special case and move another one out of reachable_non_generics)
- rust-lang/rust#148370 (Point at inner item when it uses generic type param from outer item or `Self`)
r? `@ghost`
`@rustbot` modify labels: rollup
Remove a special case and move another one out of reachable_non_generics
One is no longer necessary, the other is best placed in cg_llvm as it works around a cg_llvm bug.
rustc_codegen: fix musttail returns for cast/indirect ABIs
Fixes rust-lang/rust#148239 and https://github.com/rust-lang/rust/issues/144986
Explicit tail calls triggered `bug!` for any callee whose ABI returns via `PassMode::Cast`, and we forgot to forward the hidden `sret` out-pointer when the ABI requested an indirect return. The former caused an ICE; the latter produced malformed IR (wrong codegen) when the return value was large enough to need `sret`.
Updated the musttail helper to accept cast-mode returns and made it so that we pass the return pointer through the tail-call path.
Added two UI tests to demonstrate each case.
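A minimal sketch of the second kind of case (my own, not one of the added UI tests): the large return type forces an indirect `sret` return, and the `become` call has to forward the caller's return pointer.

```rust
#![feature(explicit_tail_calls)]
#![allow(incomplete_features)]

// Big enough that the ABI returns it via a hidden out-pointer (`sret`).
pub struct Big([u64; 16]);

pub fn callee(x: u64) -> Big {
    Big([x; 16])
}

pub fn caller(x: u64) -> Big {
    // The tail call must write through the same out-pointer `caller`
    // received, rather than allocating a fresh return slot.
    become callee(x)
}

fn main() {
    assert_eq!(caller(7).0[3], 7);
}
```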
This is my first time contributing, please do check if I did it right.
r? theemathas
cg_llvm: Pass `debuginfo_compression` through FFI as an enum
There are only three possible values, making an enum more appropriate.
This avoids string allocation on the Rust side, and avoids ad-hoc `!strcmp` to convert back to an enum on the C++ side.
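Roughly, the idea is the following (names here are illustrative, not necessarily the PR's exact spelling):

```rust
// A fixed-repr enum can be passed by value across the rustc <-> LLVM
// wrapper boundary, so the Rust side no longer formats a string and the
// C++ side no longer needs a strcmp chain to recover the variant.
#[repr(C)]
pub enum DebugInfoCompression {
    None = 0,
    Zlib = 1,
    Zstd = 2,
}
```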
Extend attribute deduction to determine whether parameters using the indirect pass mode might have their address captured. Similarly to the deduction of the `readonly` attribute, this information facilitates memcpy optimizations.
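A small illustration of the kind of call this helps (my own example, not from the patch):

```rust
// `Big` is passed in the indirect mode: the caller makes a stack copy and
// passes a pointer to it. If rustc can deduce that `sum` never captures
// that pointer (it only reads through it), LLVM knows the argument slot is
// dead after the call and has more freedom to remove the caller-side copy.
pub struct Big(pub [u64; 32]);

#[inline(never)]
pub fn sum(data: Big) -> u64 {
    data.0.iter().sum()
}

pub fn caller(b: &Big) -> u64 {
    // The temporary built here is the memcpy the deduced attributes help
    // optimize away.
    sum(Big(b.0))
}

fn main() {
    let b = Big([1; 32]);
    assert_eq!(caller(&b), 32);
}
```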
This was a bit more invasive than I had hoped. An alternate approach would be to add an extra `call_intrinsic_with_attrs()` that would have the new-in-this-change signature for `call_intrinsic`, but this felt about equivalent and made it a little easier to audit the relevant call sites of `call_intrinsic()`.
Offload host2
r? `@oli-obk`
A follow-up to my previous GPU host PR. With this, I can (in theory) run a sufficiently simple Rust function on GPUs. I tested it on AMD, where the amdgcn target of rustc causes issues due to address space casts which might not be valid. If I fix them manually, I can run the generated IR on an AMD GPU. This should conceptually also work on NVIDIA or Intel. I updated the dev-guide accordingly: https://rustc-dev-guide.rust-lang.org/offload/usage.html
I am unhappy with the number of standalone functions in my offload code, so in my second commit I organized some of the code around two structs that are Rust versions of the LLVM/Offload structs they represent. The structs themselves only have doc comments. Since I lower everything directly to LLVM IR, I didn't see much value in modelling the struct member variables.
Fix ICE on offsetted ZST pointer
I'm not sure this is the *right* fix, but it's simple enough and does roughly what I'd expect. Like with the previous optimization to codegen a usize rather than a zero-sized static, there's no guarantee that we continue returning a particular value from the offsetting.
A grep for `const_usize.*align` found the same code copied into rustc_codegen_gcc and cranelift, but a quick skim didn't find other cases of similar 'optimization'. That said, I'm not convinced I caught everything; it's not trivial to search for this.
Closes rust-lang/rust#147516
Add vsx register support for ppc inline asm, and implement preserves_flag option
This should address the last(?) missing pieces of inline asm for ppc:
* Explicit VSX register support. ISA 2.06 (POWER7) added a 64x128b register overlay that extends the FPRs to 128b and unifies them with the VMX (AltiVec) registers. Implementation details within gcc/llvm percolate up and require using the `x` template modifier; I have updated the inline asm to implicitly include this for VSX arguments which do not specify it. ~~Support for the gcc codegen backend is still a todo.~~
* Implement the `preserves_flags` option. All ABIs and ISAs store their flags in `cr`, and the carry bit lives inside `xer`. The other status registers hold sticky bits or control bits which do not affect branch instructions (see the sketch below).
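For illustration (my own sketch, not one of the PR's tests), `preserves_flags` on PowerPC amounts to a promise that the asm leaves `cr` and the carry bit in `xer` alone:

```rust
// PowerPC inline asm is still gated on nightly.
#![feature(asm_experimental_arch)]

#[cfg(target_arch = "powerpc64")]
fn add_keep_flags(a: u64, b: u64) -> u64 {
    use core::arch::asm;
    let r: u64;
    unsafe {
        // `add` (unlike `add.` or `addc`) writes no condition or carry
        // bits, so promising `preserves_flags` is sound; the compiler may
        // then keep values in `cr` live across the asm block.
        asm!(
            "add {r}, {a}, {b}",
            r = out(reg) r,
            a = in(reg) a,
            b = in(reg) b,
            options(pure, nomem, nostack, preserves_flags),
        );
    }
    r
}
```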
There is some interest in the e500 (powerpcspe) port. Architecturally, it has a very different FP ISA and includes a SIMD extension called SPE (which is not IBM's Cell SPE). Notably, it does not have AltiVec/FPR/VSX registers. It also has an SPE accumulator register which its ABI marks as volatile, but I am not sure if the compiler uses it.
Move computation of allocator shim contents to cg_ssa
In the future this should make it easier to use weak symbols for the allocator shim on platforms that properly support weak symbols. And it would allow reusing the allocator shim code for handling default implementations of the upcoming externally implementable items feature on platforms that don't properly support weak symbols.
In addition, to make this possible, the alloc error handler is now handled in such a way that it is possible to avoid the allocator shim when liballoc is compiled without `no_global_oom_handling`, provided you use `#[alloc_error_handler]`. Previously this was only possible if you avoided liballoc entirely or compiled it with `no_global_oom_handling`. You still need to avoid libstd and to define the symbol that indicates that avoiding the allocator shim is unstable.
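A minimal sketch of that setup, assuming a `no_std` crate that links liballoc but not libstd (the unstable symbol for opting out of the allocator shim mentioned above is not shown):

```rust
#![feature(alloc_error_handler)]
#![no_std]

extern crate alloc;

use core::alloc::Layout;

// With a user-provided handler, liballoc's OOM path no longer has to be
// satisfied by the allocator shim.
#[alloc_error_handler]
fn on_oom(_layout: Layout) -> ! {
    loop {}
}
```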
Where supported, VSX is a 64x128b register set which encompasses both the floating point and vector registers. In the type tests, `xvsqrtdp` is used as it is the only two-argument VSX opcode supported by all targets on LLVM. If you need to copy a VSX register, the preferred way is `xxlor xt, xa, xa`.