The CStr docs used to say things about CStr that are only true for &CStr.
Also, it's the bytes that are being borrowed, not the reference; if anything, the reference is the one doing the borrowing rather than being borrowed.
Add new `function_casts_as_integer` lint
The `function_casts_as_integer` lint detects cases where users cast a function pointer into an integer.
*warn-by-default*
### Example
```rust
fn foo() {}
let x = foo as usize;
```
```
warning: casting a function into an integer implicitly
--> $DIR/function_casts_as_integer.rs:9:17
|
LL | let x = foo as usize;
| ^^^^^^^^
|
help: add `fn() as usize`
|
LL | let x = foo as fn() as usize;
| +++++++
```
### Explanation
You should never cast a function directly to an integer; cast it to a `fn` pointer first to make it obvious what's going on. This also helps prevent confusion with (associated) constants.
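Following the suggestion, casting through an explicit function pointer type first makes the conversion obvious:
```rust
fn foo() {}
let x = foo as fn() as usize;
```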
Related to https://github.com/rust-lang/rust/issues/81686 and https://stackoverflow.com/questions/68701177/whats-the-meaning-of-casting-a-rust-enum-variant-to-a-numeric-data-type
r? `@urgau`
stop specializing on `Copy`
fixes https://github.com/rust-lang/rust/issues/132442
`std` specializes on `Copy` to optimize certain library functions such as `clone_from_slice`. This is unsound, however, as the `Copy` implementation may not always be applicable because of lifetime bounds, which specialization does not take into account; the result is that values are copied even though they are not `Copy`. For instance, this code:
```rust
use std::cell::Cell;

struct SometimesCopy<'a>(&'a Cell<bool>);

impl<'a> Clone for SometimesCopy<'a> {
    fn clone(&self) -> Self {
        self.0.set(true);
        Self(self.0)
    }
}

impl Copy for SometimesCopy<'static> {}

fn main() {
    let clone_called = Cell::new(false);
    // As SometimesCopy<'clone_called> is not 'static, this must run `clone`,
    // setting the value to `true`.
    let _ = [SometimesCopy(&clone_called)].clone();
    assert!(clone_called.get());
}
```
should not panic, but does ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=6be7a48cad849d8bd064491616fdb43c)).
To solve this, this PR introduces a new `unsafe` trait: `TrivialClone`. This trait may be implemented whenever the `Clone` implementation is equivalent to copying the value (i.e. `fn clone(&self) -> Self { *self }`). Because of lifetime erasure, there is no way for the `Clone` implementation to observe lifetime bounds, so even if the `TrivialClone` implementation has stricter bounds than the `Clone` implementation, its invariant still holds. Therefore, it is sound to specialize on `TrivialClone`.
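As a hedged illustration of the invariant (this is not the actual `core` definition; the real trait is unstable and carries the specialization-related attributes omitted here):
```rust
// Hedged sketch, not the actual core definition. Safety invariant:
// only implement this if `clone` is equivalent to copying the value.
pub unsafe trait TrivialClone: Clone {}

#[derive(Copy)]
struct Meters(u32);

impl Clone for Meters {
    fn clone(&self) -> Self {
        *self // literally a copy, so the impl below is sound
    }
}

unsafe impl TrivialClone for Meters {}

fn main() {
    let m = Meters(5);
    assert_eq!(m.clone().0, m.0);
}
```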
I've changed all `Copy` specializations in the standard library to specialize on `TrivialClone` instead. Unfortunately, the unsound `#[rustc_unsafe_specialization_marker]` attribute on `Copy` cannot be removed in this PR as `hashbrown` still depends on it. I'll make a PR updating `hashbrown` once this lands.
With `Copy` no longer being considered for specialization, this change alone would mean the standard library optimizations no longer apply to user types unaware of `TrivialClone`. To avoid this and restore the optimizations in most cases, I have changed the expansion of `#[derive(Clone)]`: currently, whenever both `Clone` and `Copy` are derived, the `clone` method performs a copy of the value. With this PR, the derive macro also adds a `TrivialClone` implementation to make this case observable through specialization. I anticipate that most users will use `#[derive(Clone, Copy)]` whenever both are applicable, so most users will still benefit from the library optimizations.
Unfortunately, Hyrum's law applies to this PR: there are some popular crates which rely on the precise specialization behaviour of `core` to implement "specialization at home", e.g. [`libAFL`](89cff63702/libafl_bolts/src/tuples.rs (L27-L49)). I have no remorse for breaking such horrible code, but perhaps we should open other, better ways to satisfy their needs – for example by dropping the `'static` bound on `TypeId::of`...
Constify `ControlFlow` methods with unstable bounds
Feature: `const_control_flow`
Tracking issue: rust-lang/rust#148739
This PR constifies the methods on `ControlFlow` with a dependency on rust-lang/rust#143874.
Constify `ControlFlow` methods without unstable bounds
Feature: `min_const_control_flow`
Tracking issue: rust-lang/rust#148738
This PR constifies some of the methods on `ControlFlow`.
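A hedged sketch of the kind of const usage this unlocks (nightly only; whether `is_break` is among the constified methods is an assumption here):
```rust
// Hedged sketch: assumes `is_break` is constified under this gate.
#![feature(min_const_control_flow)]
use std::ops::ControlFlow;

const FLOW: ControlFlow<u8, ()> = ControlFlow::Break(3);
const IS_BREAK: bool = FLOW.is_break();

fn main() {
    assert!(IS_BREAK);
}
```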
Remove `#[const_trait]`
Remove `#[const_trait]` since we now have `const trait`. Update all structured diagnostics that still suggested the attribute.
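A hedged before/after sketch (nightly; the `const_trait_impl` gate and exact syntax may still shift):
```rust
#![feature(const_trait_impl)]

// Before: #[const_trait] trait Greet { ... }
// After: the `const` keyword is part of the trait declaration itself.
const trait Greet {
    fn greet(&self) -> u8;
}

struct Friendly;

impl const Greet for Friendly {
    fn greet(&self) -> u8 {
        42
    }
}

// The method is callable in const contexts through the const impl.
const ANSWER: u8 = Friendly.greet();

fn main() {
    assert_eq!(ANSWER, 42);
}
```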
r? `@rust-lang/project-const-traits`
Add `overflow_checks` intrinsic
This adds an intrinsic that allows code in a pre-built library to inherit the overflow-checks setting from a crate depending on it. This enables code in the standard library to explicitly change behavior based on whether overflow checks are enabled, regardless of the setting used when the standard library was compiled.
This is very similar to the `ub_checks` intrinsic, and refactors the two to use a common mechanism.
The primary use case for this is to allow the new `RangeFrom` iterator to yield the maximum element before overflowing, as requested [here](https://github.com/rust-lang/rust/issues/125687#issuecomment-2151118208). This PR includes a working `IterRangeFrom` implementation based on this new intrinsic that exhibits the desired behavior.
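A hedged sketch of the selection pattern this enables (the intrinsic path and signature here are assumptions, mirroring the existing `ub_checks` intrinsic):
```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

// Hedged sketch: assumes an `overflow_checks()` intrinsic analogous to
// `core::intrinsics::ub_checks()`. In a pre-built std, this branch would
// resolve according to the *downstream* crate's overflow-checks setting.
fn step(x: u8) -> u8 {
    if core::intrinsics::overflow_checks() {
        x.checked_add(1).expect("attempt to add with overflow")
    } else {
        x.wrapping_add(1)
    }
}

fn main() {
    assert_eq!(step(1), 2);
}
```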
[Prior discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Ability.20to.20select.20code.20based.20on.20.60overflow_checks.60.3F)
update isolate_highest_one for NonZero<T>
## Rationale
Let `x = self` and
`m = (((1 as $Int) << (<$Int>::BITS - 1)).wrapping_shr(self.leading_zeros()))`
Then the previous code computed `NonZero::new_unchecked(x & m)`.
Since `m` has exactly one bit set (the most significant 1-bit of `x`), `(x & m) == m`.
Therefore, the masking step was redundant.
The shift is safe and does not need wrapping because:
* `self.leading_zeros() < $Int::BITS` because `self` is non-zero.
* The result of `unchecked_shr` is non-zero, satisfying the `NonZero` invariant; if wrapping could occur, the result could be zero and the `NonZero` invariant would be violated.
## Why this micro-optimization?
The old code was suboptimal: it duplicated `$Int`'s `isolate_highest_one` logic instead of delegating to it. Since the type already wraps `$Int`, either delegation should be used for clarity or, if a custom implementation is kept, it should be optimized as above.
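A minimal sketch of the simplified computation, written non-generically for `NonZero<u32>` (the real code is generic over `$Int` via a macro):
```rust
use std::num::NonZero;

// Shift the top bit down to the position of x's most significant 1-bit.
// No masking step is needed: the result already equals `m` from the
// rationale above.
fn isolate_highest_one(x: NonZero<u32>) -> NonZero<u32> {
    // x is non-zero, so leading_zeros() < 32 and the shift cannot
    // overflow; the result has exactly one bit set, hence is non-zero.
    let m = (1u32 << 31) >> x.leading_zeros();
    unsafe { NonZero::new_unchecked(m) }
}

fn main() {
    let x = NonZero::new(0b0110_u32).unwrap();
    assert_eq!(isolate_highest_one(x).get(), 0b0100);
}
```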
Add -Zannotate-moves for profiler visibility of move/copy operations (codegen)
**Note:** this is an alternative implementation of https://github.com/rust-lang/rust/pull/147206; rather than being a MIR transform, it adds the annotations closer to codegen. It's functionally the same, but the implementation is lower-impact and may be more correct.
---
This implements a new unstable compiler flag `-Zannotate-moves` that makes move and copy operations visible in profilers by creating synthetic debug information. This is achieved with zero runtime cost by manipulating debug info scopes to make moves/copies appear as calls to `compiler_move<T, SIZE>` and `compiler_copy<T, SIZE>` marker functions in profiling tools.
This allows developers to identify expensive move/copy operations in their code using standard profiling tools, without requiring specialized tooling or runtime instrumentation.
The implementation works at codegen time. When processing MIR operands (`Operand::Move` and `Operand::Copy`), the codegen creates an `OperandRef` with an optional `move_annotation` field containing an `Instance` of the appropriate profiling marker function. When storing the operand, `store_with_annotation()` wraps the store operation in a synthetic debug scope that makes it appear inlined from the marker.
Two marker functions (`compiler_move` and `compiler_copy`) are defined in `library/core/src/profiling.rs`. These are never actually called - they exist solely as debug info anchors.
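A hedged sketch of what such anchors could look like (the actual definitions in `library/core/src/profiling.rs` may differ):
```rust
// Hypothetical sketch, not the actual core definitions. The bodies are
// empty because the functions are never called; only their debug info
// (name, type parameter, size) is consumed by the profiler.
#[inline(never)]
pub fn compiler_move<T, const SIZE: usize>() {}

#[inline(never)]
pub fn compiler_copy<T, const SIZE: usize>() {}
```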
Operations are only annotated if:
- debug info is being generated and the feature is enabled,
- the type meets the size threshold (default: 65 bytes, configurable via `-Zannotate-moves=SIZE`) and is not zero-sized, and
- the type has a memory (non-scalar) representation, since scalars are kept in registers rather than `memcpy`'d.
This has a very small impact on object file size: with the default threshold it's well under 0.1%, and even with a very small threshold of 8 bytes it's still only ~1.5%. This could be enabled by default.
Sync str::rsplit_once example with str::split_once
This adds a `"cfg=".rsplit_once('=')` case to the `rsplit_once` example, bringing it in sync with the example for `split_once`: for consistency, and to make life easier for those who want to verify the behaviour of this specific edge case.
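For reference, the added case and its expected result:
```rust
assert_eq!("cfg=".rsplit_once('='), Some(("cfg", "")));
```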
style: Use binary literals instead of hex literals in doctests for `highest_one` and `lowest_one`
For example, I think it's easier to understand that the index of the highest bit set to one in `16` is `4` as `0b10000` than as `0x10`.
```rust
assert_eq!(0b10000_u64.highest_one(), Some(4));
```
Instead of:
```rust
assert_eq!(0x10_u64.highest_one(), Some(4));
```
add extend_front to VecDeque with specialization like extend
ACP: https://github.com/rust-lang/libs-team/issues/658
Tracking issue: rust-lang/rust#146975
_Text below was written before opening the ACP_
The feature was requested in rust-lang/rust#69939; I recently needed it as well, so I decided to implement it as my first contribution to the Rust standard library. I plan on doing more but wanted to start with a small change.
Some questions I had (both on implementation and design) with answers:
- Q: `extend` allows iterators that yield `&T` where `T` is `Clone`; should `extend_front` do so too?
A: No, users can use [`copied`](https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.copied) and/or [`cloned`](https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.cloned).
- Q: Does this need a whole new trait like Extend or only a method on `VecDeque`?
A: No, see ACP.
- Q: How do I deal with all the code duplication? Most code is similar to that of `extend`; maybe there is a nice way to factor out the code around `push_unchecked`/`push_front_unchecked`.
A: Will come back to this later.
- Q: Why are certain things behind gates like `cfg(not(test))` (e.g. `vec::IntoIter` here) and `cfg(not(no_global_oom_handling))` (e.g. `Vec::extend_from_within`)? (I am also looking at implementing `VecDeque::extend_from_within`.)
A: See https://github.com/rust-lang/rust/pull/146861#pullrequestreview-3250163369
- Q: Should `extend_front` act like repeated pushes to the front of the queue? This reverses the order of the elements. Doing it differently might incur an extra move if the iterator length is not known up front (where do you start placing elements in the buffer?).
A: `extend_front` acts like repeated pushes, while `prepend` preserves the element order; see the ACP or tracking issue, and the sketch below.
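To illustrate the chosen ordering semantics, here is a hedged sketch that simulates `extend_front` with repeated `push_front` (the real method is unstable, so this runs on stable today):
```rust
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<i32> = VecDeque::from([4, 5]);
    // `extend_front` acts like repeated push_front, which reverses the
    // order of the new elements; `dq.extend_front([3, 2, 1])` would be
    // equivalent to:
    for x in [3, 2, 1] {
        dq.push_front(x);
    }
    assert_eq!(Vec::from(dq), [1, 2, 3, 4, 5]);
}
```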