`Printer` cleanups
The trait `Printer` is implemented by six types, and the sub-trait `PrettyPrinter` is implemented by three of those types. The traits and the impls are complex and a bit of a mess. This PR starts to clean them up.
r? ``@davidtwco``
Distinguish prepending and replacing self ty in predicates
There are two kinds of functions called `with_self_ty`:
1. Prepends the `Self` type onto an `ExistentialPredicate` which lacks it in its internal representation.
2. Replaces the `Self` type of an existing predicate, either for diagnostics purposes or in the new trait solver when normalizing that self type.
This PR distinguishes these two because I often want to only grep for one of them. Namely, let's call it `with_replaced_self_ty` when all we're doing is replacing the self type.
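As a toy model of the distinction (stand-in types only; the real rustc signatures differ), the two operations look roughly like this:
```rust
#[derive(Clone, Debug, PartialEq)]
struct Ty(&'static str);

/// Like `ty::ExistentialTraitRef`: the self type is *not* stored.
struct ExistentialTraitRef {
    trait_name: &'static str,
    args: Vec<Ty>,
}

/// Like `ty::TraitRef`: the self type is the first arg.
struct TraitRef {
    trait_name: &'static str,
    args: Vec<Ty>,
}

impl ExistentialTraitRef {
    /// Kind 1: *prepend* the missing self type to recover a full trait ref.
    fn with_self_ty(&self, self_ty: Ty) -> TraitRef {
        let mut args = vec![self_ty];
        args.extend(self.args.iter().cloned());
        TraitRef { trait_name: self.trait_name, args }
    }
}

impl TraitRef {
    /// Kind 2: *replace* the self type that is already there
    /// (the operation this PR renames to `with_replaced_self_ty`).
    fn with_replaced_self_ty(&self, new_self_ty: Ty) -> TraitRef {
        let mut args = self.args.clone();
        args[0] = new_self_ty;
        TraitRef { trait_name: self.trait_name, args }
    }
}

fn main() {
    let existential = ExistentialTraitRef { trait_name: "Iterator", args: vec![] };
    let full = existential.with_self_ty(Ty("MyIter"));
    let swapped = full.with_replaced_self_ty(Ty("OtherIter"));
    assert_eq!(full.args[0], Ty("MyIter"));
    assert_eq!(swapped.args[0], Ty("OtherIter"));
}
```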
Currently they are mostly named `cx`, which is a terrible name for a
value whose type impls `Printer`/`PrettyPrinter`, and is easy to confuse
with values of other types like `TyCtxt`. This commit changes them to
`p`. A couple of existing `p` variables had to be renamed to make way.
This helps me understand the structure of the code a lot.
If any of these are actually reachable, we can put the old code back,
add a new test case, and we will have improved our test coverage.
Remove the witness type from coroutine *args* (without actually removing the type)
This does as much of rust-lang/rust#144157 as we can without having to break rust-lang/rust#143545 and/or introduce some better way of handling higher ranked assumptions.
Namely, it:
* Stalls coroutines based off of the *coroutine* type rather than the witness type.
* Reworks the dtorck constraint hack to not rely on the witness type.
* Removes the witness type from the args of the coroutine, eagerly creating the type for nested obligations when needed (auto/clone impls).
I'll experiment with actually removing the witness type in a follow-up.
r? lcnr
This commit changes it to store a `Region` instead of a `RegionVid` for the `Var` cases:
- We avoid having to call `Region::new_var` to re-create `Region`s from
`RegionVid`s in a few places, avoiding the interning process, giving a
small perf win. (At the cost of the type allowing some invalid
combinations of values.)
- All the cases now store two `Region`s, so the commit also separates
the `ConstraintKind` (a new type) from the `sub` and `sup` arguments
in `Constraint`.
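A minimal sketch of the resulting shape, with a stand-in `Region` type and variant names guessed from the old enum (the real rustc definitions may differ in detail):
```rust
// Illustrative stand-in for the interned `ty::Region<'tcx>`.
#[derive(Copy, Clone, Debug)]
struct Region(u32);

/// Which flavor of `sub: sup` relation a constraint records.
enum ConstraintKind {
    VarSubVar,
    RegSubVar,
    VarSubReg,
    RegSubReg,
}

/// Every case now stores two `Region`s, so the kind sits next to the
/// `sub`/`sup` payload instead of wrapping it.
struct Constraint {
    kind: ConstraintKind,
    sub: Region,
    sup: Region,
}
```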
Currently there is `Ty` and `BoundTy`, and `Region` and `BoundRegion`,
and `Const` and... `BoundVar`. An annoying inconsistency.
This commit repurposes the existing `BoundConst`, which was barely used,
so it's the partner to `Const`. Unlike `BoundTy`/`BoundRegion` it lacks
a `kind` field but it's still nice to have because it makes the const
code more similar to the ty/region code everywhere.
The commit also removes `impl From<BoundVar> for BoundTy`, which has a
single use and doesn't seem worth it.
These changes fix the "FIXME: We really should have a separate
`BoundConst` for consts".
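A rough sketch of the resulting symmetry, with stand-ins for the real rustc types:
```rust
// Illustrative stand-ins only.
struct BoundVar(u32);
struct BoundTyKind;
struct BoundRegionKind;

struct BoundTy {
    var: BoundVar,
    kind: BoundTyKind,
}

struct BoundRegion {
    var: BoundVar,
    kind: BoundRegionKind,
}

/// No `kind` field, unlike the two above, but the const code now mirrors
/// the ty/region code wherever bound variables are handled.
struct BoundConst {
    var: BoundVar,
}
```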
uniquify root goals during HIR typeck
We need to rely on region identity to deal with hangs such as https://github.com/rust-lang/trait-system-refactor-initiative/issues/210 and to keep the current behavior of `fn try_merge_responses`.
This is a problem as borrowck starts by replacing each *occurrence* of a region with a unique inference variable. This frequently splits a single region during HIR typeck into multiple distinct regions. As we assume goals to always succeed during borrowck, relying on two occurrences of a region being identical during HIR typeck causes ICEs. See the now-fixed examples in https://github.com/rust-lang/trait-system-refactor-initiative/issues/27 and rust-lang/rust#139409.
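As a purely illustrative example (not a test from this PR) of how occurrences get split, consider a goal whose two region occurrences are the same region during HIR typeck:
```rust
trait Trait<'a> {}
impl<'a> Trait<'a> for &'a u32 {}

fn demand<'a, T: Trait<'a>>(_: &T) {}

fn caller<'a>(x: &'a u32) {
    // During HIR typeck the goal is roughly `&'a u32: Trait<'a>`, where
    // both occurrences of `'a` are literally the same region. Borrowck
    // replaces *each occurrence* with a fresh nll var, so it instead
    // re-proves something shaped like `&'?0 u32: Trait<'?1>`. Uniquifying
    // root goals during HIR typeck gives typeck the same shape, so it
    // cannot rely on the two occurrences being equal.
    demand(&x);
}
```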
We've previously tried to avoid this issue by always *uniquifying* regions when canonicalizing goals. This prevents caching subtrees during canonicalization, which resulted in hangs for very large types. People rely on such types in practice, which caused us to revert our attempt to reinstate `#[type_length_limit]` in https://github.com/rust-lang/rust/pull/127670. The complete list of relevant changes:
- rust-lang/rust#107981
- rust-lang/rust#110180
- rust-lang/rust#114117
- rust-lang/rust#130821
After more consideration, all occurrences of such large types need to be handled outside of typeck/borrowck. We know this as we already walk over all types in the MIR body when replacing their regions with nll vars.
This PR therefore enables us to rely on region identity inside of the trait solver by exclusively **uniquifying root goals during HIR typeck**. These are the only goals we assume to hold during borrowck. This alone is insufficient, as type inference variables may "hide" regions we later uniquify. Because of this, we now stash proven goals which depend on inference variables in HIR typeck and reprove them after writeback. This closes https://github.com/rust-lang/trait-system-refactor-initiative/issues/127.
This was originally part of rust-lang/rust#144258 but I've moved it into a separate PR. While I believe we need to rely on region identity to fix the performance issues in some way, I don't know whether rust-lang/rust#144258 is the best approach to actually do so. Regardless of how we end up dealing with the hangs, however, this change is necessary and desirable.
r? `@compiler-errors` or `@BoxyUwU`
`-Zhigher-ranked-assumptions`: Consider WF of coroutine witness when proving outlives assumptions
### TL;DR
This PR introduces an unstable flag `-Zhigher-ranked-assumptions` which tests out a new algorithm for dealing with some of the higher-ranked outlives problems that come from auto trait bounds on coroutines. See:
* rust-lang/rust#110338
While it doesn't fix all of the issues, it certainly fixes many of them, so I'd like to get this landed so people can test the flag on their own code.
### Background
Consider, for example:
```rust
use std::future::Future;
trait Client {
    type Connecting<'a>: Future + Send
    where
        Self: 'a;

    fn connect(&self) -> Self::Connecting<'_>;
}

fn call_connect<C>(c: C) -> impl Future + Send
where
    C: Client + Send + Sync,
{
    async move { c.connect().await }
}
```
Due to the fact that we erase the lifetimes in a coroutine, we can think of the interior type of the async block as something like: `exists<'r, 's> { C, &'r C, C::Connecting<'s> }`. The first field is the `c` we capture, the second is the auto-ref that we perform on the call to `.connect()`, and the third is the resulting future we're awaiting at the first and only await point. Note that every region is uniquified differently in the interior types.
For the async block to be `Send`, we must prove that each of the interior types is `Send`. First, we have an `exists<'r, 's>` binder, which needs to be instantiated universally since we treat the regions in this binder as *unknown*[^exist]. This gives us the types `{ C, &'!r C, C::Connecting<'!s> }`. Proving `C: Send` follows directly from the `C: Send` where clause, and proving `&'!r C: Send` is easy due to a [`Send`](https://doc.rust-lang.org/nightly/std/marker/trait.Send.html#impl-Send-for-%26T) impl for references.
Proving `C::Connecting<'!s>: Send` can only be done via the item bound, which then requires `C: '!s` to hold (due to the `where Self: 'a` on the associated type definition). Unfortunately, we don't know that `C: '!s` since we stripped away any relationship between the interior type and the param `C`. This leads to a bogus borrow checker error today!
### Approach
Coroutine interiors are well-formed by virtue of being borrow-checked: as long as callers invoke the parent function in a well-formed way, the substitutions should also be well-formed. Therefore, in our example above, we should be able to deduce the assumption that `C: '!s` holds from the well-formedness of the interior type `C::Connecting<'!s>`.
This PR introduces the notion of *coroutine assumptions*, which are the outlives assumptions that we can assume hold due to the well-formedness of a coroutine's interior types. These are computed alongside the coroutine types in the `CoroutineWitnessTypes` struct. When we instantiate the binder when proving an auto trait for a coroutine, we instantiate the `CoroutineWitnessTypes` and stash these newly instantiated assumptions in the region storage in the `InferCtxt`. Later on in lexical region resolution or MIR borrowck, we use these registered assumptions to discharge any placeholder outlives obligations that we would otherwise not be able to prove.
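A rough sketch of the data being described, with stand-in types and illustrative field names rather than the exact rustc definitions:
```rust
// Stand-ins for `ty::Ty<'tcx>` and `ty::Region<'tcx>`.
struct Ty;
struct Region;

/// Outlives facts (`T: 'r` or `'r: 's`) that hold because the coroutine's
/// interior types are well-formed.
enum OutlivesAssumption {
    TypeOutlives(Ty, Region),
    RegionOutlives(Region, Region),
}

/// The types that live across await points, plus the assumptions computed
/// alongside them. Instantiating the coroutine's binder instantiates both,
/// and the assumptions are stashed in the `InferCtxt`'s region storage for
/// lexical region resolution / MIR borrowck to consume later.
struct CoroutineWitnessTypes {
    types: Vec<Ty>,
    assumptions: Vec<OutlivesAssumption>,
}
```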
### How well does it work?
I've added a ton of tests of different reported situations that users have shared on issues like rust-lang/rust#110338, and an (anecdotally) large number of those examples end up working straight out of the box! Some limitations are described below.
### How badly does it not work?
The behavior today is quite rudimentary, since we currently use these assumptions to discharge placeholder outlives obligations pretty early in region resolution. This manifests itself as some limitations on the code that we accept.
For example, `tests/ui/async-await/higher-ranked-auto-trait-11.rs` continues to fail. In that test, we must prove that a placeholder is equal to a universal for a param-env candidate to hold when proving an auto trait, e.g. `'!1 = 'a` is required to prove `T: Trait<'!1>` in a param-env that has `T: Trait<'a>`. Unfortunately, at that point in the MIR body, we only know that the placeholder is equal to some body-local existential NLL var `'?2`, which only gets equated to the universal `'a` when being stored into the return local later on in MIR borrowck.
This could be fixed by integrating these assumptions into the type outlives machinery in a more first-class way, and delaying things to the end of MIR typeck when we know the full relationship between existential and universal NLL vars. Doing this integration is quite difficult today.
`tests/ui/async-await/higher-ranked-auto-trait-11.rs` fails because we don't compute the full transitive outlives relations between placeholders. In that test, we have in our region assumptions that some `'!1 = '!2` and `'!2 = '!3`, but we must prove `'!1 = '!3`.
This can be fixed by computing the set of coroutine outlives assumptions in a more transitive way, or as I mentioned above, integrating these assumptions into the type outlives machinery in a more first-class way, since it's already responsible for the transitive outlives assumptions of universals.
### Moving forward
I'm still quite happy with this implementation, and I'd like to land it for testing. I may work on overhauling both the way we compute these coroutine assumptions and also how we deal with the assumptions during (lexical/nll) region checking. But for now, I'd like to give users a chance to try out this new `-Zhigher-ranked-assumptions` flag to uncover more shortcomings.
[^exist]: Instantiating this binder with infer regions would be incomplete, since we'd be asking for *some* instantiation of the interior types, not proving something for *all* instantiations of the interior types.
Unify `CoroutineWitness` sooner in typeck, and stall coroutine obligations based off of `TypingEnv`
* Stall coroutine obligations based off of `TypingMode` in the old solver.
* Eagerly assign `TyKind::CoroutineWitness` to the witness arg of coroutines during typeck, rather than deferring that until the end of typeck.
r? lcnr
This is part of https://github.com/rust-lang/rust/issues/143017.
Use relative visibility when noting sealed trait to reduce false positives
Fixes rust-lang/rust#143392
I used relative visibility instead of just determining whether the trait is public or not.
r? compiler
trait_sel: `MetaSized` always holds temporarily
As a temporary measure while a proper fix for `tests/ui/sized-hierarchy/incomplete-inference-issue-143992.rs` is implemented, make `MetaSized` obligations always hold. In effect, temporarily reverting the `sized_hierarchy` feature. This is a small change that can be backported.
cc rust-lang/rust#143992
r? ```@lcnr```
Split up the `unknown_or_malformed_diagnostic_attributes` lint
This splits up the lint into the following lints, which together form a lint group:
- `unknown_diagnostic_attributes` - triggers if the attribute is unknown to the current compiler
- `misplaced_diagnostic_attributes` - triggers if the attribute exists but it is not placed on the item kind it's meant for
- `malformed_diagnostic_attributes` - triggers if the attribute's syntax or options are invalid
- `malformed_diagnostic_format_literals` - triggers if the format string literal is invalid, for example if it has unpaired curly braces or invalid parameters
  - this PR doesn't create any, but future lints for things like deprecations can also go in this group.
This PR does not start emitting lints in places that previously did not emit any.
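Purely illustrative examples of where each lint would fire, assuming the split lands as described above:
```rust
// `unknown_diagnostic_attributes`: the attribute does not exist at all.
#[diagnostic::this_attribute_does_not_exist]
struct A;

// `misplaced_diagnostic_attributes`: `do_not_recommend` exists, but it is
// meant for trait impls, not type definitions.
#[diagnostic::do_not_recommend]
struct B;

// `malformed_diagnostic_attributes`: `on_unimplemented` is in the right
// place, but the option is not one it understands.
#[diagnostic::on_unimplemented(not_a_real_option = "oops")]
trait C {}

// `malformed_diagnostic_format_literals`: valid option, but the format
// string has an unpaired curly brace.
#[diagnostic::on_unimplemented(message = "unclosed {brace")]
trait D {}
```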
## Motivation
I want to have finer control over what `unknown_or_malformed_diagnostic_attributes` does.
I have a project with a fairly low msrv that is (or will be) lower than the Rust versions introducing future diagnostic attributes, so these lints will be emitted when I or others compile it on an older compiler.
At this time, there are two options to silence these lints:
- `#[allow(unknown_or_malformed_diagnostic_attributes)]` - this risks diagnostic regressions if I (or others) mess up using the attribute, or if the attribute's syntax ever changes.
- write a build script to detect the compiler version and emit cfgs, and then conditionally enable the attribute:
```rust
#[cfg_attr(rust_version_99, diagnostic::new_attr_in_rust_99(thing = ..))]
struct Foo;
```
or conditionally `allow` the lint:
```rust
// lib.rs
#![cfg_attr(not(current_rust), allow(unknown_or_malformed_diagnostic_attributes))]
```
I like to avoid using build scripts if I can, so the following, which this PR will let me do in the future, works much better for me:
```rust
#[allow(unknown_diagnostic_attributes, reason = "attribute came out in Rust 1.99 but msrv is 1.70")]
#[diagnostic::new_attr_in_rust_99(thing = ..)]
struct Foo;
```