363 Commits

Author SHA1 Message Date
bors
a29efccb1e Auto merge of #107435 - matthiaskrgr:rollup-if5h6yu, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #106618 (Disable `linux_ext` in wasm32 and fortanix rustdoc builds.)
 - #107097 (Fix def-use dominance check)
 - #107154 (library/std/sys_common: Define MIN_ALIGN for m68k-unknown-linux-gnu)
 - #107397 (Gracefully exit if --keep-stage flag is used on a clean source tree)
 - #107401 (remove the usize field from CandidateSource::AliasBound)
 - #107413 (make more pleasant to read)
 - #107422 (Also erase substs for new infcx in pin move error)
 - #107425 (Check for missing space between fat arrow and range pattern)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2023-01-29 07:01:58 +00:00
Matthias Krüger
4caa9dfa71
Rollup merge of #107097 - tmiasko:ssa, r=cjgillot
Fix def-use dominance check

A definition does not dominate a use in the same statement. This happens,
for example, in the MIR generated for the compound assignment `x += a`
(when overflow checks are disabled).
2023-01-29 06:14:17 +01:00
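A minimal Rust illustration of the situation this fixes; the MIR statement in the comment is an assumed sketch, not an exact `--emit=mir` dump:

```rust
// With overflow checks disabled, `x += a` lowers to a single MIR statement of
// roughly the shape `_1 = Add(move _1, _2)`: the definition of `_1` and a use
// of `_1` sit in the same statement, so the definition must not be treated as
// dominating that use.
fn compound_assign(mut x: i32, a: i32) -> i32 {
    x += a; // def of `x` and use of `x` coincide in one statement
    x
}

fn main() {
    assert_eq!(compound_assign(2, 3), 5);
}
```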
bors
3cdd0197e7 Auto merge of #106227 - bryangarza:ctfe-limit, r=oli-obk
Use stable metric for const eval limit instead of current terminator-based logic

This patch adds a `MirPass` that inserts a new MIR instruction `ConstEvalCounter` to any loops and function calls in the CFG. This instruction is used during Const Eval to count against the `const_eval_limit`, and emit the `StepLimitReached` error, replacing the current logic which uses Terminators only.

The new method of counting loops and function calls should be more stable across compiler versions (i.e., not cause crates that compiled successfully before to no longer compile when changes to MIR generation/optimization are made).

Also see: #103877
2023-01-29 04:11:27 +00:00
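A hedged sketch of the kind of const evaluation this counter meters; the `const_eval_limit` attribute and its feature gate were unstable at the time and may have changed since, so the exact attribute syntax here is an assumption:

```rust
#![feature(const_eval_limit)]
#![const_eval_limit = "1000"]

// Each back-edge of the loop (and each function call) passes a ConstEvalCounter
// instruction; once the configured limit is exceeded, const eval reports a
// StepLimitReached error instead of relying on terminator counts.
const SPIN: u32 = {
    let mut i = 0;
    while i < 1_000_000 {
        i += 1;
    }
    i
};

fn main() {
    println!("{SPIN}");
}
```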
bors
db137ba7d4 Auto merge of #106959 - tmiasko:opt-funclets, r=davidtwco
Omit needless funclet partitioning
2023-01-27 03:25:16 +00:00
Tomasz Miąsko
e489971902 Fix def-use dominance check
A definition does not dominate a use in the same statement. This happens,
for example, in the MIR generated for the compound assignment `x += a`
(when overflow checks are disabled).
2023-01-27 00:54:31 +01:00
bors
e187f8871e Auto merge of #107314 - matthiaskrgr:rollup-j40lnlj, r=matthiaskrgr
Rollup of 11 pull requests

Successful merges:

 - #106407 (Improve proc macro attribute diagnostics)
 - #106960 (Teach parser to understand fake anonymous enum syntax)
 - #107085 (Custom MIR: Support binary and unary operations)
 - #107086 (Print PID holding bootstrap build lock on Linux)
 - #107175 (Fix escaping inference var ICE in `point_at_expr_source_of_inferred_type`)
 - #107204 (suggest qualifying bare associated constants)
 - #107248 (abi: add AddressSpace field to Primitive::Pointer)
 - #107272 (Implement ObjectSafe and WF in the new solver)
 - #107285 (Implement `Generator` and `Future` in the new solver)
 - #107286 (ICE in new solver if we see an inference variable)
 - #107313 (Add Style Team Triagebot config)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2023-01-26 06:23:14 +00:00
bors
885bf62887 Auto merge of #105582 - saethlin:instcombine-assert-inhabited, r=cjgillot
InstCombine away intrinsic validity assertions

This optimization (currently) fires 246 times on the standard library. It seems to fire hardly at all on the big crates in the benchmark suite. Interesting.
2023-01-26 03:10:52 +00:00
Ben Kimock
5bfad5cc85 Thread a ParamEnv down to might_permit_raw_init 2023-01-23 19:25:10 -05:00
Bryan Garza
360db516cc Create stable metric to measure long computation in Const Eval
This patch adds a `MirPass` that tracks the number of back-edges and
function calls in the CFG, adds a new MIR instruction to increment a
counter every time they are encountered during Const Eval, and emits a
warning if a configured limit is breached.
2023-01-23 23:56:22 +00:00
Erik Desjardins
009192b01b abi: add AddressSpace field to Primitive::Pointer
...and remove it from `PointeeInfo`, which isn't meant for this.

There are still various places (marked with FIXMEs) that assume all pointers
have the same size and alignment. Fixing this requires parsing non-default
address spaces in the data layout string, which will be done in a followup.
2023-01-22 23:41:39 -05:00
Maybe Waffle
6a28fb42a8 Remove double spaces after dots in comments 2023-01-17 08:09:33 +00:00
Tomasz Miąsko
cd92bca49d Omit needless funclet partitioning 2023-01-17 00:00:00 +00:00
Tomasz Miąsko
836ef6162d Add comment to cleanup_kinds
based on the original commit message 1ae7ae0c1c7ed68c616273f245647afa47f3cbde
2023-01-10 09:53:18 +01:00
Jhonny Bill Mena
27744460e2 ADD - create and emit Bug support for Diagnostics
UPDATE - migrate constant span_bug to translatable diagnostic.
2022-12-27 20:59:22 -05:00
Jhonny Bill Mena
e26366ad99 [WIP] UPDATE - migrate intrinsic.rs to new diagnostic infrastructure
WIP - replacing span_invalid_monomorphization_error function. Still in progress due to its use in codegen_llvm inside macros
2022-12-27 20:59:21 -05:00
Jhonny Bill Mena
d41112a8c5 UPDATE - migrate constant.rs to new diagnostics infrastructure 2022-12-27 20:59:21 -05:00
Ralf Jung
9f241b3a26 abort immediately on bad mem::zeroed/uninit 2022-12-22 16:37:42 +01:00
bors
bdbe392a13 Auto merge of #105613 - Nilstrieb:rename-assert_uninit_valid, r=RalfJung
Rename `assert_uninit_valid` intrinsic

It's not about "uninit" anymore but about "filling with 0x01 bytes", so the name should at least try to reflect that.

This is actually not fully correct though, as it does still panic for all uninit with `-Zstrict-init-checks`. I'm not sure of the best way to deal with that without causing confusion. I guess we could just remove the flag? I don't think having it makes a lot of sense anymore with the direction that we have chosen to go. It could be relevant again if #100423 lands, so removing it may be a bit overeager.

r? `@RalfJung`
2022-12-21 23:20:04 +00:00
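A small example of the behaviour these checks back; a sketch, with the exact panic/abort message left to the runtime: zero-initializing a type that has no valid all-zero value trips the validity assertion instead of silently producing UB.

```rust
use std::mem;

fn main() {
    // A `&u8` must be non-null, so this zero-initialization is caught at
    // runtime by the validity check rather than yielding an invalid value.
    let bad: &u8 = unsafe { mem::zeroed() };
    println!("{bad}");
}
```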
bors
2b094b1ede Auto merge of #105446 - erikdesjardins:vt-size, r=nikic
Add 0..=isize::MAX range metadata to size loads from vtables

This is the (much belated) size counterpart to #91569.

Inspired by https://rust-lang.zulipchat.com/#narrow/stream/187780-t-compiler.2Fwg-llvm/topic/Range.20metadata.20for.20.60size_of_val.60.20and.20other.20isize.3A.3AMAX.20limits. This could help optimize layout computations based on the size of a dyn trait. Though, admittedly, adding this to vtables wouldn't be as beneficial as adding it to slice len, which is used much more often.

Miri detects this UB already: b7cc99142a/compiler/rustc_const_eval/src/interpret/traits.rs (L119-L121)
(In fact Miri goes further, [assuming a 48-bit address space on 64-bit platforms](9db224fc90/compiler/rustc_abi/src/lib.rs (L312-L331)), but I don't think we can assume that in an optimization.)
2022-12-18 22:01:39 +00:00
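A hedged sketch of code that could benefit: the size loaded from the vtable behind a `&dyn` reference is now known to lie in `0..=isize::MAX`, which can let LLVM drop overflow checks in layout arithmetic such as this (the function below is illustrative, not from the PR):

```rust
use std::mem;

// Round a dyn object's size up to its alignment; with the range metadata,
// LLVM can prove the addition cannot overflow.
fn padded_size(v: &dyn std::fmt::Debug) -> usize {
    let size = mem::size_of_val(v);   // reads the size slot of the vtable
    let align = mem::align_of_val(v); // reads the align slot of the vtable
    (size + align - 1) & !(align - 1)
}

fn main() {
    println!("{}", padded_size(&42u64));
}
```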
Matthias Krüger
ba71a63fde
Rollup merge of #105578 - erikdesjardins:addrspacecast, r=bjorn3
Fix transmutes between pointers in different address spaces (e.g. fn ptrs on AVR)

Currently, this causes a verifier error (https://godbolt.org/z/YYohed4bj), since it uses `bitcast`, which can't convert between address spaces.

Uncovered due to https://github.com/rust-lang/rust/pull/105545#discussion_r1045269309

r? `@bjorn3`
2022-12-14 17:17:57 +01:00
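A hedged illustration of the kind of transmute affected: on targets such as AVR, function pointers live in a different LLVM address space than data pointers, so this conversion needs an `addrspacecast` rather than a `bitcast` (on mainstream targets the snippet is unremarkable):

```rust
fn callee() {}

fn main() {
    // On AVR-like targets, `fn()` and `*const ()` sit in different LLVM
    // address spaces; codegen must not lower this transmute to a bitcast.
    let p: *const () = unsafe { std::mem::transmute::<fn(), *const ()>(callee) };
    println!("{p:?}");
}
```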
Nilstrieb
8b2a7da3b0 Rename assert_uninit_valid intrinsic
It's not about "uninit" anymore but about "filling with 0x01 bytes", so
the name should at least try to reflect that.
2022-12-13 18:08:35 +01:00
bors
37d7de3379 Auto merge of #105252 - bjorn3:codegen_less_pair_values, r=nagisa
Use struct types during codegen in less places

This makes it easier to use cg_ssa from a backend like Cranelift that doesn't have any struct types at all. After this PR struct types are still used for function arguments and return values. Removing those usages is harder but should still be doable.
2022-12-12 10:38:31 +00:00
Erik Desjardins
6085d33033 fix transmutes between pointers in different address spaces 2022-12-11 22:27:09 -05:00
Michael Goulet
bc293ed53e bug! with a better error message for failing Instance::resolve 2022-12-11 19:48:24 +00:00
Matthias Krüger
ab505298ea
Rollup merge of #105482 - wesleywiser:fix_debuginfo_ub, r=tmiasko
Fix invalid codegen during debuginfo lowering

In order for LLVM to correctly generate debuginfo for MSVC, we sometimes need to spill arguments to the stack and perform some direct & indirect offsets into the value. Previously, this code always performed those actions, even when not required, as LLVM would clean it up during optimization.

However, when MIR inlining is enabled, this can cause problems, as the operations occur prior to the spilled value being initialized. To solve this, we first calculate the necessary offsets using just the type, which is side-effect-free and does not alter the LLVM IR. Then, if we are in a situation which requires us to generate the LLVM IR (and this situation only occurs for arguments, not local variables), we perform the same calculation again, this time generating the appropriate LLVM IR as we go.

r? `@tmiasko` but feel free to reassign if you want 🙂

Fixes #105386
2022-12-10 15:01:45 +01:00
Jakob Degen
9fb8da8f8f Remove unneeded field from SwitchTargets 2022-12-09 04:53:10 -08:00
Wesley Wiser
7253057887 Don't generate pointer loads to spills unless necessary
In order for LLVM to correctly generate debuginfo for MSVC, we sometimes
need to spill arguments to the stack and perform some direct & indirect
offsets into the value. Previously, this code always performed those
actions, even when not required, as LLVM would clean it up during
optimization.

However, when MIR inlining is enabled, this can cause problems, as the
operations occur prior to the spilled value being initialized. To solve
this, we first calculate the necessary offsets using just the type, which
is side-effect-free and does not alter the LLVM IR. Then, if we are in a
situation which requires us to generate the LLVM IR (and this situation
only occurs for arguments, not local variables), we perform the same
calculation again, this time generating the appropriate LLVM IR as we go.
2022-12-08 20:38:23 -05:00
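A rustc-independent sketch of that two-pass approach, assuming the walk over a place's projections can be driven either by a side-effect-free calculation or by an IR-emitting pass; every name below is illustrative, not rustc API:

```rust
// Consumer of projection steps: either accumulate offsets or "emit IR".
trait OffsetSink {
    fn field(&mut self, offset: usize);
    fn deref(&mut self);
}

// Pass 1: pure calculation, never touches the emitted IR.
struct DryRun {
    total: usize,
}
impl OffsetSink for DryRun {
    fn field(&mut self, offset: usize) {
        self.total += offset;
    }
    fn deref(&mut self) {}
}

// Pass 2: run only when IR is actually required (strings stand in for LLVM IR).
struct EmitIr {
    ir: Vec<String>,
}
impl OffsetSink for EmitIr {
    fn field(&mut self, offset: usize) {
        self.ir.push(format!("gep +{offset}"));
    }
    fn deref(&mut self) {
        self.ir.push("load".to_string());
    }
}

// The shared walk: one piece of logic drives both the calculation and the emission.
fn walk_projection<S: OffsetSink>(sink: &mut S, projection: &[(usize, bool)]) {
    for &(offset, is_deref) in projection {
        sink.field(offset);
        if is_deref {
            sink.deref();
        }
    }
}

fn main() {
    let proj = [(8, false), (0, true), (16, false)];

    let mut dry = DryRun { total: 0 };
    walk_projection(&mut dry, &proj);
    println!("offsets only: {}", dry.total);

    let mut emit = EmitIr { ir: Vec::new() };
    walk_projection(&mut emit, &proj);
    println!("emitted: {:?}", emit.ir);
}
```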
Wesley Wiser
b33d1e26b2 Make debuginfo_offset_calculation generic so we can reuse the logic
This will allow us to separate the act of calculating the offsets from
creating LLVM IR that performs the actions.
2022-12-08 20:38:23 -05:00
Wesley Wiser
525d0dd6e2 Factor out debuginfo offset calculation 2022-12-08 20:38:23 -05:00
Erik Desjardins
a99e97af97 Add 0..=isize::MAX range metadata to size loads from vtables 2022-12-08 01:30:07 -05:00
bors
53e4b9dd74 Auto merge of #104535 - mikebenfield:discr-fix, r=pnkfelix
rustc_codegen_ssa: Fix for codegen_get_discr

In the optimized implementation of getting the discriminant, the arithmetic needs to be done in the tag type so that wrapping behavior works correctly.

Fixes #104519
2022-12-04 20:05:32 +00:00
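A hedged, self-contained illustration of the principle (not the rustc code itself): the relative discriminant is `tag - niche_start` computed with wrapping in the tag's own width, and widening the operands first changes the result.

```rust
fn main() {
    let tag: u8 = 1;
    let niche_start: u8 = 2;

    // Correct: the subtraction wraps modulo 2^8, as the niche encoding expects.
    let relative = tag.wrapping_sub(niche_start); // 255

    // Wrong: widening before subtracting loses the 8-bit wrap-around.
    let widened = (tag as u64).wrapping_sub(niche_start as u64); // u64::MAX

    assert_ne!(relative as u64, widened);
    println!("{relative} vs {widened}");
}
```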
bjorn3
262ace5284 Avoid from_immediate_or_packed_pair in ThreadLocalRef codegen 2022-12-04 12:53:46 +00:00
bjorn3
fff6296b62 Destruct landing_pad return value before passing it to cg_ssa 2022-12-03 18:27:18 +00:00
Maybe Waffle
1d42936b18 Prefer doc comments over //-comments in compiler 2022-11-27 11:19:04 +00:00
Michael Goulet
8f79fc24e3 Properly handle Pin<&mut dyn* Trait> receiver in codegen 2022-11-24 01:10:25 +00:00
Ralf Jung
3338244f69 deduplicate constant evaluation in cranelift backend
also sync LLVM and cranelift structure a bit
2022-11-19 14:08:12 +01:00
Michael Benfield
31c0645b9d rustc_codegen_ssa: Fix for codegen_get_discr
In the optimized implementation of getting the discriminant, the
arithmetic needs to be done in the tag type so that wrapping behavior
works correctly.

Fixes #104519
2022-11-18 21:16:12 +00:00
bors
251831ece9 Auto merge of #103138 - nnethercote:merge-BBs, r=bjorn3
Merge basic blocks where possible when generating LLVM IR.

r? `@ghost`
2022-11-17 01:56:24 +00:00
Ralf Jung
1115ec601a cleanup and dedupe CTFE and Miri error reporting 2022-11-16 10:13:29 +01:00
Nicholas Nethercote
54082dd216 Merge basic blocks where possible when generating LLVM IR.
In `codegen_assert_terminator` we decide if a BB's successor is a
candidate for merging, which requires that it be the only successor, and
that it only have one predecessor. That result then gets passed down,
and if it reaches `funclet_br` with the appropriate BB characteristics,
then no `br` instruction is issued, a `MergingSucc::True` result is
passed back, and the merging proceeds in `codegen_block`.

The commit also adds `CachedLlbb`, a new type to help keep track of
each BB that has been merged into its predecessor.
2022-11-16 15:46:39 +11:00
Nicholas Nethercote
68194aa8d5 Use &mut Bx more.
For the next commit, `FunctionCx::codegen_*_terminator` need to take a
`&mut Bx` instead of consuming a `Bx`. This triggers a cascade of
similar changes across multiple functions. The resulting code is more
concise and replaces many `&mut bx` expressions with `bx`.
2022-11-16 15:46:39 +11:00
Camille GILLOT
b550eabfa6 Introduce composite debuginfo. 2022-11-15 17:53:50 +00:00
Ralf Jung
c78021709a add is_sized method on Abi and Layout, and use it 2022-11-13 12:23:53 +01:00
Michael Benfield
51918dcc51 rustc_codegen_ssa: Better code generation for niche discriminants.
In some cases we can avoid arithmetic before checking whether a niche
represents an untagged variant.

This is relevant to #101872
2022-11-11 05:54:30 +00:00
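A hedged example of the kind of layout this helps: in `Option<&u8>`, `None` occupies the pointer's null niche, and because that niche starts at 0 the variant check can be a direct comparison, with no subtraction to rebase the tag first.

```rust
// Codegen for this check can compare the pointer bits against 0 directly
// instead of first subtracting the niche's start value.
fn is_some(x: Option<&u8>) -> bool {
    x.is_some()
}

fn main() {
    let v = 5u8;
    assert!(is_some(Some(&v)));
    assert!(!is_some(None));
}
```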
Nicholas Nethercote
003a3f8cd3 Use br instead of switch in more cases.
`codegen_switchint_terminator` already uses `br` instead of `switch`
when there is one normal target plus the `otherwise` target. But there's
another common case with two normal targets and an `otherwise` target
that points to an empty unreachable BB. This comes up a lot when
switching on the tags of enums that use niches.

The pattern looks like this:
```
bb1:                                              ; preds = %bb6
  %3 = load i8, ptr %_2, align 1, !range !9, !noundef !4
  %4 = sub i8 %3, 2
  %5 = icmp eq i8 %4, 0
  %_6 = select i1 %5, i64 0, i64 1
  switch i64 %_6, label %bb3 [
    i64 0, label %bb4
    i64 1, label %bb2
  ]

bb3:                                              ; preds = %bb1
  unreachable
```
This commit adds code to convert the `switch` to a `br`:
```
bb1:                                              ; preds = %bb6
  %3 = load i8, ptr %_2, align 1, !range !9, !noundef !4
  %4 = sub i8 %3, 2
  %5 = icmp eq i8 %4, 0
  %_6 = select i1 %5, i64 0, i64 1
  %6 = icmp eq i64 %_6, 0
  br i1 %6, label %bb4, label %bb2

bb3:                                              ; No predecessors!
  unreachable
```
This has a surprisingly large effect on compile times, with reductions
of 5% on debug builds of some crates. The reduction is all due to LLVM
taking less time. Maybe LLVM is just much better at handling `br` than
`switch`.

The resulting code is still suboptimal.
- The `icmp`, `select`, `icmp` sequence is silly, converting an `i1` to an `i64`
  and back to an `i1`. But with the current code structure it's hard to avoid,
  and LLVM will easily clean it up, in opt builds at least.
- `bb3` is usually now truly dead code (though not always, so it can't
  be removed universally).
2022-10-31 10:16:39 +11:00
Nicholas Nethercote
03f350f5a5 Clarify some cleanup stuff.
- Rearrange the match in `llbb_with_landing_pad` so the `(Some,Some)`
  cases are together.
- Add assertions to indicate two MSVC-only paths.
2022-10-25 12:07:35 +11:00
Nicholas Nethercote
a5bd5da594 Rename two TerminatorCodegenHelper methods.
`TerminatorCodegenHelper` has three methods `llblock`, `llbb`, and
`lltarget`. They're all similar, but the names give no indication of
the differences.

This commit renames `lltarget` as `llbb_with_landing_pad`, and `llblock`
as `llbb_with_cleanup`. These aren't fantastic names, but at least it's
now clear that `llbb` is the lowest-level of the three and the other two
wrap it.
2022-10-25 12:07:26 +11:00
Nicholas Nethercote
4e4092f8cc rustc_codegen_ssa: use more consistent naming.
Ensure:
- builders always have a `bx` suffix;
- backend basic blocks always have an `llbb` suffix;
- paired builders and basic blocks have consistent prefixes.
2022-10-25 12:07:23 +11:00
Michael Goulet
feb4244f54 Allow dyn* upcasting 2022-10-14 04:43:56 +00:00
Yuki Okushi
6755c2a89d
Rollup merge of #102641 - eholk:dyn-star-box, r=compiler-errors
Support casting boxes to dyn*

Boxes have a pointer type at codegen time, which LLVM does not allow to be transparently converted to an integer. Work around this by inserting a `ptrtoint` instruction if the argument is a pointer.

r? `@compiler-errors`

Fixes #102427
2022-10-13 09:41:25 +09:00