283 Commits

Author SHA1 Message Date
Michael Goulet
cbb3a847d2 Ensure query keys are printed with reduced queries 2025-06-03 20:56:52 +00:00
ismailarilik
85efae7302 Handle rustc_query_system cases of rustc::potential_query_instability lint 2025-05-14 08:59:55 +03:00
bors
3ef8e64ce9 Auto merge of #139758 - Zoxc:thread-local-graph, r=oli-obk
Use thread local dep graph encoding

This adds thread local encoding of dep graph nodes. Each thread has a `MemEncoder` that gets flushed to the global `FileEncoder` when it exceeds 64 kB. Each thread also has a local cache of dep indices. This means there can now be empty gaps in `SerializedDepGraph`.

Since the encoder lock is removed, indices are now both allocated and marked green by the new atomic operation `DepNodeColorMap::try_mark_green`.
2025-05-07 12:39:54 +00:00
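
A rough, standalone sketch of the buffering scheme described in the merge message above. The names and types here are stand-ins (a `Mutex<Vec<u8>>` for the global `FileEncoder`, a thread-local `Vec<u8>` for the per-thread `MemEncoder`); only the 64 kB flush threshold comes from the description, and this is not the rustc implementation:

```rust
use std::cell::RefCell;
use std::sync::Mutex;
use std::thread;

// Stand-in for the shared `FileEncoder`: a locked, process-wide sink.
static GLOBAL_SINK: Mutex<Vec<u8>> = Mutex::new(Vec::new());

// Flush the thread-local buffer once it exceeds 64 kB, as described above.
const FLUSH_THRESHOLD: usize = 64 * 1024;

thread_local! {
    // Stand-in for the per-thread `MemEncoder`: records are staged here first.
    static LOCAL_BUF: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

/// Encode one record on the current thread, taking the global lock only
/// when the local buffer crosses the threshold.
fn encode_record(bytes: &[u8]) {
    LOCAL_BUF.with(|buf| {
        let mut buf = buf.borrow_mut();
        buf.extend_from_slice(bytes);
        if buf.len() > FLUSH_THRESHOLD {
            // `append` drains the local buffer into the global sink.
            GLOBAL_SINK.lock().unwrap().append(&mut buf);
        }
    });
}

/// Flush whatever is left once a thread is done encoding.
fn flush_local() {
    LOCAL_BUF.with(|buf| {
        GLOBAL_SINK.lock().unwrap().append(&mut buf.borrow_mut());
    });
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|t: u32| {
            thread::spawn(move || {
                for i in 0..50_000u32 {
                    encode_record(&(t ^ i).to_le_bytes());
                }
                flush_local();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("bytes written: {}", GLOBAL_SINK.lock().unwrap().len());
}
```
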
John Kåre Alsaker
cdd104922d Use the portable AtomicU64 2025-05-06 18:36:14 +02:00
Zalathar
9e7fb67838 Rename graph::implementation::Graph to LinkedGraph 2025-05-06 14:35:06 +10:00
John Kåre Alsaker
e1e8857d54 Add some comments about thread local indices 2025-05-05 21:01:43 +02:00
John Kåre Alsaker
aeab2819f6 Tweak index chunk allocation 2025-05-05 20:50:32 +02:00
John Kåre Alsaker
c43a6f05d6 Add some comments 2025-05-01 10:20:34 +02:00
John Kåre Alsaker
d3ec14bbec Use thread local dep graph encoding 2025-05-01 10:20:31 +02:00
John Kåre Alsaker
9a6e8b3807 Make sure there are no duplicate indices in the dep graph 2025-04-22 16:57:28 +02:00
John Kåre Alsaker
36fa327623 Tweak edges 2025-04-22 16:57:28 +02:00
John Kåre Alsaker
4ec74df81f Use IndexVec::from_elem_n 2025-04-22 16:57:28 +02:00
John Kåre Alsaker
c6735f92c1 Add index to the dep graph format and encode via MemEncoder 2025-04-22 16:57:28 +02:00
Matthias Krüger
844b7c7935
Rollup merge of #139236 - Zoxc:anon-counter, r=davidtwco
Use a session counter to make anon dep nodes unique

This changes the per-session hash used to keep anon dep nodes unique within a session from a timestamp to a counter.

This is nicer for debugging as it makes the dep graph deterministic.
2025-04-17 00:14:24 +02:00
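
A minimal sketch of the timestamp-vs-counter trade-off described above; the names are hypothetical and this is not the rustc code:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical stand-in for the per-session counter the PR introduces.
static ANON_SESSION_COUNTER: AtomicU64 = AtomicU64::new(0);

/// The old scheme derived this value from a timestamp, which differs on every
/// run. A counter yields the same value for the same sequence of sessions,
/// so the resulting dep graph is deterministic and easier to debug.
fn next_anon_session_value() -> u64 {
    ANON_SESSION_COUNTER.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    // Two sessions in the same process always get 0 and 1, in that order.
    assert_eq!(next_anon_session_value(), 0);
    assert_eq!(next_anon_session_value(), 1);
}
```
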
Jacob Pratt
5c494aa5c8
Rollup merge of #139636 - Zoxc:graph-edges-len-u32, r=compiler-errors
Encode dep node edge count as u32 instead of usize

This encodes the dep node edge count as u32 instead of usize. It doesn't need to be usize, as the count is limited by the number of unique dep nodes, which we use a 32-bit index for. It probably saves some branches for encoding/decoding, but it isn't too hot since the count gets packed in with other fields when it's low.

| Benchmark | Time (before) | Time (after) | % | Physical Memory (before) | Physical Memory (after) | % | Committed Memory (before) | Committed Memory (after) | % |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 🟣 **clap**:check:unchanged | 0.3343s | 0.3343s | -0.00% | 97.02 MiB | 96.89 MiB | -0.13% | 167.93 MiB | 167.80 MiB | -0.08% |
| 🟣 **hyper**:check:unchanged | 0.1353s | 0.1350s | -0.19% | 61.91 MiB | 61.83 MiB | -0.13% | 124.64 MiB | 124.64 MiB | 0.00% |
| 🟣 **regex**:check:unchanged | 0.2479s | 0.2481s | 0.10% | 78.34 MiB | 78.35 MiB | 0.02% | 145.20 MiB | 145.31 MiB | 0.08% |
| 🟣 **syn**:check:unchanged | 0.5347s | 0.5333s | -0.26% | 118.63 MiB | 118.66 MiB | 0.02% | 193.05 MiB | 193.11 MiB | 0.03% |
| Total | 1.2521s | 1.2507s | -0.11% | 355.90 MiB | 355.74 MiB | -0.05% | 630.83 MiB | 630.87 MiB | 0.01% |
| Summary | 1.0000s | 0.9991s | -0.09% | 1 byte | 1.00 bytes | -0.06% | 1 byte | 1.00 bytes | 0.01% |
2025-04-13 23:57:37 -04:00
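
A minimal sketch of the size argument above (not rustc's encoder): because dep node indices are 32-bit, an edge list can never hold more than `u32::MAX` entries, so the count itself fits in a `u32`:

```rust
/// Write an edge list with a `u32` length prefix instead of a `usize` one.
fn encode_edges(out: &mut Vec<u8>, edges: &[u32]) {
    let count: u32 = edges.len().try_into().expect("edge count fits in u32");
    out.extend_from_slice(&count.to_le_bytes()); // 4 bytes instead of 8 on 64-bit targets
    for &edge in edges {
        out.extend_from_slice(&edge.to_le_bytes());
    }
}

fn main() {
    let mut out = Vec::new();
    encode_edges(&mut out, &[7, 8, 9]);
    assert_eq!(out.len(), 4 + 3 * 4);
    println!("{out:?}");
}
```
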
John Kåre Alsaker
22dd86c015 Encode dep node edge count as u32 instead of usize 2025-04-10 20:59:22 +02:00
John Kåre Alsaker
1c568bbe6f Reuse the index from promoted nodes when coloring executed tasks 2025-04-05 14:41:08 +02:00
John Kåre Alsaker
927ad1659a Add a dep kind for use of the anon node with zero dependencies 2025-04-02 07:35:05 +02:00
John Kåre Alsaker
027251ff7d Use a session counter to make anon dep nodes unique 2025-04-02 07:12:01 +02:00
Michael Goulet
897acc3e5d Encode synthetic by-move coroutine body with a different DefPathData 2025-03-30 22:53:21 +00:00
John Kåre Alsaker
60e4a1b8f3 Remove prev_index_to_index field from CurrentDepGraph 2025-03-24 19:58:34 +01:00
John Kåre Alsaker
2736a2a84f Allow duplicates for side effect nodes 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
68fd771bc1 Pass in dep kind names to the duplicate dep node check 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
f5dc674bf8 Rename intern_new_node to alloc_new_node 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
58c148a3d8 Use ShardedHashMap for anon_node_to_index 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
129f39cb89 Use nodes_newly_allocated_in_current_session to lookup forbidden reads 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
bd8b628262 Check for duplicate dep nodes when creating the index 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
cdbf19a6fb Add fixme 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
e70cafec4e Outline some cold code and turn on hash collision detection with debug_assertions 2025-03-19 20:12:37 +01:00
Camille GILLOT
5a21f890e9 Only use the new node hashmap for anonymous nodes. 2025-03-19 20:12:37 +01:00
John Kåre Alsaker
b43a29711e Fix record_diagnostic 2025-03-15 03:09:09 +01:00
John Kåre Alsaker
9a847b1ea5 Add comments 2025-03-14 18:55:02 +01:00
John Kåre Alsaker
453b51a65a Rename QuerySideEffects to QuerySideEffect 2025-03-14 18:39:27 +01:00
John Kåre Alsaker
3ca5220114 Represent diagnostic side effects as dep nodes 2025-03-14 16:01:58 +01:00
Josh Stone
3b0c2585c8 Convert ShardedHashMap to use hashbrown::HashTable
The `hash_raw_entry` feature has finished fcp-close, so the compiler
should stop using it to allow its removal. Several `Sharded` maps were
using raw entries to avoid re-hashing between shard and map lookup, and
we can do that with `hashbrown::HashTable` instead.
2025-03-10 17:08:30 -07:00
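
A sketch of the hash-once pattern mentioned above, assuming the external `hashbrown` crate (0.14+); the `ShardedSet` type and its methods are illustrative, not rustc's `Sharded`/`ShardedHashMap` API:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash};
use std::sync::Mutex;

use hashbrown::HashTable;

const SHARDS: usize = 32;

struct ShardedSet<K> {
    build_hasher: RandomState,
    shards: Vec<Mutex<HashTable<K>>>,
}

impl<K: Hash + Eq> ShardedSet<K> {
    fn new() -> Self {
        Self {
            build_hasher: RandomState::new(),
            shards: (0..SHARDS).map(|_| Mutex::new(HashTable::new())).collect(),
        }
    }

    /// Returns true if `key` was newly inserted.
    fn insert(&self, key: K) -> bool {
        // The key is hashed exactly once; the same hash picks the shard and
        // drives the `HashTable` lookup, so there is no re-hashing per shard.
        let hash = self.build_hasher.hash_one(&key);
        let shard = &self.shards[hash as usize % SHARDS];
        let mut table = shard.lock().unwrap();
        if table.find(hash, |existing| *existing == key).is_some() {
            return false;
        }
        table.insert_unique(hash, key, |k| self.build_hasher.hash_one(k));
        true
    }
}

fn main() {
    let set = ShardedSet::new();
    assert!(set.insert("anon-node-0".to_string()));
    assert!(!set.insert("anon-node-0".to_string())); // already present
}
```
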
bors
ed897d5f85 Auto merge of #138267 - matthiaskrgr:rollup-vt76bhs, r=matthiaskrgr
Rollup of 12 pull requests

Successful merges:

 - #136127 (Allow `*const W<dyn A> -> *const dyn A` ptr cast)
 - #136968 (Turn order dependent trait objects future incompat warning into a hard error)
 - #137319 (Stabilize `const_vec_string_slice`)
 - #137885 (tidy: add triagebot checks)
 - #138040 (compiler: Use `size_of` from the prelude instead of imported)
 - #138084 (Use workspace lints for crates in `compiler/`)
 - #138158 (Move more layouting logic to `rustc_abi`)
 - #138160 (depend more on attr_data_structures and move find_attr! there)
 - #138192 (crashes: couple more tests)
 - #138216 (bootstrap: Fix stack printing when a step cycle is detected)
 - #138232 (Reduce verbosity of GCC build log)
 - #138242 (Revert "Don't test new error messages with the stage 0 compiler")

r? `@ghost`
`@rustbot` modify labels: rollup
2025-03-09 12:29:49 +00:00
Thalia Archibald
38fad984c6 compiler: Use size_of from the prelude instead of imported
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the
prelude instead of importing or qualifying them.

These functions were added to all preludes in Rust 1.80.
2025-03-07 13:37:04 -08:00
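
As the message above notes, these functions are in the prelude since Rust 1.80, so code like the following compiles with no `use std::mem::...` import (a trivial illustration, not part of the commit):

```rust
fn main() {
    println!("u64: size {} align {}", size_of::<u64>(), align_of::<u64>());
    let xs = [0u8; 16];
    println!("xs:  size {} align {}", size_of_val(&xs), align_of_val(&xs));
}
```
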
Michał Kostrubiec
e100fd4d52 Changed the dependency graph to start preallocated with a capacity of 128 2025-02-27 16:11:39 +01:00
Michael Goulet
12e3911d81 Greatly simplify lifetime captures in edition 2024 2025-02-22 22:24:52 +00:00
Askar Safin
4a2c7f48b5 compiler/rustc_data_structures/src/sync.rs: remove atomics, but not AtomicU64! 2025-02-11 08:57:36 +03:00
bors
2f92f050e8 Auto merge of #136471 - safinaskar:parallel, r=SparrowLii
tree-wide: parallel: Fully removed all `Lrc`, replaced with `Arc`

This is a continuation of https://github.com/rust-lang/rust/pull/132282.

I'm pretty sure I did everything right. In particular, I searched all occurrences of `Lrc` in submodules and made sure that they don't need replacement.

There are other possibilities, though.

We can define `enum Lrc<T> { Rc(Rc<T>), Arc(Arc<T>) }` (sketched below). Or we can make `Lrc` a union and read from a special thread-local variable on every clone. Or we can add a generic parameter to `Lrc`, and, yes, that parameter will appear everywhere across the whole codebase.

So, if you think we should take some alternative approach, then don't merge this PR. But if it is decided to stick with `Arc`, then, please, merge.

cc "Parallel Rustc Front-end" ( https://github.com/rust-lang/rust/issues/113349 )

r? SparrowLii

`@rustbot` label WG-compiler-parallel
2025-02-06 10:50:05 +00:00
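
A hypothetical sketch of the variant-based alternative mentioned (and not taken) in the PR description above; nothing like this landed, it only illustrates the trade-off:

```rust
use std::rc::Rc;
use std::sync::Arc;

// The rejected alternative: keep both reference-counting flavours at runtime.
enum Lrc<T> {
    Rc(Rc<T>),
    Arc(Arc<T>),
}

impl<T> std::ops::Deref for Lrc<T> {
    type Target = T;
    fn deref(&self) -> &T {
        // Every deref (and clone) would pay for this branch.
        match self {
            Lrc::Rc(rc) => &**rc,
            Lrc::Arc(arc) => &**arc,
        }
    }
}

fn main() {
    let serial = Lrc::Rc(Rc::new("serial"));
    let parallel = Lrc::Arc(Arc::new("parallel"));
    println!("{} / {}", *serial, *parallel);
}
```
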
Nicholas Nethercote
1fa9200475 Remove dep_node comment duplication.
`rustc_middle` and `rustc_query_system` both have a file called
`dep_node.rs` with a big comment at the top, and the comments are very
similar. The one in `rustc_query_system` looks like the original, and
the one in `rustc_middle` is a copy with some improvements.

This commit removes the comment from `rustc_middle` and updates the one
in `rustc_query_system` to include the improvements. I did it this way
because `rustc_query_system` is the crate that defines `DepNode`, and so
seems like the right place for the comment.
2025-02-04 08:34:10 +11:00
Askar Safin
0a21f1d0a2 tree-wide: parallel: Fully removed all Lrc, replaced with Arc 2025-02-03 13:25:57 +03:00
Martin Zacho
abe603212e remove code duplication when hashing query result and interning node
Refactored the duplicated code into a function.

`with_feed_task` currently passes the query key to `debug_assert!`.
This commit changes that so it debug-prints the `DepNode` instead, as
`with_task` does.
2025-01-13 20:25:46 +01:00
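
A hypothetical illustration of the `debug_assert!` change described above; the `DepNode` shape and the message text are made up for the example, not the rustc definitions:

```rust
#[derive(Debug)]
struct DepNode {
    kind: &'static str,
    hash: u64,
}

// Before (sketch): the assertion message showed the query key.
// After (sketch): it debug-prints the `DepNode`, matching `with_task`.
fn assert_not_existing(node: &DepNode, already_exists: bool) {
    debug_assert!(
        !already_exists,
        "forcing query with an already existing `DepNode`\n- node: {node:?}"
    );
}

fn main() {
    let node = DepNode { kind: "type_of", hash: 0x1234_5678 };
    assert_not_existing(&node, false);
    println!("kind = {}, hash = {:#x}", node.kind, node.hash);
}
```
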
Michael Goulet
988f28d442 Make sure to record deps from cached task in new solver on first run 2024-12-04 16:15:44 +00:00
bors
6503543d11 Auto merge of #132282 - Noratrieb:it-is-the-end-of-serial, r=cjgillot
Delete the `cfg(not(parallel))` serial compiler

Since its inception a long time ago, the parallel compiler and its cfgs have been a maintenance burden. This was a necessary evil to allow iteration without degrading performance through synchronization overhead.

But that time is over. Thanks to the amazing work by the parallel working group (and the dyn sync crimes), the parallel compiler has been fast enough to ship by default in nightly for quite a while now.
Stable and beta have still been on the serial compiler, because they can't use `-Zthreads` anyway.
But this is quite suboptimal:
- the maintenance burden still sucks
- we're not testing the serial compiler in nightly

For these reasons, it's time to end it. The serial compiler has served us well in the years since it was split from the parallel one, but it's over now.

Let the knight slay one head of the two-headed dragon!

#113349

Note that the default is still 1 thread, as more than 1 thread is still fairly broken.

cc `@onur-ozkan` to see if i did the bootstrap field removal correctly, `@SparrowLii` on the sync parts
2024-11-12 15:14:56 +00:00
Noratrieb
505b8e1332 Delete the cfg(not(parallel)) serial compiler
Since its inception a long time ago, the parallel compiler and its cfgs
have been a maintenance burden. This was a necessary evil to allow
iteration without degrading performance through synchronization
overhead.

But that time is over. Thanks to the amazing work by the parallel
working group (and the dyn sync crimes), the parallel compiler has been
fast enough to ship by default in nightly for quite a while now.
Stable and beta have still been on the serial compiler, because they
can't use `-Zthreads` anyway.
But this is quite suboptimal:
- the maintenance burden still sucks
- we're not testing the serial compiler in nightly

For these reasons, it's time to end it. The serial compiler has served
us well in the years since it was split from the parallel one, but it's
over now.

Let the knight slay one head of the two-headed dragon!
2024-11-12 13:38:58 +00:00
klensy
be3a635fe7 replace manual time conversions with std ones 2024-11-03 15:51:39 +03:00
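
A small illustration of the kind of cleanup this commit title suggests; the "before" arithmetic is hypothetical, not taken from the actual diff:

```rust
use std::time::Duration;

fn main() {
    let d = Duration::new(3, 500_000_000); // 3.5 s

    // Manual conversion (the kind of code being replaced):
    let millis_manual = d.as_secs() * 1_000 + u64::from(d.subsec_nanos()) / 1_000_000;

    // Using the std accessor instead:
    let millis_std = d.as_millis();

    assert_eq!(u128::from(millis_manual), millis_std);
    println!("{millis_std} ms");
}
```
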
klensy
746b675c5a fix clippy::clone_on_ref_ptr for compiler 2024-10-28 18:05:08 +03:00
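
For context, `clippy::clone_on_ref_ptr` asks for the explicit `Rc::clone`/`Arc::clone` spelling; a minimal example of the pattern (not taken from the commit):

```rust
use std::sync::Arc;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);

    // Flagged by `clippy::clone_on_ref_ptr`: reads like a deep clone of the Vec.
    // let also = shared.clone();

    // Preferred spelling: makes the cheap reference-count bump explicit.
    let also = Arc::clone(&shared);
    println!("{} refs, {:?}", Arc::strong_count(&shared), also);
}
```
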
Michael Goulet
c682aa162b Reformat using the new identifier sorting from rustfmt 2024-09-22 19:11:29 -04:00