Mark Rousskov da58efb11d Improve VecCache under parallel frontend
This replaces the single Vec allocation with a series of progressively
larger buckets. With the cfg for parallel enabled but with -Zthreads=1,
this looks like a slight regression in i-count and cycle counts (<0.1%).

With the parallel frontend at -Zthreads=4, this is an improvement (-5%
wall-time, from 5.788 to 5.4688 seconds on libcore) over our current
Lock-based approach, likely due to reducing the bouncing of the cache
line holding the lock. At -Zthreads=32 it's a huge improvement (-46%:
8.829 -> 4.7319 seconds).
2024-11-15 18:20:32 -05:00
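The commit message describes the bucket layout only in prose. Below is a minimal, safe-Rust sketch of the general idea, not the actual VecCache code: the type name BucketedCache, the constants FIRST_BUCKET and NUM_BUCKETS, and the use of OnceLock are assumptions made for illustration. It shows how a fixed array of lazily allocated, progressively larger buckets can replace a single Lock<Vec<_>>, so concurrent threads never contend on one global lock's cache line.

use std::sync::OnceLock;

/// Slots in bucket 0; bucket i holds `FIRST_BUCKET << i` slots (assumed sizing).
const FIRST_BUCKET: usize = 4096;
/// Enough doubling buckets to cover any u32 index with FIRST_BUCKET = 4096.
const NUM_BUCKETS: usize = 21;

/// Sketch of a grow-only cache keyed by dense u32 indices. Buckets are
/// allocated on first touch; lookups and inserts take no global lock.
pub struct BucketedCache<V> {
    buckets: [OnceLock<Box<[OnceLock<V>]>>; NUM_BUCKETS],
}

impl<V> BucketedCache<V> {
    pub fn new() -> Self {
        Self { buckets: std::array::from_fn(|_| OnceLock::new()) }
    }

    /// Map a flat index to (bucket, offset within bucket).
    /// Bucket i covers indices [FIRST_BUCKET * (2^i - 1), FIRST_BUCKET * (2^(i+1) - 1)).
    fn locate(index: u32) -> (usize, usize) {
        let idx = index as usize;
        let bucket = (usize::BITS - 1 - (idx / FIRST_BUCKET + 1).leading_zeros()) as usize;
        let offset = idx - FIRST_BUCKET * ((1 << bucket) - 1);
        (bucket, offset)
    }

    pub fn lookup(&self, index: u32) -> Option<&V> {
        let (bucket, offset) = Self::locate(index);
        self.buckets[bucket].get()?.get(offset)?.get()
    }

    pub fn insert(&self, index: u32, value: V) {
        let (bucket, offset) = Self::locate(index);
        let slots = self.buckets[bucket].get_or_init(|| {
            // Allocate the whole bucket on first touch; racing threads drop
            // their copy and share the winner's allocation.
            (0..(FIRST_BUCKET << bucket)).map(|_| OnceLock::new()).collect()
        });
        // First writer wins; for deterministic query results duplicates are equal.
        let _ = slots[offset].set(value);
    }
}

Because the bucket array is fixed-size and each bucket, once allocated, never moves, readers can follow a pointer straight to their slot; the only shared state two threads touch is the slot (and, rarely, the bucket pointer) they both need, which is the cache-line bouncing the commit message attributes the wins to.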


// tidy-alphabetical-start
#![allow(rustc::potential_query_instability, internal_features)]
#![feature(assert_matches)]
#![feature(core_intrinsics)]
#![feature(dropck_eyepatch)]
#![feature(hash_raw_entry)]
#![feature(let_chains)]
#![feature(min_specialization)]
#![warn(unreachable_pub)]
// tidy-alphabetical-end

pub mod cache;
pub mod dep_graph;
mod error;
pub mod ich;
pub mod query;
mod values;

pub use error::{HandleCycleError, LayoutOfDepth, QueryOverflow};
pub use values::Value;

rustc_fluent_macro::fluent_messages! { "../messages.ftl" }