`raw-dylib` is a link kind that allows rustc to link against a library
without any library files being present at build time.
It currently exists only on Windows: rustc takes all the symbols from
`raw-dylib` link blocks and puts them in an import library, where they
can then be resolved by the linker.
While import libraries don't exist on ELF, it would still be convenient
to have the same functionality there. Not needing the libraries at
build time is useful for several reasons, especially cross-compilation:
code that links against a library can be cross-compiled without those
libraries being available on the build machine. If the libc crate made
use of this, it would allow cross-compilation without any libc being
available on the build machine. That is not yet possible with this
implementation, at least not against libcs such as glibc that use
symbol versioning.
The `raw-dylib` kind could be extended with support for symbol versioning
in the future.
This implementation is very experimental and I have not tested it
thoroughly. I have tried it on a toy example and on the lz4-sys crate,
where it was able to successfully link a binary despite no corresponding
library being present at build time.
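For reference, this is roughly what a raw-dylib link block looks like on the Rust side. The library and symbol names below are made up, and any extra attributes an ELF implementation might need are not shown:

```rust
// Hypothetical example: declare symbols from a shared library "foo"
// without libfoo being present at build time. With the raw-dylib link
// kind, rustc itself synthesizes the information the linker needs
// instead of requiring the real library file on the build machine.
#[link(name = "foo", kind = "raw-dylib")]
extern "C" {
    fn foo_init() -> i32;
    fn foo_process(data: *const u8, len: usize) -> i32;
}

fn main() {
    unsafe {
        if foo_init() == 0 {
            let buf = [0u8; 16];
            foo_process(buf.as_ptr(), buf.len());
        }
    }
}
```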
```
warning: `peek` never called on `Peekable` iterator
   --> compiler\rustc_session\src\utils.rs:130:13
    |
130 |     let mut args = std::env::args_os().map(|arg| arg.to_string_lossy().to_string()).peekable();
    |             ^^^^
    |
    = help: consider removing the call to `peekable`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unused_peekable

warning: `peek` never called on `Peekable` iterator
    --> compiler\rustc_trait_selection\src\traits\error_reporting\suggestions.rs:4934:17
     |
4934 |         let mut bounds = pred.bounds.iter().peekable();
     |                 ^^^^^^
     |
     = help: consider removing the call to `peekable`
     = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unused_peekable
```
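For context, the `unused_peekable` lint fires when an iterator is wrapped with `.peekable()` but `peek`/`peek_mut` is never called on it. A minimal, made-up illustration of the pattern and the suggested fix:

```rust
fn main() {
    // Warns with clippy::unused_peekable: `peek` is never called,
    // so wrapping the iterator in `Peekable` adds nothing.
    let mut args = std::env::args_os()
        .map(|arg| arg.to_string_lossy().to_string())
        .peekable();
    let _first = args.next();

    // Suggested fix: drop the `.peekable()` call.
    let mut args = std::env::args_os().map(|arg| arg.to_string_lossy().to_string());
    let _first = args.next();
}
```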
Add `try_canonicalize` to `rustc_fs_util` and use it over `fs::canonicalize`
This adds `try_canonicalize`, which tries `fs::canonicalize` but falls back to `std::path::absolute` if it fails. Existing `canonicalize` calls are replaced with it. `fs::canonicalize` is not guaranteed to work on Windows.
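A rough sketch of the shape of such a helper, as an illustration rather than the exact code added to `rustc_fs_util`:

```rust
use std::fs;
use std::io;
use std::path::{absolute, Path, PathBuf};

/// Try `fs::canonicalize`, falling back to `std::path::absolute` when
/// canonicalization fails (it is not guaranteed to work on Windows).
pub fn try_canonicalize<P: AsRef<Path>>(path: P) -> io::Result<PathBuf> {
    fs::canonicalize(&path).or_else(|_| absolute(&path))
}
```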
This commit implements both the native linking modifiers infrastructure
and an initial attempt at the individual modifiers from the RFC.
It also introduces a feature flag for the general syntax along with
individual feature flags for each modifier.
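For illustration, this is the kind of syntax the RFC describes; the library name `foo` is made up, and which modifiers are actually accepted is controlled by the feature flags mentioned above:

```rust
// Attribute form: link a static library with explicit modifiers.
#[link(name = "foo", kind = "static", modifiers = "+whole-archive,-bundle")]
extern "C" {
    fn foo_init();
}

// Roughly equivalent command-line form:
//     rustc ... -l static:+whole-archive,-bundle=foo
```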
This PR modifies the macro expansion infrastructure to handle attributes
in a fully token-based manner. As a result:
* Derive macros no longer lose spans when their input is modified
by eager cfg-expansion. This is accomplished by performing eager
cfg-expansion on the token stream that we pass to the derive
proc-macro.
* Inner attributes now preserve spans in all cases, including when we
have multiple inner attributes in a row.
This is accomplished through the following changes:
* New structs `AttrAnnotatedTokenStream` and `AttrAnnotatedTokenTree` are introduced.
These are very similar to a normal `TokenTree`, but they also track
the position of attributes and attribute targets within the stream.
They are built when we collect tokens during parsing.
An `AttrAnnotatedTokenStream` is converted to a regular `TokenStream` when
we invoke a macro.
* Token capturing and `LazyTokenStream` are modified to work with
`AttrAnnotatedTokenStream`. A new `ReplaceRange` type is introduced, which
is created during the parsing of a nested AST node to make the 'outer'
AST node aware of the attributes and attribute target stored deeper in the token stream.
* When we need to perform eager cfg-expansion (either due to `#[derive]` or `#[cfg_eval]`),
we tokenize and reparse our target, capturing additional information about the locations of
`#[cfg]` and `#[cfg_attr]` attributes at any depth within the target.
This is a performance optimization, allowing us to perform less work
in the typical case where captured tokens never have eager cfg-expansion run.
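To make the derive and eager cfg-expansion case concrete, here is a small made-up example of the kind of input involved (the type and fields are hypothetical):

```rust
// The `Debug` derive never sees the `#[cfg]`-disabled field: it is
// stripped by eager cfg-expansion before the derive proc-macro runs.
// With token-based attribute handling, the spans of the remaining
// tokens survive that stripping instead of being lost.
#[derive(Debug)]
struct Config {
    name: String,
    #[cfg(unix)]
    socket_path: String,
}

fn main() {
    let c = Config {
        name: String::from("example"),
        #[cfg(unix)]
        socket_path: String::from("/tmp/example.sock"),
    };
    println!("{c:?}");
}
```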