Fix using global options before an alias.
Global options that appeared before an alias were being ignored (as in `cargo -v b`). The solution is to extract those global options before expanding the alias, and then merge them back in afterwards.
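For illustration, a minimal sketch of that extract-then-merge idea (the helper, alias table, and flag detection below are hypothetical stand-ins, not Cargo's actual code):

```rust
// Sketch only: extract global flags that precede the alias, expand the alias,
// then stitch the globals back onto the front of the expanded command line.
use std::collections::HashMap;

fn expand_with_globals(args: &[&str], aliases: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    // Everything up to the first non-flag token is treated as a global option.
    let split = args.iter().position(|a| !a.starts_with('-')).unwrap_or(args.len());
    let (globals, rest) = args.split_at(split);

    let mut out: Vec<String> = globals.iter().map(|s| s.to_string()).collect();
    match rest.split_first() {
        // Replace the alias with its expansion, keeping any trailing arguments.
        Some((cmd, tail)) if aliases.contains_key(cmd) => {
            out.extend(aliases[cmd].iter().map(|s| s.to_string()));
            out.extend(tail.iter().map(|s| s.to_string()));
        }
        _ => out.extend(rest.iter().map(|s| s.to_string())),
    }
    out
}

fn main() {
    let aliases = HashMap::from([("b", vec!["build"])]);
    // `cargo -v b --release` -> ["-v", "build", "--release"]
    println!("{:?}", expand_with_globals(&["-v", "b", "--release"], &aliases));
}
```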
An alternative to this is to try to avoid discarding the options during expansion, but I couldn't figure out a way to get the position of the subcommand in the argument list. Clap only provides a way to get the arguments *following* the subcommand.
I also cleaned up some of the code in `Config::configure`, which was carrying some weird baggage from previous refactorings.
Fixes #7834
Stabilize config-profile.
This is a proposal to stabilize config-profiles. This feature was proposed in [RFC 2282](https://github.com/rust-lang/rfcs/pull/2282) and implemented in #5506. Tracking issue is rust-lang/rust#48683.
This is intended to land in 1.43 which will reach the stable channel on April 23rd.
This is a fairly straightforward extension of profiles where the exact same syntax from `Cargo.toml` can be specified in a config file. Environment variables are supported for everything except the `package` override table, since we do not support reading arbitrary package names out of environment variable names.
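For illustration, a short sketch of the stabilized syntax (the profile values here are arbitrary examples):

```toml
# .cargo/config: same profile syntax as in Cargo.toml.
[profile.release]
opt-level = 3
debug = true

# The equivalent environment-variable form (supported for everything except
# `package` overrides) would be, for example:
#   CARGO_PROFILE_RELEASE_OPT_LEVEL=3
#   CARGO_PROFILE_RELEASE_DEBUG=true
```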
Update pretty_env_logger requirement from 0.3 to 0.4
Updates the requirements on [pretty_env_logger](https://github.com/seanmonstar/pretty-env-logger) to permit the latest version.
<details>
<summary>Commits</summary>
<ul>
<li><a href="fa4e28537f"><code>fa4e285</code></a> v0.4.0</li>
<li><a href="693b5e7088"><code>693b5e7</code></a> Remove chrono dependency</li>
<li><a href="28c5ad0cbb"><code>28c5ad0</code></a> env_logger: 0.6.2 -> 0.7.0</li>
<li><a href="0288e4ed4b"><code>0288e4e</code></a> fixes env goof in README.md</li>
<li><a href="76d9fc7606"><code>76d9fc7</code></a> Make env_logger dependency public</li>
<li><a href="8ddffae2c5"><code>8ddffae</code></a> v0.3.1</li>
<li><a href="04c1aa50e1"><code>04c1aa5</code></a> require latest env_logger</li>
<li><a href="ce8c2f12fb"><code>ce8c2f1</code></a> fix deprecated calls</li>
<li><a href="67d2e7d68b"><code>67d2e7d</code></a> timestamps with milliseconds</li>
<li><a href="27eaa2bc1b"><code>27eaa2b</code></a> fix with_builder_1 example (<a href="https://github-redirect.dependabot.com/seanmonstar/pretty-env-logger/issues/25">#25</a>)</li>
<li>See full diff in <a href="https://github.com/seanmonstar/pretty-env-logger/compare/v0.3.0...v0.4.0">compare view</a></li>
</ul>
</details>
Search for root manifest with ephemeral workspaces
Fixes #5495.
This seems almost too simple to just work, but after trying a few different approaches, this was the only solution that worked reliably for me.
I've verified that no `/target` directory is present in the actual checkout location; the target directory used is the one created in `/tmp`.
I've also verified that both workspaces and "normal" packages still install through git, and that a normal `cargo install --path` works too (though that doesn't use ephemeral workspaces anyway).
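As a rough illustration of the "search for the root manifest" idea (a from-scratch sketch, not Cargo's actual implementation; the naive `[workspace]` check stands in for proper manifest parsing):

```rust
// Walk up from a member's Cargo.toml looking for an enclosing manifest that
// declares `[workspace]`.
use std::fs;
use std::path::{Path, PathBuf};

fn find_workspace_root(member_manifest: &Path) -> Option<PathBuf> {
    let mut dir = member_manifest.parent()?;
    while let Some(parent) = dir.parent() {
        let candidate = parent.join("Cargo.toml");
        if let Ok(contents) = fs::read_to_string(&candidate) {
            // Naive check; a real implementation would parse the TOML.
            if contents.contains("[workspace]") {
                return Some(candidate);
            }
        }
        dir = parent;
    }
    None
}

fn main() {
    match find_workspace_root(Path::new("crates/foo/Cargo.toml")) {
        Some(root) => println!("workspace root manifest: {}", root.display()),
        None => println!("no enclosing workspace found"),
    }
}
```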
Store maximum queue length
Previously, the queue length was constantly decreasing as we built crates, which
meant that we were incorrectly displaying the progress bar. In debug builds,
this even led to panics (due to underflow on subtraction).
I'm not sure whether we can add a test case for this. I have, however, made the panic unconditional across release and debug builds by explicitly checking that current is less than the maximum for the progress bar.
Fixes https://github.com/rust-lang/cargo/pull/7731#issuecomment-578358824.
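A minimal sketch of the idea (the type and exact comparison are illustrative, not Cargo's actual progress code): keep the maximum queue length fixed and derive the display from it, with the invariant checked in both debug and release builds.

```rust
// Progress derived from a stored maximum rather than a shrinking queue length.
struct Progress {
    max: usize,
}

impl Progress {
    fn new(max: usize) -> Self {
        Progress { max }
    }

    /// `current` is how many units have finished so far.
    fn tick(&self, current: usize) {
        // Explicit check instead of relying on debug-only arithmetic checks,
        // so the bug would surface in release builds too.
        assert!(current <= self.max, "progress {} exceeds maximum {}", current, self.max);
        println!("[{}/{}] {} remaining", current, self.max, self.max - current);
    }
}

fn main() {
    let progress = Progress::new(5);
    for done in 0..=5 {
        progress.tick(done);
    }
}
```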
test: allow some flexibility in check::error_from_deep_recursion's expected diagnostic.
This should unblock https://github.com/rust-lang/rust/pull/68407, by loosening the expected output pattern.
As per https://github.com/rust-lang/rust/pull/68407#issuecomment-578189644, this is the change in the diagnostic:
```diff
-recursion limit reached while expanding the macro `m`
+recursion limit reached while expanding `m!`
```
Ideally I would use something like this regex:
```
recursion limit reached while expanding (the macro `m`|`m!`)
```
but AFAIK these tests don't support regexes.
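To sketch the kind of loosening that's possible without regexes (a from-scratch illustration of `[..]`-style wildcards, similar in spirit to what Cargo's own test patterns allow; the pattern below is an example, not necessarily the one used in the test):

```rust
// Naive left-to-right matcher for patterns whose `[..]` segments match any
// run of characters; patterns here are assumed to end with `[..]`.
fn wildcard_match(pattern: &str, text: &str) -> bool {
    let parts: Vec<&str> = pattern.split("[..]").collect();
    let mut rest = text;
    for (i, part) in parts.iter().enumerate() {
        match rest.find(part) {
            // The leading literal must match at the very start of the text;
            // later literals may match anywhere after the previous one.
            Some(pos) if i > 0 || pos == 0 => rest = &rest[pos + part.len()..],
            _ => return false,
        }
    }
    true
}

fn main() {
    let pattern = "recursion limit reached while expanding [..]m[..]";
    let old = "recursion limit reached while expanding the macro `m`";
    let new = "recursion limit reached while expanding `m!`";
    assert!(wildcard_match(pattern, old));
    assert!(wildcard_match(pattern, new));
    println!("one loosened pattern covers both diagnostics");
}
```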
Add tests for `cargo owner -a/-r` / `cargo yank --undo`
Follow-up to 5e15286.
There were no tests for `cargo owner -a/-r` and `cargo yank --undo`. However, in tests these commands received an empty response, so I also updated the response handling with reference to 7dd0f932a8.
Scalable jobserver for rustc
This refactors the job queue code to support [per-rustc process jobservers](https://github.com/rust-lang/rust/pull/67398). It also cleans up the code and refactors the main loop to be easier to understand (splitting it into methods and such).
Assignment of tokens to either rustc "threads" or processes is dedicated to the main loop, which proceeds in a strict "least recently requested" fashion among both thread and process token requests. Specifically, we will first allocate tokens to all pending process token requests (i.e., high-level units of work), and then (in per-rustc jobserver mode) follow up by assigning any remaining tokens to rustcs, again in the order that requests came into cargo (first request served first).
It's not entirely clear that this model is good (no modeling or measurement has been done). On the other hand, this strategy should mean that once we bottom out in terms of rustc process parallelism, long-running crates will get more thread tokens than short-running crates, so crates like `syn` which start early on but finish pretty late should hopefully get more parallelism (without any more complex heuristics).
One plausible change that may be worth exploring is making the assignment prefer earlier rustcs globally, rather than first attempting to spawn new crates and only then increasing parallelism for old crates. `syn`, for example, frequently gets compiled in the early storm of dozens of crates and so is somewhat unlikely to get extra parallelism until fairly late in its compilation.
We also currently conflate under this model the rayon threads and codegen threads. Eventually inside rustc those will probably(?) also be just one thing, and the rustc side of this implementation provides no information as to what the token request is for so we can't do better here yet.
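A minimal sketch of that assignment order (hypothetical types, not the actual job queue): pending process requests are drained first, then any remaining tokens go to per-rustc thread requests, each queue in FIFO ("least recently requested") order.

```rust
use std::collections::VecDeque;

#[derive(Debug)]
enum Assignment {
    Process(u32), // spawn a new rustc for unit `u32`
    Thread(u32),  // give running rustc `u32` another internal-parallelism token
}

fn assign_tokens(
    mut tokens: usize,
    process_reqs: &mut VecDeque<u32>,
    thread_reqs: &mut VecDeque<u32>,
) -> Vec<Assignment> {
    let mut assigned = Vec::new();
    // First: start new high-level units of work, oldest request first.
    while tokens > 0 {
        match process_reqs.pop_front() {
            Some(unit) => {
                assigned.push(Assignment::Process(unit));
                tokens -= 1;
            }
            None => break,
        }
    }
    // Then: hand any remaining tokens to running rustcs, oldest request first.
    while tokens > 0 {
        match thread_reqs.pop_front() {
            Some(unit) => {
                assigned.push(Assignment::Thread(unit));
                tokens -= 1;
            }
            None => break,
        }
    }
    assigned
}

fn main() {
    let mut procs = VecDeque::from(vec![1, 2]);
    let mut threads = VecDeque::from(vec![0]);
    // With 3 free tokens, units 1 and 2 are spawned before unit 0 gets an
    // extra thread token.
    println!("{:?}", assign_tokens(3, &mut procs, &mut threads));
}
```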
This is both a performance optimization (avoiding O(n) shifting from the
beginning), and communicates intent in a nicer way overall.
It is plausible that we will eventually want to tie this data structure to
something like the DependencyQueue, i.e., to get more information on which rustc
to give tokens to. An old rustc with a very late dependency edge is less
important than one we'll need sooner, probably.
This removes the ad-hoc token re-send in the message processing; this sort of
decision should be left up to the main loop which manages tokens.
Notably, the behavior change here is that new tokens will go solely to spawning
new rustc *processes* rather than increasing rustc internal parallelism, unless
we can't spawn new processes.
Otherwise, before this commit, we may be saturating a single rustc with tokens
rather than creating lots of rustcs that can work in parallel. In particular in
the beginning of a build, it's likely that this is worse (i.e., crates are small
and rustc internal parallelism is not at that point all that helpful) since it
severely limits the benefits of pipelining and generally makes the build
nearly serial.
This has the slight behavior change that we won't ask for new dependencies and
so forth if no events have been received, but I believe that no activity can
happen if an event hasn't occurred (i.e., no state change has occurred), so in
practice there's no need for us to actually do anything.
To make sure we still record CPU usage and such sufficiently often that is also
moved into the inner "waiting for events" loop.
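A rough sketch of the reshaped loop (helper names are hypothetical; not Cargo's actual code): block on the event channel, record CPU usage on every wakeup of the inner waiting loop, and only do scheduling work once at least one event has arrived.

```rust
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::thread;
use std::time::Duration;

#[derive(Debug)]
enum Event {
    Finished(u32),
}

// Placeholder for the periodic CPU/state recording mentioned above.
fn record_cpu_usage() {}

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        tx.send(Event::Finished(1)).unwrap();
    });

    // Inner "waiting for events" loop: keep recording CPU usage even while
    // nothing has happened, but don't reschedule until an event arrives.
    let mut events = Vec::new();
    loop {
        record_cpu_usage();
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(event) => {
                events.push(event);
                // Drain anything else already queued, then go do real work.
                while let Ok(event) = rx.try_recv() {
                    events.push(event);
                }
                break;
            }
            Err(RecvTimeoutError::Timeout) => continue,
            Err(RecvTimeoutError::Disconnected) => break,
        }
    }
    // Only now react: spawn more work, hand out tokens, and so on.
    println!("processing {} event(s): {:?}", events.len(), events);
}
```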