* sync: Implement map methods of parking_lot fame
Generally, this mimics the way the `MappedRwLock*Guard`s are implemented in
`parking_lot`: each guard stores a raw pointer to the mapped data and
maintains the type invariants through `PhantomData`. I didn't try to think
too much about this, so if someone has objections I'd love to hear them.
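For illustration, here is a minimal sketch of that approach (not the actual
tokio internals; the names are illustrative):
```rust
use std::marker::PhantomData;
use std::ops::Deref;

// The mapped guard stores a raw pointer to the mapped data; `PhantomData`
// ties it to the lifetime for which the lock is held.
struct MappedReadGuard<'a, T: ?Sized> {
    data: *const T,
    // Makes the guard act like a `&'a T` for variance and auto traits.
    marker: PhantomData<&'a T>,
}

impl<'a, T: ?Sized> Deref for MappedReadGuard<'a, T> {
    type Target = T;

    fn deref(&self) -> &T {
        // Safety: `data` was derived from a reference valid for 'a, and
        // the guard keeps the lock held for at least that long.
        unsafe { &*self.data }
    }
}
```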
I've also dropped the internal use of `ReleasingPermit`, since it made the
guards unnecessarily large. The number of permits that need to be released
is already known by the guards themselves, so the release is instead
performed directly in the relevant `Drop` impls. This has the benefit of
making the guards as small as possible; for the non-mapped variants, a
single reference is enough.
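As a rough sketch of the idea, assuming a semaphore-based `RwLock` in which
a writer holds all read permits (the names and the constant are illustrative):
```rust
use tokio::sync::Semaphore;

// A writer acquires all read permits; the count is a compile-time constant.
const MAX_READS: usize = 10;

struct WriteGuard<'a> {
    semaphore: &'a Semaphore,
}

impl Drop for WriteGuard<'_> {
    fn drop(&mut self) {
        // The number of permits to release is statically known, so the
        // guard can be a single reference and still release correctly.
        self.semaphore.add_permits(MAX_READS);
    }
}
```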
`fmt::Debug` impls have been adjusted to behave exactly like the delegating
impls in `parking_lot`, and `fmt::Display` impls that behave the same way
have been added for all guard types. This does change the output format of
the `Debug` impls, and I'm not sure whether we provide any guarantees about
that.
The `JoinHandle`s of threads created by the pool are now tracked and properly
joined at shutdown. If a thread does not return within the timeout, it is not
joined and is instead left to the OS for cleanup.
Also, break a cycle between wakers held by the timer and the runtime.
Fixes #2641, #2535
Previously, we would fail to reset the coop budget in this case, making
it so that `coop::poll_proceed` would perpetually yield `Poll::Pending`
in nested executors, even when run inside `block_in_place`.
This is also a further improvement on #2645.
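To illustrate the pattern the fix covers, a minimal sketch (using the
current runtime-builder API; the inner runtime and work are illustrative):
```rust
use tokio::task;

fn nested_block_on() {
    task::block_in_place(|| {
        // A fresh executor inside block_in_place: the coop budget must be
        // reset here, or polls below could perpetually return
        // Poll::Pending once the outer task's budget was exhausted.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();
        rt.block_on(async {
            tokio::task::yield_now().await;
        });
    });
}
```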
The new implementation changes the behavior such that `set_len` is called
after `poll_read`. The motivation for this change is that it makes it much
more obvious that a rogue panic won't give the caller access to a vector
exposing uninitialized memory. The new implementation also makes sure not
to zero memory twice.
Additionally, it makes the various implementations more consistent with
each other regarding the naming of variables and whether we store how many
bytes we have read or how many were in the container originally.
Fixes: #2544
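A sketch of the panic-safety pattern described above, written against
today's `ReadBuf` API (the helper and its signature are illustrative, not
tokio's internal code):
```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

// Hypothetical helper mirroring the pattern described above.
fn poll_read_to_end<R: AsyncRead>(
    mut reader: Pin<&mut R>,
    buf: &mut Vec<u8>,
    cx: &mut Context<'_>,
) -> Poll<io::Result<usize>> {
    // Reserve spare capacity, but do NOT touch buf.len() yet.
    buf.reserve(32);
    let mut read_buf = ReadBuf::uninit(buf.spare_capacity_mut());

    // If poll_read panics or fails, buf.len() is unchanged, so no
    // uninitialized memory is ever exposed to the caller.
    let n = match reader.as_mut().poll_read(cx, &mut read_buf) {
        Poll::Pending => return Poll::Pending,
        Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
        Poll::Ready(Ok(())) => read_buf.filled().len(),
    };

    // set_len is called only after poll_read succeeded.
    // Safety: `n` bytes of the spare capacity were just initialized.
    unsafe { buf.set_len(buf.len() + n) };
    Poll::Ready(Ok(n))
}
```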
* Add Unit Test demonstrating the issue
This test demonstrates a panic that occurs when the user inserts an
item with an instant in the past and then tries to reset the timeout
using the returned key.
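A sketch of such a test (the `DelayQueue` import path depends on the tokio
version; today it lives in `tokio-util` with the `time` feature):
```rust
use std::time::Duration;
use tokio::time::{self, Instant};
use tokio_util::time::DelayQueue;

#[tokio::test]
async fn reset_expired_entry() {
    time::pause();

    let mut queue = DelayQueue::new();
    // Insert with a deadline that is already in the past...
    let key = queue.insert_at("item", Instant::now() - Duration::from_secs(1));
    // ...then reset it via the returned key. Before the fix, this could
    // panic while trying to remove the entry from the wheel.
    queue.reset_at(&key, Instant::now() + Duration::from_secs(1));
}
```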
* Guard reset_at against removals of expired items
Trying to remove an already expired timer wheel entry (as done by
`DelayQueue::reset()`) causes panics in some cases, as described in #2573.
This change prevents the panic by removing the item from the expired queue
rather than from the wheel in these cases.
Fixes: #2473
The documentation build failed with errors such as:
```text
error: `[read]` public documentation for `take` links to a private item
    --> tokio/src/io/util/async_read_ext.rs:1078:9
     |
1078 | /     /// Creates an adaptor which reads at most `limit` bytes from it.
1079 | |     ///
1080 | |     /// This function returns a new instance of `AsyncRead` which will read
1081 | |     /// at most `limit` bytes, after which it will always return EOF
...   |
1103 | |     /// }
1104 | |     /// ```
     | |_______________^
     |
note: the lint level is defined here
    --> tokio/src/lib.rs:13:9
     |
13   | #![deny(intra_doc_link_resolution_failure)]
     |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     = note: the link appears in this line:
             bytes read and future calls to [`read()`][read] may succeed.
```
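The fix is to point such links at a public path instead; for instance (the
exact target path here is illustrative):
```rust
/// bytes read and future calls to
/// [`read()`](crate::io::AsyncReadExt::read) may succeed.
```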
`duplex` returns a pair of connected `DuplexStream`s.
`DuplexStream` is a bidirectional type that can be used to simulate IO,
operating over an in-process piece of memory.
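A minimal usage sketch (the buffer size and payload are arbitrary):
```rust
use tokio::io::{duplex, AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Each side can both read and write; 64 is the max in-flight bytes.
    let (mut client, mut server) = duplex(64);

    client.write_all(b"ping").await?;

    let mut buf = [0u8; 4];
    server.read_exact(&mut buf).await?;
    assert_eq!(&buf, b"ping");
    Ok(())
}
```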
Dropping a runtime normally involves waiting for any outstanding blocking tasks
to complete. When this drop happens in an asynchronous context, we previously
would issue a cryptic panic due to trying to block in an asynchronous context.
This change improves the panic message and adds a `shutdown_blocking()` function
which can be used to shut down a runtime without blocking at all, as an out for
cases where this really is necessary.
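A sketch of the intended escape hatch, using the `shutdown_blocking()` name
from this change (the exact signature is an assumption):
```rust
use tokio::runtime::Runtime;

fn drop_runtime_from_async_context(rt: Runtime) {
    // Instead of dropping `rt` here (which would need to block on
    // outstanding blocking tasks and would panic in an async context),
    // shut it down without blocking at all.
    rt.shutdown_blocking();
}
```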
Co-authored-by: Bryan Donlan <bdonlan@amazon.com>
Co-authored-by: Alice Ryhl <alice@ryhl.io>
For some as-yet-unknown reason, using the default on a wrapped
`BufReader<TcpStream>` causes the hyper server to sometimes fail to send the
entire body in the response.
This fixes that problem for us and ensures that hyper has a chance to
use vectored IO (making it a good change regardless of the mentioned
bug).
A fast path in `block_in_place` was failing to call `exit()` in the case
where we were already inside a `block_on` call.
Fixes: #2639
Co-authored-by: Bryan Donlan <bdonlan@amazon.com>
## Motivation
When debugging asynchronous systems, it can be very valuable to inspect
what tasks are currently active (see #2510). The [`tracing` crate] and
related libraries provide an interface for Rust libraries and
applications to emit and consume structured, contextual, and async-aware
diagnostic information. Because this diagnostic information is
structured and machine-readable, it is a better fit for the
task-tracking use case than textual logging — `tracing` spans can be
consumed to generate metrics ranging from a simple counter of active
tasks to histograms of poll durations, idle durations, and total task
lifetimes. This information is potentially valuable to both Tokio users
*and* to maintainers.
Additionally, `tracing` is maintained by the Tokio project and is
becoming widely adopted by other libraries in the "Tokio stack", such as
[`hyper`], [`h2`], and [`tonic`], and in [other] [parts] of the broader Rust
ecosystem. Therefore, it is suitable for use in Tokio itself.
[`tracing` crate]: https://github.com/tokio-rs/tracing
[`hyper`]: https://github.com/hyperium/hyper/pull/2204
[`h2`]: https://github.com/hyperium/h2/pull/475
[`tonic`]: 570c606397/tonic/Cargo.toml (L48)
[other]: https://github.com/rust-lang/chalk/pull/525
[parts]: https://github.com/rust-lang/compiler-team/issues/331
## Solution
This PR is an MVP for instrumenting Tokio with `tracing` spans. When the
"tracing" optional dependency is enabled, every spawned future will be
instrumented with a `tracing` span.
The generated spans are at the `TRACE` verbosity level, and have the
target "tokio::task", which may be used by consumers to filter whether
they should be recorded. They include fields for the type name of the
spawned future and for what kind of task the span corresponds to (a
standard `spawn`ed task, a local task spawned by `spawn_local`, or a
`blocking` task spawned by `spawn_blocking`). Because `tracing` has
separate concepts of "opening/closing" and "entering/exiting" a span, we
enter these spans every time the spawned task is polled. This allows
collecting data such as:
- the total lifetime of the task from `spawn` to `drop`
- the number of times the task was polled before it completed
- the duration of each individual poll (and therefore aggregated metrics
like histograms or averages of poll durations)
- the total time a span was actively being polled, and the total time
it was alive but **not** being polled
- the time between when the task was `spawn`ed and the first poll
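For example, a consumer might record these spans with `tracing-subscriber`,
filtering on the "tokio::task" target (a sketch; it assumes
`tracing-subscriber` with the `env-filter` feature and tokio's optional
`tracing` dependency enabled):
```rust
use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() {
    // Record only the task spans emitted by tokio, at TRACE level.
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::new("tokio::task=trace"))
        .init();

    // With the "tracing" feature enabled, this spawn is wrapped in a span
    // with target "tokio::task" carrying the future's type name.
    tokio::spawn(async {
        // ...
    })
    .await
    .unwrap();
}
```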
As an example, here is the output of a version of the `chat` example
instrumented with `tracing`:
[screenshot: `tracing-subscriber` output for the instrumented `chat` example]
And, with multiple connections actually sending messages:
[screenshot: the same output with multiple connections active]
I haven't added any `tracing` spans in the example, only converted the
existing `println!`s to `tracing::info!` and `tracing::error!` for
consistency. The span durations in the above output are generated by
`tracing-subscriber`. Of course, a Tokio-specific subscriber could
generate even more detailed statistics, but that's follow-up work once
basic tracing support has been added.
Note that the `Instrumented` type from `tracing-futures`, which attaches
a `tracing` span to a future, was reimplemented inside of Tokio to avoid
a dependency on that crate. `tracing-futures` has a feature flag that
enables an optional dependency on Tokio, and I believe that if another
crate in a dependency graph enables that feature while Tokio's `tracing`
support is also enabled, it would create a circular dependency that
Cargo wouldn't be able to handle. Also, it avoids a dependency for a
very small amount of code that is unlikely to ever change.
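For reference, a condensed sketch of what such an `Instrumented` wrapper
looks like (mirroring `tracing-futures`, not tokio's exact internal code;
`pin-project-lite` is used here for brevity):
```rust
use pin_project_lite::pin_project;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use tracing::Span;

pin_project! {
    struct Instrumented<F> {
        #[pin]
        future: F,
        span: Span,
    }
}

impl<F: Future> Future for Instrumented<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.project();
        // Enter the span for exactly the duration of this poll, so that
        // per-poll timings are attributed to the task.
        let _enter = this.span.enter();
        this.future.poll(cx)
    }
}
```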
There is, of course, room for plenty of future work here. This might
include:
- instrumenting other parts of `tokio`, such as I/O resources and
channels (possibly via waker instrumentation)
- instrumenting the threadpool so that the state of worker threads
can be inspected
- writing `tracing-subscriber` `Layer`s to collect and display
Tokio-specific data from these traces
- using `track_caller` (when it's stable) to record _where_ a task
was `spawn`ed from
However, this is intended as an MVP to get us started on that path.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Previously, dropping the write handle would issue a `shutdown(Both)`. However,
shutting down the read half is not portable and not the correct action to take.
This changes the behavior of `OwnedWriteHalf` to only perform a `shutdown(Write)`
on drop.
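To illustrate the new behavior (a sketch using `TcpStream::into_split`):
```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

async fn half_close(stream: TcpStream) -> std::io::Result<()> {
    let (mut read_half, write_half) = stream.into_split();

    // Dropping the write half now issues shutdown(Write) only, so the
    // peer sees EOF while we can still read whatever it sends back.
    drop(write_half);

    let mut buf = Vec::new();
    read_half.read_to_end(&mut buf).await?;
    Ok(())
}
```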
* doc: Update the docs of "pause" to state that time will still advance
This was changed in #2059 and had me extremely confused for some time, as my
timeouts fired immediately, without the wrapped futures that were waiting on
IO actually getting to run long enough.
I am not sure about the exact wording here, but the current docs are
misleading enough that some clarification is needed. Deprecating "pause" and
giving it a more accurate name may be a good idea as well.
```rust
use std::time::Duration;
use tokio::time::{self, timeout};

#[tokio::test]
async fn timeout_advances() {
    time::pause();
    timeout(Duration::from_millis(1), async {
        // With a single iteration this future resolves first; with two
        // or more, the timeout fires instead, since paused time still
        // advances whenever the runtime has no other work to do.
        for _ in 0..2 {
            tokio::task::yield_now().await
        }
    })
    .await
    .unwrap();
}
```
* Update tokio/src/time/clock.rs
Co-authored-by: Alice Ryhl <alice@ryhl.io>