In the `readiness` future, before inserting a waiter into the list, the current socket readiness is eagerly checked. However, the returned `ReadyEvent` contained the entire socket readiness rather than only the readiness matching the interest requested via `readiness(interest)`. As a result, the later call to `clear_readiness(event)` cleared all of it.
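A rough sketch of the corrected behavior, using today's public `Ready`/`Interest` types and a hypothetical helper (tokio's actual internals differ):

```rust
use tokio::io::{Interest, Ready};

// Hypothetical helper (not tokio's actual internals) illustrating the fix:
// only the readiness bits matching the requested interest are reported, so a
// later `clear_readiness(event)` clears just what the caller consumed.
fn ready_for_interest(current: Ready, interest: Interest) -> Ready {
    let mut filtered = Ready::EMPTY;
    if interest.is_readable() && current.is_readable() {
        filtered = filtered | Ready::READABLE;
    }
    if interest.is_writable() && current.is_writable() {
        filtered = filtered | Ready::WRITABLE;
    }
    filtered
}
```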
Closes #2886
Adding a `closed` future makes it possible to select over `closed` and some
other work, so that the task is woken when the channel is closed and can
proactively cancel itself.
Added an `mpsc::Sender::closed` future that becomes ready when the receiver
is closed.
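A small sketch of what this enables, assuming the current `Sender::closed(&self)` signature; `produce_next` is a hypothetical stand-in for the other work:

```rust
use tokio::sync::mpsc;

async fn producer(tx: mpsc::Sender<u32>) {
    loop {
        tokio::select! {
            // Resolves once the receiver is dropped or closes the channel,
            // letting this task cancel itself proactively.
            _ = tx.closed() => break,
            value = produce_next() => {
                if tx.send(value).await.is_err() {
                    break;
                }
            }
        }
    }
}

// Hypothetical stand-in for "some other work".
async fn produce_next() -> u32 {
    42
}
```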
As tokio does not rely on poisoning, we can avoid unwrapping on every
lock by handling the `PoisonError` in the Mutex shim.
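A minimal sketch of the idea (a simplified shim, not tokio's actual wrapper):

```rust
use std::sync::{Mutex as StdMutex, MutexGuard};

// Simplified shim: since tokio does not rely on lock poisoning, a poisoned
// lock is treated like a normal one by extracting the guard from the error.
pub(crate) struct Mutex<T>(StdMutex<T>);

impl<T> Mutex<T> {
    pub(crate) fn new(value: T) -> Self {
        Mutex(StdMutex::new(value))
    }

    pub(crate) fn lock(&self) -> MutexGuard<'_, T> {
        self.0
            .lock()
            .unwrap_or_else(|poisoned| poisoned.into_inner())
    }
}
```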
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
Updates the mpsc channel to use the intrusive, waker-based semaphore.
This enables using `Sender` with `&self`.
Instead of using `Sender::poll_ready` to ensure capacity and update the
`Sender` state, an `async fn Sender::reserve()` is added. This function
returns a `Permit` value representing the reserved capacity.
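A brief usage sketch of the reserve-based flow:

```rust
use tokio::sync::mpsc;

async fn send_with_reserved_capacity(tx: &mpsc::Sender<i32>) {
    // `reserve()` waits for a free slot; the returned `Permit` holds it.
    if let Ok(permit) = tx.reserve().await {
        // Sending through the permit consumes the reserved slot and never
        // needs to wait.
        permit.send(42);
    }
}
```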
Fixes: #2637
Refs: #2718 (intrusive waiters)
These functions have object-safety issues. It has also been decided to
avoid vectored operations on the I/O traits. A later PR will bring back
vectored operations on specific types that support them.
Refs: #2879, #2716
As we go into 0.3, we no longer need to support older versions of Rust where
`IoSlice` did not implement `Copy` and `Clone`, so we can more easily
initialize the `IoSlice` array in `net::tcp::stream`.
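A sketch of the initialization pattern this allows (relying on `IoSlice` implementing `Copy`; the array size here is arbitrary):

```rust
use std::io::IoSlice;

// With `IoSlice: Copy`, a fixed-size array of (initially empty) slices can be
// built with array-repeat syntax instead of element-by-element initialization.
fn empty_io_slices() -> [IoSlice<'static>; 4] {
    [IoSlice::new(&[]); 4]
}
```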
Co-authored-by: Bryan Donlan <bdonlan@amazon.com>
This change will still internally compile any `signal` resources
required when `process` is enabled on Unix systems, but it will not
publicly turn on the `signal` cargo feature.
This refactors I/O registration in a few ways:
- Cleans up the cached readiness in `PollEvented`. This cache used to
be helpful when readiness was a linked list of `*mut Node`s in
`Registration`. Previous refactors have turned `Registration` into just
an `AtomicUsize` holding the current readiness, so the cache is just
extra work and complexity. Gone.
- Polling the `Registration` for readiness now gives a `ReadyEvent`,
which includes the driver tick. This event must be passed back into
`clear_readiness`, so that the readiness is only cleared from `Registration`
if the tick hasn't changed. Previously, it was possible to clear the
readiness even though another thread had *just* polled the driver and
found the socket ready again.
- Registration now also contains an `async fn readiness`, which stores
wakers in an intrusive linked list. This allows an unbounded number
of tasks to register for readiness (previously, only 1 per direction (read
and write)). By using the intrusive linked list, there is no concern of
leaking the storage of the wakers, since they are stored inside the `async fn`
and released when the future is dropped.
- Registration retains a `poll_readiness(Direction)` method, to support
`AsyncRead` and `AsyncWrite`. They aren't able to use `async fn`s, and
so there are 2 reserved slots for those methods.
- IO types where it makes sense to have multiple tasks waiting on them
now take advantage of this new `async fn readiness`, such as `UdpSocket`
and `UnixDatagram` (a rough usage sketch follows this list).
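A rough usage sketch of the pattern this enables, assuming the readiness-style methods (`ready`, `try_recv_from`) that later releases expose on `UdpSocket`:

```rust
use std::io;
use tokio::io::{Interest, Ready};
use tokio::net::UdpSocket;

// Any number of tasks can run this loop on the same socket through `&self`;
// each task's waker lives in the intrusive list inside the `readiness`-style
// future and is released when that future is dropped.
async fn recv_one(
    socket: &UdpSocket,
    buf: &mut [u8],
) -> io::Result<(usize, std::net::SocketAddr)> {
    loop {
        // Wait until the driver reports the socket as readable.
        let ready: Ready = socket.ready(Interest::READABLE).await?;

        if ready.is_readable() {
            match socket.try_recv_from(buf) {
                Ok(res) => return Ok(res),
                // Spurious readiness: the cached readiness is cleared and we
                // go back to waiting on the driver.
                Err(e) if e.kind() == io::ErrorKind::WouldBlock => continue,
                Err(e) => return Err(e),
            }
        }
    }
}
```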
Additionally, this makes the `io-driver` "feature" internal-only (no longer
documented, not part of public API), and adds a second internal-only
feature, `io-readiness`, to group together the linked-list part of
registration that is only used by some of the IO types.
After a bit of discussion, changing stream-based transports (like
`TcpStream`) to have `async fn read(&self)` is punted, since that
is likely too easy of a footgun to activate.
Refs: #2779, #2728
In `watch::Receiver::changed`, `Notified` was polled
for the first time to ensure the waiter is registered, under the
assumption that the first poll always returns `Pending`.
However, when another instance of `Notified` is dropped without
receiving its notification, that "orphaned" notification can be used
to satisfy another waiter without even registering it. This commit
accounts for that scenario.
When the mpsc channel receiver closes the channel, receiving should
return `None` once all in-progress sends have completed. When a sender
reserves capacity, this prevents the receiver from fully shutting down.
Previously, when a sender reserved capacity and then dropped without
sending a message, the receiver was not notified. This blocked the
shutdown process until all sender handles dropped.
This patch adds a receiver notification when the channel is both closed
and all outstanding sends have completed.
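A sketch of the resulting behavior, using the `reserve()` and `close()` APIs described above:

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<u32>(4);

    // A sender reserves capacity but never sends.
    let permit = tx.reserve().await.unwrap();

    // The receiver closes the channel and starts shutting down.
    rx.close();

    // Dropping the unused permit notifies the receiver, so `recv()` returns
    // `None` without waiting for every `Sender` handle to drop.
    drop(permit);
    assert!(rx.recv().await.is_none());
}
```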
* Add const constructors to `RwLock`, `Notify`, and `Semaphore`.
Referring to the types in `tokio::sync`.
Also add `const` to `new` for the remaining atomic integers in `src/loom` and `UnsafeCell`.
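A brief sketch of what this enables, assuming the constructors are exposed as `const_new` (they may also be gated behind a feature flag):

```rust
use tokio::sync::{Notify, RwLock, Semaphore};

// Statics no longer need lazy initialization; the assumed `const_new`
// constructors build the primitives at compile time.
static MAX_CONNECTIONS: Semaphore = Semaphore::const_new(100);
static SHARED_STATE: RwLock<Option<String>> = RwLock::const_new(None);
static SHUTDOWN_SIGNAL: Notify = Notify::const_new();
```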
Builds upon previous work in #2790. Closes #2756
Decouples getting the latest `watch` value from receiving the change
notification. The `Receiver` async method becomes
`Receiver::changed()`. The latest value is obtained from
`Receiver::borrow()`.
The implementation is updated to use `Notify`. This requires adding
`Notify::notify_waiters`. This method is generally useful but is kept
private for now.
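A short usage sketch of the decoupled API (assuming `watch::Sender::send` for publishing a new value):

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel("v1");

    tx.send("v2").unwrap();

    // `changed()` only reports that a new value is available; the value
    // itself is read separately through `borrow()`.
    rx.changed().await.unwrap();
    assert_eq!(*rx.borrow(), "v2");
}
```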