Included changes
- all simple references like `<type>.<name>.html` for these types:
  - enum
  - fn
  - struct
  - trait
  - type
- simple references for methods, like `struct.DelayQueue.html#method.poll`
Refs: #1473
Some reasons to prefer the logical count as the default:
- Chips that report more logical CPUs than physical ones, such as via
  hyperthreading, probably know better than we do what workload the
  CPUs can handle.
- The logical count (`num_cpus::get()`) takes scheduler affinity and
  cgroups CPU quotas into consideration, in case the user wants to
  limit the number of CPUs a process can use (see the sketch below).
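A minimal sketch of the distinction, using the `num_cpus` crate referenced above:
```rust
fn main() {
    // Logical count: takes scheduler affinity and cgroup CPU quotas
    // into account.
    let logical = num_cpus::get();
    // Physical count: raw core count, ignoring those limits.
    let physical = num_cpus::get_physical();
    println!("logical: {}, physical: {}", logical, physical);
}
```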
Closes #2269
Fixes a couple of bugs in the work-stealing queue introduced as
part of #2315. First, the cursor needs to be able to represent more
values than the size of the buffer, in order to track whether
`tail` is ahead of `head` or the two are identical. This bug resulted in
the "overflow" path being taken before the buffer was full.
The second bug can happen when a queue is being stolen from concurrently
with stealing into. In this case, it is possible for buffer slots to be
overwritten before they are released by the stealer. This is less likely
to happen in practice because the first bug prevented the queue from
filling up 100%, but it could still occur. It triggered an assertion in
`steal_into`. This bug slipped through due to a bug in loom not
correctly catching the case. The loom bug is fixed as part of
tokio-rs/loom#119.
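As an illustration of the first bug (simplified types, not the actual tokio code): with a 256-slot buffer, an 8-bit cursor cannot distinguish a full queue from an empty one, since both leave `tail == head`. Widening the cursor and masking on access resolves the ambiguity:
```rust
const CAPACITY: usize = 256; // illustrative; must be a power of two

// u16 cursors wrap at 65536, a multiple of CAPACITY, so wrapping
// subtraction always yields the true number of queued items.
fn len(head: u16, tail: u16) -> usize {
    tail.wrapping_sub(head) as usize
}

fn is_full(head: u16, tail: u16) -> bool {
    len(head, tail) == CAPACITY // now distinguishable from empty (len == 0)
}

fn slot_index(cursor: u16) -> usize {
    cursor as usize & (CAPACITY - 1) // mask the cursor down to a buffer index
}
```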
Fixes: #2382
Previously, the `Mutex::lock`, `RwLock::{read, write}`, and
`Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync`
automatically. This was by virtue of being implemented using a `poll_fn`
that only closed over `Send + Sync` types. However, this broke in
PR #2325, which rewrote those types using the new `batch_semaphore`.
Now, they await an `Acquire` future, which contains a `Waiter`, which
internally contains an `UnsafeCell`, and thus does not implement `Sync`.
Since removing previously implemented traits breaks existing code, this
inadvertently caused a breaking change. There were tests ensuring that
the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send +
Sync`, but no tests that the _futures they return_ implemented those
traits.
I've fixed this by adding an explicit impl of `Sync` for the
`batch_semaphore::Acquire` future. Since the `Waiter` type held by this
struct is only accessed when borrowed mutably, it is safe for it to
implement `Sync`.
Additionally, I've added bounds tests for the affected `tokio::sync`
types to ensure that the returned futures continue to implement
`Send + Sync` in the future.
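A sketch of what such a bounds test can look like (`require_send` and `require_sync` are illustrative helpers, not tokio APIs):
```rust
fn require_send<T: Send>(_t: &T) {}
fn require_sync<T: Sync>(_t: &T) {}

fn bounds(mutex: &tokio::sync::Mutex<u32>) {
    // These lines fail to compile if the future returned by `lock`
    // ever stops implementing Send + Sync again.
    require_send(&mutex.lock());
    require_sync(&mutex.lock());
}
```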
The new queue uses `u8` to track offsets. Cursors are expected to wrap.
An operation was performed with `+` instead of `wrapping_add`. This was
not an _obvious_ issue before, as it is difficult to wrap a `usize` on
64-bit platforms, but wrapping a `u8` is trivial.
The fix is to use `wrapping_add` instead of `+`. A new test is added
that catches the issue.
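The difference in miniature:
```rust
fn main() {
    let tail: u8 = u8::MAX;
    // `tail + 1` panics in debug builds (and wraps silently in release
    // builds); `wrapping_add` wraps by definition on every build.
    let next = tail.wrapping_add(1);
    assert_eq!(next, 0);
}
```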
Fixes #2361
The work-stealing scheduler includes an optimization where each worker
has a single slot to store the **last** scheduled task. Tasks in the
scheduler's LIFO slot are executed next. This improves throughput and
reduces latency for message-passing patterns.
Previously, this optimization was susceptible to starving other tasks in
certain cases. If two tasks ping-pong between each other without ever
yielding, the worker would never execute other tasks.
An earlier PR (#2160) introduced a form of pre-emption. Each task is
allocated a per-poll operation budget. Tokio resources return ready as
usual until the budget is depleted, at which point they always return
`Pending`.
This patch leverages the operation budget to limit the LIFO scheduler
optimization. When executing tasks from the LIFO slot, the budget is
**not** reset. Once the budget goes to zero, the task in the LIFO slot
is pushed to the back of the queue.
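A self-contained sketch of the policy, with illustrative names and a deliberately simplified budget (the real scheduler charges the budget per resource operation, not per task):
```rust
use std::collections::VecDeque;

type Task = Box<dyn FnOnce()>;

struct Worker {
    lifo_slot: Option<Task>,
    run_queue: VecDeque<Task>,
    budget: u8,
}

impl Worker {
    fn next_task(&mut self) -> Option<Task> {
        if let Some(task) = self.lifo_slot.take() {
            if self.budget > 0 {
                // Running from the LIFO slot does *not* reset the budget.
                self.budget -= 1;
                return Some(task);
            }
            // Budget exhausted: push the LIFO task to the back of the
            // queue so it cannot starve the other tasks.
            self.run_queue.push_back(task);
        }
        // Pulling from the run queue resets the per-poll budget.
        self.budget = 128;
        self.run_queue.pop_front()
    }
}
```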
## Motivation
When cancelling futures which are waiting to acquire semaphore permits,
there is a possible dangling pointer if notified futures are dropped
after the notified wakers have been split into a separate list. Because
these futures' wait queue nodes are no longer in the main list guarded
by the lock, their `Drop` impls will complete immediately, and they may
be dropped while still in the list of tasks to notify.
## Solution
This branch fixes this by popping from the wait list inside the lock.
The wakers of popped nodes are temporarily stored in a stack array,
so that they can be notified after the lock is released. Since the
size of the stack array is fixed, we may in some cases have to loop
multiple times, acquiring and releasing the lock, until all permits
have been released. This may also have the side benefit of preventing
a thread releasing a very large number of permits from starving other
threads that need to enqueue waiters.
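A sketch of the drain loop (simplified: a `Vec` stands in for the intrusive list, and each waiter needs exactly one permit):
```rust
use std::sync::Mutex;
use std::task::Waker;

const NUM_WAKERS: usize = 32; // size of the stack buffer (illustrative)

fn release(list: &Mutex<Vec<Waker>>, mut permits: usize) {
    while permits > 0 {
        let mut wakers: [Option<Waker>; NUM_WAKERS] = Default::default();
        let mut n = 0;
        {
            let mut list = list.lock().unwrap();
            // Pop inside the lock, so a cancelled future's Drop impl can
            // never observe its node in a list the lock does not guard.
            while n < NUM_WAKERS && permits > 0 {
                match list.pop() {
                    Some(waker) => {
                        wakers[n] = Some(waker);
                        n += 1;
                        permits -= 1;
                    }
                    None => {
                        permits = 0;
                        break;
                    }
                }
            }
        } // lock released here
        for waker in wakers.iter_mut().take(n) {
            waker.take().unwrap().wake();
        }
    }
}
```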
I've also added a loom test that can reliably reproduce a segfault
on master, but passes on this branch (after a lot of iterations).
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This reverts commit 826fc21abfd04779cde92e4cc33de2adb42cf327.
The code was intentional. Holding the lock while notifying is
unnecessary. Also change the code to use `drop` so clippy doesn't
confuse people against their will.
Fixes a test from a PR that was written before the recent loom upgrade.
A change in the details of how loom executes models caused the test to
start failing. The fix is to reduce the number of iterations performed
by the test.
This commit improves the memory ordering in the implementation of
Inner's increment function. The former code did a sequentially
consistent load of self.num, then entered a loop with a sequentially
consistent compare-and-swap on the same field, bailing out with an Err
only if the loaded value was MAX_TIMEOUTS. The use of SeqCst means that all
threads must observe all relevant memory operations in the same order,
implying synchronization between all CPUs.
This commit adjusts the implementation in two key ways. First, the
initial load of self.num is now done with Relaxed ordering. If two
threads entered this code simultaneously, formerly, tokio required
that one proceed before the other, negating their parallelism. Now,
either thread may proceed without coordination. Second, the SeqCst
compare_and_swap is changed to a Release, Relaxed
compare_exchange_weak. The first memory ordering refers to success:
if the value is swapped, the load of that value for comparison is
Relaxed and the store is Release. The second memory ordering refers
to failure: if the value is not swapped, the load is Relaxed. The
_weak variant may spuriously fail but generates better code.
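A reconstruction of the pattern described above, with illustrative names and a placeholder bound:
```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const MAX_TIMEOUTS: usize = 1 << 20; // placeholder value

fn increment(num: &AtomicUsize) -> Result<usize, ()> {
    // Relaxed initial load: no cross-thread coordination required.
    let mut prev = num.load(Ordering::Relaxed);
    loop {
        if prev == MAX_TIMEOUTS {
            return Err(());
        }
        match num.compare_exchange_weak(
            prev,
            prev + 1,
            Ordering::Release, // success: the store is Release
            Ordering::Relaxed, // failure: the reload is Relaxed
        ) {
            Ok(_) => return Ok(prev + 1),
            // A spurious failure just retries with the observed value.
            Err(actual) => prev = actual,
        }
    }
}
```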
These changes mean that more loop iterations may be taken per call than
strictly necessary, but in exchange the operation allows greater
parallelism and improved energy consumption, as CPUs don't have to
coordinate as much.
When the queue was polled and yielded an index from the wheel, the delay
until the next item was never updated. As a result, when one item was
yielded from `poll_idx`, the following insert erroneously updated the
delay to the instant of the inserted item.
Fixes: #1700
Loom is undergoing a major refresh to improve performance and tighten up
the concurrency model. This diff tracks those changes.
Included in the changes is the removal of `CausalCell` deferred checks.
This is due to it technically being undefined behavior in the C++11
memory model. To address this, the work-stealing queue is updated to
avoid needing this behavior. This is done by limiting the queue to have
one concurrent stealer.
The resume function was breaking the guarantee that an `Instant` should
never be less than any previously measured `Instant`. The pause and
resume functions have been altered so that they no longer break this
guarantee: after resume, time continues from where it left off.
A test was created to prove that the advance function still works as
expected, and additional tests were added for the pause/advance/resume
functions.
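A sketch of the guarantee as a test, assuming the `tokio::time` pause/advance/resume utilities:
```rust
use std::time::Duration;
use tokio::time::{advance, pause, resume, Instant};

#[tokio::test]
async fn resume_preserves_monotonicity() {
    pause();
    let before = Instant::now();
    advance(Duration::from_secs(5)).await;
    resume();
    // A newly created Instant must never be less than one measured
    // earlier, even across a pause/resume cycle.
    assert!(Instant::now() >= before);
}
```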
## Motivation
Many of Tokio's synchronization primitives (`RwLock`, `Mutex`,
`Semaphore`, and the bounded MPSC channel) are based on the internal
semaphore implementation, called `semaphore_ll`. This semaphore type
provides a lower-level internal API for the semaphore implementation
than the public `Semaphore` type, and supports "batch" operations, where
waiters may acquire more than one permit at a time, and batches of
permits may be released back to the semaphore.
Currently, `semaphore_ll` uses an atomic singly-linked list for the
waiter queue. The linked list implementation is specific to the
semaphore. This implementation therefore requires a heap allocation for
every waiter in the queue. These allocations are owned by the semaphore,
rather than by the task awaiting permits from the semaphore. Critically,
they are only _deallocated_ when permits are released back to the
semaphore, at which point it dequeues as many waiters from the front of
the queue as can be satisfied with the released permits. If a task
attempts to acquire permits from the semaphore and is cancelled (such as
by timing out), its waiter node remains in the list until it is
dequeued while releasing permits. In cases where large numbers of tasks
are cancelled while waiting for permits, this results in extremely high
memory use for the semaphore (see #2237).
## Solution
@Matthias247 has proposed that Tokio adopt the approach used in his
`futures-intrusive` crate: using an _intrusive_ linked list to store the
wakers of tasks waiting on a synchronization primitive. In an intrusive
list, each list node is stored as part of the entry that node
represents, rather than in a heap allocation that owns the entry.
Because futures must be pinned in order to be polled, the necessary
invariant of such a list --- that entries may not move while in the list
--- may be upheld by making the waiter node `!Unpin`. In this approach,
the waiter node can be stored inline in the future, rather than
requiring separate heap allocation, and cancelled futures may remove
their nodes from the list.
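An illustrative sketch of the intrusive layout (names simplified; not the actual tokio definitions):
```rust
use std::marker::PhantomPinned;
use std::ptr::NonNull;
use std::task::Waker;

struct Semaphore; // stand-in for the real type

// The node lives inside the future itself rather than in a
// semaphore-owned heap allocation.
struct Waiter {
    waker: Option<Waker>,
    // The links are part of the entry, so unlinking a cancelled waiter
    // is O(1) pointer surgery under the lock.
    prev: Option<NonNull<Waiter>>,
    next: Option<NonNull<Waiter>>,
    // The node must never move while linked; this makes it !Unpin.
    _pin: PhantomPinned,
}

struct Acquire<'a> {
    semaphore: &'a Semaphore,
    // Pinning the `Acquire` future pins the waiter node in place.
    waiter: Waiter,
}
```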
This branch adds a new semaphore implementation that uses the intrusive
list added to Tokio in #2210. The implementation is essentially a hybrid
of the old `semaphore_ll` and the semaphore used in `futures-intrusive`:
while a `Mutex` around the wait list is necessary, since the intrusive
list is not thread-safe, the permit state is stored outside of the mutex
and updated atomically.
The mutex is acquired only when accessing the wait list — if a task
can acquire sufficient permits without waiting, it does not need to
acquire the lock. When releasing permits, we iterate over the wait
list from the end of the queue until we run out of permits to release,
and split off all the nodes that received enough permits to wake up
into a separate list. Then, we can drain the new list and notify those
wakers *after* releasing the lock. Because the split operation only
modifies the pointers on the head node of the split-off list and the
new tail node of the old list, it is O(1) and does not require an
allocation to return a variable number of waiters to notify.
Because of the intrusive list invariants, the API provided by the new
`batch_semaphore` is somewhat different from that of `semaphore_ll`. In
particular, the `Permit` type has been removed. This type was primarily
intended to allow the reuse of a wait list node allocated on the heap.
Since the intrusive list means we can avoid heap-allocating waiters,
this is no longer necessary. Instead, acquiring permits is done by
polling an `Acquire` future returned by the `Semaphore` type. The use of
a future here ensures that the waiter node is always pinned while
waiting to acquire permits, and that a reference to the semaphore is
available to remove the waiter if the future is cancelled.
Unfortunately, the current implementation of the bounded MPSC requires a
`poll_acquire` operation, and has methods that call it while outside of
a pinned context. Therefore, I've left the old `semaphore_ll`
implementation in place to be used by the bounded MPSC, and updated the
`Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation.
Hopefully, a subsequent change can update the bounded MPSC to use the
new semaphore as well.
Fixes #2237
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Since the original shell runtime was implemented, utilities have been
added to encapsulate `unsafe`. The shell runtime is now able to use
those utilities rather than including its own `unsafe` code.
tokio-util uses `tokio::stream::StreamExt`, which was not introduced
until tokio v0.2.5. The current dependency specification is incorrect,
and breaks with `cargo update -Z minimal-versions`.
This PR was prompted by a few cases of people not noticing that
`Runtime::handle` can be cloned, and therefore not realizing that the
handle could be moved to another thread.
A single call to `poll` on a top-level task may potentially do a lot of
work before it returns `Poll::Pending`. If a task runs for a long period
of time without yielding back to the executor, it can starve other tasks
waiting on that executor to execute them, or drive underlying resources.
See for example rust-lang/futures-rs#2047, rust-lang/futures-rs#1957,
and rust-lang/futures-rs#869. Since Rust does not have a runtime, it is
difficult to forcibly preempt a long-running task.
Consider a future like this one:
```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {}
}
```
It may look harmless, but consider what happens under heavy load if the
input stream is _always_ ready. If we spawn `drop_all`, the task will
never yield, and will starve other tasks and resources on the same
executor.
This patch adds a `coop` module that provides an opt-in mechanism for
futures to cooperate with the executor to avoid starvation. This
alleviates the problem above:
```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {
        tokio::coop::proceed().await;
    }
}
```
The call to `proceed` will coordinate with the executor to make sure
that every so often control is yielded back to the executor so it can
run other tasks.
The implementation uses a thread-local counter that simply counts how
many "cooperation points" we have passed since the task was first
polled. Once the "budget" has been spent, any subsequent points will
return `Poll::Pending`, eventually making the top-level task yield. When
it finally does yield, the executor resets the budget before
running the next task.
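A sketch of that mechanism with illustrative names (the real module also tracks whether a budget is set at all):
```rust
use std::cell::Cell;
use std::task::{Context, Poll};

thread_local! {
    // Per-thread budget; the executor resets it before each task poll.
    static BUDGET: Cell<u32> = Cell::new(128);
}

// Returns Ready while budget remains; Pending once it is spent, which
// propagates up and makes the top-level task yield.
fn poll_proceed(cx: &mut Context<'_>) -> Poll<()> {
    BUDGET.with(|budget| {
        let left = budget.get();
        if left == 0 {
            // Out of budget: schedule the task to be polled again.
            cx.waker().wake_by_ref();
            Poll::Pending
        } else {
            budget.set(left - 1);
            Poll::Ready(())
        }
    })
}
```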
The budget per task poll is currently hard-coded to 128. Eventually, we
may want to make it dynamic as more cooperation points are added. The
number 128 was chosen more or less arbitrarily to balance the cost of
yielding unnecessarily against the time an executor may be "held up".
At the moment, all the tokio leaf futures ("resources") call into coop,
but external futures have no way of doing so. We probably want to
continue limiting coop points to leaf futures in the future, but may
want to also enable third-party leaf futures to cooperate to benefit the
ecosystem as a whole. This is reflected in the methods marked as `pub`
in `mod coop` (even though the module is only `pub(crate)`). We will
likely also eventually want to expose `coop::limit`, which enables
sub-executors and manual `impl Future` blocks to prevent a single
sub-task from spending the entire poll budget.
Benchmarks (see tokio-rs/tokio#2160) suggest that the overhead of `coop`
is marginal.