60 Commits

Author SHA1 Message Date
Hamir Mahal
d4178cf349
tokio: avoid positional fmt params when possible (#6978) 2024-11-18 13:50:58 +01:00
Alice Ryhl
ebe241647e
ci: use cargo deny (#6931) 2024-10-23 18:48:07 +02:00
Tim Vilgot Mikael Fredenberg
feb742c58e
chore: replace num_cpus with available_parallelism (#6709) 2024-07-22 23:15:23 +02:00
Michael Scholten
daa89017da
ci: fix new clippy warnings (#6569) 2024-05-18 10:09:37 +02:00
Weijia Jiang
f6eb1ee196
time: lazily init timers on first poll (#6512) 2024-05-03 15:37:52 +02:00
M.Amin Rayej
bb25a06f34
chore: fix dead code warnings (#6423) 2024-03-22 13:44:18 +03:30
Patrick McGleenon
e392c4ff1e
chore: update CI to clippy 1.76 (#6334)
Co-authored-by: Rafael Bachmann <rafael.bachmann.93@gmail.com>
2024-02-10 10:45:40 +01:00
Sergei Fomin
7536132065
sync: use AtomicBool in broadcast channel future (#6298) 2024-01-27 18:52:55 +00:00
Alice Ryhl
52f28dcb4f
benches: fix benchmarking conflicts for real this time (#6246) 2023-12-28 09:52:18 +00:00
Alice Ryhl
7cae89af47
benches: fix benchmarking conflicts (#6243) 2023-12-22 22:43:05 +00:00
oliver
4aa7bbff4c
chore: typo fixes (#6213)
Co-authored-by: kwfn <calm.rain5339@fastmail.com>
2023-12-11 19:22:55 +00:00
Weijia Jiang
3a4aef17b2
runtime: reduce the lock contention in task spawn (#6001) 2023-12-07 10:45:07 +00:00
Aaron Schweiger
881b510a07
sync: add mpsc::Receiver::recv_many (#6010) 2023-10-17 11:01:41 +02:00
Tymoteusz Wiśniewski
ca89c5b2ec
benches: move sender to a spawned task in watch benchmark (#6034) 2023-09-28 11:28:12 +02:00
M.Amin Rayej
b046c0dcbb
benches: use criterion instead of bencher (#5981) 2023-09-10 16:42:53 +02:00
Taiki Endo
af6c87a045
chore: upgrade remaining 2018 edition crates to 2021 edition (#5788) 2023-06-12 02:21:50 +09:00
Carl Lerche
79a7e78c0d
rt(threaded): basic self-tuning of injection queue (#5720)
Each multi-threaded runtime worker prioritizes pulling tasks off of its
local queue. Every so often, it checks the injection (global) queue for
work submitted there. Previously, "every so often" was a constant
"number of tasks polled" value. Tokio sets a default of 61, but allows
users to configure this value.

If workers are under load with tasks that are slow to poll, the
injection queue can be starved. To prevent starvation in this case, this
commit implements some basic self-tuning. The multi-threaded scheduler
tracks the mean task poll time using an exponentially-weighted moving
average. It then uses this value to pick an interval at which to check
the injection queue.

This commit is a first pass at adding self-tuning to the scheduler.
There are other values in the scheduler that could benefit from
self-tuning (e.g. the maintenance interval). Additionally, the
current-thread scheduler could also benefit from self-tuning. However, we
have reached the point where we should start investigating ways to unify
logic in both schedulers. Adding self-tuning to the current-thread
scheduler will be punted until after this unification.
2023-06-01 08:13:24 -07:00
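
A rough sketch of the tuning idea described in this commit, in its spirit rather than its actual code: track an exponentially-weighted moving average of task poll times and derive the injection-queue check interval from it. The struct name, smoothing factor, target, and clamp bounds below are all assumptions made for illustration.

```rust
/// Illustrative EWMA-based tuning sketch (not Tokio's implementation).
struct TuneSketch {
    /// Mean task poll time in nanoseconds, exponentially weighted.
    mean_poll_ns: f64,
}

impl TuneSketch {
    /// Weight given to the newest sample (assumed value).
    const ALPHA: f64 = 0.1;
    /// Aim to check the injection queue roughly every 200µs of poll work
    /// (illustrative target, not taken from the commit).
    const TARGET_NS_PER_CHECK: f64 = 200_000.0;

    fn record_poll(&mut self, poll_ns: u64) {
        // EWMA update: mean = alpha * sample + (1 - alpha) * mean
        self.mean_poll_ns =
            Self::ALPHA * poll_ns as f64 + (1.0 - Self::ALPHA) * self.mean_poll_ns;
    }

    /// Number of locally polled tasks between injection-queue checks.
    fn global_queue_interval(&self) -> u32 {
        // Slower tasks => smaller interval => the injection queue is checked
        // more often, so it is not starved by long-running polls.
        (Self::TARGET_NS_PER_CHECK / self.mean_poll_ns.max(1.0)).clamp(2.0, 127.0) as u32
    }
}
```
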
Carl Lerche
3a94eb0893
rt: batch pop from injection queue when idle (#5705)
In the multi-threaded scheduler, when there are no tasks on the local
queue, a worker will attempt to pull tasks from the injection queue.
Previously, the worker would only attempt to poll one task from the
injection queue then continue trying to find work from other sources.
This can result in the injection queue backing up when there are many
tasks being scheduled from outside of the runtime.

This patch updates the worker to try to poll more than one task from the
injection queue when it has no more local work. Note that we also don't
want a single worker to poll **all** tasks on the injection queue as
that would result in work becoming unbalanced.
2023-05-23 08:16:41 -07:00
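
One way the "more than one, but not all" trade-off can be expressed is to take a proportional share of the injection queue, capped by the local queue's free space. The function name and formula below are assumptions for illustration, not the commit's code.

```rust
/// Illustrative only: how many tasks an idle worker might pull from the
/// injection queue in one batch.
fn injection_batch_size(inject_len: usize, num_workers: usize, local_queue_free: usize) -> usize {
    // Take roughly a 1/num_workers share so one worker does not drain the
    // whole injection queue and leave the others without work...
    let share = inject_len / num_workers + 1;
    // ...capped by how many tasks actually fit in the worker's local queue.
    share.min(local_queue_free)
}
```
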
Carl Lerche
93bde0870f
rt: use task::Inject with current_thread scheduler (#5702)
Previously, the current_thread scheduler used its own injection queue
instead of sharing the same one as the multi-threaded scheduler. This
patch updates the current_thread scheduler to use the same injection
queue as the multi-threaded one (`task::Inject`).

`task::Inject` includes an optimization where it does not need to
acquire the mutex if the queue is empty.
2023-05-21 00:08:00 +00:00
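
The empty-queue optimization mentioned here can be sketched as an atomic length that is checked before taking the lock. A simplified illustration, not the real `task::Inject` (field and method names are assumptions):

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

/// Simplified sketch of a shared injection queue with a lock-free empty check.
struct InjectSketch<T> {
    len: AtomicUsize,
    queue: Mutex<VecDeque<T>>,
}

impl<T> InjectSketch<T> {
    fn pop(&self) -> Option<T> {
        // Fast path: if the queue is observed empty, skip the mutex entirely.
        if self.len.load(Ordering::Acquire) == 0 {
            return None;
        }
        let mut queue = self.queue.lock().unwrap();
        let task = queue.pop_front();
        if task.is_some() {
            self.len.fetch_sub(1, Ordering::Release);
        }
        task
    }

    fn push(&self, task: T) {
        let mut queue = self.queue.lock().unwrap();
        queue.push_back(task);
        // Length is updated while holding the lock so pop's fast path never
        // observes a stale non-zero count for an empty queue.
        self.len.fetch_add(1, Ordering::Release);
    }
}
```
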
Tymoteusz Wiśniewski
db543639e1
sync: reduce contention in Notify (#5503) 2023-04-19 13:07:10 +02:00
Carl Lerche
ee1c940709
time: Improve Instant::now() perf with test-util (#5513)
The test-util feature flag is only intended to be used with tests.
However, it is possible to enable it in release mode accidentally. This
patch reduces the overhead of `Instant::now()` when the `test-util`
feature flag is enabled but `time::pause()` is not called.

The optimization is implemented by adding a static atomic flag that
tracks if `time::pause()` has ever been called. In `Instant::now()`, the
atomic flag is first checked before the thread-local and mutex are
accessed.
2023-02-27 10:21:42 -08:00
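
A minimal sketch of that fast path, assuming a hypothetical static flag and helper names (not Tokio's internals):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Instant;

/// Hypothetical flag: set once, the first time `time::pause()` is called.
static CLOCK_EVER_PAUSED: AtomicBool = AtomicBool::new(false);

fn now() -> Instant {
    // Fast path: checked before the thread-local / mutex-protected test clock.
    if !CLOCK_EVER_PAUSED.load(Ordering::Acquire) {
        // `time::pause()` has never been called, so the plain system clock is
        // correct and no locking is needed.
        return Instant::now();
    }
    paused_now()
}

fn paused_now() -> Instant {
    // Placeholder for the slow path that would consult the paused test clock.
    Instant::now()
}
```
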
tijsvd
d7abdbb315
benches: mutex contention in watch::Receiver bench (#5472) 2023-02-18 14:51:08 +00:00
Christopher Hunt
e106c4d32b
benches: benchmark for things in block_on (#5440)
This additional benchmark exercises a common request/reply pattern: an MPSC channel carries requests, and each request includes a oneshot channel for the reply. When run on the current-thread runtime, the bench is 17 times faster on my machine than with the multi-threaded runtime and one worker thread. Not only that, but if I increase the number of worker threads to 6, performance degrades further.

Does this suggest a scheduling problem with the multi-threaded runtime?

No matter what, hopefully the benchmarks are a useful addition.
2023-02-14 23:05:10 +00:00
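
For reference, the request/reply pattern the benchmark exercises looks roughly like this (a minimal sketch using the public mpsc and oneshot APIs, not the benchmark code itself):

```rust
use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    // Requests carry a oneshot sender so the "server" can reply directly.
    let (tx, mut rx) = mpsc::channel::<(u64, oneshot::Sender<u64>)>(100);

    // Server task: answer each request on its oneshot channel.
    tokio::spawn(async move {
        while let Some((req, reply)) = rx.recv().await {
            let _ = reply.send(req + 1);
        }
    });

    // Client: send a request, then wait for the reply.
    let (reply_tx, reply_rx) = oneshot::channel();
    tx.send((41, reply_tx)).await.unwrap();
    assert_eq!(reply_rx.await.unwrap(), 42);
}
```
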
Simon Farnsworth
6bdcb813c6
io: make copy continue filling the buffer when writer stalls (#5066) 2022-10-03 12:58:21 +02:00
xxchan
ff6fbc327d
sync: add #[must_use] to lock guards (#4886) 2022-08-09 13:44:29 +02:00
Toby Lawrence
4b6bb1d9a7
chore(util): start v0.7 release cycle (#4313)
* chore(util): start v0.7 release cycle

Signed-off-by: Toby Lawrence <toby@nuclearfurnace.com>
2021-12-10 13:16:17 -05:00
Taiki Endo
fe770dc509
chore: fix newly added warnings (#4253) 2021-11-22 18:40:57 +09:00
Alice Ryhl
51fad066e2
bench: update spawn benchmarks (#3927) 2021-07-07 10:52:53 +02:00
Taiki Endo
08ed41f339
chore: fix typos (#3907) 2021-07-01 02:06:56 +09:00
Taiki Endo
17c7ce616c
benches: fix build error (#3769) 2021-05-09 22:26:20 +09:00
Stefan Sydow
177522cd43
benchmark: add file reading benchmarks (#3013) 2021-05-05 21:49:00 +02:00
Lucio Franco
8efa62013b
Move stream items into tokio-stream (#3277)
This change removes all references to `Stream` from
within the `tokio` crate and moves them into a new
`tokio-stream` crate. Most types have had their
`impl Stream` removed as well in-favor of their
inherent methods.

Closes #2870
2020-12-15 20:24:38 -08:00
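
After this change, code that previously relied on `impl Stream` on Tokio types goes through `tokio-stream` instead; for example, an `mpsc::Receiver` can be wrapped in `ReceiverStream`. A small sketch, assuming tokio-stream 0.1's wrapper types:

```rust
use tokio_stream::{wrappers::ReceiverStream, StreamExt};

#[tokio::main]
async fn main() {
    let (tx, rx) = tokio::sync::mpsc::channel(8);

    tokio::spawn(async move {
        for i in 0..3 {
            tx.send(i).await.unwrap();
        }
    });

    // `Receiver` no longer implements `Stream` itself; the wrapper type from
    // `tokio-stream` provides the `Stream` impl instead.
    let mut stream = ReceiverStream::new(rx);
    while let Some(v) = stream.next().await {
        println!("{v}");
    }
}
```
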
Carl Lerche
473ddaa277
chore: prepare for Tokio 1.0 work (#3238) 2020-12-09 09:42:05 -08:00
Carl Lerche
97c2c4203c
chore: automate running benchmarks (#3140)
Uses Github actions to run benchmarks.
2020-11-13 19:30:52 -08:00
Lucio Franco
07802b2c84
rt: worker_threads must be non-zero (#2947)
Co-authored-by: Alice Ryhl <alice@ryhl.io>
2020-10-12 15:15:40 -04:00
Lucio Franco
8880222036
rt: Remove threaded_scheduler() and basic_scheduler() (#2876)
Co-authored-by: Alice Ryhl <alice@ryhl.io>
Co-authored-by: Carl Lerche <me@carllerche.com>
2020-10-12 13:44:54 -04:00
Mikail Bagishov
99d4061203
bench: fix unused_mut lint in benches (#2889) 2020-09-27 11:07:55 +02:00
Ivan Petkov
7ae5b7bd4f
signal: move driver to runtime thread (#2835)
Refactors the signal infrastructure to move the driver to the runtime
thread. This follows the model put forth by the I/O driver and time
driver.
2020-09-22 15:40:44 -07:00
Lucio Franco
d600ab9a8f
rt: Refactor Runtime::block_on to take &self (#2782)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2020-08-27 20:05:48 -04:00
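
A small usage sketch of the changed signature: because `block_on` now takes `&self`, a runtime can be driven repeatedly through a shared reference (illustrative only):

```rust
use tokio::runtime::Runtime;

fn main() {
    let rt = Runtime::new().unwrap();

    // `block_on(&self)`: the runtime does not need to be mutable or consumed,
    // so it can be shared, e.g. behind an Arc or a plain reference.
    let answer = rt.block_on(async { 1 + 1 });
    assert_eq!(answer, 2);

    // The same runtime can be used again.
    rt.block_on(async {
        tokio::task::yield_now().await;
    });
}
```
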
Carl Lerche
6ccefb77e2
chore: prepare for v0.3 breaking changes (#2747)
Bug fixes will be applied to the v0.2.x branch.
2020-08-07 20:27:53 -07:00
Eliza Weisman
acf8a7da7a
sync: new internal semaphore based on intrusive lists (#2325)
## Motivation

Many of Tokio's synchronization primitives (`RwLock`, `Mutex`,
`Semaphore`, and the bounded MPSC channel) are based on the internal
semaphore implementation, called `semaphore_ll`. This semaphore type
provides a lower-level internal API for the semaphore implementation
than the public `Semaphore` type, and supports "batch" operations, where
waiters may acquire more than one permit at a time, and batches of
permits may be released back to the semaphore.

Currently, `semaphore_ll` uses an atomic singly-linked list for the
waiter queue. The linked list implementation is specific to the
semaphore. This implementation therefore requires a heap allocation for
every waiter in the queue. These allocations are owned by the semaphore,
rather than by the task awaiting permits from the semaphore. Critically,
they are only _deallocated_ when permits are released back to the
semaphore, at which point it dequeues as many waiters from the front of
the queue as can be satisfied with the released permits. If a task
attempts to acquire permits from the semaphore and is cancelled (such as
by timing out), their waiter nodes remain in the list until they are
dequeued while releasing permits. In cases where large numbers of tasks
are cancelled while waiting for permits, this results in extremely high
memory use for the semaphore (see #2237).

## Solution

@Matthias247 has proposed that Tokio adopt the approach used in his
`futures-intrusive` crate: using an _intrusive_ linked list to store the
wakers of tasks waiting on a synchronization primitive. In an intrusive
list, each list node is stored as part of the entry that node
represents, rather than in a heap allocation that owns the entry.
Because futures must be pinned in order to be polled, the necessary
invariant of such a list --- that entries may not move while in the list
--- may be upheld by making the waiter node `!Unpin`. In this approach,
the waiter node can be stored inline in the future, rather than
requiring a separate heap allocation, and cancelled futures may remove
their nodes from the list.

This branch adds a new semaphore implementation that uses the intrusive
list added to Tokio in #2210. The implementation is essentially a hybrid
of the old `semaphore_ll` and the semaphore used in `futures-intrusive`:
while a `Mutex` around the wait list is necessary, since the intrusive
list is not thread-safe, the permit state is stored outside of the mutex
and updated atomically. 

The mutex is acquired only when accessing the wait list — if a task 
can acquire sufficient permits without waiting, it does not need to
acquire the lock. When releasing permits, we iterate over the wait
list from the end of the queue until we run out of permits to release,
and split off all the nodes that received enough permits to wake up
into a separate list. Then, we can drain the new list and notify those
wakers *after* releasing the lock. Because the split operation only
modifies the pointers on the head node of the split-off list and the
new tail node of the old list, it is O(1) and does not require an
allocation to return a variable number of waiters to notify.


Because of the intrusive list invariants, the API provided by the new
`batch_semaphore` is somewhat different than that of `semaphore_ll`. In
particular, the `Permit` type has been removed. This type was primarily
intended to allow the reuse of a wait list node allocated on the heap.
Since the intrusive list means we can avoid heap-allocating waiters,
this is no longer necessary. Instead, acquiring permits is done by
polling an `Acquire` future returned by the `Semaphore` type. The use of
a future here ensures that the waiter node is always pinned while
waiting to acquire permits, and that a reference to the semaphore is
available to remove the waiter if the future is cancelled.
Unfortunately, the current implementation of the bounded MPSC requires a
`poll_acquire` operation, and has methods that call it while outside of
a pinned context. Therefore, I've left the old `semaphore_ll`
implementation in place to be used by the bounded MPSC, and updated the
`Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation.
Hopefully, a subsequent change can update the bounded MPSC to use the
new semaphore as well.

Fixes #2237

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2020-03-23 13:45:48 -07:00
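
The "permit state outside the mutex" idea can be sketched as a lock-free fast path that only falls back to the wait-list lock when permits are insufficient. The type and method names below are hypothetical, not the real `batch_semaphore`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Illustrative fast path only: permits live in an atomic outside the mutex,
/// so an uncontended acquire never touches the wait-list lock.
struct PermitState {
    permits: AtomicUsize,
}

impl PermitState {
    /// Try to take `n` permits without locking. Returns false if the caller
    /// must enqueue itself on the (mutex-protected) intrusive wait list.
    fn try_acquire(&self, n: usize) -> bool {
        let mut cur = self.permits.load(Ordering::Acquire);
        loop {
            if cur < n {
                return false; // not enough permits: fall back to the wait list
            }
            match self.permits.compare_exchange_weak(
                cur,
                cur - n,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return true,
                Err(actual) => cur = actual,
            }
        }
    }
}
```
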
Carl Lerche
a78b1c65cc
rt: cleanup and simplify scheduler (scheduler v2.5) (#2273)
A refactor of the scheduler internals focusing on simplifying and
reducing unsafety. There are no fundamental logic changes.

* The state transitions of the core task component are refined and
reduced.
* `basic_scheduler` has most unsafety removed.
* `local_set` has most unsafety removed.
* `threaded_scheduler` limits most unsafety to its queue implementation.
2020-03-05 10:31:37 -08:00
Lucio Franco
4a24c7063b
sync: add mpsc benchmark (#2166) 2020-01-27 20:48:35 -08:00
Carl Lerche
50b91c0247
chore: move benches to separate crate (#2028)
This allows the `benches` crate to depend on `tokio` with all feature
flags. This is a similar strategy used for `examples`.
2019-12-24 20:53:20 -08:00
Carl Lerche
cb4aea394e
Update Tokio to Rust 2018 (#1082) 2019-05-14 10:27:36 -07:00
Carl Lerche
80162306e7
chore: apply rustfmt to all crates (#917) 2019-02-21 11:56:15 -08:00
Carl Lerche
ab07733d66
Deprecate executor re-exports (#412) 2018-06-12 14:41:12 -07:00
Carl Lerche
1f91a890b4
Fix benches (#188)
Some of the benchmarks were broken and/or using deprecated APIs. This
patch updates the benches and requires them all to compile without
warnings in order to pass CI.
2018-03-06 14:40:09 -08:00
Roman
8605d5d243
Make benches compilable again (#133) 2018-02-13 10:02:06 -08:00
Alex Crichton
108e1a2c1a
Blanket rename Core to Reactor
This commit uses a script to rename `Core` to `Reactor` all at once, notably:

    find . -name '*.rs' | xargs sed -i 's/\bCore\b/Reactor/g'
2017-12-05 09:02:07 -08:00