A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The sync implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various sync-related
feature flags.
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The executor implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various executor-related
feature flags.
## Motivation
The `tokio_net::driver` module currently stores the state associated
with scheduled IO resources in a `Slab` implementation from the `slab`
crate. Because inserting items into and removing items from `slab::Slab`
requires mutable access, the slab must be placed within a `RwLock`. This
has the potential to be a performance bottleneck, especially in the context of
the work-stealing scheduler, where tasks and the reactor are often located on
the same thread.
`tokio-net` currently reimplements the `ShardedRwLock` type from
`crossbeam` on top of `parking_lot`'s `RwLock` in an attempt to squeeze
as much performance as possible out of the read-write lock around the
slab. This introduces several dependencies that are not used elsewhere.
## Solution
This branch replaces the `RwLock<Slab>` with a lock-free sharded slab
implementation.
The sharded slab is based on the concept of _free list sharding_
described by Leijen, Zorn, and de Moura in [_Mimalloc: Free List
Sharding in Action_][mimalloc], which describes the implementation of a
concurrent memory allocator. In this approach, the slab is sharded so
that each thread has its own thread-local list of slab _pages_. Objects
are always inserted into the local slab of the thread where the
insertion is performed. Therefore, the insert operation need not be
synchronized.
However, since objects can be _removed_ from the slab by threads other
than the one on which they were inserted, removal operations can still
occur concurrently. Therefore, Leijen et al. introduce a concept of
_local_ and _global_ free lists. When an object is removed on the same
thread it was originally inserted on, it is placed on the local free
list; if it is removed on another thread, it goes on the global free
list for the heap of the thread from which it originated. To find a free
slot to insert into, the local free list is used first; if it is empty,
the entire global free list is popped onto the local free list. Since
the local free list is only ever accessed by the thread it belongs to,
it does not require synchronization at all, and because the global free
list is popped from infrequently, the cost of synchronization has a
reduced impact. The majority of insertions can occur without any
synchronization at all, and removals only require synchronization when
an object has left its parent thread.
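The routing between the two free lists can be sketched roughly as follows. This is an illustrative, single-shard simplification with made-up names; the real slab is sharded per thread, is lock-free, and does not require `&mut` access for remote removals:

```rust
// Illustrative sketch only: the real slab is lock-free and sharded per thread.
use std::sync::Mutex;

struct Shard<T> {
    slots: Vec<Option<T>>,
    local_free: Vec<usize>,         // touched only by the owning thread
    global_free: Mutex<Vec<usize>>, // filled by removals from other threads
}

impl<T> Shard<T> {
    fn insert(&mut self, value: T) -> usize {
        if self.local_free.is_empty() {
            // Local free list is empty: pop the entire global free list onto it.
            let mut global = self.global_free.lock().unwrap();
            self.local_free.append(&mut global);
        }
        let idx = match self.local_free.pop() {
            Some(idx) => idx,
            None => {
                // No free slots anywhere: grow the shard.
                self.slots.push(None);
                self.slots.len() - 1
            }
        };
        self.slots[idx] = Some(value);
        idx
    }

    fn remove(&mut self, idx: usize, on_owning_thread: bool) -> Option<T> {
        let value = self.slots[idx].take();
        if on_owning_thread {
            // Same-thread removal: the local free list needs no synchronization.
            self.local_free.push(idx);
        } else {
            // Remote removal: synchronize only on this shard's global free list.
            self.global_free.lock().unwrap().push(idx);
        }
        value
    }
}
```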
The sharded slab was initially implemented in a separate crate (soon to
be released), vendored in-tree to decrease `tokio-net`'s dependencies.
Some code from the original implementation was removed or simplified,
since it is only necessary to support `tokio-net`'s use case, rather
than to provide a fully generic implementation.
[mimalloc]: https://www.microsoft.com/en-us/research/uploads/prod/2019/06/mimalloc-tr-v1.pdf
## Performance
These graphs were produced by out-of-tree `criterion` benchmarks of the
sharded slab implementation.
The first shows the results of a benchmark where an increasing number of
items are inserted into and then removed from a slab concurrently by five
threads. It compares the performance of the sharded slab implementation
with a `RwLock<slab::Slab>`:
<img width="1124" alt="Screen Shot 2019-10-01 at 5 09 49 PM" src="https://user-images.githubusercontent.com/2796466/66078398-cd6c9f80-e516-11e9-9923-0ed6292e8498.png">
The second graph shows the results of a benchmark where an increasing
number of items are inserted and then removed by a _single_ thread. It
compares the performance of the sharded slab implementation with an
`RwLock<slab::Slab>` and a `mut slab::Slab`.
<img width="925" alt="Screen Shot 2019-10-01 at 5 13 45 PM" src="https://user-images.githubusercontent.com/2796466/66078469-f0974f00-e516-11e9-95b5-f65f0aa7e494.png">
Note that while the `mut slab::Slab` (i.e. no read-write lock) is
(unsurprisingly) faster than the sharded slab in the single-threaded
benchmark, the sharded slab outperforms the uncontended
`RwLock<slab::Slab>`. This case, where the lock is uncontended and only
accessed from a single thread, represents the best case for the current
use of `slab` in `tokio-net`, since the lock cannot be conditionally
removed in the single-threaded case.
These benchmarks demonstrate that, while the sharded approach introduces
a small constant-factor overhead, it offers significantly better
performance across concurrent accesses.
## Notes
This branch removes the following dependencies from `tokio-net`:
- `parking_lot`
- `num_cpus`
- `crossbeam-utils`
- `slab`
This branch adds the following dev-dependencies:
- `proptest`
- `loom`
Note that these dev dependencies were used to implement tests for the
sharded-slab crate out-of-tree, and were necessary in order to vendor
the existing tests. Alternatively, since the implementation is tested
externally, we _could_ remove these tests in order to avoid picking up
dev-dependencies. However, removing them would mean we must take care that
`tokio-net`'s vendored implementation does not diverge significantly from
upstream's, since it would then be missing the majority of its tests.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This adds an extra spawned task during the thread-pool shutdown loom
test. This results in additional cases being tested, primarily tasks
being stolen.
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The `io` implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various io-related
feature flags.
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The `net` implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various net related
feature flags.
Previously, support for `blocking` was done through a static `POOL` that
would spawn threads on demand. While this made the pool accessible at
all times, it made it hard to configure, and it was impossible to keep
multiple blocking pools.
This patch changes `blocking` to instead use a "default" global like the
ones used for timers, executors, and the like. There is now
`blocking::with_pool`, which is used by both thread-pool workers and the
current-thread runtime to ensure that a pool is available to tasks.
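A minimal sketch of the "default global" pattern described here, using illustrative names (`Pool`, `with_pool`) rather than the crate's actual internals:

```rust
// Sketch with illustrative names; not Tokio's actual internals.
use std::cell::Cell;

struct Pool {
    // worker threads, shared task queue, shutdown signal, ...
}

thread_local! {
    // The pool currently installed as this thread's default, if any.
    static CURRENT_POOL: Cell<Option<*const Pool>> = Cell::new(None);
}

/// Makes `pool` the default blocking pool for the duration of `f`.
/// (Panic safety is elided in this sketch.)
fn with_pool<F, R>(pool: &Pool, f: F) -> R
where
    F: FnOnce() -> R,
{
    CURRENT_POOL.with(|cell| {
        let prev = cell.replace(Some(pool as *const Pool));
        let ret = f();
        cell.set(prev);
        ret
    })
}
```

Both thread-pool workers and the current-thread runtime would wrap task execution in such a call, so that `blocking` always finds a pool.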
This patch also changes `ThreadPool` to spawn its worker threads on the
blocking pool rather than as free-standing threads. This is in
preparation for the coming in-place blocking work.
One downside of this change is that thread names are no longer
"semantic". All threads are named by the pool name, and individual
threads are not (currently) given names with numerical suffixes like
before.
Historically, logging has been added haphazardly. Here, we entirely
remove logging as none of it is particularly useful. In the future, we
will add tracing back in order to expose useful data to the user of
Tokio.
Related to #1318, Tokio APIs that are "less stable" are moved into a new
`tokio-util` crate. This crate will mirror `tokio` and provide
additional APIs that may require a greater rate of breaking changes.
As the examples require `tokio-util`, they are moved into a separate
crate (`examples`). This has the added advantage of keeping example-only
dependencies out of the `tokio` crate.
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The `timer` implementation is now provided by the main `tokio` crate.
The `timer` functionality may still be excluded from the build by
skipping the `timer` feature flag.
## Motivation
The `tokio_net` resources can be created outside of a runtime due to how tokio
has been used with futures to date. For example, this allows a `TcpStream` to be
created, and later passed into a runtime:
```rust
let stream = TcpStream::connect(...).and_then(|socket| {
    // do something
});

tokio::run(stream);
```
In order to support this, the reactor was lazily bound to the resource on the
first call to `poll_read_ready`/`poll_write_ready`. Supporting this required a
significant amount of additional complexity in the binding logic.
With the tokio 0.2 common case, this is no longer necessary and can be removed.
All resources are expected to be created from within a runtime and should panic
if they are not.
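For contrast, a minimal sketch of the pattern this change assumes, where the resource is created from inside the runtime (written against the eventual tokio 0.2 surface, with a placeholder address):

```rust
// Sketch of the intended usage pattern (tokio 0.2-style API).
#[tokio::main]
async fn main() -> std::io::Result<()> {
    // The reactor is already set because we are inside the runtime.
    let socket = tokio::net::TcpStream::connect("127.0.0.1:8080").await?;
    // do something with `socket`
    drop(socket);
    Ok(())
}
```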
Closes #1168
## Solution
The `tokio_net` crate now assumes a `CURRENT_REACTOR` is set on the worker
thread creating a resource; this holds whenever the resource is created from
within a tokio runtime. If there is no current reactor, the application will
panic with a "no current reactor" message.
With this assumption, all of the unsafe code and atomics have been removed from
`tokio_net::driver::Registration`, as they are no longer needed.
There is no longer any reason to pass handles into the family of `from_std` methods on `net` resources. `Handle::current` therefore has a more restricted, private use: it is only called from `driver::Registration::new`.
Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The `fs` implementation is now provided by the main `tokio` crate. The
`fs` functionality may still be excluded from the build by skipping the
`fs` feature flag.
This patch is a ground-up rewrite of the existing work-stealing thread
pool. The goal is to reduce overhead while simplifying code when
possible.
At a high level, the following architectural changes were made:
- The local run queues were switched for bounded circular-buffer queues.
- Cross-thread synchronization was reduced.
- Task constructs were refactored to use a single allocation and to always include a join handle (#887).
- The logic around putting workers to sleep and waking them up was simplified.
**Local run queues**
Move away from crossbeam's implementation of the Chase-Lev deque. This
implementation included unnecessary overhead as it supported
capabilities that are not needed for the work-stealing thread pool.
Instead, a fixed-size circular buffer is used for the local queue. When
the local queue is full, half of the tasks contained in it are moved to
the global run queue.
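The overflow policy can be sketched as follows (illustrative types only; the actual local queue is a lock-free circular buffer accessed concurrently by stealers):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Stand-in for the fixed-size circular buffer backing a worker's local queue.
struct LocalQueue<T> {
    buffer: VecDeque<T>,
    capacity: usize,
}

impl<T> LocalQueue<T> {
    fn push(&mut self, task: T, global: &Mutex<VecDeque<T>>) {
        if self.buffer.len() == self.capacity {
            // Local queue is full: move half of its tasks to the global queue.
            let mut global = global.lock().unwrap();
            for _ in 0..self.capacity / 2 {
                if let Some(t) = self.buffer.pop_front() {
                    global.push_back(t);
                }
            }
        }
        self.buffer.push_back(task);
    }
}
```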
**Reduce cross-thread synchronization**
This is done via many small improvements. Primarily, an upper bound is
placed on the number of concurrent stealers. Limiting the number of
stealers results in lower contention. Secondly, the rate at which
workers are notified and woken up is throttled. This also reduces
contention by preventing many threads from racing to steal work.
**Refactor task structure**
Now that Tokio is able to target a Rust version that supports
`std::alloc` as well as `std::task`, the pool can optimize how the task
structure is laid out. A single allocation per task is now required, and a
join handle is always provided, enabling the spawner to retrieve the result
of the task (#887).
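A rough sketch of what the single-allocation layout means (names are illustrative, not the pool's actual types):

```rust
// Illustrative layout only.
use std::future::Future;

// One heap allocation holds the task's state word, the future itself, and,
// once it completes, the future's output, which the join handle later reads.
enum Stage<F: Future> {
    Running(F),
    Finished(F::Output),
    Consumed,
}

struct Cell<F: Future> {
    state: usize,    // stands in for the atomic state/reference-count word
    stage: Stage<F>, // the future and its output share the same allocation
}

// Spawning performs exactly one allocation; the scheduler's task pointer and
// the spawner's join handle both refer to it.
fn allocate<F: Future>(future: F) -> Box<Cell<F>> {
    Box::new(Cell {
        state: 0,
        stage: Stage::Running(future),
    })
}
```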
**Simplifying logic**
When possible, complexity is reduced in the implementation. This is done
by using locks and other simpler constructs in cold paths. The set of
sleeping workers is now represented as a `Mutex<VecDeque<usize>>`.
Instead of optimizing access to this structure, we reduce the amount the
pool must access this structure.
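Conceptually, the idle set is now just the following (illustrative names):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

struct Idle {
    // Indices of workers that are currently sleeping. Only touched on cold
    // paths: a worker going to sleep, or new work waking one up.
    sleeping: Mutex<VecDeque<usize>>,
}

impl Idle {
    fn transition_to_sleeping(&self, worker: usize) {
        self.sleeping.lock().unwrap().push_back(worker);
    }

    fn worker_to_wake(&self) -> Option<usize> {
        self.sleeping.lock().unwrap().pop_front()
    }
}
```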
Secondly, we have (temporarily) removed `threadpool::blocking`. This
capability will come back later, but the original implementation was far
more complicated than necessary.
**Results**
The thread pool benchmarks have improved significantly:
Old thread pool:
```
test chained_spawn ... bench: 2,019,796 ns/iter (+/- 302,168)
test ping_pong ... bench: 1,279,948 ns/iter (+/- 154,365)
test spawn_many ... bench: 10,283,608 ns/iter (+/- 1,284,275)
test yield_many ... bench: 21,450,748 ns/iter (+/- 1,201,337)
```
New thread pool:
```
test chained_spawn ... bench: 147,943 ns/iter (+/- 6,673)
test ping_pong ... bench: 537,744 ns/iter (+/- 20,928)
test spawn_many ... bench: 7,454,898 ns/iter (+/- 283,449)
test yield_many ... bench: 16,771,113 ns/iter (+/- 733,424)
```
Real-world benchmarks improve significantly as well. This tests the hyper
"hello world" server using `wrk -t1 -c50 -d10`:
Old scheduler:
```
Running 10s test @ http://127.0.0.1:3000
1 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 371.53us 99.05us 1.97ms 60.53%
Req/Sec 114.61k 8.45k 133.85k 67.00%
1139307 requests in 10.00s, 95.61MB read
Requests/sec: 113923.19
Transfer/sec: 9.56MB
```
New scheduler:
```
Running 10s test @ http://127.0.0.1:3000
1 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 275.05us 69.81us 1.09ms 73.57%
Req/Sec 153.17k 10.68k 171.51k 71.00%
1522671 requests in 10.00s, 127.79MB read
Requests/sec: 152258.70
Transfer/sec: 12.78MB
```
As discussed in #1620, the attribute names for `#[tokio::main]` and
`#[tokio::test]` aren't great. Specifically, they both use
`single_thread` and `multi_thread`, as opposed to names that match the
runtime names: `current_thread` and `threadpool`. This PR changes the
former to the latter.
Fixes #1627.
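With the new names, the attributes read as follows (a sketch using the bare-identifier argument syntax of the time):

```rust
// Sketch of the renamed attribute arguments.
#[tokio::main(threadpool)]
async fn main() {
    // runs on the work-stealing thread pool runtime
}

#[tokio::test(current_thread)]
async fn my_test() {
    // runs on the current-thread runtime
}
```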
`is_terminated` must return `false` until the future has been polled at least once, to make sure that the associated block in `select!` is called even after the delay has elapsed.
You use `Delay` in a `select!` by [fusing it](https://docs.rs/futures-preview/0.3.0-alpha.19/futures/future/trait.FutureExt.html#method.fuse):
```rust
let delay = tokio::timer::delay(/* ... */);
let delay = delay.fuse();
select! {
_ = delay => {
/* work here */
}
}
```
When polling the task, the current waker is saved to the oneshot state.
When the handle is migrated to a new task and polled again, the waker
must be swapped from the old waker to the new waker. In some cases, there
is a potential for the old waker to leak.
This bug was caught by loom with the recently added memory leak
detection.
Use a counter to count notifications. This protects against spurious
wakeups from pthreads and other libraries. The state transitions now
track `num_idle` precisely.
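A minimal sketch of the counting pattern (not the actual worker state machine):

```rust
use std::sync::{Condvar, Mutex};

struct Notifier {
    // Number of notifications that have not yet been consumed by a waiter.
    notified: Mutex<usize>,
    condvar: Condvar,
}

impl Notifier {
    fn notify(&self) {
        *self.notified.lock().unwrap() += 1;
        self.condvar.notify_one();
    }

    fn wait(&self) {
        let mut notified = self.notified.lock().unwrap();
        // A spurious wakeup leaves the counter at zero, so keep waiting.
        while *notified == 0 {
            notified = self.condvar.wait(notified).unwrap();
        }
        *notified -= 1;
    }
}
```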
The standard library's `io` module has small utilities such as `repeat`,
`empty`, and `sink`, which return `Read` and `Write` implementations.
These can come in handy in some circumstances. `tokio::io` has no
equivalents that implement `AsyncRead`/`AsyncWrite`.
This commit adds `repeat`, `empty`, and `sink` helpers to `tokio::io`.
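A brief usage sketch (written against the eventual tokio 0.2 API surface):

```rust
use tokio::io::{self, AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // `repeat` yields an endless stream of the given byte.
    let mut buf = [0u8; 4];
    io::repeat(0xAA).read_exact(&mut buf).await?;
    assert_eq!(buf, [0xAAu8; 4]);

    // `empty` is always at EOF; `sink` discards everything written to it.
    let mut out = Vec::new();
    io::empty().read_to_end(&mut out).await?;
    assert!(out.is_empty());
    io::sink().write_all(&buf).await?;

    Ok(())
}
```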
In the past, it was not possible to choose to use the multi-threaded
tokio `Runtime` in tests, which meant that any test that transitively
used `executor::threadpool::blocking` would fail with
```
'blocking' annotation used from outside the context of a thread pool
```
This patch adds a runtime annotation attribute to `#[tokio::test]` just
like `#[tokio::main]` has, which lets users opt in to the threadpool
runtime over `current_thread` (the default).
The algorithm backing `AtomicWaker` effectively uses a spin lock backed
by notifying & yielding the current task. This adds a `spin_lock_hint`
annotation to cover this case.
While, in practice, the omission of `spin_lock_hint` would not cause
problems, there are platforms that do not handle spin locks very well
and could enter a deadlock in pathological cases.
- Adds a minimal `rt-current-thread` optional feature that exports
`tokio::runtime::current_thread`.
- Adds a `macros` optional feature to enable the `#[tokio::main]` and
`#[tokio::test]` attributes.
- Adjusts the `#[tokio::main]` macro to select a runtime "automatically" if
a specific strategy isn't specified. This allows using the macro with only
the `rt-current-thread` feature.
* Removes most pin-projection related unsafe code.
* Removes manual `Unpin` implementations. As references always implement
`Unpin`, there is no need to implement `Unpin` manually.
* Adds tests to check that `Unpin` requirements do not change accidentally,
since changing an `Unpin` requirement would be a breaking change (see the
sketch below).
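One common way to lock such a requirement in is a compile-time assertion inside a test; the helper and the asserted type below are placeholders for illustration:

```rust
// Fails to compile if the asserted type stops being `Unpin`.
fn assert_unpin<T: Unpin>() {}

#[test]
fn wrapper_is_unpin() {
    // Placeholder type; the real tests assert the crate's own wrapper types.
    assert_unpin::<std::io::Cursor<Vec<u8>>>();
}
```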