General rustdoc improvements (#450)

* Normalize links to docs.rs/CRATE/M.N/...

docs.rs is smart enough to show docs for the latest M.N.P release when
M.N is used in the link. For example:

  https://docs.rs/mio/0.6/mio/struct.Poll.html

...will show the docs for mio 0.6.14 and any later 0.6.x release. While the
`M.N.*` (asterisk) syntax also works, `M.N` is the more common usage, so
standardize a few existing links to that format.
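
For example, this commit's mio link change below normalizes a rustdoc link
reference from a pinned patch release:

  //! [mio]: https://docs.rs/mio/0.6.13/mio/struct.Poll.html

..to the major.minor form, which docs.rs resolves to the latest 0.6.x:

  //! [mio]: https://docs.rs/mio/0.6/mio/struct.Poll.html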

* Fix missing or malformed rustdoc links

* executor lib rustdoc minor format change

* Promote tokio-threadpool crate level comments to rustdoc

* Replace hidden tokio::executor::thread_pool docs with deprecation note

* Fix typo/simplify util module rustdoc

* Reuse some tokio::executor::thread_pool rustdoc for the crate

Relates to #421
David Kellum 2018-07-22 13:35:30 -07:00 committed by Carl Lerche
parent c17ecb53e7
commit 491f15827b
13 changed files with 152 additions and 194 deletions

View File

@ -53,7 +53,7 @@ These components provide the runtime components necessary for building
an asynchronous application.
[net]: https://docs.rs/tokio/0.1/tokio/net/index.html
[reactor]: https://docs.rs/tokio/0.1.1/tokio/reactor/index.html
[reactor]: https://docs.rs/tokio/0.1/tokio/reactor/index.html
[scheduler]: https://tokio-rs.github.io/tokio/tokio/runtime/index.html
## Example

View File

@ -44,79 +44,10 @@
pub mod current_thread;
#[deprecated(since = "0.1.8", note = "use tokio-threadpool crate instead")]
#[doc(hidden)]
/// Re-exports of [`tokio-threadpool`], deprecated in favor of the crate.
///
/// [`tokio-threadpool`]: https://docs.rs/tokio-threadpool/0.1
pub mod thread_pool {
//! Maintains a pool of threads across which the set of spawned tasks are
//! executed.
//!
//! [`ThreadPool`] is an executor that uses a thread pool for executing
//! tasks concurrently across multiple cores. It uses a thread pool that is
//! optimized for use cases that involve multiplexing a large number of
//! independent tasks that perform short(ish) amounts of computation and are
//! mainly waiting on I/O, i.e. the Tokio use case.
//!
//! Usually, users of [`ThreadPool`] will not create pool instances.
//! Instead, they will create a [`Runtime`] instance, which comes with a
//! pre-configured thread pool.
//!
//! At the core, [`ThreadPool`] uses a work-stealing based scheduling
//! strategy. When spawning a task while *external* to the thread pool
//! (i.e., from a thread that is not part of the thread pool), the task is
//! randomly assigned to a worker thread. When spawning a task while
//! *internal* to the thread pool, the task is assigned to the current
//! worker.
//!
//! Each worker maintains its own queue and first focuses on processing all
//! tasks in its queue. When the worker's queue is empty, the worker will
//! attempt to *steal* tasks from other worker queues. This strategy helps
//! ensure that work is evenly distributed across threads while minimizing
//! synchronization between worker threads.
//!
//! # Usage
//!
//! Thread pool instances are created using [`ThreadPool::new`] or
//! [`Builder::new`]. The first option returns a thread pool with default
//! configuration values. The second option allows configuring the thread
//! pool before instantiating it.
//!
//! Once an instance is obtained, futures may be spawned onto it using the
//! [`spawn`] function.
//!
//! A handle to the thread pool is obtained using [`ThreadPool::sender`].
//! This handle is **only** able to spawn futures onto the thread pool. It
//! is unable to affect the lifecycle of the thread pool in any way. This
//! handle can be passed into functions or stored in structs as a way to
//! grant the capability of spawning futures.
//!
//! # Examples
//!
//! ```rust
//! # extern crate tokio;
//! # extern crate futures;
//! # use tokio::executor::thread_pool::ThreadPool;
//! use futures::future::{Future, lazy};
//!
//! # pub fn main() {
//! // Create a thread pool with default configuration values
//! let thread_pool = ThreadPool::new();
//!
//! thread_pool.spawn(lazy(|| {
//! println!("called from a worker thread");
//! Ok(())
//! }));
//!
//! // Gracefully shutdown the threadpool
//! thread_pool.shutdown().wait().unwrap();
//! # }
//! ```
//!
//! [`ThreadPool`]: struct.ThreadPool.html
//! [`ThreadPool::new`]: struct.ThreadPool.html#method.new
//! [`ThreadPool::sender`]: struct.ThreadPool.html#method.sender
//! [`spawn`]: struct.ThreadPool.html#method.spawn
//! [`Builder::new`]: struct.Builder.html#method.new
//! [`Runtime`]: ../../runtime/struct.Runtime.html
pub use tokio_threadpool::{
Builder,
Sender,
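
As an aside, a minimal self-contained sketch of the deprecate-and-re-export
pattern this hunk applies (module and item names here are hypothetical, not
tokio's):

  // Hypothetical sketch: keep the old path compiling while steering users to
  // the new location, mirroring the #[deprecated] + #[doc(hidden)] re-export
  // pattern in the hunk above.
  pub mod new_home {
      pub struct Builder;
  }

  /// Deprecated location; use [`new_home`] instead.
  ///
  /// [`new_home`]: ../new_home/index.html
  #[deprecated(since = "0.2.0", note = "use the `new_home` module instead")]
  #[doc(hidden)]
  pub mod old_home {
      pub use super::new_home::Builder;
  }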

View File

@ -5,7 +5,7 @@
//! provides a few major components:
//!
//! * A multi threaded, work-stealing based task [scheduler][runtime].
//! * A [reactor][reactor] backed by the operating system's event queue (epoll, kqueue,
//! * A [reactor] backed by the operating system's event queue (epoll, kqueue,
//! IOCP, etc...).
//! * Asynchronous [TCP and UDP][net] sockets.
//! * Asynchronous [filesystem][fs] operations.
@ -17,7 +17,7 @@
//! Guide level documentation is found on the [website].
//!
//! [website]: https://tokio.rs/docs/getting-started/hello-world/
//! [futures]: http://docs.rs/futures
//! [futures]: http://docs.rs/futures/0.1
//!
//! # Examples
//!
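
The hunk cuts off at the `# Examples` heading; as a hedged, minimal stand-in
for the kind of program that section shows (tokio 0.1-era API, much simpler
than the crate's full example):

  extern crate futures;
  extern crate tokio;

  use futures::future::lazy;

  fn main() {
      // `tokio::run` starts the reactor, thread pool, and timer, drives the
      // given future to completion, and then shuts everything down.
      tokio::run(lazy(|| {
          println!("running on the tokio runtime");
          Ok(())
      }));
  }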

View File

@ -62,6 +62,9 @@
//! [rt]: struct.Runtime.html
//! [concurrent-rt]: ../struct.Runtime.html
//! [chan]: https://docs.rs/futures/0.1/futures/sync/mpsc/fn.channel.html
//! [reactor]: ../../reactor/struct.Reactor.html
//! [executor]: https://tokio.rs/docs/getting-started/runtime-model/#executors
//! [timer]: ../../timer/index.html
mod builder;
mod runtime;
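
The links above distinguish this runtime from the concurrent [concurrent-rt]
`Runtime`; a hedged sketch of driving the concurrent one directly (tokio
0.1-era API):

  extern crate futures;
  extern crate tokio;

  use futures::future::{lazy, Future};
  use tokio::runtime::Runtime;

  fn main() {
      let mut rt = Runtime::new().expect("failed to start runtime");

      // Spawn a task onto the runtime's reactor / thread pool / timer bundle.
      rt.spawn(lazy(|| {
          println!("task running on the runtime");
          Ok(())
      }));

      // Wait for all spawned tasks to finish, then shut the runtime down.
      rt.shutdown_on_idle().wait().unwrap();
  }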

View File

@ -76,6 +76,9 @@
//! [runtime]: ../runtime/struct.Runtime.html
//! [tokio-timer]: https://docs.rs/tokio-timer
//! [ext]: ../util/trait.FutureExt.html#method.deadline
//! [Deadline]: struct.Deadline.html
//! [Delay]: struct.Delay.html
//! [Interval]: struct.Interval.html
pub use tokio_timer::{
Deadline,
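
A hedged sketch using the re-exported `Delay` (tokio 0.1-era API; `Deadline`
and `Interval` follow the same pattern):

  extern crate futures;
  extern crate tokio;

  use std::time::{Duration, Instant};

  use futures::Future;
  use tokio::timer::Delay;

  fn main() {
      let when = Instant::now() + Duration::from_millis(100);

      let task = Delay::new(when)
          .map(|_| println!("100 ms have elapsed"))
          .map_err(|e| eprintln!("timer error: {:?}", e));

      // `tokio::run` supplies the default timer that `Delay` relies on.
      tokio::run(task);
  }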

View File

@ -1,8 +1,9 @@
//! Utilities for working with Tokio.
//!
//! This module contains utilities that are useful for working with Tokio.
//! Currently, this only includes [`FutureExt`][FutureExt]. However, this will
//! include over time.
//! Currently, this only includes [`FutureExt`], but this may grow over time.
//!
//! [`FutureExt`]: trait.FutureExt.html
mod future;
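
A hedged sketch of using [`FutureExt`], whose `deadline` combinator the timer
docs reference (tokio 0.1-era API):

  extern crate futures;
  extern crate tokio;

  use std::time::{Duration, Instant};

  use futures::future::{lazy, Future};
  use tokio::util::FutureExt;

  fn main() {
      let task = lazy(|| Ok::<_, ()>("finished quickly"))
          // Error out if the wrapped future misses the deadline.
          .deadline(Instant::now() + Duration::from_secs(1))
          .map(|msg| println!("{}", msg))
          .map_err(|_| eprintln!("missed the one-second deadline"));

      tokio::run(task);
  }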

View File

@ -7,8 +7,9 @@
//!
//! The executor is responsible for ensuring that [`Future::poll`] is called
//! whenever the task is notified. Notification happens when the internal
//! state of a task transitions from "not ready" to ready. For example, a socket
//! might have received data and a call to `read` will now be able to succeed.
//! state of a task transitions from *not ready* to *ready*. For example, a
//! socket might have received data and a call to `read` will now be able to
//! succeed.
//!
//! This crate provides traits and utilities that are necessary for building an
//! executor, including:
@ -29,6 +30,7 @@
//! [`enter`]: fn.enter.html
//! [`DefaultExecutor`]: struct.DefaultExecutor.html
//! [`Park`]: park/index.html
//! [`Future::poll`]: https://docs.rs/futures/0.1/futures/future/trait.Future.html#tymethod.poll
#![deny(missing_docs, missing_debug_implementations, warnings)]
#![doc(html_root_url = "https://docs.rs/tokio-executor/0.1.2")]
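
The crate docs describe the executor's job as calling `Future::poll` whenever
a task is notified. A hedged sketch of that contract from the caller's side,
using `tokio::run` and `tokio::spawn`, which delegate to the default executor
(tokio 0.1-era API):

  extern crate futures;
  extern crate tokio;

  use futures::future::lazy;

  fn main() {
      tokio::run(lazy(|| {
          // `tokio::run` installs a default executor, so spawning here hands
          // the task to it; the executor then polls the task when notified.
          tokio::spawn(lazy(|| {
              println!("polled by the default executor");
              Ok(())
          }));
          Ok(())
      }));
  }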

View File

@ -42,7 +42,7 @@
//! [`park_timeout`]: trait.Park.html#tymethod.park_timeout
//! [`unpark`]: trait.Unpark.html#tymethod.unpark
//! [up]: trait.Unpark.html
//! [mio]: https://docs.rs/mio/0.6.13/mio/struct.Poll.html
//! [mio]: https://docs.rs/mio/0.6/mio/struct.Poll.html
use std::marker::PhantomData;
use std::rc::Rc;
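
For readers new to the parking model these links describe, a toy illustration
of park / unpark using only the standard library (this is the concept the
`Park` and `Unpark` traits abstract over, not tokio-executor's API):

  use std::sync::mpsc;
  use std::thread;
  use std::time::Duration;

  fn main() {
      let (tx, rx) = mpsc::channel();

      let worker = thread::spawn(move || {
          // "Park": block until another thread unparks us (or return
          // immediately if an unpark token is already available).
          thread::park();
          println!("worker woke up and found: {:?}", rx.recv().unwrap());
      });

      thread::sleep(Duration::from_millis(50));
      tx.send("new work").unwrap();
      // "Unpark": wake the worker so it can pick up the new work.
      worker.thread().unpark();
      worker.join().unwrap();
  }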

View File

@ -190,7 +190,9 @@ impl File {
///
/// # Panics
///
/// This function will panic if [`shutdown`] has been called.
/// This function will panic if `shutdown` has been called.
///
/// [std]: https://doc.rust-lang.org/std/fs/struct.File.html
pub fn into_std(mut self) -> StdFile {
self.std.take().expect("`File` instance already shutdown")
}
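
A hedged sketch of how `into_std` gets used (tokio-fs 0.1-era API; the
blocking `read_to_string` inside the task is a simplification for brevity):

  extern crate futures;
  extern crate tokio;
  extern crate tokio_fs;

  use std::io::Read;

  use futures::Future;

  fn main() {
      let task = tokio_fs::File::open("Cargo.toml")
          .map(|file| {
              // Hand the file back to blocking std I/O. Per the docs above,
              // this must not be called after `shutdown`.
              let mut std_file = file.into_std();
              let mut contents = String::new();
              std_file.read_to_string(&mut contents).unwrap();
              println!("read {} bytes", contents.len());
          })
          .map_err(|e| eprintln!("open failed: {}", e));

      // The runtime's thread pool provides the blocking context tokio-fs needs.
      tokio::run(task);
  }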

View File

@ -104,7 +104,7 @@ pub mod length_delimited {
//! [`FramedRead`] adapts an [`AsyncRead`] into a `Stream` of [`BytesMut`],
//! such that each yielded [`BytesMut`] value contains the contents of an
//! entire frame. There are many configuration parameters enabling
//! [`FrameRead`] to handle a wide range of protocols. Here are some
//! [`FramedRead`] to handle a wide range of protocols. Here are some
//! examples that will cover the various options at a high level.
//!
//! ## Example 1
@ -370,7 +370,7 @@ pub mod length_delimited {
//! [`AsyncRead`]: ../../trait.AsyncRead.html
//! [`AsyncWrite`]: ../../trait.AsyncWrite.html
//! [`Encoder`]: ../trait.Encoder.html
//! [`BytesMut`]: https://docs.rs/bytes/~0.4/bytes/struct.BytesMut.html
//! [`BytesMut`]: https://docs.rs/bytes/0.4/bytes/struct.BytesMut.html
pub use ::length_delimited::*;
}
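
A hedged sketch of the `Builder` / `FramedRead` usage these docs describe
(tokio-io 0.1-era API; assumes `io::Cursor<Vec<u8>>` implements `AsyncRead`,
as it does in that series):

  extern crate futures;
  extern crate tokio_io;

  use std::io::Cursor;

  use futures::Stream;
  use tokio_io::codec::length_delimited;

  fn main() {
      // One frame on the wire: a 4-byte big-endian length prefix, then payload.
      let mut wire = vec![0, 0, 0, 5];
      wire.extend_from_slice(b"hello");

      let framed = length_delimited::Builder::new()
          .length_field_length(4)
          .new_read(Cursor::new(wire));

      // `wait()` turns the Stream of `BytesMut` frames into a blocking iterator.
      for frame in framed.wait() {
          println!("frame: {:?}", frame.unwrap());
      }
  }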

View File

@ -1,117 +1,81 @@
//! A work-stealing based thread pool for executing futures.
#![doc(html_root_url = "https://docs.rs/tokio-threadpool/0.1.5")]
#![deny(warnings, missing_docs, missing_debug_implementations)]
// The Tokio thread pool is designed to scheduled futures in Tokio based
// applications. The thread pool structure manages two sets of threads:
//
// * Worker threads.
// * Backup threads.
//
// Worker threads are used to schedule futures using a work-stealing strategy.
// Backup threads, on the other hand, are intended only to support the
// `blocking` API. Threads will transition between the two sets.
//
// The advantage of the work-stealing strategy is minimal cross-thread
// coordination. The thread pool attempts to make as much progress as possible
// without communicating across threads.
//
// # Crate layout
//
// The primary type, `Pool`, holds the majority of a thread pool's state,
// including the state for each worker. Each worker's state is maintained in an
// instance of `worker::Entry`.
//
// `Worker` contains the logic that runs on each worker thread. It holds an
// `Arc` to `Pool` and is able to access its state from `Pool`.
//
// `Task` is a harness around an individual future. It manages polling and
// scheduling that future.
//
// # Worker overview
//
// Each worker has two queues: a deque and a mpsc channel. The deque is the
// primary queue for tasks that are scheduled to run on the worker thread. Tasks
// can only be pushed onto the deque by the worker, but other workers may
// "steal" from that deque. The mpsc channel is used to submit futures while
// external to the pool.
//
// As long as the thread pool has not been shutdown, a worker will run in a
// loop. Each loop, it consumes all tasks on its mpsc channel and pushes it onto
// the deque. It then pops tasks off of the deque and executes them.
//
// If a worker has no work, i.e., both queues are empty. It attempts to steal.
// To do this, it randomly scans other workers' deques and tries to pop a task.
// If it finds no work to steal, the thread goes to sleep.
//
// When the worker detects that the pool has been shut down, it exits the loop,
// cleans up its state, and shuts the thread down.
//
// # Thread pool initialization
//
// By default, no threads are spawned on creation. Instead, when new futures are
// spawned, the pool first checks if there are enough active worker threads. If
// not, a new worker thread is spawned.
//
// # Spawning futures
//
// The spawning behavior depends on whether a future was spawned from within a
// worker or thread or if it was spawned from an external handle.
//
// When spawning a future while external to the thread pool, the current
// strategy is to randomly pick a worker to submit the task to. The task is then
// pushed onto that worker's mpsc channel.
//
// When spawning a future while on a worker thread, the task is pushed onto the
// back of the current worker's deque.
//
// # Sleeping workers
//
// Sleeping workers are tracked using a treiber stack [1]. This results in the
// thread that most recently went to sleep getting woken up first. When the pool
// is not under load, this helps threads shutdown faster.
//
// Sleeping is done by using `tokio_executor::Park` implementations. This allows
// the user of the thread pool to customize the work that is performed to sleep.
// This is how injecting timers and other functionality into the thread pool is
// done.
//
// [1]: https://en.wikipedia.org/wiki/Treiber_Stack
//
// # Notifying workers
//
// When there is work to be done, workers must be notified. However, notifying a
// worker requires cross thread coordination. Ideally, a worker would only be
// notified when it is sleeping, but there is no way to know if a worker is
// sleeping without cross thread communication.
//
// The two cases when a worker might need to be notified are:
//
// 1) A task is externally submitted to a worker via the mpsc channel.
// 2) A worker has a back log of work and needs other workers to steal from it.
//
// In the first case, the worker will always be notified. However, it could be
// possible to avoid the notification if the mpsc channel has two or greater
// number of tasks *after* the task is submitted. In this case, we are able to
// assume that the worker has previously been notified.
//
// The second case is trickier. Currently, whenever a worker spawns a new future
// (pushing it onto its deque) and when it pops a future from its mpsc, it tries
// to notify a sleeping worker to wake up and start stealing. This is a lot of
// notification and it **might** be possible to reduce it.
//
// Also, whenever a worker is woken up via a signal and it does find work, it,
// in turn, will try to wake up a new worker.
//
// # `blocking`
//
// The strategy for handling blocking closures is to hand off the worker to a
// new thread. This implies handing off the `deque` and `mpsc`. Once this is
// done, the new thread continues to process the work queue and the original
// thread is able to block. Once it finishes processing the blocking future, the
// thread has no additional work and is inserted into the backup pool. This
// makes it available to other workers that encounter a `blocking` call.
//! A work-stealing based thread pool for executing futures.
//!
//! The Tokio thread pool supports scheduling futures and processing them on
//! multiple CPU cores. It is optimized for the primary Tokio use case of many
//! independent tasks with limited computation and with most tasks waiting on
//! I/O. Usually, users will not create a `ThreadPool` instance directly, but
//! will use one via a [`runtime`].
//!
//! The `ThreadPool` structure manages two sets of threads:
//!
//! * Worker threads.
//! * Backup threads.
//!
//! Worker threads are used to schedule futures using a work-stealing strategy.
//! Backup threads, on the other hand, are intended only to support the
//! `blocking` API. Threads will transition between the two sets.
//!
//! The advantage of the work-stealing strategy is minimal cross-thread
//! coordination. The thread pool attempts to make as much progress as possible
//! without communicating across threads.
//!
//! ## Worker overview
//!
//! Each worker has two queues: a deque and an mpsc channel. The deque is the
//! primary queue for tasks that are scheduled to run on the worker thread. Tasks
//! can only be pushed onto the deque by the worker, but other workers may
//! "steal" from that deque. The mpsc channel is used to submit futures while
//! external to the pool.
//!
//! As long as the thread pool has not been shut down, a worker will run in a
//! loop. On each iteration, it consumes all tasks on its mpsc channel and
//! pushes them onto the deque. It then pops tasks off of the deque and
//! executes them.
//!
//! If a worker has no work, i.e., both queues are empty, it attempts to steal.
//! To do this, it randomly scans other workers' deques and tries to pop a task.
//! If it finds no work to steal, the thread goes to sleep.
//!
//! When the worker detects that the pool has been shut down, it exits the loop,
//! cleans up its state, and shuts the thread down.
//!
//! ## Thread pool initialization
//!
//! Note that users will normally use the thread pool created by a [`runtime`].
//!
//! By default, no threads are spawned on creation. Instead, when new futures are
//! spawned, the pool first checks if there are enough active worker threads. If
//! not, a new worker thread is spawned.
//!
//! ## Spawning futures
//!
//! The spawning behavior depends on whether a future was spawned from within
//! a worker thread or from an external handle.
//!
//! When spawning a future while external to the thread pool, the current
//! strategy is to randomly pick a worker to submit the task to. The task is then
//! pushed onto that worker's mpsc channel.
//!
//! When spawning a future while on a worker thread, the task is pushed onto the
//! back of the current worker's deque.
//!
//! ## Blocking annotation strategy
//!
//! The [`blocking`] function is used to annotate a section of code that
//! performs a blocking operation, either by issuing a blocking syscall or
//! performing any long running CPU-bound computation.
//!
//! The strategy for handling blocking closures is to hand off the worker to a
//! new thread. This implies handing off the `deque` and `mpsc`. Once this is
//! done, the new thread continues to process the work queue and the original
//! thread is able to block. Once it finishes processing the blocking future, the
//! thread has no additional work and is inserted into the backup pool. This
//! makes it available to other workers that encounter a [`blocking`] call.
//!
//! [`blocking`]: fn.blocking.html
//! [`runtime`]: https://docs.rs/tokio/0.1/tokio/runtime/
extern crate tokio_executor;
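
A hedged sketch of the [`blocking`] annotation the new crate docs describe,
closely following its documented usage pattern (tokio-threadpool 0.1-era API):

  extern crate futures;
  extern crate tokio_threadpool;

  use std::thread;
  use std::time::Duration;

  use futures::future::{lazy, poll_fn};
  use futures::Future;
  use tokio_threadpool::{blocking, ThreadPool};

  fn main() {
      let pool = ThreadPool::new();

      pool.spawn(lazy(|| {
          poll_fn(|| {
              // The closure may block; the pool hands this worker's queue off
              // to another thread while it does.
              blocking(|| {
                  thread::sleep(Duration::from_millis(100));
                  println!("blocking section finished");
              })
              .map_err(|_| panic!("the thread pool shut down"))
          })
      }));

      // Gracefully shut down the pool, letting the spawned task complete.
      pool.shutdown().wait().unwrap();
  }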
@ -128,6 +92,56 @@ extern crate log;
#[cfg(feature = "unstable-futures")]
extern crate futures2;
// ## Crate layout
//
// The primary type, `Pool`, holds the majority of a thread pool's state,
// including the state for each worker. Each worker's state is maintained in an
// instance of `worker::Entry`.
//
// `Worker` contains the logic that runs on each worker thread. It holds an
// `Arc` to `Pool` and is able to access its state from `Pool`.
//
// `Task` is a harness around an individual future. It manages polling and
// scheduling that future.
//
// ## Sleeping workers
//
// Sleeping workers are tracked using a [treiber stack]. This results in the
// thread that most recently went to sleep getting woken up first. When the pool
// is not under load, this helps threads shut down faster.
//
// Sleeping is done by using `tokio_executor::Park` implementations. This allows
// the user of the thread pool to customize the work that is performed to sleep.
// This is how injecting timers and other functionality into the thread pool is
// done.
//
// ## Notifying workers
//
// When there is work to be done, workers must be notified. However, notifying a
// worker requires cross thread coordination. Ideally, a worker would only be
// notified when it is sleeping, but there is no way to know if a worker is
// sleeping without cross thread communication.
//
// The two cases when a worker might need to be notified are:
//
// 1. A task is externally submitted to a worker via the mpsc channel.
// 2. A worker has a backlog of work and needs other workers to steal from it.
//
// In the first case, the worker will always be notified. However, it could be
// possible to avoid the notification if the mpsc channel holds two or more
// tasks *after* the new task is submitted. In this case, we are able to
// assume that the worker has previously been notified.
//
// The second case is trickier. Currently, whenever a worker spawns a new future
// (pushing it onto its deque) and when it pops a future from its mpsc, it tries
// to notify a sleeping worker to wake up and start stealing. This is a lot of
// notification and it **might** be possible to reduce it.
//
// Also, whenever a worker is woken up via a signal and it does find work, it,
// in turn, will try to wake up a new worker.
//
// [treiber stack]: https://en.wikipedia.org/wiki/Treiber_Stack
pub mod park;
mod blocking;
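
As a toy illustration of the LIFO sleeper tracking described above (a
Mutex-guarded Vec stands in for the lock-free Treiber stack; the point is the
wake order, not the synchronization strategy):

  use std::sync::{Arc, Mutex};
  use std::thread;
  use std::time::Duration;

  fn main() {
      let sleepers: Arc<Mutex<Vec<thread::Thread>>> =
          Arc::new(Mutex::new(Vec::new()));
      let mut workers = Vec::new();

      for id in 0..3 {
          let sleepers = Arc::clone(&sleepers);
          workers.push(thread::spawn(move || {
              // Going idle: push ourselves onto the sleeper stack, then park.
              sleepers.lock().unwrap().push(thread::current());
              thread::park();
              println!("worker {} woke up", id);
          }));
          // Stagger the workers so the push order is deterministic.
          thread::sleep(Duration::from_millis(20));
      }

      // Wait until every worker has registered itself as sleeping.
      while sleepers.lock().unwrap().len() < 3 {
          thread::sleep(Duration::from_millis(5));
      }

      // Notify: pop the most recently parked worker first (LIFO order).
      while let Some(sleeper) = sleepers.lock().unwrap().pop() {
          sleeper.unpark();
      }
      for worker in workers {
          worker.join().unwrap();
      }
  }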

View File

@ -33,6 +33,7 @@ thread_local!(static CURRENT_TIMER: RefCell<Option<Handle>> = RefCell::new(None)
/// This function panics if there already is a default timer set.
///
/// [`Delay`]: ../struct.Delay.html
/// [`Delay::new`]: ../struct.Delay.html#method.new
pub fn with_default<F, R>(handle: &Handle, enter: &mut Enter, f: F) -> R
where F: FnOnce(&mut Enter) -> R
{
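
The hunk above shows the thread-local `CURRENT_TIMER` slot; a generic sketch
of the scoped-default pattern that `with_default` implements (a stand-in
`Handle` type rather than tokio-timer's, without the `Enter` parameter, and
without the drop guard the real code would use to reset on panic):

  use std::cell::RefCell;

  #[derive(Clone, Debug)]
  struct Handle(&'static str);

  thread_local!(static CURRENT: RefCell<Option<Handle>> = RefCell::new(None));

  fn with_default<F, R>(handle: &Handle, f: F) -> R
  where
      F: FnOnce() -> R,
  {
      CURRENT.with(|current| {
          let mut current = current.borrow_mut();
          // Mirror the documented behavior: panic if a default is already set.
          assert!(current.is_none(), "default handle already set for this thread");
          *current = Some(handle.clone());
      });

      let result = f();

      CURRENT.with(|current| *current.borrow_mut() = None);
      result
  }

  fn main() {
      with_default(&Handle("my-timer"), || {
          // Code in here (e.g. a `Delay`) would look up the installed handle.
          CURRENT.with(|current| println!("default = {:?}", current.borrow()));
      });
  }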

View File

@ -26,6 +26,7 @@
//! [`Delay`]: ../struct.Delay.html
//! [`Now`]: trait.Now.html
//! [`Now::now`]: trait.Now.html#method.now
//! [`SystemNow`]: struct.SystemNow.html
// This allows the usage of the old `Now` trait.
#![allow(deprecated)]