* Added timeouts to all tests that were missing them
- any issue we have will likely result in deadlocks/starvation, so it's
best if all tests time out quickly rather than requiring a manual kill
or having the CI itself time out
* Added a `support` module and put a bunch of helpers there to DRY the
tests
* As observed in alexcrichton/tokio-signal#38, Signal instances can starve based on the order
they are created in, and this ordering appears to be platform/OS
specific
* The crux of the issue is that we would only *attempt* to broadcast any
pending signals if we successfully read out at least one byte from the
global pipe.
* For reasons unclear to me, the affected Signal instance would get
woken up after the signal handler writes to the global pipe, but it
would immediately hit a WouldBlock error and give up, bypassing the
broadcast attempt (even though the pending flag was correctly set).
- Maybe this has to do with OS-specific details of how the bytes are
delivered (or not), or with some complex interaction between tokio and
the pipe registration. It seems fishy since the strace logs didn't show
the signal handler's pipe write failing either, but I'm all out of ideas
* The fix appears simple: unconditionally attempt to broadcast any
pending signals *any* time a Driver instance is woken up.
* Since we perform an atomic check for each pending signal, we know that
each (coalesced) signal broadcast will happen at most once. If we were
spuriously woken up and no signals were pending, then nothing will be
yielded to any pollers of Signal
* The downside is that since each Signal instance polls a Driver
instance, each poll to Signal will essentially perform N atomic
operations (N = number of signals we support) in an attempt to broadcast
any pending signals.
- However, we can revisit optimizing this better in the future
Fixes alexcrichton/tokio-signal#38
* We introduce a new global structure which keeps track of how many
signal streams have been registered with a given event loop (the event
loop is identified by its OS file descriptor)
* We only deregister our global evented pipe from an event loop if we
are the last signal stream that was registered with it
* Currently, whenever a new signal stream is created we attempt to
register a global pipe with the event loop to drive events.
* We also (correctly) swallow any descriptor-already-registered errors
since the same pipe is always used
* However, we currently *deregister* the same global pipe *any time* a
Signal stream is dropped.
* This means that if two or more Signal instances exist simultaneously
(even if they listen for different signals) and one of them is dropped,
the remainder will starve (until any new Signal instance is created).
* Cargo runs each integration-style test in its own process. Since the
tests use global data structures specific to the process, we should run
them in an isolated manner to avoid cross-test interactions
* Fixes alexcrichton/tokio-signal#39
This allows tokio-signal to build with `-Z minimal-versions` - see
https://github.com/rust-lang/cargo/issues/5657#issuecomment-401110172
for more details.
Earlier versions depend on log 0.3.1, which itself depends on libc
0.1, which doesn't build on any post-1.0 version of Rust.
This text historically was copied verbatim from rust-lang/rust's own README [1]
with the intention of licensing projects the same as rustc's own license, namely
a dual MIT/Apache-2.0 license. The clause about "various BSD-like licenses"
isn't actually correct for almost all projects other than rust-lang/rust and
the wording around "both" was slightly ambiguous.
This commit updates the wording to match more precisely what's in the
standard library [2], namely clarifying that there aren't any BSD-like licenses
in this repository and that the source is licensable under either license, at
your own discretion.
[1]: f0fe716dbc (license)
[2]: f0fe716dbc/src/libstd/lib.rs (L5-L9)
Replace the sequential counter (which might be exhausted) with the
address of an object (kept in a box, so it doesn't move). The address is
also unique, so it is an acceptable ID.
Run multiple event loops (both in parallel and sequentially) to make
sure broadcasting to several of them works, and that we keep working
even after the initial loop has gone away.