Compare commits

...

376 Commits

Author SHA1 Message Date
ripytide
a1c277bc90
docs: correct rng pre-requisite comment (#835)
Fixes #826
2025-07-22 09:42:24 -04:00
ripytide
cd2bfa0f58
improve tower-layer docs (#834)
* improve tower-layer docs

* fix compile issues
2025-07-22 09:33:17 -04:00
cppforliving
9e901d450d
service: Improve unsized types' support (#650)
… and fix missing ready call in example.
2025-07-09 20:53:33 +02:00
ripytide
d21cdbf044
Fix clippy lints (#832) 2025-07-05 14:47:08 +00:00
tottoto
fcef5928a2
chore: Use tokio-stream UnboundedReceiverStream (#831) 2025-07-05 06:56:12 -04:00
Paolo Barbolini
fe3156587c
Bump rand to v0.9 (#811) 2025-07-03 09:52:53 +02:00
Gigabuidl
50d839d3b0
docs: update dead link in util/rng.rs (#820) 2025-07-02 11:13:32 -04:00
Icemic
9754acb5dc
fix: Use minimal tokio features (#828) 2025-06-30 05:37:12 +00:00
tottoto
b79c0c7497
chore: Remove unused dependency (#822) 2025-06-18 00:38:41 +02:00
Jonas Platte
6a3ab07b4c
style: address clippy lints (#827) 2025-06-12 13:30:19 -04:00
Tait Hoyem
ec81e5797b
no-std compatibility for underlying traits (#810) 2025-06-01 00:02:07 +02:00
tottoto
81658e65ad
chore: Replace type related to future with standard library (#805) 2025-04-29 13:14:11 -07:00
katelyn martin
abb375d08c
chore: add Buffer breaking change to changelog (#819)
in #635, some subtle breaking changes were made to how `Buffer` works.

this is documented in the description of that PR, here:

> I had to change some of the integration tests slightly as part of this
> change. This is because the buffer implementation using semaphore
> permits is _very subtly_ different from one using a bounded channel. In
> the `Semaphore`-based implementation, a semaphore permit is stored in
> the `Message` struct sent over the channel. This is so that the capacity
> is used as long as the message is in flight. However, when the worker
> task is processing a message that's been received from the channel,
> the permit is still not dropped. Essentially, the one message actively
> held by the worker task _also_ occupies one "slot" of capacity, so the
> actual channel capacity is one less than the value passed to the
> constructor, _once the first request has been sent to the worker_. The
> bounded MPSC changed this behavior so that capacity is only occupied
> while a request is actually in the channel, which broke some tests
> that relied on the old (and technically wrong) behavior.

bear particular attention to this:

> The bounded MPSC changed this behavior so that capacity is only
> occupied while a request is actually in the channel, which broke some
> tests that relied on the old (and technically wrong) behavior.

this is a change in behavior that might affect downstream callers.

this commit adds mention of these changes to the changelog, to help
consumers navigate the upgrade from tower 0.4 to 0.5.

Signed-off-by: katelyn martin <me+cratelyn@katelyn.world>
2025-03-14 16:50:55 -04:00
katelyn martin
6c8d98b470
chore: add Buffer breaking changes to changelog (#818)
in #654, breaking changes were made to the `Buffer` type. this commit
adds mention of these breaking changes to the changelog, so that users
upgrading from 0.4 to 0.5 can have record of what changed, and why.
2025-03-12 18:34:40 -04:00
katelyn martin
fb646693bf
chore: note Budget breaking change in changelog (#817)
`Budget` is now a trait in the 0.5 release. this is a breaking change relative to the 0.4 release, where it was a concrete [struct](https://docs.rs/tower/0.4.13/tower/retry/budget/struct.Budget.html).

this commit updates the changelog to characterize this as a breaking change, rather than an additive change.
2025-03-11 17:04:17 -04:00
katelyn martin
ee149f0170
chore: fix broken links in changelog (#816)
this commit fixes some broken PR links in the changelog, related to the 0.5.2 release.
2025-03-11 16:52:01 -04:00
katelyn martin
aade4e34ff
chore: add breaking changes to changelog (#815)
in #637, breaking changes were made to the `Either<A, B>` service.

this commit adds documentation of these breaking changes to the changelog, so that users upgrading from 0.4 to 0.5 have record of what changed when, and why.
2025-03-11 16:42:38 -04:00
Carlos O'Ryan
954e4c7e8d
docs: bad documentation in ExponentialBackoffMaker (#809) 2024-12-27 11:42:55 +00:00
Jess Izen
34a6951a46
add ServiceBuilder::boxed_clone_sync helper (#804) 2024-12-20 18:34:40 -05:00
Sean McArthur
7dc533ef86 tower v0.5.2 2024-12-11 08:25:54 -05:00
Bhuwan Pandit
a09fd9742d
chore: fix dead code warning for 'Sealed' trait and 'sample_floyd2' func (#799) 2024-12-10 14:47:18 -05:00
Jess Izen
f57e31b0e6
Add util::BoxCloneSyncServiceLayer (#802)
cc #777
2024-12-10 14:31:45 -05:00
tim gretler
da24532017
Add util::BoxCloneSyncService (#777)
Closes #770
2024-12-10 13:38:18 -05:00
Elichai Turkel
6283f3aff1
Upgrade http and sync_wrapper dependencies (#788) 2024-11-19 11:49:39 -05:00
Jonas Platte
71551010ac
Prepare release of v0.5.1 (#791) 2024-08-21 19:36:33 -04:00
Arnaud Gourlay
b2c48b46a3
Bump dependency on tower-layer (#787) 2024-08-15 14:03:34 +00:00
Toby Lawrence
fec9e559e2
tower-layer: drop versions from dev dependencies (#782) 2024-08-13 12:48:56 -04:00
David Barsky
646804d77e
chore: prepare to release tower-0.5.0, tower-layer-0.3.3, tower-service-0.3.3, and tower-test-0.4.1 (#781) 2024-08-02 15:21:30 -04:00
Dirk Stolle
7202cfeecd
chore: fix a few typos (#780) 2024-07-23 19:54:14 -04:00
Glen De Cauwsemaecker
85080a5617
use workspace dependencies for tower (#778)
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-23 11:26:01 -04:00
Glen De Cauwsemaecker
88a7d3e01e
fix warnings found when running check/doc commands (#779)
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-23 10:55:12 -04:00
Dirk Stolle
a6e98a7d69
chore: update GitHub Actions CI (#740) 2024-07-23 09:37:48 -04:00
Toby Lawrence
74e925d2c8
chore: fix spelling errors (#775)
Co-authored-by: Dirk Stolle <striezel-dev@web.de>
2024-07-21 12:36:46 -04:00
Daniél Kerkmann
89ac74f320
feat: Make new functions const when possible (#760)
* feat: Make new functions const when possible

The main reason was to allow initializing the RateLimitLayer in a const context.
So why not make every new function const (wherever it's possible). :P

* Change the assert to use an MSRV-compatible function.

---------

Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-20 13:18:29 -04:00
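A minimal sketch of the use case named in the commit above, assuming tower 0.5 with the "limit" feature and a `const` `RateLimitLayer::new` as the PR describes:

```rust
use std::time::Duration;
use tower::limit::RateLimitLayer;

// Assumes `RateLimitLayer::new` is a `const fn`, as this change describes:
// the layer can now be built in a const context instead of only at runtime.
const RATE_LIMIT: RateLimitLayer = RateLimitLayer::new(100, Duration::from_secs(1));

fn main() {
    // The const item is copied out wherever it is used, like any other layer.
    let _layer = RATE_LIMIT;
}
```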
Glen De Cauwsemaecker
032d17f689
ensure that re-exported symbols show feature label in root (#754)
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-20 13:11:02 -04:00
Josh Stone
05a0a25dcc
Upgrade to indexmap v2 (MSRV 1.63) (#741)
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-20 13:01:30 -04:00
Glen De Cauwsemaecker
7d723eb2fa
remove generic parameters from Reconnect::new (#755)
these were not used, as the only parameters used
come from the impl block (directly and indirectly)

Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-20 12:42:34 -04:00
tottoto
f286933bec
chore: Remove unmatched deny ignore config (#733)
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2024-07-20 12:31:24 -04:00
mxsm
08917603c1
docs: Fix some spelling mistakes (#747) 2024-07-19 16:27:07 -04:00
Eric Crosson
39adf5c509
docs: fix grammar (#749)
This commit uses the correct form of "it's".

"Its" is possessive describes a noun, while "it's" is a contraction that
is short for "it is". Since "ready" is not a noun, we must use the
contraction in this case.

In addition, this commit adds some missing commas.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2023-11-06 21:27:25 +00:00
0x5459
a4c20a388b chore: remove useless code 2023-11-06 16:12:37 -05:00
a-kenji
bf4ea94834
docs: Fix duplicate words in README (#734) 2023-05-16 19:34:51 +00:00
Misha Zharov
be1a4faf66
Check if must_use will fix the issue (#728) 2023-04-28 11:38:59 +02:00
qthree
0c3ae8856e
Derive Clone for AsyncFilterLayer (#731) 2023-04-21 15:25:52 +02:00
Misha Zharov
0604f20c48
Bump version numbers (#729) 2023-04-11 18:55:52 +02:00
Adrien Guillo
d2f1703c48
Derive Clone for RetryLayer (#726) 2023-03-24 21:59:48 +00:00
Spencer Bartholomew
664cb35abb
Fix axum readme link (#721) 2023-02-26 18:37:05 +01:00
Spencer Bartholomew
64182d8243
Recommend Axum instead of Warp (#720) 2023-02-26 15:46:02 +01:00
Jeffrey Hutchins
74881d5311
Copy editing building-a-middleware-from-scratch.md (#718) 2023-01-31 19:42:19 +01:00
Conrad Ludgate
b01bb12ddd
rng: use a simpler random 2-sampler (#716) 2023-01-10 12:11:30 -05:00
Frederik Haaning
6f3050614f
timeout: fix typo in docs (#711) 2022-12-01 11:06:11 -05:00
David Pedersen
c34182d0b3
util: make BoxService impl Sync via SyncWrapper (#702)
* util: make `BoxService` impl `Sync` via `SyncWrapper`

* changelog

* format
2022-11-27 10:09:18 +00:00
JJ Ferman
387d2844b7
Updating generic to be consistent (#710) 2022-11-09 08:02:13 +01:00
Alex Rudy
787f5dd81b
util: Adds a BoxCloneServiceLayer (#708)
This layer is similar to a BoxLayer, but produces a BoxCloneService instead, so it can be used when
the underlying layers must be `Clone`.

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2022-11-04 11:06:23 -04:00
Leonardo Yvens
c9d84cde0c
util: Two call_all bug fixes (#709)
One is handling poll_ready errors (#706).

The other is fixing the TODO about disarming poll_ready, since there is no disarm this makes sure
`poll_ready` is only called if `call` will immediately follow.
2022-11-01 15:28:09 -04:00
boraarslan
d27ba65891
retry: Add Budget trait (#703) 2022-10-24 18:28:22 +00:00
Heiko Seeberger
582a0e0c74
util: improve ServiceExt::oneshot docs (#704) 2022-10-24 16:02:51 +00:00
Sam Lewis
c049ded33f
discover: Implement Clone for Change (#701)
Implements Clone for discover::Change, if the underlying key and value
both implement clone.

This is convenient for use-cases where a single change needs to be
duplicated, and sent to multiple discover streams.

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2022-10-17 18:41:55 +00:00
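A small usage sketch of the `Clone` impl described in the commit above (assumes tower with the "discover" feature enabled):

```rust
use tower::discover::Change;

// With `Clone` on `Change`, a single update can be duplicated and fanned out
// to more than one discover stream. The `String`/`usize` parameters here are
// just placeholder key and service types for illustration.
fn fan_out(change: Change<String, usize>) -> (Change<String, usize>, Change<String, usize>) {
    (change.clone(), change)
}
```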
Sam Lewis
7d829f198e
ready-cache: Allow iteration over ready services (#700)
Adds `iter_ready` and `iter_ready_mut` to allow iteration over ready
services within the ready_cache. This allows the ready cache to be used
for router-like services that wish to direct requests towards specific
services. Allowing iteration directly means that cache keys do not have
to be redundantly stored separate to the ready_cache.
2022-10-17 13:19:26 -04:00
Jonas Platte
87fa8ef782
layer: Implement Layer for tuples of up to 16 elements (#694) 2022-10-04 18:54:39 +00:00
Jonas Platte
3f31ffd2cf
chore: Use doc_auto_cfg (#693)
… so extra doc(cfg) attributes aren't needed in most places.
2022-09-25 16:03:54 +02:00
Daniel Cormier
c5632a26aa
docs(tower): moved docs from private tower::util::boxed module (#684)
Now they're on the public `BoxService` struct that referenced them.

Fixes #683.
2022-09-13 16:24:38 +00:00
Daniel Sedlak
4362dfc70c
retry: Extend Policy trait documentation (#690) 2022-09-12 11:33:20 -04:00
Lucio Franco
b12f14861f
retry: Add generic backoff utilities (#685)
This adds a new `Backoff` trait and an `ExponentialBackoff`
implementation borrowed from `linkerd2-proxy`. This provides the initial
building blocks for a more fully batteries-included retry policy.
2022-08-30 13:59:17 -04:00
Andrew Banchich
b6b0f27197
Remove lspower due to deprecation (#688) 2022-08-30 11:58:36 -04:00
Lucio Franco
e0558266a3
util: Add rng utilities (#686)
This adds new PRNG utilities that only use libstd and not the external
`rand` crate. The motivation for this change is that tower middleware
that need a PRNG don't need the complexity and vast utilities of the
`rand` crate.

This adds an `Rng` trait which abstracts the simple PRNG features tower
needs. It also provides a `HasherRng` which uses the `RandomState` type
from libstd to generate random `u64` values. In addition, there is an
internal-only `sample_inplace` which is used within the balance p2c
middleware to randomly pick a ready service. This implementation is
crate-private since it's quite specific to the balance implementation.

The goal of this, in addition to removing `rand` from the balance
middleware, is to support the upcoming `Retry` changes. `next_f64` will
be used in the jitter portion of the backoff utilities in #685.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2022-08-25 13:06:24 -04:00
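A minimal sketch of the libstd-only approach described in the commit above; this illustrates the idea behind a `RandomState`-backed RNG and is not tower's actual `HasherRng` or `Rng` trait:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

// Hash an incrementing counter with a randomly seeded `RandomState` to
// produce pseudo-random `u64` values without pulling in the `rand` crate.
struct HasherRngSketch {
    state: RandomState,
    counter: u64,
}

impl HasherRngSketch {
    fn new() -> Self {
        Self { state: RandomState::new(), counter: 0 }
    }

    fn next_u64(&mut self) -> u64 {
        let mut hasher = self.state.build_hasher();
        hasher.write_u64(self.counter);
        self.counter += 1;
        hasher.finish()
    }

    // Maps the 53 high bits into [0, 1), which is the shape of output the
    // jitter calculation in the backoff utilities needs.
    fn next_f64(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
}
```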
Lucio Franco
aec7b8f417
retry: Change Policy to accept &mut self (#681)
This changes the `Policy` trait in the `retry` layer to accept `&mut
self` instead of `&self` and changes the output type of the returned
future to `()`. The motivation for this change is to simplify
the trait a bit. Because the trait methods take mutable references, each
retry request session can mutate its local policy.
This is because the policy is cloned for each individual request that
arrives into the retry middleware. In addition, this allows the `Policy`
trait to be object safe.
2022-08-23 14:30:09 -04:00
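A standalone sketch of what the `&mut self` receiver described above buys; this mirrors the shape of a retry decision but is not an implementation of the actual `retry::Policy` trait:

```rust
use std::future::{ready, Ready};

// Because the policy is cloned per request, mutable state such as an attempt
// counter is naturally scoped to a single retry session; no interior
// mutability is needed once the method takes `&mut self`.
struct LimitedAttempts {
    remaining: usize,
}

impl LimitedAttempts {
    // `None` means "return the result as-is"; `Some(future)` means "await the
    // future (e.g. a backoff delay), then retry". The future resolves to `()`.
    fn retry<Res, E>(&mut self, result: &Result<Res, E>) -> Option<Ready<()>> {
        match result {
            Ok(_) => None,
            Err(_) if self.remaining > 0 => {
                self.remaining -= 1;
                Some(ready(()))
            }
            Err(_) => None,
        }
    }
}
```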
Folyd
6d34340f1e
chore: fix Service doc links in tower-layer (#579)
Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2022-08-15 13:13:11 -04:00
Folyd
06a0e597e5
docs: improve docs of retry budget (#613)
Co-authored-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2022-08-11 11:06:03 -04:00
Bryant Luk
8805309486
service: Call inner.poll_ready() in docs when cloning inner (#679)
- The documentation should call self.inner.poll_ready() in the
  Wrapper::poll_ready() call to emphasize that self.inner may only be
  ready on the original instance and not a clone of inner.
2022-08-03 09:56:39 -07:00
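A sketch of the pattern that documentation change describes: readiness belongs to the instance that reported it, so the wrapper polls its stored inner service and, in `call`, takes that ready instance and leaves a fresh clone behind. `Wrapper` here is an illustrative middleware, not a specific tower type:

```rust
use std::task::{Context, Poll};
use tower_service::Service;

struct Wrapper<S> {
    inner: S,
}

impl<S, Req> Service<Req> for Wrapper<S>
where
    S: Service<Req> + Clone,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // Drive the stored instance to readiness, not a clone of it.
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, req: Req) -> Self::Future {
        // The readiness observed above applies to `self.inner`, so swap it out
        // and call the instance that was actually polled ready.
        let clone = self.inner.clone();
        let mut inner = std::mem::replace(&mut self.inner, clone);
        inner.call(req)
    }
}
```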
Russell Cohen
19c1a1dbb8
retry: improve flexibility of retry policy (#584)
* improve flexibility of retry policy

retry::Policy is an effective way to express retries; however, there are two use cases that,
as it stands today, cannot be expressed:
- Altering the final response (e.g. to record the fact that you ran out of retries)
- Altering the request (e.g. to set a header telling the server that this request is a retry)

(Technically the second is possible with `clone_request`, but it's a little unclear _which_ request would actually get sent).

This change implements what I think is pretty close to the minimal update to make this possible, namely, `req`
and `Res` both become mutable references. This enables policies to mutate them during execution & enables both of the
use cases above without complicating the "simple path" callers who don't need this behavior.

**This is a breaking change.** However, the fixes are only a couple of `&mut` and potentially a call to `as_ref()`.

* Update changelog

* doc updates

- Wrap docs to 80 characters
- Small doc tweaks / rewrites to clarify & remove `you`

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2022-08-01 13:24:34 -04:00
Eliza Weisman
ee826286fd
chore: allow publishing releases from version branches (#674) 2022-06-17 13:41:20 -07:00
Oliver Gould
edd922d6d0
ready-cache: Ensure cancelation can be observed (#668)
`tokio::task` enforces a cooperative scheduling regime that can cause
`oneshot::Receiver::poll` to return pending after the sender has sent an
update. `ReadyCache` uses a oneshot to notify pending services that they
should not become ready. When a cancelation is not observed, the ready
cache returns service instances that should have been canceled, which
breaks assumptions and causes an invalid state.

This branch replaces the use of `tokio::sync::oneshot` for canceling 
pending futures with a custom cancelation handle using an `AtomicBool`
and `futures::task::AtomicWaker`. This ensures that canceled `Pending`
services are always woken even when the task's budget is exceeded.
Additionally, cancelation status is now always known to the `Pending`
future, by checking the `AtomicBool` immediately on polls, even in cases
where the canceled `Pending` future was woken by the inner `Service`
becoming ready, rather than by the cancelation.

Fixes #415

Signed-off-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2022-06-17 12:06:35 -07:00
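A minimal sketch of the cancelation handle described above, using the same building blocks the commit names (`AtomicBool` plus the futures crate's `AtomicWaker`); the type and method names here are illustrative, not the ready-cache internals:

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::task::{Context, Poll};

use futures::task::AtomicWaker;

struct CancelShared {
    canceled: AtomicBool,
    waker: AtomicWaker,
}

#[derive(Clone)]
struct CancelHandle {
    shared: Arc<CancelShared>,
}

impl CancelHandle {
    fn new() -> Self {
        Self {
            shared: Arc::new(CancelShared {
                canceled: AtomicBool::new(false),
                waker: AtomicWaker::new(),
            }),
        }
    }

    fn cancel(&self) {
        self.shared.canceled.store(true, Ordering::Release);
        self.shared.waker.wake();
    }

    // Called from the pending future's `poll`. The flag is checked on every
    // poll, so cancelation is observed even if the future was woken for some
    // other reason or the task's cooperative budget ran out earlier; the waker
    // is registered before the check so a concurrent `cancel` cannot be missed.
    fn poll_canceled(&self, cx: &mut Context<'_>) -> Poll<()> {
        self.shared.waker.register(cx.waker());
        if self.shared.canceled.load(Ordering::Acquire) {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}
```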
Eliza Weisman
22b6fc743b
ci: run MSRV checks with minimal dep versions (#670)
In many cases, new releases of a dependency can break compatibility with
Tower's minimum supported Rust version (MSRV). It shouldn't be necessary
for Tower to bump its MSRV when a dependency does, as users on older
Rust versions should be able to depend on older versions of that crate.
Instead, we should probably just run our MSRV checks with minimal
dependency versions.

This branch changes Tower's CI jobs to do that. It was also necessary to 
make some changes to the `Cargo.toml` to actually fix the build with
minimal dependency versions.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-06-17 11:23:32 -07:00
Eliza Weisman
45a13b128d
ready_cache: just use pin_project for Pending (#667)
This gets rid of the `Unpin` impl with the weird comment on it.

Alternatively, we could just put an `S: Unpin` bound on `Pending`, but
this changes the public API to require that the service type is `Unpin`.
In practice, it will be, but we could also just avoid the trait bound.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-06-17 09:15:23 -07:00
Matt Klein
3c170aaf19
load_shed: make constructor for Overloaded error public (#661)
This allows for mocking. This also matches what is done for
the timeout Elapsed error.

Signed-off-by: Matt Klein <mklein@lyft.com>
2022-06-13 11:25:11 -07:00
Matt Klein
8b522920f8
service: clarify docs around shared resource consumption in poll_ready() (#662)
Signed-off-by: Matt Klein <mklein@lyft.com>
2022-06-10 10:52:39 -07:00
Noah Kennedy
5064987ffe
log: don't enable this feature by default (#660)
Unfortunately, tracing/log is a non-additive feature (https://github.com/tokio-rs/tracing/issues/1793), so enabling it is a bit radioactive.
2022-04-14 09:25:42 +02:00
Bruno
34d6e7befa
fix: broken Service link (#659) 2022-04-07 13:33:15 -04:00
Eliza Weisman
71292ee683
balance: remove Pool (#658)
Per #456, there are a number of issues with the `balance::Pool` API that
limit its usability, and it isn't widely used. In the discussion on that
issue, we agreed that it should probably just be removed in 0.5 --- it
can be replaced with something more useful later.

This branch removes `balance::Pool`.

Closes #456.
2022-03-31 11:09:04 -07:00
Leonardo Yvens
cc9e5bea6f
util: Fix call_all hang when stream is pending (#656)
Currently `call_all` will hang in a busy loop if called when the input
stream is pending.
2022-03-29 12:25:51 -07:00
Eliza Weisman
9f86b8f3d8
buffer: change Buffer to be generic over inner service's Future (#654)
* buffer: change `Buffer` to be generic over inner service's `Future`

This commit changes the `Buffer` service to be generic over the inner
service's `Future` type, rather than over the `Service` type itself. The
`Worker` type is still generic over the inner `Service` (and, it must
be, as it stores an instance of that service). This should reduce type
complexity of buffers a bit, as they will erase nested service types.

Unfortunately, `Buffer` is still generic over the inner service's
`Future`, which may also be nested, but it is likely to be less complex
than the `Service`. It would be nice if we could erase the `Future` type
as well, but I don't believe this is possible without either boxing the
futures or changing them to always be spawned on a background task,
neither of which seems desirable here.

Closes #641

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-03-29 10:41:19 -07:00
Eliza Weisman
e4e440906d
util: remove deprecated ServiceExt::ready_and (#652)
These were deprecated in #567. In 0.5, we can just remove them
entirely.

Closes #568

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-03-11 09:51:00 -08:00
Leonardo Yvens
d8c73fcb33
util: Unecessary bound in ServiceExt::call_all (#651) 2022-03-10 12:53:08 +01:00
Leonardo Yvens
04f0bd0cc3
util: Change CallAll error type to Svc::Error (#649) 2022-03-09 21:17:31 +01:00
Folyd
4bcfaeb33e
chore: replace Never with Infallible (#612)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2022-03-08 09:13:16 -08:00
Eliza Weisman
0e907963c3
rewrite buffer to use bounded MPSC (#635)
* rewrite `buffer` to use bounded MPSC

## Motivation

Currently, `tower::buffer` uses `tokio::sync::mpsc`'s _unbounded_
channel plus a `tokio::sync::Semaphore`, in order to re-implement a
bounded channel. This was necessary because when this code was updated
to the latest version of `tokio`, there was no way to reserve a
non-borrowed send permit from a `Sender`. Thus, it was necessary to use
the `Semaphore` for the future that is polled in `poll_ready` to acquire
send capacity, since a `Permit` from the `Sender` could not be stored in
the struct until it's consumed in `call`.

This code is Not Ideal. Reimplementing the bounded channel makes the
implementation more complicated, and means that there is a bunch of
extra stuff we have to do to e.g. propagate cancellations/service errors
to tasks waiting on `poll_ready`. The bounded MPSC would solve most of
this for us. It might also be a bit more efficient, since we would only
have a single reference-counted heap allocation (the `Sender`), rather
than two (the `Sender` _and_ the `Arc<Semaphore>`).

In `tokio-util` v0.7, the semantics of `PollSender` were changed to provide
a `poll_reserve` rather than `poll_send_done`, which is the required
behavior for `Buffer`. Therefore, we can now just use the
`tokio_util::sync::PollSender` type, and the buffer implementation is
now much simpler.

## Solution

This branch changes the buffer to use only a single bounded MPSC via
`PollSender`, rather than an unbounded MPSC and a semaphore. The bounded
MPSC internally manages its semaphore, so we can now remove a lot of
complexity in the current implementation.

I had to change some of the integration tests slightly as part of this
change. This is because the buffer implementation using semaphore
permits is _very subtly_ different from one using a bounded channel. In
the `Semaphore`-based implementation, a semaphore permit is stored in
the `Message` struct sent over the channel. This is so that the capacity
is used as long as the message is in flight. However, when the worker
task is processing a message that's been received from the channel, the
permit is still not dropped. Essentially, the one message actively held
by the worker task _also_ occupies one "slot" of capacity, so the actual
channel capacity is one less than the value passed to the constructor,
_once the first request has been sent to the worker_. The bounded MPSC
changed this behavior so that capacity is only occupied while a request
is actually in the channel, which broke some tests that relied on the
old (and technically wrong) behavior.

## Notes

There is one sort of significant issue with this change, which is that
it unfortunately requires adding `Send` and `'static` bounds to the
`T::Future` and `Request` types. This is because they occur within the
`Message` type that's sent over the channel, and a MPSC's
`OwnedPermit<T>` for a message of type `T` is only `Send` or `'static`
when `T` is. For `PollSender` to be `Send` + `Sync`, the
`ReusableBoxFuture` that it uses internally requires the future be `Send
+ 'static`, which it is not when `OwnedPermit` isn't.

I don't believe that it's actually _necessary_ for the `OwnedPermit<T>`
type to require `T: Send` in order to be `Send`, or `T: 'static` in
order to be valid for the `'static` lifetime. An `OwnedPermit` never
actually contains an instance of type `T`, it just represents the
_capacity_ to send that type to the channel. The channel itself will
actually contain the values of type `T`. Therefore, it's possible this
could be changed upstream in Tokio, although I haven't looked into it
yet.

This is, however, a breaking change.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-02-25 17:04:21 +00:00
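A sketch of the `poll_reserve` flow this rewrite relies on, using `tokio_util::sync::PollSender` from tokio-util 0.7 as named above; these free functions are illustrative and are not the buffer's actual code:

```rust
use std::task::{Context, Poll};

use tokio_util::sync::PollSender;

// Capacity is acquired in a poll-style method, so a `Service::poll_ready`
// implementation can drive it without holding a borrowed permit across calls.
fn poll_ready_sketch<T: Send + 'static>(
    tx: &mut PollSender<T>,
    cx: &mut Context<'_>,
) -> Poll<Result<(), &'static str>> {
    tx.poll_reserve(cx).map_err(|_| "buffer worker is gone")
}

// Consumes the slot reserved above; an error means the receiving worker task
// has shut down. In the real middleware this surfaces as a service error.
fn call_sketch<T: Send + 'static>(tx: &mut PollSender<T>, msg: T) {
    let _ = tx.send_item(msg);
}
```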
Oliver Gould
f0e999521d
spawn-ready: Remove useless MakeSpawnReady type (#536)
The `spawn_ready` module has two notable (public API) problems:

1. `MakeSpawnReady` is useless, as it doesn't use the target type in any
   novel way. It's nothing more than a `MapResponse`.
2. The `SpawnReadyLayer` type produces a `MakeSpawnReady` (which is, as
   mentioned, useless).

This change removes the `spawn_ready::make` module and modifies
`SpawnReadyLayer` to produce `SpawnReady` instances directly.

---

This is unfortunately a breaking change. However, the current state of
`SpawnReadyLayer` is so convoluted that I can't imagine anyone is
actually using it... If the breakage is untenable, I can revert the
module changes and include only the `JoinHandle` change.
2022-02-17 20:19:58 +00:00
David Pedersen
522687c2cd
util: rework Either (#637)
In practice I've found `Either` to be hard to use since it changes the
error type to `BoxError`. That means if you combine two infallible
services you get a service that, to the type system, is fallible. That
doesn't work well with [axum's error
handling](https://docs.rs/axum/latest/axum/error_handling/index.html)
model which requires all services to be infallible and thus always
return a response. So you end up having to add boilerplate just to
please the type system.

Additionally, the fact that `Either` implements `Future` also means we
cannot fully remove the dependency on `pin-project` since
`pin-project-lite` doesn't support tuple enum variants, only named
fields.

This PR reworks `Either` to address these:

- It now requires the two services to have the same error type so no
  type information is lost. I did consider doing something like `where
  B::Error: From<A::Error>` but I hope this simpler model will lead to
  better compiler errors.
- Changes the response future to be a struct with a private enum using
  `pin-project-lite`
- Removes the `Future` impl so we can remove the dependency on
  `pin-project`

Goes without saying that this is a breaking change so we have to wait
until tower 0.5 to ship this.

cc @jplatte

Fixes #594
Fixes #550
2022-02-17 20:09:44 +00:00
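An illustrative sketch of the reworked shape described above, not tower's exact definition: both branches must share the same `Response` and `Error` types, so combining two infallible services stays infallible. Boxing the response future is a simplification; the real type uses a pin-projected future struct:

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};

use tower_service::Service;

pub enum Either<A, B> {
    Left(A),
    Right(B),
}

impl<A, B, Req> Service<Req> for Either<A, B>
where
    A: Service<Req>,
    A::Future: Send + 'static,
    B: Service<Req, Response = A::Response, Error = A::Error>,
    B::Future: Send + 'static,
{
    // No type information is lost: the error type is whatever the two
    // branches already agree on, not a forced `BoxError`.
    type Response = A::Response;
    type Error = A::Error;
    type Future = Pin<Box<dyn Future<Output = Result<A::Response, A::Error>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        match self {
            Either::Left(a) => a.poll_ready(cx),
            Either::Right(b) => b.poll_ready(cx),
        }
    }

    fn call(&mut self, req: Req) -> Self::Future {
        match self {
            Either::Left(a) => Box::pin(a.call(req)),
            Either::Right(b) => Box::pin(b.call(req)),
        }
    }
}
```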
Grachev Mikhail
eee7d24e3a
tower: fix docs typo (#646) 2022-02-17 19:31:52 +00:00
Eliza Weisman
12a06035eb
chore: prepare to release tower v0.4.12 (#642)
* chore: prepare to release v0.4.12

# 0.4.12 (February 16, 2022)

### Fixed

- **hedge**, **load**, **retry**: Fix use of `Instant` operations that
  can panic on platforms where `Instant` is not monotonic ([#633])
- Disable `attributes` feature on `tracing` dependency ([#623])
- Remove unused dependencies and dependency features with some feature
  combinations ([#603], [#602])
- **docs**: Fix a typo in the RustDoc for `Buffer` ([#622])

### Changed

- Updated minimum supported Rust version (MSRV) to 1.49.0.
- **hedge**: Updated `hdrhistogram` dependency to v7.0 ([#602])
- Updated `tokio-util` dependency to v0.7 ([#638])

[#633]: https://github.com/tower-rs/tower/pull/633
[#623]: https://github.com/tower-rs/tower/pull/623
[#603]: https://github.com/tower-rs/tower/pull/603
[#602]: https://github.com/tower-rs/tower/pull/602
[#622]: https://github.com/tower-rs/tower/pull/622
[#638]: https://github.com/tower-rs/tower/pull/638

* add msrv
2022-02-16 22:42:08 +00:00
Eliza Weisman
5e280fedca
ci: fix wrong tag regex for tower (#644)
In #643, I accidentally included a `v` before the version number in the
regex for matching release tags for the `tower` crate, but not for
`tower-whatever` crates. All previous release tags on this repo don't
use a `v`, so adding it was a mistake. This branch removes it.
2022-02-16 22:07:08 +00:00
Eliza Weisman
9c184d81bc
chore: bump MSRV to 1.49.0 (#645)
`tower` builds are now failing on CI because Tokio v1.17.0 bumped MSRV
to 1.49.0. This branch updates `tower`'s MSRV to 1.49.0 to track Tokio's
MSRV. I also added nicer documentation of the MSRV based on Tokio's, and
added the `rust-version` Cargo metadata to the `tower` crate's
`Cargo.toml`.

Note that `tower-service` and `tower-layer` can technically continue to
support much earlier Rust versions than `tower` can, since they don't
depend on external crates and are very small. We could consider testing
separate, older MSRVs on CI for those crates individually. I didn't do
that in this PR, though, because I wasn't sure if this was worth the
effort and I just wanted to get CI passing again.
2022-02-16 13:14:05 -08:00
Eliza Weisman
7b6587e412
ci: automatically publish release notes to GitHub Releases (#643)
This branch adds a GitHub Actions workflow to automatically publish
release notes to GitHub Releases when a tag is pushed on the `master`
branch that corresponds to a release.

The release notes are parsed from the changelog using
taiki-e/create-release-action. The workflow will only run when a tag
matching `tower-[0-9].+` or `tower-[a-z]+-[0-9].+` is pushed to the
`master` branch on the origin (`tower-rs/tower`) repo.
2022-02-16 20:20:29 +01:00
Eliza Weisman
386de64ab4
tower: update tokio-util to v0.7 (#638)
This PR updates `tokio-util` to v0.7.

It also updates the minimum `tokio` dependency to v1.6.0.
This is because `tokio-util` requires at least `tokio` v1.6.0 for
`mpsc::Sender::reserve_owned`, but it only specifies a minimum version
of v1.0.0. This is incorrect and should be considered an upstream bug,
but updating our tokio dep fixes this, so that should at least unbreak
`tower`'s build for now.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-02-11 11:29:57 -08:00
Eliza Weisman
db43c07e96
tower: fix annoying clippy lints (#639)
This fixes a bunch of minor clippy lints. None of them were particularly
major, but I was getting tired of the warnings showing up in vscode.

The one lint that had to be ignored rather than fixed is the
`clippy::bool_assert_comparison` lint, which triggers on the
`tower_test::assert_request_eq!` macro. The lint triggers when writing
code like `assert_eq!(whatever, true)` rather than simply
`assert!(whatever)`. In this case, this occurs because the macro makes
an assertion about a request value, and in _some_ tests, the request
type is `bool`. We can't change this to use `assert!`, because in most
cases, when the request is not `bool`, we actually do need `assert_eq!`,
so I ignored that warning.
2022-02-11 11:11:15 -08:00
Eliza Weisman
665834c7a6
ci: disable fail-fast for Rust version matrix jobs (#640)
Tower has test and check jobs on CI that run on a build matrix
including a number of Rust versions. By default, GitHub Actions has
fail-fast semantics for matrix jobs, so if any matrix job fails, the
rest are cancelled and the build is failed. This is intended to help
builds complete faster.

This isn't really the ideal behavior for testing across multiple Rust
versions. When a build fails on a particular toolchain version, we would
ideally like to know whether the failure is localized to that version or
exists on _all_ Rust versions. This is particularly important for builds
on nightly Rust, as the nightly toolchain is more likely to contain
compiler regressions that might not be our fault at all. Similarly, we
might want to know if a change only broke the build on our MSRV, or if
it broke the build everywhere --- such an issue would be fixed
differently.

This also currently means that the nightly test run failing will prevent
PRs from being merged, even if the failure is due to a nightly compiler
regression. We currently only *require* the stable and MSRV test runs
to pass in order to merge a PR, but because the fail-fast behavior
will cancel them if the nightly build fails, this means that nightly failing
will effectively prevent merging PRs...which, given that it's not marked
as required, seems different from what we intended.

Therefore, this PR changes the CI workflow to disable fail-fast behavior
on the cross-version test jobs.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2022-02-11 11:05:03 -08:00
Oliver Gould
2f50b49f5a Avoid time operations that can panic
We have reports of runtime panics (linkerd/linkerd2#7748) that sound a
lot like rust-lang/rust#86470. We don't have any evidence that these
panics originate in tower, but we have some potentially flawed `Instant`
arithmetic that could panic in this way.

Even though this is almost definitely a bug in Rust, it seems most
prudent to actively avoid the uses of `Instant` that are prone to this
bug.

This change replaces uses of `Instant::elapsed` and `Instant::sub` with
calls to `Instant::saturating_duration_since` to prevent this class of
panic. These fixes should ultimately be made in the standard library,
but this change lets us avoid this problem while we wait for those
fixes.

See also hyperium/hyper#2746
2022-01-31 16:23:47 -08:00
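A short sketch of the substitution this commit makes, using only the standard library:

```rust
use std::time::{Duration, Instant};

// `earlier.elapsed()` and `Instant::now() - earlier` can panic on platforms
// where `Instant` is not monotonic; `saturating_duration_since` clamps the
// result to zero instead.
fn elapsed_without_panicking(earlier: Instant) -> Duration {
    Instant::now().saturating_duration_since(earlier)
}
```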
Jake Shadle
20d7b55556
chore: reenable all cargo deny checks (#632) 2022-01-31 09:27:27 +01:00
Christian Legnitto
71a1578f27
chore: Update links to layer list in README (#627)
Previous links were broken. These now link to https://docs.rs/tower/latest/tower/#modules.

Co-authored-by: Oliver Gould <ver@buoyant.io>
2022-01-18 08:08:23 +01:00
Jonas Platte
d0d8707ac0
Fix sometimes-unused dependencies (#603)
`tower` currently has required dependencies that may not be used
unless certain features are enabled.

This change updates `tower` to make these dependencies optional.

Furthermore, this change removes use of `html_root_url`, which is no
longer recommended (https://github.com/rust-lang/api-guidelines/pull/230),
and updates the documented release instructions.
2022-01-17 15:25:07 -08:00
Paolo Barbolini
7f004da56f
Import tracing without the attributes feature (#623)
tracing-attributes depends on syn and proc-macro2, which are slow to compile
2021-12-29 09:47:08 -08:00
Sibi Prabakaran
373a010ff5
Minor doc fix for buffer function (#622) 2021-12-24 10:55:07 +01:00
Oliver Gould
b20b3cbd57
Assert that no unsafe code is used in tower (#621)
The tower crates do not include any `unsafe` code, but tools like
[`cargo-geiger`][cg] can't necessarily detect that. This change adds a
stronger assertion at the top of each crate with a
`#![forbid(unsafe_code)]` directive to assert that no unsafe code is
used in the crate. This also serves as a more obvious obstacle to
introducing unsafe code in future changes.

[cg]: https://github.com/rust-secure-code/cargo-geiger
2021-11-24 15:26:16 -08:00
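The directive in question, as it would appear at the top of each crate's lib.rs:

```rust
// Crate-level attribute: any use of `unsafe` anywhere in the crate becomes a
// hard compile error, which tools and reviewers can rely on.
#![forbid(unsafe_code)]
```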
Qinxuan Chen
2d167204dc
tower: update hdrhistogram to 7.0 and remove useless dependencies (#620)
Signed-off-by: koushiro <koushiro.cqx@gmail.com>
2021-11-19 10:12:48 +01:00
David Pedersen
7674109b28
tower: prepare to release 0.4.11 (#618)
* tower: prepare to release 0.4.11

Added

- **util**: Add `CloneBoxService` which is a `Clone + Send` boxed `Service` ([#615])
- **util**: Add `ServiceExt::boxed` and `ServiceExt::clone_boxed` for applying the
  `BoxService` and `CloneBoxService` middleware ([#616])
- **builder**: Add `ServiceBuilder::boxed` and `ServiceBuilder::clone_boxed` for
  applying `BoxService` and `CloneBoxService` layers ([#616])

Fixed

- **balance**: Remove redundant `Req: Clone` bound from `Clone` impls
  for `MakeBalance`, and `MakeBalanceLayer` ([#607])
- **balance**: Remove redundant `Req: Debug` bound from `Debug` impls
  for `MakeBalance`, `MakeFuture`, `Balance`, and `Pool` ([#607])
- **ready-cache**: Remove redundant `Req: Debug` bound from `Debug` impl
  for `ReadyCache` ([#607])
- **steer**: Remove redundant `Req: Debug` bound from `Debug` impl
  for `Steer` ([#607])
- **util**: Remove redundant `F: Clone` bound
  from `ServiceExt::map_request` ([#607])
- **docs**: Fix `doc(cfg(...))` attributes
  of `PeakEwmaDiscover`, and `PendingRequestsDiscover` ([#610])
- **util**: Remove unnecessary `Debug` bounds from `impl Debug for BoxService` ([#617])
- **util**: Remove unnecessary `Debug` bounds from `impl Debug for UnsyncBoxService` ([#617])

[#607]: https://github.com/tower-rs/tower/pull/607
[#610]: https://github.com/tower-rs/tower/pull/610
[#616]: https://github.com/tower-rs/tower/pull/616
[#617]: https://github.com/tower-rs/tower/pull/617
[#615]: https://github.com/tower-rs/tower/pull/615

* sorting

* Rename `CloneBoxService` to `BoxCloneService`

* formatting

* also update changelog
2021-11-18 20:40:01 +01:00
David Pedersen
4d80f7ed90
builder,util: add convenience methods for boxing services (#616)
* builder,util: add convenience methods for boxing services

This adds a couple of new methods to `ServiceBuilder` and `ServiceExt`:

- `ServiceBuilder::boxed`
- `ServiceExt::boxed`
- `ServiceBuilder::clone_boxed`
- `ServiceExt::clone_boxed`

They apply `BoxService::layer` and `CloneBoxService::layer`
respectively.

* fix doc links

* add missing `cfg`s

* Update tower/CHANGELOG.md

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

* Apply suggestions from code review

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

* not sure why rustdoc cannot infer these

* line breaks

* trailing whitespace

* make docs a bit more consistent

* fix doc links

* update tokio

* don't pull in old version of tower

* Don't run `cargo deny check bans` as it hangs

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-11-18 13:56:19 +01:00
David Pedersen
48f8ae90a4
util: remove unnecessary Debug bounds from boxed services (#617) 2021-11-11 21:59:29 +01:00
David Pedersen
973bf71583
util: add CloneBoxService (#615)
* util: add `CloneService`

This upstreams a little utility I'm using a bunch in axum. It's often
useful to erase the type of a service while still being able to clone
it.

`BoxService` isn't `Clone`, so previously you had to combine it with
`Buffer`, but doing that a lot (which we did in axum) had a measurable
impact on performance.

* Address review feedback

* remove needless trait bounds
2021-11-09 21:39:27 +01:00
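A usage sketch of the type this commit adds, under the name it shipped with after the rename noted in the 0.4.11 release entry above (`BoxCloneService`, in tower's `util` module):

```rust
use tower::{service_fn, util::BoxCloneService, BoxError};

// The concrete service type is erased while `Clone` is preserved, so no
// `Buffer` is needed just to get a clonable handle.
fn erased() -> BoxCloneService<String, usize, BoxError> {
    BoxCloneService::new(service_fn(|req: String| async move {
        Ok::<_, BoxError>(req.len())
    }))
}
```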
cppforliving
62df5e72b0
docs: Fix doc(cfg(...)) attributes (#610)
`RUSTDOCFLAGS='--cfg docsrs' cargo +nightly doc --all-features` outputs some warnings
```
warning: unused attribute `doc`
  --> tower/src/load/peak_ewma.rs:52:20
   |
52 | #[cfg_attr(docsrs, doc(cfg(feature = "discover")))]
   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_attributes)]` on by default
note: the built-in attribute `doc` will be ignored, since it's applied to the macro invocation `pin_project`
  --> tower/src/load/peak_ewma.rs:53:1
   |
53 | pin_project! {
   | ^^^^^^^^^^^

warning: unused attribute `doc`
  --> tower/src/load/pending_requests.rs:31:20
   |
31 | #[cfg_attr(docsrs, doc(cfg(feature = "discover")))]
   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
note: the built-in attribute `doc` will be ignored, since it's applied to the macro invocation `pin_project`
  --> tower/src/load/pending_requests.rs:32:1
   |
32 | pin_project! {
   | ^^^^^^^^^^^

warning: `tower` (lib doc) generated 2 warnings
```

This PR is an attempt to fix this.
2021-11-09 09:26:57 -08:00
Folyd
0f16ea5652
docs: Replace ready_and() with ready() in docs (#611) 2021-11-02 15:19:50 +01:00
cppforliving
80c6e38f2f
Remove redundant Clone, and Debug bounds (#607)
The balancer types derive `Debug` and `Clone` implementations, but
this unnecessarily requires that its type parameters implement these
traits.

This change provides manual implementations for `Clone` and `Debug`
to avoid this unintentional restriction.

Closes #606
2021-10-27 22:33:39 -07:00
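A sketch of the fix pattern on an illustrative type (not the actual balancer): when a type parameter only appears in `PhantomData`, a derived `Clone` would demand that it be `Clone` needlessly, so a manual impl bounds only the field that is actually cloned:

```rust
use std::marker::PhantomData;

struct MakeBalanceSketch<S, Req> {
    inner: S,
    _marker: PhantomData<fn(Req)>,
}

// Only `S: Clone` is required; `Req` is never cloned, so it gets no bound.
impl<S: Clone, Req> Clone for MakeBalanceSketch<S, Req> {
    fn clone(&self) -> Self {
        Self {
            inner: self.inner.clone(),
            _marker: PhantomData,
        }
    }
}
```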
David Pedersen
d4865641e7
tower: prepare to release 0.4.10 (#608)
* tower: prepare to release 0.4.10

- Fix accidental breaking change when using the
  `rustdoc::broken_intra_doc_links` lint ([#605])
- Clarify that tower's minimum supported Rust version is 1.46 ([#605])

[#605]: https://github.com/tower-rs/tower/pull/605

* Update tower/CHANGELOG.md

Co-authored-by: Oliver Gould <ver@buoyant.io>

Co-authored-by: Oliver Gould <ver@buoyant.io>
2021-10-19 21:56:23 +02:00
David Pedersen
1c9631d7b3
chore: Bump MSRV to 1.46 (#605)
* Actually check MSRV on CI

* check broken_intra_doc_links on CI

* clean up CI

* fix other crates

* bump MSRV to 1.42 because of tracing-core

* attempt to fix http not working on 1.42

* use `--workspace` instead of `--all`

`--all` is deprecated

* make tower-service build on 1.42

* force http version 0.2.4

* actually msrv is 1.46 because of tokio

* fix running `cargo fmt`

* clean up

* also run tests on 1.46

* ignore rustsec in time
2021-10-19 16:28:55 +02:00
David Pedersen
62e09024aa
tower: prepare to release 0.4.9 (#602)
- Migrate to pin-project-lite ([#595])
- **builder**: Implement `Layer` for `ServiceBuilder` ([#600])
- **builder**: Add `ServiceBuilder::and_then` analogous to
  `ServiceExt::and_then` ([#601])

[#600]: https://github.com/tower-rs/tower/pull/600
[#601]: https://github.com/tower-rs/tower/pull/601
[#595]: https://github.com/tower-rs/tower/pull/595
[pin-project-lite]: https://crates.io/crates/pin-project-lite
2021-10-14 09:12:28 +02:00
David Pedersen
c4cb3b0788
builder: Add ServiceBuilder::and_then (#601)
This one was missing.

It was the only combinator from `ServiceExt` that wasn't on
`ServiceBuilder`, so now they match.
2021-09-04 13:05:46 -07:00
David Pedersen
d91c0f5ba3
builder: Implement Layer for ServiceBuilder (#600) 2021-09-03 18:54:49 +02:00
David Pedersen
3a134ba08a
util: Refactor BoxService (#598) 2021-08-26 06:54:29 +02:00
Michael-J-Ward
ee131aaf46
Migrate to pin project lite (#595)
* REMOVE ME updates peak_ewma test to pass

* adds pin_project_lite dependency

* uses pin_project_lite for load::Constant

* uses pin_project_lite for load::PendingRequestsDiscover

* uses pin_project_lite for load::PeakEwma

* uses pin_project_lite for load::Completion

* uses pin_project_lite for tests::support::IntoStream

Turns IntoStream into a regular struct because pin_project_lite does not and will not support tuple structs.

416be96f77/src/lib.rs (L401-L408)

* refactors opaque_future into a regular struct

This enables migration to pin_project_lite, which does not and will not support tuple structs
416be96f77/src/lib.rs (L401-L408)

* migrates opaque_future to use pin_project_lite

* removes tuple variant from load_shed::ResponseState enum

* migrates load_shed::future to pin_project_lite

* removes tuple variant from filter::future::State

* migrates filter::future to pin_project_lite

Note: the doc comment on AsyncResponseFuture::service was also reduced to a regular comment.

This is a known limitation of pin_project_lite that they have labeled as "help wanted".
https://github.com/taiki-e/pin-project-lite/issues/3#issuecomment-745194112

* migrates retry::Retry to pin_project_lite

* refactors retry::future::State to enable pin_project_lite

pin_project_lite has the current limitation of not supporting doc comments
https://github.com/taiki-e/pin-project-lite/issues/3#issuecomment-745194112

pin_project_lite does not and will not support tuple variants
416be96f77/src/lib.rs (L401-L408)

* migrates retry::future to pin_project_lite

* migrates spawn_ready::make to pin_project_lite

* refactors buffer::future::ResponseState to allow pin_project_lite

* migrates buffer::future to pin_project_lite

* refactors util::AndThenFuture to allow pin_project_lite

* migrates util::AndThenFuture to pin_project_lite

* migrates hedge::Future to pin_project_lite

* migrates hedge::select::ResponseFuture to pin_project_lite

* refactors hedge::delay enum for pin_project_lite

* refactors reconnect::future enum for pin_project_lite

* refactors oneshot::State enum for pin_project_lite

* migrates util::oneshot to pin_project_lite

* migrates reconnect::future to pin_project_lite

* migrates hedge::delay to pin_project_lite

* migrates hedge::latency to pin_project_lite

* migrates discover::list to pin_project_lite

* migrates timeout::future to pin_project_lite

* migrates balance::pool to pin_project_lite

* migrates balance::p2c::make to pin_project_lite

* migrates balance::p2c::service to pin_project_lite

* migrates call_all::ordered to pin_project_lite

* migrates call_all::common to pin_project_lite

* migrates call_all::unordered to pin_project_lite

* migrates util::optional::future to pin_project_lite

* migrates limit::concurrency::future to pin_project_lite

* migrates tower-balance example to pin_project_lite

* applies cargo fmt

* migrates tower-test to pin_project_lite

* fixes cargo hack check

peak_ewma and pending_requests will now properly compile without the "discover" feature enabled.

* fixes lint rename warning on nightly

broken_intra_doc_links has been renamed to rustdoc::broken_intra_doc_links

* migrates buffer::Worker to pin_project_lite

pin_project_lite does support PinnedDrop
https://github.com/taiki-e/pin-project-lite/pull/25/files

However, it does not support generic trait bounds on the PinnedDrop impl.

To workaround this, I removed the T::Error bound from the Worker struct definition,
and moved `close_semaphore` to a new impl without that trait bound.

* fixes abort_on_drop test

This test was also failing on master.

* applies cargo fmt
2021-07-28 13:48:47 -04:00
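A small sketch of the shape this migration converts futures into: pin-project-lite only accepts structs with named fields (no tuple structs or tuple enum variants), which is why several futures above were refactored before the macro swap. The type here is illustrative:

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};

use pin_project_lite::pin_project;

pin_project! {
    struct ResponseFuture<F> {
        #[pin]
        inner: F,
        polled: bool,
    }
}

impl<F: Future> Future for ResponseFuture<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        // `project()` gives pinned access to `inner` and plain mutable access
        // to the unpinned field, without any unsafe code.
        let this = self.project();
        *this.polled = true;
        this.inner.poll(cx)
    }
}
```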
David Pedersen
77760198f1
docs: add "Building a middleware from scratch" guide (#590)
This adds a guide that explains how to implement a middleware from scratch without taking any shortcuts. It walks through implementing `Timeout` as it exists in Tower today.

The hope is that once users have read [the previous guide](https://tokio.rs/blog/2021-05-14-inventing-the-service-trait) followed by this one they should be fully equipped to implement their own middleware.
2021-06-07 11:06:30 +02:00
kazk
31dbc90c45
Add kube to the list of libraries (#592) 2021-06-06 08:54:21 +02:00
David Pedersen
b5d2c8f1d3
tower: prepare to release 0.4.8 (#591) 2021-05-28 22:18:09 +02:00
Jerome Gravel-Niquet
74f9047f30
Allow reusable concurrency limit via GlobalConcurrencyLimit (#574)
* limit: global concurrency limit layer from a owned semaphore

* new_owned -> new_shared + docs improvements

Co-authored-by: David Pedersen <david.pdrsn@gmail.com>

* keep exposing Semaphore, but rename the API a bit and make it simpler to use

* missed a spot

* minor docs fixes

* update changelog

Co-authored-by: David Pedersen <david.pdrsn@gmail.com>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-05-28 16:45:44 +02:00
Lucio Franco
71ece8ea5a
Fix warning in oneshot test (#587)
Co-authored-by: David Pedersen <david.pdrsn@gmail.com>
2021-05-18 16:31:29 -04:00
David Pedersen
ce2f031523
docs: guide moved to tokio.rs (#588) 2021-05-18 14:35:19 +02:00
David Pedersen
31d5a6652a
Fix a couple of typos in the guide (#586) 2021-05-17 19:36:04 +02:00
David Pedersen
5e0c8da260
docs: Add "Inventing the Service trait" guide (#585)
This adds the first Tower guide called "Inventing the `Service` trait". It attempts to motivate all the parts to `Service` by walking the user through how they could have invented `Service` themselves, from scratch. It goes into quite a bit of detail but hopefully it paints a somewhat complete picture in the end.

The next guide I want to write is about how to implement a proper `Timeout` middleware using `Layer`, pin-project, and all the bells and whistles.

Ref: https://github.com/tower-rs/tower/issues/33
2021-05-14 22:59:43 +02:00
David Pedersen
53ec99eb8f
builder: Add ServiceBuilder::map_result (#583)
Noticed that `ServiceBuilder` didn't have `map_result`, only `then`
which is async.
2021-05-07 09:42:31 -07:00
David Pedersen
7b1528815e
service: Clarify subtlety around cloning and readiness (#548)
* service: Clarify subtlety around cloning and readiness

* Fix typo
2021-05-06 09:26:50 +02:00
Eliza Weisman
58544d65d9
tower: prepare to release 0.4.7 (#582)
# 0.4.7 (April 27, 2021)

### Added

- **builder**: Add `ServiceBuilder::check_service` to check the request,
    response, and error types of the output service. ([#576])
- **builder**: Add `ServiceBuilder::check_service_clone` to check the
  output service can be cloned. ([#576])

### Fixed

- **spawn_ready**: Abort spawned background tasks when the `SpawnReady`
service is dropped, fixing a potential task/resource leak ([#581])
- Fixed broken documentation links ([#578])

[#576]: https://github.com/tower-rs/tower/pull/576
[#578]: https://github.com/tower-rs/tower/pull/578
[#581]: https://github.com/tower-rs/tower/pull/581
2021-04-27 12:59:57 -07:00
Oliver Gould
8ceee45fdf
spawn-ready: Abort background tasks when the service is dropped (#581)
The `SpawnReady` service spawns a background task whenever the inner
service is not ready; but when the `SpawnReady` service is dropped, this
task continues to run until it becomes ready (at which point the ready
service will be dropped). This can cause resource leaks when the inner
service never becomes ready.

This change adds a `Drop` implementation for the `SpawnReady` service
that aborts the background task when one is present.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-04-27 10:26:07 -07:00
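A minimal sketch of the drop behavior described above (assumed field layout, not the actual `SpawnReady` type):

```rust
use tokio::task::JoinHandle;

struct SpawnReadySketch {
    // Present only while the inner service is being driven to readiness on a
    // background task.
    background: Option<JoinHandle<()>>,
}

impl Drop for SpawnReadySketch {
    fn drop(&mut self) {
        if let Some(handle) = self.background.take() {
            // Without this, the spawned task would keep running until the
            // inner service became ready, leaking resources if it never does.
            handle.abort();
        }
    }
}
```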
Michał Zabielski
7b4ebc6c3e
chore: add explicit feature flag for tower-balance example (#580) 2021-04-22 16:38:58 -04:00
Folyd
1467321fab
chore: convert rest doc-links to intra-doc links (#578) 2021-04-21 13:02:58 +02:00
David Pedersen
9b4261fe8a
builder: Add type check utility methods (#576) 2021-04-08 23:39:15 +02:00
Eliza Weisman
e1760d385d
tower: prepare to release v0.4.6 (#573)
Deprecated

- **util**: Deprecated `ServiceExt::ready_and` (renamed to
  `ServiceExt::ready`). ([#567])
- **util**: Deprecated `ReadyAnd` future (renamed to `Ready`). ([#567])

Added

- **builder**: Add `ServiceBuilder::layer_fn` to add a layer built from
  a function. ([#560])
- **builder**: Add `ServiceBuilder::map_future` for transforming the
  futures produced by a service. ([#559])
- **builder**: Add `ServiceBuilder::service_fn` for applying `Layer`s to
  an async function using `util::service_fn`. ([#564])
- **util**: Add example for `service_fn`. ([#563])
- **util**: Add `BoxLayer` for creating boxed `Layer` trait objects.
  ([#569])

[#567]: https://github.com/tower-rs/tower/pull/567
[#560]: https://github.com/tower-rs/tower/pull/560
[#559]: https://github.com/tower-rs/tower/pull/559
[#564]: https://github.com/tower-rs/tower/pull/564
[#563]: https://github.com/tower-rs/tower/pull/563
[#569]: https://github.com/tower-rs/tower/pull/569

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-03-01 09:32:29 -08:00
David Pedersen
f2505d2ade
util: Add BoxLayer (#569)
I've run into a use case where I need to return a `Layer` with a complex
type from a function. I previously used `impl Layer<S, Service = impl
Service<...>>` but that impacted compile times quite significantly.
Boxing the `Layer` fixed it. Thought it made sense to upstream.

The `Send + Sync + 'static` bounds on the inner `dyn Layer` were
necessary to get it working with hyper.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-26 15:35:47 -08:00
David Barsky
208160e6bb
chore: add Netlify redirect to tower/` (#572)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-25 17:28:48 -05:00
David Pedersen
7461abc6b5
builder: Add ServiceBuilder::service_fn (#564)
* builder: Add `ServiceBuilder::service_fn`

A small convenience for doing `.service(service_fn(handler_function))`.

* Docs tweaks

* Apply suggestions from code review

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

* Fix

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-25 22:52:37 +01:00
David Pedersen
9948f2561d
builder: Add ServiceBuilder::layer_fn (#560)
This gem was just [sitting in Tonic][sit] waiting to be upstreamed.

[sit]: https://github.com/hyperium/tonic/blob/master/tonic/src/transport/service/layer.rs

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-25 22:33:35 +01:00
David Barsky
b538246296
chore: remove unnecessary mut and imports in tests (#570)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-25 16:19:59 -05:00
David Barsky
db0b2c5885
chore: create doc preview website with Netlify (#571)
* chore: create doc preview website with Netlify

* Remove redirects
2021-02-25 13:10:15 -08:00
David Pedersen
a63e3cf07a
util: Add example for service_fn (#563) 2021-02-25 10:17:37 -08:00
David Barsky
a8884ae150
util: Rename ServiceExt::ready_and to ServiceExt::ready (#567)
This PR renames: 

- `ServiceExt::ready_and` to `ServiceExt::ready`
- the `ReadyAnd` future to `Ready`
- the associated documentation to refer to `ServiceExt::ready`
  and `ReadyAnd`.

This PR deprecates:

- the `ServiceExt::ready_and` method
- the `ReadyAnd` future

These can be removed in Tower 0.5.

My recollection of the original conversation surrounding the
introduction of the `ServiceExt::ready_and` combinator in
https://github.com/tower-rs/tower/pull/427 was that it was meant to be a
temporary workaround for the unchainable `ServiceExt::ready` combinator
until the next breaking release of the Tower crate. The unchainable
`ServiceExt::ready` combinator was removed, but `ServiceExt::ready_and`
was not renamed. I believe, but am not 100% sure, that this was an
oversight.
2021-02-25 09:37:45 -08:00
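A small usage sketch of the renamed combinator (assumes the tokio runtime with the `rt` and `macros` features): `ready()` waits for the service to be ready and hands it back, so `call` can be chained directly.

```rust
use tower::{service_fn, BoxError, Service, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), BoxError> {
    let mut svc = service_fn(|name: String| async move {
        Ok::<_, BoxError>(format!("hello, {name}"))
    });

    // Previously spelled `svc.ready_and().await?`; the chainable form is the
    // whole point of the combinator.
    let greeting = svc.ready().await?.call("tower".to_string()).await?;
    assert_eq!(greeting, "hello, tower");
    Ok(())
}
```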
David Pedersen
04369f3b8f
builder: Add ServiceBuilder::map_future (#559)
* builder: Add `ServiceBuilder::map_future`

I forgot to add this one in #542. I think it's nice to have for
consistency.

* Update tower/src/builder/mod.rs

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-18 21:53:57 +01:00
David Pedersen
ba3d431c04
tower: prepare to release v0.4.5 (#558)
* tower: prepare to release v0.4.5

* Minor docs fix

* fix wordings

* Fix tense

* more wording

* move
2021-02-11 00:12:52 +01:00
Eliza Weisman
ec547b329b
spawn_ready: propagate tracing spans (#557)
Currently, when using `SpawnReady`, the current `tracing` span is not
propagated to the spawned task when the inner service is not ready. This
means that any traces emitted by the inner service's `poll_ready` occur
in their own root span, rather than the span the `SpawnReady` service
was polled in.

This branch fixes this by propagating the current trace span to the
spawned task.

This means that "spawn-ready" now enables the "tracing" feature. In the
future, we may want to consider feature-flagging `tracing` separately
from the middleware implementations that contain tracing instrumentation,
but doing so would break traces if the feature flag isn't enabled. This
doesn't break API compatibility, but it *does* break functionality, so
we may not want to do that until the next breaking change release.

I also added tests for span propagation. I realized the same test could
easily be applied to `Buffer`, which also propagates `tracing` spans, to
guard against regressions, so I also added a test for buffer.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-02-10 15:01:58 -08:00
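A sketch of the propagation technique described above, using the `tracing` crate's `Instrument` combinator; the helper function is illustrative:

```rust
use tracing::Instrument;

// Capture the caller's current span and instrument the spawned future with
// it, so traces emitted from the background task stay attached to that span
// instead of starting a new root span.
fn spawn_in_current_span<F>(fut: F) -> tokio::task::JoinHandle<F::Output>
where
    F: std::future::Future + Send + 'static,
    F::Output: Send + 'static,
{
    tokio::spawn(fut.instrument(tracing::Span::current()))
}
```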
David Pedersen
f90f518f9f
make: Add Shared (#533)
* make: Add `Shared`

Fixes https://github.com/tower-rs/tower/issues/262

`Shared` is a `MakeService` that produces services by cloning an inner
service.

* Fix build with different set of features

* Formatting

* Make `Shared` generic over any target

* Fix tests

* Move `Shared` into its own file

* Add example
2021-02-10 22:28:23 +01:00
kazk
e49700a79d
Add util::option_layer and ServiceBuilder::option_layer (#555)
* Add `util::option_layer` and `ServiceBuilder::option_layer`

Closes #553

* Apply suggestions to improve the docs

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-02-10 22:17:25 +01:00
Eliza Weisman
ccfaffc1f4
buffer, limit: use tokio-util's PollSemaphore (#556)
The [`Semaphore` implementation in `tokio::sync`][1] doesn't expose a
`poll_acquire` method to acquire owned semaphore permits in a
non-async-await context. Currently, the `limit::concurrency` and
`buffer` middleware use our own pollable wrapper for
`tokio::sync::Semaphore`. This works by creating a new
`Semaphore::acquire_owned` future and boxing it every time the semaphore
starts acquiring a new permit.

Recently, the `tokio_util` crate introduced its own [`PollSemaphore`
wrapper type][2]. This provides the same functionality of a pollable
version of `tokio::sync`'s `Semaphore`, just like our semaphore wrapper.
However, `tokio_util`'s version is significantly more efficient: rather
than allocating a new `Box` for _each_ `acquire_owned` future, it uses
the [`ReusableBoxFuture` type][3] to reuse a *single* allocation every
time a new `acquire_owned` future is needed. This means that rather than
allocating *every* time a `Buffer` or `ConcurrencyLimit` service starts
acquiring a new permit, there's a single allocation for each clone of
the service. Unless services are cloned per-request, this means that the
allocation is moved out of the request path in most cases.

I had originally considered an approach similar to this, but I didn't
think the reduced allocations were worth adding a bunch of unsafe code
in `tower` (which presently contains no unsafe code). However, this type
fits in perfectly in `tokio-util`, and now that there's an upstream
implementation, we should use it.

This introduces a dependency on `tokio-util` when the "limit" or "buffer"
features are enabled.

Additionally, I've added a new test for `Buffer` asserting that once an
individual `Buffer` service has been driven to readiness but not called,
additional `poll_ready` calls won't acquire additional buffer capacity.
This reproduces a bug that existed in earlier versions of
`tower::buffer`, which could result in starvation of buffer waiters.
This bug doesn't exist in 0.4, but I wanted to ensure that changing the
buffer internals here didn't introduce any new regressions.

[1]: https://docs.rs/tokio/1.2.0/tokio/sync/struct.Semaphore.html
[2]: https://docs.rs/tokio-util/0.6.3/tokio_util/sync/struct.PollSemaphore.html
[3]: https://docs.rs/tokio-util/0.6.3/src/tokio_util/sync/poll_semaphore.rs.html#13-16

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-02-10 11:52:40 -08:00
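A sketch of the poll-based permit acquisition this buys, assuming `tokio-util`'s 0.6-series `PollSemaphore` API (the `Slot` type is hypothetical, not the actual buffer/limit internals):

```rust
use std::sync::Arc;
use std::task::{Context, Poll};
use tokio::sync::{OwnedSemaphorePermit, Semaphore};
use tokio_util::sync::PollSemaphore;

// `PollSemaphore` reuses one boxed `acquire_owned` future internally instead
// of allocating a fresh one every time a permit is requested.
struct Slot {
    semaphore: PollSemaphore,
    permit: Option<OwnedSemaphorePermit>,
}

impl Slot {
    fn new(capacity: usize) -> Self {
        Self {
            semaphore: PollSemaphore::new(Arc::new(Semaphore::new(capacity))),
            permit: None,
        }
    }

    fn poll_reserve(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        if self.permit.is_none() {
            match self.semaphore.poll_acquire(cx) {
                Poll::Ready(Some(permit)) => self.permit = Some(permit),
                // `None` means the semaphore was closed; this sketch just
                // treats that the same as "not ready".
                Poll::Ready(None) | Poll::Pending => return Poll::Pending,
            }
        }
        Poll::Ready(())
    }
}
```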
David Pedersen
52fab7c446
steer: Implement Clone for Steer (#554) 2021-02-09 15:41:07 -08:00
David Pedersen
0226ef0f4c
Make combinators implement Debug in more cases (#552)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-27 23:47:04 +01:00
David Pedersen
02b6731350
Only pull in tracing when necessary (#551)
Fixes https://github.com/tower-rs/tower/issues/549

Tested with `cargo hack check --each-feature --no-dev-deps`
2021-01-27 14:16:07 -08:00
David Pedersen
d80044f825
util: Add ServiceExt::map_future (#542)
* util: Add `ServiceExt::map_future`

I ran into a thing today where I wanted to write a middleware that wraps
all futures produced by a service in a `tracing::Span`. So something
like `self.inner.call(req).instrument(span)`.

Afaik all the combinators we have today receive the value produced by
the future and not the future itself. At that point it's too late to call
`.instrument`. So I thought it made sense to add a combinator for this.

* Add additional trait bounds to improve UX

* Add docs

* Update changelog

* Clean up debug impl

* Better debug impl for `MapFutureLayer`
2021-01-27 09:39:15 +01:00
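The motivating use case, as a hedged sketch (the `traced` wrapper is hypothetical): `map_future` hands the combinator the future itself, so `.instrument` can be applied before it is polled.

```rust
use tower::{Service, ServiceExt};
use tracing::Instrument;

// Hypothetical wrapper: instrument every future the inner service produces.
// The value-based combinators (`map_response`, `map_err`) only see the
// future's output, which is too late to attach a span.
fn traced<S>(svc: S) -> impl Service<String, Response = S::Response, Error = S::Error>
where
    S: Service<String>,
{
    svc.map_future(|fut: S::Future| fut.instrument(tracing::info_span!("call")))
}
```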
David Pedersen
776f215135
tower-service: prepare to release v0.3.1 (#540)
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-23 12:31:05 +01:00
Eliza Weisman
3d9efce2d3
tower: expand on usage docs, mention compatible libraries (#529)
This branch adds to the "Usage" section in the `tower` crate's README
and lib.rs docs. In particular, I've added a discussion of the various
use-cases for Tower, and a list of crates that support Tower.

Let me know if I'm missing anything!
2021-01-21 09:29:21 -08:00
David Pedersen
886f72a53e
tower: prepare to release v0.4.4 (#539) 2021-01-20 16:21:57 +01:00
David Pedersen
2683ab6231
Add "full" feature that turns on all other features (#532)
* Add "full" feature that turns on all other features

Fixes https://github.com/tower-rs/tower/issues/530

* Don't include "log" feature in "full"

Makes it easier to disable "log" since its already included in the
default feature set.

* Fix changelog
2021-01-20 16:02:04 +01:00
Oliver Gould
f2bb123928
spawn-ready: Avoid oneshot allocations (#538)
This change modifies the `SpawnReady` service to track a
`tokio::task::JoinHandle` instead of creating a `tokio::sync::oneshot`
on each pending poll. This avoids unnecessary allocations.

The `spawn_ready` test is also fixed to avoid using `thread::sleep`,
instead using `tokio::time::pause`.
2021-01-19 10:12:16 -08:00
teor
1cd65c104c
util: Fix some doc comment typos (#537)
* Fix a comment typo in util::and_then

* Fix apostrophes in tower::util doc comments

* Add a missing word in the tower::util module comment
2021-01-19 12:03:37 +01:00
David Pedersen
f62c5b5350
service: Improve example in docs (#510)
* Fix typos in docs

* Make example in `Service` docs runnable

* Apply suggestions from code review

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

* Update tower-service/src/lib.rs

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>

* Update tower-service/src/lib.rs

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>

* Update tower-service/src/lib.rs

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>

* Update tower-service/src/lib.rs

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>

* Consistent casing

* Consistent casing - take 2

* Update changelog

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2021-01-19 10:25:00 +01:00
David Pedersen
487e531c8b limit: Implement Clone for RateLimitLayer 2021-01-18 12:26:55 +01:00
David Pedersen
78a32743bd timeout: Implement Clone for TimeoutLayer 2021-01-18 12:26:38 +01:00
David Pedersen
2af77e811e util: Implement Clone for FilterLayer 2021-01-18 12:26:13 +01:00
David Pedersen
af03a45ced
util: Implement Layer for Either<A, B> (#531)
* util: Implement `Layer` for `Either<A, B>`

* formatting
2021-01-16 10:23:22 +01:00
Eliza Weisman
5ad1757367
tower: prepare to release v0.4.3 (#528)
I also fixed up the changelog entries for v0.4.2 while I was here.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-13 13:19:59 -08:00
Eliza Weisman
359db0ee3f
util: add FutureService::new, with relaxed bounds (#523)
* util: add `FutureService::new`, with relaxed bounds

There are a couple issues with the current implementation of
`FutureService`.

The first, and less important, is a minor usability
issue: there's no `FutureService::new`, just a free function that
returns `FutureService`. While the free function is nice in some cases,
it means that if a user *is* naming the type someplace, they need to
import `tower::util::future_service` *and* `tower::util::FutureService`,
which is slightly annoying. Also, it just kind of violates the common
assumption that most publicly constructable types have a `new`,
requiring a look at the docs.

The second, more significant issue is that the `future_service` function
places a `Service` bound on the future's output type. While this is of
course necessary for the *`Service` impl* on `FutureService`, it's not
required to construct a `FutureService`. Of course, you generally don't
want to construct a `FutureService` that *won't* implement `Service`.
However, the bound also means that additional generic parameters are now
required at the site where the `FutureService` is constructed. In
particular, the caller must now either know the request type, or be
generic over one.

In practice, when other middleware returns or constructs a
`FutureService`, this essentially means that it's necessary to add a
`PhantomData` for the request type parameter. This complicates code, and
perhaps more importantly, increases compile times, especially with
deeply-nested middleware stacks.

As an example of the downside of aggressive bounds at the constructor,
it's worth noting that the implementation of `FutureService` currently
in `tower` is based directly on a similar implementation in
`linkerd2-proxy`. Except for the difference of whether or not the
constructor has a `Service` bound on the future's output, the two
implementations are very similar, almost identical. This gist shows some
of the change necessary to replace our otherwise identical
implementation with the `tower` version that bounds the `Service` type
at construction-time:

https://gist.github.com/hawkw/a6b07f9f4a8bce0c4b61036ed94114db

This PR solves these issues by adding a `FutureService::new` constructor
that does not introduce the `Service` bound. I didn't change the
`future_service` function: I don't *think* removing bounds is a breaking
change, but it is a modification to a publicly exposed function's type
signature, so I'm a little leery about it. Also, I thought that the more
aggressive bounding at construction-time might still be useful in
simpler use-cases where the `FutureService` is not part of a more
complex middleware stack, and that the free fn might be more likely to
be used in those cases anyway.

cc @davidpdrsn

* relax bounds on free fn

Signed-off-by: Eliza Weisman <eliza@buoyant.io>

* Revert "relax bounds on free fn"

This reverts commit 5ee4fd36c3d1849acede223218a1d457306b9247. This
actually *is* breaking --- it would mean removing the `R` type parameter
for the request type on the function. This changes the function
definition, which might break uses of it.
2021-01-13 12:09:18 -08:00
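A small usage sketch of the relaxed constructor, assuming current `tower` with the `util` feature (and a future that resolves to `Result<Service, Error>`, as with the `future_service` free function): no request type has to be named at the construction site.

```rust
use std::convert::Infallible;
use tower::util::FutureService;
use tower::{service_fn, Service, ServiceExt};

async fn example() {
    // A future that resolves to a service; `new` wraps it without any
    // `Service` bound (and therefore without naming the request type) here.
    let fut = Box::pin(async {
        Ok::<_, Infallible>(service_fn(|req: String| async move {
            Ok::<_, Infallible>(req.len())
        }))
    });

    let mut svc = FutureService::new(fut);

    // The `Service` impl first drives the future, then calls the inner service.
    let len = svc.ready().await.unwrap().call("hi".to_owned()).await.unwrap();
    assert_eq!(len, 2);
}
```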
Eliza Weisman
1270d691d2
chore: improve README docs, add links (#527)
This branch updates the READMEs for all Tower crates.

I've added the lib.rs docs to the `tower` crate's README, and added
crates.io, docs.rs, and updated CI badges to all the crates READMEs.
Since we no longer use Azure Pipelines for CI or Gitter for chat, I've
removed those badges and replaced them with GitHub Actions and Discord
badges, respectively.

I also fixed a typo in the `tower` lib.rs docs that was breaking some of the
RustDoc links, since I noticed it after copying those docs into the README.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-13 10:47:42 -08:00
Eliza Weisman
b8927bc32a
util: impl Clone for AndThen, MapRequest and MapErr layers (#525)
The `Layer` types for `AndThen`, `MapRequest`, and `MapErr` middleware
were missing `Clone` impls, which the `MapResponse`, `MapResult`, and
`Then` middleware's layer types *do* have. These layers should all be
`Clone` when the function is `Clone`.

I've added `derive(Clone)` to all these layer types.
2021-01-12 13:23:01 -08:00
Eliza Weisman
aa29693bd1
util: add layer fns to middleware (#524)
This branch adds `AndThen::layer`, `Then::layer`, `MapErr::layer`,
`MapRequest::layer`, `MapResponse::layer`, and `MapResult::layer`
associated functions that simply return each middleware's associated
layer type. This can be more convenient in some cases, since it avoids
having to import both the layer type and the middleware type.

Similar functions already exist for other middleware, such as
`BoxService`.
2021-01-12 12:47:50 -08:00
Eliza Weisman
00377d1f80
chore: install nightly rust before building docs
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-12 11:07:41 -08:00
Eliza Weisman
ec497f7dd8
chore: fix yaml syntax in docs workflow
Whoops, I put the env var in the wrong place, lol

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-12 10:59:36 -08:00
Eliza Weisman
0ad132bdbe
chore: add --all-features and --cfg docsrs to cargo doc CI workflow (#526)
Tower's docs builds are currently failing on `master` because of broken
docs links. These links are broken because they reference modules that
are feature flagged and disabled by default.

In order to build docs successfully, we should build with
`--all-features`. This also means we'll actually build docs for feature
flagged modules --- because we weren't doing that previously, we
actually weren't building most of the docs on CI.

Additionally, I've changed the docs build workflow to build on nightly and
to set `RUSTDOCFLAGS="--cfg docsrs"` so that we can use `doc(cfg)`
attributes when building the Git API docs.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-12 10:42:42 -08:00
Eliza Weisman
b4169c7753
filter: add get_ref, get_mut, and into_inner (#522)
This branch adds methods to access the inner service of a `Filter` or an
`AsyncFilter`. These are identical to the similarly-named method
provided by many other middleware services.

Along with the changes in #521, this is also necessary to implement
filtered versions of foreign traits in downstream code. I probably
should've added this in that PR, but I wasn't thinking it through...
2021-01-11 15:19:43 -08:00
Eliza Weisman
7aecf78ac0
filter: expose predicate check as methods on Filter and AsyncFilter (#521)
## Motivation

In Linkerd 2, we have our own `RequestFilter` middleware. Now that Tower
0.4 has been released, including a new synchronous request filtering
middleware, we'd like to migrate to Tower's filter middleware. However,
there's one hitch: we implement additional traits for our filtering
middleware --- in particular, we use a `NewService` trait, which is
essentially a synchronous version of `MakeService`, for services that
can be constructed immediately. We'd like to be able to implement that
trait for `tower::filter::Filter` services as well.

## Solution

This branch adds new `Filter::check` and `AsyncFilter::check` methods
which expose the inner `Predicate::check` and `AsyncPredicate::check`
methods on the filter service's predicate. This can be used to call into
the predicate directly, allowing external traits to be implemented for
`Filter`/`AsyncFilter` in downstream code. Of course, the additional
traits must be defined in the crate providing implementations for
`tower::filter`, since `Filter` and `AsyncFilter` would be foreign
types, but in our use case, at least, this isn't an issue.
2021-01-11 12:39:46 -08:00
Eliza Weisman
0e818a467e
tower: fix unused macros warning (#520)
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-11 12:10:11 -08:00
Lucio Franco
e988f87824
tower: Prepare 0.4.2 release (#519)
* tower: Prepare `0.4.2` release

* Update tower/CHANGELOG.md

Co-authored-by: Eliza Weisman <eliza@buoyant.io>

* add missing PR link

* Add layer export changes to changelog

* fix changelog

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-11 12:16:32 -05:00
David Pedersen
7d3443e6f0
Re-export layer_fn and LayerFn from tower (#516)
Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
2021-01-11 11:56:26 -05:00
David Pedersen
d25589d3a6
Fix all the docs links (#515)
* Fix docs links

* Add `#![deny(broken_intra_doc_links)]` to all crates

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-11 11:44:50 -05:00
Lucio Franco
b4d3844693
Fix Semaphore to be Sync (#518) 2021-01-11 11:37:29 -05:00
Eliza Weisman
b447771855
tower: prepare to release 0.4.1 (#514)
* tower: prepare to release 0.4.1

This branch updates the tower-layer dependency to 0.3.1 and prepares a
new release of `tower`. This should fix the broken re-exports of
`layer_fn` and get us a successful docs.rs build.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-07 15:26:59 -08:00
Eliza Weisman
44ad621c56
layer: prepare to release v0.3.1 (#513)
* layer: prepare to release v0.3.1

Signed-off-by: Eliza Weisman <eliza@buoyant.io>

* whitespace
2021-01-07 15:05:45 -08:00
Eliza Weisman
ca685ae943
test: update tower-test to v0.4 (#512)
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-07 14:33:55 -08:00
Eliza Weisman
992702fd20
prepare Tower 0.4 for release (#511)
This branch updates the changelogs and version numbers for Tower 0.4.

* update changelog for 0.4
* add changelog blurb
* update version to 0.4

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-07 13:35:05 -08:00
David Pedersen
bebe677074
steer: Add more true to life example for Steer docs (#506)
I think `Steer` is looking pretty useful. I have a use case where I want
some requests (such as `GET /metrics`) to go to one service and other
requests to some other service. I imagine this is quite a common use
case for `Steer` so I figure the example in the docs would be more
helpful if it showed how to accomplish something like that.

In general I think tower's docs could be better at explaining how to
combine all the different pieces to solve real problems. Hopefully this
might help a bit wrt `Steer`.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-06 15:54:50 -08:00
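A sketch of that routing use case with hypothetical handlers (boxed so both branches share one type); the picker closure returns the index of the service to dispatch to, which is assumed to match `Steer`'s constructor shape:

```rust
use std::convert::Infallible;
use tower::util::BoxService;
use tower::{service_fn, steer::Steer, Service};

type Handler = BoxService<String, &'static str, Infallible>;

fn router() -> impl Service<String, Response = &'static str, Error = Infallible> {
    // Hypothetical handlers: one for metrics scrapes, one for everything else.
    let metrics: Handler =
        BoxService::new(service_fn(|_req: String| async { Ok::<_, Infallible>("metrics") }));
    let app: Handler =
        BoxService::new(service_fn(|_req: String| async { Ok::<_, Infallible>("app") }));

    // The picker inspects the request and returns the index of the target service.
    Steer::new(vec![metrics, app], |req: &String, _services: &[Handler]| -> usize {
        if req == "/metrics" {
            0
        } else {
            1
        }
    })
}
```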
Harry Barber
f171390703
util: Add and_then combinator (#485)
## Motivation

https://docs.rs/futures/0.3.8/futures/future/trait.TryFutureExt.html#method.and_then is a useful method on futures. Perhaps it'd be nice to replicate this for the `ServiceExt` API.

Co-authored-by: Harry Barber <harry.barber@disneystreaming.com>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2021-01-06 15:08:05 -08:00
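A minimal usage sketch (hypothetical greeting service, current `tower` with the `util` feature): the closure runs asynchronously on the successful response, mirroring `TryFutureExt::and_then`.

```rust
use std::convert::Infallible;
use tower::{service_fn, Service, ServiceExt};

async fn example() {
    let svc = service_fn(|name: String| async move { Ok::<_, Infallible>(name) });

    // Chain an async post-processing step onto the Ok response.
    let mut svc = svc.and_then(|name: String| async move {
        Ok::<_, Infallible>(format!("hello, {}", name))
    });

    let greeting = svc.ready().await.unwrap().call("tower".to_owned()).await.unwrap();
    assert_eq!(greeting, "hello, tower");
}
```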
Eliza Weisman
3b7c91ed58
make all combinator futures opaque (#509)
* make all combinator futures opaque

For stability reasons, we probably don't want to expose future
combinator types from the `futures_util` crate in public APIs. If we
were to change which combinators are used to implement these futures, or
if `futures_util` made breaking changes, this could cause API breakage.

This branch wraps all publicly exposed `futures_util` types in the
`opaque_future!` macro added in #508, to wrap them in newtypes that hide
their internal details. This way, we can change these futures' internals
whenever we like.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-06 15:06:31 -08:00
Eliza Weisman
2a7d47adda
rewrite tower::filter (#508)
## Motivation

It was pointed out that there is currently some overlap between the
`try_with` `Service` combinator and `tower::filter` middleware (see https://github.com/tower-rs/tower/pull/499#discussion_r549522471 ).
`try_with` synchronously maps from a `Request` ->
`Result<DifferentRequest, Error>`, while `tower::filter`
_asynchronously_ maps from a `&Request` to a `Result<(), Error>`. The
key differences are:

- `try_with` takes a request by value, and allows the predicate to
  return a *different* request value
- `try_with` also permits changing the _type_ of the request
- `try_with` is synchronous, while `tower::filter` is asynchronous
- `tower::filter` has a `Predicate` trait, which can be implemented by
  more than just functions. For example, a struct with a `HashSet`
  could implement `Predicate` by failing requests that match the values
  in the hashset.

It definitely seems like there's demand for both synchronous and
asynchronous request filtering. However, the APIs we have currently
differ pretty significantly. It would be nice to make them more
consistent with each other.

As an aside, `tower::filter` [does not seem all that widely used][1].


Meanwhile, `linkerd2-proxy` defines its own `RequestFilter` middleware,
using a [predicate trait][2] that's essentially in between `tower::filter` and
`ServiceExt::try_with`:

- it's synchronous, like `try_with`
- it allows modifying the type of the request, like `try_with`
- it uses a trait for predicates, rather than a `Fn`, like `tower::filter`
- it uses a similar naming scheme to `tower::filter` ("filtering" rather
  than "with"/"map").

[1]: https://github.com/search?l=&p=1&q=%22tower%3A%3Afilter%22+extension%3Ars&ref=advsearch&type=Code
[2]: 24bee8cbc5/linkerd/stack/src/request_filter.rs (L8-L12)

## Solution

This branch rewrites `tower::filter` to make the following changes:

* Predicates are synchronous by default. A separate `AsyncFilter` type
  and an `AsyncPredicate` trait are available for predicates returning
  futures.
* Predicates may now return a new `Request` type, allowing `Filter` and
  `AsyncFilter` to subsume `try_map_request`.
* Predicates may now return any error type, and errors are now converted
  into `BoxError`s.

Closes #502

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-06 12:18:20 -08:00
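A sketch of what a synchronous predicate looks like under the new design; the exact trait shape is assumed here from the description above (request taken by value, a possibly different request type returned, errors boxed into `BoxError`):

```rust
use tower::filter::Predicate;
use tower::BoxError;

#[derive(Clone)]
struct RequireNonEmpty;

impl Predicate<String> for RequireNonEmpty {
    // The predicate may rewrite the request into a different type; here it
    // just passes the same `String` through.
    type Request = String;

    fn check(&mut self, request: String) -> Result<Self::Request, BoxError> {
        if request.is_empty() {
            Err("empty request rejected".into())
        } else {
            Ok(request)
        }
    }
}

// It would then be wired up with something like `Filter::new(inner, RequireNonEmpty)`.
```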
David Pedersen
b7a7c280ee
util: Remove ServiceExt::ready (#507) 2021-01-05 10:12:36 -08:00
David Pedersen
3bdaae9284
Fix clippy warnings (#505)
Thought I might as well fix the clippy warnings. Feel free to 
close this if you don't want the git churn.
2021-01-04 14:49:05 -08:00
Eliza Weisman
fdd66e5305
docs pass (#490)
This branch makes the following changes:

* New `lib.rs` docs for `tower`, which should hopefully provide a
  better explanation of Tower's core abstractions & their
  relationships
* Nicer docs for `ServiceBuilder`
* Added `#[doc(cfg(...))]` attributes for feature flagged APIs
* Example improvements
* Fixing a bunch of broken intra-rustdoc links

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-04 13:52:23 -08:00
Eliza Weisman
bef0ade3cb
util: Add then combinator (#500)
Currently, `ServiceExt` and `ServiceBuilder` provide combinators for
mapping successful responses to other responses, and mapping errors to
other errors, but don't provide a way to map between `Ok` and `Err`
results.

For completeness, this branch adds a new `then` combinator, which takes
a function from `Result` to `Result` and applies it when the service's
future completes. This can be used for recovering from some errors or
for rejecting some `Ok` responses. It can also be used for behaviors
that should be run when a service's future completes regardless of
whether it completed successfully or not.

Depends on #499
2021-01-04 11:59:24 -08:00
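A short usage sketch (hypothetical flaky service): because `then` sees the whole `Result`, it can recover from errors as well as reject successes.

```rust
use tower::{service_fn, Service, ServiceExt};

async fn example() {
    let flaky = service_fn(|n: u32| async move {
        if n % 2 == 0 { Ok(n) } else { Err("odd input") }
    });

    // `then` receives `Result<u32, &'static str>` and may map it to any
    // Result; here it recovers from the error with a fallback value.
    let mut svc = flaky.then(|result: Result<u32, &'static str>| async move {
        Ok::<_, &'static str>(result.unwrap_or(0))
    });

    assert_eq!(svc.ready().await.unwrap().call(3).await.unwrap(), 0);
}
```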
David Pedersen
d0fde833b1
builder: Make ServiceBuilder::service take self by reference (#504)
* Make `ServiceBuilder::service` take `self` by reference

Fixes #501

* Fix clippy warning

* Update changelog
2021-01-04 11:53:32 -08:00
Eliza Weisman
878b10f57d
util: unify and rename combinators (#499)
Currently, the `ServiceExt` trait has `with` and `try_with` methods for
composing a `Service` with functions that modify the request type, and
`map_err` and `map_ok` methods for composing a service with functions
that modify the response type. Meanwhile, `ServiceBuilder` has
`map_request` for composing a service with a request mapping function,
and `map_response` for composing a service with a response-mapping
function. These combinators are very similar in purpose, but are
implemented using different middleware types that essentially duplicate
the same behavior.

Before releasing 0.4, we should probably unify these APIs. In
particular, it would be good to de-duplicate the middleware service
types, and to unify the naming.

This commit makes the following changes:
- Rename the `ServiceExt::with` and `ServiceExt::try_with` combinators
  to `map_request` and `try_map_request`
  - Rename the `ServiceExt::map_ok` combinator to `map_response`
- Unify the `ServiceBuilder::map_request` and `ServiceExt::map_request`
  combinators to use the same `Service` type
- Unify the `ServiceBuilder::map_response` and
  `ServiceExt::map_response` combinators to use the same `Service` type
- Unify the `ServiceBuilder::map_err` and `ServiceExt::map_err`
  combinators to use the same `Service` type
- Only take `FnOnce + Clone` in response/err combinators, which
  require cloning into the future type. `MapRequest` and `TryMapRequest`
  now take `FnMut(Request)`s and don't clone them every time they're
  called
- Reexport future types for combinators where it makes sense.
- Add a `try_map_request` method to `ServiceBuilder`

Closes #498

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2021-01-04 10:27:05 -08:00
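A sketch of the unified surface, assuming hypothetical request/response types: the same `map_request`/`map_response` names are now available from both `ServiceBuilder` and `ServiceExt`, backed by one middleware type each.

```rust
use std::convert::Infallible;
use tower::{service_fn, Service, ServiceBuilder, ServiceExt};

// Builder form: layers added first sit outermost, so requests flow
// map_request -> map_response -> service_fn.
fn via_builder() -> impl Service<&'static str, Response = usize, Error = Infallible> {
    ServiceBuilder::new()
        .map_request(|req: &str| req.trim().to_owned())
        .map_response(|resp: String| resp.len())
        .service_fn(|req: String| async move { Ok::<_, Infallible>(req) })
}

// Combinator form: each call wraps the previous service, ending with the
// request mapping on the outside.
fn via_ext() -> impl Service<&'static str, Response = usize, Error = Infallible> {
    service_fn(|req: String| async move { Ok::<_, Infallible>(req) })
        .map_response(|resp: String| resp.len())
        .map_request(|req: &str| req.trim().to_owned())
}
```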
David Pedersen
0100800dda
Add MakeService::into_service and MakeService::as_service (#492)
Resolves #253

Adds `MakeService::into_service` and `MakeService::as_service` which
converts `MakeService`s into `Service`s. `into_service` consumes `self`
and `as_service` borrows `self` mutably.
2020-12-29 15:45:43 -08:00
Oliver Gould
6be2ff68dd
util: Add BoxService layer helpers (#503)
This change adds `BoxService::layer` and `UnsyncBoxService::layer`
helpers to provide a convenient way to box services in layered stacks.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2020-12-29 15:18:22 -08:00
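A sketch of the convenience this adds, assuming the `timeout` and `util` features: finishing a `ServiceBuilder` stack with `BoxService::layer()` erases the concrete middleware type.

```rust
use std::time::Duration;
use tower::util::BoxService;
use tower::{BoxError, ServiceBuilder};

fn build() -> BoxService<String, String, BoxError> {
    ServiceBuilder::new()
        // Outermost layer: box whatever stack gets built beneath it.
        .layer(BoxService::layer())
        .timeout(Duration::from_secs(5))
        .service_fn(|req: String| async move { Ok::<_, BoxError>(req) })
}
```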
David Pedersen
0ede7c335e
util: Add FutureService (#495)
Resolves #264 

This branch adds `FutureService` to `tower::util`, which implements
`Service` for `Future<Output = impl Service>`. Before the future has
completed, `poll_ready` on `FutureService` will drive it to completion,
returning `Ready` only once the future has completed. Subsequent calls
will poll the service returned by the future.

See https://github.com/tower-rs/tower/issues/264#issuecomment-751400523

Based on
1be301f2b0/linkerd/stack/src/future_service.rs

Co-authored-by: Daiki Mizukami <tesaguriguma@gmail.com>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2020-12-29 10:44:12 -08:00
David Pedersen
f76fe9f38f
util: add layer_fn (#491)
Resolves #267

I went with `layer_fn` over `layer::from_fn` because changing
`service_fn` would be a breaking change. But I don't mind changing it if
you think that's more appropriate 😊

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
2020-12-29 10:33:31 -08:00
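A small sketch of `layer_fn`: a plain closure that wraps an inner service becomes a `Layer`, here reusing the existing `Timeout` middleware.

```rust
use std::time::Duration;
use tower::layer::{layer_fn, Layer};
use tower::timeout::Timeout;

// The closure is the whole layer: it receives the inner service and returns
// the wrapped one.
fn timeout_layer<S>() -> impl Layer<S, Service = Timeout<S>> {
    layer_fn(|inner: S| Timeout::new(inner, Duration::from_secs(5)))
}
```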
George Hahn
704bfa819e
layer: fix+improve doc links (#487)
- Fixes a link to the `Service` trait on the `tower_layer` module page.
  ([current page](https://docs.rs/tower-layer/0.3.0/tower_layer/index.html))
- Adds a link to `Service` on the `Layer` trait page.
  ([current page](https://docs.rs/tower-layer/0.3.0/tower_layer/trait.Layer.html))

These changes use old style links - intra-rustdoc links won't work
here because `tower`/`tower_service` aren't referenced by this crate.
2020-12-28 15:07:01 -08:00
Oliver Gould
ff85e3ade8
buffer: Add Clone + Copy impls for BufferLayer (#493)
It's generally the case that layers need to be shareable. There's no
reason that `BufferLayer` should not implement both `Clone` and `Copy`.

This change adds manual `Clone` and `Copy` implementations for
`BufferLayer`.
2020-12-28 13:09:05 -08:00
Oliver Gould
4f65e3017b
balance: Remove MakeBalance::from_rng (#497)
`MakeBalance`'s use of `SmallRng` is problematic: since it clones the
`SmallRng` between `Balance` instances, each instance will have the same
state and produce an identical sequence of values. This probably isn't
_dangerous_, though it is certainly unexpected.

This change removes the `MakeBalance::from_rng` and
`MakeBalanceLayer::from_rng` helpers. The `MakeBalance` service now uses
the default RNG via `Balance::new`. `Balance::new` now creates its
`SmallRng` from the `thread_rng` instead of the default entropy source,
as the default entropy source may use the slower `getrandom`. From the
[`rand` docs][from_entropy]:

> In case the overhead of using getrandom to seed many PRNGs is an
> issue, one may prefer to seed from a local PRNG, e.g.
> from_rng(thread_rng()).unwrap().

Finally, this change updates the balancer to the most recent version of
`rand`, v0.8.0.

[from_entropy]: https://docs.rs/rand/0.8.0/rand/trait.SeedableRng.html#method.from_entropy
2020-12-28 12:41:35 -08:00
Eliza Weisman
45974d018d
update to Tokio 1.0 (#489)
This branch updates Tower to depend on Tokio v1.0. In particular, the
following changes were necessary:

* `tokio::sync::Semaphore` now has a `close` operation, so permit
  acquisition is fallible. Our uses of the semaphore are updated to
  handle this. Also, this allows removing the janky homemade
  implementation of closing semaphores by adding a big pile of
  permits!

* `tokio::sync`'s channels are no longer `Stream`s. This necessitated a
  few changes:
  - Replacing a few explicit `poll_next` calls with `poll_recv`
  - Updating some tests that used `mpsc::Receiver` as a `Stream` to add
    a wrapper type that makes it a `Stream`
  - Updating `CallAll`'s examples (I changed it to just use a
    `futures::channel` MPSC)

* `tokio::time::Sleep` is no longer `Unpin`. Therefore, the rate-limit
  `Service` needs to `Box::pin` it. To avoid the overhead of
  allocating/deallocating `Box`es every time the rate limit is
  exhausted, I moved the `Sleep` out of the `State` enum and onto the
  `Service` struct, and changed the code to `reset` it every time the
  service is rate-limited. This way, we only allocate the box once when
  the service is created.

There should be no actual changes in functionality.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2020-12-23 11:59:54 -08:00
Eliza Weisman
124816a40e
ci: install cargo hack from GitHub release binary (#486)
CI is currently busted due to [issues with caching `cargo-hack`][1].
Currently, we cache the `cargo-hack` executable to speed up builds by
avoiding the overhead of compiling it from source in every build.

Recently, `cargo-hack` has started publishing binaries on GitHub
Releases. Rather than compiling it on CI and caching it, we can just
download the binary instead. This ought to fix the build.

See also taiki-e/cargo-hack#89 and taiki-e/cargo-hack#91.

[1]: https://github.com/tower-rs/tower/runs/1425940763
2020-12-23 11:49:46 -08:00
Alice Ryhl
3a8d31c60f
fix typo (#482) 2020-11-27 14:49:38 -05:00
Henry de Valence
d4d1c67c6a
hedge: use auto-resizing histograms (#484)
The previous code used a fixed-size histogram with an upper bound of 10_000 ms
(10s).  This meant that the `Hedge` middleware would display errors when used
with services that take longer than 10s to complete a response.  Instead, use a
constructor that produces an auto-resizing histogram.  In the future, if the
auto-resizing behavior is an issue, Tower could add a second constructor for
the Hedge middleware that allows specifying bounds, but for now this change is
transparent and avoids spurious errors.
2020-11-19 10:04:49 -08:00
Harry Barber
5e1e077448 Add clones to combinators 2020-11-05 15:18:43 -08:00
Eliza Weisman
450fa3d2be
spawn_ready: put back poll_closed (#481)
Tokio put this method back in 0.3.2, so we can use it again.

Closes #478
2020-10-28 15:15:56 -04:00
Eliza Weisman
43c44922af
buffer: wake tasks waiting for channel capacity when terminating (#480) 2020-10-28 11:41:11 -04:00
Eliza Weisman
069c9085b1
tests: print traces from tests (#479) 2020-10-28 11:40:27 -04:00
Eliza Weisman
ddc64e8d4d
update to Tokio 0.3 (#476)
This branch updates Tower to Tokio 0.3.

Unlike  #474, this branch uses Tokio 0.3's synchronization primitives,
rather than continuing to depend on Tokio 0.2. I think that we ought to
try to use Tokio 0.3's channels whenever feasible, because the 0.2
channels have pathological memory usage patterns in some cases (see
tokio-rs/tokio#2637). @LucioFranco let me know what you think of the
approach used here and we can compare notes!

For the most part, this was a pretty mechanical change: updating
versions in Cargo.toml, tracking feature flag changes, renaming
`tokio::time::delay` to `sleep`, and so on. Tokio's channel receivers
also lost their `poll_recv` methods, but we can easily replicate that by
enabling the `"stream"` feature and using `poll_next` instead.

The one actually significant change is that `tokio::sync::mpsc::Sender`
lost its `poll_ready` method, which impacts the way `tower::buffer` is
implemented. When the buffer's channel is full, we want to exert
backpressure in `poll_ready`, so that callers such as load balancers
could choose to call another service rather than waiting for buffer
capacity. Previously, we did this by calling `poll_ready` on the
underlying channel sender.

Unfortunately, this can't be done easily using Tokio 0.3's bounded MPSC
channel, because it no longer exposes a polling-based interface, only an
`async fn ready`, which borrows the sender. Therefore, we implement our
own bounded MPSC on top of the unbounded channel, using a semaphore to
limit how many items are in the channel.

I factored out the code for polling a semaphore acquire future from
`limit::concurrency` into its own module, and reused it in `Buffer`.

Additionally, the buffer tests needed to be updated, because they
currently don't actually poll the buffer service before calling it. This
violates the `Service` contract, and the new code actually fails as a
result.

Closes #473 
Closes #474

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2020-10-27 11:21:18 -07:00
Henry de Valence
1a84543317
hedge: don't reserve slots for hedged requests (#472) 2020-10-02 10:02:31 -04:00
Oliver Gould
ad348d8ee5
ready-cache: Properly expose error source (#467)
The `ready_cache::error::Failed` type does not actually expose its inner
error type via `Error::source`. This change fixes this and improves
debug formatting.

This change adds a new constraint, ensuring that keys impl Debug in
order for the `error::Failed` type to impl Debug/Error.
2020-08-31 07:21:04 -07:00
Harry Barber
ab7518ef13
Add Future-like combinators (#396) 2020-07-17 16:33:00 -04:00
Bruce Guenter
4316894422
Add get_ref functions to more service layers (#463)
* Implement get_ref, get_mut, and into_inner for Retry

* Implement get_ref, get_mut, and into_inner for Timeout
2020-07-10 10:47:39 -04:00
Taiki Endo
b12a3e3ae9
Remove uses of pin_project::project attribute (#458)
pin-project will deprecate the project attribute due to some unfixable
limitations.

Refs: https://github.com/taiki-e/pin-project/issues/225
2020-06-15 12:38:34 -04:00
Lucio Franco
007b648ea9
Clean up readme and update status (#453) 2020-05-08 13:54:45 -04:00
Bruce Guenter
98e0e41db1
Rework ConcurrencyLimit to use upstream tokio Semaphore (#451) 2020-05-06 11:06:40 -04:00
Lucio Franco
a0a66b10a2
Upgrade cargo deny action (#452) 2020-05-06 09:45:57 -04:00
Jon Gjengset
1c2d50680a
Spring cleaning for tower::balance (#449)
Noteworthy changes:

 - All constructors now follow the same pattern: `new` uses OS entropy,
   `from_rng` takes a `R: Rng` and seeds the randomness from there.
   `from_rng` is fallible, since randomness generators can be fallible.
 - `BalanceLayer` was renamed to `MakeBalanceLayer`, since it is not
   _really_ a `BalanceLayer`. The name of `BalanceMake` was also
   "normalized" to `MakeBalance`.

Another observation: the `Debug` bound on `Load::Metric` in
`p2c::Balance`, while not particularly onerous, generates really
confusing errors if you forget to include it. And crucially, the error
never points at `Debug` (should we file a compiler issue?), so I pretty
much had to guess my way to that being wrong in the doc example.

It would probably be useful to add a documentation example to
`MakeBalanceLayer` or `MakeBalance` (I suspect just one of them is fine,
since they're basically the same). Since I've never used it, and find it
hard to think of uses for it, it might be good if someone with more
experience with it wrote one.
2020-04-24 13:21:11 -04:00
Jon Gjengset
6a25d322b5 Use only one alias for Box<dyn Error>
This was a mostly mechanical change. I think in at least one place it
results in a `'static` bound being added, but the next tower release
will be breaking anyway, so that's okay.

I think it helps to also document the alias at the top to (eventually)
explain how people can interact with the error they get back to discover
the "deeper cause".
2020-04-24 10:30:20 -04:00
Eliza Weisman
8752a38117
util: fix oneshot dropping pending services immediately (#447)
## Motivation

Commit #330 introduced a regression when porting `tower-util::Oneshot`
from `futures` 0.1 to `std::future`. The *intended* behavior is that a
oneshot future should repeatedly call `poll_ready` on the oneshotted
service until it is ready, and then call the service and drive the
returned future. However, #330 inadvertently changed the oneshot future
to poll the service _once_, call it if it is ready, and then drop it,
regardless of its readiness.

In the #330 version of oneshot, an `Option` is used to store the
request while waiting for the service to become ready, so that it can be
`take`n and moved into the service's `call`. However, the `Option`
contains both the request _and_ the service itself, and is taken the
first time the service is polled. `futures::ready!` is then used when
polling the service, so the method returns immediately if it is not ready.
This means that the service itself (and the request), which were taken
out of the `Option`, will be dropped, and if the oneshot future is
polled again, it will panic.

## Solution

This commit changes the `Oneshot` future so that only the request lives
in the `Option`, and it is only taken when the service is called, rather
than every time it is polled. This fixes the bug.

I've also added a test for this which fails against master, but passes
after this change.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
2020-04-23 16:07:48 -07:00
Steven Fackler
82e578b5b0
Impl Layer for &Layer (#446) 2020-04-21 17:11:27 -04:00
Jon Gjengset
39112cb0ba
Tidy up tower::load (#445)
This also renames the `Instrument` trait, and related types, to better
reflect what they do. Specifically, the trait is now called
`TrackCompletion`, and `NoInstrument` is called `CompleteOnResponse`.

Also brings back balance example and makes it compile.
2020-04-20 14:55:40 -04:00
Jon Gjengset
05b165056b
Tidy up tower::buffer (#444) 2020-04-17 17:41:51 -04:00
Jon Gjengset
c87fdd9c1e
Change Discover to be a sealed trait (#443)
* Change Discover to be a sealed trait

`Discover` was _really_ just a `TryStream<Item = Change>`, so this
change makes that much clearer. Specifically, users are intended to use
`Discover` only in bounds, whereas implementors should implement
`Stream` with the appropriate `Item` type. `Discover` then comes with a
blanket implementation for anything that implements `TryStream`
appropriately. This obviates the need for the `discover::stream` module.
2020-04-17 16:27:44 -04:00
Jon Gjengset
5947e2e145
Some more spring clean fixes. (#442)
* Add doc feature annotations

* Modules should be published or removed
2020-04-17 16:03:15 -04:00
Lucio Franco
85b657bf93
Remove path deps for tower-service (#441) 2020-04-17 14:00:38 -04:00
Lucio Franco
5e1788f494
rate: Fix rate limit not resetting (#439) 2020-04-16 11:31:58 -04:00
Lucio Franco
cd7dd12315
Refactor github actions (#436)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
2020-04-14 19:20:20 -04:00
Lucio Franco
8a73440c1a
reconnect: Rework to allow real reconnecting (#437)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
Co-authored-by: Jon Gjengset <jon@thesquareplanet.com>
2020-04-14 16:42:37 -04:00
Lucio Franco
d34019045f
Add Map service combinator (#435)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
Co-authored-by: David Barsky <dbarsky@amazon.com>
2020-04-14 15:16:16 -04:00
Akshay Narayan
0520a6a467
New sub-crate: tower-steer (#426) 2020-03-31 21:26:13 -04:00
Jon Gjengset
81cfbab19e
Merge pull request #432 from tower-rs/2020-spring-clean
2020: merge all the middleware
2020-03-31 16:55:48 -04:00
Jon Gjengset
9dd2314048
step 4: make features do the right thing 2020-03-31 16:26:53 -04:00
Jon Gjengset
2e06782241
step 3: make ci work again 2020-03-31 16:26:52 -04:00
Jon Gjengset
c4d70b535b
step 2: make all the tests work again 2020-03-31 16:12:32 -04:00
Jon Gjengset
8df2a3e410
step 1: move all things to where they're going
Note that this also moves all crates from `log` to `tracing`.
It also does not set any dependencies as optional.
2020-03-31 13:31:21 -04:00
Jon Gjengset
0f9eb648a5
limit: prepare 0.3.1 release (#430) 2020-03-25 19:51:59 -04:00
Jon Gjengset
378433fc75
limit: Forward tower_load::Load (#429) 2020-03-25 19:46:05 -04:00
Jon Gjengset
b575175210
util: prepare 0.3.1 release (#428) 2020-03-23 13:02:43 -04:00
Jon Gjengset
52fde9767c
util: Add ReadyAnd to do what Ready should do (#427)
* util: Add ReadyAnd to do what Ready should do

`ServiceExt::ready` says that it produces "A future yielding the service
when it is ready to accept a request." This is not true; it does _not_
yield the service when it is ready, it yields unit. This makes it
impossible to chain service ready with service call, which is sad.

This PR adds `ready_and`, which does what `ready` promised. It also
deprecates `ready` with the intention that we remove `ready` in a future
version, and make the strictly more general `ready_and` take its place.
We can't do it now since it's not a backwards-compatible change even
though it _probably_ wouldn't break any code.

The PR also updates the docs so that they reflect the observed behavior.
2020-03-23 12:49:44 -04:00
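A minimal sketch of what the change enables (current `tower`, where `ready_and` has since been renamed back to `ready`): readiness and the call can be chained in one expression because the ready future yields the service.

```rust
use std::convert::Infallible;
use tower::{service_fn, Service, ServiceExt};

async fn call_once() -> Result<usize, Infallible> {
    let mut svc = service_fn(|req: String| async move { Ok::<_, Infallible>(req.len()) });

    // `ready_and` yields `&mut` the service once it's ready, so the call can
    // be chained directly. (Later releases renamed this back to `ready`.)
    svc.ready_and().await?.call("hello".to_owned()).await
}
```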
Jon Gjengset
b6f5f586c5
Add Buffer::new note on how to set bound (#425) 2020-03-04 15:48:33 -05:00
Jake Ham
52d9e95a38
Fix documentation links in README (#422)
Updated the README, fixing the links to documentation. This now links
to each packages documentation on docs.rs. Not all packages have been
released to crates.io, so their documentation pages are empty.
2020-02-27 11:42:08 -05:00
Jon Gjengset
ba1fdd755b ready-cache: Prepare for 0.3.1 release
This also fixes up the various documentation URLs, which were still
pointing to 0.1.x.
2020-02-24 13:14:23 -05:00
Jon Gjengset
414e3b0809
ready-cache: Avoid panic on strange race (#420)
It's been observed that occasionally tower-ready-cache would panic
trying to find an already canceled service in `cancel_pending_txs`
(#415). The source of the race is not entirely clear, but extensive
debugging demonstrated that occasionally a call to `evict` would send on
the `CancelTx` for a service, yet that service would be yielded back
from `pending` in `poll_pending` in a non-`Canceled` state. This
is equivalent to saying that this code may panic:

```rust
async {
  let (tx, rx) = oneshot::channel();
  tx.send(42).unwrap();
  yield_once().await;
  rx.try_recv().unwrap(); // <- may occasionally panic
}
```

I have not been able to demonstrate a self-contained example failing in
this way, but it's the only explanation I have found for the observed
bug. Pinning the entire runtime to one core still produced the bug,
which indicates that it is not a memory ordering issue. Replacing
oneshot with `mpsc::channel(1)` still produced the bug, which indicates
that the bug is not with the implementation of `oneshot`. Logs also
indicate that the `ChannelTx` we send on in `evict()` truly is the same
one associated with the `ChannelRx` polled in `Pending::poll`, so we're
not getting our wires crossed somewhere. It truly is bizarre.

This patch resolves the issue by considering a failure to find a
ready/errored service's `CancelTx` as another signal that a service has
been removed. Specifically, if `poll_pending` finds a service that
returns `Ok` or `Err`, but does _not_ find its `CancelTx`, then it
assumes that it must be because the service _was_ canceled, but did not
observe that cancellation signal.

As an explanation, this isn't entirely satisfactory, since we do not
fully understand the underlying problem. It _may_ be that a canceled
service could remain in the pending state for a very long time if it
does not become ready _and_ does not see the cancellation signal (so it
returns `Poll::Pending` and is not removed). That, in turn, might cause
an issue if the driver of the `ReadyCache` then chooses to re-use a key
they believe they have evicted. However, any such case _must_ first hit
the panic that exists in the code today, so this is still an improvement
over the status quo.

Fixes #415.
2020-02-24 13:03:43 -05:00
Jon Gjengset
be156e733d ready-cache: restore assert for dropped cancel tx
When ready-cache was upgraded from futures 0.1 to `std::future` in
e2f1a49cf3bb29ec7a72feee2f31d9b8ab39d32a, this `expect` was removed, and
the code instead silently ignores the error. That's probably not what we
want, so this patch restores that assertion.
2020-02-20 17:08:07 -05:00
Jon Gjengset
1a67100aab Restore commented-out p2c assertion 2020-02-20 16:33:54 -05:00
Jon Gjengset
ae34c9b4a1 Add more tower-ready-cache tests 2020-02-20 16:33:54 -05:00
Jon Gjengset
96529148d8 Remove irrelevant comment
The assertion there isn't even true anyway, since the p2c may not yet
have "seen" the removal of a service, because it stopped when it found a
ready service.
2020-02-20 16:01:19 -05:00
Jon Gjengset
650e5be58e balance: Add a stress test for p2c
The hope for this was to reproduce #415 (which it does not sadly), but
at least it adds a test for p2c!
2020-02-20 16:01:19 -05:00
Jon Gjengset
47c3a14560 tower: Prepare 0.3.1 release 2020-01-17 22:53:08 -05:00
Jon Gjengset
ccfe7da592 tower: Allow opting out of tracing/log
This is of particular importance since the `log` feature of `tracing`
(currently) fails to compile if the `tracing` dependency is renamed.
Without a way to disable it in `tower`, any package that both depends on
`tower` **and** renames `tracing` in its dependencies is doomed.
2020-01-17 17:01:43 -05:00
Lucio Franco
7e35b758be
Remove azure and rename gh actions (#409)
* Remove azure

* Rename actions

* Rename workflow

* Reduce amount of actions

* Fix patch
2020-01-09 19:23:03 -05:00
László Nagy
40103d84ce Use GitHub actions (#407)
* gh-403: add basic github actions

* gh-403: add environment variables during test

* gh-403: fix error in tower-balance example

* gh-403: rename build workflow

* gh-403: fix release workflow

* gh-403: add GitHub page publish workflow

* gh-403: remove release workflow

* gh-403: run per crate build

* gh-403: replace build to check
2020-01-09 19:02:40 -05:00
Lucio Franco
7b48479bd2
util: Remove dev dep on tower (#401)
* util: Remove dev dep on tower

* Fix rustc bug

* enable call-all by default
2019-12-19 18:17:21 -05:00
Lucio Franco
d63665515c
ready-cache: Add readme (#402) 2019-12-19 17:56:43 -05:00
Lucio Franco
fe7919b1a4
Use Into<Error> for all Services (#400) 2019-12-19 17:30:23 -05:00
Lucio Franco
86eef82d2f
Remove default features for futures dep (#399)
* Remove default features for futures dep

* Add missing alloc feature
2019-12-19 14:20:41 -05:00
Lucio Franco
1e87d7ca8b
Bump crates and changelog dates (#397) 2019-12-19 13:44:40 -05:00
Lucio Franco
2fede40bdb
balance: Upgrade rand to 0.7 (#398) 2019-12-19 13:44:07 -05:00
Sean McArthur
2dc9a72bea tower-util: remove dead code 2019-12-11 13:13:07 -08:00
Sean McArthur
1863304331 move ServiceExt to tower-util crate 2019-12-11 12:13:51 -08:00
Lucio Franco
2e9e2d1813
limit: Vendor tokio::sync::Semaphore (#388) 2019-12-11 15:08:42 -05:00
Lucio Franco
fd2d034e97
ci: Re-enable ci (#389)
* ci: Re-enable ci

* ci: Re-enable ci

* Actually use stable
2019-12-11 15:01:02 -05:00
Sean McArthur
f6650b90c7 re-enable CI for tower-layer and tower-util 2019-12-11 11:25:13 -08:00
Sean McArthur
f130e5e113 tower-util: reduce dependencies, make call-all optional 2019-12-11 11:25:13 -08:00
Juan Alvarez
1843416dfe remove service, make and layer path deps (#382) 2019-12-06 11:59:56 -05:00
Lucio Franco
423ecee7e9
Remove unused deps (#381) 2019-12-05 23:42:01 -05:00
Lucio Franco
fdc7460f5a
Add rt-core feature to buffer (#380) 2019-12-05 20:17:36 -05:00
Lucio Franco
e2f1a49cf3
Update the rest of the crates and upgrade ready cache to `std::f… (#379)
* Update hedge, filter, load, load-shed, and more

* Update ready cache

* Prepare release for ready-cache

* fix merge

* Update balance

* Prepare balance release
2019-12-05 14:21:47 -05:00
Lucio Franco
0d2a3778ad
Update tower and tower-util and prep for release (#378)
* Update tower and tower-util

* Prepare them for release

* fmt

* Get tower tests working
2019-12-04 22:48:43 -05:00
Lucio Franco
54dd475ec0
Update buffer and prepare for release (#377)
* Update buffer and prepare for release

* Update tower-buffer/src/service.rs

Co-Authored-By: Eliza Weisman <eliza@buoyant.io>

* fmt
2019-12-04 20:31:27 -05:00
Lucio Franco
15c58e8842
Update retry and prepare for release (#376)
* Update retry and prepare for release

* fmt
2019-12-04 19:36:46 -05:00
Lucio Franco
877c194b1b
Update tower-limit and prepare for release (#375)
* wip

* Refactor limit tests and prep for release
2019-12-04 09:53:52 -05:00
Lucio Franco
ec6215fb2f
Update timeout, tower-test and reconnect (#373)
* Update timeout and prepare 0.3

* Update tower-test and prepare release

* Update lib doc path

* Update reconnect and prepare for release
2019-12-02 19:14:15 -05:00
David Barsky
45e311c2f2 layer: Prepare 0.3.0 Release (#372)
* layer: prepare 0.3.0 release

* fmt

* Update tower-layer/src/lib.rs
2019-11-29 16:09:47 -05:00
Lucio Franco
b6c67182cb
make: Prepare 0.3 release and update docs (#370)
* make: Prepare 0.3 release and update docs

* rebase against origin/master + get doc tests to compile

* fmt

* fix build
2019-11-29 15:44:03 -05:00
Lucio Franco
c3c6780d31
service: Update docs and prepare for 0.3 release (#369)
* service: Update docs and prepare for 0.3 release

* Update rustfmt

* Disable main tower crate
2019-11-29 11:48:08 -05:00
Lucio Franco
a4cb384751
Remove v0.3.x branch note on readme (#368) 2019-11-29 11:19:15 -05:00
Lucio Franco
bb5c02ca58
Disable all crates except tower-service 2019-11-29 09:23:54 -05:00
Lucio Franco
a62fe875c4
Disable tower-balance from ci 2019-11-29 09:15:10 -05:00
David Barsky
a4c02f5d9c Revert "get building"
186a0fb4a326a1f056565122cba0f5c87b4e6889
2019-11-28 15:21:27 -05:00
David Barsky
186a0fb4a3 get building 2019-11-28 15:15:41 -05:00
Lucio Franco
51a374c564 Fix up last few merge issues 2019-11-26 10:32:49 -05:00
Lucio Franco
87ad2e1cc8 Merge remote-tracking branch 'origin/master' into v0.3.x 2019-11-26 10:32:02 -05:00
Oliver Gould
7e55b7fa0b
Introduce tower-ready-cache (#303)
In #293, `balance` was refactored to manage dispatching requests over a
set of equivalent inner services that may or may not be ready.

This change extracts the core logic of managing a cache of ready
services into a dedicated crate, leaving the balance crate to deal with
node selection.
2019-11-12 09:44:16 -08:00
Oliver Gould
2d24d84e7c
Cleanup unused dependencies (#364)
I've run `cargo udeps` to discover some unused/misplaced dependencies.
2019-11-11 09:52:33 -08:00
Oliver Gould
4a4593d522
balance: Update rand to 0.7 (#363) 2019-11-09 14:30:44 -08:00
Pen Tree
52dbdda23d Expect the poll_acquire error, not return (#362)
* Expect the poll_acquire error, not return

* Remove Error in tower-limit
2019-10-31 14:06:04 -04:00
Pen Tree
fac5c361a4 Fix tower-service docs (#361) 2019-10-18 17:18:55 -04:00
Lucio Franco
e414b2b7d3
Prepare buffer 0.1.2 release (#360) 2019-10-11 11:39:34 -04:00
Lucio Franco
30f11bfaa2
Prepare limit 0.1.1 release (#359) 2019-10-11 11:22:14 -04:00
Lucio Franco
abe5b78542
Remove tokio alpha.6 patches (#357)
* Remove tokio alpha.6 patches

* Remove ci patch
2019-09-30 21:15:26 -04:00
Lucio Franco
3bff86e28e
make: Add alpha.2a changelog 2019-09-30 20:53:39 -04:00
Lucio Franco
7fa1054892
make: Bump version to alpha.2a (#356) 2019-09-30 20:40:28 -04:00
Jon Gjengset
2653f70884 Bumps for 0.3.0-alpha.2 (#355)
* Bump all to futures-* alpha.19

* Prepare for alpha.2 release

* Make tower-service also a path dep

* Use new tokio alpha
2019-09-30 18:56:26 -04:00
Taiki Endo
03dc7069aa Update pin-project to 0.4 (#350) 2019-09-30 14:58:27 -04:00
Jon Gjengset
d5b36b54a5
Re-enable all CI (#353)
CI has to run on nightly for the time being.

Also includes changes to make buffer tests more reliable.
2019-09-24 18:56:37 -04:00
Jon Gjengset
6baf381879
Consistently apply deny/warn rules (#352)
This makes all tower subcrates have the following lints as warn (rather
than allow): `missing_docs`, `rust_2018_idioms`, `unreachable_pub`, and
`missing_debug_implementations`. In addition, it consistently applies
`deny(warning)` *only* under CI so that deprecations and macro changes in minor
version bumps in dependencies will never cause `tower` crates to stop
compiling, and so that tests can be run even if not all warnings have been
dealt with. See also https://github.com/rust-unofficial/patterns/blob/master/anti_patterns/deny-warnings.md

Note that `tower-reconnect` has the `missing_docs` lint disabled for now
since it contained _no_ documentation previously. Also note that this
patch does not add documentation to the various `new` methods, as they
are considered self-explanatory. They are instead marked as
`#[allow(missing_docs)]`.
2019-09-23 17:28:14 -04:00
Taiki Endo
5a561b7776 layer: remove unused dependencies (#351) 2019-09-23 09:54:08 -04:00
Sean McArthur
55b5150a89 tower-make:v0.3.0-alpha.2 2019-09-20 15:09:09 -07:00
Sean McArthur
52075f3c6f Update tower-make to tokio-io v0.2.0-alpha.5 2019-09-20 15:09:09 -07:00
Luke Steensen
b86d7fb6e4 limit: Add trace log when rate limit is exceeded (#348) 2019-09-17 17:19:23 -04:00
Luke Steensen
8509ab879d Fix up broken dependencies and deprecated methods (#347)
* fix up broken dependencies and deprecated methods

* use released version of tracing-subscriber

Co-Authored-By: Lucio Franco <luciofranco14@gmail.com>
2019-09-17 15:29:11 -04:00
Mackenzie Clark
f4a81d2c7d fix tower-service helloworld docs example to use new futures (#346) 2019-09-15 14:09:58 -05:00
Lucio Franco
ca951d56f4
Prepare tower-buffer 0.3.0-alpha.1b release (#345)
* buffer: Fix unused Stream warning

* Prepare `tower-buffer` 0.3.0-alpha.1b release

* Update buffer version in balance
2019-09-14 12:47:38 -04:00
Lucio Franco
206f3d9941
Prepare tower-buffer 0.3.0-alpha.1a release (#343)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
2019-09-13 16:36:35 -04:00
Lucio Franco
167f791a9f
Add v0.3.x branch to run CI (#344)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
2019-09-13 15:38:23 -04:00
Mackenzie Clark
e8cef688a0 change poll_next to poll_recv (#342) 2019-09-13 15:18:13 -04:00
Lucio Franco
42376d484b
tower: Add date to changelog 2019-09-11 16:45:19 -04:00
Lucio Franco
fc211c3b7c
load-shed: Add date to changelog 2019-09-11 16:36:11 -04:00
Lucio Franco
7bb3a646a7
spawn-ready: Add date to changelog 2019-09-11 16:33:43 -04:00
Lucio Franco
2f6c0afde3
buffer: Add date to changelog 2019-09-11 16:31:30 -04:00
Lucio Franco
67a9e27177
balance: Fix tokio-sync channel poll fn 2019-09-11 16:27:02 -04:00
Lucio Franco
bd62f64d6c
balance: Add changelog entry and remove publish false 2019-09-11 16:21:34 -04:00
Lucio Franco
48f97c3dce
load: Remove publish false and add date to changelog 2019-09-11 16:19:53 -04:00
Lucio Franco
768528e737
discover: Add date to changelog 2019-09-11 16:16:07 -04:00
Lucio Franco
c40185901e
limit: Add date to changelog 2019-09-11 16:09:29 -04:00
Lucio Franco
589eb44377
util: Add date to changelog 2019-09-11 16:02:34 -04:00
Lucio Franco
a3c16e85f9
make: Add changelog date 2019-09-11 15:59:30 -04:00
Lucio Franco
a8def9349f
reconnect: Remove publish false 2019-09-11 15:58:43 -04:00
Lucio Franco
90ef2a64b4
reconnect: Add date to changelog 2019-09-11 15:57:55 -04:00
Lucio Franco
04fd0c5898
Merge branch 'v0.3.x' of github.com:tower-rs/tower into v0.3.x 2019-09-11 15:57:32 -04:00
Jon Gjengset
395889c763
Make Ready only take Service by reference (#340)
Rather than consuming `self` and returning `(Self, _)`. This did mean
that a few crates that depended on `Ready` to own the `Service` and
provide it once it was ready had to change to call `poll_ready`
directly, which in turn meant adding some `PhantomData<Request>` so
that the impl blocks wouldn't be under-constrained. Take, for example:

```
impl<K, S: Service<Req>, Req> Future for UnreadyService<K, S>
```

would fail to compile with

```
error[E0207]: the type parameter `Req` is not constrained by the impl trait, self type, or predicates
```
2019-09-11 15:49:51 -04:00
Lucio Franco
6c68c56dc4
retry: Update changelog with date 2019-09-11 15:12:33 -04:00
Lucio Franco
9b8db5b393
test: Add changelog entry and date 2019-09-11 15:08:20 -04:00
Lucio Franco
97a2bc18b9
timeout: Update changelog release date 2019-09-11 15:00:42 -04:00
Lucio Franco
83752ab6c2
layer: Add date to changelog 2019-09-11 14:49:23 -04:00
Jon Gjengset
3d642f5ca0
This bumps tower-hedge to 0.3.0-alpha.1 (#334) 2019-09-11 14:00:22 -04:00
Lucio Franco
fb124a14f0
Pin all the alpha based dependencies (#339)
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
2019-09-11 13:57:27 -04:00
Taiki Endo
921325ac2d Pin the version of pin-project 2019-09-11 10:00:27 -07:00
Taiki Endo
65e07064db Update pin-project to 0.4.0-alpha.11 2019-09-11 10:00:27 -07:00
Jon Gjengset
d333e9f32f
spawn-ready: fix tests after removal of E (#337) 2019-09-11 11:09:17 -04:00
Jon Gjengset
4c1df65a75
cargo fmt 2019-09-11 10:30:41 -04:00
Jon Gjengset
87976ae418
Update tower-balance to std::future (#335)
This bumps tower-balance to 0.3.0-alpha.1

It also adds delegate impls for `Discover` through `Pin`, and makes `tower-load::Constant: Debug`.
2019-09-10 18:15:32 -04:00
Jon Gjengset
1ca999fde1
Update tower-spawn-ready to std::future (#332)
This bumps tower-spawn-ready to 0.3.0-alpha.1
2019-09-10 17:06:34 -04:00
Jon Gjengset
0802ca2bce
Update tower-util and tower to std::future (#330)
This bumps tower-util and tower to 0.3.0-alpha.1
2019-09-10 14:51:07 -04:00
Jon Gjengset
9691d0d379
Update tower-reconnect to std::future (#333)
This bumps tower-reconnect to 0.3.0-alpha.1

It also makes the tower-make version consistent
2019-09-10 11:48:01 -04:00
Jon Gjengset
adca66cf74
Update tower-filter to std::future (#331)
This bumps tower-filter to 0.3.0-alpha.1
2019-09-10 11:39:51 -04:00
Jon Gjengset
eac0ea30c3
Use the same version of pin-project everywhere (#329) 2019-09-09 16:31:40 -04:00
Jon Gjengset
4eb47b01dc
Update tower-timeout to std::future (#328)
This bumps tower-timeout to 0.3.0-alpha.1
2019-09-09 16:20:09 -04:00
Jon Gjengset
233aab1988
Obviate need for as_mut to assert_request_eq (#327)
Calling a method on `Pin<&mut Self>` moves the pin, which means you can't call more methods later. The solution to this is to use `Pin::as_mut`. But it's annoying to have to do that to _every_ call to the `assert_request_eq!` helper macro from `tower-test`, so I made it do it for me.
2019-09-09 15:28:41 -04:00
Jon Gjengset
4f71951221
Update tower-retry to std::future (#326)
This bumps tower-retry to 0.3.0-alpha.1
2019-09-09 15:10:46 -04:00
Jon Gjengset
154bd69b9f
Update tower-limit to std::future (#324)
This bumps tower-limit to 0.3.0-alpha.1
2019-09-09 12:09:41 -04:00
Jon Gjengset
390e124525
Update tower-load-shed to std::future (#325)
This bumps tower-load-shed to 0.3.0-alpha.1
2019-09-09 12:09:19 -04:00
Jon Gjengset
693965fa4a
Update tower-buffer to std::future (#323)
This bumps tower-buffer to 0.3.0-alpha.1
2019-09-09 12:07:28 -04:00
Jon Gjengset
f8097a60f6
Update tower-load to std::future (#321)
This bumps tower-load to 0.3.0-alpha.1
2019-09-09 09:22:49 -04:00
Jon Gjengset
db116d1937
Update tower-layer to std::future (#322)
This bumps tower-layer to 0.3.0-alpha.1
2019-09-07 00:22:18 -04:00
Jon Gjengset
5ad02b73d9
Update tower-discover to std::future (#320)
This bumps tower-discover to 0.3.0-alpha.1
2019-09-07 00:20:21 -04:00
John Doneth
7ae5967e7a Update tower-test to std::future::Future (#316)
* update tower-test to std::future

* refactoring tower-test tests

* everything works

* whoops, un-delete the tower dir

* cleanup & update links

* undo changes to tower-filter for this PR

* use pin_utils::unsafe_pinned

* use tokio-test
2019-09-03 10:26:46 -04:00
Stan Bondi
ae611db665 Update service_fn to std::future::Future (#318)
This small PR ports service_fn to `std::future`.
2019-09-03 10:12:33 -04:00
Lucio Franco
84da777ad1
Update tokio-io and prep release (#314)
* Bump tokio-io to alpha.4

* Prep tower-make release alpha.2
2019-08-30 11:01:19 -04:00
Lucio Franco
6cb9c1099e
make: More release clean up 2019-08-27 12:44:13 -04:00
Lucio Franco
652137aaa3
Update MakeService and MakeConnection (#313)
* Update MakeService and MakeConnection

* Create tower-make crate

* Remove Makers from tower-util
2019-08-27 12:39:14 -04:00
Lucio Franco
793e2e8e94
Add a note about v0.3.x branch to the readme (#312)
* Add a note about v0.3.x branch to the readme

Signed-off-by: Lucio Franco <luciofranco14@gmail.com>

* Fix link

Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
2019-08-20 23:20:01 -04:00
Lucio Franco
1351aaa9d8
service: Add changelog entry for 0.3.0-alpha.1 2019-08-20 14:34:30 -04:00
Lucio Franco
fe9cef6006
Update tower-service to std::future::Future (#311) 2019-08-20 14:31:09 -04:00
Gabe Jackson
b7faef31e9 docs: Minor typo + wording fixes (tower-util) (#309) 2019-08-19 10:13:56 -04:00
Gabe Jackson
168539ed9e docs: Minor typo + wording fixes (#310) 2019-08-17 17:10:17 -04:00
Lucio Franco
26d096bd99
timeout: Add Elapsed::new and prepare 0.1.1 release (#308)
* Add Elapsed::new

* Prep tower-timeout 0.1.1 release
2019-07-30 15:14:24 -04:00
Lucio Franco
72219ce862
Prep buffer and tower release (#305)
* Prep buffer 0.1.1 release

* Prep release for tower 0.1.1
2019-07-19 14:21:07 -04:00
Jon Gjengset
40fbb85c4b
Notify Pool when Services are dropped (#301)
Prior to this change, when `Balance` dropped a failing service, `Pool`
would not be notified of this fact. This meant that it never updated
`.services`, and so it might not add a new backing `Service` (e.g., due
to `max_services`) even though no working backing services exist.

With this change, dropped services notify the `Pool` so that it knows to
re-check its limits. It also gains some much-needed tests.
2019-07-15 13:50:01 -04:00
Jon Gjengset
491dfbe634
Early push to bring tracing into tower (#298)
Of particular note is that this change lets spans trace requests through `tower::Buffer` by internally carrying the `Span` at the time of `call` along with the request to the worker.
2019-07-12 14:46:50 -04:00
Oliver Gould
b39a4881d8
builder: Add into_inner (#299)
When using a `ServiceBuilder`, it's not possible to obtain the
underlying `Layer` implementation.

Adding a `ServiceBuilder::into_inner` allows callers to retrieve this
field instead of only being able to build a `Service`.
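A hedged sketch (not from this PR) of how `into_inner` can be used, written against today's `tower` builder methods; `concurrency_limit`, `timeout`, and `service_fn` assume the corresponding cargo features are enabled:

```rust
use std::time::Duration;
use tower::{Layer, ServiceBuilder};

fn main() {
    // Compose middleware once and keep the resulting `Layer`...
    let layer = ServiceBuilder::new()
        .concurrency_limit(5)
        .timeout(Duration::from_secs(1))
        .into_inner();

    // ...then apply it to any number of services later.
    let svc = layer.layer(tower::service_fn(|req: String| async move {
        Ok::<_, std::convert::Infallible>(req)
    }));
    let _ = svc;
}
```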
2019-07-09 11:43:59 -07:00
Oliver Gould
18b30eb70e
balance: Only balance over ready endpoints (#293)
In 03ec4aa, the balancer was changed to make a quick endpoint decision.
This, however, means that the balancer can return NotReady when it does
in fact have a ready endpoint.

This changes the balancer to separate unready endpoints, only
performing p2c over ready endpoints. Unready endpoints are tracked with
a FuturesUnordered that supports eviction via oneshots.

The main downside of this change is that the Balancer must become
generic over the Request type.
2019-07-05 20:46:33 -07:00
Jon Gjengset
67a11f27ff
Fix some simple compile-time warnings (#297) 2019-07-05 17:10:11 -04:00
Oliver Gould
03ec4aafa8
balance: Specialize the balancer for P2C (#288)
As described in #286, `Balance` had a few problems:
- it is responsible for driving all inner services to readiness, making
  its `poll_ready` O(n) and not O(1);
- the `choose` abstraction was a hindrance. If a round-robin balancer
  is needed it can be implemented separately without much duplicate
  code; and
- endpoint errors were considered fatal to the balancer.

This change replaces `Balance` with `p2c::Balance` and removes the
`choose` module.

Endpoint service failures now cause the service to be removed from the
balancer gracefully.

Endpoint selection is now effectively constant time, though it biases
for availability in the case when random selection does not yield an
available endpoint.

`tower-test` had to be updated so that a mocked service could fail after
advertising readiness.
2019-06-04 13:59:47 -07:00
Jon Gjengset
313530c875
Set a cap on the number of services in pool (#291)
This is very useful to avoid spinning up a million database connections for example (like I just did).
2019-05-31 15:12:01 -04:00
Oliver Gould
82e7b8a27b
spawn-ready: Change layer to operate over MakeSpawnReady (#290)
The initial implementation of spawn-ready didn't properly compose over
MakeService instances. This introduces `MakeSpawnReady` as a factory
type for SpawnReady, and changes Layer to produce `MakeSpawnReady`
instances.
2019-05-30 12:07:08 -07:00
Oliver Gould
a496fbf72c
Extract tower-load from tower-balance (#285)
The tower-balance crate includes the `Load` and `Instrument` traits,
which are likely useful outside of balancers; and certainly have no
tight coupling with any specific balancer implementation. This change
extracts these protocol-agnostic traits into a dedicated crate.

The `Load` trait includes a latency-aware _PeakEWMA_ load strategy as
well as a simple _PendingRequests_ strategy for latency-agnostic
applications.

The `Instrument` trait is used by both of these strategies to track
in-flight requests without knowing protocol details. It is expected that
protocol-specific crates will provide, for instance, HTTP
time-to-first-byte latency strategies.

A default `NoInstrument` implementation tracks a request until its
response future is satisfied.

This crate should only be published once tower-balance is published.

Part of https://github.com/tower-rs/tower/issues/286
2019-05-29 10:32:02 -07:00
Oliver Gould
42f4b7781e
spawn-ready: Drives a service's readiness on an executor (#283)
Some layers cannot guarantee that they will poll inner services in a
timely fashion. For instance, the balancer polls its inner services to
check for readiness, but it does so randomly. If its inner service
must be polled several times to become ready, e.g., because it's driving
the initiation of a TLS connection, then the balancer may not drive the
handshake to completion.

The `SpawnReady` layer ensures that its inner service is driven to
readiness by spawning a background task.
2019-05-29 09:57:46 -07:00
Alex Leong
a611a14096
Fix tower-hedge tests and add to CI (#287)
Signed-off-by: Alex Leong <alex@buoyant.io>
2019-05-28 13:33:42 -07:00
Oliver Gould
9b27863a61
balance: Configure weights from keys, not services (#281)
* balance: Configure weights from keys, not services

The initial weighted balancing implementation required that the
underlying service implement `HasWeight`.

Practically, this doesn't work that well, since this may force
middlewares to implement this trait as well.

To fix this, we change the type bounds so that _keys_, not services,
must implement `HasWeight`.

This has a drawback, though, in that `Weight`, which contains a float,
cannot implement `Hash` or `Eq`, which are required by the balancer. This
tradeoff seems manageable, though (and is already addressed in linkerd,
for instance). We should follow up with a change to alter the internal
representation of `Weight` to alleviate this.
2019-05-09 12:17:43 -07:00
Sean McArthur
b9c2fea0fc Greatly improve Debug output of ServiceBuilder (#280) 2019-05-09 08:44:40 -07:00
Marcus Griep
4d6d2c8572 Minor docs fix (#277) 2019-04-29 13:50:47 -07:00
Carl Lerche
8a646dd25c
ci: fix isRelease condition (#274) 2019-04-27 09:32:40 -07:00
Alex Leong
73c74252e6 Add hedge retry middleware (#236)
Add tower-hedge, a layer that preemptively retries requests which have been
outstanding for longer than a given latency percentile. Whichever of the original
future or the retry future completes first, that value is used. For more information
about hedge requests, see: [The Tail at Scale][1]

[1]: https://cseweb.ucsd.edu/~gmporter/classes/fa17/cse124/post/schedule/p74-dean.pdf

Signed-off-by: Alex Leong <alex@buoyant.io>
2019-04-27 09:32:26 -07:00
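A rough, hypothetical sketch of the hedging idea (not the tower-hedge implementation): issue a retry only once the original request has been outstanding for longer than the chosen latency percentile, then take whichever response finishes first. The `hedged` helper and `p95` parameter are made up for illustration; it assumes a Tokio runtime with timers enabled.

```rust
use std::future::Future;
use std::time::Duration;

// `issue` produces a fresh attempt each time it is called; `p95` stands in
// for the configured latency percentile.
async fn hedged<F, Fut, T, E>(mut issue: F, p95: Duration) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let original = issue();
    tokio::pin!(original);

    tokio::select! {
        // Fast path: the original response arrives in time.
        res = &mut original => res,
        // Slow path: the request has been outstanding too long, so hedge it.
        _ = tokio::time::sleep(p95) => {
            let retry = issue();
            tokio::pin!(retry);
            tokio::select! {
                res = &mut original => res,
                res = &mut retry => res,
            }
        }
    }
}
```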
277 changed files with 16295 additions and 7342 deletions

121
.github/workflows/CI.yml vendored Normal file
View File

@ -0,0 +1,121 @@
name: CI
on:
push:
branches:
- master
pull_request: {}
env:
MSRV: 1.64.0
jobs:
check-stable:
# Run `cargo check` first to ensure that the pushed code at least compiles.
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- name: Check
run: cargo check --workspace --all-features --all-targets
check-docs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- name: cargo doc
working-directory: ${{ matrix.subcrate }}
env:
RUSTDOCFLAGS: "-D rustdoc::broken_intra_doc_links"
run: cargo doc --all-features --no-deps
check-msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: "install Rust ${{ env.MSRV }}"
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.MSRV }}
- name: "install Rust nightly"
uses: dtolnay/rust-toolchain@nightly
- name: Select minimal versions
run: |
cargo update -Z minimal-versions
cargo update -p lazy_static --precise 1.5.0
- name: Check
run: |
rustup default ${{ env.MSRV }}
cargo check --all --all-targets --all-features --locked
cargo-hack:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- name: install cargo-hack
uses: taiki-e/install-action@cargo-hack
- name: cargo hack check
working-directory: ${{ matrix.subcrate }}
run: cargo hack check --each-feature --no-dev-deps --workspace
test-versions:
# Test against the stable, beta, and nightly Rust toolchains on ubuntu-latest.
needs: check-stable
runs-on: ubuntu-latest
strategy:
# Disable fail-fast. If the test run for a particular Rust version fails,
# don't cancel the other test runs, so that we can determine whether a
# failure only occurs on a particular version.
fail-fast: false
matrix:
rust: [stable, beta, nightly]
steps:
- uses: actions/checkout@v4
- name: "install Rust ${{ matrix.rust }}"
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
- name: Run tests
run: cargo test --workspace --all-features
test-msrv:
needs: check-msrv
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: "install Rust ${{ env.MSRV }}"
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.MSRV }}
- name: "install Rust nightly"
uses: dtolnay/rust-toolchain@nightly
- name: Select minimal versions
run: |
cargo update -Z minimal-versions
cargo update -p lazy_static --precise 1.5.0
- name: test
run: |
rustup default ${{ env.MSRV }}
cargo check --workspace --all-features --locked
style:
needs: check-stable
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
with:
components: rustfmt
- name: rustfmt
run: cargo fmt --all -- --check
deny-check:
name: cargo-deny check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v1
with:
command: check

7
.github/workflows/patch.toml vendored Normal file
View File

@ -0,0 +1,7 @@
# Patch dependencies to run all tests against versions of the crate in the
# repository.
[patch.crates-io]
tower = { path = "tower" }
tower-layer = { path = "tower-layer" }
tower-service = { path = "tower-service" }
tower-test = { path = "tower-test" }

32
.github/workflows/publish.yml vendored Normal file
View File

@ -0,0 +1,32 @@
name: Deploy API Documentation
on:
push:
branches:
- master
jobs:
publish:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install nightly Rust
uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly
- name: Generate documentation
run: cargo doc --workspace --no-deps --all-features
env:
# Enable the RustDoc `#[doc(cfg(...))]` attribute.
RUSTDOCFLAGS: --cfg docsrs
- name: Deploy documentation
if: success()
uses: crazy-max/ghaction-github-pages@v1
with:
target_branch: gh-pages
build_dir: target/doc
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

24
.github/workflows/release.yml vendored Normal file
View File

@ -0,0 +1,24 @@
name: create github release
on:
push:
tags:
- tower-[0-9]+.*
- tower-[a-z]+-[0-9]+.*
jobs:
create-release:
name: Create GitHub release
# only publish from the origin repository
if: github.repository_owner == 'tower-rs'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: taiki-e/create-gh-release-action@v1.3.0
with:
prefix: "(tower)|(tower-[a-z]+)"
changelog: "$prefix/CHANGELOG.md"
title: "$prefix $version"
branch: "(master)|(v[0-9]+.[0-9]+.x)"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@ -2,17 +2,26 @@
members = [
"tower",
"tower-balance",
"tower-buffer",
"tower-discover",
"tower-filter",
"tower-layer",
"tower-limit",
"tower-load-shed",
"tower-reconnect",
"tower-retry",
"tower-service",
"tower-test",
"tower-timeout",
"tower-util",
]
[workspace.dependencies]
futures = "0.3.22"
futures-core = "0.3.22"
futures-util = { version = "0.3.22", default-features = false }
hdrhistogram = { version = "7.0", default-features = false }
http = "1"
indexmap = "2.0.2"
pin-project-lite = "0.2.7"
quickcheck = "1"
rand = "0.9"
slab = "0.4.9"
sync_wrapper = "1"
tokio = "1.6.2"
tokio-stream = "0.1.1"
tokio-test = "0.4"
tokio-util = { version = "0.7.0", default-features = false }
tracing = { version = "0.1.2", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false }

102
README.md
View File

@ -3,13 +3,25 @@
Tower is a library of modular and reusable components for building robust
networking clients and servers.
[![Build Status][azure-badge]][azure-url]
[![Gitter][gitter-badge]][gitter-url]
[![Crates.io][crates-badge]][crates-url]
[![Documentation][docs-badge]][docs-url]
[![Documentation (master)][docs-master-badge]][docs-master-url]
[![MIT licensed][mit-badge]][mit-url]
[![Build Status][actions-badge]][actions-url]
[![Discord chat][discord-badge]][discord-url]
[azure-badge]: https://dev.azure.com/tower-rs/Tower/_apis/build/status/tower-rs.tower?branchName=master
[azure-url]: https://dev.azure.com/tower-rs/Tower/_build/latest?definitionId=1&branchName=master
[gitter-badge]: https://badges.gitter.im/tower-rs/tower.svg
[gitter-url]: https://gitter.im/tower-rs/tower
[crates-badge]: https://img.shields.io/crates/v/tower.svg
[crates-url]: https://crates.io/crates/tower
[docs-badge]: https://docs.rs/tower/badge.svg
[docs-url]: https://docs.rs/tower
[docs-master-badge]: https://img.shields.io/badge/docs-master-blue
[docs-master-url]: https://tower-rs.github.io/tower/tower
[mit-badge]: https://img.shields.io/badge/license-MIT-blue.svg
[mit-url]: LICENSE
[actions-badge]: https://github.com/tower-rs/tower/workflows/CI/badge.svg
[actions-url]:https://github.com/tower-rs/tower/actions?query=workflow%3ACI
[discord-badge]: https://img.shields.io/discord/500028886025895936?logo=discord&label=discord&logoColor=white
[discord-url]: https://discord.gg/EeF3cQw
## Overview
@ -17,52 +29,20 @@ Tower aims to make it as easy as possible to build robust networking clients and
servers. It is protocol agnostic, but is designed around a request / response
pattern. If your protocol is entirely stream based, Tower may not be a good fit.
## Project Layout
## Supported Rust Versions
Tower consists of a number of components, each of which live in their own sub
crates.
Tower will keep a rolling MSRV (minimum supported Rust version) policy of **at
least** 6 months. When increasing the MSRV, the new Rust version must have been
released at least six months ago. The current MSRV is 1.64.0.
* [`tower`]: The main user facing crate that provides batteries included tower services ([docs][t-docs]).
## `no_std`
* [`tower-service`]: The foundational traits upon which Tower is built
([docs][ts-docs]).
`tower` itself is _not_ `no_std` compatible, but `tower-layer` and `tower-service` are.
* [`tower-layer`]: The foundational trait to compose services together
([docs][tl-docs]).
## Getting Started
* [`tower-balance`]: A load balancer. Load is balanced across a number of
services ([docs][tb-docs]).
* [`tower-buffer`]: A buffering middleware. If the inner service is not ready to
handle the next request, `tower-buffer` stores the request in an internal
queue ([docs][tbuf-docs]).
* [`tower-discover`]: Service discovery abstraction ([docs][td-docs]).
* [`tower-filter`]: Middleware that conditionally dispatch requests to the inner
service based on a predicate ([docs][tf-docs]).
* [`tower-limit`]: Middleware limiting the number of requests that are
processed ([docs][tlim-docs]).
* [`tower-reconnect`]: Middleware that automatically reconnects the inner
service when it becomes degraded ([docs][tre-docs]).
* [`tower-retry`]: Middleware that retries requests based on a given `Policy`
([docs][tretry-docs]).
* [`tower-test`]: Testing utilities ([docs][ttst-docs]).
* [`tower-timeout`]: Middleware that applies a timeout to requests
([docs][tt-docs]).
* [`tower-util`]: Miscellaneous additional utilities for Tower
([docs][tu-docs]).
## Status
Currently, only [`tower-service`], the foundational trait, has been released to
crates.io. The rest of the library will be following shortly.
If you're brand new to Tower and want to start with the basics we recommend you
check out some of our [guides].
## License
@ -74,30 +54,4 @@ Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.
[`tower`]: tower
[t-docs]: https://tower-rs.github.io/tower/doc/tower/index.html
[`tower-service`]: tower-service
[ts-docs]: https://docs.rs/tower-service/
[`tower-layer`]: tower-layer
[tl-docs]: https://docs.rs/tower-layer/
[`tower-balance`]: tower-balance
[tb-docs]: https://tower-rs.github.io/tower/doc/tower_balance/index.html
[`tower-buffer`]: tower-buffer
[tbuf-docs]: https://tower-rs.github.io/tower/doc/tower_buffer/index.html
[`tower-discover`]: tower-discover
[td-docs]: https://tower-rs.github.io/tower/doc/tower_discover/index.html
[`tower-filter`]: tower-filter
[tf-docs]: https://tower-rs.github.io/tower/doc/tower_filter/index.html
[`tower-limit`]: tower-limit
[tlim-docs]: https://tower-rs.github.io/tower/doc/tower_limit/index.html
[`tower-reconnect`]: tower-reconnect
[tre-docs]: https://tower-rs.github.io/tower/doc/tower_reconnect/index.html
[`tower-retry`]: tower-retry
[tretry-docs]: https://tower-rs.github.io/tower/doc/tower_retry/index.html
[`tower-timeout`]: tower-timeout
[`tower-test`]: tower-test
[ttst-docs]: https://tower-rs.github.io/tower/doc/tower_test/index.html
[`tower-rate-limit`]: tower-rate-limit
[tt-docs]: https://tower-rs.github.io/tower/doc/tower_timeout/index.html
[`tower-util`]: tower-util
[tu-docs]: https://tower-rs.github.io/tower/doc/tower_util/index.html
[guides]: https://github.com/tower-rs/tower/tree/master/guides

View File

@ -1,35 +0,0 @@
trigger: ["master"]
pr: ["master"]
jobs:
- template: ci/azure-rustfmt.yml
parameters:
name: rustfmt
# Basic test run on all platforms
- template: ci/azure-test-stable.yml
parameters:
name: Linux_Stable
displayName: Test
vmImage: ubuntu-16.04
crates:
- tower-balance
- tower-buffer
- tower-discover
- tower-filter
- tower-layer
- tower-limit
- tower-load-shed
- tower-reconnect
- tower-retry
- tower-service
- tower-test
- tower-timeout
- tower-util
- tower
- template: ci/azure-deploy-docs.yml
parameters:
dependsOn:
- rustfmt
- Linux_Stable

View File

@ -1,39 +0,0 @@
parameters:
dependsOn: []
jobs:
- job: documentation
displayName: 'Deploy API Documentation'
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
pool:
vmImage: 'Ubuntu 16.04'
dependsOn:
- ${{ parameters.dependsOn }}
steps:
- template: azure-install-rust.yml
parameters:
platform: ${{parameters.name}}
rust_version: stable
- script: |
cargo doc --all --no-deps
cp -R target/doc '$(Build.BinariesDirectory)'
displayName: 'Generate Documentation'
- script: |
set -e
git --version
ls -la
git init
git config user.name 'Deployment Bot (from Azure Pipelines)'
git config user.email 'deploy@tower-rs.com'
git config --global credential.helper 'store --file ~/.my-credentials'
printf "protocol=https\nhost=github.com\nusername=carllerche\npassword=%s\n\n" "$GITHUB_TOKEN" | git credential-store --file ~/.my-credentials store
git remote add origin https://github.com/tower-rs/tower
git checkout -b gh-pages
git add .
git commit -m 'Deploy Tower API documentation'
git push -f origin gh-pages
env:
GITHUB_TOKEN: $(githubPersonalToken)
workingDirectory: '$(Build.BinariesDirectory)'
displayName: 'Deploy Documentation'

View File

@ -1,28 +0,0 @@
steps:
# Linux and macOS.
- script: |
set -e
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUSTUP_TOOLCHAIN
echo "##vso[task.setvariable variable=PATH;]$PATH:$HOME/.cargo/bin"
env:
RUSTUP_TOOLCHAIN: ${{parameters.rust_version}}
displayName: "Install rust (*nix)"
condition: not(eq(variables['Agent.OS'], 'Windows_NT'))
# Windows.
- script: |
echo "windows"
curl -sSf -o rustup-init.exe https://win.rustup.rs
rustup-init.exe -y --default-toolchain %RUSTUP_TOOLCHAIN%
set PATH=%PATH%;%USERPROFILE%\.cargo\bin
echo "##vso[task.setvariable variable=PATH;]%PATH%;%USERPROFILE%\.cargo\bin"
env:
RUSTUP_TOOLCHAIN: ${{parameters.rust_version}}
displayName: Install rust (windows)
condition: eq(variables['Agent.OS'], 'Windows_NT')
# All platforms.
- script: |
rustc -Vv
cargo -V
displayName: Query rust and cargo versions

View File

@ -1,9 +0,0 @@
steps:
- bash: |
set -e
if git log --no-merges -1 --format='%s' | grep -q '[ci-release]'; then
echo "##vso[task.setvariable variable=isRelease]true"
fi
failOnStderr: true
displayName: Check if release commit

View File

@ -1,17 +0,0 @@
steps:
- script: |
set -e
# Remove any existing patch statements
mv Cargo.toml Cargo.toml.bck
sed -n '/\[patch.crates-io\]/q;p' Cargo.toml.bck > Cargo.toml
# Patch all crates
cat ci/patch.toml >> Cargo.toml
# Print `Cargo.toml` for debugging
echo "~~~~ Cargo.toml ~~~~"
cat Cargo.toml
echo "~~~~~~~~~~~~~~~~~~~~"
displayName: Patch Cargo.toml

View File

@ -1,16 +0,0 @@
jobs:
# Check formatting
- job: ${{ parameters.name }}
displayName: Check rustfmt
pool:
vmImage: ubuntu-16.04
steps:
- template: azure-install-rust.yml
parameters:
rust_version: stable
- bash: |
rustup component add rustfmt
displayName: Install rustfmt
- bash: |
cargo fmt --all -- --check
displayName: Check formatting

View File

@ -1,31 +0,0 @@
parameters:
crates: []
jobs:
- job: ${{ parameters.name }}
displayName: ${{ parameters.displayName }}
pool:
vmImage: ${{ parameters.vmImage }}
steps:
- template: azure-install-rust.yml
parameters:
rust_version: stable
- template: azure-is-release.yml
- ${{ each crate in parameters.crates }}:
- script: cargo test
env:
CI: 'True'
displayName: cargo test -p ${{ crate }}
workingDirectory: $(Build.SourcesDirectory)/${{ crate }}
condition: and(succeeded(), not(variables['isRelease']))
- template: azure-patch-crates.yml
- ${{ each crate in parameters.crates }}:
- script: cargo test
env:
CI: 'True'
displayName: cargo test -p ${{ crate }}
workingDirectory: $(Build.SourcesDirectory)/${{ crate }}

View File

@ -1,17 +0,0 @@
# Patch dependencies to run all tests against versions of the crate in the
# repository.
[patch.crates-io]
tower = { path = "tower" }
tower-balance = { path = "tower-balance" }
tower-buffer = { path = "tower-buffer" }
tower-discover = { path = "tower-discover" }
tower-filter = { path = "tower-filter" }
tower-layer = { path = "tower-layer" }
tower-limit = { path = "tower-limit" }
tower-load-shed = { path = "tower-load-shed" }
tower-reconnect = { path = "tower-reconnect" }
tower-retry = { path = "tower-retry" }
tower-service = { path = "tower-service" }
tower-test = { path = "tower-test" }
tower-timeout = { path = "tower-timeout" }
tower-util = { path = "tower-util" }

22
deny.toml Normal file
View File

@ -0,0 +1,22 @@
[advisories]
vulnerability = "deny"
unmaintained = "warn"
notice = "warn"
[licenses]
unlicensed = "deny"
allow = []
deny = []
copyleft = "warn"
allow-osi-fsf-free = "either"
confidence-threshold = 0.8
[bans]
multiple-versions = "deny"
highlight = "all"
skip = []
[sources]
unknown-registry = "warn"
unknown-git = "warn"
allow-git = []

22
examples/Cargo.toml Normal file
View File

@ -0,0 +1,22 @@
[package]
name = "examples"
version = "0.0.0"
publish = false
edition = "2018"
# If you copy one of the examples into a new project, you should be using
# [dependencies] instead.
[dev-dependencies]
tower = { version = "0.4", path = "../tower", features = ["full"] }
tower-service = "0.3"
tokio = { version = "1.0", features = ["full"] }
rand = "0.9"
pin-project = "1.0"
futures = "0.3.22"
tracing = "0.1"
tracing-subscriber = "0.2"
hdrhistogram = "7"
[[example]]
name = "balance"
path = "balance.rs"

27
guides/README.md Normal file
View File

@ -0,0 +1,27 @@
# Tower Guides
These guides are meant to be an introduction to Tower. At least basic Rust
experience is assumed. Some experience with asynchronous Rust is also
recommended. If you're brand new to async Rust, we recommend the [Asynchronous
Programming in Rust][async-book] book or the [Tokio tutorial][tokio-tutorial].
Additionally, some of these guides explain Tower from the perspective of HTTP
servers and clients. However, Tower is useful for any network protocol that
follows an async request/response pattern. HTTP is used here because it is a
widely known protocol, and one of Tower's more common use-cases.
## Guides
- ["Inventing the `Service` trait"][invent] walks through how Tower's
fundamental [`Service`] trait could be designed from scratch. If you have no
experience with Tower and want to learn the absolute basics, this is where you
should start.
- ["Building a middleware from scratch"][build] walks through how to build the
[`Timeout`] middleware as it exists in Tower today, without taking any shortcuts.
[async-book]: https://rust-lang.github.io/async-book/
[tokio-tutorial]: https://tokio.rs/tokio/tutorial
[invent]: https://tokio.rs/blog/2021-05-14-inventing-the-service-trait
[build]: https://github.com/tower-rs/tower/blob/master/guides/building-a-middleware-from-scratch.md
[`Service`]: https://docs.rs/tower/latest/tower/trait.Service.html
[`Timeout`]: https://docs.rs/tower/latest/tower/timeout/struct.Timeout.html

View File

@ -0,0 +1,674 @@
# Building a middleware from scratch
In ["Inventing the `Service` trait"][invent] we learned all the motivations
behind [`Service`] and why its designed the way it is. We also built a few
smaller middleware ourselves but we took a few shortcuts in our implementation.
In this guide we're going to build the `Timeout` middleware as it exists in
Tower today without taking any shortcuts.
Writing a robust middleware requires working with async Rust at a slightly lower
level than you might be used to. The goal of this guide is to demystify the
concepts and patterns so you can start writing your own middleware and maybe
even contribute back to the Tower ecosystem!
## Getting started
The middleware we're going to build is [`tower::timeout::Timeout`]. It will set
a limit on the maximum duration its inner `Service`'s response future is allowed
to take. If it doesn't produce a response within some amount of time, an error
is returned. This allows the client to retry that request or report an error to
the user, rather than waiting forever.
Let's start by writing a `Timeout` struct that holds the `Service` it's wrapping
and the duration of the timeout:
```rust
use std::time::Duration;
struct Timeout<S> {
inner: S,
timeout: Duration,
}
```
As we learned in ["Inventing the `Service` trait"][invent], it's important for
services to implement `Clone` such that you can convert the `&mut self` given to
`Service::call` into an owned `self` that can be moved into the response future,
if necessary. We should therefore add `#[derive(Clone)]` to our struct. We
should also derive `Debug` while we're at it:
```rust
#[derive(Debug, Clone)]
struct Timeout<S> {
inner: S,
timeout: Duration,
}
```
Next we write a constructor:
```rust
impl<S> Timeout<S> {
pub fn new(inner: S, timeout: Duration) -> Self {
Timeout { inner, timeout }
}
}
```
Note that we omit bounds on `S` even though we expect it to implement `Service`,
as the [Rust API guidelines recommend][rust-guidelines].
Now the interesting bit. How do we implement `Service` for `Timeout<S>`? Let's start
with an implementation that just forwards everything to the inner service:
```rust
use tower::Service;
use std::task::{Context, Poll};
impl<S, Request> Service<Request> for Timeout<S>
where
S: Service<Request>,
{
type Response = S::Response;
type Error = S::Error;
type Future = S::Future;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// Our middleware doesn't care about backpressure, so it's ready as long
// as the inner service is ready.
self.inner.poll_ready(cx)
}
fn call(&mut self, request: Request) -> Self::Future {
self.inner.call(request)
}
}
```
Until you've written lots of middleware, writing out a skeleton like this makes
the process a bit easier.
To actually add a timeout to the inner service, what we essentially have to do is
detect when the future returned by `self.inner.call(request)` has been running for
longer than `self.timeout` and abort with an error.
The approach we're going to take is to call [`tokio::time::sleep`] to get a
future that completes when we're out of time and then select the value from
whichever of the two futures is the first to complete. We could also use
`tokio::time::timeout` but `sleep` works just as well.
Creating both futures is done like this:
```rust
use tokio::time::sleep;
fn call(&mut self, request: Request) -> Self::Future {
let response_future = self.inner.call(request);
// This variable has type `tokio::time::Sleep`.
//
// We don't have to clone `self.timeout` as it implements the `Copy` trait.
let sleep = tokio::time::sleep(self.timeout);
// what to write here?
}
```
One possible return type is `Pin<Box<dyn Future<...>>>`. However, we want our
`Timeout` to add as little overhead as possible, so we would like to find a way
to avoid allocating a `Box`. Imagine we have a large stack, with dozens of
nested `Service`s, where each layer allocates a new `Box` for every request
that passes through it. That would result in a lot of allocations, which might
impact performance[^1].
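For comparison, here is a minimal sketch of what that boxed shortcut could look like. It is not the implementation we build in this guide: it assumes the imports from the snippets above plus `std::pin::Pin` and `std::future::Future`, it reaches ahead to the `BoxError` alias and `TimeoutError` struct introduced later in this guide, and it uses `tokio::time::timeout` instead of `sleep`.

```rust
impl<S, Request> Service<Request> for Timeout<S>
where
    S: Service<Request>,
    S::Error: Into<BoxError>,
    S::Future: Send + 'static,
{
    type Response = S::Response;
    type Error = BoxError;
    // The shortcut: a boxed, type-erased response future.
    type Future = Pin<Box<dyn Future<Output = Result<S::Response, BoxError>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx).map_err(Into::into)
    }

    fn call(&mut self, request: Request) -> Self::Future {
        let response_future = self.inner.call(request);
        let timeout = self.timeout;

        Box::pin(async move {
            match tokio::time::timeout(timeout, response_future).await {
                // The inner service answered in time; only the error type changes.
                Ok(result) => result.map_err(Into::into),
                // The deadline elapsed first: report a timeout error.
                Err(_elapsed) => Err(Box::new(TimeoutError(())) as BoxError),
            }
        })
    }
}
```

Every request now pays for one heap allocation (plus dynamic dispatch when polled), which is exactly the overhead the rest of this guide avoids.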
## The response future
To avoid using `Box`, let's instead write our own `Future` implementation. We
start by creating a struct called `ResponseFuture`. It has to be generic over
the inner service's response future type. This is analogous to wrapping services
in other services, but this time we're wrapping futures in other futures.
```rust
use tokio::time::Sleep;
pub struct ResponseFuture<F> {
response_future: F,
sleep: Sleep,
}
```
`F` will be the type of `self.inner.call(request)`. Updating our `Service`
implementation we get:
```rust
impl<S, Request> Service<Request> for Timeout<S>
where
S: Service<Request>,
{
type Response = S::Response;
type Error = S::Error;
// Use our new `ResponseFuture` type.
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx)
}
fn call(&mut self, request: Request) -> Self::Future {
let response_future = self.inner.call(request);
let sleep = tokio::time::sleep(self.timeout);
// Create our response future by wrapping the future from the inner
// service.
ResponseFuture {
response_future,
sleep,
}
}
}
```
A key point here is that Rust's futures are _lazy_. That means nothing actually
happens until they're `await`ed or polled. So `self.inner.call(request)` will
return immediately without actually processing the request.
Next we go ahead and implement `Future` for `ResponseFuture`:
```rust
use std::{pin::Pin, future::Future};
impl<F, Response, Error> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response, Error>>,
{
type Output = Result<Response, Error>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
// What to write here?
}
}
```
Ideally we want to write something like this:
1. First poll `self.response_future`, and if it's ready, return the response or error it
resolved to.
2. Otherwise, poll `self.sleep`, and if it's ready, return an error.
3. If neither future is ready return `Poll::Pending`.
We might try:
```rust
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
match self.response_future.poll(cx) {
Poll::Ready(result) => return Poll::Ready(result),
Poll::Pending => {}
}
todo!()
}
```
However that gives an error like this:
```
error[E0599]: no method named `poll` found for type parameter `F` in the current scope
--> src/lib.rs:56:29
|
56 | match self.response_future.poll(cx) {
| ^^^^ method not found in `F`
|
= help: items from traits can only be used if the type parameter is bounded by the trait
help: the following traits define an item `poll`, perhaps you need to restrict type parameter `F` with one of them:
|
49 | impl<F: Future, Response, Error> Future for ResponseFuture<F>
| ^^^^^^^^^
error: aborting due to previous error
```
Unfortunately, the error we get from Rust isn't very good. It tells us to add an
`F: Future` bound even though we've already done that with `where F:
Future<Output = Result<Response, Error>>`.
The real issue has to do with [`Pin`]. The full details of pinning are outside
the scope of this guide. If you're new to `Pin` we recommend ["The Why, What,
and How of Pinning in Rust"][pin] by Jon Gjengset.
What Rust is trying to tell us is that we need a `Pin<&mut F>` to be able to
call `poll`. Accessing `F` through `self.response_future` when `self` is a
`Pin<&mut Self>` doesn't work.
What we need is called "pin projection", which means going from a `Pin<&mut
Struct>` to a `Pin<&mut Field>`. Normally pin projection would require writing
`unsafe` code, but the excellent [pin-project] crate is able to handle all the
`unsafe` details for us.
Using pin-project we can annotate a struct with `#[pin_project]` and add
`#[pin]` to each field that we want to be able to access through a pinned
reference:
```rust
use pin_project::pin_project;
#[pin_project]
pub struct ResponseFuture<F> {
#[pin]
response_future: F,
#[pin]
sleep: Sleep,
}
impl<F, Response, Error> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response, Error>>,
{
type Output = Result<Response, Error>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
// Call the magical `project` method generated by `#[pin_project]`.
let this = self.project();
// `project` returns a `__ResponseFutureProjection`, but we can ignore
// the exact type. It has the same fields as `ResponseFuture`, except that
// fields annotated with `#[pin]` stay pinned.
// `this.response_future` is now a `Pin<&mut F>`.
let response_future: Pin<&mut F> = this.response_future;
// And `this.sleep` is a `Pin<&mut Sleep>`.
let sleep: Pin<&mut Sleep> = this.sleep;
// If we had another field that wasn't annotated with `#[pin]` that
// would have been a regular `&mut` without `Pin`.
// ...
}
}
```
Pinning in Rust is a complex topic that is hard to understand, but thanks to
pin-project we're able to ignore most of that complexity. Crucially, it means we
don't have to fully understand pinning to write Tower middleware. So if you
didn't quite get all the stuff about `Pin` and `Unpin`, fear not, because
pin-project has your back!
Notice in the previous code block we were able to obtain a `Pin<&mut F>` and a
`Pin<&mut Sleep>` which is exactly what we need to call `poll`:
```rust
impl<F, Response, Error> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response, Error>>,
{
type Output = Result<Response, Error>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
// First check if the response future is ready.
match this.response_future.poll(cx) {
Poll::Ready(result) => {
// The inner service has a response ready for us or it has
// failed.
return Poll::Ready(result);
}
Poll::Pending => {
// Not quite ready yet...
}
}
// Then check if the sleep is ready. If so the response has taken too
// long and we have to return an error.
match this.sleep.poll(cx) {
Poll::Ready(()) => {
// Our time is up, but what error do we return?!
todo!()
}
Poll::Pending => {
// Still some time remaining...
}
}
// If neither future is ready then we are still pending.
Poll::Pending
}
}
```
Now the only remaining question is what error should we return if the sleep
finishes first?
## The error type
The error type we're promising to return now is the generic `Error` type, which
is the same as the inner service's error type. However, we know nothing about
that type. It is completely opaque to us and we have no way of constructing
values of that type.
We have three options:
1. Return a boxed error trait object like `Box<dyn std::error::Error + Send +
Sync>`.
2. Return an enum with variants for the service error and the timeout error.
3. Define a `TimeoutError` struct and require that our generic error type can be
constructed from a `TimeoutError` using `TimeoutError: Into<Error>`.
While option 3 might seem like the most flexible, it isn't great, since it
requires users with a custom error type to manually implement
`From<TimeoutError> for MyError`. That quickly becomes tedious when using lots
of middleware that each have their own error type.
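To make that tedium concrete, here is a small illustration (not code from the guide) of the boilerplate option 3 would push onto users. `MyError` is a hypothetical application error type and `TimeoutError` is the middleware error struct defined later in this guide.

```rust
// Hypothetical application error. With option 3, an impl like this would be
// needed for every middleware error type in the stack.
#[derive(Debug)]
enum MyError {
    Timeout,
    Database(String),
}

impl From<TimeoutError> for MyError {
    fn from(_: TimeoutError) -> Self {
        MyError::Timeout
    }
}
```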
Option 2 would mean defining an enum like this:
```rust
enum TimeoutError<Error> {
// Variant used if we hit the timeout
Timeout(InnerTimeoutError),
// Variant used if the inner service produced an error
Service(Error),
}
```
While this seems ideal on the surface as we're not losing any type information
and can use `match` to get at the exact error, the approach has three issues:
1. In practice it's common to nest lots of middleware. That would make the final
error enum very large. It's not unlikely to look something like
`BufferError<RateLimitError<TimeoutError<MyError>>>`. Pattern matching on
such a type (to, for example, determine if the error is retry-able) is very
tedious.
2. If we change the order our middleware are applied in we also change the final
error type, meaning we have to update our pattern matches.
3. There is also the possibility of the final error type being very large and
taking up a significant amount of space on the stack.
With this we're left with option 1, which is to convert the inner service's error
into a boxed trait object like `Box<dyn std::error::Error + Send + Sync>`. That
means we can combine multiple error types into one. That has the following
advantages:
1. Our error handling is less fragile since changing the order middleware are
applied in won't change the final error type.
2. The error type now has a constant size regardless of how many middleware we've
applied.
3. Extracting the error no longer requires a big `match` but can instead be done
with `error.downcast_ref::<TimeoutError>()`, as sketched below.
However it also has the following downsides:
1. As we're using dynamic downcasting the compiler can no longer guarantee that
we're exhaustively checking for every possible error type.
2. Creating an error now requires an allocation. In practice we expect errors to
be infrequent and therefore this shouldn't be a problem.
Which option you pick is a matter of personal preference; both approaches have
advantages and disadvantages. However, the pattern that we've decided to use in
Tower is boxed trait objects. You can find the original discussion
[here](https://github.com/tower-rs/tower/issues/131).
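As a small illustration of the downcasting mentioned above (not code from the guide), a caller that receives such a boxed error can recover the concrete type like this; `is_retryable` is a hypothetical helper and `TimeoutError` is the struct we define just below:

```rust
// Decide whether a failed request is worth retrying by inspecting the boxed
// error, e.g. `is_retryable(boxed_error.as_ref())`.
fn is_retryable(err: &(dyn std::error::Error + Send + Sync + 'static)) -> bool {
    // A timeout is usually worth retrying; treat anything else as fatal here.
    err.downcast_ref::<TimeoutError>().is_some()
}
```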
For our `Timeout` middleware that means we need to create a struct that
implements `std::error::Error` such that we can convert it into a `Box<dyn
std::error::Error + Send + Sync>`. We also have to require that the inner
service's error type implements `Into<Box<dyn std::error::Error + Send +
Sync>>`. Luckily most errors automatically satisfy that, so it won't require
users to write any additional code. We're using `Into` for the trait bound
rather than `From`, as recommended by the [standard
library](https://doc.rust-lang.org/stable/std/convert/trait.From.html).
The code for our error type looks like this:
```rust
use std::fmt;
#[derive(Debug, Default)]
pub struct TimeoutError(());
impl fmt::Display for TimeoutError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.pad("request timed out")
}
}
impl std::error::Error for TimeoutError {}
```
We add a private field to `TimeoutError` so that users outside of Tower cannot
construct their own `TimeoutError`; one can only be obtained through our
middleware.
`Box<dyn std::error::Error + Send + Sync>` is also quite a mouthful, so
let's define a type alias for it:
```rust
// This also exists as `tower::BoxError`
pub type BoxError = Box<dyn std::error::Error + Send + Sync>;
```
Our future implementation now becomes:
```rust
impl<F, Response, Error> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response, Error>>,
// Require that the inner service's error can be converted into a `BoxError`.
Error: Into<BoxError>,
{
type Output = Result<
Response,
// The error type of `ResponseFuture` is now `BoxError`.
BoxError,
>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
match this.response_future.poll(cx) {
Poll::Ready(result) => {
// Use `map_err` to convert the error type.
let result = result.map_err(Into::into);
return Poll::Ready(result);
}
Poll::Pending => {}
}
match this.sleep.poll(cx) {
Poll::Ready(()) => {
// Construct and return a timeout error.
let error = Box::new(TimeoutError(()));
return Poll::Ready(Err(error));
}
Poll::Pending => {}
}
Poll::Pending
}
}
```
Finally we have to revisit our `Service` implementation and update it to also
use `BoxError`:
```rust
impl<S, Request> Service<Request> for Timeout<S>
where
S: Service<Request>,
// Same trait bound as we had on `impl Future for ResponseFuture`.
S::Error: Into<BoxError>,
{
type Response = S::Response;
// The error type of `Timeout` is now `BoxError`.
type Error = BoxError;
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// Have to map the error type here as well.
self.inner.poll_ready(cx).map_err(Into::into)
}
fn call(&mut self, request: Request) -> Self::Future {
let response_future = self.inner.call(request);
let sleep = tokio::time::sleep(self.timeout);
ResponseFuture {
response_future,
sleep,
}
}
}
```
## Conclusion
That's it! We've now successfully implemented the `Timeout` middleware as it
exists in Tower today.
Our final implementation is:
```rust
use pin_project::pin_project;
use std::time::Duration;
use std::{
fmt,
future::Future,
pin::Pin,
task::{Context, Poll},
};
use tokio::time::Sleep;
use tower::Service;
#[derive(Debug, Clone)]
struct Timeout<S> {
inner: S,
timeout: Duration,
}
impl<S> Timeout<S> {
fn new(inner: S, timeout: Duration) -> Self {
Timeout { inner, timeout }
}
}
impl<S, Request> Service<Request> for Timeout<S>
where
S: Service<Request>,
S::Error: Into<BoxError>,
{
type Response = S::Response;
type Error = BoxError;
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx).map_err(Into::into)
}
fn call(&mut self, request: Request) -> Self::Future {
let response_future = self.inner.call(request);
let sleep = tokio::time::sleep(self.timeout);
ResponseFuture {
response_future,
sleep,
}
}
}
#[pin_project]
struct ResponseFuture<F> {
#[pin]
response_future: F,
#[pin]
sleep: Sleep,
}
impl<F, Response, Error> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response, Error>>,
Error: Into<BoxError>,
{
type Output = Result<Response, BoxError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
match this.response_future.poll(cx) {
Poll::Ready(result) => {
let result = result.map_err(Into::into);
return Poll::Ready(result);
}
Poll::Pending => {}
}
match this.sleep.poll(cx) {
Poll::Ready(()) => {
let error = Box::new(TimeoutError(()));
return Poll::Ready(Err(error));
}
Poll::Pending => {}
}
Poll::Pending
}
}
#[derive(Debug, Default)]
struct TimeoutError(());
impl fmt::Display for TimeoutError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.pad("request timed out")
}
}
impl std::error::Error for TimeoutError {}
type BoxError = Box<dyn std::error::Error + Send + Sync>;
```
You can find the code in Tower [here][timeout-in-tower].
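As a quick, hedged usage sketch (not part of the original guide), here is how the finished `Timeout` could be exercised from the same module as the listing above, wrapping a `tower::service_fn` and driving it on a Tokio runtime. It reuses `Duration`, `Service`, `Timeout`, and `BoxError` from that listing and assumes tower's `util` feature and Tokio's `macros` and `time` features.

```rust
use tower::{service_fn, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), BoxError> {
    // An inner service that echoes its request after a short delay.
    let slow = service_fn(|req: String| async move {
        tokio::time::sleep(Duration::from_millis(10)).await;
        Ok::<_, BoxError>(req)
    });

    // Allow each request at most 100 milliseconds.
    let mut svc = Timeout::new(slow, Duration::from_millis(100));

    // Wait for readiness, then send a request through the middleware.
    let response = svc.ready().await?.call("hello".to_owned()).await?;
    assert_eq!(response, "hello");
    Ok(())
}
```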
The pattern of implementing `Service` for some type that wraps another `Service`
and returning a `Future` that wraps another `Future` is how most Tower
middleware work.
Some other good examples are:
- [`ConcurrencyLimit`]: Limit the max number of requests being concurrently processed.
- [`LoadShed`]: For shedding load when inner services aren't ready.
- [`Steer`]: Routing between services.
With this you should be fully equipped to write robust, production-ready
middleware. If you want more practice, here are some exercises to play with:
- Implement our timeout middleware but using [`tokio::time::timeout`] instead of
`sleep`.
- Implement `Service` adapters similar to `Result::map` and `Result::map_err`
that transform the request, response, or error using a closure given by the
user.
- Implement [`ConcurrencyLimit`]. Hint: You're going to need [`PollSemaphore`] to
implement `poll_ready`.
If you have questions you're welcome to post in `#tower` in the [Tokio Discord
server][discord].
[^1]: The Rust compiler team plans to add a feature called ["`impl Trait` in
type aliases"](https://github.com/rust-lang/rust/issues/63063), which would
allow us to return `impl Future` from `call`, but for now it isn't possible.
[invent]: https://tokio.rs/blog/2021-05-14-inventing-the-service-trait
[`Service`]: https://docs.rs/tower/latest/tower/trait.Service.html
[`tower::timeout::Timeout`]: https://docs.rs/tower/latest/tower/timeout/struct.Timeout.html
[`Pin`]: https://doc.rust-lang.org/stable/std/pin/struct.Pin.html
[pin]: https://www.youtube.com/watch?v=DkMwYxfSYNQ
[pin-project]: https://crates.io/crates/pin-project
[timeout-in-tower]: https://github.com/tower-rs/tower/blob/master/tower/src/timeout/mod.rs
[`ConcurrencyLimit`]: https://github.com/tower-rs/tower/blob/master/tower/src/limit/concurrency/service.rs
[`LoadShed`]: https://github.com/tower-rs/tower/blob/master/tower/src/load_shed/mod.rs
[`Steer`]: https://github.com/tower-rs/tower/blob/master/tower/src/steer/mod.rs
[`tokio::time::timeout`]: https://docs.rs/tokio/latest/tokio/time/fn.timeout.html
[`tokio::time::sleep`]: https://docs.rs/tokio/latest/tokio/time/fn.sleep.html
[`PollSemaphore`]: https://docs.rs/tokio-util/latest/tokio_util/sync/struct.PollSemaphore.html
[discord]: https://discord.gg/tokio
[`Unpin`]: https://doc.rust-lang.org/stable/std/marker/trait.Unpin.html
[rust-guidelines]: https://rust-lang.github.io/api-guidelines/future-proofing.html#c-struct-bounds

8
netlify.toml Normal file
View File

@ -0,0 +1,8 @@
[build]
command = "rustup install nightly --profile minimal && cargo doc --features=full --no-deps && cp -r target/doc _netlify_out"
environment = { RUSTDOCFLAGS= "--cfg docsrs" }
publish = "_netlify_out"
[[redirects]]
from = "/"
to = "/tower"

View File

@ -1,3 +0,0 @@
# 0.1.0 (unreleased)
- Initial release

View File

@ -1,44 +0,0 @@
[package]
name = "tower-balance"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-balance/0.1.0"
description = """
Balance load across a set of uniform services.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
publish = false
[dependencies]
futures = "0.1.26"
indexmap = "1.0.2"
log = "0.4.1"
rand = "0.6.5"
tokio-timer = "0.2.4"
tower-service = "0.2.0"
tower-discover = "0.1.0"
tower-util = "0.1.0"
[dev-dependencies]
log = "0.4.1"
env_logger = { version = "0.5.3", default-features = false }
hdrsample = "6.0"
quickcheck = { version = "0.6", default-features = false }
tokio = "0.1.7"
tokio-executor = "0.1.2"
tower = { version = "0.1", path = "../tower" }
tower-buffer = { version = "0.1", path = "../tower-buffer" }
tower-limit = { version = "0.1", path = "../tower-limit" }

View File

@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

View File

@ -1,13 +0,0 @@
# Tower Balance
Balance load across a set of uniform services.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.

View File

@ -1,235 +0,0 @@
//! Exercises load balancers with mocked services.
use env_logger;
use futures::{future, stream, Future, Stream};
use hdrsample::Histogram;
use rand::{self, Rng};
use std::time::{Duration, Instant};
use tokio::{runtime, timer};
use tower::{discover::Discover, limit::concurrency::ConcurrencyLimit, Service, ServiceExt};
use tower_balance as lb;
const REQUESTS: usize = 50_000;
const CONCURRENCY: usize = 500;
const DEFAULT_RTT: Duration = Duration::from_millis(30);
static ENDPOINT_CAPACITY: usize = CONCURRENCY;
static MAX_ENDPOINT_LATENCIES: [Duration; 10] = [
Duration::from_millis(1),
Duration::from_millis(5),
Duration::from_millis(10),
Duration::from_millis(10),
Duration::from_millis(10),
Duration::from_millis(100),
Duration::from_millis(100),
Duration::from_millis(100),
Duration::from_millis(500),
Duration::from_millis(1000),
];
static WEIGHTS: [f64; 10] = [1.0, 1.0, 1.0, 0.5, 1.5, 0.5, 1.5, 1.0, 1.0, 1.0];
struct Summary {
latencies: Histogram<u64>,
start: Instant,
count_by_instance: [usize; 10],
}
fn main() {
env_logger::init();
println!("REQUESTS={}", REQUESTS);
println!("CONCURRENCY={}", CONCURRENCY);
println!("ENDPOINT_CAPACITY={}", ENDPOINT_CAPACITY);
print!("MAX_ENDPOINT_LATENCIES=[");
for max in &MAX_ENDPOINT_LATENCIES {
let l = max.as_secs() * 1_000 + u64::from(max.subsec_nanos() / 1_000 / 1_000);
print!("{}ms, ", l);
}
println!("]");
print!("WEIGHTS=[");
for w in &WEIGHTS {
print!("{}, ", w);
}
println!("]");
let mut rt = runtime::Runtime::new().unwrap();
// Show weighted behavior first...
let fut = future::lazy(move || {
let decay = Duration::from_secs(10);
let d = gen_disco();
let pe = lb::Balance::p2c(lb::WithWeighted::from(lb::load::WithPeakEwma::new(
d,
DEFAULT_RTT,
decay,
lb::load::NoInstrument,
)));
run("P2C+PeakEWMA w/ weights", pe)
});
let fut = fut.then(move |_| {
let d = gen_disco();
let ll = lb::Balance::p2c(lb::WithWeighted::from(lb::load::WithPendingRequests::new(
d,
lb::load::NoInstrument,
)));
run("P2C+LeastLoaded w/ weights", ll)
});
// Then run through standard comparisons...
let fut = fut.then(move |_| {
let decay = Duration::from_secs(10);
let d = gen_disco();
let pe = lb::Balance::p2c(lb::load::WithPeakEwma::new(
d,
DEFAULT_RTT,
decay,
lb::load::NoInstrument,
));
run("P2C+PeakEWMA", pe)
});
let fut = fut.then(move |_| {
let d = gen_disco();
let ll = lb::Balance::p2c(lb::load::WithPendingRequests::new(
d,
lb::load::NoInstrument,
));
run("P2C+LeastLoaded", ll)
});
let fut = fut.and_then(move |_| {
let rr = lb::Balance::round_robin(gen_disco());
run("RoundRobin", rr)
});
rt.spawn(fut);
rt.shutdown_on_idle().wait().unwrap();
}
type Error = Box<dyn std::error::Error + Send + Sync>;
fn gen_disco() -> impl Discover<
Key = usize,
Error = impl Into<Error>,
Service = lb::Weighted<
impl Service<Req, Response = Rsp, Error = Error, Future = impl Send> + Send,
>,
> + Send {
let svcs = MAX_ENDPOINT_LATENCIES
.iter()
.zip(WEIGHTS.iter())
.enumerate()
.map(|(instance, (latency, weight))| {
let svc = tower::service_fn(move |_| {
let start = Instant::now();
let maxms = u64::from(latency.subsec_nanos() / 1_000 / 1_000)
.saturating_add(latency.as_secs().saturating_mul(1_000));
let latency = Duration::from_millis(rand::thread_rng().gen_range(0, maxms));
timer::Delay::new(start + latency).map(move |_| {
let latency = start.elapsed();
Rsp { latency, instance }
})
});
let svc = ConcurrencyLimit::new(svc, ENDPOINT_CAPACITY);
lb::Weighted::new(svc, *weight)
});
tower_discover::ServiceList::new(svcs)
}
fn run<D, C>(name: &'static str, lb: lb::Balance<D, C>) -> impl Future<Item = (), Error = ()>
where
D: Discover + Send + 'static,
D::Error: Into<Error>,
D::Key: Send,
D::Service: Service<Req, Response = Rsp, Error = Error> + Send,
<D::Service as Service<Req>>::Future: Send,
C: lb::Choose<D::Key, D::Service> + Send + 'static,
{
println!("{}", name);
let requests = stream::repeat::<_, Error>(Req).take(REQUESTS as u64);
let service = ConcurrencyLimit::new(lb, CONCURRENCY);
let responses = service.call_all(requests).unordered();
compute_histo(responses).map(|s| s.report()).map_err(|_| {})
}
fn compute_histo<S>(times: S) -> impl Future<Item = Summary, Error = Error> + 'static
where
S: Stream<Item = Rsp, Error = Error> + 'static,
{
times.fold(Summary::new(), |mut summary, rsp| {
summary.count(rsp);
Ok(summary) as Result<_, Error>
})
}
impl Summary {
fn new() -> Self {
Self {
// The max delay is 2000ms. At 3 significant figures.
latencies: Histogram::<u64>::new_with_max(3_000, 3).unwrap(),
start: Instant::now(),
count_by_instance: [0; 10],
}
}
fn count(&mut self, rsp: Rsp) {
let ms = rsp.latency.as_secs() * 1_000;
let ms = ms + u64::from(rsp.latency.subsec_nanos()) / 1_000 / 1_000;
self.latencies += ms;
self.count_by_instance[rsp.instance] += 1;
}
fn report(&self) {
let mut total = 0;
for c in &self.count_by_instance {
total += c;
}
for (i, c) in self.count_by_instance.into_iter().enumerate() {
let p = *c as f64 / total as f64 * 100.0;
println!(" [{:02}] {:>5.01}%", i, p);
}
println!(" wall {:4}s", self.start.elapsed().as_secs());
if self.latencies.len() < 2 {
return;
}
println!(" p50 {:4}ms", self.latencies.value_at_quantile(0.5));
if self.latencies.len() < 10 {
return;
}
println!(" p90 {:4}ms", self.latencies.value_at_quantile(0.9));
if self.latencies.len() < 50 {
return;
}
println!(" p95 {:4}ms", self.latencies.value_at_quantile(0.95));
if self.latencies.len() < 100 {
return;
}
println!(" p99 {:4}ms", self.latencies.value_at_quantile(0.99));
if self.latencies.len() < 1000 {
return;
}
println!(" p999 {:4}ms", self.latencies.value_at_quantile(0.999));
}
}
#[derive(Debug, Clone)]
struct Req;
#[derive(Debug)]
struct Rsp {
latency: Duration,
instance: usize,
}

View File

@ -1,49 +0,0 @@
use indexmap::IndexMap;
mod p2c;
mod round_robin;
pub use self::{p2c::PowerOfTwoChoices, round_robin::RoundRobin};
/// A strategy for choosing nodes.
// TODO hide `K`
pub trait Choose<K, N> {
/// Returns the index of a replica to be used next.
///
/// `replicas` cannot be empty, so this function must always return a valid index on
/// [0, replicas.len()-1].
fn choose(&mut self, replicas: Replicas<K, N>) -> usize;
}
/// Creates a `Replicas` if there are two or more services.
///
pub(crate) fn replicas<K, S>(inner: &IndexMap<K, S>) -> Result<Replicas<K, S>, TooFew> {
if inner.len() < 2 {
return Err(TooFew);
}
Ok(Replicas(inner))
}
/// Indicates that there were not at least two services.
#[derive(Copy, Clone, Debug)]
pub struct TooFew;
/// Holds two or more services.
// TODO hide `K`
pub struct Replicas<'a, K, S>(&'a IndexMap<K, S>);
impl<K, S> Replicas<'_, K, S> {
pub fn len(&self) -> usize {
self.0.len()
}
}
impl<K, S> ::std::ops::Index<usize> for Replicas<'_, K, S> {
type Output = S;
fn index(&self, idx: usize) -> &Self::Output {
let (_, service) = self.0.get_index(idx).expect("out of bounds");
service
}
}


@ -1,108 +0,0 @@
use log::trace;
use rand::{rngs::SmallRng, FromEntropy, Rng};
use crate::{
choose::{Choose, Replicas},
Load,
};
/// Chooses nodes using the [Power of Two Choices][p2c].
///
/// This is a load-aware strategy, so this may only be used to choose over services that
/// implement `Load`.
///
/// As described in the [Finagle Guide][finagle]:
/// > The algorithm randomly picks two nodes from the set of ready endpoints and selects
/// > the least loaded of the two. By repeatedly using this strategy, we can expect a
/// > manageable upper bound on the maximum load of any server.
/// >
/// > The maximum load variance between any two servers is bound by `ln(ln(n))` where `n`
/// > is the number of servers in the cluster.
///
/// [finagle]: https://twitter.github.io/finagle/guide/Clients.html#power-of-two-choices-p2c-least-loaded
/// [p2c]: http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
#[derive(Debug)]
pub struct PowerOfTwoChoices {
rng: SmallRng,
}
// ==== impl PowerOfTwoChoices ====
impl Default for PowerOfTwoChoices {
fn default() -> Self {
Self::new(SmallRng::from_entropy())
}
}
impl PowerOfTwoChoices {
pub fn new(rng: SmallRng) -> Self {
Self { rng }
}
/// Returns two random, distinct indices into `ready`.
fn random_pair(&mut self, len: usize) -> (usize, usize) {
debug_assert!(len >= 2);
// Choose a random number on [0, len-1].
let idx0 = self.rng.gen::<usize>() % len;
let idx1 = {
// Choose a random number on [1, len-1].
let delta = (self.rng.gen::<usize>() % (len - 1)) + 1;
// Add it to `idx0` and then mod on `len` to produce a value on
// [idx0+1, len-1] or [0, idx0-1].
(idx0 + delta) % len
};
debug_assert!(idx0 != idx1, "random pair must be distinct");
return (idx0, idx1);
}
}
impl<K, L> Choose<K, L> for PowerOfTwoChoices
where
L: Load,
L::Metric: PartialOrd + ::std::fmt::Debug,
{
/// Chooses two distinct nodes at random and compares their load.
///
/// Returns the index of the lesser-loaded node.
fn choose(&mut self, replicas: Replicas<K, L>) -> usize {
let (a, b) = self.random_pair(replicas.len());
let a_load = replicas[a].load();
let b_load = replicas[b].load();
trace!(
"choose node[{a}]={a_load:?} node[{b}]={b_load:?}",
a = a,
b = b,
a_load = a_load,
b_load = b_load
);
if a_load <= b_load {
a
} else {
b
}
}
}
#[cfg(test)]
mod tests {
use quickcheck::*;
use super::*;
quickcheck! {
fn distinct_random_pairs(n: usize) -> TestResult {
if n < 2 {
return TestResult::discard();
}
let mut p2c = PowerOfTwoChoices::default();
let (a, b) = p2c.random_pair(n);
TestResult::from_bool(a != b)
}
}
}
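The selection step documented above can be written as a standalone sketch, independent of the crate's private `Replicas` type (the `p2c_pick` helper below is illustrative, not part of the listing; the RNG calls mirror `random_pair`):

use rand::{rngs::SmallRng, FromEntropy, Rng};

// Pick two distinct indices at random, then keep the lesser-loaded of the two.
fn p2c_pick<R: Rng>(loads: &[u64], rng: &mut R) -> usize {
    let len = loads.len();
    debug_assert!(len >= 2);
    let a = rng.gen::<usize>() % len;
    // An offset in [1, len-1] guarantees the second index differs from the first.
    let b = (a + 1 + rng.gen::<usize>() % (len - 1)) % len;
    if loads[a] <= loads[b] {
        a
    } else {
        b
    }
}

fn main() {
    let mut rng = SmallRng::from_entropy();
    let idx = p2c_pick(&[5, 1, 9, 3], &mut rng);
    assert!(idx < 4);
}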


@ -1,23 +0,0 @@
use crate::choose::{Choose, Replicas};
/// Chooses nodes sequentially.
///
/// This strategy is load-agnostic and may therefore be used to choose over any type of
/// service.
///
/// Note that ordering is not strictly enforced, especially when services are removed by
/// the balancer.
#[derive(Debug, Default)]
pub struct RoundRobin {
/// References the index of the next node to be used.
pos: usize,
}
impl<K, N> Choose<K, N> for RoundRobin {
fn choose(&mut self, nodes: Replicas<K, N>) -> usize {
let len = nodes.len();
let idx = self.pos % len;
self.pos = (idx + 1) % len;
idx
}
}
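A tiny sketch of the wrap-around arithmetic above (plain integers, no crate types): `pos % len` keeps the next index valid even when the replica set shrinks, which is why ordering is only loosely enforced.

fn main() {
    let mut pos = 0usize;
    // Mirrors RoundRobin::choose for a replica set of the given length.
    let mut pick = |len: usize| {
        let idx = pos % len;
        pos = (idx + 1) % len;
        idx
    };
    assert_eq!(pick(3), 0);
    assert_eq!(pick(3), 1);
    assert_eq!(pick(3), 2);
    // Two of the three replicas are removed: the position simply wraps on the smaller set.
    assert_eq!(pick(1), 0);
}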


@ -1,18 +0,0 @@
use std::fmt;
pub(crate) type Error = Box<dyn std::error::Error + Send + Sync>;
#[derive(Debug)]
pub struct Balance(pub(crate) Error);
impl fmt::Display for Balance {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "load balancing discover error: {}", self.0)
}
}
impl std::error::Error for Balance {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
Some(&*self.0)
}
}


@ -1,23 +0,0 @@
use crate::error::Error;
use futures::{Future, Poll};
pub struct ResponseFuture<F>(F);
impl<F> ResponseFuture<F> {
pub(crate) fn new(future: F) -> ResponseFuture<F> {
ResponseFuture(future)
}
}
impl<F> Future for ResponseFuture<F>
where
F: Future,
F::Error: Into<Error>,
{
type Item = F::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
self.0.poll().map_err(Into::into)
}
}


@ -1,344 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-balance/0.1.0")]
#![deny(rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
#[cfg(test)]
extern crate quickcheck;
use futures::{Async, Poll};
use indexmap::IndexMap;
use log::{debug, trace};
use rand::{rngs::SmallRng, SeedableRng};
use std::fmt;
use tower_discover::Discover;
use tower_service::Service;
pub mod choose;
pub mod error;
pub mod future;
pub mod load;
pub mod pool;
#[cfg(test)]
mod test;
pub use self::{
choose::Choose,
load::{
weight::{HasWeight, Weight, Weighted, WithWeighted},
Load,
},
pool::Pool,
};
use self::{error::Error, future::ResponseFuture};
/// Balances requests across a set of inner services.
#[derive(Debug)]
pub struct Balance<D: Discover, C> {
/// Provides endpoints from service discovery.
discover: D,
/// Determines which endpoint is ready to be used next.
choose: C,
/// Holds an index into `ready`, indicating the service that has been chosen to
/// dispatch the next request.
chosen_ready_index: Option<usize>,
/// Holds an index into `ready`, indicating the service that dispatched the last
/// request.
dispatched_ready_index: Option<usize>,
/// Holds all possibly-available endpoints (i.e. from `discover`).
ready: IndexMap<D::Key, D::Service>,
/// Newly-added endpoints that have not yet become ready.
not_ready: IndexMap<D::Key, D::Service>,
}
// ===== impl Balance =====
impl<D> Balance<D, choose::PowerOfTwoChoices>
where
D: Discover,
D::Service: Load,
<D::Service as Load>::Metric: PartialOrd + fmt::Debug,
{
/// Chooses services using the [Power of Two Choices][p2c].
///
/// This configuration is preferred when a load metric is known.
///
/// As described in the [Finagle Guide][finagle]:
///
/// > The algorithm randomly picks two services from the set of ready endpoints and
/// > selects the least loaded of the two. By repeatedly using this strategy, we can
/// > expect a manageable upper bound on the maximum load of any server.
/// >
/// > The maximum load variance between any two servers is bound by `ln(ln(n))` where
/// > `n` is the number of servers in the cluster.
///
/// [finagle]: https://twitter.github.io/finagle/guide/Clients.html#power-of-two-choices-p2c-least-loaded
/// [p2c]: http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
pub fn p2c(discover: D) -> Self {
Self::new(discover, choose::PowerOfTwoChoices::default())
}
/// Initializes a P2C load balancer from the provided randomization source.
///
/// This may be preferable when an application instantiates many balancers.
pub fn p2c_with_rng<R: rand::Rng>(discover: D, rng: &mut R) -> Result<Self, rand::Error> {
let rng = SmallRng::from_rng(rng)?;
Ok(Self::new(discover, choose::PowerOfTwoChoices::new(rng)))
}
}
impl<D: Discover> Balance<D, choose::RoundRobin> {
/// Attempts to choose services sequentially.
///
/// This configuration is preferred when no load metric is known.
pub fn round_robin(discover: D) -> Self {
Self::new(discover, choose::RoundRobin::default())
}
}
impl<D, C> Balance<D, C>
where
D: Discover,
C: Choose<D::Key, D::Service>,
{
/// Creates a new balancer.
pub fn new(discover: D, choose: C) -> Self {
Self {
discover,
choose,
chosen_ready_index: None,
dispatched_ready_index: None,
ready: IndexMap::default(),
not_ready: IndexMap::default(),
}
}
/// Returns true iff there are ready services.
///
/// This is not authoritative and is only useful after `poll_ready` has been called.
pub fn is_ready(&self) -> bool {
!self.ready.is_empty()
}
/// Returns true iff there are no ready services.
///
/// This is not authoritative and is only useful after `poll_ready` has been called.
pub fn is_not_ready(&self) -> bool {
self.ready.is_empty()
}
/// Counts the number of services considered to be ready.
///
/// This is not authoritative and is only useful after `poll_ready` has been called.
pub fn num_ready(&self) -> usize {
self.ready.len()
}
/// Counts the number of services not considered to be ready.
///
/// This is not authoritative and is only useful after `poll_ready` has been called.
pub fn num_not_ready(&self) -> usize {
self.not_ready.len()
}
}
impl<D, C> Balance<D, C>
where
D: Discover,
D::Error: Into<Error>,
C: Choose<D::Key, D::Service>,
{
/// Polls `discover` for updates, adding new items to `not_ready`.
///
/// Removals may alter the order of either `ready` or `not_ready`.
fn update_from_discover(&mut self) -> Result<(), error::Balance> {
debug!("updating from discover");
use tower_discover::Change::*;
while let Async::Ready(change) =
self.discover.poll().map_err(|e| error::Balance(e.into()))?
{
match change {
Insert(key, svc) => {
// If the `Insert`ed service is a duplicate of a service already
// in the ready list, remove the ready service first. The new
// service will then be inserted into the not-ready list.
self.ready.remove(&key);
self.not_ready.insert(key, svc);
}
Remove(key) => {
let _ejected = match self.ready.remove(&key) {
None => self.not_ready.remove(&key),
Some(s) => Some(s),
};
// XXX is it safe to just drop the Service? Or do we need some sort of
// graceful teardown?
// TODO: poll_close
}
}
}
Ok(())
}
/// Calls `poll_ready` on all services in `not_ready`.
///
/// When `poll_ready` returns ready, the service is removed from `not_ready` and inserted
/// into `ready`, potentially altering the order of `ready` and/or `not_ready`.
fn promote_to_ready<Request>(&mut self) -> Result<(), <D::Service as Service<Request>>::Error>
where
D::Service: Service<Request>,
{
let n = self.not_ready.len();
if n == 0 {
trace!("promoting to ready: not_ready is empty, skipping.");
return Ok(());
}
debug!("promoting to ready: {}", n);
// Iterate through the not-ready endpoints from right to left to prevent removals
// from reordering services in a way that could prevent a service from being polled.
for idx in (0..n).rev() {
let is_ready = {
let (_, svc) = self
.not_ready
.get_index_mut(idx)
.expect("invalid not_ready index");;
svc.poll_ready()?.is_ready()
};
trace!("not_ready[{:?}]: is_ready={:?};", idx, is_ready);
if is_ready {
debug!("not_ready[{:?}]: promoting to ready", idx);
let (key, svc) = self
.not_ready
.swap_remove_index(idx)
.expect("invalid not_ready index");
self.ready.insert(key, svc);
} else {
debug!("not_ready[{:?}]: not promoting to ready", idx);
}
}
debug!("promoting to ready: done");
Ok(())
}
/// Polls a `ready` service or moves it to `not_ready`.
///
/// If the service exists in `ready` and does not poll as ready, it is moved to
/// `not_ready`, potentially altering the order of `ready` and/or `not_ready`.
fn poll_ready_index<Request>(
&mut self,
idx: usize,
) -> Option<Poll<(), <D::Service as Service<Request>>::Error>>
where
D::Service: Service<Request>,
{
match self.ready.get_index_mut(idx) {
None => return None,
Some((_, svc)) => match svc.poll_ready() {
Ok(Async::Ready(())) => return Some(Ok(Async::Ready(()))),
Err(e) => return Some(Err(e)),
Ok(Async::NotReady) => {}
},
}
let (key, svc) = self
.ready
.swap_remove_index(idx)
.expect("invalid ready index");
self.not_ready.insert(key, svc);
Some(Ok(Async::NotReady))
}
/// Chooses the next service to which a request will be dispatched.
///
/// Ensures that the chosen service polls as ready before `Async::Ready` is returned.
fn choose_and_poll_ready<Request>(
&mut self,
) -> Poll<(), <D::Service as Service<Request>>::Error>
where
D::Service: Service<Request>,
{
loop {
let n = self.ready.len();
debug!("choosing from {} replicas", n);
let idx = match n {
0 => return Ok(Async::NotReady),
1 => 0,
_ => {
let replicas = choose::replicas(&self.ready).expect("too few replicas");
self.choose.choose(replicas)
}
};
// XXX Should we handle per-endpoint errors?
if self
.poll_ready_index(idx)
.expect("invalid ready index")?
.is_ready()
{
self.chosen_ready_index = Some(idx);
return Ok(Async::Ready(()));
}
}
}
}
impl<D, C, Svc, Request> Service<Request> for Balance<D, C>
where
D: Discover<Service = Svc>,
D::Error: Into<Error>,
Svc: Service<Request>,
Svc::Error: Into<Error>,
C: Choose<D::Key, Svc>,
{
type Response = Svc::Response;
type Error = Error;
type Future = ResponseFuture<Svc::Future>;
/// Prepares the balancer to process a request.
///
/// When `Async::Ready` is returned, `chosen_ready_index` is set with a valid index
/// into `ready` referring to a `Service` that is ready to dispatch a request.
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
// Clear before `ready` is altered.
self.chosen_ready_index = None;
// Before `ready` is altered, check the readiness of the last-used service, moving it
// to `not_ready` if appropriate.
if let Some(idx) = self.dispatched_ready_index.take() {
// XXX Should we handle per-endpoint errors?
self.poll_ready_index(idx)
.expect("invalid dispatched ready key")
.map_err(Into::into)?;
}
// Update `not_ready` and `ready`.
self.update_from_discover()?;
self.promote_to_ready().map_err(Into::into)?;
// Choose the next service to be used by `call`.
self.choose_and_poll_ready().map_err(Into::into)
}
fn call(&mut self, request: Request) -> Self::Future {
let idx = self.chosen_ready_index.take().expect("not ready");
let (_, svc) = self
.ready
.get_index_mut(idx)
.expect("invalid chosen ready index");
self.dispatched_ready_index = Some(idx);
let rsp = svc.call(request);
ResponseFuture::new(rsp)
}
}


@ -1,64 +0,0 @@
use futures::{try_ready, Async, Poll};
use tower_discover::{Change, Discover};
use tower_service::Service;
use crate::Load;
/// Wraps a type so that `Load::load` returns a constant value.
pub struct Constant<T, M> {
inner: T,
load: M,
}
// ===== impl Constant =====
impl<T, M: Copy> Constant<T, M> {
pub fn new(inner: T, load: M) -> Self {
Self { inner, load }
}
}
impl<T, M: Copy + PartialOrd> Load for Constant<T, M> {
type Metric = M;
fn load(&self) -> M {
self.load
}
}
impl<S, M, Request> Service<Request> for Constant<S, M>
where
S: Service<Request>,
M: Copy,
{
type Response = S::Response;
type Error = S::Error;
type Future = S::Future;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
self.inner.poll_ready()
}
fn call(&mut self, req: Request) -> Self::Future {
self.inner.call(req)
}
}
/// Proxies `Discover` such that all changes are wrapped with a constant load.
impl<D: Discover, M: Copy> Discover for Constant<D, M> {
type Key = D::Key;
type Service = Constant<D::Service, M>;
type Error = D::Error;
/// Yields the next discovery change set.
fn poll(&mut self) -> Poll<Change<D::Key, Self::Service>, D::Error> {
use self::Change::*;
let change = match try_ready!(self.inner.poll()) {
Insert(k, svc) => Insert(k, Constant::new(svc, self.load)),
Remove(k) => Remove(k),
};
Ok(Async::Ready(change))
}
}


@ -1,84 +0,0 @@
use futures::{try_ready, Future, Poll};
/// Attaches `H`-typed handles to `V`-typed values.
///
/// This utility allows load metrics to have a protocol-agnostic means to track streams
/// past their initial response future. For example, if `V` represents an HTTP response
/// type, an implementation could add `H`-typed handles to each response's extensions to
/// detect when the response is dropped.
///
/// Handles are intended to be RAII guards that primarily implement `Drop` and update load
/// metric state as they are dropped.
///
/// A base `impl<H, V> Instrument<H, V> for NoInstrument` is provided to drop the handle
/// immediately. This is appropriate when a response is discrete and cannot comprise
/// multiple messages.
///
/// In many cases, the `Output` type is simply `V`. However, `Instrument` may alter the
/// type in order to instrument it appropriately. For example, an HTTP Instrument may
/// modify the body type: so an `Instrument` that takes values of type `http::Response<A>`
/// may output values of type `http::Response<B>`.
pub trait Instrument<H, V>: Clone {
type Output;
/// Attaches an `H`-typed handle to a `V`-typed value.
fn instrument(&self, handle: H, value: V) -> Self::Output;
}
/// An `Instrument` implementation that drops each handle immediately.
#[derive(Clone, Copy, Debug)]
pub struct NoInstrument;
/// Attaches an `H`-typed handle to the result of an `F`-typed `Future` using an `I`-typed `Instrument`.
#[derive(Debug)]
pub struct InstrumentFuture<F, I, H>
where
F: Future,
I: Instrument<H, F::Item>,
{
future: F,
handle: Option<H>,
instrument: I,
}
// ===== impl InstrumentFuture =====
impl<F, I, H> InstrumentFuture<F, I, H>
where
F: Future,
I: Instrument<H, F::Item>,
{
pub fn new(instrument: I, handle: H, future: F) -> Self {
InstrumentFuture {
future,
instrument,
handle: Some(handle),
}
}
}
impl<F, I, H> Future for InstrumentFuture<F, I, H>
where
F: Future,
I: Instrument<H, F::Item>,
{
type Item = I::Output;
type Error = F::Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
let rsp = try_ready!(self.future.poll());
let h = self.handle.take().expect("handle");
Ok(self.instrument.instrument(h, rsp).into())
}
}
// ===== NoInstrument =====
impl<H, V> Instrument<H, V> for NoInstrument {
type Output = V;
fn instrument(&self, handle: H, value: V) -> V {
drop(handle);
value
}
}
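A minimal sketch of a non-trivial `Instrument` implementation of the RAII pattern described above (`AttachHandle` and `Tracked` are illustrative names, not part of the crate): the handle rides inside the instrumented response, so the load metric is released only when the caller drops the whole response.

use tower_balance::load::Instrument;

#[derive(Clone)]
struct AttachHandle;

struct Tracked<H, V> {
    value: V,
    // Dropped together with the response, releasing the load metric.
    _handle: H,
}

impl<H, V> Instrument<H, V> for AttachHandle {
    type Output = Tracked<H, V>;
    fn instrument(&self, handle: H, value: V) -> Self::Output {
        Tracked { value, _handle: handle }
    }
}

fn main() {
    let tracked = AttachHandle.instrument("handle", "response");
    assert_eq!(tracked.value, "response");
    drop(tracked); // only now is the handle released
}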


@ -1,22 +0,0 @@
mod constant;
mod instrument;
pub mod peak_ewma;
pub mod pending_requests;
pub(crate) mod weight;
pub use self::{
constant::Constant,
instrument::{Instrument, InstrumentFuture, NoInstrument},
peak_ewma::{PeakEwma, WithPeakEwma},
pending_requests::{PendingRequests, WithPendingRequests},
};
/// Exposes a load metric.
///
/// Implementors should choose load values so that lesser-loaded instances return lesser
/// values than higher-load instances.
pub trait Load {
type Metric: PartialOrd;
fn load(&self) -> Self::Metric;
}
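A minimal sketch of the contract above for a hypothetical endpoint type (`QueueDepth` is an assumed name, not part of the crate): any `PartialOrd` metric works, as long as a lesser value means less load.

use tower_balance::load::Load;

struct QueueDepth {
    depth: usize,
}

impl Load for QueueDepth {
    type Metric = usize;

    fn load(&self) -> usize {
        self.depth
    }
}

fn main() {
    let (a, b) = (QueueDepth { depth: 3 }, QueueDepth { depth: 7 });
    // The balancer prefers the endpoint reporting the lesser metric.
    assert!(a.load() < b.load());
}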


@ -1,213 +0,0 @@
use futures::{try_ready, Async, Poll};
use std::{ops, sync::Arc};
use tower_discover::{Change, Discover};
use tower_service::Service;
use super::{Instrument, InstrumentFuture, NoInstrument};
use crate::{HasWeight, Load, Weight};
/// Expresses load based on the number of currently-pending requests.
#[derive(Debug)]
pub struct PendingRequests<S, I = NoInstrument> {
service: S,
ref_count: RefCount,
instrument: I,
}
/// Shared between instances of `PendingRequests` and `Handle` to track active
/// references.
#[derive(Clone, Debug, Default)]
struct RefCount(Arc<()>);
/// Wraps `inner`'s services with `PendingRequests`.
#[derive(Debug)]
pub struct WithPendingRequests<D, I = NoInstrument> {
discover: D,
instrument: I,
}
/// Represents the number of currently-pending requests to a given service.
#[derive(Clone, Copy, Debug, Default, PartialOrd, PartialEq, Ord, Eq)]
pub struct Count(usize);
#[derive(Debug)]
pub struct Handle(RefCount);
// ===== impl Count =====
impl ops::Div<Weight> for Count {
type Output = f64;
fn div(self, weight: Weight) -> f64 {
self.0 / weight
}
}
// ===== impl PendingRequests =====
impl<S, I> PendingRequests<S, I> {
fn new(service: S, instrument: I) -> Self {
Self {
service,
instrument,
ref_count: RefCount::default(),
}
}
fn handle(&self) -> Handle {
Handle(self.ref_count.clone())
}
}
impl<S, I> Load for PendingRequests<S, I> {
type Metric = Count;
fn load(&self) -> Count {
// Count the number of references that aren't `self`.
Count(self.ref_count.ref_count() - 1)
}
}
impl<S: HasWeight, I> HasWeight for PendingRequests<S, I> {
fn weight(&self) -> Weight {
self.service.weight()
}
}
impl<S, I, Request> Service<Request> for PendingRequests<S, I>
where
S: Service<Request>,
I: Instrument<Handle, S::Response>,
{
type Response = I::Output;
type Error = S::Error;
type Future = InstrumentFuture<S::Future, I, Handle>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
self.service.poll_ready()
}
fn call(&mut self, req: Request) -> Self::Future {
InstrumentFuture::new(
self.instrument.clone(),
self.handle(),
self.service.call(req),
)
}
}
// ===== impl WithPendingRequests =====
impl<D, I> WithPendingRequests<D, I> {
pub fn new<Request>(discover: D, instrument: I) -> Self
where
D: Discover,
D::Service: Service<Request>,
I: Instrument<Handle, <D::Service as Service<Request>>::Response>,
{
Self {
discover,
instrument,
}
}
}
impl<D, I> Discover for WithPendingRequests<D, I>
where
D: Discover,
I: Clone,
{
type Key = D::Key;
type Service = PendingRequests<D::Service, I>;
type Error = D::Error;
/// Yields the next discovery change set.
fn poll(&mut self) -> Poll<Change<D::Key, Self::Service>, D::Error> {
use self::Change::*;
let change = match try_ready!(self.discover.poll()) {
Insert(k, svc) => Insert(k, PendingRequests::new(svc, self.instrument.clone())),
Remove(k) => Remove(k),
};
Ok(Async::Ready(change))
}
}
// ==== RefCount ====
impl RefCount {
pub fn ref_count(&self) -> usize {
Arc::strong_count(&self.0)
}
}
#[cfg(test)]
mod tests {
use super::*;
use futures::{future, Future, Poll};
struct Svc;
impl Service<()> for Svc {
type Response = ();
type Error = ();
type Future = future::FutureResult<(), ()>;
fn poll_ready(&mut self) -> Poll<(), ()> {
Ok(().into())
}
fn call(&mut self, (): ()) -> Self::Future {
future::ok(())
}
}
#[test]
fn default() {
let mut svc = PendingRequests::new(Svc, NoInstrument);
assert_eq!(svc.load(), Count(0));
let rsp0 = svc.call(());
assert_eq!(svc.load(), Count(1));
let rsp1 = svc.call(());
assert_eq!(svc.load(), Count(2));
let () = rsp0.wait().unwrap();
assert_eq!(svc.load(), Count(1));
let () = rsp1.wait().unwrap();
assert_eq!(svc.load(), Count(0));
}
#[test]
fn instrumented() {
#[derive(Clone)]
struct IntoHandle;
impl Instrument<Handle, ()> for IntoHandle {
type Output = Handle;
fn instrument(&self, i: Handle, (): ()) -> Handle {
i
}
}
let mut svc = PendingRequests::new(Svc, IntoHandle);
assert_eq!(svc.load(), Count(0));
let rsp = svc.call(());
assert_eq!(svc.load(), Count(1));
let i0 = rsp.wait().unwrap();
assert_eq!(svc.load(), Count(1));
let rsp = svc.call(());
assert_eq!(svc.load(), Count(2));
let i1 = rsp.wait().unwrap();
assert_eq!(svc.load(), Count(2));
drop(i1);
assert_eq!(svc.load(), Count(1));
drop(i0);
assert_eq!(svc.load(), Count(0));
}
}


@ -1,167 +0,0 @@
use futures::{try_ready, Async, Poll};
use std::ops;
use tower_discover::{Change, Discover};
use tower_service::Service;
use crate::Load;
/// A weight on [0.0, ∞].
///
/// Lesser-weighted nodes receive less traffic than heavier-weighted nodes.
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd)]
pub struct Weight(f64);
/// A `Service` wrapper that implements `Load` by dividing the inner service's load by its weight.
#[derive(Clone, Debug, PartialEq, PartialOrd)]
pub struct Weighted<T> {
inner: T,
weight: Weight,
}
#[derive(Debug)]
pub struct WithWeighted<T>(T);
pub trait HasWeight {
fn weight(&self) -> Weight;
}
// === impl Weighted ===
impl<T: HasWeight> From<T> for Weighted<T> {
fn from(inner: T) -> Self {
let weight = inner.weight();
Self { inner, weight }
}
}
impl<T> HasWeight for Weighted<T> {
fn weight(&self) -> Weight {
self.weight
}
}
impl<T> Weighted<T> {
pub fn new<W: Into<Weight>>(inner: T, w: W) -> Self {
let weight = w.into();
Self { inner, weight }
}
pub fn into_parts(self) -> (T, Weight) {
let Self { inner, weight } = self;
(inner, weight)
}
}
impl<L> Load for Weighted<L>
where
L: Load,
L::Metric: ops::Div<Weight>,
<L::Metric as ops::Div<Weight>>::Output: PartialOrd,
{
type Metric = <L::Metric as ops::Div<Weight>>::Output;
fn load(&self) -> Self::Metric {
self.inner.load() / self.weight
}
}
impl<R, S: Service<R>> Service<R> for Weighted<S> {
type Response = S::Response;
type Error = S::Error;
type Future = S::Future;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
self.inner.poll_ready()
}
fn call(&mut self, req: R) -> Self::Future {
self.inner.call(req)
}
}
// === impl WithWeighted ===
impl<D> From<D> for WithWeighted<D>
where
D: Discover,
D::Service: HasWeight,
{
fn from(d: D) -> Self {
WithWeighted(d)
}
}
impl<D> Discover for WithWeighted<D>
where
D: Discover,
D::Service: HasWeight,
{
type Key = D::Key;
type Error = D::Error;
type Service = Weighted<D::Service>;
fn poll(&mut self) -> Poll<Change<D::Key, Self::Service>, Self::Error> {
let c = match try_ready!(self.0.poll()) {
Change::Insert(k, svc) => Change::Insert(k, Weighted::from(svc)),
Change::Remove(k) => Change::Remove(k),
};
Ok(Async::Ready(c))
}
}
// === impl Weight ===
impl Weight {
pub const MIN: Weight = Weight(0.0);
pub const DEFAULT: Weight = Weight(1.0);
}
impl Default for Weight {
fn default() -> Self {
Weight::DEFAULT
}
}
impl From<f64> for Weight {
fn from(w: f64) -> Self {
if w < 0.0 {
Weight::MIN
} else {
Weight(w)
}
}
}
impl Into<f64> for Weight {
fn into(self) -> f64 {
self.0
}
}
impl ops::Div<Weight> for f64 {
type Output = f64;
fn div(self, Weight(w): Weight) -> f64 {
if w == 0.0 {
::std::f64::INFINITY
} else {
self / w
}
}
}
impl ops::Div<Weight> for usize {
type Output = f64;
fn div(self, w: Weight) -> f64 {
(self as f64) / w
}
}
#[test]
fn div_min() {
assert_eq!(10.0 / Weight::MIN, ::std::f64::INFINITY);
assert_eq!(10 / Weight::MIN, ::std::f64::INFINITY);
assert_eq!(0 / Weight::MIN, ::std::f64::INFINITY);
}
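A small sketch of what the division impls above buy the balancer, using only the `Weight` API re-exported in the lib.rs listing: dividing raw load by the weight makes a heavier-weighted endpoint compare as less loaded, and a zero weight never wins because its effective load is infinite.

use tower_balance::Weight;

fn main() {
    let heavy = Weight::from(2.0);
    let light = Weight::default(); // 1.0
    // Four pending requests at weight 2 look exactly as loaded as two at weight 1.
    assert_eq!(4usize / heavy, 2.0);
    assert_eq!(2usize / light, 2.0);
    // Weight::MIN (0.0) maps any load to infinity, so it is never the lesser-loaded choice.
    assert!((1usize / Weight::MIN).is_infinite());
}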


@ -1,295 +0,0 @@
//! This module defines a load-balanced pool of services that adds new services when load is high.
//!
//! The pool uses `poll_ready` as a signal indicating whether additional services should be spawned
//! to handle the current level of load. Specifically, every time `poll_ready` on the inner service
//! returns `Ready`, [`Pool`] considers that a 0, and every time it returns `NotReady`, [`Pool`]
//! considers it a 1. [`Pool`] then maintains an [exponential moving
//! average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average) over those
//! samples, which gives an estimate of how often the underlying service has been ready when it was
//! needed "recently" (see [`Builder::urgency`]). If the service is loaded (see
//! [`Builder::loaded_above`]), a new service is created and added to the underlying [`Balance`].
//! If the service is underutilized (see [`Builder::underutilized_below`]) and there are two or
//! more services, then the latest added service is removed. In either case, the load estimate is
//! reset to its initial value (see [`Builder::initial`]) to prevent services from being rapidly
//! added or removed.
#![deny(missing_docs)]
use super::{Balance, Choose};
use futures::{try_ready, Async, Future, Poll};
use tower_discover::{Change, Discover};
use tower_service::Service;
use tower_util::MakeService;
enum Load {
/// Load is low -- remove a service instance.
Low,
/// Load is normal -- keep the service set as it is.
Normal,
/// Load is high -- add another service instance.
High,
}
/// A wrapper around `MakeService` that discovers a new service when load is high, and removes a
/// service when load is low. See [`Pool`].
pub struct PoolDiscoverer<MS, Target, Request>
where
MS: MakeService<Target, Request>,
{
maker: MS,
making: Option<MS::Future>,
target: Target,
load: Load,
services: usize,
}
impl<MS, Target, Request> Discover for PoolDiscoverer<MS, Target, Request>
where
MS: MakeService<Target, Request>,
// NOTE: these bounds should go away once MakeService adopts Box<dyn Error>
MS::MakeError: ::std::error::Error + Send + Sync + 'static,
MS::Error: ::std::error::Error + Send + Sync + 'static,
Target: Clone,
{
type Key = usize;
type Service = MS::Service;
type Error = MS::MakeError;
fn poll(&mut self) -> Poll<Change<Self::Key, Self::Service>, Self::Error> {
if self.services == 0 && self.making.is_none() {
self.making = Some(self.maker.make_service(self.target.clone()));
}
if let Load::High = self.load {
if self.making.is_none() {
try_ready!(self.maker.poll_ready());
// TODO: it'd be great if we could avoid the clone here and use, say, &Target
self.making = Some(self.maker.make_service(self.target.clone()));
}
}
if let Some(mut fut) = self.making.take() {
if let Async::Ready(s) = fut.poll()? {
self.services += 1;
self.load = Load::Normal;
return Ok(Async::Ready(Change::Insert(self.services, s)));
} else {
self.making = Some(fut);
return Ok(Async::NotReady);
}
}
match self.load {
Load::High => {
unreachable!("found high load but no Service being made");
}
Load::Normal => Ok(Async::NotReady),
Load::Low if self.services == 1 => Ok(Async::NotReady),
Load::Low => {
self.load = Load::Normal;
let rm = self.services;
self.services -= 1;
Ok(Async::Ready(Change::Remove(rm)))
}
}
}
}
/// A [builder] that lets you configure how a [`Pool`] determines whether the underlying service is
/// loaded or not. See the [module-level documentation](index.html) and the builder's methods for
/// details.
///
/// [builder]: https://rust-lang-nursery.github.io/api-guidelines/type-safety.html#builders-enable-construction-of-complex-values-c-builder
#[derive(Copy, Clone, Debug)]
pub struct Builder {
low: f64,
high: f64,
init: f64,
alpha: f64,
}
impl Default for Builder {
fn default() -> Self {
Builder {
init: 0.1,
low: 0.00001,
high: 0.2,
alpha: 0.03,
}
}
}
impl Builder {
/// Create a new builder with default values for all load settings.
///
/// If you just want to use the defaults, you can just use [`Pool::new`].
pub fn new() -> Self {
Self::default()
}
/// When the estimated load (see the [module-level docs](index.html)) drops below this
/// threshold, and there are at least two services active, a service is removed.
///
/// The default value is 0.01. That is, when one in every 100 `poll_ready` calls returns
/// `NotReady`, the underlying service is considered underutilized.
pub fn underutilized_below(&mut self, low: f64) -> &mut Self {
self.low = low;
self
}
/// When the estimated load (see the [module-level docs](index.html)) exceeds this
/// threshold, and no service is currently in the process of being added, a new service is
/// scheduled to be added to the underlying [`Balance`].
///
/// The default value is 0.5. That is, when every other call to `poll_ready` returns
/// `NotReady`, then the underlying service is considered highly loaded.
pub fn loaded_above(&mut self, high: f64) -> &mut Self {
self.high = high;
self
}
/// The initial estimated load average.
///
/// This is also the value that the estimated load will be reset to whenever a service is added
/// or removed.
///
/// The default value is 0.1.
pub fn initial(&mut self, init: f64) -> &mut Self {
self.init = init;
self
}
/// How aggressively the estimated load average is updated.
///
/// This is the α parameter of the formula for the [exponential moving
/// average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average), and
/// dictates how quickly new samples of the current load affect the estimated load. If the
/// value is closer to 1, newer samples affect the load average a lot (when α is 1, the load
/// average is immediately set to the current load). If the value is closer to 0, newer samples
/// affect the load average very little at a time.
///
/// The given value is clamped to `[0,1]`.
///
/// The default value is 0.05, meaning, in very approximate terms, that each new load sample
/// affects the estimated load by 5%.
pub fn urgency(&mut self, alpha: f64) -> &mut Self {
self.alpha = alpha.max(0.0).min(1.0);
self
}
/// See [`Pool::new`].
pub fn build<C, MS, Target, Request>(
&self,
make_service: MS,
target: Target,
choose: C,
) -> Pool<C, MS, Target, Request>
where
MS: MakeService<Target, Request>,
MS::MakeError: ::std::error::Error + Send + Sync + 'static,
MS::Error: ::std::error::Error + Send + Sync + 'static,
Target: Clone,
C: Choose<usize, MS::Service>,
{
let d = PoolDiscoverer {
maker: make_service,
making: None,
target,
load: Load::Normal,
services: 0,
};
Pool {
balance: Balance::new(d, choose),
options: *self,
ewma: self.init,
}
}
}
/// A dynamically sized, load-balanced pool of `Service` instances.
pub struct Pool<C, MS, Target, Request>
where
MS: MakeService<Target, Request>,
MS::MakeError: ::std::error::Error + Send + Sync + 'static,
MS::Error: ::std::error::Error + Send + Sync + 'static,
Target: Clone,
{
balance: Balance<PoolDiscoverer<MS, Target, Request>, C>,
options: Builder,
ewma: f64,
}
impl<C, MS, Target, Request> Pool<C, MS, Target, Request>
where
MS: MakeService<Target, Request>,
MS::MakeError: ::std::error::Error + Send + Sync + 'static,
MS::Error: ::std::error::Error + Send + Sync + 'static,
Target: Clone,
C: Choose<usize, MS::Service>,
{
/// Construct a new dynamically sized `Pool`.
///
/// If many calls to `poll_ready` return `NotReady`, `make_service` is used to construct another
/// `Service` that is then added to the load-balanced pool. If multiple services are available,
/// `choose` is used to determine which one to use (just as in `Balance`). If many calls to
/// `poll_ready` succeed, the most recently added `Service` is dropped from the pool.
pub fn new(make_service: MS, target: Target, choose: C) -> Self {
Builder::new().build(make_service, target, choose)
}
}
impl<C, MS, Target, Request> Service<Request> for Pool<C, MS, Target, Request>
where
MS: MakeService<Target, Request>,
MS::MakeError: ::std::error::Error + Send + Sync + 'static,
MS::Error: ::std::error::Error + Send + Sync + 'static,
Target: Clone,
C: Choose<usize, MS::Service>,
{
type Response = <Balance<PoolDiscoverer<MS, Target, Request>, C> as Service<Request>>::Response;
type Error = <Balance<PoolDiscoverer<MS, Target, Request>, C> as Service<Request>>::Error;
type Future = <Balance<PoolDiscoverer<MS, Target, Request>, C> as Service<Request>>::Future;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
if let Async::Ready(()) = self.balance.poll_ready()? {
// services was ready -- there are enough services
// update ewma with a 0 sample
self.ewma = (1.0 - self.options.alpha) * self.ewma;
if self.ewma < self.options.low {
self.balance.discover.load = Load::Low;
if self.balance.discover.services > 1 {
// reset EWMA so we don't immediately try to remove another service
self.ewma = self.options.init;
}
} else {
self.balance.discover.load = Load::Normal;
}
Ok(Async::Ready(()))
} else if self.balance.discover.making.is_none() {
// no services are ready -- we're overloaded
// update ewma with a 1 sample
self.ewma = self.options.alpha + (1.0 - self.options.alpha) * self.ewma;
if self.ewma > self.options.high {
self.balance.discover.load = Load::High;
// don't reset the EWMA -- in theory, poll_ready should now start returning
// `Ready`, so we won't try to launch another service immediately.
} else {
self.balance.discover.load = Load::Normal;
}
Ok(Async::NotReady)
} else {
// no services are ready, but we're already making another service!
Ok(Async::NotReady)
}
}
fn call(&mut self, req: Request) -> Self::Future {
Service::call(&mut self.balance, req)
}
}
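A worked numeric sketch of the load estimate driving `Pool::poll_ready`, using the `Builder` defaults shown in this listing (init = 0.1, high = 0.2, alpha = 0.03); the update formulas are copied from `poll_ready` above.

fn main() {
    let (init, high, alpha) = (0.1_f64, 0.2_f64, 0.03_f64);
    let mut ewma = init;
    let mut not_ready_polls = 0;
    while ewma <= high {
        // Each NotReady poll is a "1" sample: ewma = alpha * 1 + (1 - alpha) * ewma.
        ewma = alpha + (1.0 - alpha) * ewma;
        not_ready_polls += 1;
    }
    // 0.100 -> 0.127 -> 0.153 -> 0.179 -> 0.203: four consecutive NotReady polls push the
    // estimate past `high`, at which point the pool schedules another service to be built.
    assert_eq!(not_ready_polls, 4);
}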


@ -1,136 +0,0 @@
use futures::{future, Async, Poll};
use quickcheck::*;
use std::collections::VecDeque;
use tower_discover::Change;
use tower_service::Service;
use crate::*;
type Error = Box<dyn std::error::Error + Send + Sync>;
struct ReluctantDisco(VecDeque<Change<usize, ReluctantService>>);
struct ReluctantService {
polls_until_ready: usize,
}
impl Discover for ReluctantDisco {
type Key = usize;
type Service = ReluctantService;
type Error = Error;
fn poll(&mut self) -> Poll<Change<Self::Key, Self::Service>, Self::Error> {
let r = self
.0
.pop_front()
.map(Async::Ready)
.unwrap_or(Async::NotReady);
debug!("polling disco: {:?}", r.is_ready());
Ok(r)
}
}
impl Service<()> for ReluctantService {
type Response = ();
type Error = Error;
type Future = future::FutureResult<Self::Response, Self::Error>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
if self.polls_until_ready == 0 {
return Ok(Async::Ready(()));
}
self.polls_until_ready -= 1;
return Ok(Async::NotReady);
}
fn call(&mut self, _: ()) -> Self::Future {
future::ok(())
}
}
quickcheck! {
/// Creates a random number of services, each of which must be polled a random
/// number of times before becoming ready. As the balancer is polled, ensure that
/// it does not become ready prematurely and that services are promoted from
/// not_ready to ready.
fn poll_ready(service_tries: Vec<usize>) -> TestResult {
// Stores the number of pending services after each poll_ready call.
let mut pending_at = Vec::new();
let disco = {
let mut changes = VecDeque::new();
for (i, n) in service_tries.iter().map(|n| *n).enumerate() {
for j in 0..n {
if j == pending_at.len() {
pending_at.push(1);
} else {
pending_at[j] += 1;
}
}
let s = ReluctantService { polls_until_ready: n };
changes.push_back(Change::Insert(i, s));
}
ReluctantDisco(changes)
};
pending_at.push(0);
let mut balancer = Balance::new(disco, choose::RoundRobin::default());
let services = service_tries.len();
let mut next_pos = 0;
for pending in pending_at.iter().map(|p| *p) {
assert!(pending <= services);
let ready = services - pending;
match balancer.poll_ready() {
Err(_) => return TestResult::error("poll_ready failed"),
Ok(p) => {
if p.is_ready() != (ready > 0) {
return TestResult::failed();
}
}
}
if balancer.num_ready() != ready {
return TestResult::failed();
}
if balancer.num_not_ready() != pending {
return TestResult::failed();
}
if balancer.is_ready() != (ready > 0) {
return TestResult::failed();
}
if balancer.is_not_ready() != (ready == 0) {
return TestResult::failed();
}
if balancer.dispatched_ready_index.is_some() {
return TestResult::failed();
}
if ready == 0 {
if balancer.chosen_ready_index.is_some() {
return TestResult::failed();
}
} else {
// Check that the round-robin chooser is doing its thing:
match balancer.chosen_ready_index {
None => return TestResult::failed(),
Some(idx) => {
if idx != next_pos {
return TestResult::failed();
}
}
}
next_pos = (next_pos + 1) % ready;
}
}
TestResult::passed()
}
}


@ -1,3 +0,0 @@
# 0.1.0 (April 26, 2019)
- Initial release


@ -1,33 +0,0 @@
[package]
name = "tower-buffer"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-buffer/0.1.0"
description = """
Buffer requests before dispatching to a `Service`.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
[dependencies]
futures = "0.1.25"
tower-service = "0.2.0"
tower-layer = "0.1.0"
tokio-executor = "0.1.7"
tokio-sync = "0.1.0"
[dev-dependencies]
tower = { version = "0.1.0", path = "../tower" }
tower-test = { version = "0.1.0", path = "../tower-test" }


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,13 +0,0 @@
# Tower Buffer
Buffer requests before dispatching to a `Service`.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,70 +0,0 @@
//! Future types
use crate::{
error::{Closed, Error},
message,
};
use futures::{Async, Future, Poll};
/// Future eventually completed with the response to the original request.
pub struct ResponseFuture<T> {
state: ResponseState<T>,
}
enum ResponseState<T> {
Failed(Option<Error>),
Rx(message::Rx<T>),
Poll(T),
}
impl<T> ResponseFuture<T>
where
T: Future,
T::Error: Into<Error>,
{
pub(crate) fn new(rx: message::Rx<T>) -> Self {
ResponseFuture {
state: ResponseState::Rx(rx),
}
}
pub(crate) fn failed(err: Error) -> Self {
ResponseFuture {
state: ResponseState::Failed(Some(err)),
}
}
}
impl<T> Future for ResponseFuture<T>
where
T: Future,
T::Error: Into<Error>,
{
type Item = T::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
use self::ResponseState::*;
loop {
let fut;
match self.state {
Failed(ref mut e) => {
return Err(e.take().expect("polled after error"));
}
Rx(ref mut rx) => match rx.poll() {
Ok(Async::Ready(Ok(f))) => fut = f,
Ok(Async::Ready(Err(e))) => return Err(e.into()),
Ok(Async::NotReady) => return Ok(Async::NotReady),
Err(_) => return Err(Closed::new().into()),
},
Poll(ref mut fut) => {
return fut.poll().map_err(Into::into);
}
}
self.state = Poll(fut);
}
}
}


@ -1,45 +0,0 @@
use crate::{error::Error, service::Buffer, worker::WorkerExecutor};
use std::marker::PhantomData;
use tokio_executor::DefaultExecutor;
use tower_layer::Layer;
use tower_service::Service;
/// Buffer requests with a bounded buffer
pub struct BufferLayer<Request, E = DefaultExecutor> {
bound: usize,
executor: E,
_p: PhantomData<fn(Request)>,
}
impl<Request> BufferLayer<Request, DefaultExecutor> {
pub fn new(bound: usize) -> Self {
BufferLayer {
bound,
executor: DefaultExecutor::current(),
_p: PhantomData,
}
}
}
impl<Request, E: Clone> BufferLayer<Request, E> {
pub fn with_executor(bound: usize, executor: E) -> Self {
BufferLayer {
bound,
executor,
_p: PhantomData,
}
}
}
impl<E, S, Request> Layer<S> for BufferLayer<Request, E>
where
S: Service<Request>,
S::Error: Into<Error>,
E: WorkerExecutor<S, Request> + Clone,
{
type Service = Buffer<S, Request>;
fn layer(&self, service: S) -> Self::Service {
Buffer::with_executor(service, self.bound, &mut self.executor.clone())
}
}


@ -1,21 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-buffer/0.1.0")]
#![deny(rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
//! Buffer requests when the inner service is out of capacity.
//!
//! Buffering works by spawning a new task that is dedicated to pulling requests
//! out of the buffer and dispatching them to the inner service. By adding a
//! buffer and a dedicated task, the `Buffer` layer in front of the service can
//! be `Clone` even if the inner service is not.
pub mod error;
pub mod future;
mod layer;
mod message;
mod service;
mod worker;
pub use crate::layer::BufferLayer;
pub use crate::service::Buffer;
pub use crate::worker::WorkerExecutor;


@ -1,126 +0,0 @@
use crate::{
error::{Error, SpawnError},
future::ResponseFuture,
message::Message,
worker::{Handle, Worker, WorkerExecutor},
};
use futures::Poll;
use tokio_executor::DefaultExecutor;
use tokio_sync::{mpsc, oneshot};
use tower_service::Service;
/// Adds a buffer in front of an inner service.
///
/// See crate level documentation for more details.
pub struct Buffer<T, Request>
where
T: Service<Request>,
{
tx: mpsc::Sender<Message<Request, T::Future>>,
worker: Option<Handle>,
}
impl<T, Request> Buffer<T, Request>
where
T: Service<Request>,
T::Error: Into<Error>,
{
/// Creates a new `Buffer` wrapping `service`.
///
/// `bound` gives the maximal number of requests that can be queued for the service before
/// backpressure is applied to callers.
///
/// The default Tokio executor is used to run the given service, which means that this method
/// must be called while on the Tokio runtime.
pub fn new(service: T, bound: usize) -> Self
where
T: Send + 'static,
T::Future: Send,
T::Error: Send + Sync,
Request: Send + 'static,
{
Self::with_executor(service, bound, &mut DefaultExecutor::current())
}
/// Creates a new `Buffer` wrapping `service`.
///
/// `executor` is used to spawn a new `Worker` task that is dedicated to
/// draining the buffer and dispatching the requests to the internal
/// service.
///
/// `bound` gives the maximal number of requests that can be queued for the service before
/// backpressure is applied to callers.
pub fn with_executor<E>(service: T, bound: usize, executor: &mut E) -> Self
where
E: WorkerExecutor<T, Request>,
{
let (tx, rx) = mpsc::channel(bound);
let worker = Worker::spawn(service, rx, executor);
Buffer { tx, worker }
}
fn get_worker_error(&self) -> Error {
self.worker
.as_ref()
.map(|w| w.get_error_on_closed())
.unwrap_or_else(|| {
// If there's no worker handle, that's because spawning it
// at the beginning failed.
SpawnError::new().into()
})
}
}
impl<T, Request> Service<Request> for Buffer<T, Request>
where
T: Service<Request>,
T::Error: Into<Error>,
{
type Response = T::Response;
type Error = Error;
type Future = ResponseFuture<T::Future>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
// If the inner service has errored, then we error here.
self.tx.poll_ready().map_err(|_| self.get_worker_error())
}
fn call(&mut self, request: Request) -> Self::Future {
// TODO:
// ideally we'd poll_ready again here so we don't allocate the oneshot
// if the try_send is about to fail, but sadly we can't call poll_ready
// outside of task context.
let (tx, rx) = oneshot::channel();
match self.tx.try_send(Message { request, tx }) {
Err(e) => {
if e.is_closed() {
ResponseFuture::failed(self.get_worker_error())
} else {
// When `mpsc::Sender::poll_ready` returns `Ready`, a slot
// in the channel is reserved for the handle. Other `Sender`
// handles may not send a message using that slot. This
// guarantees capacity for `request`.
//
// Given this, the only way to hit this code path is if
// `poll_ready` has not been called & `Ready` returned.
panic!("buffer full; poll_ready must be called first");
}
}
Ok(_) => ResponseFuture::new(rx),
}
}
}
impl<T, Request> Clone for Buffer<T, Request>
where
T: Service<Request>,
{
fn clone(&self) -> Self {
Self {
tx: self.tx.clone(),
worker: self.worker.clone(),
}
}
}


@ -1,207 +0,0 @@
use futures::prelude::*;
use std::{cell::RefCell, thread};
use tokio_executor::{SpawnError, TypedExecutor};
use tower::{
buffer::{error, Buffer},
Service,
};
use tower_test::{assert_request_eq, mock};
#[test]
fn req_and_res() {
let (mut service, mut handle) = new_service();
let response = service.call("hello");
assert_request_eq!(handle, "hello").send_response("world");
assert_eq!(response.wait().unwrap(), "world");
}
#[test]
fn clears_canceled_requests() {
let (mut service, mut handle) = new_service();
handle.allow(1);
let res1 = service.call("hello");
let send_response1 = assert_request_eq!(handle, "hello");
// don't respond yet, new requests will get buffered
let res2 = service.call("hello2");
with_task(|| {
assert!(handle.poll_request().unwrap().is_not_ready());
});
let res3 = service.call("hello3");
drop(res2);
send_response1.send_response("world");
assert_eq!(res1.wait().unwrap(), "world");
// res2 was dropped, so it should have been canceled in the buffer
handle.allow(1);
assert_request_eq!(handle, "hello3").send_response("world3");
assert_eq!(res3.wait().unwrap(), "world3");
}
#[test]
fn when_inner_is_not_ready() {
let (mut service, mut handle) = new_service();
// Make the service NotReady
handle.allow(0);
let mut res1 = service.call("hello");
// Allow the Buffer's executor to do work
::std::thread::sleep(::std::time::Duration::from_millis(100));
with_task(|| {
assert!(res1.poll().expect("res1.poll").is_not_ready());
assert!(handle.poll_request().expect("poll_request").is_not_ready());
});
handle.allow(1);
assert_request_eq!(handle, "hello").send_response("world");
assert_eq!(res1.wait().expect("res1.wait"), "world");
}
#[test]
fn when_inner_fails() {
use std::error::Error as StdError;
let (mut service, mut handle) = new_service();
// Make the service NotReady
handle.allow(0);
handle.send_error("foobar");
let mut res1 = service.call("hello");
// Allow the Buffer's executor to do work
::std::thread::sleep(::std::time::Duration::from_millis(100));
with_task(|| {
let e = res1.poll().unwrap_err();
if let Some(e) = e.downcast_ref::<error::ServiceError>() {
let e = e.source().unwrap();
assert_eq!(e.to_string(), "foobar");
} else {
panic!("unexpected error type: {:?}", e);
}
});
}
#[test]
fn when_spawn_fails() {
let (service, _handle) = mock::pair::<(), ()>();
let mut exec = ExecFn(|_| Err(()));
let mut service = Buffer::with_executor(service, 1, &mut exec);
let err = with_task(|| {
service
.poll_ready()
.expect_err("buffer poll_ready should error")
});
assert!(
err.is::<error::SpawnError>(),
"should be a SpawnError: {:?}",
err
);
}
#[test]
fn poll_ready_when_worker_is_dropped_early() {
let (service, _handle) = mock::pair::<(), ()>();
// drop that worker right on the floor!
let mut exec = ExecFn(|fut| {
drop(fut);
Ok(())
});
let mut service = Buffer::with_executor(service, 1, &mut exec);
let err = with_task(|| {
service
.poll_ready()
.expect_err("buffer poll_ready should error")
});
assert!(err.is::<error::Closed>(), "should be a Closed: {:?}", err);
}
#[test]
fn response_future_when_worker_is_dropped_early() {
let (service, mut handle) = mock::pair::<_, ()>();
// hold the worker in a cell until we want to drop it later
let cell = RefCell::new(None);
let mut exec = ExecFn(|fut| {
*cell.borrow_mut() = Some(fut);
Ok(())
});
let mut service = Buffer::with_executor(service, 1, &mut exec);
// keep the request in the worker
handle.allow(0);
let response = service.call("hello");
// drop the worker (like an executor closing up)
cell.borrow_mut().take();
let err = response.wait().expect_err("res.wait");
assert!(err.is::<error::Closed>(), "should be a Closed: {:?}", err);
}
type Mock = mock::Mock<&'static str, &'static str>;
type Handle = mock::Handle<&'static str, &'static str>;
struct Exec;
impl<F> TypedExecutor<F> for Exec
where
F: Future<Item = (), Error = ()> + Send + 'static,
{
fn spawn(&mut self, fut: F) -> Result<(), SpawnError> {
thread::spawn(move || {
fut.wait().unwrap();
});
Ok(())
}
}
struct ExecFn<Func>(Func);
impl<Func, F> TypedExecutor<F> for ExecFn<Func>
where
Func: Fn(F) -> Result<(), ()>,
F: Future<Item = (), Error = ()> + Send + 'static,
{
fn spawn(&mut self, fut: F) -> Result<(), SpawnError> {
(self.0)(fut).map_err(|()| SpawnError::shutdown())
}
}
fn new_service() -> (Buffer<Mock, &'static str>, Handle) {
let (service, handle) = mock::pair();
// bound is >0 here because clears_canceled_requests needs multiple outstanding requests
let service = Buffer::with_executor(service, 10, &mut Exec);
(service, handle)
}
fn with_task<F: FnOnce() -> U, U>(f: F) -> U {
use futures::future::lazy;
lazy(|| Ok::<_, ()>(f())).wait().unwrap()
}


@ -1,3 +0,0 @@
# 0.1.0 (April 26, 2019)
- Initial release


@ -1,26 +0,0 @@
[package]
name = "tower-discover"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-discover/0.1.0"
description = """
Abstracts over service discovery strategies.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
[dependencies]
futures = "0.1.26"
tower-service = "0.2.0"


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,13 +0,0 @@
# Tower Discovery
Abstracts over service discovery strategies.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,12 +0,0 @@
use std::{error::Error, fmt};
#[derive(Debug)]
pub enum Never {}
impl fmt::Display for Never {
fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result {
match *self {}
}
}
impl Error for Never {}


@ -1,44 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-discover/0.1.0")]
#![deny(rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
//! # Tower service discovery
//!
//! Service discovery is the automatic detection of services available to the
//! consumer. These services typically live on other servers and are accessible
//! via the network; however, it is possible to discover services available in
//! other processes or even in the same process.
mod error;
mod list;
mod stream;
pub use crate::{list::ServiceList, stream::ServiceStream};
use futures::Poll;
use std::hash::Hash;
/// Provide a uniform set of services able to satisfy a request.
///
/// This set of services may be updated over time. Each change to the set is
/// yielded by `Discover` as a `Change`.
///
/// See crate documentation for more details.
pub trait Discover {
/// A key identifying a discovered service endpoint
type Key: Hash + Eq;
/// The discovered service
type Service;
/// Error produced during discovery
type Error;
/// Yields the next discovery change set.
fn poll(&mut self) -> Poll<Change<Self::Key, Self::Service>, Self::Error>;
}
/// A change in the service set
pub enum Change<K, V> {
Insert(K, V),
Remove(K),
}
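A hedged sketch of consuming the trait by hand (the `drain_changes` helper is illustrative, and a futures 0.1 task context is assumed): each `Ready` poll yields either a new keyed service or a removal, while `NotReady` just means there are no changes right now.

use futures::Async;
use tower_discover::{Change, Discover};

fn drain_changes<D: Discover>(discover: &mut D) -> Result<(), D::Error> {
    while let Async::Ready(change) = discover.poll()? {
        match change {
            // A new endpoint was discovered; add it to the ready set.
            Change::Insert(_key, _service) => {}
            // An endpoint went away; drop it.
            Change::Remove(_key) => {}
        }
    }
    Ok(())
}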


@ -1,54 +0,0 @@
use crate::{error::Never, Change, Discover};
use futures::{Async, Poll};
use std::iter::{Enumerate, IntoIterator};
use tower_service::Service;
/// Static service discovery based on a predetermined list of services.
///
/// `ServiceList` is created with an initial list of services. The discovery
/// process will yield this list once and do nothing after.
pub struct ServiceList<T>
where
T: IntoIterator,
{
inner: Enumerate<T::IntoIter>,
}
impl<T, U> ServiceList<T>
where
T: IntoIterator<Item = U>,
{
pub fn new<Request>(services: T) -> ServiceList<T>
where
U: Service<Request>,
{
ServiceList {
inner: services.into_iter().enumerate(),
}
}
}
impl<T, U> Discover for ServiceList<T>
where
T: IntoIterator<Item = U>,
{
type Key = usize;
type Service = U;
type Error = Never;
fn poll(&mut self) -> Poll<Change<Self::Key, Self::Service>, Self::Error> {
match self.inner.next() {
Some((i, service)) => Ok(Change::Insert(i, service).into()),
None => Ok(Async::NotReady),
}
}
}
// check that ServiceList can be used directly over collections
#[cfg(test)]
#[allow(dead_code)]
type ListVecTest<T> = ServiceList<Vec<T>>;
#[cfg(test)]
#[allow(dead_code)]
type ListVecIterTest<T> = ServiceList<::std::vec::IntoIter<T>>;


@ -1,42 +0,0 @@
use crate::{Change, Discover};
use futures::{try_ready, Async, Poll, Stream};
use std::hash::Hash;
use tower_service::Service;
/// Dynamic service discovery based on a stream of service changes.
pub struct ServiceStream<S> {
inner: futures::stream::Fuse<S>,
}
impl<S> ServiceStream<S> {
pub fn new<K, Svc, Request>(services: S) -> Self
where
S: Stream<Item = Change<K, Svc>>,
K: Hash + Eq,
Svc: Service<Request>,
{
ServiceStream {
inner: services.fuse(),
}
}
}
impl<S, K, Svc> Discover for ServiceStream<S>
where
K: Hash + Eq,
S: Stream<Item = Change<K, Svc>>,
{
type Key = K;
type Service = Svc;
type Error = S::Error;
fn poll(&mut self) -> Poll<Change<Self::Key, Self::Service>, Self::Error> {
match try_ready!(self.inner.poll()) {
Some(c) => Ok(Async::Ready(c)),
None => {
// there are no more service changes coming
Ok(Async::NotReady)
}
}
}
}


@ -1,3 +0,0 @@
# 0.1.0 (unreleased)
- Initial release


@ -1,32 +0,0 @@
[package]
name = "tower-filter"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-filter/0.1.0"
description = """
Conditionally allow requests to be dispatched to a service based on the result
of a predicate.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
publish = false
[dependencies]
futures = "0.1.26"
tower-service = "0.2.0"
tower-layer = "0.1.0"
[dev-dependencies]
tower-test = { version = "0.1", path = "../tower-test" }


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,14 +0,0 @@
# Tower Filter
Conditionally allow requests to be dispatched to a service based on the result
of a predicate.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,48 +0,0 @@
//! Error types
use std::{error, fmt};
/// Error produced by `Filter`
#[derive(Debug)]
pub struct Error {
source: Option<Source>,
}
pub(crate) type Source = Box<dyn error::Error + Send + Sync>;
impl Error {
/// Create a new `Error` representing a rejected request.
pub fn rejected() -> Error {
Error { source: None }
}
/// Create a new `Error` representing an inner service error.
pub fn inner<E>(source: E) -> Error
where
E: Into<Source>,
{
Error {
source: Some(source.into()),
}
}
}
impl fmt::Display for Error {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
if self.source.is_some() {
write!(fmt, "inner service errored")
} else {
write!(fmt, "rejected")
}
}
}
impl error::Error for Error {
fn source(&self) -> Option<&(dyn error::Error + 'static)> {
if let Some(ref err) = self.source {
Some(&**err)
} else {
None
}
}
}


@ -1,86 +0,0 @@
//! Future types
use crate::error::{self, Error};
use futures::{Async, Future, Poll};
use tower_service::Service;
/// Filtered response future
#[derive(Debug)]
pub struct ResponseFuture<T, S, Request>
where
S: Service<Request>,
{
/// Response future state
state: State<Request, S::Future>,
/// Predicate future
check: T,
/// Inner service
service: S,
}
#[derive(Debug)]
enum State<Request, U> {
Check(Request),
WaitResponse(U),
Invalid,
}
impl<T, S, Request> ResponseFuture<T, S, Request>
where
T: Future<Error = Error>,
S: Service<Request>,
S::Error: Into<error::Source>,
{
pub(crate) fn new(request: Request, check: T, service: S) -> Self {
ResponseFuture {
state: State::Check(request),
check,
service,
}
}
}
impl<T, S, Request> Future for ResponseFuture<T, S, Request>
where
T: Future<Error = Error>,
S: Service<Request>,
S::Error: Into<error::Source>,
{
type Item = S::Response;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
use self::State::*;
use std::mem;
loop {
match mem::replace(&mut self.state, Invalid) {
Check(request) => {
// Poll predicate
match self.check.poll()? {
Async::Ready(_) => {
let response = self.service.call(request);
self.state = WaitResponse(response);
}
Async::NotReady => {
self.state = Check(request);
return Ok(Async::NotReady);
}
}
}
WaitResponse(mut response) => {
let ret = response.poll().map_err(Error::inner);
self.state = WaitResponse(response);
return ret;
}
Invalid => {
panic!("invalid state");
}
}
}
}
}


@ -1,21 +0,0 @@
use crate::Filter;
use tower_layer::Layer;
pub struct FilterLayer<U> {
predicate: U,
}
impl<U> FilterLayer<U> {
pub fn new(predicate: U) -> Self {
FilterLayer { predicate }
}
}
impl<U: Clone, S> Layer<S> for FilterLayer<U> {
type Service = Filter<S, U>;
fn layer(&self, service: S) -> Self::Service {
let predicate = self.predicate.clone();
Filter::new(service, predicate)
}
}


@ -1,56 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-filter/0.1.0")]
#![deny(rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
//! Conditionally dispatch requests to the inner service based on the result of
//! a predicate.
pub mod error;
pub mod future;
mod layer;
mod predicate;
pub use crate::{layer::FilterLayer, predicate::Predicate};
use crate::{error::Error, future::ResponseFuture};
use futures::Poll;
use tower_service::Service;
#[derive(Debug)]
pub struct Filter<T, U> {
inner: T,
predicate: U,
}
impl<T, U> Filter<T, U> {
pub fn new(inner: T, predicate: U) -> Self {
Filter { inner, predicate }
}
}
impl<T, U, Request> Service<Request> for Filter<T, U>
where
T: Service<Request> + Clone,
T::Error: Into<error::Source>,
U: Predicate<Request>,
{
type Response = T::Response;
type Error = Error;
type Future = ResponseFuture<U::Future, T, Request>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
self.inner.poll_ready().map_err(error::Error::inner)
}
fn call(&mut self, request: Request) -> Self::Future {
use std::mem;
let inner = self.inner.clone();
let inner = mem::replace(&mut self.inner, inner);
// Check the request
let check = self.predicate.check(&request);
ResponseFuture::new(request, check, inner)
}
}


@ -1,21 +0,0 @@
use crate::error::Error;
use futures::{Future, IntoFuture};
/// Checks a request
pub trait Predicate<Request> {
type Future: Future<Item = (), Error = Error>;
fn check(&mut self, request: &Request) -> Self::Future;
}
impl<F, T, U> Predicate<T> for F
where
F: Fn(&T) -> U,
U: IntoFuture<Item = (), Error = Error>,
{
type Future = U::Future;
fn check(&mut self, request: &T) -> Self::Future {
self(request).into_future()
}
}
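
Because `Result` implements `IntoFuture` in futures 0.1, the blanket impl above means a plain closure returning `Result<(), Error>` already satisfies `Predicate`. A hedged, doctest-style sketch (none of these bindings come from the diff; `tower_test::mock` is the same helper the crate's own tests use below, and `tower-filter` is assumed to be available as a path dependency since it is marked `publish = false`):

```rust
use tower_filter::{error::Error, Filter, Predicate};
use tower_test::mock;

// Reject requests longer than 128 bytes before they reach the inner service.
let predicate = |request: &String| {
    if request.len() <= 128 {
        Ok(())
    } else {
        Err(Error::rejected())
    }
};

// The blanket impl above makes the closure a `Predicate<String>`.
fn assert_predicate<P: Predicate<String>>(_: &P) {}
assert_predicate(&predicate);

// Wrap a mock service with the predicate.
let (service, _handle): (mock::Mock<String, String>, mock::Handle<String, String>) =
    mock::pair();
let _filtered = Filter::new(service, predicate);
```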


@ -1,56 +0,0 @@
use futures::*;
use std::thread;
use tower_filter::{error::Error, Filter};
use tower_service::Service;
use tower_test::{assert_request_eq, mock};
#[test]
fn passthrough_sync() {
let (mut service, mut handle) = new_service(|_| Ok(()));
let th = thread::spawn(move || {
// Receive the requests and respond
for i in 0..10 {
assert_request_eq!(handle, format!("ping-{}", i)).send_response(format!("pong-{}", i));
}
});
let mut responses = vec![];
for i in 0..10 {
let request = format!("ping-{}", i);
assert!(service.poll_ready().unwrap().is_ready());
let exchange = service.call(request).and_then(move |response| {
let expect = format!("pong-{}", i);
assert_eq!(response.as_str(), expect.as_str());
Ok(())
});
responses.push(exchange);
}
future::join_all(responses).wait().unwrap();
th.join().unwrap();
}
#[test]
fn rejected_sync() {
let (mut service, _handle) = new_service(|_| Err(Error::rejected()));
let response = service.call("hello".into()).wait();
assert!(response.is_err());
}
type Mock = mock::Mock<String, String>;
type Handle = mock::Handle<String, String>;
fn new_service<F, U>(f: F) -> (Filter<Mock, F>, Handle)
where
F: Fn(&String) -> U,
U: IntoFuture<Item = (), Error = Error>,
{
let (service, handle) = mock::pair();
let service = Filter::new(service, f);
(service, handle)
}


@ -1,3 +1,48 @@
# 0.3.3 (August 1, 2024)
### Added
- **builder,util**: add convenience methods for boxing services ([#616])
- **all**: new functions const when possible ([#760])
[#616]: https://github.com/tower-rs/tower/pull/616
[#760]: https://github.com/tower-rs/tower/pull/760
# 0.3.2 (October 10, 2022)
### Added
- Implement `Layer` for tuples of up to 16 elements ([#694])
[#694]: https://github.com/tower-rs/tower/pull/694
# 0.3.1 (January 7, 2021)
### Added
- Added `layer_fn`, for constructing a `Layer` from a function taking
a `Service` and returning a different `Service` ([#491])
- Added an implementation of `Layer` for `&Layer` ([#446])
- Multiple documentation improvements ([#487], [#490])
[#491]: https://github.com/tower-rs/tower/pull/491
[#446]: https://github.com/tower-rs/tower/pull/446
[#487]: https://github.com/tower-rs/tower/pull/487
[#490]: https://github.com/tower-rs/tower/pull/490
# 0.3.0 (November 29, 2019)
- Move layer builder from `tower-util` to `tower-layer`.
# 0.3.0-alpha.2 (September 30, 2019)
- Move to `futures-*-preview 0.3.0-alpha.19`
- Move to `pin-project 0.4`
# 0.3.0-alpha.1 (September 11, 2019)
- Move to `std::future`
# 0.1.0 (April 26, 2019)
- Initial release


@ -1,20 +1,18 @@
[package]
name = "tower-layer"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
version = "0.3.3"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-layer/0.1.0"
documentation = "https://docs.rs/tower-layer/0.3.3"
description = """
Decorates a `Service` to allow easy composition between `Service`s.
"""
@ -22,8 +20,7 @@ categories = ["asynchronous", "network-programming"]
edition = "2018"
[dependencies]
futures = "0.1.26"
tower-service = "0.2.0"
[dev-dependencies]
void = "1.0.2"
tower-service = { path = "../tower-service" }
tower = { path = "../tower" }


@ -1,6 +1,26 @@
# Tower Layer
Decorates a `Service`, transforming either the request or the response.
Decorates a [Tower] `Service`, transforming either the request or the response.
[![Crates.io][crates-badge]][crates-url]
[![Documentation][docs-badge]][docs-url]
[![Documentation (master)][docs-master-badge]][docs-master-url]
[![MIT licensed][mit-badge]][mit-url]
[![Build Status][actions-badge]][actions-url]
[![Discord chat][discord-badge]][discord-url]
[crates-badge]: https://img.shields.io/crates/v/tower-layer.svg
[crates-url]: https://crates.io/crates/tower-layer
[docs-badge]: https://docs.rs/tower-layer/badge.svg
[docs-url]: https://docs.rs/tower-layer
[docs-master-badge]: https://img.shields.io/badge/docs-master-blue
[docs-master-url]: https://tower-rs.github.io/tower/tower_layer
[mit-badge]: https://img.shields.io/badge/license-MIT-blue.svg
[mit-url]: LICENSE
[actions-badge]: https://github.com/tower-rs/tower/workflows/CI/badge.svg
[actions-url]: https://github.com/tower-rs/tower/actions?query=workflow%3ACI
[discord-badge]: https://img.shields.io/discord/500028886025895936?logo=discord&label=discord&logoColor=white
[discord-url]: https://discord.gg/EeF3cQw
## Overview
@ -10,6 +30,8 @@ reusable components that can be applied to very different kinds of services;
for example, it can be applied to services operating on different protocols,
and to both the client and server side of a network transaction.
`tower-layer` is `no_std` compatible.
## License
This project is licensed under the [MIT license](LICENSE).
@ -19,3 +41,5 @@ This project is licensed under the [MIT license](LICENSE).
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.
[Tower]: https://crates.io/crates/tower


@ -0,0 +1,51 @@
use super::Layer;
use core::fmt;
/// A no-op middleware.
///
/// When wrapping a [`Service`], the [`Identity`] layer returns the provided
/// service without modifying it.
///
/// [`Service`]: https://docs.rs/tower-service/latest/tower_service/trait.Service.html
///
/// # Examples
///
/// ```rust
/// use tower_layer::Identity;
/// use tower_layer::Layer;
///
/// let identity = Identity::new();
///
/// assert_eq!(identity.layer(42), 42);
/// ```
#[derive(Default, Clone)]
pub struct Identity {
_p: (),
}
impl Identity {
/// Creates a new [`Identity`].
///
/// ```rust
/// use tower_layer::Identity;
///
/// let identity = Identity::new();
/// ```
pub const fn new() -> Identity {
Identity { _p: () }
}
}
impl<S> Layer<S> for Identity {
type Service = S;
fn layer(&self, inner: S) -> Self::Service {
inner
}
}
impl fmt::Debug for Identity {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Identity").finish()
}
}

tower-layer/src/layer_fn.rs

@ -0,0 +1,116 @@
use super::Layer;
use core::fmt;
/// Returns a new [`LayerFn`] that implements [`Layer`] by calling the
/// given function.
///
/// The [`Layer::layer()`] method takes a type implementing [`Service`] and
/// returns a different type implementing [`Service`]. In many cases, this can
/// be implemented by a function or a closure. The [`LayerFn`] helper allows
/// writing simple [`Layer`] implementations without needing the boilerplate of
/// a new struct implementing [`Layer`].
///
/// [`Service`]: https://docs.rs/tower-service/latest/tower_service/trait.Service.html
/// [`Layer::layer()`]: crate::Layer::layer
///
/// # Examples
///
/// ```rust
/// # use tower::Service;
/// # use core::task::{Poll, Context};
/// # use tower_layer::{Layer, layer_fn};
/// # use core::fmt;
/// # use core::convert::Infallible;
/// #
/// // A middleware that logs requests before forwarding them to another service
/// pub struct LogService<S> {
/// target: &'static str,
/// service: S,
/// }
///
/// impl<S, Request> Service<Request> for LogService<S>
/// where
/// S: Service<Request>,
/// Request: fmt::Debug,
/// {
/// type Response = S::Response;
/// type Error = S::Error;
/// type Future = S::Future;
///
/// fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
/// self.service.poll_ready(cx)
/// }
///
/// fn call(&mut self, request: Request) -> Self::Future {
/// // Log the request
/// println!("request = {:?}, target = {:?}", request, self.target);
///
/// self.service.call(request)
/// }
/// }
///
/// // A `Layer` that wraps services in `LogService`
/// let log_layer = layer_fn(|service| {
/// LogService {
/// service,
/// target: "tower-docs",
/// }
/// });
///
/// // An example service. This one uppercases strings
/// let uppercase_service = tower::service_fn(|request: String| async move {
/// Ok::<_, Infallible>(request.to_uppercase())
/// });
///
/// // Wrap our service in a `LogService` so requests are logged.
/// let wrapped_service = log_layer.layer(uppercase_service);
/// ```
pub fn layer_fn<T>(f: T) -> LayerFn<T> {
LayerFn { f }
}
/// A `Layer` implemented by a closure. See the docs for [`layer_fn`] for more details.
#[derive(Clone, Copy)]
pub struct LayerFn<F> {
f: F,
}
impl<F, S, Out> Layer<S> for LayerFn<F>
where
F: Fn(S) -> Out,
{
type Service = Out;
fn layer(&self, inner: S) -> Self::Service {
(self.f)(inner)
}
}
impl<F> fmt::Debug for LayerFn<F> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("LayerFn")
.field("f", &format_args!("{}", core::any::type_name::<F>()))
.finish()
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloc::{format, string::ToString};
#[allow(dead_code)]
#[test]
fn layer_fn_has_useful_debug_impl() {
struct WrappedService<S> {
inner: S,
}
let layer = layer_fn(|svc| WrappedService { inner: svc });
let _svc = layer.layer("foo");
assert_eq!(
"LayerFn { f: tower_layer::layer_fn::tests::layer_fn_has_useful_debug_impl::{{closure}} }".to_string(),
format!("{layer:?}"),
);
}
}


@ -1,5 +1,11 @@
#![doc(html_root_url = "https://docs.rs/tower-layer/0.1.0")]
#![deny(missing_docs, rust_2018_idioms)]
#![warn(
missing_debug_implementations,
missing_docs,
rust_2018_idioms,
unreachable_pub
)]
#![forbid(unsafe_code)]
// `rustdoc::broken_intra_doc_links` is checked on CI
//! Layer traits and extensions.
//!
@ -7,8 +13,26 @@
//! allows other services to be composed with the service that implements layer.
//!
//! A middleware implements the [`Layer`] and [`Service`] traits.
//!
//! [`Service`]: https://docs.rs/tower/*/tower/trait.Service.html
/// Decorates a `Service`, transforming either the request or the response.
#![no_std]
#[cfg(test)]
extern crate alloc;
mod identity;
mod layer_fn;
mod stack;
mod tuple;
pub use self::{
identity::Identity,
layer_fn::{layer_fn, LayerFn},
stack::Stack,
};
/// Decorates a [`Service`], transforming either the request or the response.
///
/// Often, many of the pieces needed for writing network applications can be
/// reused across multiple services. The `Layer` trait can be used to write
@ -21,14 +45,10 @@
/// Take request logging as an example:
///
/// ```rust
/// # extern crate futures;
/// # extern crate tower_service;
/// # extern crate void;
/// # use tower_service::Service;
/// # use futures::{Poll, Async};
/// # use core::task::{Poll, Context};
/// # use tower_layer::Layer;
/// # use std::fmt;
/// # use void::Void;
/// # use core::fmt;
///
/// pub struct LogLayer {
/// target: &'static str,
@ -60,8 +80,8 @@
/// type Error = S::Error;
/// type Future = S::Future;
///
/// fn poll_ready(&mut self) -> Poll<(), Self::Error> {
/// self.service.poll_ready()
/// fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
/// self.service.poll_ready(cx)
/// }
///
/// fn call(&mut self, request: Request) -> Self::Future {
@ -75,6 +95,8 @@
/// The above log implementation is decoupled from the underlying protocol and
/// is also decoupled from client or server concerns. In other words, the same
/// log middleware could be used in either a client or a server.
///
/// [`Service`]: https://docs.rs/tower/*/tower/trait.Service.html
pub trait Layer<S> {
/// The wrapped service
type Service;
@ -82,3 +104,14 @@ pub trait Layer<S> {
/// that has been decorated with the middleware.
fn layer(&self, inner: S) -> Self::Service;
}
impl<T, S> Layer<S> for &T
where
T: ?Sized + Layer<S>,
{
type Service = T::Service;
fn layer(&self, inner: S) -> Self::Service {
(**self).layer(inner)
}
}
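
A small doctest-style sketch of the new blanket impl above (the `apply` helper is hypothetical): a shared reference to a `Layer` is itself a `Layer`, so layers can be passed around without giving up ownership.

```rust
use tower_layer::{layer_fn, Layer};

// A generic helper that takes a layer by value.
fn apply<L: Layer<i32>>(layer: L, service: i32) -> L::Service {
    layer.layer(service)
}

let add_two = layer_fn(|service: i32| service + 2);

// Thanks to `impl Layer<S> for &T`, a `&LayerFn<_>` is itself a `Layer`,
// so we can hand out references instead of moving the layer.
assert_eq!(apply(&add_two, 40), 42);
assert_eq!(apply(&add_two, 0), 2);
```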

tower-layer/src/stack.rs

@ -0,0 +1,85 @@
use super::Layer;
use core::fmt;
/// Two [`Layer`]s chained together.
///
/// # Examples
///
/// ```rust
/// use tower_layer::{Stack, layer_fn, Layer};
///
/// let inner = layer_fn(|service| service+2);
/// let outer = layer_fn(|service| service*2);
///
/// let inner_outer_stack = Stack::new(inner, outer);
///
/// // (4 + 2) * 2 = 12
/// // (4 * 2) + 2 = 10
/// assert_eq!(inner_outer_stack.layer(4), 12);
/// ```
#[derive(Clone)]
pub struct Stack<Inner, Outer> {
inner: Inner,
outer: Outer,
}
impl<Inner, Outer> Stack<Inner, Outer> {
/// Creates a new [`Stack`].
///
/// # Examples
///
/// ```rust
/// use tower_layer::{Stack, Identity};
///
/// let stack = Stack::new(Identity::new(), Identity::new());
/// ```
pub const fn new(inner: Inner, outer: Outer) -> Self {
Stack { inner, outer }
}
}
impl<S, Inner, Outer> Layer<S> for Stack<Inner, Outer>
where
Inner: Layer<S>,
Outer: Layer<Inner::Service>,
{
type Service = Outer::Service;
fn layer(&self, service: S) -> Self::Service {
let inner = self.inner.layer(service);
self.outer.layer(inner)
}
}
impl<Inner, Outer> fmt::Debug for Stack<Inner, Outer>
where
Inner: fmt::Debug,
Outer: fmt::Debug,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
// The generated output of nested `Stack`s is very noisy and makes
// it harder to understand what is in a `ServiceBuilder`.
//
// Instead, this output is designed assuming that a `Stack` is
// usually quite nested, and inside a `ServiceBuilder`. Therefore,
// this skips using `f.debug_struct()`, since each one would force
// a new layer of indentation.
//
// - In compact mode, a nested stack ends up just looking like a flat
// list of layers.
//
// - In pretty mode, while a newline is inserted between each layer,
// the `DebugStruct` used in the `ServiceBuilder` will inject padding
// so that each line is at the same indentation level.
//
// Also, the order of [outer, inner] is important, since it reflects
// the order that the layers were added to the stack.
if f.alternate() {
// pretty
write!(f, "{:#?},\n{:#?}", self.outer, self.inner)
} else {
write!(f, "{:?}, {:?}", self.outer, self.inner)
}
}
}
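
A short sketch of the formatting behaviour described in the comment above, using only `Identity` and `Stack` from this crate: nesting adds no braces or indentation, so a nested stack debug-prints as a flat, comma-separated list with the outermost layer first (the exact output shown is an assumption based on the impl above).

```rust
use tower_layer::{Identity, Stack};

// Two levels of nesting, three layers total.
let stack = Stack::new(Identity::new(), Stack::new(Identity::new(), Identity::new()));

// Compact mode renders the layers as a flat, comma-separated list.
assert_eq!(format!("{:?}", stack), "Identity, Identity, Identity");
```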

tower-layer/src/tuple.rs

@ -0,0 +1,330 @@
use crate::Layer;
impl<S> Layer<S> for () {
type Service = S;
fn layer(&self, service: S) -> Self::Service {
service
}
}
impl<S, L1> Layer<S> for (L1,)
where
L1: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1,) = self;
l1.layer(service)
}
}
impl<S, L1, L2> Layer<S> for (L1, L2)
where
L1: Layer<L2::Service>,
L2: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2) = self;
l1.layer(l2.layer(service))
}
}
impl<S, L1, L2, L3> Layer<S> for (L1, L2, L3)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3) = self;
l1.layer((l2, l3).layer(service))
}
}
impl<S, L1, L2, L3, L4> Layer<S> for (L1, L2, L3, L4)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4) = self;
l1.layer((l2, l3, l4).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5> Layer<S> for (L1, L2, L3, L4, L5)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5) = self;
l1.layer((l2, l3, l4, l5).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6> Layer<S> for (L1, L2, L3, L4, L5, L6)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6) = self;
l1.layer((l2, l3, l4, l5, l6).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7> Layer<S> for (L1, L2, L3, L4, L5, L6, L7)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7) = self;
l1.layer((l2, l3, l4, l5, l6, l7).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8> Layer<S> for (L1, L2, L3, L4, L5, L6, L7, L8)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9> Layer<S> for (L1, L2, L3, L4, L5, L6, L7, L8, L9)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<L12::Service>,
L12: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<L12::Service>,
L12: Layer<L13::Service>,
L13: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13).layer(service))
}
}
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<L12::Service>,
L12: Layer<L13::Service>,
L13: Layer<L14::Service>,
L14: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14).layer(service))
}
}
#[rustfmt::skip]
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<L12::Service>,
L12: Layer<L13::Service>,
L13: Layer<L14::Service>,
L14: Layer<L15::Service>,
L15: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15).layer(service))
}
}
#[rustfmt::skip]
impl<S, L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15, L16> Layer<S>
for (L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15, L16)
where
L1: Layer<L2::Service>,
L2: Layer<L3::Service>,
L3: Layer<L4::Service>,
L4: Layer<L5::Service>,
L5: Layer<L6::Service>,
L6: Layer<L7::Service>,
L7: Layer<L8::Service>,
L8: Layer<L9::Service>,
L9: Layer<L10::Service>,
L10: Layer<L11::Service>,
L11: Layer<L12::Service>,
L12: Layer<L13::Service>,
L13: Layer<L14::Service>,
L14: Layer<L15::Service>,
L15: Layer<L16::Service>,
L16: Layer<S>,
{
type Service = L1::Service;
fn layer(&self, service: S) -> Self::Service {
let (l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15, l16) = self;
l1.layer((l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15, l16).layer(service))
}
}
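
A doctest-style sketch of the tuple impls, mirroring the `Stack` example earlier in this diff: the first element of the tuple is the outermost layer, so reordering the tuple changes the result.

```rust
use tower_layer::{layer_fn, Layer};

let add_two = layer_fn(|service: i32| service + 2);
let double = layer_fn(|service: i32| service * 2);

// `(a, b).layer(s)` is `a.layer(b.layer(s))`: the first element wraps last.
assert_eq!((add_two, double).layer(4), 10); // (4 * 2) + 2
assert_eq!((double, add_two).layer(4), 12); // (4 + 2) * 2
```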


@ -1,3 +0,0 @@
# 0.1.0 (April 26, 2019)
- Initial release


@ -1,34 +0,0 @@
[package]
name = "tower-limit"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-limit/0.1.0"
description = """
Limit maximum request rate to a `Service`.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
[dependencies]
futures = "0.1.26"
tower-service = "0.2.0"
tower-layer = "0.1.0"
tokio-sync = "0.1.3"
tokio-timer = "0.2.6"
[dev-dependencies]
tower-test = { version = "0.1", path = "../tower-test" }
tokio = "0.1.19"
tokio-mock-task = "0.1.1"


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,13 +0,0 @@
# Tower Rate Limit
Limit maximum request rate to a `Service`.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,38 +0,0 @@
//! Future types
//!
use super::Error;
use futures::{Future, Poll};
use std::sync::Arc;
use tokio_sync::semaphore::Semaphore;
/// Future for the `ConcurrencyLimit` service.
#[derive(Debug)]
pub struct ResponseFuture<T> {
inner: T,
semaphore: Arc<Semaphore>,
}
impl<T> ResponseFuture<T> {
pub(crate) fn new(inner: T, semaphore: Arc<Semaphore>) -> ResponseFuture<T> {
ResponseFuture { inner, semaphore }
}
}
impl<T> Future for ResponseFuture<T>
where
T: Future,
T::Error: Into<Error>,
{
type Item = T::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
self.inner.poll().map_err(Into::into)
}
}
impl<T> Drop for ResponseFuture<T> {
fn drop(&mut self) {
self.semaphore.add_permits(1);
}
}


@ -1,24 +0,0 @@
use super::ConcurrencyLimit;
use tower_layer::Layer;
/// Enforces a limit on the concurrent number of requests the underlying
/// service can handle.
#[derive(Debug, Clone)]
pub struct ConcurrencyLimitLayer {
max: usize,
}
impl ConcurrencyLimitLayer {
/// Create a new concurrency limit layer.
pub fn new(max: usize) -> Self {
ConcurrencyLimitLayer { max }
}
}
impl<S> Layer<S> for ConcurrencyLimitLayer {
type Service = ConcurrencyLimit<S>;
fn layer(&self, service: S) -> Self::Service {
ConcurrencyLimit::new(service, self.max)
}
}


@ -1,9 +0,0 @@
//! Limit the max number of requests being concurrently processed.
pub mod future;
mod layer;
mod service;
pub use self::{layer::ConcurrencyLimitLayer, service::ConcurrencyLimit};
type Error = Box<dyn std::error::Error + Send + Sync>;


@ -1,12 +0,0 @@
use std::fmt;
#[derive(Debug)]
/// An error that can never occur.
pub enum Never {}
impl fmt::Display for Never {
fn fmt(&self, _: &mut fmt::Formatter) -> fmt::Result {
match *self {}
}
}
impl std::error::Error for Never {}


@ -1,111 +0,0 @@
use super::{future::ResponseFuture, Error};
use tower_service::Service;
use futures::{try_ready, Poll};
use std::sync::Arc;
use tokio_sync::semaphore::{self, Semaphore};
/// Enforces a limit on the concurrent number of requests the underlying
/// service can handle.
#[derive(Debug)]
pub struct ConcurrencyLimit<T> {
inner: T,
limit: Limit,
}
#[derive(Debug)]
struct Limit {
semaphore: Arc<Semaphore>,
permit: semaphore::Permit,
}
impl<T> ConcurrencyLimit<T> {
/// Create a new concurrency limiter.
pub fn new(inner: T, max: usize) -> Self {
ConcurrencyLimit {
inner,
limit: Limit {
semaphore: Arc::new(Semaphore::new(max)),
permit: semaphore::Permit::new(),
},
}
}
/// Get a reference to the inner service
pub fn get_ref(&self) -> &T {
&self.inner
}
/// Get a mutable reference to the inner service
pub fn get_mut(&mut self) -> &mut T {
&mut self.inner
}
/// Consume `self`, returning the inner service
pub fn into_inner(self) -> T {
self.inner
}
}
impl<S, Request> Service<Request> for ConcurrencyLimit<S>
where
S: Service<Request>,
S::Error: Into<Error>,
{
type Response = S::Response;
type Error = Error;
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
try_ready!(self
.limit
.permit
.poll_acquire(&self.limit.semaphore)
.map_err(Error::from));
self.inner.poll_ready().map_err(Into::into)
}
fn call(&mut self, request: Request) -> Self::Future {
// Make sure a permit has been acquired
if self
.limit
.permit
.try_acquire(&self.limit.semaphore)
.is_err()
{
panic!("max requests in-flight; poll_ready must be called first");
}
// Call the inner service
let future = self.inner.call(request);
// Forget the permit, the permit will be returned when
// `future::ResponseFuture` is dropped.
self.limit.permit.forget();
ResponseFuture::new(future, self.limit.semaphore.clone())
}
}
impl<S> Clone for ConcurrencyLimit<S>
where
S: Clone,
{
fn clone(&self) -> ConcurrencyLimit<S> {
ConcurrencyLimit {
inner: self.inner.clone(),
limit: Limit {
semaphore: self.limit.semaphore.clone(),
permit: semaphore::Permit::new(),
},
}
}
}
impl Drop for Limit {
fn drop(&mut self) {
self.permit.release(&self.semaphore);
}
}


@ -1,14 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-limit/0.1.0")]
#![cfg_attr(test, deny(warnings))]
#![deny(missing_debug_implementations, missing_docs, rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
//! Tower middleware for limiting requests.
pub mod concurrency;
pub mod rate;
pub use crate::{
concurrency::{ConcurrencyLimit, ConcurrencyLimitLayer},
rate::{RateLimit, RateLimitLayer},
};


@ -1,3 +0,0 @@
use std::error;
pub(crate) type Error = Box<dyn error::Error + Send + Sync>;


@ -1,29 +0,0 @@
//! Future types
use super::error::Error;
use futures::{Future, Poll};
/// Future for the `RateLimit` service.
#[derive(Debug)]
pub struct ResponseFuture<T> {
inner: T,
}
impl<T> ResponseFuture<T> {
pub(crate) fn new(inner: T) -> ResponseFuture<T> {
ResponseFuture { inner }
}
}
impl<T> Future for ResponseFuture<T>
where
T: Future,
Error: From<T::Error>,
{
type Item = T::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
self.inner.poll().map_err(Into::into)
}
}


@ -1,287 +0,0 @@
use futures::{
self,
future::{poll_fn, Future},
};
use tokio_mock_task::MockTask;
use tower_limit::concurrency::ConcurrencyLimit;
use tower_service::Service;
use tower_test::{assert_request_eq, mock};
macro_rules! assert_ready {
($e:expr) => {{
use futures::Async::*;
match $e {
Ok(Ready(v)) => v,
Ok(NotReady) => panic!("not ready"),
Err(e) => panic!("err = {:?}", e),
}
}};
}
macro_rules! assert_not_ready {
($e:expr) => {{
use futures::Async::*;
match $e {
Ok(NotReady) => {}
r => panic!("unexpected poll status = {:?}", r),
}
}};
}
#[test]
fn basic_service_limit_functionality_with_poll_ready() {
let mut task = MockTask::new();
let (mut service, mut handle) = new_service(2);
poll_fn(|| service.poll_ready()).wait().unwrap();
let r1 = service.call("hello 1");
poll_fn(|| service.poll_ready()).wait().unwrap();
let r2 = service.call("hello 2");
task.enter(|| {
assert!(service.poll_ready().unwrap().is_not_ready());
});
assert!(!task.is_notified());
// The request gets passed through
assert_request_eq!(handle, "hello 1").send_response("world 1");
// The next request gets passed through
assert_request_eq!(handle, "hello 2").send_response("world 2");
// There are no more requests
task.enter(|| {
assert!(handle.poll_request().unwrap().is_not_ready());
});
assert_eq!(r1.wait().unwrap(), "world 1");
assert!(task.is_notified());
// Another request can be sent
task.enter(|| {
assert!(service.poll_ready().unwrap().is_ready());
});
let r3 = service.call("hello 3");
task.enter(|| {
assert!(service.poll_ready().unwrap().is_not_ready());
});
assert_eq!(r2.wait().unwrap(), "world 2");
// The request gets passed through
assert_request_eq!(handle, "hello 3").send_response("world 3");
assert_eq!(r3.wait().unwrap(), "world 3");
}
#[test]
fn basic_service_limit_functionality_without_poll_ready() {
let mut task = MockTask::new();
let (mut service, mut handle) = new_service(2);
assert_ready!(service.poll_ready());
let r1 = service.call("hello 1");
assert_ready!(service.poll_ready());
let r2 = service.call("hello 2");
task.enter(|| {
assert_not_ready!(service.poll_ready());
});
// The request gets passed through
assert_request_eq!(handle, "hello 1").send_response("world 1");
assert!(!task.is_notified());
// The next request gets passed through
assert_request_eq!(handle, "hello 2").send_response("world 2");
assert!(!task.is_notified());
// There are no more requests
task.enter(|| {
assert!(handle.poll_request().unwrap().is_not_ready());
});
assert_eq!(r1.wait().unwrap(), "world 1");
assert!(task.is_notified());
// One more request can be sent
assert_ready!(service.poll_ready());
let r4 = service.call("hello 4");
task.enter(|| {
assert_not_ready!(service.poll_ready());
});
assert_eq!(r2.wait().unwrap(), "world 2");
assert!(task.is_notified());
// The request gets passed through
assert_request_eq!(handle, "hello 4").send_response("world 4");
assert_eq!(r4.wait().unwrap(), "world 4");
}
#[test]
fn request_without_capacity() {
let mut task = MockTask::new();
let (mut service, _) = new_service(0);
task.enter(|| {
assert_not_ready!(service.poll_ready());
});
}
#[test]
fn reserve_capacity_without_sending_request() {
let mut task = MockTask::new();
let (mut s1, mut handle) = new_service(1);
let mut s2 = s1.clone();
// Reserve capacity in s1
task.enter(|| {
assert!(s1.poll_ready().unwrap().is_ready());
});
// Service 2 cannot get capacity
task.enter(|| {
assert!(s2.poll_ready().unwrap().is_not_ready());
});
// s1 sends the request, then s2 is able to get capacity
let r1 = s1.call("hello");
assert_request_eq!(handle, "hello").send_response("world");
task.enter(|| {
assert!(s2.poll_ready().unwrap().is_not_ready());
});
r1.wait().unwrap();
task.enter(|| {
assert!(s2.poll_ready().unwrap().is_ready());
});
}
#[test]
fn service_drop_frees_capacity() {
let mut task = MockTask::new();
let (mut s1, _handle) = new_service(1);
let mut s2 = s1.clone();
// Reserve capacity in s1
assert_ready!(s1.poll_ready());
// Service 2 cannot get capacity
task.enter(|| {
assert_not_ready!(s2.poll_ready());
});
drop(s1);
assert!(task.is_notified());
assert_ready!(s2.poll_ready());
}
#[test]
fn response_error_releases_capacity() {
let mut task = MockTask::new();
let (mut s1, mut handle) = new_service(1);
let mut s2 = s1.clone();
// Reserve capacity in s1
task.enter(|| {
assert_ready!(s1.poll_ready());
});
// s1 sends the request, then s2 is able to get capacity
let r1 = s1.call("hello");
assert_request_eq!(handle, "hello").send_error("boom");
r1.wait().unwrap_err();
task.enter(|| {
assert!(s2.poll_ready().unwrap().is_ready());
});
}
#[test]
fn response_future_drop_releases_capacity() {
let mut task = MockTask::new();
let (mut s1, _handle) = new_service(1);
let mut s2 = s1.clone();
// Reserve capacity in s1
task.enter(|| {
assert_ready!(s1.poll_ready());
});
// s1 sends the request, then s2 is able to get capacity
let r1 = s1.call("hello");
task.enter(|| {
assert_not_ready!(s2.poll_ready());
});
drop(r1);
task.enter(|| {
assert!(s2.poll_ready().unwrap().is_ready());
});
}
#[test]
fn multi_waiters() {
let mut task1 = MockTask::new();
let mut task2 = MockTask::new();
let mut task3 = MockTask::new();
let (mut s1, _handle) = new_service(1);
let mut s2 = s1.clone();
let mut s3 = s1.clone();
// Reserve capacity in s1
task1.enter(|| assert_ready!(s1.poll_ready()));
// s2 and s3 are not ready
task2.enter(|| assert_not_ready!(s2.poll_ready()));
task3.enter(|| assert_not_ready!(s3.poll_ready()));
drop(s1);
assert!(task2.is_notified());
assert!(!task3.is_notified());
drop(s2);
assert!(task3.is_notified());
}
type Mock = mock::Mock<&'static str, &'static str>;
type Handle = mock::Handle<&'static str, &'static str>;
fn new_service(max: usize) -> (ConcurrencyLimit<Mock>, Handle) {
let (service, handle) = mock::pair();
let service = ConcurrencyLimit::new(service, max);
(service, handle)
}


@ -1,80 +0,0 @@
use futures::future;
use tokio::runtime::current_thread::Runtime;
use tokio_timer::Delay;
use tower_limit::rate::*;
use tower_service::*;
use tower_test::{assert_request_eq, mock};
use std::time::{Duration, Instant};
macro_rules! assert_ready {
($e:expr) => {{
use futures::Async::*;
match $e {
Ok(Ready(v)) => v,
Ok(NotReady) => panic!("not ready"),
Err(e) => panic!("err = {:?}", e),
}
}};
}
macro_rules! assert_not_ready {
($e:expr) => {{
use futures::Async::*;
match $e {
Ok(NotReady) => {}
r => panic!("unexpected poll status = {:?}", r),
}
}};
}
#[test]
fn reaching_capacity() {
let mut rt = Runtime::new().unwrap();
let (mut service, mut handle) = new_service(Rate::new(1, from_millis(100)));
assert_ready!(service.poll_ready());
let response = service.call("hello");
assert_request_eq!(handle, "hello").send_response("world");
let response = rt.block_on(response);
assert_eq!(response.unwrap(), "world");
rt.block_on(future::lazy(|| {
assert_not_ready!(service.poll_ready());
Ok::<_, ()>(())
}))
.unwrap();
let poll_request = rt.block_on(future::lazy(|| handle.poll_request()));
assert!(poll_request.unwrap().is_not_ready());
// Unlike `thread::sleep`, this advances the timer.
rt.block_on(Delay::new(Instant::now() + Duration::from_millis(100)))
.unwrap();
let poll_ready = rt.block_on(future::lazy(|| service.poll_ready()));
assert_ready!(poll_ready);
// Send a second request
let response = service.call("two");
assert_request_eq!(handle, "two").send_response("done");
let response = rt.block_on(response);
assert_eq!(response.unwrap(), "done");
}
type Mock = mock::Mock<&'static str, &'static str>;
type Handle = mock::Handle<&'static str, &'static str>;
fn new_service(rate: Rate) -> (RateLimit<Mock>, Handle) {
let (service, handle) = mock::pair();
let service = RateLimit::new(service, rate);
(service, handle)
}
fn from_millis(n: u64) -> Duration {
Duration::from_millis(n)
}


@ -1,3 +0,0 @@
# 0.1.0 (April 26, 2019)
- Initial release


@ -1,32 +0,0 @@
[package]
name = "tower-load-shed"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-load-shed/0.1.0"
description = """
Immediately reject requests if the inner service is not ready. This is also
known as load-shedding.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
[dependencies]
futures = "0.1.25"
tower-service = "0.2.0"
tower-layer = "0.1.0"
[dev-dependencies]
tokio-mock-task = "0.1.1"
tower-test = { version = "0.1.0", path = "../tower-test" }


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,14 +0,0 @@
# Tower Load Shed
Immediately reject requests if the inner service is not ready. This is also
known as load-shedding.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,48 +0,0 @@
//! Future types
use std::fmt;
use futures::{Future, Poll};
use crate::error::{Error, Overloaded};
/// Future for the `LoadShed` service.
pub struct ResponseFuture<F> {
state: Result<F, ()>,
}
impl<F> ResponseFuture<F> {
pub(crate) fn called(fut: F) -> Self {
ResponseFuture { state: Ok(fut) }
}
pub(crate) fn overloaded() -> Self {
ResponseFuture { state: Err(()) }
}
}
impl<F> Future for ResponseFuture<F>
where
F: Future,
F::Error: Into<Error>,
{
type Item = F::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
match self.state {
Ok(ref mut fut) => fut.poll().map_err(Into::into),
Err(()) => Err(Overloaded::new().into()),
}
}
}
impl<F> fmt::Debug for ResponseFuture<F>
where
// bounds for future-proofing...
F: fmt::Debug,
{
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str("ResponseFuture")
}
}


@ -1,24 +0,0 @@
use tower_layer::Layer;
use crate::LoadShed;
/// A `tower-layer` to wrap services in `LoadShed` middleware.
#[derive(Debug)]
pub struct LoadShedLayer {
_p: (),
}
impl LoadShedLayer {
/// Creates a new layer.
pub fn new() -> Self {
LoadShedLayer { _p: () }
}
}
impl<S> Layer<S> for LoadShedLayer {
type Service = LoadShed<S>;
fn layer(&self, service: S) -> Self::Service {
LoadShed::new(service)
}
}


@ -1,54 +0,0 @@
use futures::Future;
use tower_load_shed::{self, LoadShed};
use tower_service::Service;
use tower_test::{assert_request_eq, mock};
#[test]
fn when_ready() {
let (mut service, mut handle) = new_service();
with_task(|| {
assert!(
service.poll_ready().unwrap().is_ready(),
"overload always reports ready",
);
});
let response = service.call("hello");
assert_request_eq!(handle, "hello").send_response("world");
assert_eq!(response.wait().unwrap(), "world");
}
#[test]
fn when_not_ready() {
let (mut service, mut handle) = new_service();
handle.allow(0);
with_task(|| {
assert!(
service.poll_ready().unwrap().is_ready(),
"overload always reports ready",
);
});
let fut = service.call("hello");
let err = fut.wait().unwrap_err();
assert!(err.is::<tower_load_shed::error::Overloaded>());
}
type Mock = mock::Mock<&'static str, &'static str>;
type Handle = mock::Handle<&'static str, &'static str>;
fn new_service() -> (LoadShed<Mock>, Handle) {
let (service, handle) = mock::pair();
let service = LoadShed::new(service);
(service, handle)
}
fn with_task<F: FnOnce() -> U, U>(f: F) -> U {
use futures::future::{lazy, Future};
lazy(|| Ok::<_, ()>(f())).wait().unwrap()
}


@ -1,3 +0,0 @@
# 0.1.0 (unreleased)
- Initial release


@ -1,29 +0,0 @@
[package]
name = "tower-reconnect"
# When releasing to crates.io:
# - Remove path dependencies
# - Update html_root_url.
# - Update doc url
# - Cargo.toml
# - README.md
# - Update CHANGELOG.md.
# - Create "v0.1.x" git tag.
version = "0.1.0"
authors = ["Tower Maintainers <team@tower-rs.com>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tower-rs/tower"
homepage = "https://github.com/tower-rs/tower"
documentation = "https://docs.rs/tower-reconnect/0.1.0"
description = """
Automatically recreate a new `Service` instance when an error is encountered.
"""
categories = ["asynchronous", "network-programming"]
edition = "2018"
publish = false
[dependencies]
log = "0.4.1"
futures = "0.1.26"
tower-service = "0.2.0"
tower-util = "0.1.0"


@ -1,25 +0,0 @@
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@ -1,13 +0,0 @@
# Tower Reconnect
Automatically recreate a new `Service` instance when an error is encountered.
## License
This project is licensed under the [MIT license](LICENSE).
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in Tower by you, shall be licensed as MIT, without any additional
terms or conditions.


@ -1,25 +0,0 @@
use crate::Error;
use futures::{Future, Poll};
pub struct ResponseFuture<F> {
inner: F,
}
impl<F> ResponseFuture<F> {
pub(crate) fn new(inner: F) -> Self {
ResponseFuture { inner }
}
}
impl<F> Future for ResponseFuture<F>
where
F: Future,
F::Error: Into<Error>,
{
type Item = F::Item;
type Error = Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
self.inner.poll().map_err(Into::into)
}
}


@ -1,151 +0,0 @@
#![doc(html_root_url = "https://docs.rs/tower-load-shed/0.1.0")]
#![deny(rust_2018_idioms)]
#![allow(elided_lifetimes_in_paths)]
pub mod future;
use crate::future::ResponseFuture;
use futures::{Async, Future, Poll};
use log::trace;
use std::fmt;
use tower_service::Service;
use tower_util::MakeService;
pub struct Reconnect<M, Target>
where
M: Service<Target>,
{
mk_service: M,
state: State<M::Future, M::Response>,
target: Target,
}
type Error = Box<dyn std::error::Error + Send + Sync>;
#[derive(Debug)]
enum State<F, S> {
Idle,
Connecting(F),
Connected(S),
}
impl<M, Target> Reconnect<M, Target>
where
M: Service<Target>,
{
pub fn new<S, Request>(mk_service: M, target: Target) -> Self
where
M: Service<Target, Response = S>,
S: Service<Request>,
Error: From<M::Error> + From<S::Error>,
Target: Clone,
{
Reconnect {
mk_service,
state: State::Idle,
target,
}
}
}
impl<M, Target, S, Request> Service<Request> for Reconnect<M, Target>
where
M: Service<Target, Response = S>,
S: Service<Request>,
Error: From<M::Error> + From<S::Error>,
Target: Clone,
{
type Response = S::Response;
type Error = Error;
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self) -> Poll<(), Self::Error> {
let ret;
let mut state;
loop {
match self.state {
State::Idle => {
trace!("poll_ready; idle");
match self.mk_service.poll_ready()? {
Async::Ready(()) => (),
Async::NotReady => {
trace!("poll_ready; MakeService not ready");
return Ok(Async::NotReady);
}
}
let fut = self.mk_service.make_service(self.target.clone());
self.state = State::Connecting(fut);
continue;
}
State::Connecting(ref mut f) => {
trace!("poll_ready; connecting");
match f.poll() {
Ok(Async::Ready(service)) => {
state = State::Connected(service);
}
Ok(Async::NotReady) => {
trace!("poll_ready; not ready");
return Ok(Async::NotReady);
}
Err(e) => {
trace!("poll_ready; error");
state = State::Idle;
ret = Err(e.into());
break;
}
}
}
State::Connected(ref mut inner) => {
trace!("poll_ready; connected");
match inner.poll_ready() {
Ok(Async::Ready(_)) => {
trace!("poll_ready; ready");
return Ok(Async::Ready(()));
}
Ok(Async::NotReady) => {
trace!("poll_ready; not ready");
return Ok(Async::NotReady);
}
Err(_) => {
trace!("poll_ready; error");
state = State::Idle;
}
}
}
}
self.state = state;
}
self.state = state;
ret
}
fn call(&mut self, request: Request) -> Self::Future {
let service = match self.state {
State::Connected(ref mut service) => service,
_ => panic!("service not ready; poll_ready must be called first"),
};
let fut = service.call(request);
ResponseFuture::new(fut)
}
}
impl<M, Target> fmt::Debug for Reconnect<M, Target>
where
M: Service<Target> + fmt::Debug,
M::Future: fmt::Debug,
M::Response: fmt::Debug,
Target: fmt::Debug,
{
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.debug_struct("Reconnect")
.field("mk_service", &self.mk_service)
.field("state", &self.state)
.field("target", &self.target)
.finish()
}
}
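
Nothing in this diff shows `Reconnect` being constructed, so the following is a hedged, hypothetical sketch against the futures 0.1 / tower-service 0.2 API used above. `Echo`, `MakeEcho`, and the target string are invented stand-ins; the `use tower_reconnect::Reconnect` line assumes a path dependency, since the crate is marked `publish = false`. The point is only to show the shape of the bounds on `Reconnect::new`: the "make service" is itself a `Service` whose response type is another `Service`.

```rust
use futures::{future, Async, Poll};
use tower_service::Service;
// The type defined above; assumed to be available via a path dependency.
use tower_reconnect::Reconnect;

type BoxError = Box<dyn std::error::Error + Send + Sync>;

/// The inner service that `Reconnect` rebuilds on failure.
struct Echo;

impl Service<String> for Echo {
    type Response = String;
    type Error = BoxError;
    type Future = future::FutureResult<Self::Response, Self::Error>;

    fn poll_ready(&mut self) -> Poll<(), Self::Error> {
        Ok(Async::Ready(()))
    }

    fn call(&mut self, request: String) -> Self::Future {
        future::ok(request)
    }
}

/// The "make service": given a target, produce a fresh `Echo`.
struct MakeEcho;

impl Service<&'static str> for MakeEcho {
    type Response = Echo;
    type Error = BoxError;
    type Future = future::FutureResult<Self::Response, Self::Error>;

    fn poll_ready(&mut self) -> Poll<(), Self::Error> {
        Ok(Async::Ready(()))
    }

    fn call(&mut self, _target: &'static str) -> Self::Future {
        future::ok(Echo)
    }
}

fn main() {
    let mut service = Reconnect::new::<Echo, String>(MakeEcho, "localhost:8080");

    // The first `poll_ready` drives `MakeEcho` to build an `Echo`; after an
    // inner error, `Reconnect` falls back to `Idle` and connects again.
    let ready = <Reconnect<_, _> as Service<String>>::poll_ready(&mut service);
    assert!(ready.unwrap().is_ready());
}
```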


@ -1,3 +0,0 @@
# 0.1.0 (April 26, 2019)
- Initial release

Some files were not shown because too many files have changed in this diff.