* Make encode and encode_by_ref fallible
This only changes the trait for now and makes everything compile by calling .expect() at every call site; those calls will be removed in a later commit. (A sketch of the resulting trait shape follows this list.)
* PgNumeric: Turn `TryFrom<Decimal>` into an infallible `From`
* Turn panics in Encode implementations into errors
* Add Encode error analogous to the Decode error
* Propagate encode errors through Arguments::add
This pushes the panics one level up, mostly into bind calls; those will also be removed later.
* Only check argument encoding at the end
* Use Result in Query internally
* Implement query_with functions in terms of _with_result
* Surface encode errors when executing a query
* Remove remaining panics in AnyConnectionBackend implementations
* PostgreSQL BigDecimal: Return encode error immediately
* Arguments: Add len method to report how many arguments were added
* Query::bind: Report which argument failed to encode
* IsNull: Add is_null method
* MySqlArguments: Replace manual bitmap code with NullBitMap helper type
* Roll back buffer in MySqlArguments if encoding fails
* Roll back buffer in SqliteArguments if encoding fails
* Roll back PgArgumentBuffer if encoding fails
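Taken together, these commits reshape the encoding path roughly as follows. This is a minimal sketch rather than sqlx's exact API: the buffer and error aliases stand in for the real `Database::ArgumentBuffer` and `BoxDynError` types, and `add` is a hypothetical illustration of the buffer rollback.
```
use std::error::Error;

// Simplified stand-ins for sqlx's real associated types.
type BoxDynError = Box<dyn Error + Send + Sync + 'static>;
type ArgumentBuffer = Vec<u8>;

pub enum IsNull {
    Yes,
    No,
}

impl IsNull {
    // The new `is_null` helper.
    pub fn is_null(&self) -> bool {
        matches!(self, IsNull::Yes)
    }
}

pub trait Encode {
    // Both methods now return a Result instead of panicking on bad input.
    fn encode(self, buf: &mut ArgumentBuffer) -> Result<IsNull, BoxDynError>
    where
        Self: Sized,
    {
        self.encode_by_ref(buf)
    }

    fn encode_by_ref(&self, buf: &mut ArgumentBuffer) -> Result<IsNull, BoxDynError>;
}

// Rolling the buffer back when a value fails to encode, so a failed
// bind leaves no partial bytes behind; the MySQL, SQLite, and Postgres
// argument buffers get the same treatment.
fn add<T: Encode>(buf: &mut ArgumentBuffer, value: T) -> Result<IsNull, BoxDynError> {
    let offset = buf.len();
    value.encode(buf).map_err(|e| {
        buf.truncate(offset);
        e
    })
}
```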
* Include test case for regular subtransactions
While using COPY together with subtransactions, I kept running into errors.
This test case documents that error; it currently fails with:
Error: encountered unexpected or invalid data: expecting ParseComplete but received CommandComplete
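A sketch of such a regression test, assuming a tokio test runtime and a `DATABASE_URL` pointing at a disposable database; the failing COPY statement and table name are illustrative:
```
use sqlx::{Connection, PgConnection};

#[tokio::test]
async fn copy_error_inside_subtransaction() -> Result<(), Box<dyn std::error::Error>> {
    let mut conn = PgConnection::connect(&std::env::var("DATABASE_URL")?).await?;

    // An outer transaction plus a nested subtransaction (a SAVEPOINT).
    let mut tx = conn.begin().await?;
    let mut sub = tx.begin().await?;

    // A COPY statement that fails inside the subtransaction used to
    // surface later as a protocol error ("expecting ParseComplete but
    // received CommandComplete") instead of a normal database error.
    let result = sub.copy_in_raw("COPY nonexistent_table FROM STDIN").await;
    assert!(result.is_err());

    // The connection must remain usable after the failed COPY.
    sub.rollback().await?;
    sqlx::query("SELECT 1").execute(&mut *tx).await?;

    Ok(())
}
```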
* PostgreSQL Copy: Consume ReadyForQuery on error
When a COPY statement failed inside a subtransaction, a protocol
error used to be raised. By consuming the ReadyForQuery message when
there is an error, we no longer have this issue.
This is meant to be much easier to discover than the current approach of directly invoking `Executor` methods.
In addition, I'm improving documentation for the `query*()` functions across the board.
Inlined format args make code more readable and more compact.
I ran this clippy command to fix most cases, then cleaned up a few trailing commas and uncaught edge cases.
```
cargo clippy --bins --examples --benches --tests --lib --workspace --fix -- -A clippy::all -W clippy::uninlined_format_args
```
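As an illustration, the lint inlines named variables into the format string:
```
let name = "sqlx";

// Before: the variable is passed as a positional argument.
println!("hello, {}!", name);

// After: the variable is inlined into the format string.
println!("hello, {name}!");
```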
This function could panic due to slicing out of bounds when the server
responds without the `\x` prefix. With this commit we return an error
instead, and we verify that the prefix is what we expect rather than
blindly removing it.
Though not directly related to the panic, we also replace `as_str()`
with `as_bytes()`: there is no reason to perform a UTF-8 validity check
when `hex::decode` already checks that the content is valid.
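A minimal sketch of the defensive parse described here (the function name is illustrative; the real code lives in sqlx's Postgres decoding):
```
// `hex` is the hex crate; its decode() validates the content.
fn decode_hex_bytea(text: &str) -> Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>> {
    // Work on bytes: hex::decode validates the content anyway, so a
    // separate UTF-8 round-trip via as_str() buys nothing.
    let data = text
        .as_bytes()
        .strip_prefix(b"\\x")
        .ok_or("text does not start with \\x")?;

    Ok(hex::decode(data)?)
}
```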
* Fixed leak of `Arc<SharedPool>` in `DecrementSizeGuard::cancel()`
* Renamed `PoolOptions::connect_timeout` to `acquire_timeout` for clarity.
* Fixed `/* SQLx ping */` showing up in Postgres query logs
* Made `.close()` a regular function that returns a `Future`
* Deleted deprecated method `PoolConnection::release()`
* Document why a connection might be dropped if `Pool::acquire()` is cancelled
* Added connection metadata to pool lifecycle callbacks
* Improved guarantees for `min_connections`
* Fixed `num_idle()` to not spin forever at high load
* Improved documentation across the `pool` module
Postgres arrays and records do not fully support custom types. When encountering an unknown OID, they currently default to using `PgTypeInfo::with_oid`. This is invalid: it breaks the invariant that decoding only uses resolved types, leading to panics.
This commit returns an error instead of panicking. This is merely a mitigation; a proper fix would add full support for custom Postgres types. Since full support involves considerably more work, fixing this immediate issue is still worthwhile on its own.
Related issues:
- https://github.com/launchbadge/sqlx/issues/1672
- https://github.com/launchbadge/sqlx/issues/1797
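A sketch of the mitigation's shape; `resolve_element_type` and `try_from_known_oid` are hypothetical names used for illustration, not sqlx's actual API:
```
use std::error::Error;

type BoxDynError = Box<dyn Error + Send + Sync + 'static>;

// Stand-ins for sqlx internals.
pub struct Oid(pub u32);
pub struct PgTypeInfo;

impl PgTypeInfo {
    fn try_from_known_oid(_oid: &Oid) -> Option<PgTypeInfo> {
        None // lookup of built-in types elided
    }
}

// Before: an unknown OID fell back to `PgTypeInfo::with_oid`, creating
// an unresolved type that later panicked during decoding. Now the
// unknown OID surfaces as an error at decode time instead.
fn resolve_element_type(oid: Oid) -> Result<PgTypeInfo, BoxDynError> {
    PgTypeInfo::try_from_known_oid(&oid).ok_or_else(|| {
        format!(
            "custom types in arrays/records are not fully supported yet (OID {})",
            oid.0
        )
        .into()
    })
}
```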
* postgres: use Oid type instead of u32 (see the sketch after this list)
* Make serde happy
* Expose the inner u32
* docs
* Try to fix tests
* Fix unit tests
* Fix order
* Not sure what happened here
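A sketch of what the newtype described by these commits might look like; the derives and the `into_inner` accessor are assumptions about how the serde compatibility and "expose the inner u32" commits could be realized:
```
/// A PostgreSQL object identifier, previously passed around as a bare u32.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "serde", serde(transparent))]
pub struct Oid(u32);

impl Oid {
    /// Expose the inner u32 for callers that still need the raw value.
    pub fn into_inner(self) -> u32 {
        self.0
    }
}
```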
This commit fixes the array decoder to support custom types. The core of the issue was that the array decoder did not use the type info retrieved from the database, which meant it only supported native types.
The fix uses the element type info fetched from the database. A new internal helper method is added to the `PgType` struct: it returns the type info for the inner array element, if available (see the sketch below).
Closes #1477
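A sketch of the helper's shape; the variant names and the `Cow` return type are assumptions for illustration, not necessarily the exact internal API:
```
use std::borrow::Cow;

// Stand-ins for sqlx internals.
#[derive(Clone)]
pub struct PgTypeInfo(&'static str);

pub enum PgType {
    // A built-in array type with a statically known element type.
    VarcharArray,
    // A scalar type: not an array, so there is no element type.
    Varchar,
    // A user-defined array type fetched from the database at runtime,
    // carrying the element type info it declared.
    DeclaredArray { element: PgTypeInfo },
}

impl PgType {
    // Returns the type info of the array's element, if this type is an
    // array. The decoder now consults this instead of assuming the
    // element is a native type.
    fn try_array_element(&self) -> Option<Cow<'_, PgTypeInfo>> {
        match self {
            PgType::VarcharArray => Some(Cow::Owned(PgTypeInfo("varchar"))),
            PgType::Varchar => None,
            PgType::DeclaredArray { element } => Some(Cow::Borrowed(element)),
        }
    }
}
```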
* fix: wait until ready after executing other helper queries while a pg query is executing
Signed-off-by: Atkins Chang <atkinschang@gmail.com>
* fix: use tls parameter for testing target
Signed-off-by: Atkins Chang <atkinschang@gmail.com>
* a task that is marked woken but didn't actually wake before being cancelled will instead wake the next task in the queue
* a task that wakes but doesn't get a connection will put itself back in the queue instead of waiting until it times out with no way to be woken
* the idle reaper now won't run if there are tasks waiting for a connection, and also uses
the proper `SharedPool::release()` to return validated connections to the pool so waiting tasks get woken
Closes #622, #1210
(hopefully for good this time)
Signed-off-by: Austin Bonander <austin@launchbadge.com>
* fix `DecrementSizeGuard::drop()` only waking one `Waiter` regardless of whether that waiter was already woken
* fix connect-backoff loop giving up the size guard
* don't cut in line to open a new connection
* have tasks waiting on `acquire()` wake periodically to check if there's a connection in the queue
Signed-off-by: Austin Bonander <austin@launchbadge.com>
This commit updates the `postgres::connection::describe` module to add full support for domain types. Domain types were previously confused with their category, which caused invalid OID resolution.
Fixes launchbadge/sqlx#110
BREAKING CHANGE: some columns in `query!()` et al. output will change from `T` to `Option<T>`.
Breakage should be minimal in practice, as these columns would have
needed to be manually overridden anyway to avoid runtime errors.
Signed-off-by: Austin Bonander <austin@launchbadge.com>
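sqlx's macros already accept explicit nullability overrides in the SQL itself, which is the manual override referred to above: appending `!` to a column alias forces `T`, and `?` forces `Option<T>`. An illustrative fragment, assuming a `rooms` table, a `pool` in scope, and a live `DATABASE_URL` for the macro's compile-time check:
```
let room_id = 1_i64; // illustrative id

// `owner_id` is known to be non-null here, so override the inferred
// nullability with `!`; use `?` instead to force `Option<T>`.
let rec = sqlx::query!(
    r#"SELECT owner_id AS "owner_id!" FROM rooms WHERE id = $1"#,
    room_id
)
.fetch_one(&pool)
.await?;
```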
This solves a problem when a query needs to be described before it is
executed, in cases where the query is dynamic in nature and binding its
parameters requires the type info.
In this case, the first describe doesn't know the input parameters, so
the parameter type info is inferred by the database. For column types
such as `varchar(10)[]` or `json`, the parameter type is inferred to be
the column type. If we then try to bind `text[]` or `jsonb` values, the
database has already stored different types for these parameters for
the statement, and errors out.
So we now only store the statement in the cache when it comes from an
actual execution of the query.
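A sketch of the resulting caching rule; `get_or_prepare` and `store_to_cache` are hypothetical names illustrating the flow, not sqlx's exact internals:
```
use std::collections::HashMap;

// Stand-ins for sqlx internals.
type StatementId = u32;

struct StatementCache {
    cached: HashMap<String, StatementId>,
}

impl StatementCache {
    // `store_to_cache` is false for a bare describe: the parameter
    // types inferred there may not match what a later execution binds
    // (e.g. json vs. jsonb), so only real executions populate the cache.
    fn get_or_prepare(
        &mut self,
        sql: &str,
        prepare: impl FnOnce() -> StatementId,
        store_to_cache: bool,
    ) -> StatementId {
        if let Some(&id) = self.cached.get(sql) {
            return id;
        }
        let id = prepare();
        if store_to_cache {
            self.cached.insert(sql.to_string(), id);
        }
        id
    }
}
```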