Unfortunately this means we lose use of the convenient name `gen`, so
this includes a handful of renames.
We can't increase the edition for `libm` yet due to MSRV, but we can
enable `unsafe_op_in_unsafe_fn` to help make that change smoother in the
future.
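As a sketch of what the lint enforces (the lint name is real; the function and `deny` level are illustrative), unsafe operations need their own explicit block even inside an unsafe fn:

    #![deny(unsafe_op_in_unsafe_fn)]

    /// # Safety
    /// `p` must be valid for reads.
    pub unsafe fn read_byte(p: *const u8) -> u8 {
        // Without this inner block, the lint fires even though the
        // surrounding fn is already unsafe.
        unsafe { *p }
    }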
Move the workspace configuration to a virtual manifest. This
reorganization makes a clearer separation between package contents
and support files that don't get distributed. It will also make it
easier to merge this repository with `compiler-builtins` which is
planned (builtins had a similar update done in [1]).
LICENSE.txt and README.md are symlinked into the new directory to ensure
they get included in the package.
[1]: https://github.com/rust-lang/compiler-builtins/pull/702
In preparation for switching to a virtual manifest, move the `libm`
crate into a subdirectory and update paths to match.
Updating `Cargo.toml` is done in the next commit so git tracks the moved
file correctly.
Benchmarks for [1] seemed to indicate that repository organization for
some reason had an effect on performance, even though the exact same
rustc commands were running (though some in a different order). After
investigating more, it appears that dependencies may have an effect on
inlining thresholds for generic functions.
It is surprising that this happens; we more or less expect that public
functions will be standalone but everything they call will be inlined.
To help ensure this, mark all generic functions `#[inline]` if they
should be merged into the public function.
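A hedged sketch of the pattern (the trait here is a stand-in for libm's own float abstraction):

    trait Float: Copy {
        const SIGN_BIT: u64;
        fn to_bits_u64(self) -> u64;
        fn from_bits_u64(bits: u64) -> Self;
    }

    impl Float for f64 {
        const SIGN_BIT: u64 = 1 << 63;
        fn to_bits_u64(self) -> u64 { self.to_bits() }
        fn from_bits_u64(bits: u64) -> Self { f64::from_bits(bits) }
    }

    // The generic body is #[inline] so it merges into the thin public
    // wrapper regardless of where inlining thresholds land.
    #[inline]
    fn fabs_generic<F: Float>(x: F) -> F {
        F::from_bits_u64(x.to_bits_u64() & !F::SIGN_BIT)
    }

    pub fn fabs(x: f64) -> f64 {
        fabs_generic(x)
    }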
Zulip discussion at [2].
[1]: https://github.com/rust-lang/libm/pull/533
[2]: https://rust-lang.zulipchat.com/#narrow/channel/182449-t-compiler.2Fhelp/topic/Dependencies.20affecting.20codegen/with/513079387
Since `fmod` is generic, there isn't any need to have the small wrappers
in separate files. Most operations were done in [1], but `fmod` was
omitted until now.
[1]: https://github.com/rust-lang/libm/pull/537
The reorganization PR has caused this to fail once before because every
file shows up as changed. Increase the timeout so this doesn't happen.
We now cancel the job if too many extensive tests would be run unless `ci:
allow-many-extensive` is in the PR description, which helps prevent the
limit from being hit by accident.
Error out when too many extensive tests would be run unless `ci:
allow-many-extensive` is in the PR description. This allows us to set a
much higher CI timeout with less risk that a 4+ hour job gets started by
accident.
Sometimes we do refactoring that moves things around and triggers an
extensive test, even though the implementation didn't change. There
isn't any need to run full extensive CI in these cases, so add a way to
skip it from the PR message.
Jobs should just cancel automatically; it isn't ideal that extensive
jobs can continue running for multiple hours after code has been
updated. Use a solution from [1] to do this.
[1]: https://stackoverflow.com/a/72408109/5380651
Splitting into different source files by float size doesn't have any
benefit when the only content is a small function that forwards to the
generic implementation. Combine the source files for all width versions
of:
* ceil
* copysign
* fabs
* fdim
* floor
* fmaximum
* fmaximum_num
* fminimum
* fminimum_num
* ldexp
* scalbn
* sqrt
* trunc
fmod is excluded to avoid conflicts with an open PR.
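As an illustration of the resulting layout (with `generic::ceil` standing in for the real generic implementation, which is done bitwise), each combined file is just a set of thin forwarders:

    mod generic {
        // Stand-in generic body for the sketch only.
        pub trait Ceil { fn ceil_impl(self) -> Self; }
        impl Ceil for f32 { fn ceil_impl(self) -> Self { self.ceil() } }
        impl Ceil for f64 { fn ceil_impl(self) -> Self { self.ceil() } }
        pub fn ceil<F: Ceil>(x: F) -> F { x.ceil_impl() }
    }

    pub fn ceilf(x: f32) -> f32 {
        generic::ceil(x)
    }

    pub fn ceil(x: f64) -> f64 {
        generic::ceil(x)
    }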
As part of this change, move unit tests out of the generic module and
instead test the type-specific functions (e.g. `ceilf16` rather than
`ceil::<f16>()`). This ensures that unit tests are validating whatever
we expose, such as arch-specific implementations via
`select_implementation!`, which would otherwise be skipped. (They are
still covered by integration tests.)
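For example (building on the sketch above; the exact assertions are illustrative):

    #[test]
    fn ceilf_basic() {
        // Calls the exposed type-specific symbol, so an arch-specific
        // implementation chosen via select_implementation! is covered too.
        assert_eq!(ceilf(1.1), 2.0);
        assert_eq!(ceilf(-1.1), -1.0);
    }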
Introduce a constant representing NaN with a negative sign bit for use
with testing. There isn't really any guarantee that `F::NAN` is positive
but in practice it always is, which is good enough for testing purposes.
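A minimal sketch of the construction for f64 (the constant's actual name and generic form may differ):

    fn negative_nan() -> f64 {
        // Set the sign bit explicitly rather than assuming NAN's sign.
        f64::from_bits(f64::NAN.to_bits() | (1 << 63))
    }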
As discussed at [1], there was an off-by-one mistake when converting from
the loop routine to using `leading_zeros` for normalization.
Currently, using `EXP_BITS` has the effect that `ix` after the branch
has its MSB _one bit to the left_ of the implicit bit's position,
whereas a shift by `EXP_BITS + 1` ensures that the MSB is exactly at the
implicit bit's position, matching what is done for normals (where the
implicit bit is set to be explicit). This doesn't seem to have any
effect in our implementation since the failing test cases from [1]
appear to still have correct results.
Since the result of using `EXP_BITS + 1` is more consistent with what is
done for normals, apply this here.
[1]: https://github.com/rust-lang/libm/pull/469#discussion_r2012473920
Parsing errors are now bubbled up part of the way, but that needs some
more work.
Rounding should be correct, and the `Status` returned by `parse_any`
should have the correct bits set. These are used for the current (unchanged)
behavior of the surface-level functions like `hf64`: panic on invalid inputs
or values that aren't exactly representable.
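For example (assuming the `hf64` signature; the input here is exactly representable, so no panic occurs):

    #[test]
    fn hex_float_exact() {
        // 0x1.8p1 == 1.5 * 2^1 == 3.0
        assert_eq!(hf64("0x1.8p1"), 3.0);
    }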
Replace `core::arch` versions of the following with handwritten
assembly, which avoids recursion issues (cg_gcc using `rint` as a
fallback) as well as problems with `aarch64be`.
* `rint`
* `rintf`
Additionally, add assembly versions of the following:
* `fma`
* `fmaf`
* `sqrt`
* `sqrtf`
If the `fp16` target feature is available, which implies `neon`, also
include the following:
* `rintf16`
* `sqrtf16`
`sqrt` is added to match the implementation for `x86`. `fma` is included
since it is used by many other routines.
There are a handful of other operations that have assembly
implementations. They are omitted here because we should have basic
float math routines available in `core` in the near future, which will
allow us to defer to LLVM for assembly lowering rather than implementing
these ourselves.
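As a hedged sketch of the approach (illustrative, not the exact libm source), the `f64` square root reduces to a single instruction:

    #[cfg(target_arch = "aarch64")]
    pub fn sqrt(x: f64) -> f64 {
        let result;
        unsafe {
            core::arch::asm!(
                "fsqrt {r:d}, {x:d}",
                x = in(vreg) x,
                r = out(vreg) result,
                options(nomem, nostack, pure),
            );
        }
        result
    }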
Some backends may replace calls to `core::arch` with multiple calls to
`sqrt` [1], which becomes recursive. Help mitigate this by replacing the
call with assembly.
Results in the same assembly as the current implementation when built
with optimizations.
[1]: https://github.com/rust-lang/compiler-builtins/issues/649
`compiler-builtins` is not allowed to call anything from `core`;
however, there are a couple of cases where we do so in `libm` for debug
output. Gate relevant locations behind the `compiler-builtins` Cargo
feature.
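A sketch of the gating pattern (the assertion is illustrative; the feature name is from this change):

    fn check_input(x: f64) {
        // Formatting the message pulls in core's fmt machinery, which is
        // not allowed when built as part of compiler-builtins.
        #[cfg(not(feature = "compiler-builtins"))]
        assert!(x.is_finite(), "unexpected non-finite input: {x}");
        let _ = x;
    }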
In `compiler-builtins`, `libm` is contained within a `math` module. The
smoke test in this repo has a slightly different layout, so some things
were passing that shouldn't have been.
Change module layouts in `compiler-builtins-smoke-test` to match
`compiler-builtins` and update a few instances of broken paths.
Similar to other recent changes, put the public API in the same file as
its generic implementation. To keep things slightly cleaner, split the
default implementation from the `_wide` implementation.
Also introduces a stub `fmaf16`.
Currently the argument multiplier and large-float multiplier are applied
before selecting the count based on the generator. However, this means that
bivariate and trivariate functions don't get scaled at all (except for
the special-cased `fma`).
Move this scaling to a later point.
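A hedged sketch of the reordering (names and counts are illustrative):

    enum GeneratorKind { EdgeCases, Random, Extensive }

    fn iteration_count(generator: GeneratorKind, arg_mult: u64, large_float_mult: u64) -> u64 {
        let base: u64 = match generator {
            GeneratorKind::EdgeCases => 1_000,
            GeneratorKind::Random => 10_000,
            GeneratorKind::Extensive => 1_000_000,
        };
        // Applying the multipliers after the match scales every arity,
        // not just the univariate paths.
        base * arg_mult * large_float_mult
    }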
When there is a panic in an extensive test, tracking down where it came
from can be difficult since no information is provided (messages are
e.g. "attempted to subtract with overflow"). Resolve this by calling the
functions within `panic::catch_unwind`, printing the input, and
continuing.
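A minimal sketch of the pattern (the tested operation here is a stand-in):

    use std::panic;

    fn run_case(x: f64, y: f64) {
        // In the real harness the Ok value is compared against the
        // expected result; on panic we print the input and keep going.
        if panic::catch_unwind(|| x - y).is_err() {
            eprintln!("panic while testing input ({x:?}, {y:?}); continuing");
        }
    }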
Inputs in `case_list` shouldn't hit xfails or increased ULP tolerance.
Ensure that overrides are skipped when testing against MPFR or a
specified value and that NaNs, if any, are checked bitwise.
C23 specifies a new set of `roundeven` functions that round to the
nearest integral value, with ties to even. They do not raise any floating
point exceptions.
This behavior is similar to two other functions:
1. `rint`, which rounds to the nearest integer respecting rounding mode
and possibly raising exceptions.
2. `nearbyint`, which is identical to `rint` except it may not raise
exceptions.
Technically `rint`, `nearbyint`, and `roundeven` all behave the same in
Rust because we assume the default floating point environment. The backends
are allowed to lower to `roundeven`, however, so we should provide it in
case the fallback is needed.
Add the `roundeven` family here and convert `rint` to a function that
takes a rounding mode. This currently has no effect.
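A few illustrative cases of the new semantics (assuming the f64 `roundeven` added here):

    #[test]
    fn roundeven_ties_to_even() {
        assert_eq!(roundeven(0.5), 0.0);
        assert_eq!(roundeven(1.5), 2.0);
        assert_eq!(roundeven(2.5), 2.0);
        assert_eq!(roundeven(-1.5), -2.0);
    }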
These don't have much content since they now use the generic
implementation. There will be more similar functions in the near future
(fminimum, fmaximum, fminimum_num, fmaximum_num); start the pattern of
combining similar functions now so we don't have to eventually maintain
similar docs across 24 different files.
Many routines have some form of handling for rounding mode and floating
point exceptions, which are implemented via a combination of stubs and
`force_eval!` use. This is suboptimal, however, because:
1. Rust does not interact with the floating point environment, so most
of this code does nothing.
2. The parts of the code that are not dead are not testable.
3. `force_eval!` blocks optimizations, which is unnecessary because we
do not rely on its side effects.
We cannot ensure correct rounding and exception handling in all cases
without some form of arithmetic operations that are aware of this
behavior. However, the cases where rounding mode is explicitly handled
or exceptions are explicitly raised are testable. Make this possible
here for functions that depend on `math::fenv` by moving the
implementation to a nonpublic function that takes a `Round` and returns
a `Status`.
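A hedged sketch of the resulting shape (`Round` and `Status` are named above; the definitions and std-based bodies here are illustrative only):

    #[derive(Clone, Copy)]
    enum Round { Nearest, Ceil, Floor, Zero }

    #[derive(Clone, Copy, Default)]
    struct Status { inexact: bool }

    // The public API keeps its signature and assumes the default environment.
    pub fn rint(x: f64) -> f64 {
        rint_round(x, Round::Nearest).0
    }

    // Nonpublic implementation that is explicit about rounding and reports
    // status bits instead of touching the floating point environment.
    fn rint_round(x: f64, round: Round) -> (f64, Status) {
        let res = match round {
            Round::Nearest => x.round_ties_even(),
            Round::Ceil => x.ceil(),
            Round::Floor => x.floor(),
            Round::Zero => x.trunc(),
        };
        (res, Status { inexact: !x.is_nan() && res != x })
    }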
Link: https://github.com/rust-lang/libm/issues/480
This produces better assembly, e.g. on aarch64:
    .globl libm::u128_wmul
    .p2align 2
    libm::u128_wmul:
    Lfunc_begin124:
    .cfi_startproc
    mul x9, x2, x0
    umulh x10, x2, x0
    umulh x11, x3, x0
    mul x12, x3, x0
    umulh x13, x2, x1
    mul x14, x2, x1
    umulh x15, x3, x1
    mul x16, x3, x1
    adds x10, x10, x14
    cinc x13, x13, hs
    adds x13, x13, x16
    cinc x14, x15, hs
    adds x10, x10, x12
    cinc x11, x11, hs
    adds x11, x13, x11
    stp x9, x10, [x8]
    cinc x9, x14, hs
    stp x11, x9, [x8, #16]
    ret
The original was ~70 instructions, so the improvement is significant.
With these changes, the result is reasonably close to what LLVM
generates using `u256` operands [1].
[1]: https://llvm.godbolt.org/z/re1aGdaqY
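For reference, a hedged Rust sketch of the same schoolbook widening multiply (the real function's exact signature may differ):

    /// Widening multiply: returns the (low, high) halves of the 256-bit product.
    fn u128_wmul(a: u128, b: u128) -> (u128, u128) {
        let (a_lo, a_hi) = (a as u64 as u128, a >> 64);
        let (b_lo, b_hi) = (b as u64 as u128, b >> 64);

        // Four 64x64 -> 128 partial products, as in the assembly above.
        let ll = a_lo * b_lo;
        let lh = a_lo * b_hi;
        let hl = a_hi * b_lo;
        let hh = a_hi * b_hi;

        // Sum of the middle terms; each is < 2^64, so this cannot overflow.
        let mid = (ll >> 64) + (lh as u64 as u128) + (hl as u64 as u128);

        let lo = (mid << 64) | (ll as u64 as u128);
        let hi = hh + (lh >> 64) + (hl >> 64) + (mid >> 64);
        (lo, hi)
    }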