It would be preferable to switch to a different generator, or at least
set the seed within the benchmark, but this is the most straightforward
way to keep things simple.
A failing debug assertion, or an overflow that does not correctly wrap
or saturate, is a bug, but the `debug` profile that has these checks
enabled does not run enough test cases to hit the edge cases that may
trigger them. Add a new `release-checked` profile that enables debug
assertions and overflow checks. This seems to extend per-function test
time by only a few seconds (or around a minute for the longer extensive
tests), so enable it as the default on CI.
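A minimal sketch of what such a profile could look like in the
workspace `Cargo.toml` (whether any further settings are inherited from
`release` is up to the workspace):
[profile.release-checked]
inherits = "release"
debug-assertions = true
overflow-checks = true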
In order to ensure `no_panic` still gets checked, add a build-only step
to CI.
`ExpInt` is only likely to have performance benefits on 16-bit
platforms, but it makes working with the exponent more difficult. Using
`i32` instead seems like a worthwhile tradeoff, so do that here.
I do not believe Cargo separately caches crates built with different
sets of features enabled, so grouping the test runs that use
`unstable-intrinsics` should slightly reduce runtime.
As an added benefit, all the debug mode tests run first so initial
feedback is available faster.
There is a difference in intent between wishing to cast and truncate a
value, and expecting the input to be within range. To make this clear,
add separate `cast_lossy` and `cast_from_lossy` methods to indicate
that truncation is intended, leaving `cast` and `cast_from` for casts
that are expected not to truncate.
Actually enforcing this at runtime is likely to have a cost, so just
`debug_assert!` that `cast` doesn't truncate.
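A minimal sketch of the split, with the trait and method names as
assumptions based on the description above:
// Sketch only; the real trait likely differs.
pub trait CastInto<T>: Copy {
    /// Cast that is expected not to truncate; checked in debug builds.
    fn cast(self) -> T;
    /// Cast where truncation is intended and acceptable.
    fn cast_lossy(self) -> T;
}
impl CastInto<u32> for u64 {
    fn cast(self) -> u32 {
        debug_assert!(self <= u32::MAX as u64, "cast truncated the value");
        self as u32
    }
    fn cast_lossy(self) -> u32 {
        self as u32
    }
}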
These wasm functions are available in `core::arch::wasm32` since [1], so
we can use them while avoiding the possibly-recursive `intrinsics::*`
calls (in practice none of these should lower to libcalls on
wasm, but that is up to LLVM).
Since these require an unstable feature, they are still gated under
`unstable-intrinsics`.
[1]: https://github.com/rust-lang/stdarch/pull/1677
WASM is the only architecture we use `intrinsics::` for. We probably
don't want to do this for any other architectures since it is better to
use assembly, or work toward getting the functions available in `core`.
To more accurately reflect the relationship between arch and intrinsics,
make wasm32 an `arch` module and call the intrinsics from there.
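A sketch of the intended layout; the module path and the exact
`core::arch::wasm32` intrinsic name are assumptions for illustration:
// src/math/arch/wasm32.rs (assumed path)
// Forward to `core::arch::wasm32` rather than `intrinsics::*`; gated on
// `unstable-intrinsics` since the `core` functions are unstable.
pub fn ceil(x: f64) -> f64 {
    // Assumed intrinsic name; the exact function in `core::arch::wasm32`
    // may differ.
    core::arch::wasm32::f64_ceil(x)
}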
This configuration was duplicated from `fabs` and `fabsf`, but wasm is
unlikely to have an intrinsic lowering for these float types. So, just
always use the generic implementation.
On master, this fetch fails with:
fatal: refusing to fetch into branch 'refs/heads/master' checked out at '/home/runner/work/libm/libm'
Just skip the command when this shouldn't be needed.
Update test traits to support `f16` and `f128`, as applicable. Add the
new routines (`fabs` and `copysign` for `f16` and `f128`) to the list of
all operations.
Add a CI job with a dynamically calculated matrix that runs extensive
jobs for changed files. It uses the new `function-definitions.json`
file to map changed files to the routines that require a full test
run.
Add a generator that will test all inputs for input spaces `u32::MAX` or
smaller (e.g. single-argument `f32` routines). For anything larger,
still run approximately `u32::MAX` tests, but distribute inputs evenly
across the function domain.
Since we often only want to run one of these tests at a time, this
implementation parallelizes within each test using `rayon`. A custom
test runner is used so a progress bar is possible.
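A sketch of the spacing logic over an integer-encoded input space
(names are illustrative, not the actual implementation):
// Exhaustive when the space fits within about `u32::MAX` values;
// otherwise step evenly so roughly `u32::MAX` inputs are still covered.
fn spacing(domain_size: u64) -> u64 {
    let target = u32::MAX as u64;
    if domain_size <= target { 1 } else { domain_size / target }
}
fn extensive_inputs(domain_size: u64) -> impl Iterator<Item = u64> {
    let step = spacing(domain_size);
    (0u64..)
        .map_while(move |i| i.checked_mul(step))
        .take_while(move |x| *x < domain_size)
}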
Specific tests must be enabled by setting the `LIBM_EXTENSIVE_TESTS`
environment variable, e.g.
LIBM_EXTENSIVE_TESTS=all_f16,cos,cosf cargo run ...
Testing on a recent machine, most tests take about two minutes or less.
The Bessel functions are quite slow and take closer to 10 minutes, and
the `fma` test is increased to run for about the same amount of time.
Update the script to produce, in addition to the simple text list, a
JSON file listing routine names, the types they work with, and the
source files that contain a function with the routine name. This gets
consumed by another script and will be used to determine which extensive
CI jobs to run.
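The exact schema is up to the script; a hypothetical shape that a
consumer could deserialize might look roughly like this:
// Hypothetical schema sketch; field names are assumptions.
#[derive(serde::Deserialize)]
struct RoutineInfo {
    /// Float types the routine is implemented for, e.g. "f32" and "f64".
    r#type: Vec<String>,
    /// Source files that contain a function with the routine's name.
    sources: Vec<String>,
}
/// The file as a whole maps routine name -> `RoutineInfo`.
type FunctionDefinitions = std::collections::BTreeMap<String, RoutineInfo>;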
New random seeds indicate that this test does have more failures; this
is a recent failure on i586:
---- musl_random_jnf stdout ----
Random Musl jnf arg 1/2: 100 iterations (10000 total) using `LIBM_SEED=nLfzQ3U1OBVvqWaMBcto84UTMsC5FIaC`
Random Musl jnf arg 2/2: 100 iterations (10000 total) using `LIBM_SEED=nLfzQ3U1OBVvqWaMBcto84UTMsC5FIaC`
thread 'musl_random_jnf' panicked at crates/libm-test/tests/compare_built_musl.rs:43:51:
called `Result::unwrap()` on an `Err` value:
input: (205, 5497.891) (0x000000cd, 0x45abcf21)
expected: 7.3291517e-6 0x36f5ecef
actual: 7.331668e-6 0x36f6028c
Caused by:
ulp 5533 > 4000
It seems unlikely that `jn` would somehow have better precision than
`j0`/`j1`, so just use the same precision.
The `support` module that this feature makes public will be useful for
implementations in `compiler-builtins`, not only for testing. Give this
feature a more accurate name.
Currently, all inputs are generated and then cached. This works
reasonably well but it isn't very configurable or extensible (adding
`f16` and `f128` is awkward).
Replace this with a trait for generating random sequences of tuples.
This also removes possible storage limitations of caching all inputs.
Introduce the `KnownSize` iterator wrapper, which allows providing the
size at construction time. This provides an `ExactSizeIterator`
implementation so we can check a generator's value count during testing.
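A minimal sketch of such a wrapper (the real implementation may differ):
/// Wrap an iterator with a size known at construction time so it can
/// provide an `ExactSizeIterator` implementation.
pub struct KnownSize<I> {
    iter: I,
    remaining: usize,
}
impl<I> KnownSize<I> {
    pub fn new(iter: I, total: usize) -> Self {
        Self { iter, remaining: total }
    }
}
impl<I: Iterator> Iterator for KnownSize<I> {
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        let next = self.iter.next();
        if next.is_some() {
            self.remaining -= 1;
        }
        next
    }
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.remaining, Some(self.remaining))
    }
}
impl<I: Iterator> ExactSizeIterator for KnownSize<I> {}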
Currently, tests use a handful of constants to determine how many
iterations to perform: `NTESTS`, `AROUND`, and `MAX_CHECK_POINTS`. This
configuration is not very straightforward to adjust and needs to be
repeated everywhere it is used.
Replace this with new functions in the `run_cfg` module that determine
iteration counts in a more reusable and documented way.
This only updates `edge_cases` and `domain_logspace`, `random` is
refactored in a later commit.
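A sketch of the kind of helper this adds (the name and the scaling are
assumptions):
// Centralize iteration counts rather than repeating `NTESTS`-style
// constants at every call site.
pub fn iteration_count(input_bits: u32, extensive: bool) -> u64 {
    // Wider input types get a larger budget, capped so quick CI runs
    // stay fast; extensive runs multiply it. The numbers are placeholders.
    let base = 1u64 << input_bits.min(20);
    if extensive { base.saturating_mul(1_000) } else { base }
}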
Occasionally it is useful to see some information from running tests
without making everything noisy with `--nocapture`. Add a function to
log this kind of output to a file, and print the file as part of CI.
Currently our implementations for `abs` and `copysign` are defined on
the trait, and these are then called from `generic`. It would be better
to call core's `.abs()` / `.copysign(y)`, but we can't do this in the
generic because calling the standalone function could become recursive
(`fabsf` becomes `intrinsics::fabsf32`, which may lower to a call to
`fabsf`).
Change this so the trait uses the call to `core` if available, falling
back to a call to the standalone generic function.
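A sketch of the trait-side dispatch; the cfg gate and module paths are
assumptions:
impl Float for f32 {
    fn abs(self) -> Self {
        // Prefer the inherent method from `core` when it is available
        // (inherent methods take priority over this trait method).
        #[cfg(feature = "unstable-intrinsics")] // assumed gate
        return self.abs();
        // Otherwise fall back to the standalone generic implementation,
        // which itself avoids calling back into `core`.
        #[cfg(not(feature = "unstable-intrinsics"))]
        return super::generic::fabs(self); // assumed path
    }
}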
In practice the recursion isn't likely to be a problem since LLVM
probably always lowers `abs`/`copysign` to assembly, but this pattern
should be more correct for functions that we will add in the future
(e.g. `fma`).
This should eventually be followed by a change to call the trait methods
rather than `fabs`/`copysign` directly.
Once we start adding `f16` and `f128` routines, we will need to have
this cfg for almost all uses of `for_each_function`. Rather than needing
to specify this each time, always emit `#[cfg(f16_enabled)]` or
`#[cfg(f128_enabled)]` for each function that uses `f16` or `f128`,
respectively.
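For illustration, an invocation that generates one test per routine
would emit items roughly like this (the names are placeholders):
#[cfg(f16_enabled)]
#[test]
fn fabsf16() { /* callback expansion for the `f16` routine */ }
#[cfg(f128_enabled)]
#[test]
fn fabsf128() { /* callback expansion for the `f128` routine */ }
// `f32` and `f64` routines are emitted without a gate.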
Now that we are using rustdoc output to locate public functions, the
test indicates a few routines that were previously missed because they
are not defined in their own source files. Update everything to include
the following routines:
* `erfc`
* `erfcf`
* `y0`
* `y0f`
* `y1`
* `y1f`
* `yn`
* `ynf`
Currently `logspace` does a lossy cast from `F::Int` to `usize`. This
could be problematic in the rare case that it is called with a step
count exceeding what is representable in `usize`.
Resolve this by instead adding bounds so the float's integer type itself
can be iterated.
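A sketch of the adjusted signature; the trait bound spelling is an
assumption:
use core::ops::RangeInclusive;
// The `where` bound lets the range be iterated with the float's own
// integer type, avoiding the lossy cast of the step count to `usize`.
pub fn logspace<F: Float>(start: F, end: F, steps: F::Int) -> impl Iterator<Item = F>
where
    RangeInclusive<F::Int>: Iterator<Item = F::Int>,
{
    let _ = (start, end, steps);
    // Body elided in this sketch: step through the range
    // `start.to_bits()..=end.to_bits()` and map each integer back with
    // `F::from_bits`.
    core::iter::empty()
}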