- Use "true" and "false" instead of "1" and "0"
- "nonzero" -> "non-zero"
- "returns true if ... or false if ..." -> "returns true ..., false
otherwise"
This intrinsic doesn't have any preconditions and is always safe to
call, so it can be marked safe.
This function is already stable, but dropping `unsafe` is a
backwards-compatible change.
Note that we already have precedent for wasm intrinsics being safe --
wasm simd is safe.
It is practically important to mark this safe -- `wasm32::unreachable`
is directly useful in practice as a more code-size-efficient `panic!()`.
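As a rough sketch (assuming the intrinsic no longer requires an `unsafe`
block), it can serve as a compact abort:

```rust
// Minimal sketch: using the intrinsic as a code-size-efficient abort on wasm.
// Assumes `unreachable` is now callable without `unsafe`.
#[cfg(target_arch = "wasm32")]
fn abort_small() -> ! {
    core::arch::wasm32::unreachable()
}
```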
Those intrinsics have the correct semantics for the desired fcvtz instruction,
without any undefined behaviour. The previous `simd_cast` was undefined for
infinite and NaN inputs, which could cause issues.
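For illustration only (this is not the intrinsic itself), the desired
per-lane behaviour matches Rust's own saturating float-to-int cast,
which clamps out-of-range values and maps NaN to zero:

```rust
// Sketch of the target semantics: saturate out-of-range values, NaN -> 0,
// and no undefined behaviour for any input.
fn saturating_convert(x: f32) -> i32 {
    x as i32 // Rust float-to-int casts saturate and send NaN to 0
}
```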
Wasmtime now supports all of the simd proposal, so this commit
uncomments instruction assertions and tests for all wasm simd
instructions, while also adding more tests. This means that all wasm
simd instructions should be tested and have running instruction
assertions, except for `i64x2.abs`, which will require an upgrade to
LLVM 13.
This allows us to deprecate the crypto target_feature in favour of its
subfeatures.
We cannot do this yet for ARM targets as LLVM requires the crypto
feature. This was fixed in
b8baa2a913
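A hedged sketch of what using the subfeatures instead of `crypto` might
look like (the feature names `aes` and `sha2` here are assumptions based
on the aarch64 feature split, not taken from this commit):

```rust
// Sketch: enabling the specific subfeatures rather than the deprecated
// umbrella `crypto` feature. Feature names are assumed, not authoritative.
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "aes")]
unsafe fn aes_block_fn() {
    // AES intrinsics from core::arch::aarch64 would be used here.
}

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "sha2")]
unsafe fn sha2_block_fn() {
    // SHA-2 intrinsics from core::arch::aarch64 would be used here.
}
```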
Now that `transmute` can be flagged as `const`-stable, this commit
marks the `f32x4` and `f64x2` constructors as `const`-stable as well.
It also rewrites the other integer constructors in a more readable
fashion now that the general `v128()` method is `const`-stable.
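A small sketch of what the `const`-stable constructors enable (assuming
the wasm32 `f32x4`/`f64x2` constructor functions):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{f32x4, f64x2, v128};

// Sketch: v128 constants can now be built in const context.
#[cfg(target_arch = "wasm32")]
const ONES_F32: v128 = f32x4(1.0, 1.0, 1.0, 1.0);
#[cfg(target_arch = "wasm32")]
const HALVES_F64: v128 = f64x2(0.5, 0.5);
```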
Since most intrinsics are safe, it likely makes sense to explicitly
document why a few intrinsics are not. These intrinsics are all unsafe
for the same reason: they deal with a raw pointer that must be valid to
load from or store to. Note that there are no alignment requirements on
any of these intrinsics.
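For example (a sketch, using `v128_load` as one of the pointer-taking
intrinsics), the caller must uphold pointer validity but not alignment:

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{v128, v128_load};

// Sketch: `v128_load` stays unsafe because the pointer must be valid for a
// 16-byte read; there is no alignment requirement.
#[cfg(target_arch = "wasm32")]
unsafe fn load_block(bytes: &[u8; 16]) -> v128 {
    v128_load(bytes.as_ptr() as *const v128)
}
```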
This changes the wasm simd intrinsics which deal with memory to match
clang, where they are all emitted with an alignment of 1. This is not
expected to impact performance, since wasm engines generally ignore
alignment as it's just a hint. It also slightly increases safety when
used from Rust: previously, passing in an unaligned pointer could result
in UB on the LLVM side. This means that the intrinsics are slightly more
usable in more situations than before.
It's expected that if higher alignment is desired then programs will not
use these intrinsics but rather the component parts. For example instead
of `v128_load` you'd just load the pointer itself (and loading from a
pointer in Rust automatically assumes correct alignment). For
`v128_load64_splat` you'd do a load followed by a splat operation, which
LLVM should optimize into a `v128.load64_splat` instruction with the
desired alignment. LLVM doesn't fully support some optimizations (such
as optimizing `v128.load16_lane` from component parts) but that's
expected to be a temporary issue. Additionally we don't have a way of
configuring the alignment on operations that otherwise can't be
decomposed into their portions (such as with `i64x2_load_extend_u32x2`),
but we can ideally cross such a bridge when we get there if anyone ever
needs the alignment configured there.
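A sketch of the decomposition described above (assuming `u64x2_splat`
and a plain Rust dereference for the aligned case):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{u64x2_splat, v128};

// Sketch: a plain Rust load assumes the pointer is correctly aligned, so LLVM
// is free to emit the load with that alignment.
#[cfg(target_arch = "wasm32")]
unsafe fn aligned_load(ptr: *const v128) -> v128 {
    *ptr
}

// Sketch: scalar load + splat, which LLVM can fuse into `v128.load64_splat`
// carrying the pointer's alignment.
#[cfg(target_arch = "wasm32")]
unsafe fn aligned_load64_splat(ptr: *const u64) -> v128 {
    u64x2_splat(*ptr)
}
```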
Naming right now for wasm simd intrinsics takes the signedness of the
instruction into account, but some operations are the same regardless of
signedness, such as `i32x4_add`. This commit adds aliases for all of
these operations under unsigned names as well (such as `u32x4_add`)
which are just a `pub use` to rename the item as two names. The goal of
this is to assist in reading code (no need to switch back and forth
between `i` and `u`) as well as writing code (no need to always remember
which operations are the same for signed/unsigned but only available
under the signed names).
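A brief usage sketch (assuming the aliases are plain re-exports, so both
names resolve to the same function):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{u32x4_add, v128};

// Sketch: the unsigned alias is the same wrapping-add intrinsic as i32x4_add.
#[cfg(target_arch = "wasm32")]
fn add_lanes(a: v128, b: v128) -> v128 {
    u32x4_add(a, b)
}
```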
First change them all to unsigned since they're just returning bits, and
then also change them to the smallest-size integer which fits the return
value (`u16` for `i8x16_bitmask` and `u8` for everything else). This
suffers from an LLVM codegen bug for now, but it will hopefully get
fixed in the not too distant future.
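A sketch of how the new return types read at a call site (assuming
`i8x16_bitmask` now returns `u16`):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{i8x16_bitmask, v128};

// Sketch: each bit of the u16 corresponds to the top bit of one of the 16 lanes.
#[cfg(target_arch = "wasm32")]
fn any_lane_negative(v: v128) -> bool {
    i8x16_bitmask(v) != 0
}
```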