793 Commits

Author SHA1 Message Date
Mara Bos
685e8d906d Remove references to the crates on crates.io.
They haven't been published in years. This removes the suggestion that
the crates on crates.io are actively updated/maintained.
2021-08-12 00:24:32 +01:00
Jamie Cunliffe
0285e513e0 Update arm vcvt intrinsics to use llvm.fpto(su)i.sat
Those intrinsics have the correct semantics for the desired fcvtz instruction,
without any undefined behaviour. The previous simd_cast was undefined for
infinity and NaN inputs, which could cause issues.
2021-08-11 13:13:19 +01:00
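
For context: a saturating conversion clamps out-of-range values and maps NaN to zero, which is what `llvm.fpto(su)i.sat` provides. Rust's own `as` casts have had the same semantics since 1.45, so a plain-Rust illustration (not the intrinsic itself) looks like this:

```rust
fn main() {
    // Saturating float-to-int: NaN maps to 0, and out-of-range values
    // clamp to the integer's min/max. No undefined behaviour.
    assert_eq!(f32::NAN as i32, 0);
    assert_eq!(f32::INFINITY as i32, i32::MAX);
    assert_eq!(f32::NEG_INFINITY as i32, i32::MIN);
    assert_eq!(1e10_f32 as i32, i32::MAX);
}
```
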
Amanieu d'Antras
52dae87319 Remove unused wasm feature 2021-08-11 11:46:45 +01:00
Alex Crichton
b5c437e119 Add tests for remaining wasm simd intrinsics
Wasmtime now supports all of the simd proposal, so this commit
uncomments instruction assertions and tests, and adds more tests,
covering all wasm simd instructions. This means that every wasm simd
instruction should be tested and have running instruction assertions,
except for `i64x2.abs`, which will require an upgrade to LLVM 13.
2021-08-03 00:46:38 +01:00
Adam Gemmell
3347e8cc98 Remove the bootstrap directive for cryptographic target_features 2021-08-02 23:38:57 +01:00
Adam Gemmell
8cb8cd2142 Replace the crypto feature with aes in generated intrinsics for aarch64
This allows us to deprecate the crypto target_feature in favour of its
subfeatures.

We cannot do this yet for ARM targets as LLVM requires the crypto
feature. This was fixed in
b8baa2a913
2021-08-02 23:38:57 +01:00
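
A minimal sketch of gating on the narrower subfeature, assuming an aarch64 target (`encrypt_block` is a hypothetical function, not part of this change):

```rust
// Enable only the `aes` subfeature rather than the broader `crypto`
// feature; `encrypt_block` is a hypothetical example.
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "aes")]
unsafe fn encrypt_block() {
    // aarch64 AES intrinsics from core::arch::aarch64 would be
    // callable here.
}
```
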
Alex Crichton
5800a3624a Remove stabilized features 2021-07-30 12:52:55 +02:00
Alex Crichton
8e8879ddd9 Mark f32x4 and f64x2 as const-stable on wasm
Now that `transmute` can be flagged as `const`-stable, this commit
marks the `f32x4` and `f64x2` constructors as `const`-stable as well.
It also rewrites the other integer constructors in a more readable
fashion, now that the general `v128()` method is `const`-stable.
2021-07-30 12:52:55 +02:00
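
A minimal sketch of the constructor pattern, using a stand-in struct rather than the real `v128` type:

```rust
use core::mem::transmute;

// Stand-in for the real vector type; the actual stdarch definition
// differs.
#[derive(Copy, Clone)]
#[repr(C)]
struct F32x4([f32; 4]);

// Because `transmute` is callable in a `const fn`, the constructor can
// be `const` too.
const fn f32x4(a0: f32, a1: f32, a2: f32, a3: f32) -> F32x4 {
    // SAFETY: `[f32; 4]` and `F32x4` have identical size and layout.
    unsafe { transmute([a0, a1, a2, a3]) }
}

const ONES: F32x4 = f32x4(1.0, 1.0, 1.0, 1.0);
```
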
Amanieu d'Antras
335bc49609
Force the use of sysv64 calling convention in x86_64 disassembly tests (#1187)
This ensures that results are consistent across windows/linux tests.
2021-07-20 20:02:22 +01:00
bstrie
bfb3f78b6b
Revert "Move asm! and global_asm! to core::arch (#1183)" (#1185)
This reverts commit 9437b11cd4f42c5995eb41aa92ead877b9b7823a.
2021-07-20 09:49:59 +01:00
Alex Crichton
487db3bf1b
Document unsafety of wasm simd intrinsics (#1184)
Since most intrinsics are safe, it makes sense to explicitly document
why a few intrinsics are not. These intrinsics are all unsafe for the
same reason: they deal with a raw pointer that must be valid to
load/store memory to. Note that there are no alignment requirements on
any of these intrinsics.
2021-07-16 17:40:14 +01:00
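
A minimal sketch of that contract, assuming a wasm32 target (`load_first_16` is a hypothetical wrapper):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{v128, v128_load};

// The caller's only obligation is that 16 bytes are readable at the
// pointer; there is no alignment requirement.
#[cfg(target_arch = "wasm32")]
fn load_first_16(bytes: &[u8]) -> v128 {
    assert!(bytes.len() >= 16);
    // SAFETY: the assert above guarantees 16 readable bytes.
    unsafe { v128_load(bytes.as_ptr() as *const v128) }
}
```
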
bstrie
4b8c2e5376
Move asm! and global_asm! to core::arch (#1183) 2021-07-15 05:21:50 +01:00
Yuki Okushi
069adbcc4c
Fix the stabilized version for simd_x86_bittest (#1182) 2021-06-12 18:25:16 +01:00
Yuki Okushi
fc0837cfa5
Stabilize simd_x86_bittest feature (#1180) 2021-06-11 01:31:44 +01:00
Alex Crichton
79140b43ea
wasm: Mark simd intrinsics as stable (#1179) 2021-06-10 20:32:39 +01:00
Alex Crichton
2c11b9fa1f
wasm: Mark most simd intrinsics as safe (#1177) 2021-06-10 12:13:33 +01:00
Adam Gemmell
1069e66439
Update aarch64 linux feature detection (#1146) 2021-05-28 01:37:20 +01:00
Alex Crichton
4e4a60b9d9
wasm: Lower alignment of all loads/stores (#1175)
This changes wasm simd intrinsics which deal with memory to match clang,
where they are all emitted with an alignment of 1. This is expected to
have no performance impact, since wasm engines generally ignore alignment
as it's just a hint. It also increases safety slightly when used from
Rust: previously, passing an unaligned pointer could result in UB on the
LLVM side. This means that the intrinsics are slightly more usable in
more situations than before.

It's expected that if higher alignment is desired then programs will not
use these intrinsics but rather their component parts. For example,
instead of `v128_load` you'd just load the pointer itself (and loading
from a pointer in Rust automatically assumes correct alignment). For
`v128_load64_splat` you'd do a load followed by a splat operation, which
LLVM should optimize into a `v128.load64_splat` instruction with the
desired alignment. LLVM doesn't fully support some optimizations (such
as optimizing `v128.load16_lane` from component parts), but that's
expected to be a temporary issue. Additionally, we don't have a way of
configuring the alignment on operations that can't otherwise be
decomposed into their component parts (such as `i64x2_load_extend_u32x2`),
but ideally we can cross that bridge if anyone ever needs the alignment
configured there.
2021-05-28 00:02:56 +01:00
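
A minimal sketch contrasting the intrinsic and component-part forms described above, assuming a wasm32 target (`loads` is a hypothetical helper):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{u64x2_splat, v128, v128_load};

#[cfg(target_arch = "wasm32")]
unsafe fn loads(p: *const v128, q: *const u64) -> (v128, v128, v128) {
    // Intrinsic form: emitted with alignment 1, so `p` may be unaligned.
    let unaligned = v128_load(p);
    // Component form: a plain Rust load, which assumes `p` is fully
    // aligned and lets LLVM exploit that alignment.
    let aligned = *p;
    // Load-then-splat: LLVM is expected to fuse this into a single
    // `v128.load64_splat` carrying the pointer's natural alignment.
    let splat = u64x2_splat(*q);
    (unaligned, aligned, splat)
}
```
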
Alex Crichton
4d6fa80bb3
wasm: Add convenience aliases with unsigned names (#1174)
Naming right now for wasm simd intrinsics takes the signedness of the
instruction into account, but some operations are the same regardless of
signedness, such as `i32x4_add`. This commit adds aliases for all of
these operations under unsigned names as well (such as `u32x4_add`),
each just a `pub use` exposing the same item under a second name. The
goal of this is to assist in reading code (no need to switch back and
forth between `i` and `u`) as well as writing code (no need to always
remember which operations are the same for signed/unsigned but only
available under the signed names).
2021-05-27 16:52:15 +01:00
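
The aliasing mechanism is just a renaming re-export; a minimal sketch with a placeholder function standing in for a real intrinsic:

```rust
mod sealed {
    // Placeholder for a sign-agnostic operation like the real
    // `i32x4_add`.
    pub fn i32x4_add(a: i32, b: i32) -> i32 {
        a.wrapping_add(b)
    }
}

// One item, two names: the unsigned alias is a plain `pub use`.
pub use sealed::i32x4_add;
pub use sealed::i32x4_add as u32x4_add;
```
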
Alex Crichton
b3f06eb658
wasm: Change *_bitmask return values (#1173)
First, change them all to unsigned, since they're just returning bits;
then also change them to the smallest integer size that fits the return
value (`u16` for `i8x16_bitmask` and `u8` for everything else). This
suffers from an LLVM codegen bug for now, but it will hopefully get
fixed in the not too distant future.
2021-05-27 16:23:24 +01:00
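
A usage sketch under the new return types, assuming a wasm32 target (`all_lanes_negative` is a hypothetical helper):

```rust
#[cfg(target_arch = "wasm32")]
use core::arch::wasm32::{i8x16_bitmask, v128};

// `i8x16_bitmask` packs one sign bit per lane into a `u16`; the other
// bitmask intrinsics return `u8`.
#[cfg(target_arch = "wasm32")]
fn all_lanes_negative(v: v128) -> bool {
    i8x16_bitmask(v) == u16::MAX
}
```
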
Nils Hasenbanck
3ecc56b329
Add vst1_* neon intrinsics. (#1171) 2021-05-27 07:40:45 +01:00
Sparrow Li
10f7ebc387
Add vfma and vfms neon instructions (#1169) 2021-05-21 12:26:21 +01:00
Amanieu d'Antras
b216e9f9c4
Fix x86 SIMD byte shift intrinsics (#1168) 2021-05-20 01:47:38 +01:00
Sparrow Li
15749b0ed3
Modify the implementation of d_s64 suffix instructions (#1167) 2021-05-19 03:43:53 +01:00
Jamie Cunliffe
a98b05c635
Add support for the remaining vget(q)_lane functions. (#1164) 2021-05-19 02:53:59 +01:00
Aaron Hill
750250023f
Use #![feature(const_panic)] to produce better assertion errors (#1165) 2021-05-15 23:47:59 +01:00
Sparrow Li
09a05e02f4
Add vmull_p64 and vmull_high_p64 for aarch64 (#1157) 2021-05-15 21:58:23 +01:00
Sparrow Li
4a21f4db0e
Add vqmovn neon instructions (#1163) 2021-05-14 12:32:58 +01:00
Ralf Jung
604ed7ebbf
use simd_shuffle macros on wasm32 (#1162) 2021-05-13 13:48:20 +01:00
Ralf Jung
a34883b5d3
manually const-ify shuffle arguments (#1160) 2021-05-11 21:11:52 +01:00
SparrowLii
7516a80c31 Add vset neon instructions 2021-05-11 13:38:16 +01:00
Alex Crichton
2d9b71bca6 Add doc aliases for all wasm intrinsics
Recommended in #74372
2021-05-11 01:30:44 +01:00
Thom Chiovoloni
21c01768b7 Avoid using simd_f(min|max) in _mm256_(min|max)_p[sd] 2021-05-09 13:36:39 +01:00
Amanieu d'Antras
e9f73d0dc8 Fix asm! in bit-test intrinsics on x32 2021-05-08 19:40:07 +01:00
Amanieu d'Antras
994a4250a9 Use AT&T syntax to support LLVM 10 2021-05-07 23:19:18 +01:00
SparrowLii
8a2936b9a2 Complete the vcvt neon instructions 2021-05-07 23:02:39 +01:00
Ralf Jung
ed761b261c remove const_fn leftovers 2021-05-07 17:51:27 +01:00
Pietro Albini
1d6ff635c0 remove cfg(not(bootstrap)) for 1.54 2021-05-07 00:31:04 +01:00
SparrowLii
911ace84b2 Add vqrdmulh, vqrdmlah, vqrdmlsh neon instructions 2021-05-06 15:44:54 +01:00
Alex Crichton
128aa9a7e5 Update docs for v128_any_true 2021-05-03 15:56:41 +01:00
Alex Crichton
1d92c1d8b2 Another round of wasm SIMD updates
This round is dependent on
https://github.com/rust-lang/llvm-project/pull/101 landing first in
rust-lang/rust and won't pass CI until that does. That PR, however, will
also break wasm CI because it's changing how the wasm target works. My
goal here is to open this early to get it out there so that when that PR
lands in rust-lang/rust and CI breaks in stdarch, this can be merged to
make CI green again.

The changes here are mostly around the codegen for various intrinsics.
Some wasm-specific intrinsics have been removed in favor of more general
LLVM intrinsics, and other intrinsics have been removed in favor of
pattern-matching codegen.

The only new instruction supported as part of this change is
`v128.any_true`. This leaves only one instruction unsupported in LLVM,
which is `i64x2.abs`. I think the codegen for that instruction is
correct in stdsimd, though, and LLVM just needs to be updated with a
pattern-match to actually emit the opcode. That'll happen in a future
LLVM update.
2021-05-03 15:56:41 +01:00
Sparrow Li
fd29f9602c
Add vmul_n, vmul_lane, vmulx neon instructions (#1147) 2021-04-30 21:09:41 +01:00
Sparrow Li
07f1d0cae3
Add vmla_n, vmla_lane, vmls_n, vmls_lane neon instructions (#1145) 2021-04-28 22:59:41 +01:00
scottmcm
54a2d8b82a
Remove #![feature(try_trait)] from a test (#1142)
I'm working on `try_trait_v2`, which will break this, so I'm going
around removing uses from the rustc tree where I can.
2021-04-26 00:45:20 +01:00
Amanieu d'Antras
63daa088fd
Move cfg!(target_feature) directly into is_*_feature_detected!() (#1141)
Fixes #1135
2021-04-24 08:02:24 +01:00
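
A minimal sketch of the pattern, with a hypothetical feature name and a placeholder runtime check:

```rust
// Placeholder for the real CPUID/OS-based runtime check.
fn runtime_detect() -> bool {
    false
}

// If the feature is statically enabled, short-circuit before doing any
// runtime detection; this mirrors the macro change described above.
macro_rules! is_feature_detected_sketch {
    () => {
        cfg!(target_feature = "avx2") || runtime_detect()
    };
}

fn main() {
    let _enabled = is_feature_detected_sketch!();
}
```
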
Sparrow Li
8852d07441
add vcopy neon instructions (#1139) 2021-04-24 01:49:11 +01:00
Ralf Jung
03e109a2f3
remove unused const_fn feature (#1140) 2021-04-23 16:46:38 +01:00
Christopher Serr
a43f92a181
Add vrndn neon instructions (#1086)
This adds the neon instructions for lane-wise rounding without actually
converting the lanes to integers.
2021-04-22 06:08:40 +01:00
Sparrow Li
de3e8f72c5
Add vqdmul* neon instructions (#1130) 2021-04-21 15:27:08 +01:00
surechen
20c0120362
add neon instruction vaddlv_* (#1129) 2021-04-20 15:19:04 +01:00