1478 Commits

Author SHA1 Message Date
Folkert de Vries
7516645928
stabilize s390x_target_feature_vector 2025-11-06 12:49:48 +01:00
Folkert de Vries
c59298da36
stabilize stdarch_s390x_feature_detection 2025-11-06 12:49:46 +01:00
Folkert de Vries
0645ac31cb
extract s390x vector and friends to their own rust feature 2025-11-06 12:49:04 +01:00
Jakub Beránek
3c9656c4f4
Merge ref '73e6c9ebd912' from rust-lang/rust
Pull recent changes from https://github.com/rust-lang/rust via Josh.

Upstream ref: 73e6c9ebd9123154a196300ef58e30ec8928e74e
Filtered ref: e8bb3cae4cd2b04bdc252cdf79102717db2b2d8d
Upstream diff: 32e7a4b92b...73e6c9ebd9

This merge was created using https://github.com/rust-lang/josh-sync.
2025-11-02 14:45:26 +01:00
Alisa Sireneva
420544a34a Move wasm throw intrinsic back to unwind
rustc assumes that regular `extern "Rust"` functions unwind only if the
`unwind` panic runtime is linked. `throw` was annotated as such, but
unwound unconditionally. This could cause UB when a crate built with `-C
panic=abort` called `throw` from `core` built with `-C panic=unwind`,
since no terminator was added to handle the panic arising from calling an
allegedly non-unwinding `extern "Rust"` function.

rustc was taught to recognize this condition in
https://github.com/rust-lang/rust/pull/144225 and to prevent such
linkage, but this caused regressions in
https://github.com/rust-lang/rust/issues/148246, since this meant that
Emscripten projects could not be built with `-C panic=abort` without
recompiling std.

The most straightforward solution would be to move `throw` into the
`panic_unwind` crate, so that it's only compiled if the panic runtime is
guaranteed to be `unwind`, but this is messy due to our architecture.
Instead, move it into `unwind::wasm`, which is only compiled for
bare-metal targets that default to `panic = "abort"`, rendering the
issue moot.
2025-10-30 15:13:32 +03:00
Noa
a4638e3d25
Enable assert_instr for wasm32 throw 2025-10-27 12:12:52 -05:00
sayantn
4c6e879326
Make the fence intrinsics and _mm_pause safe 2025-10-26 23:57:47 +05:30
sayantn
22f169f844
Make _mm_prefetch safe 2025-10-26 23:57:42 +05:30
sayantn
8bff8b6849
Make all TBM intrinsics safe 2025-10-26 23:52:45 +05:30
sayantn
f2eb88b0bb
Make RDRAND/RDSEED safe 2025-10-26 23:52:45 +05:30
sayantn
5dcd3046c8
Make _bswap{,64} safe 2025-10-26 23:52:45 +05:30
sayantn
cfb36829a9
Make _mm512_reduce_mul_ph safe (missed) 2025-10-26 23:52:45 +05:30
sayantn
788d1826e9
Make ADC/ADX intrinsics safe 2025-10-26 23:52:44 +05:30
Folkert de Vries
cf1cf2e94d
remove a use of core::intrinsics::size_of
use of the intrinsic, rather than the stable function, is probably an accident.
2025-10-25 23:57:17 +02:00
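A minimal sketch of the distinction this commit relies on (illustrative, not the stdarch change itself): the stable `core::mem::size_of` returns the same value as the `core::intrinsics::size_of` intrinsic, but needs no feature gate.

```rust
// Illustrative only: the stable function is equivalent to the intrinsic.
fn main() {
    // core::intrinsics::size_of::<u32>() would need #![feature(core_intrinsics)];
    // the stable spelling below gives the same result.
    let n = core::mem::size_of::<u32>();
    assert_eq!(n, 4);
}
```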
Amanieu d'Antras
d64b23c061
Merge pull request #1945 from folkertdev/gfni-cleanup
use `byte_add` in gfni tests
2025-10-25 14:17:49 +00:00
Folkert de Vries
9ebee4853d
use byte_add in gfni tests 2025-10-25 01:55:37 +02:00
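For readers unfamiliar with `byte_add`, a small illustrative example (not the gfni test code itself): `ptr.add(n)` offsets by `n` elements, while `ptr.byte_add(n)` offsets by `n` raw bytes.

```rust
fn main() {
    let data: [u32; 4] = [1, 2, 3, 4];
    let p: *const u32 = data.as_ptr();
    unsafe {
        // `add(1)` advances by one element (4 bytes for u32);
        // `byte_add(4)` advances by 4 raw bytes, landing on the same element here.
        assert_eq!(*p.add(1), 2);
        assert_eq!(*p.byte_add(4), 2);
    }
}
```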
Folkert de Vries
8dff65f010
Merge pull request #1938 from linkmauve/fjcvtzs
Implement fjcvtzs under the name __jcvt like the C intrinsic
2025-10-10 14:13:13 +00:00
Emmanuel Gil Peyrot
6039ddea09 Implement fjcvtzs under the name __jcvt like the C intrinsic
This instruction is only available when the jsconv target_feature is present,
i.e. on ARMv8.3 or higher.

It is used e.g. by Ruffle[0] to speed up its conversion from f64 to i32, and
probably by any JS engine.

I’ve picked the stdarch_aarch64_jscvt feature name because it matches FEAT_JSCVT,
but hesitated between stdarch_aarch64_jsconv (the name of the target_feature),
stdarch_aarch64_jcvt (the name of the C intrinsic) and stdarch_aarch64_fjcvtzs
(the name of the instruction). The choice is fairly arbitrary and could be argued
either way. I wouldn’t expect it to stay unstable for too long, so ultimately this
shouldn’t matter much.

This feature is now tracked in this issue[1].

[0] https://github.com/ruffle-rs/ruffle/pull/21780
[1] https://github.com/rust-lang/rust/issues/147555
2025-10-10 13:29:42 +00:00
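A hedged sketch of how the new intrinsic might be called (nightly and aarch64 only; the feature gate and signature are taken from the commit and tracking issue, and `"jsconv"` is assumed to be the runtime-detection key):

```rust
#![cfg_attr(target_arch = "aarch64", feature(stdarch_aarch64_jscvt))]

#[cfg(target_arch = "aarch64")]
fn js_f64_to_i32(x: f64) -> i32 {
    use std::arch::aarch64::__jcvt;
    // Illustrative runtime check; assumes "jsconv" is a recognized detection key.
    assert!(std::arch::is_aarch64_feature_detected!("jsconv"));
    // SAFETY: the `jsconv` target feature was verified at runtime above.
    unsafe { __jcvt(x) }
}

#[cfg(target_arch = "aarch64")]
fn main() {
    assert_eq!(js_f64_to_i32(3.9), 3); // FJCVTZS truncates toward zero
}

#[cfg(not(target_arch = "aarch64"))]
fn main() {}
```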
Sayantan Chakraborty
01dc34d709
Merge pull request #1939 from folkertdev/crc-remove-not-arm
crc32: remove `#[cfg(not(target_arch = "arm"))]` from aarch64 crc functions
2025-10-09 17:37:09 +00:00
Folkert de Vries
4fcf3f86c4
crc32: remove #[cfg(not(target_arch = "arm"))] from crc functions
They are defined in the aarch64 module, so this cfg is pointless.

Note that these instructions do exist on arm, but the aarch64 intrinsics are
already stable, so implementing them for arm would take some additional work.
2025-10-09 19:20:20 +02:00
Folkert de Vries
27866a7f06
Merge pull request #1937 from sayantn/intrinsic-fixes
use simd intrinsics for `vec_max` and `vec_min`
2025-10-08 11:17:58 +00:00
sayantn
40ce617b2a
use simd intrinsics for vec_max and vec_min 2025-10-08 16:01:08 +05:30
Tsukasa OI
af91b45726 RISC-V: Use symbolic instructions on inline assembly (part 1)
While many intrinsics use `.insn` to generate raw machine code from
numbers, all ratified instructions can be written symbolically
using `.option` directives.

By saving the assembler environment with `.option push` and then modifying
the architecture with `.option arch`, we can temporarily enable certain
extensions (because we use `.option pop` immediately after the target
instruction, the surrounding environment is completely intact in this
commit; *almost* completely intact in general).

This commit modifies the `pause` *hint* intrinsic to use the symbolic
*instruction*, because we want to expose it even if the Zihintpause
extension is unavailable on the target.
2025-10-06 01:08:42 +00:00
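A hedged sketch of the `.option` pattern described above (not the exact stdarch source; it assumes an assembler that understands `.option arch`):

```rust
// Temporarily enable Zihintpause just for the one symbolic instruction,
// then restore the saved assembler state.
#[cfg(any(target_arch = "riscv32", target_arch = "riscv64"))]
fn pause_hint() {
    unsafe {
        core::arch::asm!(
            ".option push",
            ".option arch, +zihintpause",
            "pause",
            ".option pop",
            options(nomem, nostack, preserves_flags),
        );
    }
}
```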
Amanieu d'Antras
09c43ef6d3
Merge pull request #1929 from sayantn/non-temporal
Fixes for non-temporal intrinsics
2025-10-05 22:44:09 +00:00
sayantn
c0e41518d1
Add comments in NT asm blocks for future reference 2025-10-05 07:04:36 +05:30
sayantn
5bf53654c5
Add _mm_sfence to all non-temporal intrinsic tests 2025-10-05 06:56:49 +05:30
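A hedged sketch of the pairing this commit enforces in the tests (illustrative, not the actual test code): each non-temporal store is followed by a store fence before the written memory is used again.

```rust
#[cfg(target_arch = "x86_64")]
fn store_nontemporal(dst: &mut i32, value: i32) {
    unsafe {
        // The non-temporal store is weakly ordered and bypasses the caches...
        core::arch::x86_64::_mm_stream_si32(dst as *mut i32, value);
        // ...so order it with a store fence before the memory is read elsewhere.
        core::arch::x86_64::_mm_sfence();
    }
}
```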
sayantn
b29308c167
Use Inline ASM for SSE4a nontemporal stores 2025-10-05 06:56:46 +05:30
sayantn
28cf2d1a6c
Fix xsave segfaults 2025-10-05 05:39:29 +05:30
Sayantan Chakraborty
7e850c5f1e
Merge pull request #1932 from sayantn/fmaddsub
Use SIMD intrinsics for `vfmaddsubph` and `vfmsubaddph`
2025-10-04 00:43:02 +00:00
Amanieu d'Antras
14b888574f
Merge pull request #1931 from sayantn/use-intrinsics
Fix mistake in #1928
2025-10-03 13:10:34 +00:00
sayantn
f90d9ec8b2
Use SIMD intrinsics for vfmaddsubph and vfmsubaddph 2025-10-03 05:33:13 +05:30
sayantn
37605b03c5
Ensure simd_funnel_sh{l,r} always gets passed shift amounts in range 2025-10-03 03:51:34 +05:30
sayantn
018f9927b2
Revert uses of SIMD intrinsics for shifts 2025-10-03 03:30:50 +05:30
Madhav Madhusoodanan
6b99d5fb56 fix: update the implementation of _kshiftri_mask16 and _kshiftli_mask16
to zero out when the shift amount reaches or exceeds 16.
2025-10-03 02:33:11 +05:30
Madhav Madhusoodanan
0138b95620 fix: update the implementation of _kshiftri_mask8 and _kshiftli_mask8 to
zero out when the shift amount reaches or exceeds the bit width of the input
argument.
2025-10-03 02:27:15 +05:30
Madhav Madhusoodanan
8b25ddeea3 fix: update the implementation of _kshiftri_mask32, _kshiftri_mask64,
_kshiftli_mask32 and _kshiftli_mask64 to zero out when the shift amount
reaches or exceeds the bit width of the input argument.
2025-10-03 02:20:50 +05:30
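A minimal model of the semantics all three fixes above enforce (illustrative only; the real intrinsics live in stdarch and take the count differently): shifting a k-mask by its full bit width or more must yield zero rather than hitting Rust's oversized-shift behavior.

```rust
// Model of _kshiftli_mask16 semantics for out-of-range counts.
fn kshiftli_mask16_model(a: u16, count: u32) -> u16 {
    if count >= u16::BITS { 0 } else { a << count }
}

fn main() {
    assert_eq!(kshiftli_mask16_model(0xFFFF, 4), 0xFFF0);
    assert_eq!(kshiftli_mask16_model(0xFFFF, 16), 0); // would be an overflowing shift otherwise
    assert_eq!(kshiftli_mask16_model(0xFFFF, 255), 0);
}
```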
sayantn
851c32abb2
Use SIMD intrinsics for test{z,c} intrinsics 2025-10-01 12:33:41 +05:30
sayantn
4c94e6bba9
Use SIMD intrinsics for vperm2 intrinsics 2025-10-01 10:26:59 +05:30
sayantn
d23dbbec31
Use SIMD intrinsics for cvtsi{,64}_{ss,sd} intrinsics 2025-10-01 07:23:43 +05:30
sayantn
6460b35798
Use SIMD intrinsics for f16 intrinsics 2025-10-01 07:23:10 +05:30
sayantn
3f91ced840
Use SIMD intrinsics for shift and rotate intrinsics 2025-10-01 07:22:12 +05:30
sayantn
1819ae0c1f
Use SIMD intrinsics for madd, hadd and hsub intrinsics 2025-10-01 07:20:30 +05:30
sayantn
b55b085535
Remove uses of deprecated llvm.x86.addcarryx.u{32,64} intrinsics
- Correct mistake in x86_64/adx.rs where it was not testing `_addcarryx` at all
2025-10-01 07:16:44 +05:30
usamoi
00c8866c57 pick changes from https://github.com/rust-lang/rust/pull/146683 2025-09-23 10:17:54 +08:00
usamoi
3b09522c34 Revert "Remove big-endian swizzles from vreinterpret"
This reverts commit 24f89ca53d3374ed8d3e0cbadc1dc89eea41acba.
2025-09-23 10:05:32 +08:00
bors
ce4beebecb Auto merge of #146683 - clarfonthey:safe-intrinsics, r=RalfJung,Amanieu
Mark float intrinsics with no preconditions as safe

Note: for ease of reviewing, the list of safe intrinsics is sorted in the first commit, and then safe intrinsics are added in the second commit.

All *recently added* float intrinsics have already been correctly marked as safe to call because they have no preconditions. This adds the remaining float intrinsics that are safe to call to the safe-intrinsic list and removes the unsafe blocks around their calls.

---

Side note: this may want a try run before being added to the queue, since I'm not sure whether any tier-2 code uses these intrinsics in ways not exercised by the usual PR flow. We've already uncovered a few such places in subtrees, and it's worth double-checking before clogging up the queue.
2025-09-22 14:35:46 +00:00
ltdk
055e05a338 Mark float intrinsics with no preconditions as safe 2025-09-21 20:37:51 -04:00
Sayantan Chakraborty
c1242fab74
Merge pull request #1921 from a4lg/riscv-inline-asm-general-improvements
RISC-V: Improvements of inline assembly uses
2025-09-15 18:39:49 +00:00
Folkert de Vries
5dd0fdcd67
Merge pull request #1919 from sayantn/fix-vreinterpret
Remove big-endian swizzles from `vreinterpret`
2025-09-15 08:18:20 +00:00
Tsukasa OI
8df078a3f0 RISC-V: Improvements of inline assembly uses
This commit makes various improvements (better register allocation,
less register clobbering in the worst case, and better readability) to
RISC-V inline assembly usage.

Note that it does not change the `p` module (which defines the draft "P"
extension instructions and is very likely to change).

1.  Use `lateout` where possible.
    Unlike an `out(reg)`/`in(reg)` pair, a `lateout(reg)`/`in(reg)` pair
    can share the same register, because `lateout` states that the output
    register is written only after all reads are performed.
    This can improve register allocation.
2.  Add the `preserves_flags` option where possible.
    While RISC-V doesn't have _regular_ condition codes, Rust's RISC-V
    inline assembly assumes by default that some registers
    (mainly vector state registers) may be overwritten.
    Adding `preserves_flags` to intrinsics whose corresponding
    instructions do not overwrite those registers minimizes register
    clobbering in the worst case.
3.  Use a trailing semicolon.
    Since `asm!` declares an action and does not return a value by
    itself, a trailing semicolon makes it clear that an `asm!` call is
    effectively a statement.
4.  Make most `asm!` calls multi-line.
    `rustfmt` formats some simple (yet long) `asm!` calls across multiple
    lines, but it does not format complex `asm!` calls with inputs
    and/or outputs.  For consistency, this commit makes most `asm!`
    calls multi-line.
2025-09-14 05:08:19 +00:00
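A hedged sketch of points 1 and 2 above (written in the multi-line style of point 4), using a plain base-ISA `add` rather than any real stdarch intrinsic:

```rust
#[cfg(any(target_arch = "riscv32", target_arch = "riscv64"))]
fn add_via_asm(a: usize, b: usize) -> usize {
    let result: usize;
    // SAFETY: a register-to-register `add` has no side effects.
    unsafe {
        core::arch::asm!(
            "add {rd}, {ra}, {rb}",
            ra = in(reg) a,
            rb = in(reg) b,
            // `lateout` lets `rd` reuse the register of `ra` or `rb`.
            rd = lateout(reg) result,
            // `preserves_flags`: this instruction leaves the flag-like state
            // (mainly vector state registers) untouched.
            options(pure, nomem, nostack, preserves_flags),
        );
    }
    result
}
```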