Commit 5697586

Merge pull request from GHSA-ff4p-7xrq-q5r8
* x64: Remove incorrect `amode_add` lowering rules

This commit removes two incorrect rules from the x64 backend's
computation of addressing modes. These rules folded a zero-extended
32-bit computation into the address-mode operand, which is incorrect:
the 32-bit computation should be truncated to 32 bits, but once folded
into the address-mode computation it is carried out with 64-bit
operands, so the truncation never happens (a minimal sketch of the
wraparound this loses is included before the file diffs below).

* Add release notes for 6.0.1
alexcrichton authored Mar 8, 2023
1 parent fc13ee1 commit 5697586
Showing 3 changed files with 19 additions and 18 deletions.
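To illustrate the miscompilation described in the commit message, here is a minimal sketch in plain Rust (not Cranelift or ISLE code; the function names and example values are invented for illustration). It contrasts the correct semantics of a zero-extended 32-bit shift with the 64-bit arithmetic that the removed `amode_add` rules effectively performed:

```rust
// Correct semantics of a zero-extended 32-bit shift, e.g.
// `uextend(ishl index, shift)`: the shift wraps within 32 bits
// *before* the zero-extension to 64 bits.
fn correct_offset(index: u32, shift: u32) -> u64 {
    index.wrapping_shl(shift) as u64
}

// What the removed rules effectively computed: the index is zero-extended
// first, and the shift is applied by the addressing mode's scale with
// 64-bit operands, so nothing is ever truncated back to 32 bits.
fn folded_offset(index: u32, shift: u32) -> u64 {
    (index as u64) << shift
}

fn main() {
    // Any index whose high bits would be shifted out of the low 32 bits
    // exposes the difference, yielding an out-of-range address.
    let (index, shift) = (0x8000_0000_u32, 3);
    assert_eq!(correct_offset(index, shift), 0x0);
    assert_eq!(folded_offset(index, shift), 0x4_0000_0000);
    println!(
        "correct = {:#x}, folded = {:#x}",
        correct_offset(index, shift),
        folded_offset(index, shift)
    );
}
```

This is also why, in the regenerated expectations in `amode-opt.clif` below, the shift now appears as an explicit 32-bit `shll` (which truncates and zero-extends) and the load uses a scale of 1, rather than folding the shift into the addressing mode's scale.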
14 changes: 14 additions & 0 deletions RELEASES.md
@@ -1,5 +1,19 @@
--------------------------------------------------------------------------------

## 6.0.1

Released 2023-03-08.

### Fixed

* Guest-controlled out-of-bounds read/write on x86\_64
[GHSA-ff4p-7xrq-q5r8](https://github.com/bytecodealliance/wasmtime/security/advisories/GHSA-ff4p-7xrq-q5r8)

* Miscompilation of `i8x16.select` with the same inputs on x86\_64
[GHSA-xm67-587q-r2vw](https://github.com/bytecodealliance/wasmtime/security/advisories/GHSA-xm67-587q-r2vw)

--------------------------------------------------------------------------------

## 6.0.0

Released 2023-02-20.
14 changes: 0 additions & 14 deletions cranelift/codegen/src/isa/x64/inst.isle
@@ -987,20 +987,6 @@
(rule 2 (amode_add (Amode.ImmReg off (valid_reg base) flags) (ishl index (iconst (uimm8 shift))))
(if (u32_lteq (u8_as_u32 shift) 3))
(Amode.ImmRegRegShift off base index shift flags))
(rule 2 (amode_add (Amode.ImmReg off (valid_reg base) flags) (uextend (ishl index (iconst (uimm8 shift)))))
(if (u32_lteq (u8_as_u32 shift) 3))
(Amode.ImmRegRegShift off base (extend_to_gpr index $I64 (ExtendKind.Zero)) shift flags))

;; Same, but with a uextend of a shift of a 32-bit add. This is valid
;; because we know our lowering of a narrower-than-64-bit `iadd` will
;; always write the full register width, so we can effectively ignore
;; the `uextend` and look through it to the `ishl`.
;;
;; Priority 3 to avoid conflict with the previous rule.
(rule 3 (amode_add (Amode.ImmReg off (valid_reg base) flags)
(uextend (ishl index @ (iadd _ _) (iconst (uimm8 shift)))))
(if (u32_lteq (u8_as_u32 shift) 3))
(Amode.ImmRegRegShift off base index shift flags))

;; -- Case 4 (absorbing constant offsets).
;;
9 changes: 5 additions & 4 deletions cranelift/filetests/filetests/isa/x64/amode-opt.clif
@@ -132,8 +132,9 @@ block0(v0: i64, v1: i32):
; pushq %rbp
; movq %rsp, %rbp
; block0:
; movl %esi, %ecx
; movq -1(%rdi,%rcx,8), %rax
; movq %rsi, %rdx
; shll $3, %edx, %edx
; movq -1(%rdi,%rdx,1), %rax
; movq %rbp, %rsp
; popq %rbp
; ret
@@ -155,8 +156,8 @@ block0(v0: i64, v1: i32, v2: i32):
; block0:
; movq %rsi, %r8
; addl %r8d, %edx, %r8d
; movq -1(%rdi,%r8,4), %rax
; shll $2, %r8d, %r8d
; movq -1(%rdi,%r8,1), %rax
; movq %rbp, %rsp
; popq %rbp
; ret
