Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Casting the same i64 and u64 integers to f32 gives different results on i686-pc-windows-msvc target #105626

Closed
icedrocket opened this issue Dec 12, 2022 · 6 comments · Fixed by #107879
Labels
A-floating-point Area: Floating point numbers and arithmetic A-LLVM Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues. C-bug Category: This is a bug.

Comments

@icedrocket
Copy link
Contributor

icedrocket commented Dec 12, 2022

I tried this code:

fn main() {
    let mut n: i64 = 0b0000000000111111111111111111111111011111111111111111111111111111;
    println!("{}, {}, {}, {}", n, n as i64 as f32, n as u64 as f32, n as u128 as f32);
    assert_ne!((n as f64) as f32, n as f32);
    assert_eq!(n as i64 as f32, n as u64 as f32);
    n = -n;
    assert_ne!((n as f64) as f32, n as f32);
    assert_eq!(n as i64 as f32, -(-n as u64 as f32));
}

rust-num/num-bigint#261

I expected to see this happen:

The code prints the output below and exits successfully.

18014397972611071, 18014397000000000, 18014397000000000, 18014397000000000

Instead, this happened:

The code prints the output below and panics.

18014397972611071, 18014397000000000, 18014399000000000, 18014397000000000

Meta

rustc --version --verbose:

rustc 1.65.0 (897e37553 2022-11-02)
binary: rustc
commit-hash: 897e37553bba8b42751c67658967889d11ecd120
commit-date: 2022-11-02
host: x86_64-pc-windows-msvc
release: 1.65.0
LLVM version: 15.0.0
@icedrocket icedrocket added the C-bug Category: This is a bug. label Dec 12, 2022
@icedrocket icedrocket changed the title n as i64 as f32 and n as u64 as f32 give different results on i686-pc-windows-msvc target Casting the same i64 and u64 integers to f32 gives different results on i686-pc-windows-msvc target Dec 12, 2022
@CryZe
Copy link
Contributor

CryZe commented Dec 12, 2022

Here's some reproduction on Godbolt. The problem seems unrelated to i64 and it's entirely the u64 -> f32 conversion that's buggy.

https://rust.godbolt.org/z/9Gccr5o1s

The fact that we use the floating point numbers as some sort of address makes it seem like we are loading the value from a lookup table and there might be a bug in Windows' table:

fadd    dword ptr [4*eax + __real@5f80000000000000]

@CryZe
Copy link
Contributor

CryZe commented Dec 13, 2022

Took me way too long, but I figured it out. The fact that FPU instructions are used means that it runs into problems with the FPU accuracy. By default the control word is set to 0x27f on Windows and 0x37f on Linux. In particular the 0x3 vs 0x2 is the precision control setting of the FPU.

00 = 24 bits (REAL4)
01 = Not used
10 = 53 bits (REAL8)
11 = 64 bits (REAL10)

So on Windows it's in 53-bits mode whereas on Linux it's in 64-bits mode.

Running the following code therefore fixes the problem on Windows:

fn load_fpu_control_word(control: u16) {
    unsafe {
        asm!("fldcw [{}]", in(reg) &control, options(nostack));
    }
}

load_fpu_control_word(0x37f);

This probably is an LLVM bug where LLVM assumes the FPU is configured like on Linux instead of like on Windows. MSVC for example just emits a call to __ultod3 instead, which is probably the correct thing to do. (Also it's a little surprising to me the FPU is used at all, when i686 afaik even has SSE2 available by default).

@bjorn3 bjorn3 added the A-floating-point Area: Floating point numbers and arithmetic label Dec 13, 2022
@CryZe
Copy link
Contributor

CryZe commented Dec 13, 2022

The function that lowers the uint to float cast is X86TargetLowering::LowerUINT_TO_FP in X86ISelLowering.cpp. There's basically a bunch of checks to generate shorter versions if various conditions are met, but all of them fail and it has to resort to using the FPU. There actually is one case in there where they treat Windows differently, I'm guessing something similar can be done to not fall into the FPU default case here as well. In particular because both SSE2 and a builtin function that does exactly this cast exist.

@icedrocket
Copy link
Contributor Author

icedrocket commented Dec 19, 2022

I tested it with some other code and thought it was resolved.

@icedrocket icedrocket reopened this Dec 19, 2022
@workingjubilee workingjubilee added the A-LLVM Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues. label Jan 27, 2023
@workingjubilee
Copy link
Member

workingjubilee commented Jan 27, 2023

This was patched upstream (thanks icedrocket!) but it seems like they reverted the patch, as unfortunately it broke Chrome's build: llvm/llvm-project@1692dff

I don't know if it is more profitable to work around this on our end by emitting the relevant call? It looks like there's an intention to use another fix: https://reviews.llvm.org/D142178

@icedrocket
Copy link
Contributor Author

icedrocket commented Feb 7, 2023

A new fix has been commited to upstream: llvm/llvm-project@11fb09e

compiler-errors added a commit to compiler-errors/rust that referenced this issue Feb 25, 2023
compiler-errors added a commit to compiler-errors/rust that referenced this issue Feb 25, 2023
matthiaskrgr added a commit to matthiaskrgr/rust that referenced this issue Feb 26, 2023
@bors bors closed this as completed in 18caf88 Mar 2, 2023
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
A-floating-point Area: Floating point numbers and arithmetic A-LLVM Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues. C-bug Category: This is a bug.
Projects
None yet
Development

Successfully merging a pull request may close this issue.

4 participants