Miri: convert to/from apfloat instead of host floats #61673
Conversation
r? @varkor (rust_highfive has picked a reviewer for you, use r? to override)
Co-Authored-By: Mazdak Farrokhzad <twingoow@gmail.com>
Interesting that this passed... seems like we are missing a case from our test suite, namely casting a multivariant integer enum to an integer.
I opened #61702 for the missing test; this PR here is good to go I think.
```rust
Div => (l / r).value.into(),
Rem => (l % r).value.into(),
_ => bug!("invalid float op: `{:?}`", bin_op),
};
```
Much nicer!
Yeah, I love this. :) If only we had a similar trait for integers.
All integer operations can be implemented with a runtime bitwidth `n` and a `u128` to hold the value, though (maybe an `i128` for signed).
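A minimal sketch of that idea (a hypothetical helper, not rustc code): store the value in a `u128`, perform the native operation, and mask the result back down to `n` bits.

```rust
// Hypothetical sketch: an integer operation with a *runtime* bit width `n`,
// using u128 as storage and truncating the result back to `n` bits.
fn wrapping_add_bits(a: u128, b: u128, n: u32) -> u128 {
    assert!(n >= 1 && n <= 128);
    // Mask selecting the low `n` bits; `1 << 128` would overflow, so
    // the full-width case is handled separately.
    let mask = if n == 128 { u128::MAX } else { (1u128 << n) - 1 };
    a.wrapping_add(b) & mask
}

fn main() {
    // 8-bit addition: 200 + 100 = 300 wraps to 44 (300 - 256).
    assert_eq!(wrapping_add_bits(200, 100, 8), 44);
    // 128-bit addition behaves like ordinary u128 wrapping add.
    assert_eq!(wrapping_add_bits(u128::MAX, 1, 128), 0);
}
```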
Like, LLVM also has an `APInt`, not just `APFloat`, and `APFloat` uses `APInt` for the significand, but I didn't port `APInt` as its own thing, just added a bunch of functions, because of how relatively simple it is:
rust/src/librustc_apfloat/ieee.rs
Lines 2287 to 2777 in ad3829f
```rust
/// Implementation details of IeeeFloat significands, such as big integer arithmetic.
/// As a rule of thumb, no functions in this module should dynamically allocate.
mod sig {
    use std::cmp::Ordering;
    use std::mem;
    use super::{ExpInt, Limb, LIMB_BITS, limbs_for_bits, Loss};

    pub(super) fn is_all_zeros(limbs: &[Limb]) -> bool {
        limbs.iter().all(|&l| l == 0)
    }

    /// One, not zero, based LSB. That is, returns 0 for a zeroed significand.
    pub(super) fn olsb(limbs: &[Limb]) -> usize {
        limbs.iter().enumerate().find(|(_, &limb)| limb != 0).map_or(0,
            |(i, limb)| i * LIMB_BITS + limb.trailing_zeros() as usize + 1)
    }

    /// One, not zero, based MSB. That is, returns 0 for a zeroed significand.
    pub(super) fn omsb(limbs: &[Limb]) -> usize {
        limbs.iter().enumerate().rfind(|(_, &limb)| limb != 0).map_or(0,
            |(i, limb)| (i + 1) * LIMB_BITS - limb.leading_zeros() as usize)
    }

    /// Comparison (unsigned) of two significands.
    pub(super) fn cmp(a: &[Limb], b: &[Limb]) -> Ordering {
        assert_eq!(a.len(), b.len());
        for (a, b) in a.iter().zip(b).rev() {
            match a.cmp(b) {
                Ordering::Equal => {}
                o => return o,
            }
        }

        Ordering::Equal
    }

    /// Extracts the given bit.
    pub(super) fn get_bit(limbs: &[Limb], bit: usize) -> bool {
        limbs[bit / LIMB_BITS] & (1 << (bit % LIMB_BITS)) != 0
    }

    /// Sets the given bit.
    pub(super) fn set_bit(limbs: &mut [Limb], bit: usize) {
        limbs[bit / LIMB_BITS] |= 1 << (bit % LIMB_BITS);
    }

    /// Clears the given bit.
    pub(super) fn clear_bit(limbs: &mut [Limb], bit: usize) {
        limbs[bit / LIMB_BITS] &= !(1 << (bit % LIMB_BITS));
    }

    /// Shifts `dst` left `bits` bits, subtracting `bits` from its exponent.
    pub(super) fn shift_left(dst: &mut [Limb], exp: &mut ExpInt, bits: usize) {
        if bits > 0 {
            // Our exponent should not underflow.
            *exp = exp.checked_sub(bits as ExpInt).unwrap();

            // Jump is the inter-limb jump; shift is the intra-limb shift.
            let jump = bits / LIMB_BITS;
            let shift = bits % LIMB_BITS;

            for i in (0..dst.len()).rev() {
                let mut limb;

                if i < jump {
                    limb = 0;
                } else {
                    // dst[i] comes from the two limbs src[i - jump] and, if we have
                    // an intra-limb shift, src[i - jump - 1].
                    limb = dst[i - jump];
                    if shift > 0 {
                        limb <<= shift;
                        if i > jump {
                            limb |= dst[i - jump - 1] >> (LIMB_BITS - shift);
                        }
                    }
                }

                dst[i] = limb;
            }
        }
    }

    /// Shifts `dst` right `bits` bits, noting the lost fraction.
    pub(super) fn shift_right(dst: &mut [Limb], exp: &mut ExpInt, bits: usize) -> Loss {
        let loss = Loss::through_truncation(dst, bits);

        if bits > 0 {
            // Our exponent should not overflow.
            *exp = exp.checked_add(bits as ExpInt).unwrap();

            // Jump is the inter-limb jump; shift is the intra-limb shift.
            let jump = bits / LIMB_BITS;
            let shift = bits % LIMB_BITS;

            // Perform the shift. This leaves the most significant `bits` bits
            // of the result at zero.
            for i in 0..dst.len() {
                let mut limb;

                if i + jump >= dst.len() {
                    limb = 0;
                } else {
                    limb = dst[i + jump];
                    if shift > 0 {
                        limb >>= shift;
                        if i + jump + 1 < dst.len() {
                            limb |= dst[i + jump + 1] << (LIMB_BITS - shift);
                        }
                    }
                }

                dst[i] = limb;
            }
        }

        loss
    }

    /// Copies the bit vector of width `src_bits` from `src`, starting at bit `src_lsb`,
    /// to `dst`, such that the bit `src_lsb` becomes the least significant bit of `dst`.
    /// All high bits above `src_bits` in `dst` are zero-filled.
    pub(super) fn extract(dst: &mut [Limb], src: &[Limb], src_bits: usize, src_lsb: usize) {
        if src_bits == 0 {
            return;
        }

        let dst_limbs = limbs_for_bits(src_bits);
        assert!(dst_limbs <= dst.len());

        let src = &src[src_lsb / LIMB_BITS..];
        dst[..dst_limbs].copy_from_slice(&src[..dst_limbs]);

        let shift = src_lsb % LIMB_BITS;
        let _: Loss = shift_right(&mut dst[..dst_limbs], &mut 0, shift);

        // We now have (dst_limbs * LIMB_BITS - shift) bits from `src`
        // in `dst`. If this is less than src_bits, append the rest, else
        // clear the high bits.
        let n = dst_limbs * LIMB_BITS - shift;
        if n < src_bits {
            let mask = (1 << (src_bits - n)) - 1;
            dst[dst_limbs - 1] |= (src[dst_limbs] & mask) << (n % LIMB_BITS);
        } else if n > src_bits && src_bits % LIMB_BITS > 0 {
            dst[dst_limbs - 1] &= (1 << (src_bits % LIMB_BITS)) - 1;
        }

        // Clear high limbs.
        for x in &mut dst[dst_limbs..] {
            *x = 0;
        }
    }

    /// We want the most significant `precision` bits of `src`. There may not
    /// be that many; extract what we can.
    pub(super) fn from_limbs(dst: &mut [Limb], src: &[Limb], precision: usize) -> (Loss, ExpInt) {
        let omsb = omsb(src);

        if precision <= omsb {
            extract(dst, src, precision, omsb - precision);
            (
                Loss::through_truncation(src, omsb - precision),
                omsb as ExpInt - 1,
            )
        } else {
            extract(dst, src, omsb, 0);
            (Loss::ExactlyZero, precision as ExpInt - 1)
        }
    }

    /// For every consecutive chunk of `bits` bits from `limbs`,
    /// going from the most significant to the least significant bits,
    /// call `f` to transform those bits and store the result back.
    pub(super) fn each_chunk<F: FnMut(Limb) -> Limb>(limbs: &mut [Limb], bits: usize, mut f: F) {
        assert_eq!(LIMB_BITS % bits, 0);
        for limb in limbs.iter_mut().rev() {
            let mut r = 0;
            for i in (0..LIMB_BITS / bits).rev() {
                r |= f((*limb >> (i * bits)) & ((1 << bits) - 1)) << (i * bits);
            }
            *limb = r;
        }
    }

    /// Increment in-place, return the carry flag.
    pub(super) fn increment(dst: &mut [Limb]) -> Limb {
        for x in dst {
            *x = x.wrapping_add(1);
            if *x != 0 {
                return 0;
            }
        }

        1
    }

    /// Decrement in-place, return the borrow flag.
    pub(super) fn decrement(dst: &mut [Limb]) -> Limb {
        for x in dst {
            *x = x.wrapping_sub(1);
            if *x != !0 {
                return 0;
            }
        }

        1
    }

    /// `a += b + c` where `c` is zero or one. Returns the carry flag.
    pub(super) fn add(a: &mut [Limb], b: &[Limb], mut c: Limb) -> Limb {
        assert!(c <= 1);

        for (a, &b) in a.iter_mut().zip(b) {
            let (r, overflow) = a.overflowing_add(b);
            let (r, overflow2) = r.overflowing_add(c);
            *a = r;
            c = (overflow | overflow2) as Limb;
        }

        c
    }

    /// `a -= b + c` where `c` is zero or one. Returns the borrow flag.
    pub(super) fn sub(a: &mut [Limb], b: &[Limb], mut c: Limb) -> Limb {
        assert!(c <= 1);

        for (a, &b) in a.iter_mut().zip(b) {
            let (r, overflow) = a.overflowing_sub(b);
            let (r, overflow2) = r.overflowing_sub(c);
            *a = r;
            c = (overflow | overflow2) as Limb;
        }

        c
    }

    /// `a += b` or `a -= b`. Does not preserve `b`.
    pub(super) fn add_or_sub(
        a_sig: &mut [Limb],
        a_exp: &mut ExpInt,
        a_sign: &mut bool,
        b_sig: &mut [Limb],
        b_exp: ExpInt,
        b_sign: bool,
    ) -> Loss {
        // Are we bigger exponent-wise than the RHS?
        let bits = *a_exp - b_exp;

        // Determine if the operation on the absolute values is effectively
        // an addition or subtraction.
        // Subtraction is more subtle than one might naively expect.
        if *a_sign ^ b_sign {
            let (reverse, loss);

            if bits == 0 {
                reverse = cmp(a_sig, b_sig) == Ordering::Less;
                loss = Loss::ExactlyZero;
            } else if bits > 0 {
                loss = shift_right(b_sig, &mut 0, (bits - 1) as usize);
                shift_left(a_sig, a_exp, 1);
                reverse = false;
            } else {
                loss = shift_right(a_sig, a_exp, (-bits - 1) as usize);
                shift_left(b_sig, &mut 0, 1);
                reverse = true;
            }

            let borrow = (loss != Loss::ExactlyZero) as Limb;
            if reverse {
                // The code above is intended to ensure that no borrow is necessary.
                assert_eq!(sub(b_sig, a_sig, borrow), 0);
                a_sig.copy_from_slice(b_sig);
                *a_sign = !*a_sign;
            } else {
                // The code above is intended to ensure that no borrow is necessary.
                assert_eq!(sub(a_sig, b_sig, borrow), 0);
            }

            // Invert the lost fraction - it was on the RHS and subtracted.
            match loss {
                Loss::LessThanHalf => Loss::MoreThanHalf,
                Loss::MoreThanHalf => Loss::LessThanHalf,
                _ => loss,
            }
        } else {
            let loss = if bits > 0 {
                shift_right(b_sig, &mut 0, bits as usize)
            } else {
                shift_right(a_sig, a_exp, -bits as usize)
            };
            // We have a guard bit; generating a carry cannot happen.
            assert_eq!(add(a_sig, b_sig, 0), 0);
            loss
        }
    }

    /// `[low, high] = a * b`.
    ///
    /// This cannot overflow, because
    ///
    /// `(n - 1) * (n - 1) + 2 * (n - 1) == (n - 1) * (n + 1)`
    ///
    /// which is less than n<sup>2</sup>.
    pub(super) fn widening_mul(a: Limb, b: Limb) -> [Limb; 2] {
        let mut wide = [0, 0];

        if a == 0 || b == 0 {
            return wide;
        }

        const HALF_BITS: usize = LIMB_BITS / 2;

        let select = |limb, i| (limb >> (i * HALF_BITS)) & ((1 << HALF_BITS) - 1);
        for i in 0..2 {
            for j in 0..2 {
                let mut x = [select(a, i) * select(b, j), 0];
                shift_left(&mut x, &mut 0, (i + j) * HALF_BITS);
                assert_eq!(add(&mut wide, &x, 0), 0);
            }
        }

        wide
    }

    /// `dst = a * b` (for normal `a` and `b`). Returns the lost fraction.
    pub(super) fn mul<'a>(
        dst: &mut [Limb],
        exp: &mut ExpInt,
        mut a: &'a [Limb],
        mut b: &'a [Limb],
        precision: usize,
    ) -> Loss {
        // Put the narrower number in `a` for fewer loops below.
        if a.len() > b.len() {
            mem::swap(&mut a, &mut b);
        }

        for x in &mut dst[..b.len()] {
            *x = 0;
        }

        for i in 0..a.len() {
            let mut carry = 0;
            for j in 0..b.len() {
                let [low, mut high] = widening_mul(a[i], b[j]);

                // Now add carry.
                let (low, overflow) = low.overflowing_add(carry);
                high += overflow as Limb;

                // And now `dst[i + j]`, and store the new low part there.
                let (low, overflow) = low.overflowing_add(dst[i + j]);
                high += overflow as Limb;

                dst[i + j] = low;
                carry = high;
            }
            dst[i + b.len()] = carry;
        }

        // Assume the operands involved in the multiplication are single-precision
        // FP, and the two multiplicands are:
        //     a = a23 . a22 ... a0 * 2^e1
        //     b = b23 . b22 ... b0 * 2^e2
        // the result of multiplication is:
        //     dst = c48 c47 c46 . c45 ... c0 * 2^(e1+e2)
        // Note that there are three significant bits at the left-hand side of the
        // radix point: two for the multiplication, and an overflow bit for the
        // addition (that will always be zero at this point). Move the radix point
        // toward the left by two bits, and adjust the exponent accordingly.
        *exp += 2;

        // Convert the result having "2 * precision" significant bits back to one
        // having "precision" significant bits. First, move the radix point from
        // position "2*precision - 1" to "precision - 1". The exponent needs to be
        // adjusted by "2*precision - 1" - "precision - 1" = "precision".
        *exp -= precision as ExpInt + 1;

        // In case the MSB resides at the left-hand side of the radix point, shift the
        // mantissa right by some amount to make sure the MSB resides right before
        // the radix point (i.e., "MSB . rest-significant-bits").
        //
        // Note that the result is not normalized when "omsb < precision". So, the
        // caller needs to call IeeeFloat::normalize() if a normalized value is
        // expected.
        let omsb = omsb(dst);
        if omsb <= precision {
            Loss::ExactlyZero
        } else {
            shift_right(dst, exp, omsb - precision)
        }
    }

    /// `quotient = dividend / divisor`. Returns the lost fraction.
    /// Does not preserve `dividend` or `divisor`.
    pub(super) fn div(
        quotient: &mut [Limb],
        exp: &mut ExpInt,
        dividend: &mut [Limb],
        divisor: &mut [Limb],
        precision: usize,
    ) -> Loss {
        // Normalize the divisor.
        let bits = precision - omsb(divisor);
        shift_left(divisor, &mut 0, bits);
        *exp += bits as ExpInt;

        // Normalize the dividend.
        let bits = precision - omsb(dividend);
        shift_left(dividend, exp, bits);

        // Division by 1.
        let olsb_divisor = olsb(divisor);
        if olsb_divisor == precision {
            quotient.copy_from_slice(dividend);
            return Loss::ExactlyZero;
        }

        // Ensure the dividend >= divisor initially for the loop below.
        // Incidentally, this means that the division loop below is
        // guaranteed to set the integer bit to one.
        if cmp(dividend, divisor) == Ordering::Less {
            shift_left(dividend, exp, 1);
            assert_ne!(cmp(dividend, divisor), Ordering::Less)
        }

        // Helper for figuring out the lost fraction.
        let lost_fraction = |dividend: &[Limb], divisor: &[Limb]| {
            match cmp(dividend, divisor) {
                Ordering::Greater => Loss::MoreThanHalf,
                Ordering::Equal => Loss::ExactlyHalf,
                Ordering::Less => {
                    if is_all_zeros(dividend) {
                        Loss::ExactlyZero
                    } else {
                        Loss::LessThanHalf
                    }
                }
            }
        };

        // Try to perform a (much faster) short division for small divisors.
        let divisor_bits = precision - (olsb_divisor - 1);
        macro_rules! try_short_div {
            ($W:ty, $H:ty, $half:expr) => {
                if divisor_bits * 2 <= $half {
                    // Extract the small divisor.
                    let _: Loss = shift_right(divisor, &mut 0, olsb_divisor - 1);
                    let divisor = divisor[0] as $H as $W;

                    // Shift the dividend to produce a quotient with the unit bit set.
                    let top_limb = *dividend.last().unwrap();
                    let mut rem = (top_limb >> (LIMB_BITS - (divisor_bits - 1))) as $H;
                    shift_left(dividend, &mut 0, divisor_bits - 1);

                    // Apply short division in place on $H (of $half bits) chunks.
                    each_chunk(dividend, $half, |chunk| {
                        let chunk = chunk as $H;
                        let combined = ((rem as $W) << $half) | (chunk as $W);
                        rem = (combined % divisor) as $H;
                        (combined / divisor) as $H as Limb
                    });
                    quotient.copy_from_slice(dividend);

                    return lost_fraction(&[(rem as Limb) << 1], &[divisor as Limb]);
                }
            }
        }
        try_short_div!(u32, u16, 16);
        try_short_div!(u64, u32, 32);
        try_short_div!(u128, u64, 64);

        // Zero the quotient before setting bits in it.
        for x in &mut quotient[..limbs_for_bits(precision)] {
            *x = 0;
        }

        // Long division.
        for bit in (0..precision).rev() {
            if cmp(dividend, divisor) != Ordering::Less {
                sub(dividend, divisor, 0);
                set_bit(quotient, bit);
            }
            shift_left(dividend, &mut 0, 1);
        }

        lost_fraction(dividend, divisor)
    }
}
```
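For reference, the one-based bit-index convention used by `sig::olsb`/`sig::omsb` above can be sketched for a single limb (a hypothetical `u128` specialization; the real code iterates over limb slices):

```rust
/// One-based MSB: returns 0 for a zero value, mirroring `sig::omsb`.
fn omsb(limb: u128) -> usize {
    (128 - limb.leading_zeros()) as usize
}

/// One-based LSB: returns 0 for a zero value, mirroring `sig::olsb`.
fn olsb(limb: u128) -> usize {
    if limb == 0 { 0 } else { limb.trailing_zeros() as usize + 1 }
}

fn main() {
    assert_eq!(omsb(0), 0);       // zeroed significand
    assert_eq!(omsb(0b1010), 4);  // highest set bit is bit 3, one-based: 4
    assert_eq!(olsb(0b1010), 2);  // lowest set bit is bit 1, one-based: 2
}
```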
Hm, I feel like at least for the signed/unsigned distinction this will become ugly when done "untyped".
That "simple" thing you pointed to is still way more complicated than what we currently do for integer ops in CTFE.
@RalfJung Yes, because it handles arbitrary-size integers, while you have only one "limb".
What you do is more or less what I mean.
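The signed/unsigned concern above boils down to sign extension: in an "untyped" `u128` representation, the signed interpretation of an `n`-bit value can be recovered with one extra step. A hedged sketch (hypothetical helper, not the actual CTFE code):

```rust
/// Interpret the low `n` bits of `value` as an n-bit two's-complement
/// integer, by shifting the sign bit to bit 127 and arithmetically
/// shifting back down.
fn sign_extend(value: u128, n: u32) -> i128 {
    assert!(n >= 1 && n <= 128);
    let shift = 128 - n;
    ((value << shift) as i128) >> shift
}

fn main() {
    // 0xFF as an 8-bit signed value is -1.
    assert_eq!(sign_extend(0xFF, 8), -1);
    // 0x7F as an 8-bit signed value is 127.
    assert_eq!(sign_extend(0x7F, 8), 127);
}
```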
@oli-obk r=me unless you have miri-specific comments
@bors r=eddyb,oli-obk
📌 Commit 8dfc8db has been approved by
Let's make Miri work again. @bors p=1
☀️ Test successful - checks-travis, status-appveyor
Tested on commit rust-lang/rust@912d22e. Direct link to PR: <rust-lang/rust#61673> 🎉 rls on linux: test-fail → test-pass (cc @Xanewok, @rust-lang/infra).
test more variants of enum-int-casting

As I learned in rust-lang#61673 (comment), there is a code path we are not testing yet. Looks like enum-int-casting with and without an intermediate let-binding is totally different.

EDIT: The reason for this is to get rid of the cycle in definitions such as:

```rust
enum Foo {
    A = 0,
    B = Foo::A as isize + 2,
}
```

This has historically been supported, so a hack adding special treatment to `Enum::Variant as _` was added to keep supporting it.
Cc @oli-obk @eddyb
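The two code paths described above (a direct `Enum::Variant as _` cast vs. a cast through an intermediate binding) can be illustrated with a small, hypothetical test case:

```rust
enum Tag {
    A = 10,
    B = 20,
}

fn main() {
    // Direct cast of a variant path: historically special-cased so that
    // `Foo::A as isize` can appear inside discriminant initializers
    // without creating a definition cycle.
    assert_eq!(Tag::A as i64, 10);

    // Cast through an intermediate let-binding: exercises the general
    // multivariant-enum-to-integer cast path that was previously untested.
    let t = Tag::B;
    assert_eq!(t as i64, 20);
}
```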