Floating point 'equality' testing

Discussion in 'Programmer's Corner' started by AlbertHall, Oct 3, 2016.

  1. AlbertHall

    Thread Starter Well-Known Member

    Jun 4, 2014
    1,908
    379
    I know that it is generally inadvisable to test floating point numbers for equality. However, is it safe to assume that 'some code' will be executed in this example?
    Code (Text):
double A, B;
A = 3.27 / 4.72;
B = A;
if (A == B)
{
    //some code
}
     
  2. dannyf

    Well-Known Member

    Sep 13, 2015
    1,779
    360
    not with 100% certainty.
     
  3. WBahn

    Moderator

    Mar 31, 2012
    17,720
    4,788
    Yes, because you are storing one bit pattern directly into another and then checking if those two bit patterns are the same. There is no round-off involved.
     
  4. ci139

    Member

    Jul 11, 2016
    341
    38
     
  5. AlbertHall

    Thread Starter Well-Known Member

    Jun 4, 2014
    1,908
    379
    This is for embedded code and I am trying to avoid fabs(A-B)<0.00001 which looks like a lot of code space and clock cycles.
     
  6. ci139

    Member

    Jul 11, 2016
    341
    38
on second thought, perhaps not -- it depends on the FP format and on the algorithm used to test the equality of doubles
e.g.
specific values and NaNs may have different bit patterns stored in the FP representation, especially if you use Java or another platform-independent runtime
to be more specific: to subtract one value from another, the implementation might complement the second operand and then add, then test for zero, rather than XOR the 64 FP bits (and test for zero); if that subtraction is done in floating point, it could produce a round-off error . . . maybe . . . and such would be bad math software (while they keep sending updates and service packs ...)
     
  7. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    I do a great deal of work with floats in embedded code, yet I cannot think of a single instance where I'd want to know if two (of arbitrary precision) are equal.

    What is it, precisely, you are really trying to accomplish? There is likely a better way.
     
  8. ci139

    Member

    Jul 11, 2016
    341
    38
  9. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    You missed the absolute value part. Aside from that, I think the remainder of your comment was lost in translation.

    FABS() is cheap for floats. Addition and subtraction are expensive: the arguments must first be aligned prior to the operation, and the result must be normalized.
     
  10. WBahn

    Moderator

    Mar 31, 2012
    17,720
    4,788
    In general this is not a very good way to check for (approximate) floating point equality. What if

    A = 1.000000000001e+19
    B = 1.000000000002e+19

    Wouldn't you (probably) want those to be considered "equal", even though fabs(A-B) is ten million?

    Or what about

    A = 1000.0e-19
    B = 1.0e-19

    Wouldn't you want those to be considered "unequal", even though fabs(A-B) here is orders of magnitude smaller than 0.00001?

    IF (and only IF) you know that A and B are sufficiently close to 1.0 to make 0.00001 a meaningful threshold should you consider this approach. Otherwise you want to normalize your threshold so that you are essentially saying that A and B are considered "equal" if they agree to so many sig figs.
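    A minimal sketch of that normalized-threshold idea (the function name `nearly_equal` and the 1e-5 tolerance are illustrative, not from the thread):

    ```c
    #include <math.h>
    #include <stdbool.h>

    /* Relative-tolerance comparison: scale the threshold by the larger
       magnitude, so "equal" means "agree to roughly -log10(rel_tol)
       significant figures" rather than "differ by less than a fixed
       absolute amount". */
    static bool nearly_equal(double a, double b, double rel_tol)
    {
        double scale = fmax(fabs(a), fabs(b));
        return fabs(a - b) <= rel_tol * scale;
    }
    ```

    With this, the two large values above compare "equal" and the two tiny values compare "unequal", which a fixed 0.00001 threshold gets backwards.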
     
  11. WBahn

    Moderator

    Mar 31, 2012
    17,720
    4,788
    I agree that there is hardly ever a case in which you should want to compare two floating point values for equality (though the IEEE-754 standard requires that a limited range of integers have exact representations and that arithmetic performed within that range produce exact results, so there is a lot of code out there that hangs its hat on that requirement -- personally I think it is fragile, but if you are extremely careful you can almost certainly get significant performance gains by exploiting it).

    Whether you want to (or should want to) do it or not, I'm interested in what your experience tells you about the strict question that was asked, which is basically:

    If we have two floating point variables (say doubles) and I do:

    b = some_finite_representable_value;
    a = b;
    if (a == b) { some_code }

    is it guaranteed that the "some_code" will execute?

    If not, what are the potential reasons why not?

    The only one that I can think of is that the operation (a - b) might not produce exactly zero. But is this possible when 'a' and 'b' have been forced to have the exact same representation (bit pattern) in memory?

    Or is it possible that a = b could result in a different pattern stored in 'a' than in 'b'? That seems unlikely (meaning I don't think it is possible) except in the case of b being equal to -0. But I think that the standards requirement that -0 be treated identically to +0 (except that 1/-0 = -infinity and 1/+0 = +infinity) would come into play.
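    One way to ask that bit-pattern question directly, without invoking floating-point == at all, is to compare the raw bytes (a sketch; `same_bits` is an illustrative name):

    ```c
    #include <string.h>
    #include <stdbool.h>

    /* Compares the stored bit patterns of two doubles byte-for-byte.
       Note this is stricter than IEEE-754 ==: +0.0 and -0.0 compare
       unequal here, and two identical NaN patterns compare equal. */
    static bool same_bits(double a, double b)
    {
        return memcmp(&a, &b, sizeof(double)) == 0;
    }
    ```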
     
  12. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    Since I write my own math libraries, I can guarantee that the conditional presented will evaluate as true.

    Would I trust a third party implementation? Probably not. But since I'd never compare floats for equality, I've never asked, or researched, the question.
     
  13. AlbertHall

    Thread Starter Well-Known Member

    Jun 4, 2014
    1,908
    379
    OK, so the take-away message is 'don't rely on it'; I'll either bite the bullet or find an alternative approach.
    Thanks guys, I am enlightened.
     
  14. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    I figured a (relatively) cheap and fast way to do this:
    1. Compare the exponents. If they are equal, proceed to step 3.
    2. If they are different by 1, align the significand with the larger exponent by a single right shift. Otherwise, the two values are different by greater than one (base 2) order of magnitude -- quit as not equal.
    3. For each operand that is negative (sign bit set), 2's complement the significand.
    4. (Binary) subtract one significand from the other.
    5. If the result is negative, 2's complement the result.
    6. Compare the final binary result against a max value depending upon the precision required. Less than max is equal, greater than is not equal.
    While this process requires stripping the sign, exponent, and significand of the original floats, it at least eliminates the need for any floating point operations. And, it takes care of the precision vs. magnitude issue.
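    The steps above are close in spirit to the well-known "ULP distance" trick: remap the sign-magnitude float bit patterns onto a continuous integer scale, then a plain integer subtraction counts how many representable floats lie between the operands. A sketch assuming 32-bit IEEE-754 floats (`float_almost_equal` and `max_ulps` are illustrative names, and `max_ulps` plays the role of the "max value" in step 6):

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    static bool float_almost_equal(float a, float b, int32_t max_ulps)
    {
        int32_t ia, ib;
        memcpy(&ia, &a, sizeof ia);
        memcpy(&ib, &b, sizeof ib);

        /* Remap negative floats (sign bit set) so the integer ordering
           matches the float ordering -- the "2's complement" step above.
           Note +0.0 and -0.0 both map to 0. */
        if (ia < 0) ia = INT32_MIN - ia;
        if (ib < 0) ib = INT32_MIN - ib;

        /* Steps 4-6: subtract, take the absolute value, and test against
           the threshold (a 64-bit difference avoids signed overflow when
           the operands have opposite signs). */
        int64_t diff = (int64_t)ia - ib;
        if (diff < 0) diff = -diff;
        return diff <= max_ulps;
    }
    ```

    No exponent extraction is needed because the exponent field sits above the significand in the bit pattern, so the single integer subtraction handles the alignment implicitly.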
     
  15. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    Sorry, the subtraction requires 2 signed integers, followed by negation if necessary.
     
  16. MrChips

    Moderator

    Oct 2, 2009
    12,432
    3,360
    Use fixed-point arithmetic.
     
  17. AlbertHall

    Thread Starter Well-Known Member

    Jun 4, 2014
    1,908
    379
    That's definitely something I'm going to consider.
     
  18. MrChips

    Moderator

    Oct 2, 2009
    12,432
    3,360
    What is your range of numbers, and what resolution do you require?
    For example, for -199.99 to +199.99, you can scale all your values by 100 and work with integer arithmetic.
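    A sketch of that scaling idea, storing values as integer hundredths so that == is exact and cheap (`fix100_t` and the helper names are illustrative, not from the thread):

    ```c
    #include <stdint.h>

    typedef int32_t fix100_t;   /* value in units of 0.01 */

    /* Addition needs no rescaling. */
    static fix100_t fix100_add(fix100_t a, fix100_t b) { return a + b; }

    /* The product of two hundredths is in ten-thousandths, so divide
       by the scale factor once (64-bit intermediate avoids overflow;
       the division truncates toward zero). */
    static fix100_t fix100_mul(fix100_t a, fix100_t b)
    {
        return (fix100_t)(((int64_t)a * b) / 100);
    }
    ```

    For example, 199.99 is stored as 19999, and equality between two such values is a single integer compare.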
     
  19. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    How does this answer the question regarding testing for the equality of two floating point numbers?
     
  20. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    2,674
    2,724
    So, two floating point multiplications followed by a type cast to signed ints is faster than what I proposed?
     