Floating point 'equality' testing

Thread Starter

AlbertHall

Joined Jun 4, 2014
12,347
I know that it is generally inadvisable to test floating point numbers for equality. However, is it safe to assume that 'some code' will be executed in this example?
Code:
double A, B;
A = 3.27 / 4.72;
B = A;
if(A == B)
{
//some code
}
 

WBahn

Joined Mar 31, 2012
30,077
I know that it is generally inadvisable to test floating point numbers for equality. However, is it safe to assume that 'some code' will be executed in this example?
Code:
double A, B;
A = 3.27 / 4.72;
B = A;
if(A == B)
{
//some code
}
Yes, because you are storing one bit pattern directly into another and then checking if those two bit patterns are the same. There is no round-off involved.
 

Thread Starter

AlbertHall

Joined Jun 4, 2014
12,347
This is for embedded code and I am trying to avoid fabs(A-B)<0.00001 which looks like a lot of code space and clock cycles.
 

ci139

Joined Jul 11, 2016
1,898
On second thought, perhaps not -- it depends on the FP format and the algorithm used to test the equality of doubles.
e.g.
Special values and NaNs may have different bit patterns stored, especially if you use Java or another platform-independent environment.
To be more specific: to subtract one value from another, the hardware may complement the second operand, add the operands, and test for zero, rather than XOR the FP64 bit patterns and test for zero. If it performs an actual floating point subtraction, it might produce a round-off error . . . maybe . . . and such would be bad math software (while they keep sending updates and service packs ...)
 

joeyd999

Joined Jun 6, 2011
5,287
This is for embedded code and I am trying to avoid fabs(A-B)<0.00001 which looks like a lot of code space and clock cycles.
I do a great deal of work with floats in embedded code, yet I cannot think of a single instance where I'd want to know if two (of arbitrary precision) are equal.

What is it, precisely, you are really trying to accomplish? There is likely a better way.
 

joeyd999

Joined Jun 6, 2011
5,287
= (A-(B+0.00001)) , ? test NEG.flag a bit C0 seems complicated -- people like wasting other peoples´ time (this like general genetic fault)
https://en.wikipedia.org/wiki/Parity_flag
http://x86.renejeschke.de/html/file_module_x86_id_123.html
https://cs.fit.edu/~mmahoney/cse3101/float.html
You missed the absolute value part. Aside from that, I think the remainder of your comment was lost in translation.

FABS() is cheap for floats. Addition and subtraction are expensive: the arguments must first be aligned prior to the operation, and the result must be normalized.
 

WBahn

Joined Mar 31, 2012
30,077
This is for embedded code and I am trying to avoid fabs(A-B)<0.00001 which looks like a lot of code space and clock cycles.
In general this is not a very good way to check for (approximate) floating point equality. What if

A = 1.000000000001e+19
B = 1.000000000002e+19

Wouldn't you (probably) want those to be considered "equal", even though (if I counted right) fabs(A-B) is ten million?

Or what about

A = 1000.0e-19
B = 1.0e-19

Wouldn't you want those to be considered "unequal", even though fabs(A-B) here is orders of magnitude smaller than 0.00001?

IF (and only IF) you know that A and B are sufficiently close to 1.0 to make 0.00001 a meaningful threshold should you consider this approach. Otherwise you want to normalize your threshold so that you are essentially saying that A and B are considered "equal" if they agree to so many sig figs.
 

WBahn

Joined Mar 31, 2012
30,077
I do a great deal of work with floats in embedded code, yet I cannot think of a single instance where I'd want to know if two (of arbitrary precision) are equal.

What is it, precisely, you are really trying to accomplish? There is likely a better way.
I agree that there is hardly a case in which you should want to equality-compare two floating point values. (The IEEE-754 standard does require that a limited range of integers have exact representations and that arithmetic performed within that range produce exact results, so there is a lot of code out there that hangs its hat on that requirement -- personally I think it is fragile, but if you are extremely careful you can almost certainly get significant performance gains by exploiting it.)

Whether you want to (or should want to) do it or not, I'm interested in what your experience tells you about the strict question that was asked, which is basically:

If we have two floating point variables (say doubles) and I do:

b = some_finite_representable_value;
a = b;
if (a == b) { some_code }

is it guaranteed that the "some_code" will execute?

If not, what are the potential reasons why not?

The only one that I can think of is that the operation (a - b) might not produce exactly zero. But is this possible when 'a' and 'b' have been forced to have the exact same representation (bit pattern) in memory?

Or is it possible that a = b could result in a different pattern stored in 'a' than in 'b'? That seems unlikely (meaning I don't think it is possible) except in the case of b being equal to -0. But I think that the standard's requirement that -0 be treated identically to +0 (except that 1/-0 = -infinity and 1/+0 = +infinity) would come into play.
 

joeyd999

Joined Jun 6, 2011
5,287
I agree that there is hardly a case in which you should want to equality-compare two floating point values. (The IEEE-754 standard does require that a limited range of integers have exact representations and that arithmetic performed within that range produce exact results, so there is a lot of code out there that hangs its hat on that requirement -- personally I think it is fragile, but if you are extremely careful you can almost certainly get significant performance gains by exploiting it.)

Whether you want to (or should want to) do it or not, I'm interested in what your experience tells you about the strict question that was asked, which is basically:

If we have two floating point variables (say doubles) and I do:

b = some_finite_representable_value;
a = b;
if (a == b) { some_code }

is it guaranteed that the "some_code" will execute?

If not, what are the potential reasons why not?

The only one that I can think of is that the operation (a - b) might not produce exactly zero. But is this possible when 'a' and 'b' have been forced to have the exact same representation (bit pattern) in memory?

Or is it possible that a = b could result in a different pattern stored in 'a' than in 'b'? That seems unlikely (meaning I don't think it is possible) except in the case of b being equal to -0. But I think that the standard's requirement that -0 be treated identically to +0 (except that 1/-0 = -infinity and 1/+0 = +infinity) would come into play.
Since I write my own math libraries, I can guarantee that the conditional presented will evaluate as true.

Would I trust a third party implementation? Probably not. But since I'd never compare floats for equality, I've never asked, or researched, the question.
 

Thread Starter

AlbertHall

Joined Jun 4, 2014
12,347
OK, so the take-away message is 'don't rely on it', and I will either bite the bullet or find an alternative approach.
Thanks guys, I am enlightened.
 

joeyd999

Joined Jun 6, 2011
5,287
I figured a (relatively) cheap and fast way to do this:
  1. Compare the exponents. If they are equal, proceed to step 3.
  2. If they differ by 1, right-shift the significand of the operand with the smaller exponent once, aligning it to the larger exponent. Otherwise the two values differ by more than one (base 2) order of magnitude -- quit as not equal.
  3. For each operand that is negative (sign bit set), 2's complement the significand.
  4. (Binary) subtract one significand from the other.
  5. If the result is negative, 2's complement the result.
  6. Compare the final binary result against a max value depending upon the precision required. Less than max is equal, greater than is not equal.
While this process requires stripping the sign, exponent, and significand of the original floats, it at least eliminates the need for any floating point operations. And, it takes care of the precision vs. magnitude issue.
 

joeyd999

Joined Jun 6, 2011
5,287
3. For each operand that is negative (sign bit set), 2's complement the significand.
4. (Binary) subtract one significand from the other.
5. If the result is negative, 2's complement the result.
Sorry, the subtraction requires 2 signed integers, followed by negation if necessary.
 

MrChips

Joined Oct 2, 2009
30,824
What are your range of numbers and what resolution do you desire?
For example, for -199.99 to +199.99, you can scale all your values by 100 and work with integer arithmetic.
 