Weird bug with float <= float

So I’m running UE4 4.10.2 on Windows 7 64-bit, 8 GB RAM, AMD Athlon II X3 425, NVIDIA GTX 750 Ti 4 GB.

I have a float that defaults to 1.0. I subtract 0.2 from it five times and send it into a Branch comparing the variable <= 0.0, but the condition does not branch TRUE. I print the variable to screen and it shows 0.0, yet it still does not branch TRUE.

If I subtract 0.2 from it a sixth time, setting the variable to -0.2, the branch does branch TRUE.

If, instead of subtracting 0.2 five times, I subtract 0.1 ten times or 0.25 four times (which should also leave the variable at 0.0), the branch DOES branch TRUE.

This issue really had me confused. I couldn’t fathom why on earth subtracting 0.2 five times from a variable set to 1 would not satisfy <= 0.0, but subtracting 0.1 ten times from that same variable would satisfy <= 0.0.

If I set the branch’s condition to <= 0.01 then it does branch to TRUE after subtracting 0.2 five times.
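In other words, comparing against a small tolerance instead of exactly 0.0 makes the branch behave. A minimal sketch of that idea (the helper name `branch_lte` and the 0.01 tolerance are just my choices for this post, matching the condition above):

```python
def branch_lte(value, target, tol=1e-2):
    """Tolerant <=: treat `value` as <= `target` if it exceeds it by at most
    `tol`. Pick `tol` larger than the worst accumulated rounding error but
    smaller than any real difference you care about."""
    return value <= target + tol

# A tiny leftover rounding residue near zero now branches TRUE:
print(branch_lte(0.0000000298, 0.0))  # True
# A genuinely positive value still branches FALSE:
print(branch_lte(0.5, 0.0))           # False
```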

So I tried playing with this some more, and it seems that when subtracting 0.2 five times the variable ends up as 0.0, but when subtracting 0.1 ten times it ends up as -0.0, and that is the difference between the two on the <= 0.0 comparison. Still very strange.
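You can reproduce the exact leftover values outside the editor. Here is a sketch that emulates UE4’s 32-bit floats with Python’s `struct` module (the `f32` and `countdown` helper names are just mine):

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single (a UE4 float)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def countdown(start, step, times):
    """Repeatedly subtract `step` from `start` in 32-bit float arithmetic."""
    v = f32(start)
    s = f32(step)
    for _ in range(times):
        v = f32(v - s)
    return v

print(countdown(1.0, 0.2, 5))    # tiny POSITIVE residue (~3e-08): fails <= 0.0
print(countdown(1.0, 0.1, 10))   # tiny NEGATIVE residue (~-7e-08): passes <= 0.0
print(countdown(1.0, 0.25, 4))   # exactly 0.0: 0.25 is a power-of-two fraction
```

The “0.0” printed on screen is just the residue rounded for display; the actual stored value after five 0.2 subtractions is a hair above zero, which is why the branch goes FALSE.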

It’s not a bug; this is binary floating-point imprecision.
Read “What Every Programmer Should Know About Floating-Point”:

http://floating-point-gui.de/basic/

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
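The short version: 0.2 and 0.1 have no exact binary representation, while 0.25 does. You can see the stored values directly by emulating 32-bit floats in Python (`f32` is just an ad-hoc helper name):

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single (a UE4 float)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# The closest 32-bit float to 0.2 is slightly ABOVE 0.2, so every
# "subtract 0.2" actually subtracts a bit too much or leaves a bit behind.
print('%.17f' % f32(0.2))   # 0.20000000298023224
# 0.25 is 1/4, a power-of-two fraction, so it is stored exactly.
print('%.17f' % f32(0.25))  # 0.25000000000000000
```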

I don’t think that explains it in this case. There are no multiplication or division operations used here, so this behavior seems strange to me too.
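Multiplication isn’t actually needed for error to creep in, though: 0.2 is already stored inexactly, and the result of each subtraction is itself rounded to the nearest representable float. A quick illustration, again emulating 32-bit floats in Python (`f32` is an ad-hoc helper):

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single (a UE4 float)."""
    return struct.unpack('f', struct.pack('f', x))[0]

exact = 1.0 - f32(0.2)   # exact real-number result of the first subtraction
rounded = f32(exact)     # what a 32-bit subtraction actually stores
print('%.17f' % exact)   # 0.79999999701976776
print('%.17f' % rounded) # 0.80000001192092896 -- rounded UP past 0.8
```

Five such rounding steps in a row leave the small positive residue the original poster observed.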

Hi ,

is correct; I believe this is a floating-point precision error. I have reproduced the behavior and have entered a bug report, UE-26259, to be assessed by the development staff.