Marc Hughes



Reminder: Floats are not precise

13 May 2009

I was chatting with a couple of friends the other day about an odd bug that was due to floating point precision.  Neither of them got it at first, and it reminded me that not everyone realizes how imprecise floating point math can be.  I know this, you know this, even the friends I was talking to knew it after they thought about it.  But it's so easy to ignore, since floating point math usually does what we want.  Here are three amazingly simple examples:

ActionScript:

var val:Number = 0.0;
for( var i:int = 0 ; i < 10 ; i++ )
{
    val += 0.1;
}
trace(val);
Java:
public class FloatTest {
    public static void main(String[] args) {
        float val = 0;
        for( int i = 0 ; i < 10 ; i++)
        {
            val += 0.1;
        }
        System.out.println( val );
    }
}
Ruby (I had to go a bit higher on the loop since it rounds differently):
val = 0
(1..100).each do
  val += 0.1
end
print val
What would you expect the output of those to be? Probably 1, 1, and 10, right?

Wrong.

The ActionScript example comes out to 0.9999999999999999, Java gives 1.0000001, and Ruby gets 9.99999999999998.
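
Incidentally, the reason the three answers differ is that Java's float is a 32-bit type, while ActionScript's Number and Ruby's Float are 64-bit doubles, so the rounding happens in different places. Here's a quick Java sketch (the class name is just for illustration) that accumulates 0.1 into both types side by side:

public class FloatVsDouble {
    public static void main(String[] args) {
        float f = 0;    // 32-bit, like the Java example above
        double d = 0;   // 64-bit, like ActionScript's Number and Ruby's Float
        for (int i = 0; i < 10; i++) {
            f += 0.1;
            d += 0.1;
        }
        System.out.println(f);  // 1.0000001
        System.out.println(d);  // 0.9999999999999999
    }
}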

Here's another ActionScript example:

trace( (0.1 + 0.1 ) == (2/10) );
trace( (0.1 + 0.1 + 0.1) == (3/10) );

That traces out: true, then false.

Baffling, huh? A float should easily have the precision to represent a tenth, right? It should easily be able to add up ten tenths to get one, right?

Here's the problem. Floats really operate in binary (duh). In binary, you can't represent a lot of simple decimal values, such as 0.10, without a repeating pattern. 0.10 decimal works out to 0.000110011 in binary, with the last 4 digits repeating forever. So the 0.1 you see when you trace out 0.1 is really just a rounded-off binary value. If you do math with those values, the error can accumulate until it's big enough to be seen despite the rounding.
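
If you want to see that rounded-off value, Java's BigDecimal has a constructor that takes a double and keeps its exact binary value instead of rounding it for display. A tiny sketch (class name is just for illustration):

import java.math.BigDecimal;

public class ExactTenth {
    public static void main(String[] args) {
        // The BigDecimal(double) constructor preserves the exact value of the double
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}

That long string is what your program is actually adding up every time you write 0.1.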

So you should never use floats for anything like:

  • Money
  • "Real mathematics" (for instance, I work on educational math software where the numbers have to always actually add up)
  • Anything requiring a certain precision, like sending a probe to Mars or calculating medication dosage

You should use floats for things where precision doesn't really matter, things where nobody will ever notice if you're off by a little: games, animations, audio compression, etc.   Otherwise, the easiest solution is to stick to integer-based math.  Pick a unit of measure with more precision than you'll ever need.  Example: for money, use cents (or tenths of cents?) instead of dollars as your unit of measure, then divide by 100 when you display the value to show dollars.  Or you could use a richer class that handles precise numbers, like Java's BigDecimal or Ruby's Rational type.
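
For example, here's a rough Java sketch of both approaches, integer cents and BigDecimal (class and variable names are just for illustration):

import java.math.BigDecimal;

public class MoneyMath {
    public static void main(String[] args) {
        // Integer-based: track cents, convert to dollars only for display
        long cents = 0;
        for (int i = 0; i < 10; i++) {
            cents += 10;                    // ten cents, stored exactly
        }
        System.out.println(cents / 100.0);  // 1.0

        // BigDecimal: build it from a String so you really get one tenth
        BigDecimal total = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            total = total.add(new BigDecimal("0.1"));
        }
        System.out.println(total);          // 1.0
    }
}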

But like I said, we already knew this, right?