Floating point addition is not associative

February 28th, 2014 | Categories: general math, matlab, Numerics, programming, python

A lot of people don’t seem to know this… and they should. When working with floating point arithmetic, it is not necessarily true that a+(b+c) = (a+b)+c. Here is a demo using MATLAB

>> x=0.1+(0.2+0.3);
>> y=(0.1+0.2)+0.3;
>> % are they equal?
>> x==y

ans =
     0

>> % let's look
>> sprintf('%.17f',x)
ans =
0.59999999999999998

>> sprintf('%.17f',y)
ans =
0.60000000000000009

These results have nothing to do with the fact that I am using MATLAB. Here’s the same thing in Python

>>> x=(0.1+0.2)+0.3
>>> y=0.1+(0.2+0.3)
>>> x==y
False
>>> print('%.17f' %x)
0.60000000000000009
>>> print('%.17f' %y)
0.59999999999999998
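
To see why the two groupings differ, it helps to look at the exact values of the intermediate sums. Here is a quick sketch using Python’s standard decimal module, which can display the exact value stored in a float:

>>> from decimal import Decimal
>>> # the intermediate sum 0.2+0.3 happens to round to exactly 0.5...
>>> Decimal(0.2 + 0.3)
Decimal('0.5')
>>> # ...but 0.1+0.2 rounds upward, so the two groupings finish the
>>> # calculation from slightly different starting points
>>> Decimal(0.1 + 0.2)
Decimal('0.3000000000000000444089209850062616169452667236328125')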

If this upsets you, or if you don’t understand why, I suggest you read David Goldberg’s classic What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Does anyone else out there have suggestions for similar resources on this topic?

  1. February 28th, 2014 at 18:43

    Lahey hosts a nice concise 1996 essay by Bruce M. Bush, “The Perils of Floating Point”:
    http://www.lahey.com/float.htm

    The examples are in Fortran but every real programmer knows Fortran anyway, right? :)

    And for a historical perspective there’s this 1998 interview with William Kahan on the advent of IEEE standardization:
    http://www.eecs.berkeley.edu/~wkahan/ieee754status/754story.html

    The takeaway here is that if you think floating point is bad today, it was much worse before!

  2. February 28th, 2014 at 18:52

    A similar resource, with the same numbers:

  3. brian t
    February 28th, 2014 at 19:13

    Associative? Reorder the terms and you don’t even get the same answer:

    octave:1> x=0.1+0.2+0.3
    x = 0.60000
    octave:2> y=0.3+0.2+0.1
    y = 0.60000
    octave:3> x==y
    ans = 0
    octave:4> x-y
    ans = 1.1102e-16
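
    Strictly speaking, any single IEEE addition is commutative; it is the left-to-right grouping of the chain that changes when you reorder the terms. A quick Python sketch of the distinction:

    >>> 0.1 + 0.2 == 0.2 + 0.1                    # one addition: commutative
    True
    >>> (0.1 + 0.2) + 0.3 == (0.3 + 0.2) + 0.1    # a chain: reordering changes the grouping
    False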

  4. Mike Croucher
    February 28th, 2014 at 20:11

    Nice resources…thanks all :)

  5. February 28th, 2014 at 20:26

    A recent blog post about algorithmic issues related to floating point: Bisecting Floating Point Numbers.

  6. Martin Cohen
    February 28th, 2014 at 23:09

    Knuth has an extensive discussion of this (and practically everything else) in The Art of Computer Programming.

  7. Matt D.
    March 1st, 2014 at 12:48
  8. March 1st, 2014 at 12:52

    Book references for floating point arithmetic include

    Chapter 2 of
    Accuracy and Stability of Numerical Algorithms (SIAM, 2nd ed, 2002)
    http://www.maths.manchester.ac.uk/~higham/asna/index.php

    Numerical Computing with IEEE Floating Point Arithmetic (SIAM, 2001)
    http://www.ec-securehost.com/SIAM/ot76.html

    and

    Handbook of Floating-Point Arithmetic (Birkhauser, 2010)
    http://www.springer.com/birkhauser/mathematics/book/978-0-8176-4704-9

    Anyone with institutional access to SIAM e-books and SpringerLink should be
    able to access these books in PDF form.

  9. Matt D.
    March 1st, 2014 at 12:54

    I might also add that Bruce Dawson’s writings are excellent:
    https://randomascii.wordpress.com/category/floating-point/

    For instance, I found these particularly insightful and informative:
    – Comparing Floating Point Numbers, 2012 Edition: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
    – Floating-Point Determinism: https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
    – Floating-point complexities: https://randomascii.wordpress.com/2012/04/05/floating-point-complexities/
    – Intermediate Floating-Point Precision: https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/
    – Float Precision–From Zero to 100+ Digits: https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/

  10. Ruben Garcia
    March 1st, 2014 at 15:05

    Mathematica doesn’t seem to have any problems:

    In[1]:= x = 0.1 + (0.2 + 0.3);
    y = (0.1 + 0.2) + 0.3;

    In[3]:= x == y

    Out[3]= True

  11. Mike Croucher
    March 1st, 2014 at 19:00

    @Ruben I disagree! Let’s look at what those variables actually contain

    In[1]:= x=0.1+(0.2+0.3);
    y=(0.1+0.2)+0.3;

    In[6]:= x //FullForm
    Out[6]//FullForm= 0.6`

    In[7]:= y//FullForm
    Out[7]//FullForm= 0.6000000000000001`

    In[8]:= x==y
    Out[8]= True

    So, what we have here is that x and y are demonstrably different and yet Mathematica says that they are equal. I understand what all of the other systems are doing since they are obeying IEEE arithmetic. I have no idea what Mathematica is choosing to do here.

  12. Alex Henderson
    March 1st, 2014 at 19:17

    Be careful not to fall into the trap of comparing two floating point numbers for equality. One should always compare to within the machine epsilon, which reflects the rounding precision of the floating point format. See that font of all knowledge, Wikipedia, on the matter, together with a method of comparing in MATLAB: http://en.wikipedia.org/wiki/Machine_epsilon

    My guess is that Mathematica is using this to say that, to its best approximation using the underlying hardware, the numbers are the same.
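
    In Python, for example, a tolerance-based comparison might look like this (a minimal sketch; the right tolerance depends on the computation):

    >>> import sys, math
    >>> x = (0.1 + 0.2) + 0.3
    >>> y = 0.1 + (0.2 + 0.3)
    >>> x == y                                 # exact comparison fails
    False
    >>> abs(x - y) <= sys.float_info.epsilon   # absolute test, scaled for values near 1
    True
    >>> math.isclose(x, y)                     # relative test (Python 3.5+, default rel_tol=1e-09)
    True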

  13. Ruben Garcia
    March 2nd, 2014 at 03:59

    @Mike You’re right. Thanks for pointing it out.
    When I did the same calculation with arbitrary precision, both variables had the same value. I’ll post the question on http://mathematica.stackexchange.com to try to find out more.

  14. March 2nd, 2014 at 09:27

    Moreover, it is surprising that 0:0.1:1 actually works. In EMT, I use an internal epsilon to check whether the last multiple of 0.1 is 1 or not. An alternative would be to round 1/0.1 to 10 and use that number of steps.
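
    A rough Python sketch of that counted-steps alternative (frange here is a hypothetical helper, not an EMT function):

    >>> def frange(x0, x1, dx):
    ...     # count the steps up front, then compute each point fresh from x0,
    ...     # so rounding errors never accumulate as they would with repeated += dx
    ...     n = round((x1 - x0) / dx)
    ...     return [x0 + i * dx for i in range(n + 1)]
    ...
    >>> frange(0, 1, 0.1)[-1]   # the last element lands exactly on 1
    1.0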

    By the way, in EMT you can print the hexadecimal content of numbers. It also has a ~= operator, which checks for equality up to an internal epsilon.

    >x=0.1+(0.2+0.3);
    >y=(0.1+0.2)+0.3;
    >x==y
    0
    >x~=y
    1
    >printhex(x)
    9.9999999999998*16^-1
    >printhex(y)
    9.99999999999A0*16^-1

  15. Mike Croucher
    March 2nd, 2014 at 14:59

    Alex Henderson said “Be careful not to fall into the trap of comparing two floating point numbers for equality.”

    I completely agree. It is inadvisable to compare two floating point numbers for equality for many reasons, the issue demonstrated in this post being just one of them.

    As R.G. ably demonstrates in the comment above, the two numbers we are considering are different at the bit level. If I ask a system if two different quantities are the same, I expect it to say ‘No’. Mathematica appears not to be doing that.

    Reading this Mathematica Stack Exchange discussion – http://mathematica.stackexchange.com/questions/43211/floating-point-addition-not-associative – it seems that Mathematica’s behaviour here is intentional. This raises the issue that if you prototype a floating point algorithm using Mathematica, you can’t guarantee that it will work as intended if you implement it in C, MATLAB, Python, Fortran or anything else!

  16. Martin Cohen
    March 2nd, 2014 at 20:19

    Re R.G.: To do a loop the number of times you want:

    for (x = x0; x <= x1 + dx/2; x += dx) (assuming dx > 0).

  17. Steve L
    March 4th, 2014 at 21:07

    Regarding 0:0.1:1 working (with the last value being exactly 1) — take a look at the bottom of the first column of the second page of this Cleve’s Corner article. It’s from a number of years ago, but it’s still valid.

    http://www.mathworks.com/company/newsletters/articles/floating-points-ieee-standard-unifies-arithmetic-model.html

    You can display the hex form of numbers in MATLAB as well; use either the ‘hex’ option for the FORMAT function or use NUM2HEX.

    http://www.mathworks.com/help/matlab/ref/format.html
    http://www.mathworks.com/help/matlab/ref/num2hex.html

    If you know how many elements you want the vector you’re generating to have and the endpoints, use LINSPACE.

    http://www.mathworks.com/help/matlab/ref/linspace.html
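
    NumPy’s linspace behaves the same way, for instance (a quick sketch):

    >>> import numpy as np
    >>> v = np.linspace(0, 1, 11)   # 11 equally spaced points from 0 to 1
    >>> v[-1] == 1.0                # the endpoint is set exactly, not accumulated
    True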

  18. March 6th, 2014 at 04:35

    @Martin Cohen

    for (double x = x0; x <= x1 + epsilon; x += dx) !!!

  19. Grisha Kirilin
    March 10th, 2014 at 11:17

    @Mike Croucher Mathematica says that the numbers are equal with default adaptive precision:

    x == y
    (* True *)

    x - y == 0
    (* False *)

    x - y == 0.
    (* False *)

    Accuracy[x - y]

    (* 31.9092 *)

    In order to have correct arithmetic one has to use an explicit precision:

    accuracy = 40;
    x = SetPrecision[0.1, accuracy] + (SetPrecision[0.2, accuracy] + SetPrecision[0.3, accuracy]);
    y = (SetPrecision[0.1, accuracy] + SetPrecision[0.2, accuracy]) + SetPrecision[0.3, accuracy];

    x == y
    (* True *)

    x - y == 0
    (* True *)

    so that the FullForms are essentially the same:

    x // FullForm
    (* 0.6000000000000000055511151231257827021181583404541015625`40. *)

    y // FullForm
    (* 0.6000000000000000055511151231257827021181583404541015625`40. *)

    x - y // FullForm

    (* 0``39.92081875395238 *)

    thus it is zero with the precision used.

  20. Mike Croucher
    March 11th, 2014 at 11:23

    Thanks, Grisha. The issue that I have with Mathematica’s way of doing things is that it’s different from all other floating point implementations I know of. Thus, there is a risk that if you prototype code in Mathematica, it won’t work correctly when you re-implement it in C, Fortran etc.
