## Fun with Mathematica’s Reduce and floating point arithmetic

February 8th, 2013 | Categories: mathematica, matlab | Tags:

I was at a seminar yesterday where we were playing with Mathematica and wanted to solve the following equation

`1.0609^t == 1.5`

You punch this into Mathematica as follows:

`Solve[1.0609^t == 1.5]`

Mathematica returns the following

```
During evaluation of In[1]:= Solve::ifun: Inverse functions are being used by Solve,
so some solutions may not be found; use Reduce for complete solution information. >>

Out[1]= {{t -> 6.85862}}
```

I have got the solution I expect, but Mathematica suggests that I’ll get more information if I use Reduce. So, let’s do that.

```
In[2]:= Reduce[1.0609^t == 1.5, t]

During evaluation of In[2]:= Reduce::ratnz: Reduce was unable to solve the system with inexact
coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result. >>

Out[2]= C[1] \[Element] Integers &&
t == -16.9154 (-0.405465 + (0. + 6.28319 I) C[1])
```

Looks complex and a little complicated! To understand why complex numbers have appeared in the mix, you need to know that Mathematica always considers variables to be complex unless you tell it otherwise. So, it has given you the infinite number of complex values of t that satisfy the original equation. No problem, let’s just tell Mathematica that we are only interested in real values of t.

```
In[3]:= Reduce[1.0609^t == 1.5, t, Reals]

During evaluation of In[3]:= Reduce::ratnz: Reduce was unable to solve the system with inexact
coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result. >>

Out[3]= t == 6.85862
```

Again, I get the solution I expect. However, Mathematica still feels that it is necessary to give me a warning message. Time was ticking on so I posed the question on Mathematica Stack Exchange and we moved on.

http://mathematica.stackexchange.com/questions/19235/why-does-mathematica-struggle-with-solving-this-equation

At the time of writing, the lead answer says that ‘Mathematica is not “struggling” with your equation. The message is simply FYI — to tell you that, for this equation, it prefers to work with exact quantities rather than inexact quantities (reals)’

I’d accept this as an explanation except for one thing: the message says that it is UNABLE to solve the equation as I originally posed it and so needs to proceed by solving a corresponding exact system. That implies that it has struggled a great deal, given up and tried something else.

This would come as a surprise to anyone with a calculator, who would simply do the following manipulations:

1.0609^t == 1.5

Log[1.0609^t] == Log[1.5]

t*Log[1.0609] == Log[1.5]

t = Log[1.5]/Log[1.0609]

Mathematica evaluates this as

t = 6.8586187084788275
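The same back-of-the-envelope calculation can be sketched in Python, whose floats are the same 64-bit IEEE doubles (my addition, not part of the original Mathematica session):

```python
import math

# Pen-and-paper solution: t = log(1.5) / log(1.0609),
# computed entirely in 64-bit IEEE double precision.
t = math.log(1.5) / math.log(1.0609)
print(t)

# Substituting t back in should recover 1.5 to within a few ulps.
residual = 1.5 - 1.0609 ** t
print(residual)
```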

This appears to solve the equation exactly.  Plug it back into Mathematica (or my calculator) to get

```
In[4]:= 1.0609^(6.8586187084788275)

Out[4]= 1.5
```

I had no trouble dealing with inexact quantities, and I didn’t need to ‘solve a corresponding exact system and numericize the result’.  This appears to be a simple problem. So, why does Mathematica bang on about it so much?

### Over to MATLAB for a while

Mathematica is complaining that we have asked it to work with inexact quantities.  How could this be? 1.0609, 6.8586187084788275 and 1.5 look pretty exact to me! However, it turns out that as far as the computer is concerned some of these numbers are far from exact.

When you input a number such as 1.0609 into Mathematica, it considers it to be a double precision number and 1.0609 cannot be exactly represented as such.  The closest Mathematica, or indeed any numerical system that uses 64bit IEEE arithmetic, can get is 4777868844677359/4503599627370496 which evaluates to 1.0608999999999999541699935434735380113124847412109375. I wondered if this is why Mathematica was complaining about my problem.
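You can see the same conversion outside Mathematica or MATLAB; for example, Python’s standard fractions and decimal modules (my sketch) will show the exact rational actually stored for a double:

```python
from decimal import Decimal
from fractions import Fraction

# The exact rational value of the 64-bit double closest to 1.0609.
print(Fraction(1.0609))   # 4777868844677359/4503599627370496

# Decimal(float) spells out every decimal digit of that stored double.
print(Decimal(1.0609))

# 1.5, by contrast, is exactly representable in binary: 3/2.
print(Fraction(1.5))      # 3/2
```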

At this point, I switch tools and use MATLAB for a while in order to investigate further.  I do this for no reason other than I know the related functions in MATLAB a little better.  MATLAB’s sym command can give us the rational number that exactly represents what is actually stored in memory (Thanks to Nick Higham for pointing this out to me).

```
>> sym(1.0609,'f')

ans =
4777868844677359/4503599627370496
```

We can then evaluate this fraction to whatever precision we like using the vpa command:

```
>> vpa(sym(1.0609,'f'),100)
ans =
1.0608999999999999541699935434735380113124847412109375

>> vpa(sym(1.5,'f'),100)
ans =
1.5

>> vpa(sym(6.8586187084788275,'f'),100)
ans =
6.8586187084788274859192824806086719036102294921875
```

So, 1.5 can be represented exactly in 64-bit double precision arithmetic, but 1.0609 and 6.8586187084788275 cannot. Mathematica is unhappy with this state of affairs and so chooses to solve an exact problem instead. I guess the reasoning is: if the system cannot even represent the numbers in my problem exactly (e.g. 1.0609), how can I expect it to solve the problem exactly?

### Which Exact Problem?
So, which exact equation might Reduce be choosing to solve?  It could solve the equation that I mean:

`(10609/10000)^t == 1500/1000`

which does have an exact solution and so Reduce can find it.

`(Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103]))`

Evaluating this gives 6.858618708478698:

```
(Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103])) // N // FullForm

6.858618708478698`
```
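Incidentally, that closed form is just Log[3/2]/Log[10609/10000] rewritten in terms of prime factors; a quick numerical check in Python (my own sanity check, not from the original session) confirms the two forms agree:

```python
import math

# Reduce's closed form: (Log[2] - Log[3]) / (2*(2*Log[2] + 2*Log[5] - Log[103]))
closed_form = (math.log(2) - math.log(3)) / (
    2 * (2 * math.log(2) + 2 * math.log(5) - math.log(103))
)

# The same quantity written directly: log(3/2) / log(10609/10000).
direct = math.log(3 / 2) / math.log(10609 / 10000)

print(closed_form, direct)
```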

Alternatively, Mathematica could convert the double precision number 1.0609 to the fraction that exactly represents what’s actually in memory and solve that.

`(4777868844677359/4503599627370496)^t == 1500/1000`

This also has an exact solution:

`(Log[2] - Log[3])/(52*Log[2] - Log[643] - Log[2833] - Log[18251] - Log[143711])`

which evaluates to 6.858618708478904:

```
(Log[2] - Log[3])/(52*Log[2] - Log[643] - Log[2833] - Log[18251] - Log[143711]) // N // FullForm

6.858618708478904`
```

Let’s take a look at the exact number Reduce is giving:

```
Quiet@Reduce[1.0609^t == 1.5, t, Reals] // N // FullForm

Equal[t, 6.858618708478698`]
```

So, it looks like Mathematica is solving the exact equation I meant to solve and numericizing the solution at the end.

### Summary of solutions
Here I summarize the solutions I’ve found for this problem:

• 6.8586187084788275 – pen and paper, with Mathematica for the final evaluation.
• 6.858618708478698 – Mathematica solving the exact problem I meant and evaluating to double precision at the end.
• 6.858618708478904 – Mathematica solving the exact problem derived from what I actually asked it and evaluating at the end.

What I find fun is that my simple minded pen and paper solution seems to satisfy the original equation better than the solutions arrived at by more complicated means.  Using MATLAB’s vpa again I plug in the three solutions above, evaluate to 50 digits and see which one is closest to 1.5:

```
>> vpa('1.5 - 1.0609^6.8586187084788275',50)
ans =
-0.00000000000000045535780896732093552784442911868195148156712149736
>> vpa('1.5 - 1.0609^6.858618708478698',50)
ans =
0.000000000000011028236861872639054542278886208515813177349441555
>> vpa('1.5 - 1.0609^6.858618708478904',50)
ans =
-0.0000000000000072391029234017787617507985059241243968693347431197
```
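The same comparison can be reproduced in Python with the standard decimal module standing in for vpa (a sketch; 1.0609 is entered as a string so it is treated as the exact decimal, just as in the vpa calls above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits, like vpa(...,50)

solutions = [
    "6.8586187084788275",  # pen and paper
    "6.858618708478698",   # Reduce on the equation I meant
    "6.858618708478904",   # Reduce on the stored-double equation
]
for t in solutions:
    residual = Decimal("1.5") - Decimal("1.0609") ** Decimal(t)
    print(t, residual)
```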

So, ‘my’ solution is better. Of course, this advantage goes away if I evaluate (Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103])) to 50 decimal places and plug that in.

```
>> sol=vpa('(log(2) - log(3))/(2*(2*log(2) + 2*log(5) - log(103)))',50)
sol =
6.858618708478822364949699852597847078441119153527
>> vpa(1.5 - 1.0609^sol,50)
ans =
0.0000000000000000000000000000000000000014693679385278593849609206715278070972733319459651
```
1. “What I find fun is that my simple minded pen and paper solution seems to satisfy the original equation better than the solutions arrived at by more complicated means.”

To be honest I really find this more worrying than funny.

2. As a general rule, never put floating point or finite precision numbers into a symbolic algorithm like Solve. Symbolic transformations are not necessarily numerically stable. That is, the transformations that Solve uses are safe for infinitely precise numbers, but may not always be safe for finite precision numbers. Solve could use any method it wants to find the solution to the equation and any transformation to get there, so there isn’t a bound on how bad the result could be. It is possible that Solve could use an extremely complicated set of symbolic transformations that are deeply unintuitive, and that these transformations taken together destroy all the precision in the results.

Instead, use NSolve which is meant to handle finite precision input.

3. Important to note is that machine numbers are not inexact per se, but are treated as such under Mathematica’s numerical model. Specifically, they are taken to have a precision of one part in 35184372088832 (== Round[10^Internal`$EqualTolerance]*SetPrecision[$MachineEpsilon, Infinity]), with differences below this level being considered insignificant. Whether the decimal representation is exactly reproducible as a binary floating-point number is not the issue here. Your pencil-and-paper solution isn’t strictly correct under this definition; in fact, you absolutely are solving a “corresponding exact system” in Mathematica’s parlance. Whether or not Mathematica’s model is a good one has been the subject of many heated arguments, so I won’t address this here, but one should certainly be aware that there are significant differences versus other software. If you want to find a numerical solution valid up to the limits of machine precision, FindRoot (with a specified PrecisionGoal) will typically be a better choice than Solve/NSolve or Reduce.

4. Thanks for the info, Robert. I think it would be useful if Reduce’s warning message said that FindRoot would be a better choice.

5. `t /. FindRoot[1.0609^t == 1.5, {t, 6}] // FullForm`
gives
6.858618708478827`
which is almost equal to the pen and paper + Mathematica solution.
