## Playing with a tricky integral in Maple and Mathematica

July 31st, 2013

A colleague recently sent me this issue. Consider the integral of exp(I*(a*x^3 + b*x^2)) over x from -infinity to infinity, where a > 0 and b is real.

Attempting to evaluate this with Mathematica 9 gives 0:

f[a_, b_] := Exp[I*(a*x^3 + b*x^2)];
Integrate[f[a, b], {x, -Infinity, Infinity},
Assumptions -> {a > 0, b \[Element] Reals}]

Out[2]= 0

So, for all valid values of a and b, Mathematica tells us that the resulting integral will be 0.

However, let’s set a=1/3 and b=0 in the function f and ask Mathematica to evaluate that particular integral:

In[3]:= Integrate[f[1/3, 0], {x, -Infinity, Infinity}]

Out[3]= (2*Pi)/(3^(2/3)*Gamma[2/3])

This is definitely not zero as we can see by numerically evaluating the result:

In[5]:=
(2*Pi)/(3^(2/3)*Gamma[2/3])//N

Out[5]= 2.23071

Similarly, if we put a=1/3 and b=1, we get another non-zero result:

In[7]:= Integrate[f[1/3, 1], {x, -Infinity, Infinity}] // InputForm

Out[7]= 2*E^((2*I)/3)*Pi*AiryAi[-1]

In[8]:=
2*E^((2*I)/3)*Pi*AiryAi[-1]//N

Out[8]= 2.64453 + 2.08083 I

We are faced with a situation where Mathematica is disagreeing with itself.  On one hand, it says that the integral is 0 for all a and b but on the other it gives non-zero results for certain combinations of a and b.  Which result do we trust?

One way to proceed would be to use the NIntegrate[] function for the two cases where a and b are explicitly defined.  NIntegrate[] uses completely different algorithms from Integrate[]; in particular, it uses numerical rather than symbolic methods (apart from some symbolic pre-processing).

NIntegrate[f[1/3, 0], {x, -Infinity, Infinity}]

gives 2.23071 + 0. I and

NIntegrate[f[1/3, 1], {x, -Infinity, Infinity}]

gives 2.64453 + 2.08083 I

What we’ve shown here is that evaluating these integrals using numerical methods gives the same result as evaluating using symbolic methods and numericalizing the result.  This gives me some confidence that it’s the general, symbolic evaluation that’s incorrect and hence I can file a bug report with Wolfram Research.
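
As a final cross-check, the same numbers can be reproduced outside of both packages. The following Python sketch is entirely my own (not part of the original calculations, and assuming a > 0 and b real): it rotates the contour onto the rays arg(x) = pi/6 and 5*pi/6, where the exp(I*a*x^3) factor decays like exp(-a*r^3), so ordinary quadrature with SciPy converges.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, gamma

def osc_integral(a, b):
    """Integrate exp(I*(a*x^3 + b*x^2)) over the real line (a > 0, b real).

    The contour is rotated onto the rays arg(x) = pi/6 and 5*pi/6, where
    i*a*x^3 becomes -a*r^3, so the integrand decays rapidly and ordinary
    quadrature converges."""
    g = lambda x: np.exp(1j * (a * x**3 + b * x**2))

    def ray(w):
        # Contribution of the ray x = r*w, r in [0, infinity)
        re, _ = quad(lambda r: (w * g(r * w)).real, 0, np.inf)
        im, _ = quad(lambda r: (w * g(r * w)).imag, 0, np.inf)
        return re + 1j * im

    # The real line deforms to: down the 5*pi/6 ray, then out along the pi/6 ray
    return ray(np.exp(1j * np.pi / 6)) - ray(np.exp(5j * np.pi / 6))

# Compare with the closed forms quoted above
print(osc_integral(1/3, 0), 2 * np.pi / (3**(2/3) * gamma(2/3)))
print(osc_integral(1/3, 1), 2 * np.pi * np.exp(2j/3) * airy(-1)[0])
```

Both printed pairs agree to quadrature accuracy, which is further evidence against the blanket answer of 0.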

Maple 17.01 on the general problem

Since my University has just got a site license for Maple, I wondered what Maple 17.01 would make of the general integral. Using Maple’s 1D input/output we get:

> myint := int(exp(I*(a*x^3+b*x^2)), x = -infinity .. infinity);

myint := (1/12)*3^(1/2)*2^(1/3)*((4/3)*Pi^(5/2)*(((4/27)*I)*b^3/a^2)^(1/3)*exp(((2/27)*I)*b^3/a^2)
*BesselI(-1/3, ((2/27)*I)*b^3/a^2)+((2/3)*I)*Pi^(3/2)*3^(1/2)*b*2^(2/3)
*hypergeom([1/2, 1], [2/3, 4/3],((4/27)*I)*b^3/a^2)/(-I*a)^(2/3)-(8/27)*2^(1/3)*Pi^(5/2)*b^2*
exp(((2/27)*I)*b^3/a^2)*BesselI(1/3, ((2/27)*I)*b^3/a^2)/((-I*a)^(4/3)*(((4/27)*I)*b^3/a^2)^(1/3)))
/(Pi^(3/2)*(-I*a)^(1/3))+(1/12)*3^(1/2)*2^(1/3)*((4/3)*Pi^(5/2)*(((4/27)*I)*b^3/a^2)
^(1/3)*exp(((2/27)*I)*b^3/a^2)*BesselI(-1/3, ((2/27)*I)*b^3/a^2)+((2/3)*I)*Pi^(3/2)*3^(1/2)*b*2^(2/3)
*hypergeom([1/2, 1], [2/3, 4/3], ((4/27)*I)*b^3/a^2)/(I*a)^(2/3)-(8/27)*2^(1/3)*Pi^(5/2)*b^2*
exp(((2/27)*I)*b^3/a^2)*BesselI(1/3, ((2/27)*I)*b^3/a^2)/((I*a)^(4/3)*(((4/27)*I)*b^3/a^2)^(1/3)))
/(Pi^(3/2)*(I*a)^(1/3))

That looks scary! To try to determine if it’s possibly a correct general result, let’s turn this expression into a function and evaluate it for values of a and b we already know the answer to.

>f := unapply(%, a, b):
>result1:=simplify(f(1/3,1));

result1 := (2/27)*3^(1/2)*Pi*exp((2/3)*I)*((-(1/3)*I)^(1/3)*3^(2/3)*(-1)^(1/6)*BesselI(-1/3, (2/3)*I)
+3*(-(1/3)*I)^(2/3)*BesselI(-1/3, (2/3)*I)+3*(-(1/3)*I)^(2/3)*BesselI(1/3, (2/3)*I)-BesselI(1/3, (2/3)*I)
*3^(1/3)*(-1)^(1/3))/(-(1/3)*I)^(2/3)

>evalf(result1);
2.644532853+2.080831872*I

Recall that Mathematica returned 2*E^((2*I)/3)*Pi*AiryAi[-1]=2.64453 + 2.08083 I for this case. The numerical results agree to the default precision reported by both packages so I am reasonably confident that Maple’s general solution is correct.

Not so simple simplification?

I am also confident that Maple’s expression involving Bessel functions is equivalent to Mathematica’s expression involving the AiryAi function. I haven’t figured out, however, how to get Maple to automatically reduce the Bessel functions down to AiryAi. I can attempt to go the other way though. In Maple:

>convert(2*exp((2*I)/3)*Pi*AiryAi(-1),Bessel);

(2/3)*exp((2/3)*I)*Pi*(-1)^(1/6)*(-BesselI(1/3, (2/3)*I)*(-1)^(2/3)+BesselI(-1/3, (2/3)*I))

This is more compact than the Bessel function result I get from Maple’s simplify so I guess that Maple’s simplify function could have done a little better here.
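
I can at least confirm the equivalence numerically. A quick check of my own in Python uses the standard identity Ai(-z) = sqrt(z)/3 * (J(1/3, zeta) + J(-1/3, zeta)) with zeta = (2/3)*z^(3/2) — a J-Bessel form, chosen here because SciPy's real-argument Bessel routines are convenient, rather than the I-Bessel form Maple produced:

```python
import numpy as np
from scipy.special import airy, jv

# Standard identity: Ai(-z) = sqrt(z)/3 * (J_{1/3}(zeta) + J_{-1/3}(zeta)),
# where zeta = (2/3) * z**(3/2).  For z = 1, zeta = 2/3.
z = 1.0
zeta = (2/3) * z**1.5
ai_from_bessel = np.sqrt(z) / 3 * (jv(1/3, zeta) + jv(-1/3, zeta))
ai_direct = airy(-z)[0]          # airy() returns (Ai, Ai', Bi, Bi')

print(ai_direct, ai_from_bessel)  # both ~ 0.535561

# So Mathematica's closed form for the a=1/3, b=1 case becomes:
result = 2 * np.exp(2j/3) * np.pi * ai_from_bessel
print(result)                     # ~ 2.64453 + 2.08083j
```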

Not so general general solution?

Maple’s solution of the general problem should be good for all a and b, right?  Let’s try it with a=1/3, b=0:

>f(1/3,0);
Error, (in BesselI) numeric exception: division by zero

Uh-Oh! So it’s not valid for b=0 then! We know from Mathematica, however, that the solution for this particular case is (2*Pi)/(3^(2/3)*Gamma[2/3])=2.23071. Indeed, if we solve this integral directly in Maple, it agrees with Mathematica

>myint := int(exp(I*(1/3*x^3+0*x^2)), x = -infinity .. infinity);

myint := (2/9)*3^(5/6)*(-1)^(1/6)*Pi/GAMMA(2/3)-(2/9)*3^(5/6)*(-1)^(5/6)*Pi/GAMMA(2/3)

>simplify(myint);
(2/3)*3^(1/3)*Pi/GAMMA(2/3)

>evalf(myint);
2.230707052+0.*I
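
As a small sanity check of my own (needing nothing beyond Python's standard library), Maple's simplified form (2/3)*3^(1/3)*Pi/GAMMA(2/3) and Mathematica's (2*Pi)/(3^(2/3)*Gamma[2/3]) really are the same number, since (2/3)*3^(1/3) = 2*3^(-2/3):

```python
import math

# Maple's simplified result and Mathematica's closed form for a=1/3, b=0
maple_form = (2/3) * 3**(1/3) * math.pi / math.gamma(2/3)
mathematica_form = 2 * math.pi / (3**(2/3) * math.gamma(2/3))

print(maple_form, mathematica_form)  # both ~ 2.2307070518
```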

Going back to the general result that Maple returned: although we can’t evaluate it directly at b=0, we can use limits to see what its value would be at a=1/3, b=0

>simplify(limit(f(1/3, x), x = 0));

(2/9)*Pi*3^(2/3)/((-(1/3)*I)^(1/3)*GAMMA(2/3)*((1/3)*I)^(1/3))

>evalf(%);
2.230707053+0.1348486379e-9*I

The symbolic expression looks different and we’ve picked up an imaginary part that’s possibly numerical noise. Let’s use more numerical precision to see if we can make the imaginary part go away. 100 digits should do it

>evalf[100](%%);

2.230707051824495741427486519543450239771293806030489125938415383976032081571780278667202004477941904
-0.855678513686459467847075286617333072231119978694352241387724335424279116026690601018858453303153701e-100*I

Well, more precision has made the imaginary part smaller but it’s still there. No matter how much precision I use, it’s always there…getting smaller and smaller as I ramp up the level of precision.
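
My reading (an observation of my own, not from either vendor) is that the tiny imaginary part is pure round-off: with principal branches, the cube roots (-I/3)^(1/3) and (I/3)^(1/3) have arguments -Pi/6 and +Pi/6, which cancel exactly, so their product is exactly 3^(-2/3) and the limit expression collapses to Mathematica's real answer 2*Pi/(3^(2/3)*Gamma(2/3)). Python's complex powers also use the principal branch, so this is easy to check:

```python
import cmath, math

# Principal-branch cube roots: (-i/3)^(1/3) = 3^(-1/3)*e^(-i*pi/6) and
# (i/3)^(1/3) = 3^(-1/3)*e^(i*pi/6); the phases cancel in the product,
# leaving exactly 3^(-2/3) (up to floating-point rounding).
prod = (-1j / 3)**(1/3) * (1j / 3)**(1/3)
print(prod)  # ~ 0.4807 + 0j, i.e. 3^(-2/3)

# Maple's limit expression then reduces to Mathematica's closed form:
limit_value = (2/9) * math.pi * 3**(2/3) / (prod * math.gamma(2/3))
print(limit_value)  # ~ 2.2307070518 + 0j
```

At any finite working precision the phase cancellation is inexact, which would explain why evalf always leaves a residual imaginary part that shrinks as the precision is ramped up.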

What’s going on?

All I’m doing here is playing around with the same problem in two packages.  Does anyone have any further insights into some of the issues raised?

## Image Processing Challenge: What does the world look like through my eyes?

July 30th, 2013

I am extremely short sighted and have a contact lens prescription of the order of -10 dioptres or so.  Recently, I was trying to explain to my wife exactly how I see the world when I am not wearing my contact lenses or glasses and the best I could do was ‘VERY blurry!’

This got me thinking.  Would it be possible to code something up that took an input picture and a distance from my eyes and return an image that would show how it looks to my unaided eyes?

Would anyone like to give this a go?  How hard could it be?
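
To get the ball rolling, here is one very crude starting point — entirely my own sketch, not a validated optical model. It approximates defocus with a Gaussian blur whose angular size comes from the rough blur-circle estimate pupil diameter × dioptres of defocus error; a more faithful answer would use a disc-shaped point-spread function, and every constant below (pupil diameter, pixels per degree) is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_blur(image, dioptres, distance_m, pixels_per_degree=60.0,
                 pupil_diameter_m=0.004):
    """Very rough defocus model.

    The angular radius of the blur circle is approximately
    pupil_diameter * |defocus error| (in radians), where the defocus error
    for a myope viewing an object at distance d is |dioptres| - 1/d
    (clamped to zero once the object is inside the far point).  A Gaussian
    of that angular size stands in for the true disc-shaped PSF."""
    error = max(abs(dioptres) - 1.0 / distance_m, 0.0)   # dioptres of defocus
    blur_radius_rad = pupil_diameter_m * error
    sigma_px = np.degrees(blur_radius_rad) * pixels_per_degree
    return gaussian_filter(image, sigma=sigma_px)

# Toy usage: a random "image" viewed from 2 m with a -10 dioptre eye
img = np.random.rand(128, 128)
blurred = defocus_blur(img, dioptres=-10, distance_m=2.0)
```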

## Which journal to publish GPU Papers in?

July 12th, 2013

One of my academic colleagues, Fumie Costen, recently asked members of The University of Manchester GPU Club the following question

‘I am looking for a journal with high impact factor where I can publish our work on GPU for FDTD computations.  Can anyone help?’

So far, we haven’t had any replies so I’m throwing the question open to the world.  Any suggestions?

## Determining your machine name using R

July 11th, 2013

I was recently working with someone who was running thousands of R jobs on our Condor pool and some of them were failing.  As part of the diagnostic process, we needed to determine which machines in the pool were causing the problem.  My solution was to add the line

hostname

to the Bash script that eventually called R and the program he was running. This gives output that looks like this:

badmachine.ourdomain.ac.uk

His solution was to do exactly the same thing in R. To do this, he added the following line to the beginning of his R script

(Sys.info()["nodename"])

which produces output that looks like this:

nodename
"badmachine.ourdomain.ac.uk"

Either way, the job was done.
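
For completeness, the obvious equivalent in Python (my own addition, not part of the original exchange) is a one-liner too:

```python
import socket

# Returns this machine's hostname as a string
print(socket.gethostname())
```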

## A Month of Math Software – June 2013

July 2nd, 2013

Welcome to my monthly round-up of news from the world of mathematical software.  As always, thanks to all contributors.  If you have any mathematical software news (releases, blog articles and so on), feel free to contact me.  For details on how to follow Walking Randomly now that Google Reader has died, see this post.

Numerics for .NET

Faster, faster, FASTER!

Sparsification

• Version 1.0 of TxSSA has been released.  According to the project’s website, “TxSSA is an acronym for Tech-X Corporation Sparse Spectral Approximation. It is a library that has interfaces in C++, C, and MATLAB. It is an implementation of a matrix sparsification algorithm that takes a general real or complex matrix as input and produces a sparse output matrix of the same size. The non-zero entries are chosen to minimize changes to the singular values and singular vectors corresponding to the near null-space. The output matrix is constrained to preserve left and right null-spaces exactly. The sparsity pattern of the output matrix is automatically determined or can be given as input.”

Data and Statistics

• Stata is a commercial statistics package that’s been around since 1985.  It’s now at version 13 and has a bucket load of new functions and improvements.
• A new minor release of IDL (Interactive Data Language) from Exelis Visual Information Solutions has been released.  Details of version 8.2.3 can be found here.

Molten Mathematics

Some Freebies

• SMath Studio is a free Mathcad clone developed by Andrey Ivashov.  Recently, Andrey has been releasing new versions of SMath much more regularly and it is now at version 0.96.4909.  To see what’s been added in June, see the forum posts here and here.
• Rene Grothmann’s very frequently updated Euler Math Toolbox is now at version 22.8.  As always, Rene’s detailed version log tells you what’s new.

Safe and reliable numerical computing

• Sollya 4.0 was released earlier this month and is available for download at: http://sollya.gforge.inria.fr/.  According to the developers, “Sollya is both a tool environment and a library for safe floating-point code development. It offers a convenient way to perform computations with multiple precision interval arithmetic. It is particularly targeted to the automatized implementation of mathematical floating-point libraries (libm).”
• The INTLAB toolbox for MATLAB is now at version 7.1. INTLAB is the MATLAB toolbox for reliable computing and self-validating algorithms.

Pythonic Mathematics

• Version 5.10 of Sage, the free Computer Algebra System based on Python, was released on 17th June.  Martin Albrecht discusses some of his favourite new goodies over at his blog and the full list of new stuff is on the Sage changelog.
• Pandas is Python’s main data analysis library and version 0.12 is out.  Take a look at the newness here.