## Archive for the ‘programming’ Category

A question I get asked a lot is ‘How can I do nonlinear least squares curve fitting in X?’ where X might be MATLAB, Mathematica or a whole host of alternatives. Since this is such a common query, I thought I’d write up how to do it for a very simple problem in several systems that I’m interested in.

This is the Julia version. For other versions, see the list below:

- Simple nonlinear least squares curve fitting in Maple
- Simple nonlinear least squares curve fitting in Mathematica
- Simple nonlinear least squares curve fitting in MATLAB
- Simple nonlinear least squares curve fitting in Python
- Simple nonlinear least squares curve fitting in R

**The problem**

You have the following 10 data points

```
xdata = -2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9
ydata = 0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001
```

and you’d like to fit the function

```
f(x) = p1*cos(p2*x) + p2*sin(p1*x)
```

using nonlinear least squares. Your starting guesses for the parameters are p1 = 1 and p2 = 0.2.

For now, we are primarily interested in the following results:

- The fit parameters
- Sum of squared residuals

Future updates of these posts will show how to get other results such as confidence intervals. Let me know what you are most interested in.

**Julia solution using Optim.jl**

Optim.jl is a free Julia package that contains a suite of optimisation routines written in pure Julia. If you haven’t done so already, you’ll need to install the Optim package:

```julia
Pkg.add("Optim")
```

The solution is almost identical to the example given in the curve fitting demo of the Optim.jl readme file:

```julia
using Optim

model(xdata, p) = p[1]*cos(p[2]*xdata) + p[2]*sin(p[1]*xdata)

xdata = [-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9]
ydata = [0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001]

beta, r, J = curve_fit(model, xdata, ydata, [1.0, 0.2])
# beta = best fit parameters
# r = vector of residuals
# J = estimated Jacobian at solution

@printf("Best fit parameters are: %f and %f", beta[1], beta[2])
@printf("The sum of squares of residuals is %f", sum(r.^2.0))
```

The result is

```
Best fit parameters are: 1.881851 and 0.700230
The sum of squares of residuals is 0.053813
```

**Notes**

I used Julia version 0.2 on 64-bit Windows 7 to run the code in this post.

A question I get asked a lot is ‘How can I do nonlinear least squares curve fitting in X?’ where X might be MATLAB, Mathematica or a whole host of alternatives. Since this is such a common query, I thought I’d write up how to do it for a very simple problem in several systems that I’m interested in.

This is the Python version. For other versions, see the list below:

- Simple nonlinear least squares curve fitting in Julia
- Simple nonlinear least squares curve fitting in Maple
- Simple nonlinear least squares curve fitting in Mathematica
- Simple nonlinear least squares curve fitting in MATLAB
- Simple nonlinear least squares curve fitting in R

**The problem**

You have the following 10 data points

```
xdata = -2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9
ydata = 0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001
```

and you’d like to fit the function

```
f(x) = p1*cos(p2*x) + p2*sin(p1*x)
```

using nonlinear least squares. Your starting guesses for the parameters are p1 = 1 and p2 = 0.2.

For now, we are primarily interested in the following results:

- The fit parameters
- Sum of squared residuals

Future updates of these posts will show how to get other results such as confidence intervals. Let me know what you are most interested in.

**Python solution using scipy**

Here, I use the curve_fit function from scipy

```python
import numpy as np
from scipy.optimize import curve_fit

xdata = np.array([-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9])
ydata = np.array([0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001])

def func(x, p1, p2):
    return p1*np.cos(p2*x) + p2*np.sin(p1*x)

popt, pcov = curve_fit(func, xdata, ydata, p0=(1.0, 0.2))
```

The variable popt contains the fit parameters

array([ 1.88184732, 0.70022901])

We need to do a little more work to get the sum of squared residuals

```python
p1 = popt[0]
p2 = popt[1]
residuals = ydata - func(xdata, p1, p2)
fres = sum(residuals**2)
```

which gives

0.053812696547933969
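As a taster for the promised follow-up on confidence intervals, here is a minimal sketch (my addition, not part of the original post): approximate one-sigma parameter errors can be read off the covariance matrix `pcov` that `curve_fit` already returns.

```python
import numpy as np
from scipy.optimize import curve_fit

xdata = np.array([-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9])
ydata = np.array([0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143,
                  1.91091, 0.919576, -0.730975, -1.42001])

def func(x, p1, p2):
    return p1*np.cos(p2*x) + p2*np.sin(p1*x)

popt, pcov = curve_fit(func, xdata, ydata, p0=(1.0, 0.2))

# Approximate one-sigma standard errors of the fit parameters:
# the square roots of the diagonal entries of the covariance matrix.
perr = np.sqrt(np.diag(pcov))
print(popt)  # roughly [1.88185, 0.70023]
print(perr)
```

Multiplying `perr` by the appropriate t-value would give confidence intervals comparable to those Mathematica reports, but that is a story for another post.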

- I’ve put this in an IPython notebook which can be downloaded here. There is also a pdf version of the notebook.

A question I get asked a lot is ‘How can I do nonlinear least squares curve fitting in X?’ where X might be MATLAB, Mathematica or a whole host of alternatives. Since this is such a common query, I thought I’d write up how to do it for a very simple problem in several systems that I’m interested in.

This is the Mathematica version. For other versions, see the list below:

- Simple nonlinear least squares curve fitting in Maple
- Simple nonlinear least squares curve fitting in Julia
- Simple nonlinear least squares curve fitting in MATLAB
- Simple nonlinear least squares curve fitting in Python
- Simple nonlinear least squares curve fitting in R

**The problem**

You have the following 10 data points

```
xdata = -2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9
ydata = 0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001
```

and you’d like to fit the function

```
f(x) = p1*cos(p2*x) + p2*sin(p1*x)
```

using nonlinear least squares. Your starting guesses for the parameters are p1 = 1 and p2 = 0.2.

For now, we are primarily interested in the following results:

- The fit parameters
- Sum of squared residuals

Future updates of these posts will show how to get other results such as confidence intervals. Let me know what you are most interested in.

**Mathematica Solution using FindFit**

FindFit is the basic nonlinear curve fitting routine in Mathematica

```mathematica
xdata = {-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9};
ydata = {0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001};

(*Mathematica likes to have the data in the form {{x1,y1},{x2,y2},..}*)
data = Partition[Riffle[xdata, ydata], 2];

FindFit[data, p1*Cos[p2 x] + p2*Sin[p1 x], {{p1, 1}, {p2, 0.2}}, x]

Out[4]= {p1 -> 1.88185, p2 -> 0.70023}
```

**Mathematica Solution using NonlinearModelFit**

You can get a lot more information about the fit using the NonlinearModelFit function:

```mathematica
(*Set up data as before*)
xdata = {-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9};
ydata = {0.699369, 0.700462, 0.695354, 1.03905, 1.97389, 2.41143, 1.91091, 0.919576, -0.730975, -1.42001};
data = Partition[Riffle[xdata, ydata], 2];

(*Create the NonlinearModelFit object*)
nlm = NonlinearModelFit[data, p1*Cos[p2 x] + p2*Sin[p1 x], {{p1, 1}, {p2, 0.2}}, x];
```

The NonlinearModelFit object contains many properties that may be useful to us. Here’s how to list them all:

```mathematica
nlm["Properties"]

Out[10]= {"AdjustedRSquared", "AIC", "AICc", "ANOVATable",
 "ANOVATableDegreesOfFreedom", "ANOVATableEntries", "ANOVATableMeanSquares",
 "ANOVATableSumsOfSquares", "BestFit", "BestFitParameters", "BIC",
 "CorrelationMatrix", "CovarianceMatrix", "CurvatureConfidenceRegion", "Data",
 "EstimatedVariance", "FitCurvatureTable", "FitCurvatureTableEntries",
 "FitResiduals", "Function", "HatDiagonal", "MaxIntrinsicCurvature",
 "MaxParameterEffectsCurvature", "MeanPredictionBands",
 "MeanPredictionConfidenceIntervals", "MeanPredictionConfidenceIntervalTable",
 "MeanPredictionConfidenceIntervalTableEntries", "MeanPredictionErrors",
 "ParameterBias", "ParameterConfidenceIntervals",
 "ParameterConfidenceIntervalTable",
 "ParameterConfidenceIntervalTableEntries", "ParameterConfidenceRegion",
 "ParameterErrors", "ParameterPValues", "ParameterTable",
 "ParameterTableEntries", "ParameterTStatistics", "PredictedResponse",
 "Properties", "Response", "RSquared", "SingleDeletionVariances",
 "SinglePredictionBands", "SinglePredictionConfidenceIntervals",
 "SinglePredictionConfidenceIntervalTable",
 "SinglePredictionConfidenceIntervalTableEntries", "SinglePredictionErrors",
 "StandardizedResiduals", "StudentizedResiduals"}
```

Let’s extract the fit parameters, 95% confidence intervals and residuals:

```mathematica
{params, confidenceInt, res} =
 nlm[{"BestFitParameters", "ParameterConfidenceIntervals", "FitResiduals"}]

Out[22]= {{p1 -> 1.88185, p2 -> 0.70023},
 {{1.8186, 1.9451}, {0.679124, 0.721336}},
 {-0.0276906, -0.0322944, -0.0102488, 0.0566244, 0.0920392,
  0.0976307, 0.114035, 0.109334, 0.0287154, -0.0700442}}
```

The parameters are given as replacement rules. Here, we convert them to pure numbers

```mathematica
{p1, p2} = {p1, p2} /. params

Out[38]= {1.88185, 0.70023}
```

Although only a few decimal places are shown, p1 and p2 are stored in full double precision. You can see this by converting to InputForm

```mathematica
InputForm[{p1, p2}]

Out[43]//InputForm= {1.8818508498053645, 0.7002298171759191}
```

Similarly, let’s look at the 95% confidence interval, extracted earlier, in full precision

```mathematica
confidenceInt // InputForm

Out[44]//InputForm= {{1.8185969887307214, 1.9451047108800077},
 {0.6791239458086734, 0.7213356885431649}}
```

Calculate the sum of squared residuals

```mathematica
resnorm = Total[res^2]

Out[45]= 0.0538127
```

**Notes**

I used Mathematica 9 on 64-bit Windows 7 to perform these calculations.

One of my favourite parts of my job at The University of Manchester is the code optimisation service that my team provides to researchers there. We take code written in languages such as MATLAB, Python, Mathematica and R and attempt (usually successfully) to make them go faster. It’s sort of a cross between training and consultancy, is a lot of fun and we can help a relatively large number of people in a short time. It also turns out to be very useful to the researchers as some of my recent testimonials demonstrate.

Other researchers, however, need something more. They already have a nice piece of code with several users and a selection of papers. They also have a bucket-load of ideas about how to turn this nice code into something amazing. They have all this good stuff and yet they find that they are struggling to get their code to the next level. What these people need is some quality time with a team of research software engineers.

Enter the Software Sustainability Institute (SSI), an organisation that I have been fortunate enough to have a fellowship with throughout 2013. These guys are software-development ninjas, have extensive experience with working with academic researchers and, right now, they have an open call for projects. It’s free to apply and, if your application is successful, all of their services will be provided for free. If you’d like an idea of the sort of projects they work on, take a look at the extensive list of past projects.

So, if you are a UK-based researcher who has a software problem, and no one else can help, maybe you can hire* the SSI team.

*for free!


From the wikipedia page on Division by Zero: *“The IEEE 754 standard specifies that every floating point arithmetic operation, including division by zero, has a well-defined result”.*

MATLAB supports this fully:

```
>> 1/0
ans = Inf
>> 1/(-0)
ans = -Inf
>> 0/0
ans = NaN
```

Julia is almost there, but doesn’t handle the signed zero correctly (this is using Version 0.2.0-prerelease+3768 on Windows):

```
julia> 1/0
Inf

julia> 1/(-0)
Inf

julia> 0/0
NaN
```

Python throws an exception (Python 2.7.5 using the IPython shell):

```
In [4]: 1.0/0.0
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
----> 1 1.0/0.0

ZeroDivisionError: float division by zero

In [5]: 1.0/(-0.0)
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
----> 1 1.0/(-0.0)

ZeroDivisionError: float division by zero

In [6]: 0.0/0.0
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
----> 1 0.0/0.0

ZeroDivisionError: float division by zero
```
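For what it’s worth, NumPy’s float64 type does follow IEEE 754 here, emitting a RuntimeWarning rather than raising an exception. A minimal sketch (my aside, not from the original post):

```python
import numpy as np

# NumPy follows IEEE 754 for floating point division; it warns rather
# than raises. np.errstate silences those warnings for this block.
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / np.float64(0.0))    # inf
    print(np.float64(1.0) / np.float64(-0.0))   # -inf
    print(np.float64(0.0) / np.float64(0.0))    # nan
```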

**Update:**

Julia **does** do things correctly, provided I make it clear that I am working with floating point numbers:

```
julia> 1.0/0.0
Inf

julia> 1.0/(-0.0)
-Inf

julia> 0.0/0.0
NaN
```

I support scientific applications at The University of Manchester (see my LinkedIn profile if you’re interested in the details) and part of my job involves working on code written by researchers in a variety of languages. When I say ‘variety’ I really mean it – MATLAB, Mathematica, Python, C, Fortran, Julia, Maple, Visual Basic and PowerShell are some languages I’ve worked with this month for instance.

Having to juggle the semantics of so many languages in my head sometimes leads to momentary confusion when working on someone’s program. For example, I’ve been doing a lot of Python work recently but this morning I was hacking away on someone’s MATLAB code. Buried deep within the program, it would have been very sensible to be able to do the equivalent of this:

```
>> a=rand(3,3)
a =
    0.8147    0.9134    0.2785
    0.9058    0.6324    0.5469
    0.1270    0.0975    0.9575

>> [x,y,z]=a(:,1)
Indexing cannot yield multiple results.
```

That is, I want to be able to take the first column of the matrix a and broadcast it out to the variables x, y and z. The code I’m working on uses MUCH bigger matrices and this kind of assignment is occasionally useful since the variable names x, y, z have slightly more meaning than a(1,1), a(2,1), a(3,1).

The only concise way I’ve been able to do something like this using native MATLAB commands is to first convert to a cell. In MATLAB 2013a for instance:

```
>> temp=num2cell(a(:,1));
>> [x y z] = temp{:}
x =
    0.8147
y =
    0.9058
z =
    0.1270
```

This works but I think it looks ugly and introduces conversion overheads. The problem I had for a short time is that I subconsciously expected multiple assignment to ‘Just Work’ in MATLAB since the concept makes sense in several other languages I use regularly.

Python, for example, supports it:

```python
from pylab import rand
a = rand(3, 3)
x, y, z = a[:, 0]
```

As does Mathematica:

```mathematica
a = RandomReal[1, {3, 3}]
{x, y, z} = a[[All, 1]]
```

And Julia:

```julia
a = rand(3, 3);
(x, y, z) = a[:, 1]
```

I’ll admit that I don’t often need this construct in MATLAB but it would definitely be occasionally useful. I wonder what other opinions there are out there? Do you think multiple assignment is useful (in any language)?

While on the train to work this morning, I wondered which English words have the most anagrams that are also valid English words. So, I knocked up few lines of Mathematica code and came up with 4 sets of 7:

```
{{"ates", "east", "eats", "etas", "sate", "seat", "teas"},
 {"pares", "parse", "pears", "rapes", "reaps", "spare", "spear"},
 {"capers", "crapes", "pacers", "parsec", "recaps", "scrape", "spacer"},
 {"carets", "caster", "caters", "crates", "reacts", "recast", "traces"}}
```

So, according to my program (and Mathematica’s dictionary), the best you can do is 7. I’m not going to post the code until later because I don’t want to influence the challenge which is **‘Write a program in your language of choice that queries a dictionary to find which English words have the most anagrams that are also valid English words.’**

While working on someone’s MATLAB code today there came a point when it was necessary to generate a vector of powers. For example, [a a^2 a^3 ... a^10000] where a=0.9999:

```matlab
a=0.9999;
y=a.^(1:10000);
```

This isn’t the only way one could form such a vector and I was curious whether or not an alternative method might be faster. On my current machine we have:

```
>> tic;y=a.^(1:10000);toc
Elapsed time is 0.001526 seconds.
>> tic;y=a.^(1:10000);toc
Elapsed time is 0.001529 seconds.
>> tic;y=a.^(1:10000);toc
Elapsed time is 0.001716 seconds.
```

Let’s look at the last result in the vector y

```
>> y(end)
ans =
   0.367861046432970
```

So, 0.0015-ish seconds to beat.

```
>> tic;y1=cumprod(ones(1,10000)*a);toc
Elapsed time is 0.000075 seconds.
>> tic;y1=cumprod(ones(1,10000)*a);toc
Elapsed time is 0.000075 seconds.
>> tic;y1=cumprod(ones(1,10000)*a);toc
Elapsed time is 0.000075 seconds.
```

Soundly beaten! More than a factor of 20, in fact. Let’s check out that last result:

```
>> y1(end)
ans =
   0.367861046432969
```

Only a difference in the 15th decimal place, so I’m happy with that. What I’m wondering now, however, is will my faster method ever cause me grief?

This is only an academic exercise since this is not exactly a hot spot in the code!
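The same comparison is easy to sketch outside MATLAB too. Here is a rough NumPy port of the two methods above (my translation, not from the original post), which shows the cumulative-product trick agreeing with direct exponentiation to roughly machine precision:

```python
import numpy as np

a = 0.9999
n = 10000

# Direct powers: a, a^2, ..., a^n
y_pow = a ** np.arange(1, n + 1)

# Repeated multiplication via a cumulative product.
# Each multiply contributes at most one rounding error, so the
# accumulated discrepancy stays tiny for this well-behaved case.
y_cum = np.cumprod(np.full(n, a))

print(y_pow[-1])                       # roughly 0.3678610464...
print(np.max(np.abs(y_pow - y_cum)))   # difference near machine precision
```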

In common with many higher educational establishments, the University I work for has site licenses for a wide variety of scientific software applications such as Labview, MATLAB, Mathematica, NAG, Maple and Mathcad, a great boon to the students and researchers who study and work there. The computational education of a typical STEM undergraduate will include exposure to at least some of these systems along with traditional programming languages such as Java, C or Fortran and maybe even a little Excel when times are particularly bad!

Some argue that such exposure to multiple computational systems is a good thing while others argue that it leads to confusion and a ‘jack of all trades, master of none’ situation. Those who take the latter viewpoint tend to want to standardize on one system or other depending on personal preferences and expertise.

MATLAB, Python, Fortran and Mathematica are a few systems I’ve seen put forward over the years with the idea being that students will learn the basics of one system in their first few weeks and then the entire subject curriculum will be interwoven with these computational skills. In this way, students can use their computational skills as an aid to deeper subject understanding without getting bogged down with the technical details of several different computational systems. As you might expect, software vendors are extremely keen on this idea and will happily parachute in a few experts to help universities with curriculum development for free since this will possibly lead to their system being adopted.

Maybe we’ll end up with electrical engineers who’ve only ever seen Labview, mathematicians who’ve only ever used Maple, mechanical engineers who’ve only ever used MATLAB and economists who can only use Excel. While the computational framework(s) used to teach these subjects are less important than the teaching of the subject itself, I firmly believe that **part of a well-rounded, numerate education should include exposure to several computational systems** and such software mono-cultures should be avoided at all costs.

Part of the reason for writing this post is to ask what you think so comments are very welcome.

I was recently chatting to a research group who were considering moving to Python from MATLAB for some of their research software output. One team member was very worried about Python’s use of indentation to denote blocks of code and wondered if braces would ever find their way into the language. Another team member pointed out that this was extremely unlikely and invited us to attempt to import braces from the __future__ module.

```
>>> from __future__ import braces
  File "<stdin>", line 1
SyntaxError: not a chance
```