Archive for June, 2011

June 28th, 2011

I saw a great tweet this morning from Marcus du Sautoy, who declared that today, June 28th, is a perfect day because both 6 and 28 are perfect numbers.  This, combined with the fact that it is very sunny in Manchester right now, put me in a great mood and I gave my colleagues a quick maths lesson to try to explain why I was so happy.

“It’s not a perfect year though, is it?” declared one of my colleagues.  Some people are never happy and she’s going to have to wait over 6000 years before her definition of a perfect day is fulfilled.  The date of this truly perfect day?  28th June 8128.
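
For anyone who fancies checking the arithmetic, here’s a quick (and deliberately naive) MATLAB sketch that finds the perfect numbers below 10,000 – a number is perfect when it equals the sum of its proper divisors:

isPerfect = @(n) sum(find(mod(n, 1:n-1) == 0)) == n;   % proper divisors of n are the k in 1:n-1 with mod(n,k)==0
perfectNumbers = find(arrayfun(isPerfect, 1:10000))     % returns 6, 28, 496 and 8128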

Update: Someone just emailed me to say that 28th June is Tau Day too!

June 26th, 2011

Back in the good old days when I was a freshly minted postgraduate student I had big plans – in short, I was going to change the world.  Along with a couple of my friends I was going to revolutionize the field I was working in, win the Nobel prize and transform the way science and mathematics is taught at University.  Fast forward four years and it pains me to say that my actual achievements fell rather short of these lofty ideals.  I considered myself lucky simply to pass my PhD and land a job that didn’t involve querying members of the public on their preferences regarding potato-based products.  The four subjects of Laura Snyder’s latest book, The Philosophical Breakfast Club, had broadly similar aims to my younger self but they actually delivered the goods, and they did so in spades.

In this sweeping history of nineteenth-century science, Snyder gives us not one biography but four — those of Charles Babbage, John Herschel, William Whewell and Richard Jones.  You may not have heard of all of them but I’d be surprised if you didn’t know of some of their work.  Between them they invented computing and modern economics, produced the most detailed astronomical maps of their age, co-invented photography, made important advances in tidology and coined the term ‘scientist’ (among many other neologisms) — and those are just the headlines!  Under-achievers they were not.

These four men met while studying at Cambridge University way back in 1812, where they held weekly meetings which they called The Philosophical Breakfast Club.  They took a look at how science was practiced in their day, found it wanting and decided to do something about it.  Remarkably, they succeeded!

I found Snyder’s combination of biography, history and science to be utterly compelling…so much so that while I was reading it, my beloved iPad stayed at home, lonely and forgotten, during my daily commute.  This is no dry treatise on nineteenth-century science; instead it is a living, breathing page-turner about a group of very colourful individuals who lived in a time when science was done rather differently from how it is practiced today.  This was a time when ‘computer’ meant ‘a person who was good at arithmetic’ and professors would share afternoon champagne with their students after giving them advice.  Who would have thought that a group of nineteenth-century geeks could form the basis of one of the best books I’ve read all year?

June 18th, 2011

Over at his fantastic new blog, Playing with Mathematica, Sol Lederman shared some code that produced the following figure.

Sol's image

Here’s Sol’s code with an AbsoluteTiming command thrown in.

f[x_, y_] := Module[{},
  If[
   Sin[Min[x*Sin[y], y*Sin[x]]] >
    Cos[Max[x*Cos[y],
       y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
      6400000 + (12 - x - y)/30, 1, 0]
  ]

AbsoluteTiming[
 \[Delta] = 0.02;
 range = 11;
 xyPoints =
  Table[{x, y}, {y, 0, range, \[Delta]}, {x, 0, range, \[Delta]}];
 image = Map[f @@ # &, xyPoints, {2}];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}], 135 Degree]

This took 8.02 seconds on the laptop I am currently working on (Windows 7, dual-core AMD Phenom II N620 at 2.8GHz). Note that I am only measuring how long the calculation itself took and am ignoring the time taken to render the image and define the function.

Compiled functions make Mathematica code go faster

Mathematica has a Compile function which does exactly what you’d expect…it produces a compiled version of the function you give it (if it can!). Sol’s function gave it no problems at all.

f = Compile[{{x, _Real}, {y, _Real}}, If[
    Sin[Min[x*Sin[y], y*Sin[x]]] >
     Cos[Max[x*Cos[y],
        y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
       6400000 + (12 - x - y)/30, 1, 0]
   ];

AbsoluteTiming[
 \[Delta] = 0.02;
 range = 11;
 xyPoints =
  Table[{x, y}, {y, 0, range, \[Delta]}, {x, 0, range, \[Delta]}];
 image = Map[f @@ # &, xyPoints, {2}];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}],
 135 Degree]

This simple change takes computation time down from 8.02 seconds to 1.23 seconds, which is a 6.5-times speed-up for hardly any extra coding work. Not too shabby!

Switch to C code to get it even faster

I’m not done yet though! By default the Compile command produces code for the so-called Mathematica Virtual Machine but recent versions of Mathematica allow us to go even further.

Install Visual Studio Express 2010 (and the Windows 7.1 SDK if you are running 64-bit Windows) and you can ask Mathematica to convert the function to low-level C code, compile it and produce a function object linked to the resulting compiled code. It sounds complicated but it’s a snap to actually do. Just add

CompilationTarget -> "C"

to the Compile command.

f = Compile[{{x, _Real}, {y, _Real}},
   If[Sin[Min[x*Sin[y], y*Sin[x]]] >
     Cos[Max[x*Cos[y],
        y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
       6400000 + (12 - x - y)/30, 1, 0]
   , CompilationTarget -> "C"
   ];

AbsoluteTiming[\[Delta] = 0.02;
 range = 11;
 xyPoints =
  Table[{x, y}, {y, 0, range, \[Delta]}, {x, 0, range, \[Delta]}];
 image = Map[f @@ # &, xyPoints, {2}];]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}],
 135 Degree]

On my machine this takes calculation time down to 0.89 seconds, which is 9 times faster than the original.

Making the compiled function listable

The current compiled function takes just one x,y pair and returns a result.

In[8]:= f[1, 2]

Out[8]= 1

It can’t directly accept a list of x values and a list of y values. For example, for the two points (1,2) and (10,20) I’d like to be able to do f[{1, 10}, {2, 20}] and get the results {1,1}. However, what I end up with is an error:

f[{1, 10}, {2, 20}]

CompiledFunction::cfsa: Argument {1,10} at position 1 should be a machine-size real number. >>

To fix this I need to make my compiled function listable, which is as easy as adding

RuntimeAttributes -> {Listable}

to the function definition.

f = Compile[{{x, _Real}, {y, _Real}},
   If[Sin[Min[x*Sin[y], y*Sin[x]]] >
     Cos[Max[x*Cos[y],
        y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
       6400000 + (12 - x - y)/30, 1, 0]
   , CompilationTarget -> "C", RuntimeAttributes -> {Listable}
   ];

So now I can pass the entire array to this compiled function at once. No need for Map.

AbsoluteTiming[
 \[Delta] = 0.02;
 range = 11;
 xpoints = Table[x, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 ypoints = Table[y, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 image = f[xpoints, ypoints];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}],
 135 Degree]

On my machine this gets calculation time down to 0.28 seconds, a whopping 28.5 times faster than the original. In fact, rendering time is now becoming much more of an issue than calculation time!

Parallel anyone?

Simply by adding

Parallelization -> True

to the Compile command I can parallelise the code using threads. Since I have a dual-core machine, this might be a good thing to do. Let’s take a look.

f = Compile[{{x, _Real}, {y, _Real}},
   If[
    Sin[Min[x*Sin[y], y*Sin[x]]] >
     Cos[Max[x*Cos[y],
        y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
       6400000 + (12 - x - y)/30, 1, 0]
   , RuntimeAttributes -> {Listable}, CompilationTarget -> "C",
   Parallelization -> True];

AbsoluteTiming[
 \[Delta] = 0.02;
 range = 11;
 xpoints = Table[x, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 ypoints = Table[y, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 image = f[xpoints, ypoints];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}],
 135 Degree]

The first time I ran this it was SLOWER than the non-threaded version, coming in at 0.33 seconds. Subsequent runs varied and occasionally got as low as 0.244 seconds, which is only a few hundredths of a second faster than the serial listable version.

If I make the problem bigger, however, by decreasing the step size Delta, then we start to see the benefit of parallelisation.

AbsoluteTiming[
 \[Delta] = 0.01;
 range = 11;
 xpoints = Table[x, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 ypoints = Table[y, {x, 0, range, \[Delta]}, {y, 0, range, \[Delta]}];
 image = f[xpoints, ypoints];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}],
 135 Degree]

The above calculation (sans rendering) took 0.988 seconds using a parallelised version of f and 1.24 seconds using a serial version. Rendering took significantly longer! As a comparison, let’s put a Delta of 0.01 in the original code:

f[x_, y_] := Module[{},
  If[
   Sin[Min[x*Sin[y], y*Sin[x]]] >
    Cos[Max[x*Cos[y],
       y*Cos[x]]] + (((2 (x - y)^2 + (x + y - 6)^2)/40)^3)/
      6400000 + (12 - x - y)/30, 1, 0]
  ]

AbsoluteTiming[
 \[Delta] = 0.01;
 range = 11;
 xyPoints =
  Table[{x, y}, {y, 0, range, \[Delta]}, {x, 0, range, \[Delta]}];
 image = Map[f @@ # &, xyPoints, {2}];
 ]
Rotate[ArrayPlot[image, ColorRules -> {0 -> White, 1 -> Black}], 135 Degree]

The calculation (again, ignoring rendering time) took 32.56 seconds, so our C-compiled, parallel version is almost 33 times faster!

Summary

  • The Compile function can make your code run significantly faster by compiling it for the Mathematica Virtual Machine (MVM).  Note that not every function is suitable for compilation.
  • If you have a C compiler installed on your machine then you can switch from the MVM to compiled C code using a single option statement.  The resulting code is even faster.
  • Making your functions listable can increase performance.
  • Parallelising your compiled function is easy and can lead to even more speed but only if your problem is of a suitable size.
  • Sol Lederman has a very cool Mathematica blog – check it out!  The code that inspired this blog post originated there.

June 16th, 2011

Every time there is a new MATLAB release I take a look to see which new features interest me the most and share them with the world.  If you find this article interesting then you may also enjoy similar articles on 2010b and 2010a.

Simpler random number control

MATLAB 2011a introduces the function rng, which allows you to control random number generation much more easily.  For example, in older versions of MATLAB you would have to do the following to reseed the default random number stream to something based upon the system time.

RandStream.setDefaultStream(RandStream('mt19937ar','seed',sum(100*clock)));

In MATLAB 2011a you can achieve something similar with

rng shuffle
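
The new function also makes reproducible runs trivial. Here’s a minimal sketch of the idea (the seed value is arbitrary):

rng(1234)        % seed the default generator
a = rand(1,5);
rng(1234)        % reset to the same seed...
b = rand(1,5);
isequal(a, b)    % ...and you get exactly the same numbers back (ans = 1)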

Faster Functions

I love it when The MathWorks improve the performance of some of their functions because you can guarantee that, in an organisation as large as the one I work for, there will always be someone who’ll be able to say ‘Wow! I switched to the latest version of MATLAB and my code runs faster.’  All of the following timings were performed on a 3GHz quad-core machine running Ubuntu Linux with the CPU frequency scaling turned up to maximum for all 4 cores.  In all cases the command was run 5 times and an average taken.  Some of the faster functions include conv, conv2, qz, complex eig and svd. The speed-up on svd is astonishing!

a=rand(1,100000);
b=rand(1,100000);
tic;conv(a,b);toc

MATLAB 2010a: 3.31 seconds
MATLAB 2011a: 1.56 seconds

a=rand(1000,1000);
b=rand(1000,1000);
tic;q=qz(a,b);toc

MATLAB 2010a: 36.67 seconds
MATLAB 2011a: 22.87 seconds

a=rand(1000,1000);
tic;[U,S,V] = svd(a);toc

MATLAB 2010a: 9.21 seconds
MATLAB 2011a: 0.7114 seconds

Symbolic toolbox gets beefed up

Ever since its introduction back in MATLAB 2008b, The MathWorks have been steadily improving the MuPAD-based symbolic toolbox.  Pretty much all of the integration failures that I and my readers identified back then have been fixed, for example.  MATLAB 2011a sees several new improvements but I’d like to focus on improvements for non-algebraic equations.

Take this system of equations

solve('10*cos(a)+5*cos(b)=x', '10*sin(a)+5*sin(b)=y', 'a','b')

MATLAB 2011a finds the (extremely complicated) symbolic solution whereas MATLAB 2010b just gave up.

Here’s another one

syms an1 an2;
eq1 = sym('4*cos(an1) + 3*cos(an1+an2) = 6');
eq2 = sym('4*sin(an1) + 3*sin(an1+an2) = 2');
eq3 = solve(eq1,eq2);

MATLAB 2010b only finds one solution set and it’s approximate

>> eq3.an1
ans =
-0.057562921169951811658913433179187

>> eq3.an2
ans =
0.89566479385786497202226542634536

MATLAB 2011a, on the other hand, finds two solutions and they are exact

>> eq3.an1

ans =
 2*atan((3*39^(1/2))/95 + 16/95)
 2*atan(16/95 - (3*39^(1/2))/95)

>> eq3.an2

ans =
 -2*atan(39^(1/2)/13)
  2*atan(39^(1/2)/13)

MATLAB Compiler has improved parallel support

Lifted directly from the MATLAB documentation:

MATLAB Compiler generated standalone executables and libraries from parallel applications can now launch up to eight local workers without requiring MATLAB® Distributed Computing Server™ software.

Amen to that!
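
In practical terms, the kind of thing you can now compile with mcc is sketched below (using the matlabpool/parfor syntax of the day); the function name and worker count are made up, so treat it as an illustration rather than a tested deployment recipe.

function deployed_parfor_demo
% Toy function that could be built into a standalone executable with:
%   mcc -m deployed_parfor_demo
matlabpool open 4                    % launch local workers inside the standalone app
s = zeros(1, 500);
parfor k = 1:500
    s(k) = sum(svd(rand(100)));      % some embarrassingly parallel work
end
fprintf('Total: %f\n', sum(s));
matlabpool close
end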

GPU Support has been beefed up in the parallel computing toolbox

A load of new functions now support GPUArrays.

cat
colon
conv
conv2
cumsum
cumprod
eps
filter
filter2
horzcat
meshgrid
ndgrid
plot
subsasgn
subsindex
subsref
vertcat

You can also index directly into GPUArrays now, and the amount of MATLAB code supported by arrayfun for GPUArrays has been increased to include the following (there’s a small usage sketch after the full list below).

&, |, ~, &&, ||,
while, if, else, elseif, for, return, break, continue, eps

This brings the full list of MATLAB functions and operators supported by the GPU version of arrayfun to

abs
acos
acosh
acot
acoth
acsc
acsch
asec
asech
asin
asinh
atan
atan2
atanh
bitand
bitcmp
bitor
bitshift
bitxor
ceil
complex
conj
cos
cosh
cot
coth
csc
csch
double
eps
erf
erfc
erfcinv
erfcx
erfinv
exp
expm1
false
fix
floor
gamma
gammaln
hypot
imag
Inf
int32
isfinite
isinf
isnan
log
log2
log10
log1p
logical
max
min
mod
NaN
pi
real
reallog
realpow
realsqrt
rem
round
sec
sech
sign
sin
single
sinh
sqrt
tan
tanh
true
uint32
+
-
.*
./
.\
.^
==
~=
<
<=
>
>=
&
|
~
&&
||

Scalar expansion versions of the following:

*
/
\
^

Branching instructions:

break
continue
else
elseif
for
if
return
while
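
As a rough illustration of the sort of element-wise function this lets you run on the GPU (assuming you have a supported CUDA card and the Parallel Computing Toolbox), consider:

A = gpuArray(rand(2000));                                % move the data onto the GPU
B = gpuArray(rand(2000));
C = arrayfun(@(x, y) hypot(x, y) + exp(-x.*y), A, B);   % compiled to a GPU kernel and applied element-wise
result = gather(C);                                      % bring the answer back to the CPU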

The Parallel Computing Toolbox is not the only game in town for GPU support in MATLAB.  One alternative is Jacket by AccelerEyes and they have put up a comparison between the PCT and Jacket.  At the time of writing it compares against 2011a.

More information about GPU support in various mathematical software packages can be found here.

Toolbox mergers and acquisitions

There have been several license-related changes in this version of MATLAB, comprising 2 new products, 4 mergers and one name change.  Sadly, none of my toolbox-merging suggestions have been implemented but let’s take a closer look at what has been done.

  • The Communications Blockset and Communications Toolbox have merged into what’s now called the Communications System Toolbox. This new product requires another new product as a prerequisite – the DSP System Toolbox.
  • The DSP System Toolbox isn’t completely new, however, since it was formed out of a merger between the Filter Design Toolbox and Signal Processing Blockset.
  • Stateflow Coder and Real-Time Workshop have combined their powers to form the new Simulink Coder which depends upon the new MATLAB Coder.
  • The new Embedded Coder has been formed from the merging of no less than 3 old products: Real-Time Workshop Embedded Coder, Target Support Package, and Embedded IDE Link. This new product also requires the new MATLAB Coder.
  • MATLAB Coder is totally new and according to The MathWorks’ blurb it “generates standalone C and C++ code from MATLAB® code. The generated source code is portable and readable.”  I’m looking forward to trying that out.
  • Next up is what seems to be little more than a renaming exercise, since the Video and Image Processing Blockset has been renamed the Computer Vision System Toolbox.

Personally, few of these changes affect me but professionally they do, since I have users of many of these toolboxes.  An original set of 9 toolboxes has been rationalized into 5 (4 from mergers plus the new MATLAB Coder) and I do like it when the number of MathWorks toolboxes goes down.  To counter this, there is another new product called the Phased Array System Toolbox.

So, that rounds up what was important for me in MATLAB 2011a.  What did you like/dislike about it?

June 15th, 2011

I needed to install LabVIEW 2010 onto an Ubuntu Linux machine but when I inserted the DVD nothing happened.  So, I tried to manually mount it from the command line in the usual way but that didn’t work either. It turns out that the DVD isn’t formatted as iso9660 but as hfsplus. The following incantations worked for me:

sudo mount -t hfsplus /dev/sr0 /media/cdrom0 -o loop
sudo /media/cdrom0/Linux/labview/INSTALL

The installer soon became upset and gave the following error message

/media/cdrom0/Linux/labview/bin/rpmq: error while loading shared libraries: libbz2.so.1: 
cannot open shared object file: No such file or directory

This was fixed with (original source here)

 cd /usr/lib32
 sudo ln -s libbz2.so.1.0 libbz2.so.1
 sudo ldconfig

June 13th, 2011

When installing MATLAB 2011a on Linux you may encounter a huge error message that begins with

Preparing installation files ...
Installing ...
Exception in thread "main" com.google.inject.ProvisionException: Guice provision errors:

1) Error in custom provider, java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
  at com.mathworks.wizard.WizardModule.provideDisplayProperties(WizardModule.java:61)
  while locating com.mathworks.instutil.DisplayProperties
  at com.mathworks.wizard.ui.components.ComponentsModule.providePaintStrategy(ComponentsModule.java:72)
  while locating com.mathworks.wizard.ui.components.PaintStrategy
    for parameter 4 at com.mathworks.wizard.ui.components.SwingComponentFactoryImpl.(SwingComponentFactoryImpl.java:109)
  while locating com.mathworks.wizard.ui.components.SwingComponentFactoryImpl
  while locating com.mathworks.wizard.ui.components.SwingComponentFactory
    for parameter 1 at com.mathworks.wizard.ui.WizardUIImpl.(WizardUIImpl.java:64)
  while locating com.mathworks.wizard.ui.WizardUIImpl
  while locating com.mathworks.wizard.ui.WizardUI annotated with @com.google.inject.name.Named(value=BaseWizardUI)

This is because you haven’t mounted the installation disk with the correct permissions. The fix is to run the following command as root.

mount -o remount,exec /media/MATHWORKS_R2011A/

Assuming, of course, that /media/MATHWORKS_R2011A/ is your mount point. Hope this helps someone out there.

Update: 7th April 2014
A Debian 7.4 user had this exact problem but the above command didn’t work. We got the following

mount -o remount,exec /media/cdrom0

mount: cannot remount block device /dev/sr0 read-write, is write-protected

The fix was to modify the command slightly:

mount -o remount,exec,ro /media/cdrom0

June 10th, 2011

One part of my job that I really enjoy is the optimisation of researchers’ code.  Typically, the code comes to me in a language such as MATLAB or Mathematica and may take anywhere from a couple of hours to several weeks to run.  I’ve had some nice successes recently in areas as diverse as finance, computer science, applied math and chemical engineering, among others.  The size of the speed-up can vary from 10% right up to 5000% (yes, 50 times faster!) and that’s before I break out the big guns such as Manchester’s Condor pool or turn the code over to our HPC specialists for some SERIOUS (yet more time-consuming, in terms of developer time) optimisations.

Reporting these speed-ups to colleagues (along with the techniques I used) gets various responses, such as ‘Well, they shouldn’t do time-consuming computing using high-level languages.  They should rewrite the whole thing in Fortran’ or words to that effect.  I disagree!

In my opinion, high-level programming languages such as Mathematica, MATLAB and Python have democratised scientific programming.  Now, almost anyone who can think logically can turn their scientific ideas into working code.  I’ve seen people who have had no formal programming training at all whip up models, get results and move on with their research.  Let’s be clear here – it’s results that matter, not how you coded them.

It comes down to this.  CPU time is cheap.  Very cheap.  Human time, particularly specialised human time, is expensive.

Here’s an example:  Earlier this year I was working with a biologist who had put together some MATLAB code to analyse her data.  She had written the code in less than a day and it gave the correct results but it ran too slowly for her tastes.  Her sole programming experience came from reading the MATLAB manual and yet she could cook up useful code in next to no time.  Sure, it was slow and (to my eyes) badly written but give the gal a break…she’s a professional biologist and not a professional programmer.  Her programming is a lot better than my biology!

In less than two hours I gave her a crash course in MATLAB code optimisation: how to use the profiler, vectorisation and so on.  We identified the hotspot in the code and, between us, recoded it so that it was an order of magnitude faster.  This was more than fast enough for her needs: she could now analyse data significantly faster than she could collect it.  I realised that I could make it even faster by using parallelised MEX functions but that would probably have taken a few more hours’ work.  She declined my offer…the code was fast enough.
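
Her code isn’t mine to share, but the flavour of the change was the classic loop-to-vector rewrite – something like this entirely made-up example:

data = rand(1, 1e6);

% Loop version: one element at a time
result = zeros(size(data));
for k = 1:numel(data)
    result(k) = exp(-data(k)) * sin(data(k))^2;
end

% Vectorised version: the whole array in one go
result = exp(-data) .* sin(data).^2;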

In my opinion, this is an optimal use of resources.  I spend my days obsessing about mathematical software and she spends her days obsessing about experimental biology.  She doesn’t need a formal course in how to write uber-efficient code because her code runs as fast as she needs it to (with a little help from her friends).  The solution we eventually reached might not be the most CPU-efficient one but it is a good trade-off between CPU efficiency and developer efficiency.

It was easy…trivial even…for someone like me to take her inefficient code and turn it into something that was efficient enough.  However, the whole endeavour relied on her producing working code in the first place.  Say high-level languages such as MATLAB didn’t exist…then her only options would be to hire a professional programmer (cash expensive) or spend a load of time learning how to code in a low-level language such as Fortran or C (time expensive).

Also, because she is a beginner programmer, her C or Fortran code would almost certainly be crappy and one thing I am sure of is ‘Crappy MATLAB/Python/Mathematica/R code is a heck of a lot easier to debug and optimise than crappy C code.’  Segfault anyone?

June 8th, 2011

I’ve been a user of Ubuntu Linux for years but the recent emphasis on their new Unity interface has put me off somewhat.  I tried to like it but failed.  So, I figured that it was time for a switch to a different distribution.

I asked around on Twitter and got suggestions such as Slackware, Debian and Linux Mint.  I’ve used both Slackware and Debian in the past but, while they might be fine for servers or workstations, I prefer something more shiny for my personal laptop.

I could also have stuck with Ubuntu and simply installed GNOME using Synaptic but I like to use the desktop that is officially supported by the distribution.

So, I went with Linux Mint.  It isn’t going well so far!

I had no DVDs in the house so I downloaded the CD version, burned it to a blank CD and rebooted only to be rewarded with

Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

I checked the md5sum of the .iso file and it was fine. I burned it to a different CD and tried again. Same error.

I was in no mood for a trawl of the forums so I simply figured that maybe something was wrong with the CD version of the distribution – at least as far as my machine was concerned. So, I started downloading the DVD version and treated my greyhound to a walk to the local computer shop to buy a stack of DVDs.

When I got back I checked the MD5 sum of the DVD image, burned it to disc and…got the same error. A trawl of the forums suggests that many people have seen this error but no reliable solution has been found.

Not good for me or Linux Mint but at least Desmond (below) got an extra walk!

Desmond the greyhound

Update 1: I created a bootable USB memory stick from the DVD .iso to eliminate any problems with my burning software/hardware. I still get the same error message. The MD5 checksum of the .iso file is what it should be:

md5sum ./linuxmint-11-gnome-dvd-64bit.iso
773b6cdfe44b91bc44448fa7b34bffa8  ./linuxmint-11-gnome-dvd-64bit.iso

My machine is a Dell XPS M1330 which has been running Ubuntu for almost 3 years.

Update 2: It seems that this bug is not confined to Mint – Ubuntu users are reporting it too. No fix yet though:
https://bugs.launchpad.net/ubuntu/+bug/636711

Update 3: There is DEFINITELY nothing wrong with the installation media.  Both the USB memory stick and DVD versions boot on my wife’s (much newer) HP laptop with no problem.  So, the issue seems to be related to my particular hardware.  This is like the good old days of Linux, where installation was actually difficult.  Good times!

Update 4: After much mucking around I finally gave up on a direct install of Mint 11.  The installer is simply broken for certain hardware configurations as far as I can tell.  Installed Mint 10 from the same pen drive that failed for Mint 11 without a hitch.

Update 5: As soon as the Mint 10 install completed, I did an apt-get dist-upgrade to try to get to Mint 11 that way. The Mint developers recommend against doing dist-upgrades but I don’t seem to have a choice since the Mint 11 installer won’t work on my machine. After a few minutes I get this error

dpkg: error processing python2.7-minimal (--configure):
 subprocess installed post-installation script returned error exit status 3
Errors were encountered while processing:
 python2.7-minimal

This is mentioned in this bug report.  I get over that (by following the instructions in #9 of the bug report) and later get this error

cp: cannot stat `/usr/lib/pango/1.6.0/module-files.d/libpango1.0-0.modules': No such file or directory
cp: cannot stat `/usr/lib/pango/1.6.0/modules/pango-basic-fc.so': No such file or directory
E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
update-initramfs: failed for /boot/initrd.img-2.6.35-22-generic
dpkg: error processing initramfs-tools (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 initramfs-tools

I fixed this with

sudo ln -s x86_64-linux-gnu/pango /usr/lib/pango

Trying the apt-get dist-upgrade again leads to

The following packages have unmet dependencies:
 python-couchdb : Breaks: desktopcouch (< 1.0) but 0.6.9b-0ubuntu1 is to be installed
 python-desktopcouch-records : Conflicts: desktopcouch (< 1.0.7-0ubuntu2) but 0.6.9b-0ubuntu1 is to be installed

Which, thanks to this forum post, I get rid of by doing

sudo dpkg --configure -a
sudo apt-get remove python-desktopcouch-records desktopcouch evolution-couchdb python-desktopcouch

A few more packages get installed before it stops again with the error message

Unpacking replacement xserver-xorg-video-tseng ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Errors were encountered while processing:
 /var/cache/apt/archives/xserver-xorg-core_2%3a1.10.1-1ubuntu1.1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

I get past this by doing

sudo apt-get -f install

Then I try apt-get upgrade and apt-get dist-upgrade again…possibly twice…and I’m pretty much done, it seems.

Update 6: On the train to work this morning I thought I’d boot into my shiny new Mint system. However, I was faced with nothing but a blank screen.  I rebooted and removed quiet and splash from the GRUB options to allow me to see what was going on. The boot sequence was getting stuck on something like ‘Checking battery state’. Up until now I had only been using Mint while connected to the mains. Well, this was the final straw for me.  As soon as I got into work I shoved in an Ubuntu 11.04 live disc, which installed in the time it took me to drink a cup of coffee. I’ve got GNOME running and am now happy.

My Linux Mint adventure is over.

June 6th, 2011

Should academic mathematical software (for both teaching and research) be open source, commercial or a mixture of both?  Personally, I feel that a mixture is the best way to go, which is why I am equally at home with either Mathematica or Sage, MATLAB or Scilab, GSL or NAG and so on.  Others, however, have more polarised views.

Here are some views I’ve come across from various places over the years (significantly shortened):

  • We should teach MATLAB because MATLAB is the industry standard.  Nothing else will do!
  • We should teach concepts, not how to use any particular program.  However, when things need to be implemented they should be implemented in an open source package.
  • All research should be conducted using open source software.  Nothing else will do!
  • Students are being asked to pay hefty fees to come to our University.  We should provide expensive mathematical software so that they feel that they are getting value for money.
  • We should only provide open source software to staff and students.  This will save us a fortune which we can put into other facilities.

and so on.

Personally I feel that all of these views are far too blinkered.  When you consider the combined needs of all teachers, researchers and students in a large institution such as the one I work for, only a combination of both open source and commercial software can satisfy everyone.

I’d love to know what you think though, so please have your say via the comments section.  If you could preface your comment with a brief clue as to your background then that would be even better (nothing too detailed – just something like ‘Chemistry lecturer’, ‘open source software developer’ or ‘Math student’ would be great).

June 1st, 2011

Welcome to the 5th installment of A Month of Math Software, where I take a look at all things math-software related.  If I’ve missed something then let me know in the comments section.

Open Source releases

SAGE, possibly the best open-source mathematics package bar none, has seen an upgrade to version 4.7.  The extensive changelog is here.

NumPy 1.6.0 has been released.  NumPy is the fundamental package needed for scientific computing with Python and the list of changes from the previous version can be found in this discussion thread.

Version 1.15 of the GSL (GNU Scientific Library), a free and open source numerical library for C and C++, has been released.  A copy of the change log is here.

Scilab, the premier open source alternative to MATLAB, has seen a new minor upgrade with 5.3.2.  Click here to see the differences from version 5.3.1.

The GMP MP Bignum library has been updated to version 5.0.2.  GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.  Check out the release notes for what’s new.

Commercial releases

The Numerical Algorithms Group (NAG) have released version 0.4 of their CUDA-accelerated numerical library.  You can’t actually buy it yet as far as I know but academics can get their hands on it for free by signing a collaborative agreement with NAG.

Magma seems to have a new release every month.  See what’s new in version 2.17-8 here.

Math Software in the blogosphere

Sol Lederman has started a new blog called Playing with Mathematica.  Lots of cool little demonstrations to be found, such as the multiple pendulum animation below.

Animated pendulums

Gary Ernest Davis discusses Dijkstra’s fusc function – complete with Mathematica code.

Alasdair looks at the sums of dice throws using Sage.
