Archive for November, 2012

November 28th, 2012

Back when Mathematica 8 was released I tried to work out how many MATLAB toolboxes you’d need to buy to have the same functionality and came up with 9 toolboxes.  Readers of WalkingRandomly suggested several more in the comments.  Now that Mathematica 9 has been released, I thought I’d work through the exercise again.

So I think that Mathematica 9 contains at least some of the functionality of the following 18 MATLAB toolboxes. Click on the relevant toolbox for more information or an example.

I use both Mathematica and MATLAB extensively and sincerely wish that MATLAB had this level of integration.  Does anyone have evidence of any I might have missed (or shouldn’t have included)?

November 13th, 2012

Intel have finally released the Xeon Phi – an accelerator card based on 60 or so customised Intel cores to give around a Teraflop of double precision performance.  That’s comparable to the latest cards from NVIDIA (1.3 Teraflops according to http://www.theregister.co.uk/2012/11/12/nvidia_tesla_k20_k20x_gpu_coprocessors/) but with one key difference—you don’t need to learn any new languages or technologies to take advantage of it (although you can do so if you wish)!

The Xeon Phi uses good old-fashioned High Performance Computing technologies that we’ve been using for years, such as OpenMP and MPI.  There’s no need to completely recode your algorithms in CUDA or OpenCL to get a performance boost…just a sprinkling of OpenMP pragmas might be enough in many cases.  Obviously it will take quite a bit of work to squeeze every last drop of performance out of the thing, but this might just be the realisation of the ‘personal supercomputer’ we’ve all been waiting for.

Here are some links I’ve found so far; I’d love to see what everyone else has come up with.  I’ll update this list as I find more.

I also note that the Xeon Phi’s vector unit has a vector width of 512 bits, twice that of AVX’s 256 bits, so if you’ve been taking advantage of vectorisation in your code (using one of these techniques perhaps) you’ll reap the benefits there too.

I, for one, am very excited and can’t wait to get my hands on one!  Thoughts, comments and links gratefully received!

November 4th, 2012

Welcome to the October edition of A Month of Math Software where I take a look at everything that is new and updated in the ever evolving world of mathematical software and programming.  If you’d like something included in the next edition please contact me via whatever method suits you best.

GPU accelerated mathematics

In the old days Graphics Processing Units (GPUs) were only used to make computer games look pretty.  These days they can do mathematics very quickly.

  • A new, free linear algebra library for OpenCL has been released: RaijinCL.  Brought to you by @codedevine (author of RGBench for Android, among other things), what makes this library different is that it is auto-tuning and works on lots of different hardware.  Instead of providing a single optimized implementation of each kernel, it generates many different kernels, tests them on the user’s machine and records the best performer.  It currently only provides matrix-matrix multiplication but Rahul has lots of plans for the future.
  • The OpenCL version of MAGMA has seen a major update.  Version 1.0 of clMAGMA contains lots of new linear algebra routines.
  • After many release candidates, the production release of version 5 of NVIDIA’s CUDA Toolkit was made available this month.  The toolkit is the fundamental piece of software you need if you intend to develop GPU accelerated applications on NVIDIA hardware.  Mathematical updates include a couple of new basic statistical functions (normcdf and normcdfinv) in the CUDA math library, incomplete factorization preconditioners (ilu0 and ic0) in the CUDA Sparse Matrix library and the ability to generate Poisson distributed random numbers in the CUDA random number generation library.
  • Jacket from Accelereyes is a GPU accelerated toolbox for MATLAB and has been updated to version 2.3.  See the release notes for more details.  I played with an older version of Jacket earlier this year.
  • CULA Dense is a GPU accelerated linear algebra library for NVIDIA GPUs.  Version R16 was released in October and the release notes are available at http://www.culatools.com/files/docs/R16/release_notes_R16.txt.  The CULA sparse library has also been updated (to version 4) but the only new stuff appears to be support for new hardware and CUDA version 5.

Plotting

  • Origin and OriginPro have both been upgraded to version 9.  These commercial plotting packages for Windows are very popular and easy to use (my university has a site license for them and they get a lot of use) and this major new release includes lots of new functionality.
  • DISLIN, a scientific plotting library for multiple languages, is now at version 10.2.5 with the new stuff discussed at http://www.mps.mpg.de/dislin/news.html
  • A new release candidate of matplotlib is now available at https://github.com/matplotlib/matplotlib/downloads.  New features include PGF/TikZ backend for easier LaTeX integration and picklable figures.  The plots below were created using the new release candidate and come to you courtesy of @dmcdougall_

[Figure: example plots created with the new matplotlib release candidate]
Free Statistics

Misc
