## Playing with Mathematica on Raspberry Pi

As soon as I heard the news that Mathematica was being made available completely free on the Raspberry Pi, I just had to get myself a Pi and have a play. So, I bought the Raspberry Pi Advanced Kit from my local Maplin Electronics store, plugged it into the kitchen telly and booted it up. The exercise made me miss my father because the last time I plugged a computer into the kitchen telly was when I was 8 years old; it was Christmas morning and dad and I took our first steps into a new world with my Sinclair Spectrum 48K.

**How to install Mathematica on the Raspberry Pi**

Future Raspberry Pis will have Mathematica installed by default, but mine wasn't new enough so I just typed the following at the command line:

sudo apt-get update && sudo apt-get install wolfram-engine

On my machine, I was told

The following extra packages will be installed:
  oracle-java7-jdk
The following NEW packages will be installed:
  oracle-java7-jdk wolfram-engine
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 278 MB of archives.
After this operation, 588 MB of additional disk space will be used.

So, it seems that Mathematica needs Oracle's Java and that's being installed for me as well. The combination of the two is going to use up 588MB of disk space, which makes me glad that I have an 8GB SD card in my Pi.

**Mathematica version 10!**

On starting Mathematica on the pi, my first big surprise was the version number. I am the administrator of an unlimited academic site license for Mathematica at The University of Manchester and the latest version we can get for our PCs at the time of writing is 9.0.1. My free pi version is at version 10! The first clue is the installation directory:

/opt/Wolfram/WolframEngine/10.0

and the next clue is given by evaluating $Version in Mathematica itself

In[2]:= $Version

Out[2]= "10.0 for Linux ARM (32-bit) (November 19, 2013)"

To get an idea of what’s new in 10, I evaluated the following command on Mathematica on the Pi

Export["PiFuncs.dat",Names["System`*"]]

This creates a PiFuncs.dat file which gives me the list of functions in the System context on the version of Mathematica on the Pi. I transferred this over to my Windows PC and imported it into Mathematica 9.0.1 with

pifuncs = Flatten[Import["PiFuncs.dat"]];

Get the list of functions from version 9.0.1 on Windows:

winVer9funcs = Names["System`*"];

Finally, find out what’s in pifuncs but not winVer9funcs

In[16]:= Complement[pifuncs, winVer9funcs] Out[16]= {"Activate", "AffineStateSpaceModel", "AllowIncomplete", \ "AlternatingFactorial", "AntihermitianMatrixQ", \ "AntisymmetricMatrixQ", "APIFunction", "ArcCurvature", "ARCHProcess", \ "ArcLength", "Association", "AsymptoticOutputTracker", \ "AutocorrelationTest", "BarcodeImage", "BarcodeRecognize", \ "BoxObject", "CalendarConvert", "CanonicalName", "CantorStaircase", \ "ChromaticityPlot", "ClassifierFunction", "Classify", \ "ClipPlanesStyle", "CloudConnect", "CloudDeploy", "CloudDisconnect", \ "CloudEvaluate", "CloudFunction", "CloudGet", "CloudObject", \ "CloudPut", "CloudSave", "ColorCoverage", "ColorDistance", "Combine", \ "CommonName", "CompositeQ", "Computed", "ConformImages", "ConformsQ", \ "ConicHullRegion", "ConicHullRegion3DBox", "ConicHullRegionBox", \ "ConstantImage", "CountBy", "CountedBy", "CreateUUID", \ "CurrencyConvert", "DataAssembly", "DatedUnit", "DateFormat", \ "DateObject", "DateObjectQ", "DefaultParameterType", \ "DefaultReturnType", "DefaultView", "DeviceClose", "DeviceConfigure", \ "DeviceDriverRepository", "DeviceExecute", "DeviceInformation", \ "DeviceInputStream", "DeviceObject", "DeviceOpen", "DeviceOpenQ", \ "DeviceOutputStream", "DeviceRead", "DeviceReadAsynchronous", \ "DeviceReadBuffer", "DeviceReadBufferAsynchronous", \ "DeviceReadTimeSeries", "Devices", "DeviceWrite", \ "DeviceWriteAsynchronous", "DeviceWriteBuffer", \ "DeviceWriteBufferAsynchronous", "DiagonalizableMatrixQ", \ "DirichletBeta", "DirichletEta", "DirichletLambda", "DSolveValue", \ "Entity", "EntityProperties", "EntityProperty", "EntityValue", \ "Enum", "EvaluationBox", "EventSeries", "ExcludedPhysicalQuantities", \ "ExportForm", "FareySequence", "FeedbackLinearize", "Fibonorial", \ "FileTemplate", "FileTemplateApply", "FindAllPaths", "FindDevices", \ "FindEdgeIndependentPaths", "FindFundamentalCycles", \ "FindHiddenMarkovStates", "FindSpanningTree", \ "FindVertexIndependentPaths", "Flattened", "ForeignKey", \ 
"FormatName", "FormFunction", "FormulaData", "FormulaLookup", \ "FractionalGaussianNoiseProcess", "FrenetSerretSystem", "FresnelF", \ "FresnelG", "FullInformationOutputRegulator", "FunctionDomain", \ "FunctionRange", "GARCHProcess", "GeoArrow", "GeoBackground", \ "GeoBoundaryBox", "GeoCircle", "GeodesicArrow", "GeodesicLine", \ "GeoDisk", "GeoElevationData", "GeoGraphics", "GeoGridLines", \ "GeoGridLinesStyle", "GeoLine", "GeoMarker", "GeoPoint", \ "GeoPolygon", "GeoProjection", "GeoRange", "GeoRangePadding", \ "GeoRectangle", "GeoRhumbLine", "GeoStyle", "Graph3D", "GroupBy", \ "GroupedBy", "GrowCutBinarize", "HalfLine", "HalfPlane", \ "HiddenMarkovProcess", (a run of private-use glyph characters, mangled in the page encoding, omitted), "IgnoringInactive", "ImageApplyIndexed", \ "ImageCollage", "ImageSaliencyFilter", "Inactivate", "Inactive", \ "IncludeAlphaChannel", "IncludeWindowTimes", "IndefiniteMatrixQ", \ "IndexedBy", "IndexType", "InduceType", "InferType", "InfiniteLine", \ "InfinitePlane", "InflationAdjust", "InflationMethod", \ "IntervalSlider", (another run of mangled private-use characters omitted), "JuliaSetIterationCount", "JuliaSetPlot", \ "JuliaSetPoints", "KEdgeConnectedGraphQ", "Key", "KeyDrop", \ "KeyExistsQ", "KeyIntersection", "Keys", "KeySelect", "KeySort", \ "KeySortBy", "KeyTake", "KeyUnion", "KillProcess", \ "KVertexConnectedGraphQ", "LABColor", "LinearGradientImage", \ "LinearizingTransformationData", "ListType", "LocalAdaptiveBinarize", \ "LocalizeDefinitions", "LogisticSigmoid", "Lookup", "LUVColor", \ "MandelbrotSetIterationCount", "MandelbrotSetMemberQ", \ "MandelbrotSetPlot", "MinColorDistance", "MinimumTimeIncrement", \ "MinIntervalSize", "MinkowskiQuestionMark", "MovingMap", \ 
"NegativeDefiniteMatrixQ", "NegativeSemidefiniteMatrixQ", \ "NonlinearStateSpaceModel", "Normalized", "NormalizeType", \ "NormalMatrixQ", "NotebookTemplate", "NumberLinePlot", "OperableQ", \ "OrthogonalMatrixQ", "OverwriteTarget", "PartSpecification", \ "PlotRangeClipPlanesStyle", "PositionIndex", \ "PositiveSemidefiniteMatrixQ", "Predict", "PredictorFunction", \ "PrimitiveRootList", "ProcessConnection", "ProcessInformation", \ "ProcessObject", "ProcessStatus", "Qualifiers", "QuantityVariable", \ "QuantityVariableCanonicalUnit", "QuantityVariableDimensions", \ "QuantityVariableIdentifier", "QuantityVariablePhysicalQuantity", \ "RadialGradientImage", "RandomColor", "RegularlySampledQ", \ "RemoveBackground", "RequiredPhysicalQuantities", "ResamplingMethod", \ "RiemannXi", "RSolveValue", "RunProcess", "SavitzkyGolayMatrix", \ "ScalarType", "ScorerGi", "ScorerGiPrime", "ScorerHi", \ "ScorerHiPrime", "ScriptForm", "Selected", "SendMessage", \ "ServiceConnect", "ServiceDisconnect", "ServiceExecute", \ "ServiceObject", "ShowWhitePoint", "SourceEntityType", \ "SquareMatrixQ", "Stacked", "StartDeviceHandler", "StartProcess", \ "StateTransformationLinearize", "StringTemplate", "StructType", \ "SystemGet", "SystemsModelMerge", "SystemsModelVectorRelativeOrder", \ "TemplateApply", "TemplateBlock", "TemplateExpression", "TemplateIf", \ "TemplateObject", "TemplateSequence", "TemplateSlot", "TemplateWith", \ "TemporalRegularity", "ThermodynamicData", "ThreadDepth", \ "TimeObject", "TimeSeries", "TimeSeriesAggregate", \ "TimeSeriesInsert", "TimeSeriesMap", "TimeSeriesMapThread", \ "TimeSeriesModel", "TimeSeriesModelFit", "TimeSeriesResample", \ "TimeSeriesRescale", "TimeSeriesShift", "TimeSeriesThread", \ "TimeSeriesWindow", "TimeZoneConvert", "TouchPosition", \ "TransformedProcess", "TrapSelection", "TupleType", "TypeChecksQ", \ "TypeName", "TypeQ", "UnitaryMatrixQ", "URLBuild", "URLDecode", \ "URLEncode", "URLExistsQ", "URLExpand", "URLParse", "URLQueryDecode", \ 
"URLQueryEncode", "URLShorten", "ValidTypeQ", "ValueDimensions", \ "Values", "WhiteNoiseProcess", "XMLTemplate", "XYZColor", \ "ZoomLevel", "$CloudBase", "$CloudConnected", "$CloudDirectory", \ "$CloudEvaluation", "$CloudRootDirectory", "$EvaluationEnvironment", \ "$GeoLocationCity", "$GeoLocationCountry", "$GeoLocationPrecision", \ "$GeoLocationSource", "$RegisteredDeviceClasses", \ "$RequesterAddress", "$RequesterWolframID", "$RequesterWolframUUID", \ "$UserAgentLanguages", "$UserAgentMachine", "$UserAgentName", \ "$UserAgentOperatingSystem", "$UserAgentString", "$UserAgentVersion", \ "$WolframID", "$WolframUUID"}

There we have it, a preview of the list of functions that might be coming in desktop version 10 of Mathematica courtesy of the free Pi version.
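The Complement step can also be sketched in plain Python sets, should you want to diff the exported name lists outside Mathematica. A minimal sketch; the function name and the sample lists below are made up for illustration, not the real exports:

```python
def new_functions(pi_names, desktop_names):
    """Return names present in the Pi build but absent from the desktop build."""
    return sorted(set(pi_names) - set(desktop_names))

# Toy illustration with made-up lists, not the real exported name lists:
pi_names = ["Plot", "Classify", "CloudDeploy", "TimeSeries"]
desktop_names = ["Plot", "TimeSeries"]
print(new_functions(pi_names, desktop_names))  # → ['Classify', 'CloudDeploy']
```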

**No local documentation**

On a desktop version of Mathematica, all of the Mathematica documentation is available on your local machine by clicking on **Help**->**Documentation Center** in the Mathematica notebook interface. On the Pi version, it seems that there is no local documentation, presumably to keep the installation size down. You get to the documentation via the notebook interface by clicking on **Help**->**Online Documentation**, which takes you to http://reference.wolfram.com/language/?src=raspi

**Speed vs my laptop**

I am used to running Mathematica on high specification machines and so naturally the Pi version felt very sluggish, particularly when using the notebook interface. That said, I found it very usable for general playing around. I was very curious, however, about the speed of the Pi version compared to the version on my home laptop, and so created a small benchmark notebook that did three things:

- Calculate pi to 1,000,000 decimal places.
- Multiply two 1000 x 1000 random matrices together.
- Integrate sin(x)^2*tan(x) with respect to x.

The comparison is going to be against my Windows 7 laptop which has a quad-core Intel Core i7-2630QM. The procedure I followed was:

- Start a fresh Mathematica session and open pi_bench.nb.
- Click on Evaluation->Evaluate Notebook and record the times.
- Click on Evaluation->Evaluate Notebook again and record the new times.

Note that I use the AbsoluteTiming function instead of Timing (detailed reason given here) and I clear the system cache (detailed reason given here). You can download the notebook I used here. Alternatively, copy and paste the code below:

(*Clear Cache*)
ClearSystemCache[]

(*Calculate pi to 1 million decimal places and store the result*)
AbsoluteTiming[pi = N[Pi, 1000000];]

(*Multiply two random 1000x1000 matrices together and store the result*)
a = RandomReal[1, {1000, 1000}];
b = RandomReal[1, {1000, 1000}];
AbsoluteTiming[prod = Dot[a, b];]

(*Calculate an integral and store the result*)
AbsoluteTiming[res = Integrate[Sin[x]^2*Tan[x], x];]

Here are the results. All timings in seconds.

| Test | Laptop Run 1 | Laptop Run 2 | RaspPi Run 1 | RaspPi Run 2 | Best Pi/Best Laptop |
| --- | --- | --- | --- | --- | --- |
| Million digits of Pi | 0.994057 | 1.007058 | 14.101360 | 13.860549 | 13.9434 |
| Matrix product | 0.108006 | 0.074004 | 85.076986 | 85.526180 | 1149.63 |
| Symbolic integral | 0.035002 | 0.008000 | 0.980086 | 0.448804 | 56.1 |

From these tests, we see that Mathematica on the Pi is around 14 to 1149 times slower than on my laptop. The huge difference between the Pi and the laptop for the matrix product stems from the fact that, on the laptop, Mathematica is using Intel's Math Kernel Library (MKL). The MKL is extremely well optimised for Intel processors and will be using all 4 of the laptop's CPU cores along with extra tricks such as AVX operations. I am not sure what is being used on the Pi for this operation.
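As a sanity check on the last column of the table, the slowdown factors can be recomputed from the best time in each row. The numbers below are copied straight from the table above:

```python
# Best-of-two timings in seconds, copied from the benchmark table above.
timings = {
    "Million digits of Pi": (0.994057, 13.860549),  # (best laptop, best Pi)
    "Matrix product":       (0.074004, 85.076986),
    "Symbolic integral":    (0.008000, 0.448804),
}

for test, (laptop, pi) in timings.items():
    # The ratio reproduces the "Best Pi/Best Laptop" column.
    print("%s: Pi is %.1fx slower" % (test, pi / laptop))
```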

I also ran the standard **BenchmarkReport[]** on the Raspberry Pi. The results are available here.

**Speed vs Python**

Comparing Mathematica on the Pi to Mathematica on my laptop might have been a fun exercise for me, but it's not really fair on the Pi, which wasn't designed to compete with expensive laptops. So, let's move on to a more meaningful speed comparison: Mathematica on the Pi versus Python on the Pi.

When it comes to benchmarking in Python, I usually turn to the timeit module. This time, however, I'm not going to use it, and that's because of something odd that's happening with sympy and caching. I'm using sympy to calculate pi to 1 million places and for the symbolic calculus. Check out this IPython session on the Pi:

pi@raspberrypi ~ $ SYMPY_USE_CACHE=no
pi@raspberrypi ~ $ ipython
Python 2.7.3 (default, Jan 13 2013, 11:20:46)
Type "copyright", "credits" or "license" for more information.

IPython 0.13.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import sympy

In [2]: pi=sympy.pi.evalf(100000) #Takes a few seconds

In [3]: %timeit pi=sympy.pi.evalf(100000)
100 loops, best of 3: 2.35 ms per loop

In short, I have asked sympy not to use caching (I think!) and yet it is caching the result. I don’t want to time how quickly sympy can get a result from the cache so I can’t use timeit until I figure out what’s going on here. Since I wanted to publish this post sooner rather than later I just did this:

import sympy
import time
import numpy

start = time.time()
pi = sympy.pi.evalf(1000000)
elapsed = (time.time() - start)
print('1 million pi digits: %f seconds' % elapsed)

a = numpy.random.uniform(0, 1, (1000, 1000))
b = numpy.random.uniform(0, 1, (1000, 1000))
start = time.time()
c = numpy.dot(a, b)
elapsed = (time.time() - start)
print('Matrix Multiply: %f seconds' % elapsed)

x = sympy.Symbol('x')
start = time.time()
res = sympy.integrate(sympy.sin(x)**2 * sympy.tan(x), x)
elapsed = (time.time() - start)
print('Symbolic Integration: %f seconds' % elapsed)

Usually, I’d use time.clock() to measure things like this but something *very* strange is happening with time.clock() on my pi–something I’ll write up later. In short, it didn’t work properly and so I had to resort to time.time().

Here are the results:

1 million pi digits: 5535.621769 seconds
Matrix Multiply: 77.938481 seconds
Symbolic Integration: 1654.666123 seconds

The result that really surprised me here was the symbolic integration, since the problem I posed didn't look very difficult. **Sympy on the Pi was thousands of times slower than Mathematica on the Pi** for this calculation! On my laptop, the calculation times between Mathematica and sympy were about the same for this operation.

That Mathematica beats sympy for 1 million digits of pi doesn't surprise me too much, since I recall attending a seminar a few years ago where Wolfram Research described how they had optimized the living daylights out of that particular operation. It's nice to see Python beating Mathematica by a little bit in the linear algebra, though.
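To put numbers on those two observations, here are the Pi-vs-Pi ratios computed from the timings reported above (Python time divided by the best Mathematica-on-Pi time from the earlier table):

```python
# (best Mathematica-on-Pi time, Python-on-Pi time), in seconds,
# taken from the timings reported in this post.
results = {
    "Million digits of Pi": (13.860549, 5535.621769),
    "Matrix product":       (85.076986, 77.938481),
    "Symbolic integration": (0.448804, 1654.666123),
}

for test, (mathematica, python) in results.items():
    # A ratio below 1 means Python was the faster of the two on the Pi.
    print("%s: Python/Mathematica = %.1f" % (test, python / mathematica))
```

The symbolic integration ratio comes out at roughly 3700, which is the "thousands of times slower" claim, while the matrix product ratio is just under 1, which is numpy's small win.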

time.clock() behaves differently on Windows and UNIX. On UNIX, it reports CPU time, which is not completely reliable, as it tries to take into account the time the process has actually been computing (self-reported by the process).
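The CPU-time/wall-time distinction the comment is drawing can be seen directly in Python. Note that time.clock() was eventually removed in Python 3.8; its modern replacements make the split explicit, as this sketch shows:

```python
import time

# Wall-clock time advances while the process sleeps;
# CPU time barely moves, because sleeping does no computing.
wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.5)

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start
print("wall: %.2f s, cpu: %.3f s" % (wall_elapsed, cpu_elapsed))
```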

Regarding the linear algebra, Numpy uses whatever BLAS library is available; have you installed any? Maybe ATLAS, having so many methods, just has some better suited to ARM, while Mathematica is purely focused on x86.

I didn't explicitly install any BLAS libs; I just installed numpy etc. and let it use whatever it chose to use. It might be interesting to follow up sometime and see if one can do better than the default on the Pi.

This is an excellent test. I keep wondering about picking up a Pi.

Also, in order to turn off caching I would think you would need to export the variable (export SYMPY_USE_CACHE=no). I’m not an expert, but I would imagine that there are subprocesses spawned by sympy.
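The export point matters more than it looks: in the terminal session earlier in the post, SYMPY_USE_CACHE=no was typed on a line of its own, which sets a shell variable that child processes such as ipython never see. A small sketch of the difference, assuming a python3 on the PATH:

```python
import subprocess

# A child process that reports whether it can see the variable.
CHILD = 'python3 -c "import os; print(os.environ.get(\'SYMPY_USE_CACHE\'))"'

# Assigned but NOT exported: the child process does not see the variable.
unexported = subprocess.check_output(
    ["sh", "-c", "SYMPY_USE_CACHE=no; " + CHILD], text=True).strip()

# Exported: the child process sees it, so sympy's cache check would too.
exported = subprocess.check_output(
    ["sh", "-c", "export SYMPY_USE_CACHE=no; " + CHILD], text=True).strip()

print(unexported, exported)  # → None no
```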

@Craig. I’m fairly sure I tried that as well (I tried a lot more than I wrote up) but will try it again.

Cheers,

Mike

Did you have gmpy installed when you timed SymPy? It can make a huge difference in the performance.

Also, just to check, what version of SymPy were you using? I’m only sceptical because you are using an ancient version of IPython.

I'll take a look next time I'm on the Pi. The version of IPython I'm using is the one that comes from doing an apt-get from the Raspbian repos.

I’m willing to bet that you do. I checked, and that integral takes forever in ancient versions of SymPy, but in the most recent version, it is computed instantly.

Usually, repositories are a bad idea for installing Python packages: they are often outdated, and I have seen buggy ones (like Matplotlib) sitting in Ubuntu's repositories for several months.

The best way is to use apt-get to install python-devel, pip, and other dependencies and pip install your libraries.

BTW, for Numpy to do linear algebra fast, it has to be linked against a BLAS library. If you pip it, it will automatically select ATLAS or MKL if they are installed. Of course, the best way to install ATLAS is to build it in your own hardware, but it would probably take ages to compile, and the version in the repository (if it exists) would have probably been built on a Pi.
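One quick way to check which BLAS/LAPACK the installed NumPy was actually built against, without reading ldd output, is NumPy's own build-configuration report. A sketch; the exact output format varies between NumPy versions:

```python
import contextlib
import io

import numpy

# numpy.show_config() prints the BLAS/LAPACK configuration NumPy was
# built against; capture stdout so the report can be inspected in code.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    numpy.show_config()

config_report = buf.getvalue()
print(config_report.splitlines()[0])  # first line of the report
```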

I agree, building my own version from source would have been the best way to go for performance. My intention here, however, was to show what you get out of the box.

But it's not exactly fair to say that Mathematica is faster than "out of the box" sympy if the sympy is ancient. That's a statement on the state of the Debian repos more than on Mathematica or sympy (or the Pi).

I did what I believe the vast majority of Pi users would do if they were interested in mathematics on the Pi — plug it in and apt-get what I wanted to play with. As such I think it's perfectly fair from that point of view.

I do, however, see your point and will re-run the benchmark on up-to-date binaries and report the results in a follow-up post. Thanks for the comments — greatly appreciated.

For me the interest of Mathematica and of the Wolfram Language on the Pi is about understanding the use of Mathematica in an embedded environment. So things like using Mathematica to access other hardware via the I2C bus and so on. So far I have just started to search for information.

The use of GPIO is a first step in the right direction!

As to the Wolfram Language, it would be interesting to see what uses might be possible, especially if I use the Raspberry Pi connected to the Internet, so the Wolfram Language can benefit from the cloud!

I wonder if Mathematica will still run on the just-announced Raspberry Pi 2 or if Wolfram has to do work to port it. It looks like the new Pi is an answer to Intel's Edison platform, which is also due to get a Mathematica port.

Mathematica isn’t starting on Raspberry Pi 2. It’s stopping with a kernel license failure.

It's probably looking for a single-core CPU and stopping after recognizing the new quad-core CPU.

Oops, strange: after rebooting Raspbian the kernel license failure vanished. Mathematica is starting fine on my Raspberry Pi 2 now.

Results for Raspberry Pi 2

Million digits of Pi (AbsoluteTiming[pi = N[Pi, 1000000];]): {5.419374, Null} {5.478331, Null}

Matrix product: {4.153776, Null} {4.290373, Null}

Symbolic integral: {0.261577, Null} {0.116455, Null}

I don't have IPython installed. I was reading through the example and the general idea is to time the evaluation of pi to 1,000,000 places. The IPython example, above, looks like it evaluates to 100,000 places:

In [1]: import sympy

In [2]: pi=sympy.pi.evalf(100000) #Takes a few seconds

In [3]: %timeit pi=sympy.pi.evalf(100000)

When I get Ipython installed I will try this example.

-lance mitrex

I have installed IPython and ran these two statements:

In [5]: %timeit pi=sympy.pi.evalf(100000) %100,000

100 loops, best of 3: 2.24 ms per loop

In [6]: %timeit pi=sympy.pi.evalf(1000000) %1,000,000

1 loops, best of 3: 12.2 ms per loop

I’ll re-run both of them

Lance Mitrex.

I (re)(re)ran the 1,000,000 digits example again:

In [7]: %timeit pi=sympy.pi.evalf(1000000)

100 loops, best of 3: 17.2 ms per loop

In [8]: %timeit pi=sympy.pi.evalf(1000000)

100 loops, best of 3: 17.1 ms per loop

Later…

Lance Mitrex

Another great bonus of the RPi is that it's the cheapest way to start playing with AI, considering the Mathematica capabilities. RPi + camera + Mathematica ImageIdentify and you're training your computer. Amazing where we're at. Imagine where we'll be!