
Sometime in 2013, I helped out at a Software Carpentry event at the University of Bath. As with most Software Carpentry boot camps, one of the topics covered was shell scripting, and the scripting language of choice was bash. As I wandered around the room, I asked the delegates which operating system they used for the majority of their research and the **most popular answer, by far, was Windows.**

This led me to wonder whether we should teach using a native Windows solution rather than relying on bash.

A few years ago, this would have been an insane proposition since the Windows command shell is very weak compared to bash. PowerShell, on the other hand, is modern, powerful and installed by default on all modern versions of Windows.

My problem was that I didn’t know PowerShell very well. So, I took the notes for the 2013 Bath shell scripting session - https://github.com/swcarpentry/boot-camps/tree/2013-07-bath/shell - and gave myself the exercise of converting them to PowerShell.

I got close to completing this exercise last summer but various things took higher priority and so the project languished. Rather than sit on the notes any longer, I’ve decided to just publish what I have so far in case they are useful to anyone.

You are free to use them with the following caveats:

- This is not necessarily the right way to teach PowerShell. It is an experiment in converting some classroom-tested, Linux-based notes to PowerShell.
- If you use them, attribution would be nice. I am Mike Croucher and my site is www.walkingrandomly.com. Details on how to contact me are at http://www.walkingrandomly.com/?page_id=2055
- I have not yet tested these notes in a classroom situation.
- These notes aren’t finished yet.
- These notes have been developed and tested on Windows 7. Behaviour may be different on other versions of Windows.
- These notes are given as they were left sometime in mid 2013. Some things may be out of date.
- I was learning PowerShell as I developed these notes. As such, I fully expect them to be full of mistakes. Corrections and improvements would be welcomed.

If anyone is interested in developing these notes into something that’s classroom-ready, contact me.

## The old Windows Command Shell

The traditional Windows command shell is a program called cmd.exe which can trace its roots all the way back to the old, pre-Windows DOS prompt.

You can launch this command shell as follows:

- Hold down the Windows key and press R to open the Run prompt
- Type cmd and press Enter or click OK

- You should see a window similar to the one below

The Windows command shell hasn’t changed significantly in over twenty years and is relatively feature-poor compared to more modern shells. For this reason, it is recommended that you use Windows PowerShell instead. cmd.exe is only mentioned here because, despite its deficiencies, it is still in widespread use.

## PowerShell

To launch PowerShell:

- Hold down the Windows key and press R to open the Run prompt
- Type **powershell** and press Enter or click OK

- You should see a window similar to the one below

Note that although the header of the above window mentions v1.0, it could be a screenshot of either version 1.0 or version 2.0; this is a well-known bug. If you are using Windows 7 you will have at least version 2.0.

## PowerShell versions

At the time of writing, PowerShell is at version 3. Ideally, you should have at least version 2.0 installed. To check which version you have, examine the $PSVersionTable variable:

```
$psversiontable.psversion

Major  Minor  Build  Revision
-----  -----  -----  --------
3      0      -1     -1
```

If this variable does not exist, you are probably using version 1.0 and should upgrade.

Version 3.0 is available at http://blogs.technet.com/b/heyscriptingguy/archive/2013/06/02/weekend-scripter-install-powershell-3-0-on-windows-7.aspx

## Comments

```
# This is a comment in PowerShell. It is not executed
```

## Directories

Users of Bash will feel right at home at first since PowerShell appears to have the same set of commands:

```
pwd #Path to current folder
ls #List directory
ls *.txt #Wild Card
ls *_hai*
ls -R #Recursive folder listing
ls . #List current folder
ls .. #List Parent folder
cd .. #Change current folder to parent. (Move up a folder)
cd ~ #Change current folder to your user directory.
mkdir myfolder #Create a folder
mkdir ~/myfolder
mv myfolder new_myfolder #rename myfolder to new_myfolder
rm -r new_myfolder #Delete new_myfolder and everything in it
```

## Files

```
cat file # View file
more file # Page through file
cat file | select -first 3 # first N lines
cat file | select -last 2 # Last N lines
cp file1 file2 # Copy
cp *.txt directory
rm file.txt # Delete - no recycle bin.
rm -r directory # Recurse
```

## Different command types in PowerShell: Aliases, Functions and Cmdlets

Many of the PowerShell ‘commands’ we’ve used so far are actually aliases to PowerShell Cmdlets, which have a Verb-Noun naming convention. We can discover what each command is an alias of using the **get-alias** Cmdlet.

```
PS > get-alias ls

CommandType  Name  Definition
-----------  ----  ----------
Alias        ls    Get-ChildItem
```

This shows that **ls** is an alias for the Cmdlet **Get-ChildItem**.

A list of aliases for common Bash commands:

- cat (Get-Content)
- cd (Set-Location)
- ls (Get-ChildItem)
- pwd (Get-Location)

Aliases were created partly to make PowerShell a more familiar environment for users of other shells, such as the old Windows cmd.exe or Linux’s Bash, and partly to save on typing.

You can get a list of all aliases using **get-alias** on its own.

```
PS > get-alias
```

Finally, here’s how you get all of the aliases for the **Get-ChildItem** cmdlet.

```
get-alias | where-object {$_.Definition -match "Get-Childitem"}
```

For more details on Powershell aliases, see Microsoft’s documentation at http://technet.microsoft.com/en-us/library/ee692685.aspx

### What type of command is mkdir?

The **mkdir** command looks like it might be an alias as well since it doesn’t have the verb-noun naming convention of Cmdlets. Let’s try to see which Cmdlet it might be an alias of:

```
PS > get-alias mkdir
Get-Alias : This command cannot find a matching alias because alias with name 'mkdir' do not exist.
At line:1 char:6
+ alias <<<< mkdir
+ CategoryInfo : ObjectNotFound: (mkdir:String) [Get-Alias], ItemNotFoundException
+ FullyQualifiedErrorId : ItemNotFoundException,Microsoft.PowerShell.Commands.GetAliasCommand
```

It turns out that **mkdir** isn’t an alias at all but is actually yet another PowerShell command type, a function. We can see this by using the **get-command** Cmdlet

```
PS > get-command mkdir

CommandType  Name       Definition
-----------  ----       ----------
Function     mkdir      ...
Application  mkdir.exe  C:\Program Files (x86)\Git\bin\mkdir.exe
```

Now we can clearly see that mkdir is a PowerShell function. mkdir.exe is an Application which you’ll only see if you have installed Git for Windows, as I have.

## Cmdlets

A Cmdlet (pronounced ‘command-let’) is a .NET class but you don’t need to worry about what this means until you get into advanced PowerShell usage. Just think of Cmdlets as the base type of PowerShell command. They are always named according to the convention **verb-noun**; for example, **Set-Location** and **Get-ChildItem**.

### Listing all Cmdlets

The following lists all Cmdlets:

```
Get-Command
```

You can pipe this list to a pager

```
Get-Command | more
```

## Getting help

You can get help on any PowerShell command using the -? switch. For example

```
ls -?
```

When you do this, you’ll get help for the **Get-ChildItem** Cmdlet, which would be confusing if you didn’t know that **ls** is actually an alias for **Get-ChildItem**.

## History

Up arrow browses previous commands.

By default, PowerShell version 2 remembers the last 64 commands whereas PowerShell version 3 remembers 4096. This number is controlled by the $MaximumHistoryCount variable

```
PS > $MaximumHistoryCount #Display the current value
PS > $MaximumHistoryCount=150 #Change it to 150
PS > history #Display recent history using the alias version of the command
PS > get-history #Display recent history using the Cmdlet direct
```

Although it remembers more, PowerShell only shows the last 32 commands by default. To see a different number, use the -count switch:

```
PS > get-history -count 50
```

To run the Nth command in the history use Invoke-History

```
PS > invoke-history 7
```

## Word count (and more) using Measure-Object

Linux has a command called **wc** that counts the number of lines and words in a file. PowerShell has no such command but we can do something similar with the **Measure-Object** Cmdlet.

Say we want to count the number of lines, words and characters in the file foo.txt. The first step is to get the content of the file

```
get-content foo.txt # gets the content of foo.txt
```

Next, we pipe the result of the get-content Cmdlet to Measure-Object, requesting lines, words and characters

```
get-content foo.txt | measure-object -line -character -word
```

The measure-object Cmdlet can also count files

```
ls *.txt | measure-object #Counts number of .txt files in the current folder
```

When you execute the above command, a table of results will be returned:

```
Count : 3
Average :
Sum :
Maximum :
Minimum :
Property :
```

This is because the measure-object Cmdlet, like all PowerShell Cmdlets, actually returns an object and the above table is the textual representation of that object.

The fields in this table hint that **measure-object** can do a lot more than simply count things. For example, here we find some statistics concerning the file lengths found by the ls *.txt command

```
ls *.txt | measure-object -property length -minimum -maximum -sum -average
```

You may wonder exactly what type of object has been returned from measure-object; we can discover this by running the GetType() method of the returned object:

```
(ls *.txt | measure-object).gettype()
```

Request just the name as follows

```
(ls *.txt | measure-object).gettype().Name
GenericMeasureInfo
```

To find out what properties an object has, pass it to the **get-member** Cmdlet

```
#Return all member types
ls *.txt | get-member
#Return only Properties
ls *.txt | get-member -membertype property
```

Sometimes, you’ll want to simply return the numerical value of an object’s property and you do this using the **select-object** Cmdlet. Here we ask for just the Count property of the GenericMeasureInfo object returned by **measure-object**.

```
#Counts the number of *.txt files and returns just the numerical result
ls *.txt | measure-object | select-object -expand Count
```

## Searching within files

The Unix world has **grep**; PowerShell has **Select-String**. Try running the following on haiku.txt:

```
Select-String the haiku.txt #Case insensitive by default, unlike grep
Select-String the haiku.txt -CaseSensitive #Behaves more like grep
Select-String day haiku.txt -CaseSensitive
Select-String is haiku.txt -CaseSensitive
Select-String 'it is' haiku.txt -Casesensitive
```

There is no direct equivalent to grep’s -w switch.

```
grep -w is haiku.txt #exact match
```

However, you can get the same behaviour using the word boundary anchors, **\b**

```
Select-String \bis\b haiku.txt -casesensitive
```

Grep has a -v switch that shows all lines that do not match a pattern. **Select-String** makes use of the **-notmatch** switch.

```
BASH: grep -v "is" haiku.txt
PS: select-string -notmatch "is" haiku.txt -CaseSensitive
```

Grep has an -r switch which stands for ‘recursive’. The following will search through all files and subfolders of your current directory, looking for files that contain **is**

```
grep -r is *
```

**Select-String** has no direct equivalent to this. However, you can do the same thing by using get-childitem to get the list of files and piping the output to **select-string**:

```
get-childitem * -recurse | select-string is
```

One difference between **grep** and **Select-String** is that the latter includes the filename and line number of each match.

```
grep the haiku.txt
Is not the true Tao, until
and the presence of absence:
Select-String the haiku.txt -CaseSensitive
haiku.txt:2:Is not the true Tao, until
haiku.txt:6:and the presence of absence:
```

To get the **grep**-like output, use the following

```
Select-String the haiku.txt -CaseSensitive | ForEach-Object {$_.Line}
Is not the true Tao, until
and the presence of absence:
```

To understand how this works, you first have to know that **Select-String** returns an **array** of **MatchInfo** objects when there is more than one match. To demonstrate this:

```
$mymatches = Select-String the haiku.txt -CaseSensitive #Put all matches in the variable 'mymatches'
$mymatches -is [Array] #query if 'mymatches' is an array
True
```

So, mymatches is an array. We can see how many elements it has using the array’s Count property

```
$mymatches.Count
2
```

The types of the elements in a PowerShell array don’t necessarily have to be the same. In this case, however, they are.

```
$mymatches[0].gettype()
$mymatches[1].gettype()
```

both of these give the output

```
IsPublic IsSerial Name       BaseType
-------- -------- ----       --------
True     False    MatchInfo  System.Object
```

If all you wanted was the name of the first object type, you’d do

```
$mymatches[0].gettype().name
MatchInfo
```

Alternatively, we could have asked for each element’s type using the **ForEach-Object** Cmdlet to loop over every object in the array.

```
$mymatches | Foreach-Object {$_.gettype().Name}
```

Here, **$_** is a special variable that effectively means ‘current object’, or ‘the object currently being considered by ForEach-Object’ if you want to be more verbose.

So, we know that we have an array of 2 MatchInfo objects in our variable mymatches. What does this mean? What properties do MatchInfo objects have? We can find out by piping one of them to the **Get-Member** Cmdlet.

```
$mymatches[0] | Get-Member

   TypeName: Microsoft.PowerShell.Commands.MatchInfo

Name         MemberType Definition
----         ---------- ----------
Equals       Method     bool Equals(System.Object obj)
GetHashCode  Method     int GetHashCode()
GetType      Method     type GetType()
RelativePath Method     string RelativePath(string directory)
ToString     Method     string ToString(), string ToString(string directory)
Context      Property   Microsoft.PowerShell.Commands.MatchInfoContext Context {get;set;}
Filename     Property   System.String Filename {get;}
IgnoreCase   Property   System.Boolean IgnoreCase {get;set;}
Line         Property   System.String Line {get;set;}
LineNumber   Property   System.Int32 LineNumber {get;set;}
Matches      Property   System.Text.RegularExpressions.Match[] Matches {get;set;}
Path         Property   System.String Path {get;set;}
Pattern      Property   System.String Pattern {get;set;}
```

Now we can see that each MatchInfo object has a Line property and it’s reasonable to guess that this contains the line containing a match. Taking a look:

```
$mymatches[0].Line
Is not the true Tao, until
```

Bringing together everything we’ve seen so far, we pull out the Line property of each element in the array as follows

```
$mymatches | Foreach-Object {$_.Line}
```

Alternatively, we can ditch the $mymatches variable and pipe in the output of **Select-String** directly

```
Select-String the haiku.txt -CaseSensitive | ForEach-Object {$_.Line}
Is not the true Tao, until
and the presence of absence:
```

## Regular expressions

```
select-string 's*is' haiku.txt    # * Zero or more of the preceding token
select-string 's+is' haiku.txt    # + One or more of the preceding token
select-string '.nd' haiku.txt     # . Any single character, here followed by 'nd'
select-string 'es' haiku.txt      # Matches 'es'
select-string 'es[ht]' haiku.txt  # [ht] Exactly one of the characters listed
select-string 'es[^ht]' haiku.txt # [^ht] Anything except h and t
select-string '\bis\b' haiku.txt  # \b Word boundaries
```

## Input and output redirection

`>` redirects output (AKA standard output). This works in both Bash and PowerShell. For example, in Bash we might do

```
#BASH
grep -r not * > found_nots.txt
```

Drawing on what we've learned so far, you might write the PowerShell version of this command as

```
#PS
get-childitem *.txt -recurse | select-string not > found_nots.txt
```

However, if you do this, you will find that the command runs forever with the hard disk chugging like crazy. If you’ve run the above command, **Ctrl+C** will stop it. The problem is that PowerShell includes the output file, found_nots.txt, in its own input, which leads to an infinite loop. To prevent this, we must explicitly exclude the output file from the **get-childitem** search:

```
get-childitem *.txt -Exclude 'found_nots.txt' -recurse | select-string not > found_nots.txt
cat found_nots.txt
ls *.txt > txt_files.txt
cat txt_files.txt
```

In Linux, `<`

redirects input (AKA standard input). This does not work in PowerShell:

```
cat < haiku.txt
At line:1 char:5
+ cat < haiku.txt
+ ~
The '<' operator is reserved for future use.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : RedirectionNotSupported
```

The above is a forced use of < since one could simply do

```
cat haiku.txt
```

Recall that **cat** is an alias for **get-content**. The use of **get-content** is an idiom that gets around the lack of a < operator. For example, instead of

```
foo < input.txt
```

One does

```
get-content input.txt | foo
```

Error messages are output on standard error

```
ls idontexist.txt > output.txt
cat output.txt #output.txt is empty
ls idontexist.txt 2> output.txt # 2 is standard error
ls haiku.txt 1> output.txt # 1 is standard output
ls haiku.txt,test_file.txt 2>&1 > output.txt # Combine the two streams.
```

## Searching for files

```
# Find all
UNIX: find .
PS: get-childitem . -Recurse
PS: get-childitem . -Recurse | foreach-object {$_.FullName} #To give output identical to find
```

To save on typing, you can use the alias **gci** instead of **get-childitem**

```
# Directories only
UNIX: find . -type d
PS2: gci . -recurse | where { $_.PSIsContainer }
PS3: gci -recurse -Directory
```

If you have PowerShell 2, you can only use the long-winded version; it’s simpler in PowerShell 3. Similarly for searching for files only:

```
# Files only
UNIX: find . -type f
PS2: get-childitem -recurse | where { ! $_.PSIsContainer }
PS3: gci -recurse -File
```

With the Unix find command, you can specify the maximum and minimum search depths. There is no direct equivalent in PowerShell although you could write a function that will do this. Such a function can be found at http://windows-powershell-scripts.blogspot.co.uk/2009/08/unix-linux-find-equivalent-in.html although **I have not tested this!**

```
# Maximum depth of tree
UNIX: find . -maxdepth 2
PS : No direct equivalent
# Minimum depth of tree
UNIX: find . -mindepth 3
PS : No direct equivalent
```

You can also filter by name. Confusingly, PowerShell offers two ways of doing this. More details on the differences between these can be found at http://tfl09.blogspot.co.uk/2012/02/get-childitem-and-theinclude-and-filter.html

One key difference between find and get-childitem is that the latter is case-insensitive whereas find is case-sensitive.
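As a quick illustration (the pattern here is arbitrary), an upper-case pattern still matches lower-case filenames:

```
gci -recurse -include *.TXT #Also matches haiku.txt since get-childitem ignores case
```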

```
# Find by name
UNIX: find . -name '*.txt'
PS: gci -recurse -include *.txt
PS: gci -recurse -filter *.txt
#Find empty files
UNIX: find . -empty
PS: gci -recurse | where {$_.Length -eq 0} | Select FullName
#Create empty file
UNIX: touch emptyfile.txt
PS: new-item emptyfile.txt -type file
```

## Command Substitution

In bash, you can execute a command using backticks and the result is substituted in place. i.e.

```
#bash
foo `bar`
```

Backticks are used as escape characters in PowerShell, so you do the following instead

```
#PS
foo $(bar)
```

In both cases, the command **bar** is executed and the result is substituted into the call to **foo**.
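As a minimal sketch, here the output of **get-date** is substituted into a string:

```
#PS
echo "This command was run on $(get-date)"
```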

## Power of the pipe

`|` is a pipe. Use the pipe to connect the output of one command to the input of another.

Count text files:

```
ls -filter *.txt | measure
```

`ls` outputs a list of files; `measure` takes that list as its input.

```
echo "Number of .txt files:"; ls -filter *.txt | measure | select -expand count
```

`;` is equivalent to running two commands on separate lines.

Question: what does this do?

```
ls -name | select-string s | measure
```

Answer: it counts the number of files with `s` in their name.

```
history | select-string 'echo'
```

This demonstrates the power of well-defined modular components with well-defined interfaces:

- Bolt them together to create powerful computational and data processing workflows.
- It is a good design principle applicable to programming – Python modules, C libraries, Java classes – modularity and reuse.
- “Little pieces loosely joined”: `history` + `select-string` = a function to search for a command.

## Variables

```
get-variable # See all variables
$MYFILE="data.txt" # Need quotes around strings
echo $MYFILE
echo "My file name is $MYFILE"
$num = 1 #Numbers don't need quotes
$num = $num+1 #Simple Arithmetic
$TEXT_FILES=get-childitem *.txt #Save output of get-childitem
echo $TEXT_FILES
```

Variables only persist for the duration of the current PowerShell session.

## Environment variables

Windows environment variables don’t show up when you execute `get-variable`; to list them all, do the following:

```
#PS
get-childitem env: #Show all Windows Environment variables
echo $env:PATH #Show the contents of PATH
$env:Path = $env:Path + ";C:\YourApp\bin\" #temporarily add a folder to PATH
```

This modification to PATH will only last as long as the current session. It is possible to permanently modify the system PATH but this should only be done with extreme care and is not covered here.

## PowerShell Profile

The PowerShell profile is a script that is executed whenever you launch a new session. Every user has their own profile. The location of your PowerShell profile is defined by the variable **$profile**

```
$profile
```

Open it with

```
notepad $profile
```

Add something to it such as

```
echo "Welcome to PowerShell. This is a message from your profile"
```

Restart PowerShell and you should see the message. You can use this profile to customise your PowerShell sessions. For example, if you have installed NotePad++, you might find adding the following function to your PowerShell Profile to be useful.

```
# Launch notepad++ using the npp command
function npp($file)
{
    if ($file -eq $null)
    {
        & "C:\Program Files (x86)\Notepad++\notepad++.exe";
    }
    else
    {
        & "C:\Program Files (x86)\Notepad++\notepad++.exe" $file;
    }
}
```

With this function in your profile, you can open Notepad++ with the command **npp** or **npp filename.txt**.

## Conditionals

```
$num = 1
if($num -eq 1)
{
    write-host 'num equals 1'
}

$word="hello"
if($word -eq "hello")
{
    write-host 'The same'
}
```

By default, string tests are case insensitive

```
$word="hello"
if($word -eq "HELLO")
{
    write-host 'The same'
}
```

To force them to be case sensitive, add a `c`

to the operator:

```
$word="hello"
if($word -ceq "HELLO")
{
    write-host "The Same. This won't be printed"
}
```

You can similarly be explicitly case-insensitive by adding an `i`. Although this is the same behaviour as the undecorated operators and so might seem unnecessary, it shows your intent to the reader.

```
$word="hello"
if($word -ieq "HELLO")
{
    write-host 'The same'
}
```

### Comparison Operators

```
-eq Equal to
-lt Less than
-gt Greater than
-ge Greater than or equal to
-le Less than or equal to
-ne Not equal to
```

### Logical operators

```
-not Not
! Not
-or Or
-and And
```

## Loops

PowerShell has several looping constructs. Here, I only consider two.

### for

Allows you to run a block of code a set number of times.

```
for ($i=1; $i -le 5; $i=$i+1)
{
    Write-Host $i
}
```

### foreach

Do something with every element of a collection

```
foreach($item in $(ls *.txt)) {echo $item.Name}
```

### foreach vs foreach-object

TODO
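Pending proper notes, here is a rough sketch of the key practical difference: the **foreach** statement evaluates the entire collection before looping, whereas the **ForEach-Object** Cmdlet processes objects one at a time as they arrive down the pipeline, which can use much less memory on large collections.

```
# foreach statement: the whole collection is built in memory first
foreach ($file in get-childitem *.txt)
{
    echo $file.Name
}

# ForEach-Object Cmdlet: objects stream down the pipeline one at a time
get-childitem *.txt | ForEach-Object { echo $_.Name }
```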

## Shell scripts

- Save retyping.
- PowerShell scripts have the extension .ps1
- PowerShell scripts must be created with plain text editors such as Notepad or Notepad++. NEVER use Microsoft Word!

Here is a very simple script

```
notepad protein_filter.ps1 #Open the file and enter the script below

#A simple protein filter
$DATE = get-date
echo "Processing date: $DATE"
foreach($item in get-childitem *.pdb)
{
    echo $item.Name
}
echo "Processing complete"
```

To run this, type the filename prefixed with .\ since PowerShell will not run a script from the current folder otherwise:

```
.\protein_filter.ps1
```

If you get an error message, it may be because your execution policy is set not to run scripts locally. Change this with the command

```
Set-ExecutionPolicy RemoteSigned #Allow local scripts to run. Needs to be run with admin privileges
.\protein_filter.ps1 #Run the script
```

## Download files via command-line

As an example, let’s download the file ‘Primary Care Trust Prescribing Data – April 2011 onwards’:

```
$file="prim-care-trus-pres-data-apr-jun-2011-dat.csv"
$path="$pwd\$file" #Path needs to be fully qualified. This puts it in the current folder
$url = "http://www.ic.nhs.uk/catalogue/PUB02342/$file"
$client = new-object System.Net.WebClient
$client.DownloadFile( $url, $path )
```

## Permissions

Windows file permissions are rather more complicated than those of Linux but most users won’t need to worry about them in day to day use.
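Should you ever need to inspect them, the **Get-Acl** Cmdlet shows a file’s owner and access rules. A minimal sketch (haiku.txt is just an example file):

```
Get-Acl haiku.txt         # Show the owner and access rules for a file
(Get-Acl haiku.txt).Owner # Show just the owner
```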

## The call operator

It is sometimes convenient to construct strings that contain the full path to a PowerShell script we want to execute. For example:

```
$fullpath = "$pwd\myscript.ps1"
```

To actually run the script pointed to by this variable, you need to use the call operator **&**

```
& $fullpath #Runs myscript.ps1
```

You also need to do this if you try to call any script where the path contains spaces

```
"C:\Program Files\myscript.ps1" #Will just display the string literal
& "C:\Program Files\myscript.ps1" #Runs the script
```

## Background Jobs

Consider the script counter.ps1

```
param($step=1)
#counter.ps1: A simple, long running job to demonstrate background jobs
$i=1
while ( $i -lt 200000 )
{
    echo $i
    $i=$i+$step
}
```

This counts up to 200000 in user-defined steps.

```
./counter.ps1 > 1step.txt #Counts in steps of 1
./counter.ps1 -step 2 > 2step.txt #Counts in steps of 2
```

The script takes quite a while to complete and you cannot do anything else in your PowerShell session while it is working. Using the **start-job** Cmdlet, we can run counter.ps1 in the background

```
start-job -scriptblock { C:\Users\walkingrandomly\Dropbox\SSI_Windows\dir_full_of_files\some_directory\counter.ps1 > C:\Users\walkingrandomly\Dropbox\SSI_Windows\dir_full_of_files\some_directory\outcount1.txt }
```

Note that you have to use the **full path** to both the counter.ps1 script and the output file. You can see the status of the job with **get-job**

```
get-job

Id  Name  State    HasMoreData  Location   Command
--  ----  -----    -----------  --------   -------
1   Job1  Running  True         localhost  C:\Users\walkingrando...
```

Eventually, your job will complete

```
get-job

Id  Name  State      HasMoreData  Location   Command
--  ----  -----      -----------  --------   -------
1   Job1  Completed  False        localhost  C:\Users\walkingrando...
ls outcount* #Ensure that output file has been created
remove-job 1 #remove remnants of job 1 from the queue
get-job #Check that queue is empty
```

You can run as many simultaneous jobs as you like but it is best not to run too many or your computer will grind to a halt.

Here’s an example that runs 5 counter.ps1 jobs concurrently

```
#parallel_counters.ps1
#Runs 5 instances of counter.ps1 in parallel
$scriptname = "counter.ps1"
$outputfileBase = "outfile"
$outputfileExt = ".txt"
$scriptPath = "$pwd\$scriptname"
for ($i=1; $i -le 5; $i++)
{
    $outputfilePath = "$pwd\$outputfileBase" + $i + $outputfileExt
    $command = "$scriptPath -step $i `> $outputfilePath"
    $myScriptBlock = [scriptblock]::Create($command)
    start-job -scriptblock $myScriptBlock
}
```

Run this as a demonstration

```
.\parallel_counters.ps1
get-job #Keep running until all have completed
ls outfile*
more outfile5.txt
more outfile2.txt
$myjob=get-job 2 #Get info on job Id 2 and store in variable $myjob
$myjob.Command #Look at the command that comprised job 2
remove-job * #Remove all job remnants from the queue
get-job #Should be empty
```

TODO: Dealing with output, **receive-job**
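I haven’t written these notes yet but, as a minimal sketch, **receive-job** collects the output of a background job once it has completed:

```
$myjob = start-job -scriptblock { get-process } # Start a background job
wait-job $myjob                                 # Block until it completes
receive-job $myjob                              # Collect the job's output
remove-job $myjob                               # Remove it from the queue
```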

## Secure Shell

There is no equivalent to the Linux commands **ssh** and **sftp** in PowerShell. The following free programs are recommended

- http://www.chiark.greenend.org.uk/~sgtatham/putty/ – PuTTY is a free implementation of Telnet and SSH for Windows and Unix platforms.
- http://winscp.net/eng/index.php – Free SFTP, SCP and FTP client for Windows.
- http://mobaxterm.mobatek.net/ – A more advanced terminal than PuTTY with a free ‘personal edition’ and a paid-for ‘professional edition’

## Packaging

There are no direct PowerShell equivalents to zip, unzip, tar etc. There are `write-zip`, `write-tar` and `write-gzip` cmdlets in the third-party, free PowerShell Community Extensions but I have not investigated them yet.

## Transcripts

`Start-Transcript` initializes a transcript file which records all subsequent input/output. Use the following syntax:

```
Start-Transcript [[-path] FilePath] [-force] [-noClobber] [-append]
```

`Stop-Transcript` stops recording and finalizes the transcript.

```
start-transcript -path ./diary.txt
ls
echo "Hello dear diary"
stop-transcript
cat diary.txt
```

- Record commands typed, commands with lots of outputs, trial-and-error when building software.
- Send exact copy of command and error message to support.
- Turn into blog or tutorial.

## Shell power

Bentley, Knuth and McIlroy (1986) “Programming pearls: A literate program”, Communications of the ACM, 29(6), pp. 471–483, June 1986. DOI: 10.1145/5948.315654.

Dr. Drang, More shell, less egg, December 4th, 2011.

Common words problem: read a text file, identify the N most frequently occurring words and print out a sorted list of the words with their frequencies.

10 plus pages of Pascal … or … 1 line of shell

```
#BASH version
$ nano words.sh    #Create the script containing this one-liner:
tr -cs A-Za-z '\n' | tr A-Z a-z | sort | uniq -c | sort -rn | sed ${1}q
$ chmod +x words.sh
$ ./words.sh 10 < README.md
```

The PowerShell version is more complicated but still very short compared to the 10 pages of Pascal

```
#count_words.ps1
Param([string]$filename,[int]$num)
$text=get-content $filename
$split = foreach($line in $text) {$line.tolower() -split "\W"}
$split | where-object{-not [string]::IsNullorEmpty($_)} | group -noelement | sort Count -Descending | select -first $num
.\count_words.ps1 README.md 10
```

“A wise engineering solution would produce, or better, exploit, reusable parts.” – Doug McIlroy

## Links

- Software Carpentry’s online Bash shell lectures.
- G. Wilson, D. A. Aruliah, C. T. Brown, N. P. Chue Hong, M. Davis, R. T. Guy, S. H. D. Haddock, K. Huff, I. M. Mitchell, M. Plumbley, B. Waugh, E. P. White, P. Wilson (2012) “Best Practices for Scientific Computing“, arXiv:1210.0530 [cs.MS].

**Introduction**

Mathematica 10 was released yesterday amid the usual marketing storm we’ve come to expect for a new product release from Wolfram Research. Some of the highlights of this marketing information include

- Mathematica 10 blog post by Stephen Wolfram
- What’s new in 10? From Wolfram Research
- Alphabetical list of new functions

Without a doubt, there is a lot of great new functionality in this release and I’ve been fortunate enough to have access to some pre-releases for a while now. There is no chance that I can compete with the in-depth treatments given by Wolfram Research of the 700+ new functions in Mathematica 10 so I won’t try. Instead, I’ll hop around some of the new features, settling on those that take my fancy.

**Multiple Undo**

I’ve been a Mathematica user since version 4 of the product, which was released way back in 1999. For the intervening 15 years, one aspect of Mathematica that always frustrated me was the fact that the undo feature could only undo the most recent action. I, along with many other users, repeatedly asked for multiple undo to be implemented but we were bitterly disappointed for release after release.

Few things have united Mathematica users more than the need for multiple undo:

- A website with petition calling for multiple undo
- A Facebook page calling for multiple undo
- A very popular Mathematica StackExchange question about multiple undo

Finally, with version 10, our hopes and dreams have been answered and multiple undo is finally here. To be perfectly honest, THIS is the biggest reason to upgrade to version 10. Everything else is just tasty gravy.

**Key new areas of functionality**

Most of the things considered in this article are whimsical and only scratch the surface of what’s new in Mathematica 10. I support Mathematica (and several other products) at the University of Manchester in the UK and so I tend to be interested in things that my users are interested in. Here’s a brief list of new functionality areas that I’ll be shouting about

- **Machine learning** – I know several people who are seriously into machine learning but few of them are Mathematica users. I’d love to know what they make of the things on offer here.
- **New image processing functions** – The Image Processing Toolbox is one of the most popular MATLAB toolboxes on my site. I wonder if this will help turn MATLAB-heads. I also know people in a visualisation group who may be interested in the new 3D functions on offer.
- **Nonlinear control theory** – Various people in our electrical engineering department are looking at alternatives to MATLAB for control theory. Maple/MapleSim and Scilab/Xcos are the key contenders. SystemModeler is too expensive for us to consider but the amount of control functionality built into Mathematica is useful.

**Entities – a new data type for things that Mathematica knows stuff about**

One of the new functions I’m excited about is GeoGraphics that pulls down map data from Wolfram’s servers and displays them in the notebook. Obviously, I did not read the manual and so my first attempt at getting a map of the United Kingdom was

GeoGraphics["United Kingdom"]

What I got was a map of my home town, Sheffield, surrounded by a red cell border indicating an error message

The error message is “United Kingdom is not a Graphics primitive or directive.” The practical upshot of this is that GeoGraphics is not built to take strings as arguments. Fair enough, but why the map of Sheffield? Well, if you call GeoGraphics[] on its own, the default action is to return a map centred on the GeoLocation derived from your current IP address, and it seems to do the same if you send something it can’t interpret to GeoGraphics. In all honesty, I’d have preferred no map and a simple error message.

In order to get what I want, I have to pass an Entity that represents the UK to the GeoGraphics function. Entities are a new data type in Mathematica 10 and, as far as I can tell, they formally represent ‘things’ that Mathematica knows about. There are several ways to create entities but here I use the new Interpreter function

From the above, you can see that Entities have a special new StandardForm but their InputForm looks straightforward enough. One thing to bear in mind here is **that all of the above functions require a live internet connection in order to work**. For example, thinking that I had gotten the hang of the Entity syntax, I switched off my internet connection and did

```
In[92]:= mycity = Entity["City", "Manchester"]

During evaluation of In[92]:= URLFetch::invhttp: Couldn't resolve host name. >>

During evaluation of In[92]:= WolframAlpha::conopen: Using WolframAlpha requires internet connectivity. Please check your network connection. You may need to configure your firewall program or set a proxy in the Internet Connectivity tab of the Preferences dialog. >>

Out[92]= Entity["City", "Manchester"]
```

So, you need an internet connection even to create entities at this most fundamental level. Perhaps it’s for validation? Turning the internet connection back on and re-running the above command removes the error message but the thing that’s returned isn’t in the new StandardForm:

If I attempt to display a map using the mycity variable, I get the map of Sheffield that I currently associate with something having gone wrong (if I’d tried this out at work, in Manchester, on the other hand, I would have thought it had worked perfectly!). So, there is something very wrong with the entity I am using here – it doesn’t look right and it doesn’t work right – clearly that connection to WolframAlpha during its creation was not for validation (or if it was, it hasn’t helped). I turn back to the Interpreter function:

```
In[102]:= mycity2 = Interpreter["City"]["Manchester"] // InputForm

Out[102]//InputForm:= Entity["City", {"Manchester", "Manchester", "UnitedKingdom"}]
```

So, clearly my guess at how a City entity should look was completely incorrect. For now, I think I’m going to avoid creating Entities directly and rely on the various helper functions such as Interpreter.

**What are the limitations of knowledge based computation in Mathematica?**

All of the computable data resides in the Wolfram Knowledgebase, which is a new name for the data store used by Wolfram Alpha, Mathematica and many other Wolfram products. In his recent blog post, Stephen Wolfram says that they’ll soon release the Wolfram Discovery Platform, which will allow large-scale access to the Knowledgebase, and indicated that ‘basic versions of Mathematica 10 are just set up for small-scale data access.’ I have no idea what this means or what limitations are in place, and I can’t find anything in the documentation.

Until I understand what limitations there might be, I find myself unwilling to use these data-centric functions for anything important.

**IntervalSlider – a new control for Manipulate**

I’ll never forget the first time I saw a Mathematica Manipulate – it was at the 2006 International Mathematica Symposium in Avignon when version 6 was still in beta. A Wolfram employee created a fully functional, interactive graphical user interface with just a few lines of code in about 2 minutes – I’d never seen anything like it and was seriously excited about the possibilities it represented.

Eight years and four Mathematica versions later, we can see just how amazing this interactive functionality turned out to be. It forms the basis of the Wolfram Demonstrations Project, which currently has 9677 interactive demonstrations covering dozens of fields in engineering, mathematics and science.

Not long after Mathematica introduced Manipulate, the Sage team introduced a similar function called interact. The interact function had something that Manipulate did not – an interval slider (see the interaction called *‘Numerical integrals with various rules’* at http://wiki.sagemath.org/interact/calculus for an example of it in use). This control allows the user to specify intervals on a single slider and is very useful in certain circumstances.

As of version 10, Mathematica has a similar control called an IntervalSlider. Here’s some example code of it in use

```
Manipulate[
 pl1 = Plot[Sin[x], {x, -Pi, Pi}];
 pl2 = Plot[Sin[x], {x, range[[1]], range[[2]]}, Filling -> Axis,
   PlotRange -> {-Pi, Pi}];
 inset = Inset["The Integral is approx " <>
    ToString[NumberForm[Integrate[Sin[x], {x, range[[1]], range[[2]]}],
      {3, 3}, ExponentFunction -> (Null &)]], {2, -0.5}];
 Show[{pl1, pl2}, Epilog -> inset],
 {{range, {-1.57, 1.57}}, -3.14, 3.14,
  ControlType -> IntervalSlider, Appearance -> "Labeled"}]
```

and here’s the result:

**Associations – A new kind of brackets**

Mathematica 10 brings a new fundamental data type to the language: associations. As far as I can tell, these are analogous to dictionaries in Python or Julia since they consist of key-value pairs. Since Mathematica has already used every bracket type there is, Wolfram Research have had to invent a new one for associations.

Let’s create an association called scores that links 3 people to their test results

In[1]:= scores = <|"Mike" -> 50.2, "Chris" -> 100.00, "Johnathan" -> 62.3|> Out[1]= <|"Mike" -> 50.2, "Chris" -> 100., "Johnathan" -> 62.3|>

We can see that the Head of the scores object is Association

In[2]:= Head[scores] Out[2]= Association

We can pull out a value by supplying a key. For example, let’s see what value is associated with “Mike”

In[3]:= scores["Mike"] Out[3]= 50.2

All of the usual functions you’d expect for dictionary-type objects are there:

```
In[4]:= Keys[scores]
Out[4]= {"Mike", "Chris", "Johnathan"}

In[5]:= Values[scores]
Out[5]= {50.2, 100., 62.3}

In[6]:= (*Show that the key "Bob" does not exist in scores*)
KeyExistsQ[scores, "Bob"]
Out[6]= False
```

If you ask for a key that does not exist this happens:

In[7]:= scores["Bob"] Out[7]= Missing["KeyAbsent", "Bob"]
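Since associations behave much like Python dictionaries, a rough Python analogue of the examples above may help readers coming from that language. This is a sketch for comparison only: where Mathematica returns Missing["KeyAbsent", "Bob"], I approximate the behaviour with dict.get, which returns None for an absent key rather than raising an error.

```python
# A rough Python-dictionary analogue of the Mathematica association above.
scores = {"Mike": 50.2, "Chris": 100.0, "Johnathan": 62.3}

print(scores["Mike"])         # like scores["Mike"] in Mathematica -> 50.2
print(list(scores.keys()))    # like Keys[scores]
print(list(scores.values()))  # like Values[scores]
print("Bob" in scores)        # like KeyExistsQ[scores, "Bob"] -> False

# Mathematica returns Missing["KeyAbsent", "Bob"] for an absent key;
# dict.get returns a default (None here) instead of raising KeyError.
print(scores.get("Bob"))      # None
```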

There’s a new function called Counts that takes a list and returns an association which counts the unique elements in the list:

In[8]:= Counts[{q, w, e, r, t, y, u, q, w, e}] Out[8]= <|q -> 2, w -> 2, e -> 2, r -> 1, t -> 1, y -> 1, u -> 1|>

Let’s use this to find out something interesting, such as the most-used words in the classic text Moby Dick

```
In[1]:= (*Import Moby Dick from Project Gutenberg*)
MobyDick = Import["http://www.gutenberg.org/cache/epub/2701/pg2701.txt"];

(*Split into words*)
words = StringSplit[MobyDick];

(*Convert all words to lowercase*)
words = Map[ToLowerCase[#] &, words];

(*Create an association of words and corresponding word count*)
wordcounts = Counts[words];

(*Sort the association by value*)
wordcounts = Sort[wordcounts, Greater];

(*Show the top 10*)
wordcounts[[1 ;; 10]]

Out[6]= <|"the" -> 14413, "of" -> 6668, "and" -> 6309, "a" -> 4658,
 "to" -> 4595, "in" -> 4116, "that" -> 2759, "his" -> 2485,
 "it" -> 1776, "with" -> 1750|>
```

All told, associations are a useful addition to the Mathematica language and I’m happy to see them included. Many existing functions have been updated to handle Associations making them a fundamental part of the language.

- How to make use of Associations? – a question from Mathematica Stack Exchange

**s/Mathematica/Wolfram Language/g**

I’ve been programming in Mathematica for well over a decade but the language is no longer called ‘Mathematica’, it’s now called ‘The Wolfram Language.’ I’ll not lie to you, this grates a little but I guess I’ll just have to get used to it. Flicking through the documentation, it seems that a global search and replace has happened and almost every occurrence of ‘Mathematica’ has been changed to ‘The Wolfram Language’

This is part of a huge marketing exercise for Wolfram Research and I guess that part of the reason for doing it is to shift the emphasis away from mathematics to general purpose programming. I wonder if this marketing push will increase the popularity of The Wolfram Language as measured by the TIOBE index? Neither ‘Mathematica’ nor ‘The Wolfram Language’ is listed in the top 100, and the last time I saw more detailed results, Mathematica was at number 128.

**Fractal exploration**

One of Mathematica’s competitors, Maple, had a new release recently which saw the inclusion of a set of fractal exploration functions. Although I found this a fun and interesting addition to the product, I did think it rather an odd thing to do. After all, if any software vendor is stuck for functionality to implement, there is a whole host of things to do that rank higher in most users’ lists of priorities than a function that plots a standard fractal.

It seems, however, that both Maplesoft and Wolfram Research have seen a market for such functionality. Mathematica 10 comes with a set of functions for exploring the Mandelbrot and Julia sets. The Mandelbrot set alone accounts for at least 5 of Mathematica 10’s 700 new functions: MandelbrotSetBoettcher, MandelbrotSetDistance, MandelbrotSetIterationCount, MandelbrotSetMemberQ and MandelbrotSetPlot.

MandelbrotSetPlot[]

**Barcodes**

I found this more fun than is reasonable! Mathematica can generate and recognize bar codes and QR codes in various formats. For example

BarcodeImage["www.walkingrandomly.com", "QR"]

Scanning the result using my mobile phone brings me right back home :)

**Unit Testing**

A decent unit testing framework is essential to anyone who’s planning to do serious software development. Python has had one for years, MATLAB got one in 2013a and now Mathematica has one. This is good news! I’ve not had the chance to look at it in any detail, however. For now, I’ll simply nod in approval and send you to the documentation. Opinions welcomed.

**Disappointments in Mathematica 10**

There’s a lot to like in Mathematica 10 but there are also several aspects that disappointed me.

**No update to RLink**

Version 9 of Mathematica included integration with R which excited quite a few people I work with. Sadly, it seems that there has been no work on RLink at all between version 9 and 10. Issues include:

- The version of R bundled with RLink is stuck at 2.14.0, which is almost 3 years old. On Mac and Linux, it is not possible to use your own installation of R, so we really are stuck with 2.14. On Windows, it is possible to use your own installation of R, but check whether the version 3 issue described at http://mathematica.stackexchange.com/questions/27064/rlink-and-r-v3-0-1 has been fixed.
- It is only possible to install extra R packages on Windows. Mac and Linux users are stuck with just base R.

This lack of work on RLink really is a shame since the original release was a very nice piece of work.

If the combination of R and notebook environment is something that interests you, I think that the current best solution is to use the R magics from within the IPython notebook.

**No update to CUDA/OpenCL functions**

Mathematica introduced OpenCL and CUDA functionality back in version 8 but very little appears to have been done in this area since. In contrast, MATLAB has improved on its CUDA functionality (it has never supported OpenCL) every release since its introduction in 2010b and is now superb!

Accelerating computations using GPUs is a big deal at the University of Manchester (my employer) which has a GPU-club made up of around 250 researchers. Sadly, I’ll have nothing to report at the next meeting as far as Mathematica is concerned.

**FinancialData is broken (and this worries me more than you might expect)**

I wrote some code a while ago that used the FinancialData function and it suddenly stopped working because of some issue with the underlying data source. In short, this happens:

In[12]:= FinancialData["^FTAS", "Members"] Out[12]= Missing["NotAvailable"]

This wouldn’t be so bad if it were not for the fact that an example given in Mathematica’s own documentation fails in exactly the same way! The documentation in both version 9 and 10 give this example:

In[1]:= FinancialData["^DJI", "Members"] Out[1]= {"AA", "AXP", "BA", "BAC", "CAT", "CSCO", "CVX", "DD", "DIS", \ "GE", "HD", "HPQ", "IBM", "INTC", "JNJ", "JPM", "KFT", "KO", "MCD", \ "MMM", "MRK", "MSFT", "PFE", "PG", "T", "TRV", "UTX", "VZ", "WMT", \ "XOM"}

but what you actually get is

In[1]:= FinancialData["^DJI", "Members"] Out[1]= Missing["NotAvailable"]

For me, the implications of this bug reach far beyond a few broken examples. Wolfram Research are making a big deal of the fact that Mathematica gives you access to computable data sets, data sets that you can just use in your code and not worry about the details.

Well, I did just as they suggest, and it broke!

**Summary**

I’ve had a lot of fun playing with Mathematica 10 but that’s all I’ve really done so far – play – something that’s probably obvious from my choice of topics in this article. Even through play, however, I can tell you that this is a very solid new release with some exciting new functionality. Old-time Mathematica users will want to upgrade for multiple-undo alone and people new to the system have an awful lot of toys to play with.

Looking to the future of the system, I feel excited and concerned in equal measure. There is so much new functionality on offer that it’s almost overwhelming and I love the fact that it’s all integrated into the core system. I’ve always been grateful for the fact that Mathematica hasn’t gone down the route of hiving functionality off into add-on products like MATLAB does with its numerous toolboxes.

My concerns center around the data and Stephen Wolfram’s comment ‘basic versions of Mathematica 10 are just set up for small-scale data access.’ What does this mean? What are the limitations and will this lead to serious users having to purchase add-ons that would effectively be data-toolboxes?

**Final**

Have you used Mathematica 10 yet? If so, what do you think of it? Any problems? What’s your favorite function?

**Mathematica 10 links**

- Playing with Mathematica on Raspberry Pi – The Raspberry Pi was the first platform that saw Mathematica version 10…and it’s free!

Back in 2008, I featured various mathematical Christmas cards generated from mathematical software. Here are the original links (including full source code for each ‘card’) along with a few of the images. Contact me if you’d like to add something new.

A question I get asked a lot is ‘How can I do nonlinear least squares curve fitting in X?’ where X might be MATLAB, Mathematica or a whole host of alternatives. Since this is such a common query, I thought I’d write up how to do it for a very simple problem in several systems that I’m interested in

This is the MATLAB version. For other versions, see the list below

- Simple nonlinear least squares curve fitting in Julia
- Simple nonlinear least squares curve fitting in Maple
- Simple nonlinear least squares curve fitting in Mathematica
- Simple nonlinear least squares curve fitting in Python
- Simple nonlinear least squares curve fitting in R

**The problem**

```
xdata = -2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9
ydata = 0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001
```

and you’d like to fit the function

```
y = p1*cos(p2*x) + p2*sin(p1*x)
```

using nonlinear least squares. Your starting guesses for the parameters are p1=1 and p2=0.2.

For now, we are primarily interested in the following results:

- The fit parameters
- Sum of squared residuals

Future updates of these posts will show how to get other results such as confidence intervals. Let me know what you are most interested in.

How you proceed depends on which toolboxes you have. Contrary to popular belief, you don’t need the Curve Fitting toolbox to do curve fitting…particularly when the fit in question is as basic as this. Out of the 90+ toolboxes sold by The Mathworks, I’ve only been able to look through the subset I have access to so I may have missed some alternative solutions.

**Pure MATLAB solution (No toolboxes)**

In order to perform nonlinear least squares curve fitting, you need to minimise the squares of the residuals. This means you need a minimisation routine. Basic MATLAB comes with the fminsearch function which is based on the Nelder-Mead simplex method. For this particular problem, it works OK but will not be suitable for more complex fitting problems. Here’s the code

```
format compact
format long
xdata = [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9];
ydata = [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001];
%Function to calculate the sum of residuals for a given p1 and p2
fun = @(p) sum((ydata - (p(1)*cos(p(2)*xdata)+p(2)*sin(p(1)*xdata))).^2);
%starting guess
pguess = [1,0.2];
%optimise
[p,fminres] = fminsearch(fun,pguess)
```

This gives the following results

```
p =
   1.881831115804464   0.700242006994123
fminres =
   0.053812720914713
```

All we get here are the parameters and the sum of squares of the residuals. If you want more information such as 95% confidence intervals, you’ll have a lot more hand-coding to do. Although fminsearch works fine in this instance, it soon runs out of steam for more complex problems.
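For comparison with the pure-MATLAB approach, here is a rough Python equivalent sketched with scipy.optimize.minimize, which also offers a Nelder-Mead simplex method (assuming NumPy and SciPy are installed; this is an illustration, not part of the original post).

```python
import numpy as np
from scipy.optimize import minimize

xdata = np.array([-2, -1.64, -1.33, -0.7, 0, 0.45, 1.2, 1.64, 2.32, 2.9])
ydata = np.array([0.699369, 0.700462, 0.695354, 1.03905, 1.97389,
                  2.41143, 1.91091, 0.919576, -0.730975, -1.42001])

# Sum of squared residuals for a given parameter pair p = (p1, p2)
def fun(p):
    return np.sum((ydata - (p[0]*np.cos(p[1]*xdata) + p[1]*np.sin(p[0]*xdata)))**2)

# Same starting guess and the same simplex algorithm as fminsearch
res = minimize(fun, x0=[1, 0.2], method="Nelder-Mead")
print(res.x, res.fun)  # roughly [1.8818, 0.7002] and 0.0538
```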

**MATLAB with optimisation toolbox**

With respect to this problem, the optimisation toolbox gives you two main advantages over pure MATLAB. The first is that better optimisation routines are available so more complex problems (such as those with constraints) can be solved and in less time. The second is the provision of the lsqcurvefit function which is specifically designed to solve curve fitting problems. Here’s the code

```
format long
format compact
xdata = [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9];
ydata = [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001];
%Note that we don't have to explicitly calculate residuals
fun = @(p,xdata) p(1)*cos(p(2)*xdata)+p(2)*sin(p(1)*xdata);
%starting guess
pguess = [1,0.2];
[p,fminres] = lsqcurvefit(fun,pguess,xdata,ydata)
```

This gives the results

```
p =
   1.881848414551983   0.700229137656802
fminres =
   0.053812696487326
```

**MATLAB with statistics toolbox**

There are two interfaces I know of in the stats toolbox and both of them give a lot of information about the fit. The problem set up is the same in both cases

```
%set up for both fit commands in the stats toolbox
xdata = [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9];
ydata = [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001];
fun = @(p,xdata) p(1)*cos(p(2)*xdata)+p(2)*sin(p(1)*xdata);
pguess = [1,0.2];
```

Method 1 makes use of **NonLinearModel.fit**

mdl = NonLinearModel.fit(xdata,ydata,fun,pguess)

The returned object is a **NonLinearModel** object

class(mdl) ans = NonLinearModel

which contains all sorts of useful stuff

```
mdl =
Nonlinear regression model:
    y ~ p1*cos(p2*xdata) + p2*sin(p1*xdata)

Estimated Coefficients:
          Estimate               SE                     tStat               pValue
    p1      1.8818508110535      0.027430139389359      68.6052223191956    2.26832562501304e-12
    p2    0.700229815076442    0.00915260662357553      76.5060538352836    9.49546284187105e-13

Number of observations: 10, Error degrees of freedom: 8
Root Mean Squared Error: 0.082
R-Squared: 0.996,  Adjusted R-Squared 0.995
F-statistic vs. zero model: 1.43e+03, p-value = 6.04e-11
```

If you don’t need such heavyweight infrastructure, you can make use of the statistics toolbox’s **nlinfit** function

[p,R,J,CovB,MSE,ErrorModelInfo] = nlinfit(xdata,ydata,fun,pguess);

Along with our parameters (p) this also provides the residuals (R), Jacobian (J), Estimated variance-covariance matrix (CovB), Mean Squared Error (MSE) and a structure containing information about the error model fit (ErrorModelInfo).

Both **nlinfit** and **NonLinearModel.fit** use the same minimisation algorithm, which is based on Levenberg-Marquardt.

**MATLAB with Symbolic Toolbox**

MATLAB’s symbolic toolbox provides a completely separate computer algebra system called MuPAD which can handle nonlinear least squares fitting via its stats::reg function. Here’s how to solve our problem in this environment.

First, you need to start up the MuPAD application. You can do this by entering

mupad

into MATLAB. Once you are in MuPAD, the code looks like this

```
xdata := [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9]:
ydata := [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001]:
stats::reg(xdata, ydata, p1*cos(p2*x)+p2*sin(p1*x), [x], [p1,p2], StartingValues=[1,0.2])
```

The result returned is

[[1.88185085, 0.7002298172], 0.05381269642]

These are our fitted parameters, p1 and p2, along with the sum of squared residuals. The documentation tells us that the optimisation algorithm is Levenberg-Marquardt – rather better than the simplex algorithm used by basic MATLAB’s fminsearch.

**MATLAB with the NAG Toolbox**

The NAG Toolbox for MATLAB is a commercial product offered by the UK-based Numerical Algorithms Group. Their main products are their C and Fortran libraries but they also have a comprehensive MATLAB toolbox that contains something like 1500+ functions. My University has a site license for pretty much everything they put out and we make great use of it all. One of the benefits of the NAG toolbox over those offered by The Mathworks is speed: NAG is often (but not always) faster since it’s based on highly optimized, compiled Fortran. One of the drawbacks of the NAG toolbox is that it is more difficult to use than the Mathworks toolboxes.

In an earlier blog post, I showed how to create wrappers for the NAG toolbox to create an easy to use interface for basic nonlinear curve fitting. Here’s how to solve our problem using those wrappers.

```
format long
format compact
xdata = [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9];
ydata = [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001];
%Note that we don't have to explicitly calculate residuals
fun = @(p,xdata) p(1)*cos(p(2)*xdata)+p(2)*sin(p(1)*xdata);
start = [1,0.2];
[p,fminres]=nag_lsqcurvefit(fun,start,xdata,ydata)
```

which gives

```
Warning: nag_opt_lsq_uncon_mod_func_easy (e04fy) returned a warning indicator (5)
p =
   1.881850904268710   0.700229557886739
fminres =
   0.053812696425390
```

For convenience, here are the two files you’ll need to run the above (you’ll also need the NAG Toolbox for MATLAB, of course)

**MATLAB with curve fitting toolbox**

One would expect the curve fitting toolbox to be able to fit such a simple curve and one would be right :)

```
xdata = [-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9];
ydata = [0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001];
opt = fitoptions('Method','NonlinearLeastSquares',...
                 'Startpoint',[1,0.2]);
f = fittype('p1*cos(p2*x)+p2*sin(p1*x)','options',opt);
fitobject = fit(xdata',ydata',f)
```

Note that, unlike every other Mathworks method shown here, xdata and ydata have to be column vectors. The result looks like this

```
fitobject =
     General model:
     fitobject(x) = p1*cos(p2*x)+p2*sin(p1*x)
     Coefficients (with 95% confidence bounds):
       p1 =       1.882  (1.819, 1.945)
       p2 =      0.7002  (0.6791, 0.7213)
```

fitobject is of type cfit:

class(fitobject) ans = cfit

In this case it contains two fields, p1 and p2, which are the parameters we are looking for

```
>> fieldnames(fitobject)
ans =
    'p1'
    'p2'
>> fitobject.p1
ans =
   1.881848414551983
>> fitobject.p2
ans =
   0.700229137656802
```

For maximum information, call the fit command like this:

```
[fitobject,gof,output] = fit(xdata',ydata',f)

fitobject =
     General model:
     fitobject(x) = p1*cos(p2*x)+p2*sin(p1*x)
     Coefficients (with 95% confidence bounds):
       p1 =       1.882  (1.819, 1.945)
       p2 =      0.7002  (0.6791, 0.7213)

gof =
           sse: 0.053812696487326
       rsquare: 0.995722238905101
           dfe: 8
    adjrsquare: 0.995187518768239
          rmse: 0.082015773244637

output =
           numobs: 10
         numparam: 2
        residuals: [10x1 double]
         Jacobian: [10x2 double]
         exitflag: 3
    firstorderopt: 3.582047395989108e-05
       iterations: 6
        funcCount: 21
     cgiterations: 0
        algorithm: 'trust-region-reflective'
          message: [1x86 char]
```

As soon as I heard the news that Mathematica was being made available completely free on the Raspberry Pi, I just had to get myself a Pi and have a play. So, I bought the Raspberry Pi Advanced Kit from my local Maplin Electronics store, plugged it into the kitchen telly and booted it up. The exercise made me miss my father because the last time I plugged a computer into the kitchen telly was when I was 8 years old; it was Christmas morning and dad and I took our first steps into a new world with my Sinclair Spectrum 48K.

**How to install Mathematica on the Raspberry Pi**

Future Raspberry Pis will have Mathematica installed by default, but mine wasn’t new enough so I just typed the following at the command line

sudo apt-get update && sudo apt-get install wolfram-engine

On my machine, I was told

```
The following extra packages will be installed:
  oracle-java7-jdk
The following NEW packages will be installed:
  oracle-java7-jdk wolfram-engine
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 278 MB of archives.
After this operation, 588 MB of additional disk space will be used.
```

So, it seems that Mathematica needs Oracle’s Java and that’s being installed for me as well. The combination of the two is going to use 588MB of disk space, which makes me glad that I have an 8GB SD card in my Pi.

**Mathematica version 10!**

On starting Mathematica on the pi, my first big surprise was the version number. I am the administrator of an unlimited academic site license for Mathematica at The University of Manchester and the latest version we can get for our PCs at the time of writing is 9.0.1. My free pi version is at version 10! The first clue is the installation directory:

/opt/Wolfram/WolframEngine/10.0

and the next clue is given by evaluating $Version in Mathematica itself

In[2]:= $Version Out[2]= "10.0 for Linux ARM (32-bit) (November 19, 2013)"

To get an idea of what’s new in 10, I evaluated the following command on Mathematica on the Pi

Export["PiFuncs.dat",Names["System`*"]]

This creates a PiFuncs.dat file which tells me the list of functions in the System context on the version of Mathematica on the pi. Transfer this over to my Windows PC and import into Mathematica 9.0.1 with

pifuncs = Flatten[Import["PiFuncs.dat"]];

Get the list of functions from version 9.0.1 on Windows:

winVer9funcs = Names["System`*"];

Finally, find out what’s in pifuncs but not winVer9funcs

In[16]:= Complement[pifuncs, winVer9funcs] Out[16]= {"Activate", "AffineStateSpaceModel", "AllowIncomplete", \ "AlternatingFactorial", "AntihermitianMatrixQ", \ "AntisymmetricMatrixQ", "APIFunction", "ArcCurvature", "ARCHProcess", \ "ArcLength", "Association", "AsymptoticOutputTracker", \ "AutocorrelationTest", "BarcodeImage", "BarcodeRecognize", \ "BoxObject", "CalendarConvert", "CanonicalName", "CantorStaircase", \ "ChromaticityPlot", "ClassifierFunction", "Classify", \ "ClipPlanesStyle", "CloudConnect", "CloudDeploy", "CloudDisconnect", \ "CloudEvaluate", "CloudFunction", "CloudGet", "CloudObject", \ "CloudPut", "CloudSave", "ColorCoverage", "ColorDistance", "Combine", \ "CommonName", "CompositeQ", "Computed", "ConformImages", "ConformsQ", \ "ConicHullRegion", "ConicHullRegion3DBox", "ConicHullRegionBox", \ "ConstantImage", "CountBy", "CountedBy", "CreateUUID", \ "CurrencyConvert", "DataAssembly", "DatedUnit", "DateFormat", \ "DateObject", "DateObjectQ", "DefaultParameterType", \ "DefaultReturnType", "DefaultView", "DeviceClose", "DeviceConfigure", \ "DeviceDriverRepository", "DeviceExecute", "DeviceInformation", \ "DeviceInputStream", "DeviceObject", "DeviceOpen", "DeviceOpenQ", \ "DeviceOutputStream", "DeviceRead", "DeviceReadAsynchronous", \ "DeviceReadBuffer", "DeviceReadBufferAsynchronous", \ "DeviceReadTimeSeries", "Devices", "DeviceWrite", \ "DeviceWriteAsynchronous", "DeviceWriteBuffer", \ "DeviceWriteBufferAsynchronous", "DiagonalizableMatrixQ", \ "DirichletBeta", "DirichletEta", "DirichletLambda", "DSolveValue", \ "Entity", "EntityProperties", "EntityProperty", "EntityValue", \ "Enum", "EvaluationBox", "EventSeries", "ExcludedPhysicalQuantities", \ "ExportForm", "FareySequence", "FeedbackLinearize", "Fibonorial", \ "FileTemplate", "FileTemplateApply", "FindAllPaths", "FindDevices", \ "FindEdgeIndependentPaths", "FindFundamentalCycles", \ "FindHiddenMarkovStates", "FindSpanningTree", \ "FindVertexIndependentPaths", "Flattened", "ForeignKey", \ 
"FormatName", "FormFunction", "FormulaData", "FormulaLookup", \ "FractionalGaussianNoiseProcess", "FrenetSerretSystem", "FresnelF", \ "FresnelG", "FullInformationOutputRegulator", "FunctionDomain", \ "FunctionRange", "GARCHProcess", "GeoArrow", "GeoBackground", \ "GeoBoundaryBox", "GeoCircle", "GeodesicArrow", "GeodesicLine", \ "GeoDisk", "GeoElevationData", "GeoGraphics", "GeoGridLines", \ "GeoGridLinesStyle", "GeoLine", "GeoMarker", "GeoPoint", \ "GeoPolygon", "GeoProjection", "GeoRange", "GeoRangePadding", \ "GeoRectangle", "GeoRhumbLine", "GeoStyle", "Graph3D", "GroupBy", \ "GroupedBy", "GrowCutBinarize", "HalfLine", "HalfPlane", \ "HiddenMarkovProcess", (*a run of special-character function names that do not render here has been omitted*) "IgnoringInactive", "ImageApplyIndexed", \ "ImageCollage", "ImageSaliencyFilter", "Inactivate", "Inactive", \ "IncludeAlphaChannel", "IncludeWindowTimes", "IndefiniteMatrixQ", \ "IndexedBy", "IndexType", "InduceType", "InferType", "InfiniteLine", \ "InfinitePlane", "InflationAdjust", "InflationMethod", \ "IntervalSlider", (*more special-character names omitted*) "JuliaSetIterationCount", "JuliaSetPlot", \ "JuliaSetPoints", "KEdgeConnectedGraphQ", "Key", "KeyDrop", \ "KeyExistsQ", "KeyIntersection", "Keys", "KeySelect", "KeySort", \ "KeySortBy", "KeyTake", "KeyUnion", "KillProcess", \ "KVertexConnectedGraphQ", "LABColor", "LinearGradientImage", \ "LinearizingTransformationData", "ListType", "LocalAdaptiveBinarize", \ "LocalizeDefinitions", "LogisticSigmoid", "Lookup", "LUVColor", \ "MandelbrotSetIterationCount", "MandelbrotSetMemberQ", \ "MandelbrotSetPlot", "MinColorDistance", "MinimumTimeIncrement", \ "MinIntervalSize", "MinkowskiQuestionMark", "MovingMap", \ 
"NegativeDefiniteMatrixQ", "NegativeSemidefiniteMatrixQ", \ "NonlinearStateSpaceModel", "Normalized", "NormalizeType", \ "NormalMatrixQ", "NotebookTemplate", "NumberLinePlot", "OperableQ", \ "OrthogonalMatrixQ", "OverwriteTarget", "PartSpecification", \ "PlotRangeClipPlanesStyle", "PositionIndex", \ "PositiveSemidefiniteMatrixQ", "Predict", "PredictorFunction", \ "PrimitiveRootList", "ProcessConnection", "ProcessInformation", \ "ProcessObject", "ProcessStatus", "Qualifiers", "QuantityVariable", \ "QuantityVariableCanonicalUnit", "QuantityVariableDimensions", \ "QuantityVariableIdentifier", "QuantityVariablePhysicalQuantity", \ "RadialGradientImage", "RandomColor", "RegularlySampledQ", \ "RemoveBackground", "RequiredPhysicalQuantities", "ResamplingMethod", \ "RiemannXi", "RSolveValue", "RunProcess", "SavitzkyGolayMatrix", \ "ScalarType", "ScorerGi", "ScorerGiPrime", "ScorerHi", \ "ScorerHiPrime", "ScriptForm", "Selected", "SendMessage", \ "ServiceConnect", "ServiceDisconnect", "ServiceExecute", \ "ServiceObject", "ShowWhitePoint", "SourceEntityType", \ "SquareMatrixQ", "Stacked", "StartDeviceHandler", "StartProcess", \ "StateTransformationLinearize", "StringTemplate", "StructType", \ "SystemGet", "SystemsModelMerge", "SystemsModelVectorRelativeOrder", \ "TemplateApply", "TemplateBlock", "TemplateExpression", "TemplateIf", \ "TemplateObject", "TemplateSequence", "TemplateSlot", "TemplateWith", \ "TemporalRegularity", "ThermodynamicData", "ThreadDepth", \ "TimeObject", "TimeSeries", "TimeSeriesAggregate", \ "TimeSeriesInsert", "TimeSeriesMap", "TimeSeriesMapThread", \ "TimeSeriesModel", "TimeSeriesModelFit", "TimeSeriesResample", \ "TimeSeriesRescale", "TimeSeriesShift", "TimeSeriesThread", \ "TimeSeriesWindow", "TimeZoneConvert", "TouchPosition", \ "TransformedProcess", "TrapSelection", "TupleType", "TypeChecksQ", \ "TypeName", "TypeQ", "UnitaryMatrixQ", "URLBuild", "URLDecode", \ "URLEncode", "URLExistsQ", "URLExpand", "URLParse", "URLQueryDecode", \ 
"URLQueryEncode", "URLShorten", "ValidTypeQ", "ValueDimensions", \ "Values", "WhiteNoiseProcess", "XMLTemplate", "XYZColor", \ "ZoomLevel", "$CloudBase", "$CloudConnected", "$CloudDirectory", \ "$CloudEvaluation", "$CloudRootDirectory", "$EvaluationEnvironment", \ "$GeoLocationCity", "$GeoLocationCountry", "$GeoLocationPrecision", \ "$GeoLocationSource", "$RegisteredDeviceClasses", \ "$RequesterAddress", "$RequesterWolframID", "$RequesterWolframUUID", \ "$UserAgentLanguages", "$UserAgentMachine", "$UserAgentName", \ "$UserAgentOperatingSystem", "$UserAgentString", "$UserAgentVersion", \ "$WolframID", "$WolframUUID"}

There we have it, a preview of the list of functions that might be coming in desktop version 10 of Mathematica courtesy of the free Pi version.

**No local documentation**

On a desktop version of Mathematica, all of the Mathematica documentation is available on your local machine by clicking on **Help**->**Documentation Center** in the Mathematica notebook interface. On the pi version, it seems that there is no local documentation, presumably to keep the installation size down. You get to the documentation via the notebook interface by clicking on **Help**->**Online Documentation**, which takes you to http://reference.wolfram.com/language/?src=raspi

**Speed vs my laptop**

I am used to running Mathematica on high specification machines and so naturally the pi version felt very sluggish, particularly when using the notebook interface. With that said, I found it very usable for general playing around. I was curious, however, about how the speed of the pi version compared to the version on my home laptop and so I created a small benchmark notebook that did three things:

- Calculate pi to 1,000,000 decimal places.
- Multiply two 1000 x 1000 random matrices together
- Integrate sin(x)^2*tan(x) with respect to x

The comparison is going to be against my Windows 7 laptop which has a quad-core Intel Core i7-2630QM. The procedure I followed was:

- Start a fresh version of the Mathematica notebook and open pi_bench.nb
- Click on Evaluation->Evaluate Notebook and record the times
- Click on Evaluation->Evaluate Notebook again and record the new times.

Note that I use the AbsoluteTiming function instead of Timing (detailed reason given here) and I clear the system cache (detailed reason given here). You can download the notebook I used here. Alternatively, copy and paste the code below.

(*Clear Cache*)
ClearSystemCache[]

(*Calculate pi to 1 million decimal places and store the result*)
AbsoluteTiming[pi = N[Pi, 1000000];]

(*Multiply two random 1000x1000 matrices together and store the result*)
a = RandomReal[1, {1000, 1000}];
b = RandomReal[1, {1000, 1000}];
AbsoluteTiming[prod = Dot[a, b];]

(*Calculate an integral and store the result*)
AbsoluteTiming[res = Integrate[Sin[x]^2*Tan[x], x];]

Here are the results. All timings in seconds.

Test | Laptop Run 1 | Laptop Run 2 | RaspPi Run 1 | RaspPi Run 2 | Best Pi/Best Laptop
--- | --- | --- | --- | --- | ---
Million digits of Pi | 0.994057 | 1.007058 | 14.101360 | 13.860549 | 13.9434
Matrix product | 0.108006 | 0.074004 | 85.076986 | 85.526180 | 1149.63
Symbolic integral | 0.035002 | 0.008000 | 0.980086 | 0.448804 | 56.1

From these tests, we see that Mathematica on the pi is around 14 to 1149 times slower than on my laptop. The huge difference between the pi and the laptop for the matrix product stems from the fact that, on the laptop, Mathematica is using Intel's Math Kernel Library (MKL). The MKL is extremely well optimised for Intel processors and will be using all 4 of the laptop's CPU cores along with extra tricks such as AVX operations. I am not sure what is being used on the pi for this operation.

I also ran the standard **BenchMarkReport[]** on the Raspberry Pi. The results are available here.

**Speed vs Python**

Comparing Mathematica on the pi to Mathematica on my laptop might have been a fun exercise for me but it’s not really fair on the pi which wasn’t designed to perform against expensive laptops. So, let’s move on to a more meaningful speed comparison: Mathematica on pi versus Python on pi.

When it comes to benchmarking on Python, I usually turn to the timeit module. This time, however, I’m not going to use it and that’s because of something odd that’s happening with sympy and caching. I’m using sympy to calculate pi to 1 million places and for the symbolic calculus. Check out this ipython session on the pi

pi@raspberrypi ~ $ SYMPY_USE_CACHE=no
pi@raspberrypi ~ $ ipython
Python 2.7.3 (default, Jan 13 2013, 11:20:46)
Type "copyright", "credits" or "license" for more information.

IPython 0.13.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import sympy

In [2]: pi=sympy.pi.evalf(100000) #Takes a few seconds

In [3]: %timeit pi=sympy.pi.evalf(100000)
100 loops, best of 3: 2.35 ms per loop

In short, I have asked sympy not to use caching (I think!) and yet it is caching the result. I don’t want to time how quickly sympy can get a result from the cache so I can’t use timeit until I figure out what’s going on here. Since I wanted to publish this post sooner rather than later I just did this:

import sympy
import time
import numpy

start = time.time()
pi = sympy.pi.evalf(1000000)
elapsed = (time.time() - start)
print('1 million pi digits: %f seconds' % elapsed)

a = numpy.random.uniform(0, 1, (1000, 1000))
b = numpy.random.uniform(0, 1, (1000, 1000))
start = time.time()
c = numpy.dot(a, b)
elapsed = (time.time() - start)
print('Matrix Multiply: %f seconds' % elapsed)

x = sympy.Symbol('x')
start = time.time()
res = sympy.integrate(sympy.sin(x)**2 * sympy.tan(x), x)
elapsed = (time.time() - start)
print('Symbolic Integration: %f seconds' % elapsed)

Usually, I’d use time.clock() to measure things like this but something *very* strange is happening with time.clock() on my pi–something I’ll write up later. In short, it didn’t work properly and so I had to resort to time.time().
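As an aside, the caching behaviour that ruled out timeit is easy to demonstrate with nothing but the standard library. In the sketch below, lru_cache stands in for sympy's cache and fake_evalf is a hypothetical bit of busy-work, not a real sympy call:

```python
import timeit
from functools import lru_cache

@lru_cache(maxsize=None)
def fake_evalf(n):
    # Hypothetical stand-in for an expensive call such as sympy.pi.evalf(n)
    total = 0
    for k in range(50000):
        total += k % n
    return total

fake_evalf(100)  # the first call does the real work and fills the cache

# Timing the cached function now measures little more than a dict lookup...
cached = timeit.timeit(lambda: fake_evalf(100), number=100)

# ...whereas the undecorated function (__wrapped__) pays full price each call
uncached = timeit.timeit(lambda: fake_evalf.__wrapped__(100), number=100)

print(cached, uncached)
```

Once the cache is warm, timeit is really measuring a lookup, which is why the per-loop figure above says nothing about the true cost of evalf.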

Here are the results:

1 million pi digits: 5535.621769 seconds
Matrix Multiply: 77.938481 seconds
Symbolic Integration: 1654.666123 seconds

The result that really surprised me here was the symbolic integration since the problem I posed didn’t look very difficult. **Sympy on pi was thousands of times slower than Mathematica on pi** for this calculation! On my laptop, the calculation times between Mathematica and sympy were about the same for this operation.

That Mathematica beats sympy for 1 million digits of pi doesn’t surprise me too much since I recall attending a seminar a few years ago where Wolfram Research described how they had optimized the living daylights out of that particular operation. Nice to see Python beating Mathematica by a little bit in the linear algebra though.

Welcome to my monthly round-up of news from the world of mathematical software. As always, thanks to all contributors. If you have any mathematical software news (releases, blog articles and so on), feel free to contact me. For details on how to follow Walking Randomly now that Google Reader has died, see this post.

**Numerics for .NET**

- Version 3.0 of ILNumerics was released in June. ILNumerics is a numerical library for .NET with syntax that’s similar to that of MATLAB. See what’s new by clicking on http://ilnumerics.net/ilnumerics-version-3.0-is-online.html and follow the ILNumerics team on Twitter to keep right up to date with this product.

**Faster, faster, FASTER!**

- The Numerical Algorithms Group (NAG) have updated the parallel version of their commercial Fortran library to Mark 24. Full details of all 1784 routines available at http://www.nag.co.uk/numeric/FL/nagdoc_fl24/html/GENINT/smpnews.html#SMPNEWS

**Sparsification**

- Version 1.0 of TxSSA has been released. According to the project’s website,
*“TxSSA is an acronym for Tech-X Corporation Sparse Spectral Approximation. It is a library that has interfaces in C++, C, and MATLAB. It is an implementation of a matrix sparsification algorithm that takes a general real or complex matrix as input and produces a sparse output matrix of the same size. The non-zero entries are chosen to minimize changes to the singular values and singular vectors corresponding to the near null-space. The output matrix is constrained to preserve left and right null-spaces exactly. The sparsity pattern of the output matrix is automatically determined or can be given as input.”*

**Data and Statistics**

- Stata is a commercial statistics package that’s been around since 1985. It’s now at version 13 and has a bucket load of new functions and improvements.
- A new minor release of IDL (Interactive Data Language) from Exelis Visual Information Solutions has been released. Details of version 8.2.3 can be found here.

**Molten Mathematics**

- Version 1.4 Beta of the free GPU accelerated Linear Algebra library, Magma has been released with a nice set of performance enhancements and new functions.
- The other Magma on the block is the commercial Magma Computational Algebra System which was updated to version 2.19-7 this month.

**Some Freebies**

- SMath Studio is a free Mathcad clone developed by Andrey Ivashov. Recently, Andrey has been releasing new versions of SMath much more regularly and it is now at version 0.96.4909. To see what’s been added in June, see the forum posts here and here.
- Rene Grothmann’s very frequently updated Euler Math Toolbox is now at version 22.8. As always, Rene’s detailed version log tells you what’s new.

**Safe and reliable numerical computing**

- Sollya 4.0 was released earlier this month and is available for download at: http://sollya.gforge.inria.fr/. According to the developers,
*“Sollya is both a tool environment and a library for safe floating-point code development. It offers a convenient way to perform computations with multiple precision interval arithmetic. It is particularly targeted to the automatized implementation of mathematical floating-point libraries (libm).”*
- The INTLAB toolbox for MATLAB is now at version 7.1. INTLAB is the MATLAB toolbox for reliable computing and self-validating algorithms.

**Pythonic Mathematics**

- Version 5.10 of Sage, the free Computer Algebra System based on Python, was released on 17th June. Martin Albrecht discusses some of his favourite new goodies over at his blog and the full list of new stuff is on the Sage changelog.
- Pandas is Python’s main data analysis library and version 0.12 is out. Take a look at the newness here.

Welcome to the latest edition of A Month of Math Software where I look back over the last month and report on all that is new and shiny in the world of mathematical software. I’ve recently restarted work after the Easter break and so it seems fitting that I offer you all Easter Eggs courtesy of Lijia Yu and R. Enjoy!

**General purpose mathematical systems**

- The Python-based computer algebra system, Sage, is now at version 5.8. Take a look at http://www.sagemath.org/mirror/src/changelogs/sage-5.8.txt to see what’s new
- Version 17 of the commercial computer algebra system, Maple, has been released. There are many new features including new signal processing functions, massive performance enhancements and something completely new called The Möbius Project!
- Every six months we get a new MATLAB release and March brought us 2013a. I’m always made a little happier when The Mathworks reduce their toolbox count by combining products and this time they’ve come good by combining the Fixed-Point Toolbox and Simulink Fixed Point products into the new Fixed Point Designer. An overview of other changes can be found at http://www.mathworks.co.uk/company/newsroom/MathWorks-Announces-Release-2013a-of-the-MATLAB-and-Simulink-Product-Families.html
- The almost continuously updated Euler Math Toolbox is now at 21.5. As ever, all of the updates and improvements can be found at http://euler.rene-grothmann.de/Programs/Changes.html
- Magma is now at version 2.19-4.

**MATLAB add-ons**

- The multiprecision MATLAB toolbox from Advanpix has been upgraded to version 3.4.3.3431 with the addition of multidimensional arrays.
- The superb, free chebfun project has now been extended to 2 dimensions with the release of chebfun2.

**GPU accelerated computation**

- Magma is a GPU accelerated linear algebra library and there is now a beta version for the Intel Xeon Phi – http://icl.cs.utk.edu/magma/news/news.html?id=315
- Stream Computing are compiling a list of OpenCL wrappers for various platforms and languages.
- AccelerEyes have published a couple of articles demonstrating how to use their ArrayFire product to accelerate calculations using GPUs. In March they brought us Benchmarks and Financial.
- CuPoisson v1 has been released. *“CuPoisson is a GPU implementation of the 2D fast Poisson solver using CUDA. The method solves the discrete Poisson equation on a rectangular grid, assuming zero Dirichlet boundary conditions.”*

**Statistics and visualisation**

- Version 3.0 of R was released today! A major change is that full 64-bit vector support is now enabled, meaning the end of the 2 billion row limit in data frames…bring on the big data! Full details at https://stat.ethz.ch/pipermail/r-announce/2013/000561.html
- Pandas, the data-analysis package for Python, has had a major update. Version 0.11 includes lots of new stuff which you can read about at http://pandas.pydata.org/pandas-docs/dev/whatsnew.html
- Matplotlib, the standard mathematical plotting library for Python, has been updated to version 1.2.1. This is just a bug-fix release; all the new user-visible functionality came in v1.2 which you can read about here.
- GNUPlot, the venerable plotting package from the GNU Project, has been updated to version 4.6.2. A complete list of changes is at http://www.gnuplot.info/announce_4.6.2.txt
- A new version of the Perl Data Language (PDL) has been made available. See what’s new in version 2.006 at http://mailman.jach.hawaii.edu/pipermail/perldl/2013-March/007803.html
- Statisticsblog.com have reviewed the R-Link package in Mathematica 9 that allows you to integrate R and Mathematica code.

**Finite elements**

- Version 7.3 of deal.II is now available. deal.II is a C++ program library targeted at the computational solution of partial differential equations using adaptive finite elements.

I’m working on a presentation involving Mathematica 9 at the moment and found myself wanting a gallery of all built-in plots using default settings. Since I couldn’t find such a gallery, I made one myself. The notebook is available here and includes 99 built-in plots, charts and gauges generated using default settings. If you hover your mouse over one of the plots in the Mathematica notebook, it will display a ToolTip showing usage notes for the function that generated it.

The gallery only includes functions that are fully integrated with Mathematica so doesn’t include things from add-on packages such as StatisticalPlots.

A screenshot of the gallery is below. I haven’t made an in-browser interactive version due to size.

Welcome to the latest Month of Math Software here at WalkingRandomly. If you have any mathematical software news or blogposts that you’d like to share with a larger audience, feel free to contact me. Thanks to everyone who contributed news items this month, I couldn’t do it without you.

**The NAG Library for Java**

- The Numerical Algorithms Group (NAG) have been producing numerical libraries for over 40 years and now they have one for Java.

**MATLAB-a-likes**

- Version 3.6.4 of Octave, the free, open-source MATLAB clone has been released. This version contains some minor bug fixes. To see everything that’s new since version 3.6, take a look at the NEWS file. If you like MATLAB syntax but don’t like the price, Octave may well be for you.
- The frequently updated Euler Math Toolbox is now at version 20.98 with a full list of changes in the log. Scanning through the recent changes log, I came across the very nice **iterate** function, which works as follows

>iterate("cos(x)",1,100,till="abs(cos(x)-x)<0.001")
[ 1 0.540302305868 0.857553215846 0.654289790498 0.793480358743
0.701368773623 0.763959682901 0.722102425027 0.750417761764
0.731404042423 0.744237354901 0.735604740436 0.74142508661
0.737506890513 0.740147335568 0.738369204122 0.739567202212 ]
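For comparison, here is a rough Python sketch of the same fixed-point iteration (my own translation, not a port of Euler's implementation; the stopping test mirrors the till clause):

```python
import math

def iterate(f, x0, maxiter=100, tol=1e-3):
    """Iterate x -> f(x) from x0, stopping once abs(f(x) - x) < tol."""
    xs = [x0]
    for _ in range(maxiter):
        x = f(xs[-1])
        xs.append(x)
        if abs(f(x) - x) < tol:  # same test as till="abs(cos(x)-x)<0.001"
            break
    return xs

seq = iterate(math.cos, 1.0)
print(seq[-1])  # heading towards the fixed point of cos near 0.739085
```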

**Mathematical and Scientific Python**

- The Python based computer algebra system, SAGE, has been updated to version 5.7. The full list of changes is at http://www.sagemath.org/mirror/src/changelogs/sage-5.7.txt
- Numpy is the fundamental Python package required for numerical computing with Python. Numpy is now at version 1.7 and you can see what’s new by taking a look at the release notes

**Spreadsheet news**

- A new version of Microsoft Excel, the 800 pound gorilla of the spreadsheet world, was actually released back in January as part of Office 2013 but I managed to miss it somehow. An overview of what’s new in Excel 2013 is available in a blog post from Microsoft, along with a list of new functions in Excel 2013 and a note warning of the possibility of calculation differences between ‘normal’ PCs and those running Windows RT. More in-depth articles include My first Excel 2013 chart, Excel 2013 in depth, Introduction to PowerPivot in Excel 2013 and the new ISFORMULA function in 2013.
- Hot on the heels of Microsoft’s product is a new version of the superb, **completely free** LibreOffice. Version 4.0 was released on 7 February 2013 and includes a wide range of new features. My favourite new feature, by far, is the inclusion of a Logo interpreter! LibreLogo is *‘A Logo-Python programming environment with interactive turtle vector graphics for education and desktop publishing’* and there are some blog posts about it here and here. Another improvement I want to point out is the fact that the random number function in LibreOffice Calc has been significantly improved.

**R and stuff**

- A new version of R, the open source standard for statistical computing, has been released. Version 2.15.3 is probably going to be the last release before version 3 comes out. The full list of changes can be found at http://cran.r-project.org/bin/windows/base/NEWS.R-2.15.3.html
- Version 0.4.0 of Shiny has been released and the Shiny tutorial has been updated.

**This and that**

- The commercial computer algebra system, Magma, has seen another incremental update in version 2.19-3.
- The NCAR Command Language was updated to version 6.1.2.
- IDL was updated to version 8.2.2. Since I’m currently obsessed with random number generators, I’ll point out that in this release IDL finally moves away from an old Numerical Recipes generator and now uses the Mersenne Twister like almost everybody else.

**From the blogs**

- The guys at AccelerEyes have been benchmarking NVIDIA’s new Tesla K20.
- NAG’s David Sayers asks ‘Would you like the NAG Routines in Different Precisions?’
- Wolfram Research celebrates the Centennial of Markov Chains.
- Stephen Wolfram asks ‘What Should we call the language of Mathematica?’
- A couple of bloggers discuss performing Monte Carlo simulations using JavaScript (here and here). I say ‘Fine, but be careful because the JavaScript random number generators are not too good’.

I was at a seminar yesterday where we were playing with Mathematica and wanted to solve the following equation

1.0609^t == 1.5

You punch this into Mathematica as follows:

Solve[1.0609^t == 1.5]

Mathematica returns the following

During evaluation of In[1]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>

Out[1]= {{t -> 6.85862}}

I have got the solution I expect but Mathematica suggests that I’ll get more information if I use Reduce. So, let’s do that.

In[2]:= Reduce[1.0609^t == 1.5, t]

During evaluation of In[2]:= Reduce::ratnz: Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result. >>

Out[2]= C[1] \[Element] Integers && t == -16.9154 (-0.405465 + (0. + 6.28319 I) C[1])

Looks complex and a little complicated! To understand why complex numbers have appeared in the mix you need to know that Mathematica always considers variables to be complex unless you tell it otherwise. So, it has given you the infinite number of complex values of t that would satisfy the original equation. No problem, let’s just tell Mathematica that we are only interested in real values of t.
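To see where this infinite family comes from: since Exp[2 Pi I k] == 1 for any integer k, every t of the form (Log[3/2] + 2 Pi I k)/Log[1.0609] satisfies the equation, and the C[1] in Reduce's output plays the role of k. A quick numerical spot-check of a few members of the family (my own Python aside, nothing Mathematica-specific):

```python
import cmath

for k in range(4):
    # k-th member of the solution family (k plays the role of Reduce's C[1])
    t_k = (cmath.log(1.5) + 2j * cmath.pi * k) / cmath.log(1.0609)
    value = cmath.exp(t_k * cmath.log(1.0609))  # i.e. 1.0609 ** t_k
    assert abs(value - 1.5) < 1e-8              # every member solves the equation
```

Every member shares the same real part, 6.85862…, which is the single real solution we get once we restrict to the reals.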

In[3]:= Reduce[1.0609^t == 1.5, t, Reals]

During evaluation of In[3]:= Reduce::ratnz: Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result. >>

Out[3]= t == 6.85862

Again, I get the solution I expect. However, Mathematica still feels that it is necessary to give me a warning message. Time was ticking on so I posed the question on Mathematica Stack Exchange and we moved on.

At the time of writing, the lead answer says that ‘Mathematica is not “struggling” with your equation. The message is simply FYI — to tell you that, for this equation, it prefers to work with exact quantities rather than inexact quantities (reals)’

I’d accept this as an explanation except for one thing; the message says that it is UNABLE to solve the equation as I originally posed it and so needs to proceed by solving a corresponding exact system. That implies that it has struggled a great deal, given up and tried something else.

This would come as a surprise to anyone with a calculator who would simply do the following manipulations

1.0609^t == 1.5

Log[1.0609^t] == Log[1.5]

t*Log[1.0609] == Log[1.5]

t = Log[1.5]/Log[1.0609]

Mathematica evaluates this as

t = 6.8586187084788275

This appears to solve the equation exactly. Plug it back into Mathematica (or my calculator) to get

In[4]:= 1.0609^(6.8586187084788275)
Out[4]= 1.5
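Incidentally, the same calculator-style computation can be reproduced in any language with a log function; here is a quick Python spot-check (my own aside, nothing to do with Mathematica's internals):

```python
import math

# Pen-and-paper solution: t = log(1.5) / log(1.0609)
t = math.log(1.5) / math.log(1.0609)
print(t)

# Plug it back into the original equation
print(1.0609 ** t)  # very close to 1.5
```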

I had no trouble dealing with inexact quantities, and I didn’t need to ‘solve a corresponding exact system and numericize the result’. This appears to be a simple problem. So, why does Mathematica bang on about it so much?

**Over to MATLAB for a while**

Mathematica is complaining that we have asked it to work with inexact quantities. How could this be? 1.0609, 6.8586187084788275 and 1.5 look pretty exact to me! However, it turns out that as far as the computer is concerned some of these numbers are far from exact.

When you input a number such as 1.0609 into Mathematica, it considers it to be a double precision number, and 1.0609 cannot be exactly represented as such. The closest Mathematica, or indeed any numerical system that uses 64-bit IEEE arithmetic, can get is 4777868844677359/4503599627370496, which evaluates to 1.0608999999999999541699935434735380113124847412109375. I wondered if this is why Mathematica was complaining about my problem.
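Python's standard library tells the same story; Fraction recovers the exact rational stored for the double 1.0609, and Decimal prints its full decimal expansion (an aside for readers without access to MATLAB):

```python
from fractions import Fraction
from decimal import Decimal

# The exact rational value of the IEEE double nearest to 1.0609
print(Fraction(1.0609))   # 4777868844677359/4503599627370496

# Its full decimal expansion
print(Decimal(1.0609))

# 1.5, by contrast, is exactly representable in binary
print(Fraction(1.5))      # 3/2
```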

At this point, I switch tools and use MATLAB for a while in order to investigate further. I do this for no reason other than I know the related functions in MATLAB a little better. MATLAB’s sym command can give us the rational number that exactly represents what is actually stored in memory (Thanks to Nick Higham for pointing this out to me).

>> sym(1.0609,'f')
ans =
4777868844677359/4503599627370496

We can then evaluate this fraction to whatever precision we like using the vpa command:

>> vpa(sym(1.0609,'f'),100)
ans =
1.0608999999999999541699935434735380113124847412109375
>> vpa(sym(1.5,'f'),100)
ans =
1.5
>> vpa(sym(6.8586187084788275,'f'),100)
ans =
6.8586187084788274859192824806086719036102294921875

So, 1.5 can be represented exactly in 64-bit double precision arithmetic but 1.0609 and 6.8586187084788275 cannot. Mathematica is unhappy with this state of affairs and so chooses to solve an exact problem instead. I guess that if I am working in a system that cannot even represent the numbers in my problem (e.g. 1.0609), how can I expect to solve it?

**Which Exact Problem?**

So, which exact equation might Reduce be choosing to solve? It could solve the equation that I mean:

(10609/10000)^t == 1500/1000

which does have an exact solution and so Reduce can find it.

(Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103]))

Evaluating this gives 6.858618708478698:

(Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103])) // N // FullForm

6.858618708478698`

Alternatively, Mathematica could convert the double precision number 1.0609 to the fraction that exactly represents what’s actually in memory and solve that.

(4777868844677359/4503599627370496)^t == 1500/1000

This also has an exact solution:

(Log[2] - Log[3])/(52*Log[2] - Log[643] - Log[2833] - Log[18251] - Log[143711])

which evaluates to 6.858618708478904:

(Log[2] - Log[3])/(52*Log[2] - Log[643] - Log[2833] - Log[18251] - Log[143711]) // N // FullForm

6.858618708478904`

Let’s take a look at the exact number Reduce is giving:

Quiet@Reduce[1.0609^t == 1.5, t, Reals] // N // FullForm

Equal[t, 6.858618708478698`]

So, it looks like Mathematica is solving the equation I meant to solve and evaluating this solution at the end.

**Summary of solutions**

Here I summarize the solutions I’ve found for this problem:

- 6.8586187084788275 – Pen and paper, with Mathematica for the final evaluation.
- 6.858618708478698 – Mathematica solving the exact problem I mean and evaluating to double precision at the end.
- 6.858618708478904 – Mathematica solving the exact problem derived from what I really asked it and evaluating at the end.

What I find fun is that my simple minded pen and paper solution seems to satisfy the original equation better than the solutions arrived at by more complicated means. Using MATLAB’s vpa again I plug in the three solutions above, evaluate to 50 digits and see which one is closest to 1.5:

>> vpa('1.5 - 1.0609^6.8586187084788275',50)
ans =
-0.00000000000000045535780896732093552784442911868195148156712149736
>> vpa('1.5 - 1.0609^6.858618708478698',50)
ans =
0.000000000000011028236861872639054542278886208515813177349441555
>> vpa('1.5 - 1.0609^6.858618708478904',50)
ans =
-0.0000000000000072391029234017787617507985059241243968693347431197

So, ‘my’ solution is better. Of course, this advantage goes away if I evaluate (Log[2] - Log[3])/(2*(2*Log[2] + 2*Log[5] - Log[103])) to 50 decimal places and plug that in.

>> sol=vpa('(log(2) - log(3))/(2*(2*log(2) + 2*log(5) - log(103)))',50)
sol =
6.858618708478822364949699852597847078441119153527
>> vpa(1.5 - 1.0609^sol,50)
ans =
0.0000000000000000000000000000000000000014693679385278593849609206715278070972733319459651
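For anyone without MATLAB, the same residual comparison can be sketched with Python's standard decimal module, where Decimal('1.0609') plays the role of vpa's exact decimal string (an illustration of the idea, not vpa itself; the variable names are my own):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work to 50 significant digits, as with vpa(...,50)

def residual(t):
    """1.5 - 1.0609^t, computed as exp(t*ln(1.0609)) in 50-digit arithmetic."""
    return Decimal('1.5') - (Decimal(t) * Decimal('1.0609').ln()).exp()

r_paper  = residual('6.8586187084788275')   # pen-and-paper solution
r_meant  = residual('6.858618708478698')    # exact problem I meant
r_stored = residual('6.858618708478904')    # exact problem actually stored

print(r_paper, r_meant, r_stored)
```

Run at 50 digits, the pen-and-paper residual comes out smallest of the three.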