If you type
open foo.app
in a Mac terminal window, or alternatively click on foo.app in the Finder, the application foo will be launched.
It turns out that foo.app is actually a directory, which made me wonder ‘What determines what gets launched?’
If you look inside an .app folder, you will find a Contents folder. Inside this will be, among other things, a file called Info.plist. It is this file that determines what gets launched: for example, it contains an entry called CFBundleExecutable that specifies the executable to be launched.
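You can see this for yourself without a real application to hand. The sketch below fabricates a minimal, invented Info.plist for a bundle called foo.app and pulls out the CFBundleExecutable value with standard shell tools; on a real Mac, `defaults read /Applications/foo.app/Contents/Info CFBundleExecutable` would do the same job.

```shell
# Create a minimal, hypothetical Info.plist (contents invented for illustration)
mkdir -p foo.app/Contents
cat > foo.app/Contents/Info.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>CFBundleExecutable</key>
    <string>foo</string>
</dict>
</plist>
EOF

# Print the value that follows the CFBundleExecutable key
grep -A1 'CFBundleExecutable' foo.app/Contents/Info.plist \
    | sed -n 's:.*<string>\(.*\)</string>.*:\1:p'
```

This prints foo, the name of the executable inside Contents/MacOS that Finder would launch.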
Thanks to Chris Beaumont for the link above.
In my previous blog post I mentioned that I am a member of a team that supports High Throughput Computing (HTC) at The University of Manchester via a 1600+ core ‘condor pool’. In order to make it as easy as possible for our researchers to make use of this resource one of my colleagues, Ian Cottam, created a system called DropAndCompute. In this guest blog post, Ian describes DropAndCompute and how it evolved into the system we use at Manchester today.
The Evolution of “DropAndCompute” by Ian Cottam
DropAndCompute, as used at The University of Manchester’s Faculty of Engineering and Physical Sciences, is an approach to using network (or grid or cloud based) computational resources without having to know the operating system of the resource’s gateway or any command line tools, either of the resource itself (Condor in our case) or in general. Most such gateways run a flavour of Unix, often Linux. Many of our users are either unfamiliar with Linux or simply prefer a drag-and-drop interface, as I do myself despite having used various flavours of Unix since Version 6 in the late 70s.
Why did I invent it? On its original web site description page, wiki.myexperiment.org/index.php/DropAndCompute, the following reasons are given:
- A simple and uniform drag-and-drop graphical user interface, potentially, to many resource pools.
- No use of terminal windows or command lines.
- No need to login to remote hosts or install complicated grid-enabling software locally.
- No need for the user to have an account on the remote resources (instead they are accounted for by having a shared folder allocated). Of course, nothing stops the users from having accounts should that be preferred.
- No need for complicated Virtual Private Networks, IP Tunnelling, connection brokers, or similar, in order to access grid resources on private subnets (provided at least one node is on the public Internet, which is the norm).
- Pop-ups notify users of important events (basically, log and output files being created when a job has been accepted, and when the generated result files arrive).
- Somewhat increased security as the user only has (indirect) access to a small subset of the computational resource’s commands.
The first version was used on a Condor Pool within our interdisciplinary biocentre (MIB). A video of it in use is shown below.
Please do take the time to look at this video as it shows clearly how, for example, Condor can be used via this type of interface.
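For readers who have not met Condor before, the thing a user’s dropped folder ultimately causes to run is described by a plain-text ‘submit description file’. The snippet below is a generic, hypothetical example of the vanilla-universe syntax (the file and program names are invented), not one of the actual Manchester files:

```
universe   = vanilla
executable = myprog          # program to run (hypothetical name)
arguments  = input.dat
output     = results.txt     # the job's stdout comes back here
error      = errors.txt
log        = job.log         # Condor's own record of the job's progress
queue 1
```

DropAndCompute hides files like this behind the drag-and-drop interface, so the user never has to write one at a command line.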
This version was notable for using the commercial service: Dropbox and, in fact, my being a Dropbox user inspired the approach and its name. Dropbox is trivial to install on any of the main platforms, on any number of computers owned by a user, and has a free version giving 2GB of synchronised and shared storage. In theory, only the computational resource supplier need pay for a 100GB account with Dropbox, have a local Condor submitting account, and share folders out with users of the free Dropbox-based service.
David De Roure, then at the University of Southampton and now Oxford, reviewed this approach here at blog.openwetware.org/deroure/?p=97, and offers his view as to why it is important in helping scientists start on the ‘ramp’ to using what can be daunting, if powerful, computational facilities.
Quickly the approach migrated to our full, faculty-wide Condor Pool and the first modification was made. Now we used separate accounts for each user of the service on our submitting nodes; Dropbox still made this sharing scheme trivial to set up and manage, whilst giving us much better usage accounting information. The first minor problem came when some users needed more (much more, in fact) than 2GB of space. This was solved by them purchasing their own 50GB or 100GB accounts from Dropbox.
Problems and objections
However, two more serious problems impacted our Dropbox based approach. First, the large volume of network traffic across the world to Dropbox’s USA based servers and then back down to local machines here in Manchester resulted in severe bottlenecks once our Condor Pool had reached the dizzy heights of over a thousand processor cores. We could have ameliorated this with extra resources, such as multiple submit nodes, but the second problem proved to be more of a showstopper.
Since the introduction of DropAndCompute, several people (at Manchester and beyond) have been concerned about research data passing through commercial, USA-based servers. In fact, the UK’s National Grid Service (NGS), who have implemented their own flavour of DropAndCompute, did not use Dropbox for this very reason. The US Patriot Act means that US companies must surrender any data they hold if officially requested to do so by Federal Government agencies. Now, one approach to this is to do user-level encryption of the data before it enters the user’s Dropbox folder. I have demonstrated this approach, but it complicates the model and it is not so straightforward to use exactly the same method on all of the popular platforms (Windows, Mac, Linux).
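As a rough illustration of what user-level encryption means here (this sketch uses OpenSSL as a stand-in; it is not necessarily the method actually demonstrated), a job file can be encrypted before it ever enters the synchronised folder:

```shell
# Hypothetical stand-in for a real job file
echo "job data" > job.dat

# Encrypt it before it syncs; only job.dat.enc would go into the Dropbox folder
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in job.dat -out job.dat.enc -pass pass:secret
```

Dropbox then only ever sees ciphertext; the submit node needs the key to decrypt before handing the job to Condor, which is exactly the extra complication mentioned above.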
To tackle the above issues we implemented a ‘local version’ of DropAndCompute that is not Dropbox based. It is similar to the NGS approach but, in my opinion, much simpler to set up. The user merely has to mount a folder on the submit node on their local computer(s), and then use the same drag-and-drop approach to get the job initiated, debugged and run (or even killed, when necessary). This solves the above issues, but could be regarded as inferior to the Dropbox based approach in five ways:
1. The convenience and transparency of ‘offline’ use is lost. That is, Dropbox jobs can be prepared on, say, a laptop with or without net access, and when the laptop next connects the job submission just happens. Ditto for the results coming back.
2. When online and submitting or waiting for results with the local version, the folder windows do not update to give the user an indication of progress.
3. Users must remember to request an email notification that a job has finished, or poll to check its status.
4. The initial setup is a little harder for the local version compared with using Dropbox.
5. The computation’s result files are not copied back automatically.
So far, only item 5 has been remarked on by some of our users, and it, and the others, could be improved with some programming effort.
A movie of this version is shown below; it doesn’t have any commentary, but essentially follows the same steps as the Dropbox based video. You will see the network folder’s window having to be refreshed manually (this is necessary on a Mac, but could be scripted; other platforms may be better) and results having to be dragged back from the mounted folder.
I welcome comments on any aspect of this (still evolving) approach to easing the entry ‘cost’ of using distributed computing resources.
Our Condor Pool is supported by three colleagues besides myself: Mark Whidby, Mike Croucher and Chris Paul. Mark, inter alia, maintains the current version of DropAndCompute that can operate locally or via Dropbox. Thanks also to Mike for letting me be a guest on Walking Randomly.
Christmas isn’t all that far away so I thought that it was high time that I wrote my Christmas list for mathematical software developers and vendors. All I want for Christmas is…
- A built-in ternary plot function would be nice
- Ship Workbench with the main product please
- An iPad version of Mathematica Player
- Merge the Parallel Computing Toolbox with core MATLAB. Everyone uses multicore these days, but only a few can feel the full benefit in MATLAB. The rest are essentially second-class MATLAB citizens, muddling by with a single core (most of the time)
- Make the mex interface thread safe so I can more easily write parallel mex files
- More CUDA accelerated functions please. I was initially excited by your CUDA package but then discovered that it only accelerated one function (Matrix Multiply). CUDA accelerated Random Number Generators would be nice along with fast Fourier transforms and a bit more linear algebra.
- Release Mathcad Prime.
- Mac and Linux versions of Mathcad. Maple, Mathematica and MATLAB all have versions for all 3 platforms, so why don’t you?
- Produce vector versions of functions like g01bk (the Poisson distribution function). They might not be needed in Fortran or C code, but your MATLAB toolbox desperately needs them
- A Mac version of the MATLAB toolbox. I’ve got users practically begging for it :)
- A NAG version of the MATLAB gamfit command
- A just in time compiler. Yeah, I know, I don’t ask for much huh ;)
- A faster pdist function (statistics toolbox from Octave Forge). I recently discovered that the current one is rather slow
- A Locator control for the interact function. I still have a bounty outstanding for the person who implements this.
- A fully featured, native windows version. I know about the VM solution and it isn’t suitable for what I want to do (which is to deploy it on around 5000 University windows machines to introduce students to one of the best open source maths packages)
- An Android version please. Don’t make it free – you deserve some money for this awesome Mathcad alternative.
- The fact that you give the Windows version away for free is awesome, but registration is a pain when you are dealing with mass deployment. I’d love to deploy this to my University’s Windows desktop image but the per-machine registration requirement makes it difficult. Most large developers who require registration come up with an alternative mechanism for enterprise-wide deployment. You ask schools with more than 5 machines to link back to you. I want to put it on a few thousand machines and I would happily link back to you from several locations if you’ll help me with some sort of volume license. I’ll also give internal (and external, if anyone is interested) seminars at Manchester on why I think Spacetime is useful for teaching mathematics. Finally, I’d encourage other UK University applications specialists to evaluate the software too.
- An Android version please.
How about you? What would you ask for Christmas from your favourite mathematical software developers?
On a Linux machine with a normal install of Mathematica you can usually get access to a command line version of Mathematica by typing
math
at the command line. Command line Mathematica is useful for situations where you want to do batch processing, perhaps as part of a Condor pool or something, but I’ll not write about that until another time.
On a Mac, however, a standard install of Mathematica doesn’t give you a math command so you have to create it yourself. Add the following line to your system’s /etc/bashrc file.
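Assuming a default install location, an alias such as the following does the trick (adjust the path if Mathematica lives somewhere else on your machine):

```shell
# Path assumes Mathematica is installed in the default /Applications location
alias math='/Applications/Mathematica.app/Contents/MacOS/MathKernel'
```

Open a new terminal (or source the file) and typing math should drop you straight into the kernel.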
Now, when you type math at the command prompt it will behave just like a Linux system which is sometimes useful.
I recently helped someone install the new 64-bit beta version of MATLAB 2009a on a dual quad-core Mac Pro and so far he seems very pleased with it. The 32-bit version simply didn’t cut it because he needed to be able to access huge amounts of memory. More and more researchers at my University seem to be choosing Mac Pros over other platforms and yet it seems that the MATLAB experience on them is far from perfect (according to this link at least).
People seem to complain that it’s slow compared with other operating systems on comparable hardware, and that the user interface is clunky since it uses X11 rather than Cocoa.
I’ll lay my cards on the table – I’m not a major Mac fan – but when so many people whose judgement I respect choose them over other platforms, I sit up, take notice and try to understand. Does anyone reading this have experience with MATLAB on OS X – favourable or otherwise?
I’ve just discovered a blog post where the author was installing Octave on a Mac. Looks hard!
I compare it with Ubuntu’s installation method for Octave along with the symbolic package:
sudo apt-get install octave octave-symbolic
and wonder what is going wrong for it on Macs. Insights anyone?
I recently installed MATLAB 2009a on a Mac running Mac OS 10.5.6 and, although the installation seemed to go fine, MATLAB wouldn’t start. The error message I received (copied from the console output) was
02/04/2009 16:19:42 [0x0-0x2d32d3].com.mathworks.StartMATLAB dyld: Library not loaded: /usr/X11R6/lib/libXmu.6.dylib
02/04/2009 16:19:42 [0x0-0x2d32d3].com.mathworks.StartMATLAB Referenced from: /Applications/MATLAB_R2009a.app/sys/os/maci/libXm.3.dylib
02/04/2009 16:19:42 [0x0-0x2d32d3].com.mathworks.StartMATLAB Reason: image not found
Now I don’t know very much about Macs and I tend to think of Mac OS X as a closed-source version of Linux with pretty bits so there may be a much better way of fixing this than what you are about to read but it did the job for me. Your mileage may vary.
Firing up a terminal, I had a look to see whether the offending file was anywhere on the machine.
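A find command along the following lines does the job (the library name is taken from the error message above; find may grumble about unreadable directories, hence the redirect):

```shell
# Search under /usr for the library MATLAB says it cannot load
# (|| true: we only care about matches, not find's exit status)
find /usr -name 'libXmu.6.dylib' 2>/dev/null || true
```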
Sure enough, I had it in /usr/X11/lib but MATLAB was looking for it in /usr/X11R6/lib, so I created a symbolic link as follows:
sudo ln -s /usr/X11 /usr/X11R6
Tried MATLAB again and it worked perfectly. Let me know if this works for you or if you are a Mac expert and you know of a better way.
Someone came to visit me today with a MATLAB mex problem and, among other things, I needed to install gcc for them. Now on a Linux machine this would have been trivial. Something like
yum install gcc
or
apt-get install gcc
would do the trick, depending on which flavour of Linux you are using. One command, a quick download and you’re done. Couldn’t be simpler.
I am as green as grass when it comes to Mac usage and so I assumed that there would be some Mac equivalent to these commands, but it seems that this is not the case (please, please correct me if I am wrong). As far as I can tell, one needs to install something called Xcode in order to get gcc, and Xcode is 1 gigabyte in size. You heard me right – 1GB… for gcc! Of course it isn’t just gcc taking up that 1GB – you get lots of other gubbins too – but I don’t want all of the other gubbins. I just want gcc.
But the size isn’t the worst bit. It turns out that you have to go through a registration process in order to get your hands on Xcode – giving Apple information such as your email address, home address, what area you work in, what you are going to use Xcode for, and so on.
All this to get hold of one of the most fundamental open-source applications there is. There has to be a better way. If anyone can enlighten me as to what that better way might be I would be very grateful.