2009-05-26

Benchmarking Amazon EC2 with GHC

My personal computers are pretty old and/or slow. I have an old PowerBook G4 and a newish EEE PC. The PowerBook was top of its class when I got it, with most options maxed out. Alas, that was five years ago. The EEE PC is by definition not a top performer, nor does it try to be. I find that the two machines perform similarly in day-to-day tasks, at least when the EEE PC is in “Super Performance” mode.

Truth is that for most of the mundane stuff I do, these two machines perform acceptably. I’m obviously not going to be watching any 1080p movies on them, or enjoying the latest games (from a productivity standpoint I’m not sure these are such bad things), but most everything else works fine.

Where I feel the performance does hurt me is when compiling with GHC (or any compiler for that matter, it’s just that GHC is the one I use most). Often I spend way too much of my precious little private developer time waiting for the compiler to finish. This is in stark contrast to the situation at work, where I run GHC on a pretty zippy Dell PowerEdge Blade Server.

In the near future I expect to be spending more time developing at home and want to be able to do so more efficiently, preferably on par with the situation at work. I could obviously buy shiny new hardware, but being a miser (in case you couldn’t already tell based on my hardware) I’m looking for alternatives that would allow me to avoid or defer a hefty up-front investment. One such alternative I’m considering is to rent compute capacity in the Amazon Elastic Compute Cloud (EC2).

EC2 compute capacity is sold in the form of instances at an hourly rate ranging from $0.10 to $0.80 depending on capacity/performance, plus some small change for bandwidth and persistent storage. While there are a couple of hurdles to overcome in order to leverage EC2 as a development workstation, the first thing I want to do is to make sure it is a goal worthy of pursuing in the first place, i.e. will I get the desired performance gains at a reasonable price?

To at least begin to answer this question I’ve done some informal benchmarking of the aforementioned systems, excluding the pricier EC2 instances. All the regular benchmarking caveats apply, and to reinforce the unscientificity of it all I’m not going to bother providing complete specs for the systems. Here are the fundamentals:

  • Apple PowerBook G4: 1.5 GHz PowerPC G4 processor, 2 GB RAM, 5400 rpm HD.
  • Asus EEE PC 900HA: 1.6 GHz Intel Atom processor, 1 GB RAM, 4200 rpm HD.
  • Dell PowerEdge 1855 Blade Server: Two single-core 3.2 GHz Xeon processors, 2 GB RAM.
  • Amazon EC2 Small Instance: 1 EC2 Compute Unit (1 virtual core), 1.7 GB RAM, $0.10 per hour.
  • Amazon EC2 High-CPU Medium Instance: 5 EC2 Compute Units (2 virtual cores), 1.7 GB RAM, $0.20 per hour.

According to Amazon, “one EC2 Compute Unit provides the equivalent CPU capacity of a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor.”

I figure the results are probably more interesting than the details of the tests, so here they are. The systems are ordered by increasing performance, which happened to be consistent across the tests:

Benchmark results, times in seconds; shorter times are better.

                    astro-tables build   highlighting-kate build   fad test suite
  PowerBook G4      292                  (848)                     28
  EEE PC            291                  643                       18
  EC2 Small         171                  519                       15
  Dell PowerEdge    75                   260                       6
  EC2 Medium        55                   172                       4

The EEE PC was in “Super Performance” mode during the tests and the PowerBook was at its highest CPU speed. All times are the “real” time as measured by the Unix time command.

As can be seen, an Amazon EC2 Small instance is only marginally faster than the EEE PC. An EC2 High-CPU Medium instance, on the other hand, is significantly faster than the zippy Dell PowerEdge Blade server. Is either one a good deal? Good question; I think a case could be made either way depending on your priorities, but I’m not going to tackle that today.

If you care about the details of the tests, read on; if not, please move on to your next blog of choice!

astro-tables build

This package currently consists of a single automatically generated 4000-line monster of a module [1]. The code is pretty straightforward but the module takes ages to compile, almost certainly due to me giving the type checker an unnecessarily hard time. I have a trivial rewrite on my to-do list which I expect will shorten the compilation time dramatically, but the current form comes in kind of handy for the purposes of this benchmark. The Git repo is git://github.com/bjornbm/astro-tables.git and the commit used in the benchmarking was e63b8978833878526870b2101697197ff64af593. I made sure the dependencies were already installed and ran time cabal install.

highlighting-kate build

From recent memory I knew that John MacFarlane’s highlighting-kate package has a hefty number of modules (the majority of which are also automatically generated) that take a fair amount of time to compile. I downloaded version 0.2.4 from Hackage, made sure all dependencies were already installed, and ran time cabal install --flags=executable.

I ran into one snag with this test: the build wouldn’t complete with GHC 6.10.3 on the PowerBook G4 due to some problem with pcre-light (which tends to give me headaches on pretty much every platform). This particular headache [2] I was unable to resolve, and I had to run the test using GHC 6.10.1 on the PowerBook G4 instead.

fad test suite

Finally, I did a runtime performance benchmark (as opposed to a compilation benchmark): running the test suite of the fad library. The Git repo is git://github.com/bjornbm/fad.git and the commit used was cd2965a6741291570930e4bf6e9f8f9ab64ccadd. I ran ghc --make Test and then time ./Test.


  [1] An implementation of the 678 lunisolar terms and 687 planetary terms of the IAU 2000A Precession-Nutation Model.

  [2] gcc: Internal error: Virtual timer expired (program cc1)

2009-05-10

May 2009 HCAR Submissions

Last weekend was the submission deadline for contributions to the May 2009 edition of the Haskell Communities and Activities Report (HCAR). I absolutely love the HCAR; it’s a terrific and comprehensive source of information on all the amazing stuff people are doing with Haskell. I’m very grateful to the editor Janis Voigtländer (and Andres Löh before him) for compiling the report; in my opinion he does a great service to the Haskell community. I can’t wait to get my hands on the upcoming edition!

Below are my submissions to the HCAR (free blogging material, yay!) with some additional notes.

dimensional: Statically checked physical dimensions

Report by: Björn Buckwalter
Status: active, mostly stable

Dimensional is a library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types, and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division with units. The library is designed to, as far as is practical, enforce/encourage best practices of unit usage.
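
For a flavor of the intended usage, here is a small schematic example (the wiki on the project web site has complete, working ones):

    {-# LANGUAGE NoImplicitPrelude #-}
    module Example where

    import Numeric.Units.Dimensional.Prelude
    import qualified Prelude  -- the ordinary Prelude, for anything hidden above

    -- Numerical values are boxed into quantities by multiplying with a unit.
    height :: Length Double
    height = 1.829 *~ meter

    duration :: Time Double
    duration = 12 *~ second

    -- The type checker tracks the dimensions: a Length divided by a Time is
    -- a Velocity, while e.g. adding a Length to a Time is a compile-time error.
    speed :: Velocity Double
    speed = height / duration

    -- Unboxing back to a plain number is division by the desired unit.
    speedValue :: Double
    speedValue = speed /~ (meter / second)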

The core of dimensional is stable with additional units being added on an as-needed basis. In addition to the SI system of units, dimensional has experimental support for user-defined dimensions and a proof-of-concept implementation of the CGS system of units. I am also experimenting with forward automatic differentiation and rudimentary linear algebra.

The current release is compatible with GHC 6.6.x and above and can be downloaded from Hackage or the project web site. The primary documentation is the literate Haskell source code, but the wiki on the project web site has a few usage examples to help with getting started.

Further Reading
http://dimensional.googlecode.com

Dimensional was largely the project that enticed me to learn Haskell, and is the one that I am most proud of to date. It relies on some pretty tricky type-level hackery, stuff I would never have been able to pull off without the help of many papers ranging from Oleg Kiselyov’s amazing type hacks to Hudak et al.’s A History of Haskell.

Its primary shortcoming at present is the lack of Haddock documentation. Dimensional is written in a literate style and I think anyone reading the source will find the documentation pretty comprehensive and easy to follow (barring the fact that type hacking is pretty tricky business). Alas, with the last few years’ infrastructure improvements and increased (de facto) standardization in the community, not having proper haddocks is a wart.

fad: Forward Automatic Differentiation

Report by: Björn Buckwalter
Participants: Barak A. Pearlmutter, Jeffrey Mark Siskind
Status: active

Fad is an attempt to make as comprehensive and usable a forward automatic differentiation (AD) library as is possible in Haskell. Fad (a) attempts to be correct, by making it difficult to accidentally get a numerically incorrect derivative; (b) provides not only first derivatives, but also a lazy tower of higher-order derivatives; (c) allows nested use of derivative operators while using the type system to reject incorrect nesting (perturbation confusion); (d) attempts to be complete, in the sense of allowing calculation of derivatives of functions defined using a large variety of Haskell constructs; and (e) tries to be efficient, in the sense of both retaining the defining properties of forward automatic differentiation and keeping the constant factor overhead as low as possible.
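
To make this a bit more concrete, here is a tiny (schematic) example using the diff and diffs operators from Numeric.FAD:

    module Main where

    import Numeric.FAD (diff, diffs)

    -- An ordinary polymorphic numeric function; nothing AD-specific here.
    f :: Num a => a -> a
    f x = x ^ (3 :: Int) + 2 * x

    main :: IO ()
    main = do
      -- First derivative of f at x = 5, i.e. 3*5^2 + 2 = 77.
      print (diff f 5 :: Double)
      -- The first few entries of the lazy tower of higher-order derivatives
      -- of f at x = 5.
      print (take 4 (diffs f 5) :: [Double])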

Version 1.0 of fad was uploaded to Hackage on April 3. Recent changes can be found via git clone git://github.com/bjornbm/fad.git

Further Reading
http://github.com/bjornbm/fad
http://flygdynamikern.blogspot.com/2009/04/announce-fad-10-forward-automatic.html

The majority of work on fad (including most of the above report) is actually being done by Barak Pearlmutter, principal of the Hamilton Institute’s Brain and Computation Lab. As you can understand, his brain is in a different league than mine when it comes to anything related to computation. My contribution at this stage is mostly reviewing his patches (often struggling to understand them) and providing guidance and support on Haskell infrastructure and conventions.

leapseconds-announced

Report by: Björn Buckwalter
Status: stable, maintained

The leapseconds-announced library provides an easy-to-use static LeapSecondTable with the leap seconds announced at library release time. It is intended as a quick-and-dirty leap second solution for one-off analyses concerned only with the past and present (i.e. up until the next, as yet unannounced, leap second), or for applications which can afford to be recompiled against an updated library as often as every six months.
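
Usage amounts to importing the table and passing it wherever the time library expects a LeapSecondTable; a minimal sketch (the exact module and value names below are approximate, see the haddocks for the authoritative ones):

    module Main where

    import Data.Time.Calendar (fromGregorian)
    -- The compiled-in table of announced leap seconds.  (Module and value
    -- names assumed here; check the package documentation.)
    import Data.Time.Clock.AnnouncedLeapSeconds (lst)

    main :: IO ()
    main = do
      -- A LeapSecondTable (from Data.Time.Clock.TAI) is simply a function
      -- from a Day to the TAI-UTC offset, in seconds, valid on that day.
      print (lst (fromGregorian 2008 12 31))  -- 33, before the latest leap second
      print (lst (fromGregorian 2009 1 1))    -- 34, after it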

Version 2009 of leapseconds-announced contains all leap seconds up to 2009-01-01. A new version will be uploaded if/when the IERS announces a new leap second.

Further Reading
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/leapseconds-announced
http://github.com/bjornbm/leapseconds-announced

Not much to add about leapseconds-announced other than that it fills a need I have. I elaborated a little on what it is (and what it doesn’t try to be) in the announcement thread.

That’s it! I hope to have another project or two to report on in the November edition…