Herein I demonstrate a calculation performed today in my work. The details are slightly different to protect the innocent, but the problem is the same. The purpose of all this is to demonstrate use of the *dimensional* library. What this blog post unfortunately fails to demonstrate is how the compiler helps catch errors in units and formulae, effectively double-checking any derivations for me – this is the real value of dimensional! In case you have never heard of dimensional before, here is the blurb from the project site:

Dimensional is a library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division with units. The library is designed to, as far as is practical, enforce/encourage best practices of unit usage.

This is a literate Haskell program. You can copy and paste this blog post into a *.lhs* file and it should compile and run (provided you have installed dimensional). First the formalities:

`> import Numeric.Units.Dimensional.Prelude`

`> import Numeric.Units.Dimensional.NonSI`

`> import qualified Prelude`

(In what follows I have used parentheses rather than juxtaposition to denote function application for functions of time, e.g. *p*(*t*). Does this help or hurt clarity given the subject matter?)

Consider a leaky container of volume *v* filled with gas to pressure *p*_{0} and launched into space at time *t*_{0} (insert favorite monad tutorial joke here). There is a requirement that at some future time *t*_{1} said container must retain no less than a given amount of gas. The temperature *T* of the container is held constant by a thermal control system, so per the ideal gas law (a good enough approximation) we can characterize the requirement on the remnant gas by its pressure *p*_{min}. The problem at hand was to determine the maximum allowable leak rate of the container at *t*_{0} that ensures the pressure at time *t*_{1} is no less than *p*_{min}.
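To spell out why a pressure bound suffices: with *v* and *T* held constant, the ideal gas law makes the amount of substance proportional to pressure, so a lower bound on the remaining gas is equivalent to a lower bound on pressure. (Here *n*_{min} is just my notation for the required remnant amount; it does not appear in the code below.)

$$
n(t) = \frac{p(t)\,v}{R\,T}
\quad\Longrightarrow\quad
n(t_1) \ge n_{\min} \iff p(t_1) \ge p_{\min} = \frac{n_{\min}\,R\,T}{v}.
$$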

Here are the inputs (don’t blame me for the choice of units, this is what I was given):

`> v = 2 *~ liter`

`> t_0 = 0 *~ hour`

> p_0 = 750 *~ mmHg -- roughly atmospheric pressure

`> t_1 = 133650 *~ hour -- about 15.25 years`

`> p_min = 5 *~ mmHg`

`> temp = fromDegreeCelsiusAbsolute 22`

For slow leakage it can be assumed that the leakage rate is proportional to the pressure of the gas inside the container (more generally, the pressure difference between the gas in the container and the surrounding medium). Thus the rate of change of the pressure is described by the first order linear differential equation

`> dpdt(t) = negate k * p(t)`

with solution

`> p(t) = c * exp (negate k * t)`
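For reference, the solution follows by separation of variables:

$$
\frac{dp}{p} = -k\,dt
\quad\Longrightarrow\quad
\ln p(t) = -k t + \text{const}
\quad\Longrightarrow\quad
p(t) = c\,e^{-k t}.
$$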

The integration constant *c* is determined from the initial conditions at *t*_{0}:

`> c = p_0 -- follows from t_0 = 0.`

We choose the worst case condition at *t*_{1}, namely *p*(*t*_{1}) = *p*_{min}, to determine *k*:

`> k = log (c / p_min) / t_1`
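This expression for *k* comes from solving the worst case condition for the decay constant:

$$
p_{\min} = p(t_1) = c\,e^{-k t_1}
\quad\Longrightarrow\quad
k = \frac{1}{t_1}\,\ln\frac{c}{p_{\min}}.
$$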

Now, per the ideal gas law the amount of substance *n* in the container is described by

`> n(t) = p(t) * v / (r * temp)`

where

`> r = 8.314472 *~ (joule / (mole * kelvin))`

is the ideal gas constant. Differentiating *n*(*t*) gives the rate of change of substance in the container:

`> dndt(t) = dpdt(t) * v / (r * temp)`
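Since *v*, *R* and *T* are constant, differentiation only touches *p*(*t*), and at *t*_{0} (where *p*(*t*_{0}) = *p*_{0}) the rate takes a closed form:

$$
\frac{dn}{dt} = \frac{v}{R\,T}\,\frac{dp}{dt} = -\frac{k\,v}{R\,T}\,p(t),
\qquad
\left.\frac{dn}{dt}\right|_{t_0} = -\frac{k\,p_0\,v}{R\,T}.
$$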

and the maximum allowed rate of change (leakage rate) at time *t*_{0} is produced by

`> main = print (dndt t_0) -- -8.486687441280605e-10 s^-1 mol`
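As a sanity check, the same number can be reproduced in plain Haskell with all quantities converted to SI by hand – no dimensional, and therefore no compile-time unit checking. The mmHg-to-pascal conversion factor below is my own assumption (133.322387415 Pa per mmHg), not taken from dimensional's NonSI module, so the last digits may differ slightly.

```haskell
-- Plain-Prelude sanity check of the leak-rate calculation.
-- All unit conversions are done by hand; the mmHg factor is assumed.

mmHgInPa :: Double
mmHgInPa = 133.322387415  -- Pa per mmHg (assumed conversion factor)

dndt0 :: Double
dndt0 = negate k * p0 * v / (r * temp)  -- mol/s at t_0
  where
    v    = 2.0e-3              -- m^3   (2 liter)
    p0   = 750 * mmHgInPa      -- Pa    (750 mmHg)
    pMin = 5   * mmHgInPa      -- Pa    (5 mmHg)
    t1   = 133650 * 3600       -- s     (133650 hours)
    temp = 22 + 273.15         -- K     (22 degrees Celsius)
    r    = 8.314472            -- J / (mol K)
    k    = log (p0 / pMin) / t1  -- 1/s

main :: IO ()
main = print dndt0  -- roughly -8.49e-10
```

Note that the conversion factor cancels inside `log (p0 / pMin)`, so it only affects the result through `p0` itself.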

That’s it!