Biofuels in aviation

Boeing 787 Dreamliner. At least 30 football pitches of biofuel crop needed for one full-range flight. Image credit: pjs2005 from Hampshire, UK, CC BY-SA 2.0, via Wikimedia Commons.

Carbon emissions and climate change are a huge story in the news at the moment, and the aviation industry is, quite rightly, often in the spotlight. There is talk of using biofuels to partially or completely displace fossil fuels in aviation.

That’s easy to say, but how much land would be needed to produce the energy crops? This is a complicated question, but what I want to do here is an order-of-magnitude calculation to show the alarming scale of the issue. I’m going to ask what area of oil-seed crop we would need to fuel a single full-range flight of a typical long-haul airliner.

For a smallish long-haul airliner, such as the one above, and using the controversial but high-yielding oil palm for fuel, we’d need the annual crop from 20 hectares of land to fuel a single flight. That’s about 30 football pitches. For one flight.

That figure becomes 100 hectares (a square kilometre, 150 football pitches) if we use the less controversial oil-seed rape. For one flight.

Or to put it into a different context, airports have large areas of grass on them. There’s roughly 2 square kilometres of grass at Heathrow. Let’s suppose that we use all of that area to grow oil-seed rape instead. We could use that crop to fuel TWO full-range flights of a smallish long-haul airliner each year. About a quarter of a million planes take off from Heathrow annually.

I despair at the refusal of people (often privileged Westerners such as myself) to face up to reality when it comes to flying or transport more generally.

Yes, but… (1)

…isn’t this an unrealistically pessimistic calculation? We won’t necessarily be using dedicated fuel crops for aviation. For example, there are other crop residues that we could use to provide fuels.

About 70% of the land area of the UK is devoted to agriculture, about a third of which is arable land: roughly 60 000 square kilometres. So if we used the whole lot for growing oil-seed rape, it looks doubtful that we’d keep Heathrow in jet fuel, even allowing for the facts that not every flight is long-haul and that not all planes take off with full tanks. But if, instead of using a crop optimised for oil production, we use the wastes from crops optimised for food production, the land requirement must increase hugely. And don’t forget that some of those wastes already have uses.
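As a sanity check on that claim, here’s the back-of-envelope arithmetic in runnable form. The figures (60 000 square kilometres of arable land, 1000 kg/ha/yr for oil-seed rape, 100 tonnes of fuel per flight, a quarter of a million departures) are the ones used in this post; everything else is deliberately charitable.

```python
# Back-of-envelope check: could the UK's entire arable area keep Heathrow in jet fuel?
ARABLE_HA = 60_000 * 100           # 60 000 km^2 expressed in hectares
RAPE_YIELD_KG_PER_HA_YR = 1000     # oil-seed rape: kg of oil per hectare per year
FUEL_PER_FLIGHT_KG = 100_000       # smallish long-haul airliner with full tanks
DEPARTURES_PER_YEAR = 250_000      # flights leaving Heathrow annually

oil_per_year_kg = ARABLE_HA * RAPE_YIELD_KG_PER_HA_YR
flights_fuelled = oil_per_year_kg / FUEL_PER_FLIGHT_KG

print(flights_fuelled)                        # 60000.0 full-range flights' worth
print(flights_fuelled / DEPARTURES_PER_YEAR)  # 0.24: about a quarter of departures
```

Even on the charitable assumption that every departure needs full long-haul tanks, the whole arable area covers only about a quarter of Heathrow’s flights.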

Yes, but… (2)

…can’t we grow the fuels elsewhere and import them?

I haven’t done any sums here. But remember that other countries are likely to want to produce biofuels for their own aviation industries.

The calculation

There’s a table here showing the annual yield of various crops from which we can produce oil. The yields vary from 147 kg of oil per hectare per year for maize, to 1000 kg/ha/yr for oil-seed rape (common in the UK), to 5000 kg/ha/yr for the highly controversial oil palm. I will assume that the oil can be converted to jet fuel with 100% efficiency.

The fuel capacity of long-haul airliners varies from about 100 tonnes (eg Boeing 787 Dreamliner) up to 250 tonnes (Airbus A380).

Taking the smallest plane and the highest-yielding oil crop, the annual land requirement is

\dfrac{100\,000\ \mathrm{kg}}{5000\ \mathrm{kg/ha}} = 20 hectares per flight.

If we use oil-seed rape instead, the resulting land area is 100 hectares per flight.
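The same calculation in runnable form, including the maize figure from the yield table for comparison:

```python
# Annual land requirement per full-range flight, assuming the crop oil can be
# converted to jet fuel with 100% efficiency (as in the text)
fuel_kg = 100_000  # smallish long-haul airliner, e.g. Boeing 787

yields_kg_per_ha_yr = {"oil palm": 5000, "oil-seed rape": 1000, "maize": 147}

hectares_per_flight = {crop: fuel_kg / y for crop, y in yields_kg_per_ha_yr.items()}
for crop, ha in hectares_per_flight.items():
    print(f"{crop}: {ha:.0f} hectares per flight")
# oil palm: 20, oil-seed rape: 100, maize: 680
```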

Red, green, and blue make…black!

Mixing magenta, cyan, and yellow shadows

In the previous post I looked at how coloured shadows are formed. As I wrote it, I realised how much there is to learn from the coloured shadows demonstration; that’s what this post is about. The image above shows coloured shadows cast by a white paper disc with a grey surround.

Mixing red, green, and blue lights

In the image at the top, we’re mixing shadows. If we were mixing lights in the normal way, it would look like the picture on the right.

So what do we learn from the coloured shadows image?

Red, green and blue add to make white

The white paper disc has all three lights shining on it, and it appears white. Mixing lights shows us this too.

The three lights still exist independently when they are mixed

Some descriptions of colour light mixing could leave you with the impression that when we mix red, green, and blue lights together to make white, they combine to make a new kind of light, a bit like the way butter, eggs, flour and sugar combine to create something completely different: a cake.

But if that were so, we’d only get a black shadow. The fact that we get three coloured shadows shows that the three coloured lights maintain their independence even though they’re passing through the same region of space. It’s very like ripples on a pond: if you throw two pebbles into a pond, the two sets of ripples spread through the same region of water, each one travelling through the water as if the other pebble’s ripples weren’t there.

Coloured shadows obey subtractive colour mixing rules

Mixing red, green, and blue lights

When we mix coloured lights, additive colour mixing rules apply:

  • Red and blue make magenta.
  • Red and green make (surprisingly) yellow.
  • Blue and green make cyan.
  • All three colours add to make white.
Mixing magenta, cyan, and yellow shadows

In the coloured shadows image, it looks at first glance as if we are adding together coloured lights. But if we were, we’d expect the centre of the pattern, where all the lights overlap, to be white, as it is in the light-mixing image. Instead, the centre of the coloured shadows pattern is black.

The reason is that we aren’t adding coloured lights, we’re adding coloured shadows, and now subtractive colour mixing rules – the rules of mixing paints – apply.

In the cyan shadow, red has been blocked by the disc, leaving green and blue. In the yellow shadow, blue has been blocked by the disc, leaving red and green. Where the cyan and yellow shadows overlap, the only colour that has not been blocked by one disc or the other is green, so that’s the colour we see. We get the same result when we mix blue and yellow paints: the only colour that both paints reflect well is green. (If the blue and yellow paints reflected only blue and only yellow respectively, the mixture would appear black.)

In the black centre of the pattern, all three lights are blocked by the disc. Something similar happens when you mix every colour in your paint box together.

Colour printing is based on these subtractive colour mixing rules.
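To make the logic concrete, here’s a small sketch of my own (not from the original demonstration) that models each shadow as the set of lights still reaching the screen; where shadows overlap, those sets intersect, which is exactly the subtractive rule:

```python
PRIMARIES = {"red", "green", "blue"}

# Each shadow is the set of lights NOT blocked by the disc
cyan_shadow = PRIMARIES - {"red"}     # red blocked -> green and blue remain
yellow_shadow = PRIMARIES - {"blue"}  # blue blocked -> red and green remain

overlap = cyan_shadow & yellow_shadow
print(overlap)  # {'green'}: only green survives both blockings

# Centre of the pattern: all three lights blocked
centre = cyan_shadow & yellow_shadow & (PRIMARIES - {"green"})
print(centre)   # set(): nothing survives, so the centre is black
```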

Brightness matters, and blue isn’t very bright

In either of the light- or shadow-mixing images, the boundaries between the regions aren’t all equally distinct. The least distinct ones are:

  • magenta and red
  • green and cyan
  • blue and black
  • yellow and white

In each case, the difference in the colours is the presence or absence of blue light.

There are two things at work here. Firstly, more than we might think, our vision is based on brightness, not on colour. We happily watch black-and-white movies; after a while we hardly notice the absence of colour.

Secondly, our sensation of brightness is largely due to the red-yellow-green end of the spectrum – blue makes a very small contribution, if any. So although the presence or absence of blue light can have a strong effect on colour, it has a weak effect on brightness. So boundaries defined by the presence or absence of blue light tend to be relatively indistinct compared to those defined by the presence or absence of red or green light.
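To put rough numbers on this, here’s a sketch using the Rec. 709 luma coefficients, a broadcast standard I’m bringing in (the post doesn’t cite specific weights). Note how tiny the blue coefficient is compared with green:

```python
# Approximate perceived brightness of an RGB colour (Rec. 709 luma weights)
def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A boundary defined by the presence/absence of blue: white vs yellow
print(luma(1, 1, 1) - luma(1, 1, 0))  # ~0.07: barely any brightness step
# A boundary defined by the presence/absence of green: yellow vs red
print(luma(1, 1, 0) - luma(1, 0, 0))  # ~0.72: a large brightness step
```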

Coloured shadows

This is a photograph that I took as a response to a challenge that was set by photographer Kim Ayres as part of his weekly podcast Understanding Photography. The challenge was to produce a photo where the main interest was provided by shadows. I lit a rose cutting using red, green, and blue lights that were about 3 metres away and 30 centimetres apart from each other. The result is a gorgeous display of coloured shadows. Coloured shadows are nothing new, but they are always lovely.

Kim suggested that I do a blog post to explain more about how coloured shadows arise. To do this I set up an arrangement for creating simple coloured shadows. One part of the arrangement is three lights: red, green, and blue arranged in a triangle.

The lights shine upon a white screen set up about 3 metres away. In front of the screen, a wire rod supports a small black disc.

First of all, let’s turn on the red light only. The screen appears red, and we can see the shadow of the disc on it. The shadow occupies the parts of the screen that the red light can’t reach because the disc is in the way.

Next, we’ll turn on the green light only. Now the screen appears green, and for the same reasons as before, there’s a shadow on it. The shadow is further to the left than it was with the red light; this is because the green light is to the right of the red light as you face the screen.

Next, we’ll turn on the blue light only, with the expected result. The blue light is lower than the red and green ones, so the shadow appears higher on the screen. (The shadow is less sharp than the previous two. This is because my blue light happens to be larger than the red or green lights).

Now we’re going to turn on both the red and green lights. Perhaps unsurprisingly, we see two shadows. They are in the same places as the shadows we got with the red and green lights on their own. But now they are coloured. The shadow cast by the red light is green. This is because, although the disc blocks red light from this part of the screen, it doesn’t block green light, so the green light fills in the red light’s shadow. Similarly, the shadow cast by the green light is red.

The screen itself appears yellow. This is because, by the rules of mixing coloured lights (which aren’t the same as the rules for mixing coloured paints), red light added to green light gives yellow light.

We can do the same with the other possible pairs of lights: red & blue, and green & blue. (The green shadow looks yellowish here. It does in real life too. I think this is because it’s being seen against the bluish background.)

We’re now going to turn on all three lights. As you might expect, we get three shadows. The colours of the shadows are more complicated now. The shadow cast by the red light is filled in with light from both of the other lights – green and blue – so it has the greeny-blue colour traditionally referred to as cyan. The shadow cast by the green light is filled in with light from the red and blue lights, so it is the colour traditionally called magenta. And the shadow cast by the blue light is filled in with light from the red and green lights, and thus appears yellow.

The rest of the screen, which is illuminated by all three lights, is white, because the laws for mixing coloured lights tell us that red + green + blue = white. The white is uneven because my lights had rather narrow and uneven beams.

Finally, let’s add further richness by using a larger disc, so that the shadows of the three lights overlap. Now we get shadows in seven colours, as follows.

Where the disc blocks one light and allows two lights to illuminate the screen, we see the colours of the three pairwise mixtures of the lights: yellow (red+green), magenta (red+blue), and cyan (green+blue).

Where the disc blocks two lights and allows only one light to illuminate the screen, we see the colours of the three individual lights: red, green, or blue.

And in the middle, there’s a region where the disc blocks the light from all three lights, so here we get a good old-fashioned black shadow.
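All seven shadow colours (plus the fully lit white background) can be enumerated by listing which subset of the three lights the disc blocks. This is an illustrative sketch of my own, using the additive mixing names from the text:

```python
from itertools import combinations

LIGHTS = ("red", "green", "blue")

# Additive colour produced by each subset of lights reaching the screen
NAME = {
    frozenset(): "black",
    frozenset({"red"}): "red",
    frozenset({"green"}): "green",
    frozenset({"blue"}): "blue",
    frozenset({"red", "green"}): "yellow",
    frozenset({"red", "blue"}): "magenta",
    frozenset({"green", "blue"}): "cyan",
    frozenset(LIGHTS): "white",
}

# The disc may block any subset of the three lights: 8 regions in all,
# of which 7 are shadows (at least one light blocked) and 1 is white.
for k in range(4):
    for blocked in combinations(LIGHTS, k):
        reaching = frozenset(LIGHTS) - frozenset(blocked)
        print(f"blocked {sorted(blocked)}: {NAME[reaching]}")
```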

If it’s a bit hard to wrap your head around this, let’s try looking at things from the screen’s point of view. Here I’ve replaced the screen with a thin piece of paper so that the shadows are visible from both sides. I’ve made holes in the screen in the middle of each of the coloured regions, so that we can look back through the screen towards the lights.

Here’s what you see when we look back through the magenta shadow. We can see the red light and the blue light, but not the green one – it’s hidden behind the disc.

This is the view looking back through the green shadow. We can see only the green light. The red and blue lights are hidden behind the disc.

And so on…

I’ve written some more thoughts about coloured shadows here.

Disappearing lighthouses: atmospheric refraction at Portobello beach

Look out to sea at night from the beach at Portobello in Edinburgh, and you’ll often see lighthouses blinking in the blackness. Here’s a picture of those lights, taken in such a way as to convert time into space so that the different flashing sequences become apparent.

I pointed my camera out to sea, on a tripod, and slowly panned upwards over what was effectively a long exposure of about a minute. The movement of the camera smears out any light source into a near-vertical streak (the tripod wasn’t very level). The two thick streaks are ships at anchor in the Firth of Forth.

A flashing light leaves a broken streak; this reveals the lighthouses. At the left is Elie Ness on the Fife coast, flashing once every 6 seconds. Second from the right is the Isle of May in the Firth of Forth, flashing twice every 15 seconds. And at far right we have Fidra, off North Berwick, flashing 4 times every 30 seconds.

And the green streak? A passing dog with a luminous collar!

Lighthouses are lovely, romantic things. But what’s extra lovely about the lighthouses in this image is that you shouldn’t be able to see two of them!

Both the Elie Ness and Isle of May lighthouses are, geometrically, beyond the horizon. If light travelled in straight lines, we wouldn’t be able to see either of them. As it is, whether we can see them or not depends upon the weather: if I’d taken the picture in cold weather, only the Fidra light would have been visible.

Isle of May lighthouse. Image: Jeremy Atherton.

The reason that we can see them at all is that light travelling through the atmosphere is refracted by the air: its path is (usually) slightly curved downwards. This means that we can see slightly “over the hill”, a bit further than simple geometry would suggest. Things near the horizon appear higher in the sky than they really are.

It’s not a small effect. For example, when we see the Sun sitting with its lower edge just touching the horizon, the geometric reality is that all of it is actually below the horizon.

The refraction happens because the air gets thinner as you go upwards. Just as light passing through a prism is bent towards the thicker end of the prism, so light passing through the atmosphere is bent towards the thicker (denser) part of the air.

Elie Ness lighthouse. Image: Francis Webb.

The amount of atmospheric refraction depends upon the weather. It depends upon the pressure and temperature, and the temperature gradient (how quickly the temperature drops as you go upwards). When it’s cold, the Elie Ness and Isle of May lights disappear. If it warms up, they pop back into view again. And when it’s really cold and the air is colder than the sea, the refraction can be reversed, and we see mirages along the coast. But that’s for another time.

Mirages on the Fife coast, seen from Portobello (distance about 8 miles/13 km). Early April, with air temperature forecast to be about 0 °C. The sea temperature was about 7 °C.

How horizons work, and how we see things that are beyond the horizon

The horizon exists because the Earth’s surface is curved. In these diagrams the circular arc represents the surface of the Earth. If my eye is at the point O, I can see the Earth’s surface up to the point H (where my line of sight just grazes the surface) but no further. The point H is on my horizon. Its distance depends upon the height of O. It can be surprisingly close: if I stand at the water’s edge, my horizon is only 4.5 km away.


If the thing I’m looking at (a lighthouse L for example) is raised above the surface of the Earth, I’ll be able to see it when it is further away. In the diagram below, I’ll just be able to see the lighthouse. The rays from the lighthouse will just kiss the wave tops in the darkness on their way to my eye. That grazing point is on the lighthouse’s horizon as well as mine. The higher the lighthouse (or my eye) is, the further away it can be and still be visible.


But if it’s too far away (below), I won’t be able to see it. There’ll be a region of the sea that is not in sight from either my eye or the lighthouse. This “no-man’s land” is the region between my horizon H_O and the lighthouse’s horizon H_L. For example, there’s about 6 km of no-man’s land between me on the promenade at Portobello and the Isle of May lighthouse.


All of these diagrams assume that rays of light are perfectly straight. But rays of light passing through the air aren’t perfectly straight: they have a downward curvature because of atmospheric refraction. This means that rays from an object that is geometrically beyond the horizon might actually reach my eye. That’s why I can sometimes see the Elie Ness and Isle of May lights. 

The curvature of the rays of light is hugely exaggerated in this diagram. Otherwise it would be imperceptible. The rays deviate from straightness by only a few metres over a journey of several tens of kilometres.

The curvature of the rays varies according to temperature, pressure etc. It’s a happy accident (for me) that both lighthouses are only marginally out of view over the horizon, so that they can disappear and reappear according to the whims of the weather.

How I took the picture

The picture isn’t actually a single long exposure. I thought that I might end up with a pale sky rather than a black one if I did that. Instead, I took a one-minute video, slowly panning the camera upwards. I then wrote a program in Python to produce a composite image derived from all of the video frames. For each pixel position in the image, the program inspected that pixel position in all of the video frames and chose the brightest value.
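A minimal version of that compositing step, using NumPy’s elementwise maximum. Two tiny synthetic greyscale “frames” stand in for the real video frames, since the original program isn’t reproduced here:

```python
import numpy as np

# "Lighten" composite: for each pixel position, keep the brightest value
# seen in any frame of the video.
frames = [
    np.array([[10, 200], [30, 40]], dtype=np.uint8),
    np.array([[90, 20], [35, 240]], dtype=np.uint8),
]

composite = frames[0]
for frame in frames[1:]:
    composite = np.maximum(composite, frame)

print(composite.tolist())  # [[90, 200], [35, 240]]
```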

Calculating the distance of the horizon

We can use Pythagoras’ theorem to work out how far away the horizon is. In the diagram below, the circle represents the surface of the Earth, with centre at C and radius r. You, the observer, are at O, a height h (greatly exaggerated) above the surface. Your horizon is at H, a distance d away.

The triangle OHC is right-angled at H. Applying Pythagoras’ Theorem, we get
(r+h)^2 = d^2 + r^2
and so
d^2 = r^2 + 2rh + h^2 - r^2 = 2rh + h^2

Where h is very small compared to r, as it will be for the heights we’re dealing with, h^2 \ll 2rh, so we can neglect the term h^2 and get, to a very good approximation (within centimetres)
d = \sqrt{2rh}
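In runnable form, taking r as the Earth’s mean radius (6371 km); the 1.6 m eye height for “standing at the water’s edge” is my assumption:

```python
from math import sqrt

R_EARTH = 6_371_000  # mean Earth radius in metres

def horizon_km(eye_height_m):
    """Distance to the horizon, d = sqrt(2 r h), refraction ignored."""
    return sqrt(2 * R_EARTH * eye_height_m) / 1000

print(round(horizon_km(1.6), 1))  # 4.5 km standing at the water's edge
print(round(horizon_km(4.5), 1))  # 7.6 km from Portobello promenade
```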

Calculating the visibility of the lighthouses

There is a Wikipedia article with formulae for the curvature of near-horizontal light rays in the atmosphere.

I’m only going to give a summary of results here, principally because although I’ve got the spreadsheet that does the calculations, I can’t find the notebook where I worked out the geometry. Here are the details of the lighthouses. Heights are the height of the lamp in metres above mean sea level.

Lighthouse     Distance (km)   Height above sea level (m)
Isle of May    44              73
Elie Ness      31              15

Consider the next table as an example, based on roughly typical weather conditions for this time of year (March). The figures assume that I’m standing on Portobello promenade. The Ordnance Survey map shows a 3 metre spot height marked on the prom, so that would make my eye about 4.5 metres above mean sea level.

The first three numerical columns of the table show how high above the horizon each lighthouse would be in the absence of refraction, what the estimated refraction is, and thus how high above the horizon the light should appear. The heights are expressed as angles subtended at your eye. There’s a lot of uncertainty in the estimated refraction (because of uncertainty about the input values such as temperature gradient), which is why the middle two columns are given to only one significant figure.

Lighthouse     Angle above horizon, no refraction (deg)   Estimated refraction (deg)   Estimated angle with refraction (deg)   Water depth over direct line of sight (m)
Isle of May    -0.040                                     0.049                         0.01                                   5.3
Elie Ness      -0.051                                     0.035                        -0.02                                   6.8

Thus we see that the Isle of May and Elie Ness have negative heights above the horizon without refraction, ie they’re geometrically below the horizon. In the conditions given, refraction is enough to raise Isle of May into visibility, but not Elie Ness – the angle with refraction is still negative. This accords with my experience: I’m more likely to be able to see Isle of May than Elie Ness. Fidra is above the horizon, refraction or no refraction.

Note that the angles above and below the horizon are tiny. For comparison, an object 1 mm across held at arm’s length subtends an angle at your eye of about 0.1 degrees. Most of the angles in the table are less than half that.

The rightmost column is there to help understand how the tide can affect things. Saying that the Isle of May and Elie Ness lights are beyond the horizon is saying that there’s water between my eye and their lamps. Imagine that the light from the lamps could travel through the water completely unimpeded and in a perfectly straight line to my eye. This column shows how far under the water the light rays would be at their deepest. As you can see, they are single-digit numbers of metres. Now the tidal range in the Firth of Forth is about 4 metres. What this shows us is that the state of the tide could easily make the difference between seeing or not seeing a given lighthouse. It also brings home how slight a curvature of the rays is produced by refraction: in Isle of May’s case, there’s just enough curvature to get the rays over a 5-metre bump in a 44-kilometre journey.
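Since the notebook with the original geometry is lost, here is a simplified reconstruction of the no-refraction column that I’ve put together (not the author’s exact method): the lamp’s elevation relative to the local horizontal, minus the drop due to the Earth’s bulge, measured from the dipped apparent horizon. It lands within a couple of hundredths of a degree of the table’s figures; the residual presumably reflects different modelling assumptions.

```python
from math import sqrt, degrees

R = 6_371_000  # Earth radius, m
H_EYE = 4.5    # eye height on Portobello promenade, m

def angle_above_horizon_deg(dist_m, lamp_height_m):
    # Small-angle, no-refraction sketch: lamp elevation relative to the
    # local horizontal, minus the drop due to the Earth's curvature,
    # measured from the dipped apparent sea horizon.
    elevation = lamp_height_m / dist_m - dist_m / (2 * R)
    dip = sqrt(2 * H_EYE / R)
    return degrees(elevation + dip)

print(angle_above_horizon_deg(44_000, 73))  # ~ -0.035 deg (table: -0.040)
print(angle_above_horizon_deg(31_000, 15))  # ~ -0.044 deg (table: -0.051)
```

Both angles come out negative, confirming that the two lighthouses are geometrically below the horizon.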


Image credits

Isle of May: Jeremy Atherton; Elie Ness: Francis Webb. Both under CCA license.

Galactic greenhouse

Before Christmas, my enterprising friend Clare decided to brighten the dark nights of a Scottish winter by turning her greenhouse into an illuminated art gallery. She asked friends to produce translucent artworks that could be hung in the greenhouse and lit from within.

My contribution is a representation of the movement across the sky of Jupiter and Saturn (and some smaller planets) in the two years bracketing the recent Great Conjunction. It’s made from a sheet of wallpaper, painted black, with holes cut out and with coloured filters placed behind the holes.

A photograph of the piece. It’s just over 50 cm wide. The variations in brightness of the discs aren’t part of the plan; the illumination wasn’t perfectly even.

The piece is divided into 30 rows. All but one of these rows contain a large red disc (representing Jupiter) and a large yellow disc (representing Saturn). A row may also contain smaller discs, representing Mars in pink, Venus in white and Mercury in blue. The purple discs represent the ex-planet Pluto. All of the discs hugely exaggerate the size of their planets.

Each row represents the same strip of the sky, in the sense that if I had included stars on the piece, the same stars would appear in the same positions on every row. From top to bottom, the rows show that strip of sky at 25-day intervals, covering a period from roughly one year before the Great Conjunction to one year after. The discs in each row indicate the positions of any planets that are in that strip of sky at the time.

Concentrating on Jupiter (red) and Saturn (yellow) first, we see that they have a general leftward motion, but with periods of rightward motion. Jupiter’s overall leftward motion is faster than Saturn’s: it starts to the right of Saturn and finishes to the left. Because Jupiter overtakes Saturn, there comes a point where they are at the same place in the sky. This is the Great Conjunction: in this row, both Jupiter and Saturn are represented by a single large white disc.

Mars, Venus and Mercury move much faster. Mars crosses our field of view in only 4 rows (roughly 100 days) and Venus and Mercury make repeat visits. Pluto wavers back and forth without appearing to make much leftward progress at all.



Why do the planets move along the same line? They don’t exactly, but it’s pretty close. All of the planets, including the Earth, move around the Sun in roughly circular orbits. Except for Pluto’s, these orbits are more or less in the same plane (like circular stripes on a dinner plate). Because our viewpoint (the Earth) is in this plane, we look at all the orbits edge on, and the planets appear to follow very similar straightish paths across the sky. I have chosen to neglect the slight variations in path and depict the planets as following one another along exactly the same straight line.

Why do Jupiter and Saturn move mainly right to left? Looking down from the North, all of the planets orbit anticlockwise. Mars, Jupiter, and Saturn have bigger orbits than the Earth’s, so we’re observing them from inside their orbits (and from the Earth’s northern hemisphere). Thus their general movement is leftwards. (If you don’t get it, whirl a conker around your head on a string, so that it moves anticlockwise for someone looking down. The conker will move leftwards from your point of view.) The orbits of Venus and Mercury are inside the Earth’s orbit; their movements as seen from the Earth are rather complicated.

Why do Jupiter, Saturn, and Pluto sometimes move from left to right? Earth is in orbit too, so we’re observing the planets from a moving viewpoint. If you move your head from side to side, nearby objects appear to move back and forth against the background of distant objects. Exactly the same effect happens with our view of the outer planets as the Earth moves around its orbit from one side of the Sun to the other – they appear to move back and forth once a year against the background of distant stars. But at the same time, they are also really moving leftwards (as we look at them). The sum of the planets’ real motion with their apparent back-and-forth motion gives the lurching movement that we see: mainly leftwards but with episodes of rightward motion. Note that the planets never actually move backwards: they just appear to. The same thing happens to Mars, but none of its periods of retrograde motion coincided with its visit to our strip of the sky.

Why do some planets move faster across the sky than others? The larger a planet’s orbit, the more slowly it moves. For the outer planets, a larger orbit also means that we’re watching it from a greater distance, so it appears to move more slowly still. Saturn’s orbit is about twice as big as Jupiter’s, so it moves more slowly across the sky than Jupiter. Jupiter “laps” Saturn about once every 20 years: these are the Great Conjunctions. Mars’ orbit is smaller than Jupiter’s, so it moves more quickly across the sky. Meanwhile lonely Pluto plods around its enormous orbit so slowly that the leftward trend of its motion is barely discernible; all we see is the side-to-side wobble caused by our own moving viewpoint. As for Mercury and Venus: it’s complicated.
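The “about once every 20 years” figure follows from the two orbital periods (standard values, roughly 11.9 and 29.5 years; they aren’t given in the post) via the synodic period formula:

```python
# Synodic period: how often Jupiter "laps" Saturn
T_JUPITER = 11.86  # orbital period in years
T_SATURN = 29.46

synodic_years = 1 / (1 / T_JUPITER - 1 / T_SATURN)
print(round(synodic_years, 1))  # 19.9 years between Great Conjunctions
```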

Please could you stop being evasive about the movements of Venus and Mercury? It really is complicated. The orbits of Venus and Mercury are smaller than the Earth’s: we observe them from the outside. If the Earth was stationary, we’d see Venus and Mercury moving back and forth from one side of the Sun to the other. Returning to our conker-whirling experiment, it’s like watching a conker being whirled by somebody else rather than whirling it yourself. But the Earth is moving around its orbit too. And then Venus and Mercury are also moving rather fast: Mercury orbits the Sun 4 times for each single orbit made by the Earth. Combine all of these things and it becomes very confusing. Whereas the outer planets’ episodes of retrograde (backwards) movement across the sky occur less than once a year, Mercury is retrograde about three times a year.

Do the planets really follow a horizontal path across the sky? This question doesn’t have an answer. We’re using the pattern of stars, all inconceivably distant compared to the planets, as the fixed background against which we view the movement of the planets. You may have noticed that the stars move in arcs across the sky during the night; this is due to the Earth’s rotation on its axis. So our strip of sky moves in an arc too, and turns as it moves. So if it ever is horizontal, it is only briefly so, and when and if it is ever horizontal will depend upon your latitude.

Jupiter and Saturn never exactly lined up, did they? No, they didn’t (see the answer to the first question). On this scale, at the Great Conjunction the discs representing Jupiter and Saturn should be misaligned vertically by about a millimetre. With our hugely over-sized planets, this means almost total overlap, which still misrepresents the actual event, where the planets were separated by many times their own diameter. And for all other rows, where the two discs don’t overlap, a millimetre’s misalignment would be imperceptible. A final and maybe more compelling reason for my neglect of the misalignment of the planets’ paths is that I don’t know how to calculate it.

Anything else to confess? Yes. There’s a major element of fiction about the piece in that it’s not physically possible to see all of these arrangements of the planets. The reason is that for some of these snapshots, the Earth is on the opposite side of the Sun from most or all of the planets, and the Sun’s light would drown out the light from the planets. In other words, it would be daytime when the planets are above the horizon, and therefore in practice they would be invisible. This was almost the case for the Great Conjunction, where there was only a short period of time between it becoming dark enough for Jupiter and Saturn to be visible, and them disappearing over the horizon.

A further element of fiction is that, even in the depths of a Scottish winter’s night, Pluto is far too faint to be seen with the naked eye, not to mention not being regarded by the authorities as a planet any more. But it was passing at the time of the Great Conjunction and it seemed a pity to miss it out.


Why hillwalkers should love the Comte de Buffon

Part of Beinn a Bhuird, Cairngorms, Scotland.

Wandering the mountains of the UK has been a big part of my life. You won’t be surprised that before I start a long walk I like to know roughly what I’m letting myself in for. One part of this is estimating how far I’ll be walking.

Several decades ago my fellow student and hillwalking friend David told me of a quick and simple way to estimate the length of a walk. It uses the grid of kilometre squares that is printed on the Ordnance Survey maps that UK hillwalkers use.

To estimate the length of the walk, count the number of grid lines that the route crosses, and divide by two. This gives you an estimate of the length of the walk in miles.

Yes, miles, even though the grid lines are spaced at kilometre intervals. On the right you can see a made-up example. The route crosses 22 grid lines, so we estimate its length as 11 miles.

Is this rule practically useful? Clearly, the longer a walk, the more grid lines the route is likely to cross, so counting grid lines will definitely give us some kind of measure of how long the walk is. But how good is this measure, and why do we get an estimate in miles when the grid lines are a kilometre apart?

I’ve investigated the maths involved, and here are the headlines. They are valid for walks of typical wiggliness; the rule isn’t reliable for a walk that unswervingly follows a single compass bearing.

  • On average, the estimated distance is close to the actual distance: in the long run the rule overestimates the lengths of walks by 2.4%.
  • There is, of course, some random variation from walk to walk. For walks of about 10 miles, about two-thirds of the time the estimated length of the walk will be within 7% of the actual distance.
  • The rule works because 1 mile just happens to be very close to \frac{\pi}{2} kilometres.

The long-run overestimation of 2.4% is tiny: it amounts to only a quarter-mile in a 10-mile walk. The variability is more serious: about a third of the time the estimate will be more than 7% out. But other imponderables (such as the nature of the ground, or getting lost) will have a bigger effect than this on the effort or time needed to complete a walk, so I don’t think it’s a big deal.
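The bias figure can be checked in two lines: the rule would be exact if a mile were π/2 km, so the long-run ratio of estimated to actual distance is simply 1.609 divided by π/2. A quick Python check:

```python
import math

mile_km = 1.609                  # kilometres in a mile
ideal_mile_km = math.pi / 2      # the rule would be exact if a mile were pi/2 km

# Long-run ratio of estimated distance to actual distance:
bias = mile_km / ideal_mile_km   # ≈ 1.024
print(f"long-run overestimate: {100 * (bias - 1):.1f}%")
```

This prints the 2.4% quoted above.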

In conclusion, for a rule that is so quick and easy to use, this rule is good enough for me. Which is just as well, because I’ve been using it unquestioningly for the past 35 years.

And the Comte de Buffon?

The Comte de Buffon

Georges-Louis Leclerc, Comte de Buffon (1707-1788) was a French intellectual with no recorded interest in hillwalking. But he did consider the following problem, known as Buffon’s Needle:

Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips?

Let’s recast that question a little and ask: if the lines are spaced 1 unit apart, and we drop the needle many times, what’s the average number of lines crossed by the dropped needle? It turns out that it is

\begin{equation*}   \frac{2l}{\pi} \end{equation*}
where l is the length of the needle. Now add another set of lines at right angles (as if the floor were made of square blocks rather than strips). The average number of lines crossed by the dropped needle doubles to

\begin{equation*}   \frac{4l}{\pi} \end{equation*}
Can you see the connection with the distance-estimating rule? The cracks in the floor become the grid lines, and the needle becomes a segment of the walk. A straight segment of a walk will cross, on average, \frac{4}{\pi} grid lines per kilometre of its length. Now a mile is 1.609 kilometres, so a segment of the walk will, on average, cross \frac{4}{\pi} \times 1.609 = 2.0486... grid lines per mile, which is very close to 2 grid lines per mile, as our rule assumes. If a mile were \frac{\pi}{2} km (1.570… km), we’d average exactly 2 grid lines per mile.
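The 4/π figure is easy to verify by simulation. The sketch below (my own Monte Carlo experiment, not part of Buffon’s derivation) drops a “needle” of length l at a random position and orientation on a unit square grid and counts the lines it crosses:

```python
import math
import random

def mean_grid_crossings(l, trials=200_000, seed=1):
    """Average number of unit-grid lines crossed by a randomly
    thrown needle of length l. Buffon predicts 4 * l / pi."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Random position of one end, random orientation.
        x, y = rng.random(), rng.random()
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x2 = x + l * math.cos(theta)
        y2 = y + l * math.sin(theta)
        # Vertical lines are crossed when the integer part of x
        # changes; horizontal lines likewise for y.
        total += abs(math.floor(x2) - math.floor(x))
        total += abs(math.floor(y2) - math.floor(y))
    return total / trials

print(mean_grid_crossings(1.0))   # close to 4/pi ≈ 1.273
```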

So the fact that using a kilometre grid gives us a close measure of distance in miles is just good luck. It’s because a mile is very close to \frac{\pi}{2} kilometres.

In a future post, I’ll explore the maths further. We’ll see where the results above come from, and look in more detail at walk-to-walk variability. We’ll also see why results that apply to straight lines also apply to curved lines (like walks), and in doing so discover that not only did the Comte de Buffon have a needle, he also had a noodle.

Mathematical typesetting by QuickLaTeX.


Decisions of cricket umpires

In this post I offer a suggestion for a practically imperceptible change to the laws of cricket that might eliminate controversies to do with adjudications by match officials. The suggestion could apply to any other sport, so even if you aren’t a cricket lover, please read on.

My suggestion doesn’t affect the way the game is played in the slightest. It simply takes a more realistic philosophical angle on umpires’ decisions.

Cricket is a bat-and-ball ‘hitting and running’ game in the same family as baseball and rounders. In these games, each player on the side that is batting can carry on playing (and potentially scoring) until they are “out” as a result of certain events happening. For example, in all of these games, if a batsman* hits the ball and the ball is caught by a member of the other team before it hits the ground, the batsman is out.

In cricket, there are several ways that a batsman can be out. Some of these need no adjudication (eg bowled), but most require the umpire to judge whether the conditions for that mode of dismissal have been met. In the case of a catch, for example, the umpire must decide whether the ball has hit the bat, and whether it was caught before touching the ground. Contact with the bat is most often the source of contention, because catches are often made after the ball has only lightly grazed the edge of the bat.

The umpire’s position is unenviable. They have to make a decision on the basis of a single, real-time view of events, and their decisions matter a great deal. The outcome of a whole match (and with it, possibly the course of players’ careers) can hinge on one decision. It’s not surprising that umpires’ decisions are the cause of much controversy.

For most of the history of cricket, the on-field umpire’s judgement has been the sole basis for deciding whether a batsman is out. This is still true today for nearly all games of cricket, but at the highest levels of the game, an off-field umpire operates, using slow motion video, computer ball-tracking, and audio (to hear subtle contact of the ball with the bat). The on-field umpires (of which there are two) can refer a decision to the off-field umpire, and the players have limited opportunities to appeal against the decisions of the on-field umpires. From now on we’ll call the off-field umpire the “3rd umpire”, as is commonly done.

One of the intentions behind all of this was to relieve the pressure on the on-field umpires, but it appears that the opposite has been the case. In a recent Test Match between England and Australia, one of the umpires had 8 of his decisions overturned on appeal to the 3rd umpire. This led to much criticism and must have been excruciating for him.

Here’s a suggestion for a small modification to the laws of cricket that wouldn’t change the course of any match that didn’t have a 3rd umpire, but which would put the on-field umpires back in charge and relieve much of the pressure on them. As a bonus, it would settle another thorny issue in the game – whether batsmen should “walk” or not (see later).

The suggestion

I’ll use the judgement “did the ball touch the bat?” as an example, but the same principle applies to any judgement of events in the game. We’ll assume that the ball was clearly caught by a fielder, so that contact with the bat is the only matter at issue.

There are three elements to an umpire’s decision: the physical events, the umpire’s perception of those events, and the decision based on that perception. We can represent these elements in a diagram:

For our specific example, the diagram looks like this:

Because our perceptual systems are imperfect, the umpire’s perception of events doesn’t necessarily correspond to the actual course of events. They may perceive that the ball has hit the bat when it hasn’t, or vice versa. This source of error is represented by linking the left-hand boxes by a dashed arrow.

On the other hand, the umpire has perfect access to their own perceptions, so the final decision (out/not out) follows inevitably from those perceptions (provided that the umpire is honest). This inevitable relationship is represented by linking the right-hand boxes by a solid line.

Now, at present, the law is specified in terms of the physical events that occurred. This means that, because the umpire’s perception is imperfect, the umpire can make an incorrect decision: one that is not in accord with those physical events.

However, in any match without a 3rd umpire (ie practically all cricket) the umpire is the sole arbiter of whether a batsman is out or not. So regardless of the actual laws, the de facto condition for whether a batsman is out is the umpire’s perceptions, not the physical events, like this:

My suggestion is simply to be honest about this state of affairs and enshrine it in the laws.

Thus, the relevant part of the law, instead of reading (as it does at present):

…if [the] … ball … touches his bat…

would read

…if the ball appears to the umpire to touch the bat (regardless of whether it did actually touch the bat)…

This may seem like a strange way to word the law, but it’s just codifying what happens anyway in nearly all cricket. The course of all cricket matches that don’t have 3rd umpires, past, present, and future, would be entirely unchanged. We’d be playing exactly the same game. The only difference is that all umpires’ decisions would, by law, be correct, and so the pressure on them would be removed.

The other main advantage of my proposal is that it would render 3rd umpires and all their technology irrelevant, and we could get on with the game instead of waiting through endless appeals and reviews. Cricket would once again accord with the principle that a good game is one that can be played satisfactorily at all levels with the same equipment. And the status of the umpires would be restored to that of arbiters of everything, rather than being in danger of being relegated to mere ball-counters and cloakroom attendants.

The opposition

I have to confess that no-one I’ve spoken to thinks that this is a good idea. There seem to be two counterarguments. The first is somewhat vague – that there’s something a bit airy-fairy about casting the law in terms of events in someone’s brain rather than what actually happened to balls and bats. I might agree with this argument if my proposal actually changed the decisions that umpires make, but it doesn’t – the only things that change are the newspaper reports and the mental health of umpires.

The second counterargument is more substantial. Under my proposal, even an umpire with spectacularly deficient vision could never make an incorrect decision. Likewise, a corrupt umpire would have a field day (so to speak). Yet quite clearly, we do only want to employ umpires whose decisions are generally “accurate”, in the sense that they reflect what actually happened. My proposal is quite consistent with maintaining high umpiring standards. At the beginning of any match, we appoint umpires, and by doing so we define their decisions to be correct for that match. That doesn’t stop us later (say, at the end of the season) reviewing their decisions en masse and offering training (or unemployment) if the decisions appear to consistently misrepresent what actually happened. Again, this is roughly what actually happens at the moment. Players (usually) accept the umpire’s decision as it comes, but at the end of the game, the captains report on the standard of umpiring. All I’m doing is changing the way we regard the individual decisions.

To walk or not to walk?

My proposal eliminates another controversy in the game: what does a batsman do if they know that the ball has touched their bat and been caught, but the umpire doesn’t see the contact and gives them “not out”?

Some people say that the batsman should “walk” – that is, give themself “out” and head for the pavilion. Others say that the batsman should take every umpire’s decision as it comes, never “walking”, but also departing without dissent if they have been wrongly given “out”. It is possible to make a consistent and principled argument for either position.

With my version of the laws, all of this argument vanishes. Only one position is now valid: batsmen should never “walk”. A batsman may feel the ball brush the edge of their bat on its way to the wicket-keeper’s gloves, but if the umpire perceives that no contact occurred, it is not a mistake – the batsman is purely and simply not out under the law.


* Batsman or batter?

In recent years the term batter has come into use alongside batsman, in some cases as a conscious effort to use a gender-neutral term. It’s interesting to note that the women’s cricket community doesn’t seem to be particularly enthusiastic about batter (nor indeed batswoman) and there seems to be a long-standing preference for batsman. See, for example, this blog post, which explores the history of the matter a little. Note also that since 2017 the Laws of Cricket have been written in a gender-neutral style, using he/she and his/her throughout, but nevertheless retain batsman. My understanding is that this has been done in consultation with the women’s cricket community.



Here again is the processor package from my old laptop. The processor has a clock in it that delivers electric pulses that trigger the events in the processor. The clock on this processor “ticks” at 2.2 gigahertz, that is, it sends out 2.2 billion pulses per second.

Over two thousand million pulses every second! How can we make sense of such a huge number?

In this post, I’m going to do with time what I did with space in the previous post. I’m going to ask the question:

Suppose that we slow down the processor so that you could just hear the individual “ticks” of the processor clock (if we were to connect it to a loudspeaker), and suppose that we slow down my bodily processes by the same amount. How often would you hear my heart beat?

Answer: My heart would beat about once every year and a half.

The calculation

How slow would the processor clock need to tick for me to be able to hear the individual ticks? A sequence of clicks at the rate of 10 per second clearly sounds like a series of separate clicks. Raise the frequency to 100 per second, and it sounds like a rather harsh tone; the clicks have lost their individual identity. Along the way, the change from sounding like a click-sequence to sounding like a tone is rather gradual; there’s no clear cutoff.

You can try it yourself using this online tone generator. Choose the “sawtooth” waveform, which delivers a sharp transition once per cycle (roughly what a train of very short clicks would do), and play around with the value in the “hertz” box. (Hertz is the unit of frequency; for example, 20 hertz is 20 cycles per second.)

I found that a 40 hertz sawtooth definitely sounds like a series of pulses, and that a 60 hertz sawtooth has a distinct tone-like quality. So let’s say that the critical frequency is 50 hertz, that is, 50 ticks per second. I don’t expect you to agree with me exactly.

If I can hear individual pulses at a repetition rate of 50 hertz, then to hear the ticks of a 2.2 gigahertz clock I need to slow down the clock by a factor of

(1)   \begin{equation*}   \frac{2.2 \times 10^9}{50} = 44 \times 10^6 \end{equation*}

At rest, my heart beats about once per second, so if it were slowed down by the same factor as the processor clock, it would beat every 44 × 10⁶ seconds, which is about every 17 months.
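The whole slow-down calculation fits in a few lines of Python (the figures are those used in the post; the 50 hertz threshold is the rough judgement made above, so adjust it to taste):

```python
clock_hz = 2.2e9      # processor clock: 2.2 gigahertz
audible_hz = 50       # roughly where clicks stop blurring into a tone

slowdown = clock_hz / audible_hz          # 44 million

heart_period_s = 1.0                      # resting heart: about 1 beat per second
slowed_period_s = heart_period_s * slowdown

seconds_per_month = 365.25 * 24 * 3600 / 12
print(f"slow-down factor: {slowdown:.2e}")
print(f"one heartbeat every {slowed_period_s / seconds_per_month:.0f} months")
```

The second line of output is the 17 months (about a year and a half) quoted above.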

Or should it be twice as long?

The signal from the processor clock is usually a square wave with 50% duty cycle. Try the square wave option on the online signal generator with a 1 hertz frequency (one cycle per second). You’ll hear two clicks per second, because in each cycle of the wave, there are two abrupt transitions, a rising one and a falling one.

This means that if we did connect a suitably slowed-down processor clock to a loudspeaker, we’d hear clicks at twice the nominal clock rate. Looked at this way, we’d need to slow down the clock, and my heart, twice as much as we’ve calculated above. My heart would beat once every three years.

However, most processors don’t respond to both transitions of the clock signal. Some processors respond to the rising transition, others to the falling transition. To assume that we hear both of these transitions is to lose the spirit of what we mean by one “tick” of the processor clock.



Making the micro macro

What is this strange collection of pillars, one of which is propping me up? Read on to find out. Many thanks to Graham Rose for the illustration.


On the right is the processor package from my old laptop. The numbers associated with microelectronic devices like this one are beyond comprehension. The actual processor – the grey rectangle in the middle – measures only 11 mm by 13 mm and yet, according to the manufacturer, it contains 291 million transistors. That’s about 2 million transistors per square millimetre.

To try to bring these numbers within my comprehension, I asked the following question:

If I were to magnify the processor – the grey rectangle – so that I could just make out the features on its active surface with my unaided eye, how big would it be?

The answer is that the processor would be something like 15 metres across.

Consider that for a moment: an area slightly larger than a singles tennis court, packed with detail so fine that you can only just make it out.

The package that the processor is part of would be over 50 metres across, and the pins on the back of the package (right) would be 3 metres tall, half a metre thick, and about 2 metres apart.


Caveat

The result above is rather approximate, as you’ll see if you read the details of the calculation below. However, if it inadvertently overstates the case for my processor, which is 10 years old, the error is made irrelevant by progress in microprocessor fabrication. Processors are available today that are similar in physical size but on which the features are nearly 5 times smaller. If my processor had that density of features, the magnified version would be around 70 metres across, on a package 225 metres across. And those pins would be 13 metres tall and 2.25 metres thick.

The calculation

The processor is an Intel T7500. According to the manufacturer, the chip is made by the 65-nanometre process. Exactly what this means in terms of the size of the features on the chip is quite hard to pin down. Printed line widths can be as low as 25 nm, but the pitch of successive lines may be greater than the 130 nm that you might expect. I’ve assumed that the lines on the chip and the gaps between them are all 65 nm across.

“The finest detail that we can make out” isn’t well defined either. It depends, among other things, on the contrast. But roughly, the unaided human visual system can resolve details as small as 1 minute of arc subtended at the eye in conditions of high contrast. This is about 3 × 10⁻⁴ radians. At a comfortable viewing distance of 30 cm, this corresponds to 0.09 mm.

So to make the features on the processor just visible (taking high contrast for granted) we need to magnify them from 65 nm to 0.09 mm, which is a magnification factor of 1385.

Applying this magnification factor to the whole processor, its dimensions of 11 by 13 millimetres become 15 by 18 metres. The pins are 2 mm high, so they become 2.8 metres high and about half a metre thick.

Some processors are now made using 14 nm technology. This increases the required magnification factor by a factor of 65/14, to 6430, yielding the results given in Caveat above.
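The chain of arithmetic can be reproduced in a few lines. This sketch uses the numbers as given in the post (the 0.09 mm figure is the rounded 1-arcminute-at-30-cm value from above):

```python
resolvable_m = 0.09e-3   # just-resolvable detail at 30 cm, about 1 arcminute
feature_m = 65e-9        # assumed feature size for the 65 nm process

mag = resolvable_m / feature_m        # ≈ 1385
die_m = (11e-3, 13e-3)                # processor die dimensions
pin_m = 2e-3                          # pin height

print(f"magnification: {mag:.0f}")
print(f"die: {die_m[0] * mag:.0f} m x {die_m[1] * mag:.0f} m")
print(f"pins: {pin_m * mag:.1f} m tall")

# Repeating for a modern 14 nm process:
mag_14 = resolvable_m / 14e-9         # ≈ 6430
```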



Anticrepuscular rays

Converging rays

I took this photograph at dusk recently from the beach at Portobello, where Edinburgh meets the sea. As sunset pictures go, it’s not much to look at. But what caught my attention was the faint radiating pattern of light and dark in the sky.  The light areas are where the sun’s rays are illuminating suspended particles in the air. The dark areas are where the air is unlit, because a cloud is casting a shadow.  You may have seen similar crepuscular rays when the sun has disappeared behind the skyline and the landscape features on the skyline cast shadows in the air.

The rays in my picture appear to radiate from a point below the horizon, because that’s where the sun is…isn’t it?

No! Portobello beach faces north-east, not west. The sun is actually just about to set behind me! So why do the rays appear to come from a point in front of me? Shouldn’t they appear to diverge from the unseen sun behind me?

To understand why, we need to realise that the rays aren’t really diverging at all. The Sun is a very long way away (about 150 million kilometres), so its rays are to all intents and purposes parallel. But just as a pair of parallel railway tracks appear to diverge from a point in the distance, so the parallel rays of light appear to diverge from a point near the horizon.

The point from which the rays seem to diverge is the antisolar point, the point in the sky exactly opposite the sun, from my point of view. It’s where the shadow of my head would be. When I took the photograph, the sun was just above the horizon in the sky behind me, so the antisolar point, and hence the point of apparent divergence, is just below the horizon in the sky ahead of me.
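The railway-track effect drops straight out of perspective projection. In the sketch below (my own illustration, not anything from the post), parallel 3D rays sharing one direction all project toward a single image point – the vanishing point – no matter where each ray starts:

```python
def project(x, y, z):
    # Simple pinhole camera: a scene point (x, y, z) lands at (x/z, y/z)
    # on the image plane.
    return (x / z, y / z)

direction = (0.2, -0.1, 1.0)                 # common direction of the parallel rays
origins = [(-3, 2, 1), (4, 0, 1), (0, -5, 2)]  # arbitrary starting points

for ox, oy, oz in origins:
    # Take a point far along each ray: origin + t * direction for large t.
    t = 1e6
    px, py = project(ox + t * direction[0],
                     oy + t * direction[1],
                     oz + t * direction[2])
    # All three converge on (0.2, -0.1), the vanishing point.
    print(px, py)
```

For sun rays travelling away from the observer, that vanishing point is the antisolar point.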

For normal crepuscular rays, the (obscured) sun is ahead, and the light is travelling generally* towards the observer. The rays in the picture are anticrepuscular rays, because the light is generally travelling away from me. This was the first time that I had knowingly seen anticrepuscular rays.

*I say “generally” because almost all of the rays aren’t travelling directly towards the observer. An analogy would be standing on a railway station platform as a train approaches: you’d say that it was travelling generally towards you even though it isn’t actually going to hit you.