Applied numerology

Why astronomers are natural translators

Our astronomer offers an apology and an explanation.

Our astronomer writes:

In dealing with the question of how cold space is, I found myself using three different temperature scales and not always providing conversions between them.  I’m afraid I was being impolite, or at least careless.  While I’ve grown quite used to using Fahrenheit, Celsius and Kelvin whenever each seems convenient, I should have remembered that not everyone does.  It may have seemed arrogant, and I apologize.
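
For anyone who wants the conversions spelled out, here is a small sketch (Python, purely illustrative; the formulas themselves are the standard ones), applied to the 2.725-kelvin cosmic microwave background, which is about as cold as space gets:

```python
def fahrenheit_to_celsius(f):
    """Degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

def celsius_to_kelvin(c):
    """Degrees Celsius to kelvins (the absolute scale)."""
    return c + 273.15

def fahrenheit_to_kelvin(f):
    """Degrees Fahrenheit to kelvins, via Celsius."""
    return celsius_to_kelvin(fahrenheit_to_celsius(f))

# The cosmic microwave background, the baseline temperature of deep space:
cmb = 2.725                          # kelvins
print(cmb - 273.15)                  # about -270.4 degrees Celsius
print((cmb - 273.15) * 9 / 5 + 32)   # about -454.8 degrees Fahrenheit
```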

In fact it’s normal to be familiar with either F or C, but to need the other converted before it’s clear how hot or cold we’re talking about.  Similarly, people will work in either miles or kilometers.  It’s not a matter of how technically trained one is; our navigator mentions a class of highly competent marine engineers who looked on metric units with incomprehension and suspicion.  (The navigator, of course, uses nautical miles exclusively.)

Astronomers seem to be different.  Partly it’s because they work on such vastly different scales that using the same units throughout would be awkward.  For describing a star or a planet, thousands of kilometers are convenient; but for distances around the Solar System, the Astronomical Unit (the mean distance between the Sun and the Earth) is better.  And when you get out between the stars, parsecs or light-years fit more easily.  I incline toward parsecs, since they are directly related to angles as measured on the sky (a parsec is the distance at which the Earth-Sun distance spans one second of arc), but most people (especially science fiction authors) find it easier to picture light speeding along for a given length of time.

[There are just about as many light-years in a parsec as there are feet in a meter.  And pi squared is pretty close to 10.  You may find those facts useful sometime.]
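
If you’d like to check those coincidences numerically, the standard published constants make it quick (again a Python sketch, just for illustration):

```python
import math

# Standard conversion constants:
KM_PER_AU = 1.495978707e8        # kilometers in one Astronomical Unit
KM_PER_LIGHT_YEAR = 9.4607e12    # kilometers in one light-year
LIGHT_YEARS_PER_PARSEC = 3.26156
FEET_PER_METER = 3.28084

# The two coincidences from the bracketed aside:
print(LIGHT_YEARS_PER_PARSEC, FEET_PER_METER)   # 3.26 vs. 3.28
print(math.pi ** 2)                             # 9.8696..., close to 10
```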

But mostly astronomers shift between measurements for historical reasons.  I’ll give two examples.

For many years the proportions of the Solar System were very well known, but the scale was not.  That is, the orbit of each planet could be expressed precisely as some fraction or multiple of the size of Earth’s, but the distance from the Earth to the Sun in miles or kilometers was uncertain.  It made sense to define that distance as a unit and work out its absolute size later.

Much more confusing is the astronomer’s way of measuring a star’s brightness.  It was the Greek Hipparchus, as far as we know, who first made up a catalog of stars and ranked them by how bright they looked.  The brightest were the first magnitude, those clearly fainter the second magnitude, fainter still the third, on down to the sixth, the faintest he could see.  It’s the same sort of system that gives us A-list, B-list and C-list celebrities.

When astronomers came to put star brightness on a quantitative basis (with instruments and all that) they found that a first-magnitude star was generally about a hundred times as bright as a sixth-magnitude star, and each magnitude was about two-and-a-half times as faint as the one before.  Instead of throwing out the old system (and the historical records referring to it) they refined it, making it an exactly logarithmic progression, so that everyone knew what a “magnitude 2.51 star” meant.  But note that a bigger number means a fainter star!
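
In its modern form this is Pogson’s rule: five magnitudes is defined as exactly a factor of 100 in brightness, so one magnitude is a ratio of 100 to the one-fifth power, about 2.512.  A small sketch of the arithmetic (Python, illustrative only):

```python
# Pogson's rule: five magnitudes is defined as exactly a factor of 100
# in brightness, so one magnitude is a ratio of 100**(1/5).
ONE_MAGNITUDE = 100 ** (1 / 5)   # about 2.512

def brightness_ratio(fainter_mag, brighter_mag):
    """How many times brighter the lower-magnitude star is.
    Note the reversal: a bigger magnitude means a fainter star."""
    return 100 ** ((fainter_mag - brighter_mag) / 5)

print(ONE_MAGNITUDE)               # 2.5118...
print(brightness_ratio(6, 1))      # 100.0: first vs. sixth magnitude
print(brightness_ratio(2.51, 0))   # ~10.1: the "magnitude 2.51 star" is
                                   # about a tenth as bright as magnitude 0
```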

After WWII came radio astronomy.  Radio astronomers had learned their work developing radar during the war, not in telescope domes, and when they measured the brightness of things in the sky they did it in sensible units.  The “Jansky” is built directly from watts, square meters and radio frequency (units used by all good physicists); and more Janskys mean a brighter object.

But the radio astronomers did not convert the optical astronomers to their sensible units; nor did the optical astronomers force a “radio magnitude” scale on the newcomers.  Instead, astronomers exerted their talent for using two scales at once.  And for making the subject more complicated to learn.
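
The two scales do meet, if you want them to.  One Jansky is 10^-26 watts per square meter per hertz, and the AB magnitude convention (one of several magnitude systems in optical use, my choice here purely for illustration) puts its zero point at 3631 Janskys, so the conversion is a single logarithm:

```python
import math

JANSKY = 1e-26          # watts per square meter per hertz
AB_ZERO_POINT = 3631.0  # Janskys: the flux density of a magnitude-zero
                        # source in the AB magnitude convention

def jansky_to_ab_mag(flux_jy):
    """Flux density in Janskys to an AB magnitude."""
    return -2.5 * math.log10(flux_jy / AB_ZERO_POINT)

print(jansky_to_ab_mag(3631.0))  # 0.0
print(jansky_to_ab_mag(1.0))     # about 8.9: one Jansky is roughly
                                 # ninth magnitude
```

Note the sign flip in that formula: more Janskys, smaller magnitude.  That minus sign is the whole inversion between the two scales in a single character.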
