It’s reasonable, it makes sense, but it’s wrong
Our astronomer describes how an assumption once widely made in his science turned out to be completely erroneous.
Our astronomer writes:
It seems so obvious as to be hardly worth saying: fainter stars must be, on the average, farther away than brighter ones. It should follow directly from the physical fact that light spreads out as it moves away from its source (according to the familiar inverse-square law). And since actually measuring the distance to a star was not a quick or easy thing to do, astronomers studying stars in large numbers in the old days had to make assumptions like this one. Even if stars vary a lot in brightness intrinsically, by taking large averages we should be safe. Statistics come to our rescue in many situations that look impossibly difficult at first glance.
So in nineteenth-century astronomy books one can find calculations based on the “fact” that a ninth-magnitude star must be, on the average, a certain factor farther away than an average eighth-magnitude star (remember, astronomers use larger numbers for fainter objects). But if we look at a twentieth-century list of the 25 nearest stars and another of the 25 brightest stars, there is very little overlap. Most of the nearest stars cannot even be seen without a telescope! Something has gone wrong here.
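Before looking at what went wrong, it helps to spell out what the naive arithmetic predicts. Here is a minimal sketch using only the standard magnitude scale (five magnitudes is a factor of 100 in brightness); none of it is taken from any particular nineteenth-century source. One magnitude fainter means about 2.512 times dimmer, and the inverse-square law turns that into a distance factor of about 1.58 for stars of the same intrinsic brightness.

    import math

    # One magnitude step corresponds to a brightness factor of 100**(1/5), about 2.512.
    brightness_factor = 100 ** (1 / 5)

    # The inverse-square law says apparent brightness falls off as 1/distance**2,
    # so a star of the same intrinsic brightness that looks one magnitude fainter
    # must be farther away by the square root of that factor.
    distance_factor = math.sqrt(brightness_factor)   # about 1.585

    print(f"one magnitude fainter -> {distance_factor:.2f} times farther away")
    print(f"for example: 10 light-years becomes {10 * distance_factor:.0f} light-years")

That factor of roughly 1.6 per magnitude is the average step the old calculations relied on, and it is where the “about 16 light-years” in the next paragraph comes from.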
The problem lies in the fact that, for every intrinsically bright star, there are very, very many intrinsically faint ones. How this works is a bit subtle. Suppose we have an eighth-magnitude star at a certain distance, say 10 light-years. Stars of the same intrinsic brightness that we see as ninth magnitude will indeed be farther away (about 16 light-years). But there are far more stars of intrinsically fainter types that appear as ninth magnitude at only 10 light-years, and they drag the average down: the average distance of a ninth-magnitude star ends up much closer to 10 light-years than to 16. In fact, if the increase in star population with intrinsic faintness is fast enough, the average distance for the fainter stars could actually be smaller!
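To see how the averages can misbehave, here is a small toy simulation; it is my own sketch, not anything from the historical calculations. It scatters two made-up kinds of star uniformly through a sphere of fixed radius: a rare, intrinsically luminous kind that only reaches eighth magnitude near the edge of that sphere, and a common, intrinsically faint kind that reaches eighth magnitude at about 10 light-years. The fixed radius matters: it stands in for the fact that a real stellar system has a finite extent, so the rare bright stars eventually run out of room. All of the particular numbers (the 100-light-year radius, the 300-to-1 population ratio, the two reference distances) are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stellar neighborhood: stars scattered uniformly inside a sphere.
    RADIUS = 100.0            # light-years (arbitrary cutoff for the toy model)
    N_LUMINOUS = 10_000       # rare, intrinsically bright stars
    N_FAINT = 3_000_000       # common, intrinsically faint stars (300 times as many)

    # Distance at which each kind of star would appear as an eighth-magnitude star.
    D8_LUMINOUS = 80.0        # light-years: bright kind reaches m = 8 near the edge
    D8_FAINT = 10.0           # light-years: faint kind reaches m = 8 only when nearby

    def random_distances(n):
        """Distances of n stars spread uniformly through a sphere of radius RADIUS."""
        return RADIUS * rng.random(n) ** (1.0 / 3.0)

    def apparent_magnitude(distance, d_eighth):
        """Inverse-square law in magnitude form: +5 magnitudes per factor of 10 in distance."""
        return 8.0 + 5.0 * np.log10(distance / d_eighth)

    distances = np.concatenate([random_distances(N_LUMINOUS), random_distances(N_FAINT)])
    magnitudes = np.concatenate([
        apparent_magnitude(distances[:N_LUMINOUS], D8_LUMINOUS),
        apparent_magnitude(distances[N_LUMINOUS:], D8_FAINT),
    ])

    for mag in (8, 9):
        in_bin = np.abs(magnitudes - mag) < 0.5
        print(f"stars near magnitude {mag}: mean distance "
              f"{distances[in_bin].mean():5.1f} light-years ({in_bin.sum()} stars)")

With these invented numbers the ninth-magnitude sample comes out several times closer, on average, than the eighth-magnitude one: the eighth-magnitude bin is still full of rare luminous stars seen from far across the sphere, while the ninth-magnitude bin contains only the common faint stars a few light-years away.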
Well, in one sense the assumption was an act of desperation anyway. It allowed astronomers to do something when they were missing an essential piece of information: the actual distances to the stars. Nowadays we’re more aware of how statistics can be deceiving, and you’ll hear astronomers talk about things like “Malmquist bias”, the modern name for this kind of selection effect.
But I still find it troubling that an assumption so obvious, so apparently grounded in good physics, turned out to be so completely wrong. Could we be doing something similar now?
I will leave you with one outstanding example of how fainter does not mean farther in astronomy. The farthest object you can see with your naked eye is the Andromeda galaxy. You’ll need to get away from city lights, which is not so easy these days, but if you do, it’s not hard to see. That fuzzy glow is over two million light-years away.
The nearest star apart from the Sun is (probably) Proxima Centauri. (It’s part of a triple star system, and current measurements place it slightly closer to us than the other two stars.) It’s a little over four light-years away. You cannot see it at all without a telescope.