J. J. Lodder
2021-09-01 19:27:07 UTC
The original discussion on the question has become muddled by side
paths, so I'll start again.
The assertion was that you can at least in principle use laboratory
measurements of the speed of light to see if it varies.
To see that you can't, you need to have at least a vague idea
of how such measurements are done.
A) you build a stable light source.
B) you set up a fixed resonator for it to create a standing wave.
C) using the tricks of the trade you determine
how many wavelengths there are in it.
D) idem, and far more difficult, you measure the frequency
of your light source, with respect to an atomic clock.
(frequency dividing, multiplying, counting, etc.: very hard)
E) Knowing wavelength and frequency gives you the speed of light.
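For a feel for what step E) amounts to, with figures of the kind
used in practice (the numbers below are the ones usually quoted
for an iodine-stabilised HeNe laser, put in purely as an
illustration, not tied to any particular experiment):
  c = \lambda \nu
    = 632.99121258 nm x 473 612 353 604 kHz
    \approx 299 792 458 m/s.
Every digit of precision in c has to come from the wavelength
and the frequency separately.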
All this is the end point of a long evolution.
With a very broad brush:
Starting in the 19th century,
physicists were completely ignorant of the structure of matter,
so units such as platinum bars with scratches
were accepted without further thought.
The speed of light was something you measured with rulers and clocks.
By the end of the 19th century, people like Rowland
started optical precision measurements,
using interferometry and spectral lines with calibrated wavelengths.
Ultimately all modern precision manufacture came to depend on it.
This soon raised a problem:
the precision of wavelength calibrations increased to the point
where it came to be limited by the precision to which meter rods
could be reproduced at the site where the wavelength measurements
were done.
So the next step was obvious and inevitable:
the meter was redefined in terms of a suitable stable wavelength,
and the metal bars and blocks became secondary standards.
Next came the precision speed of light measurements, see above.
Again, the same problem arose:
the precision of the light speed measurement was limited
by the accuracy to which that standard wavelength could be reproduced.
So, again, the meter was redefined
now in terms of the frequency of the light source
and a defined value for the speed of light.
This eliminated the meter completely as a fundamental unit,
and all measurements of distance and size
were reduced to measurements of time intervals.
What used to be a measurement of the speed of light
now became a calibration of a standard wavelength,
so of a secondary meter standard.
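In formulas, with the present definitions
(nothing specific to any particular experiment):
  c = 299 792 458 m/s exactly, by definition,
  1 m = the distance light travels in 1/299 792 458 s,
  \lambda = c / \nu.
So all that is left to measure is the frequency \nu of the
reference radiation against the clock; the wavelength then follows
from the defined c, and the old speed-of-light apparatus delivers
a calibrated \lambda, that is, a secondary realisation of the meter.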
So the speed of light has dropped out of the story.
If it were to change,
all -measured- lengths of all objects would change in the same way,
and by Alice, this would be unobservable.
Next comes the very good question:
to what extent does this capture physical reality?
After all, we can invent other length units.
Would they all vary in the same way? Can we test this?
Going back in history,
the only other length unit that is reproducible enough
for comparison is the metal bar meter.
Supposing some fundamental things are time dependent,
would the platinum meter change in the same way
as the optically defined meter?
(or as I joked several times already,
since precision manufacturing uses optical standards,
would yesterday's pistons fit tomorrow's engines?)
More practically, in the lab:
would a standing wave that fits an optical resonator
set on a metal frame or granite block go on fitting it forever?
If not, we have a new effect, but what is it,
and how would we interpret it?
I'll explain in a #2 that the problem could not lie
with the frequency, hence with the clocks,
so we must look at the length units.
Fortunately, a hundred years of progress
has given us an understanding of the structure of matter,
so we understand our units, at least in principle.
For optical wavelengths the scale is set by the Rydberg unit
(energy, inverse wavelength, frequency).
In crap-free units the Rydberg wavelength is 1/(\alpha^2 m_{electron})
(with a whole slew of higher order corrections)
For dimensions of material objects otoh
the scale is the Bohr radius,
which is (again crap-free) 1/(\alpha m_{electron}).
(again with a whole slew of higher order corrections)
So they differ to lowest order by a factor of \alpha.
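Spelled out, with the same broad brush
(numerical factors and the higher order corrections all dropped):
  \lambda_{Rydberg} ~ 1/(\alpha^2 m_{electron})  (optical wavelengths)
  a_{Bohr} ~ 1/(\alpha m_{electron})  (atomic, hence material, sizes)
  \lambda_{Rydberg} / a_{Bohr} ~ 1/\alpha, about 137.
So if \alpha were to drift, wavelengths would shift as -2 d\alpha/\alpha,
material sizes as -d\alpha/\alpha, and their ratio as -d\alpha/\alpha,
with the electron mass dropping out of the comparison.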
Now, after all these preliminaries, we can deal with Helbig's question.
The meaningless question "can the measured speed of light vary?"
becomes a meaningful question if we rephrase it as:
if things in the universe are variable,
would all possible length units vary in the same way?
(note that this transforms a meaningless question
about dimensioned things into a meaningful question
about dimensionless ratios between units)
From the arguments above the answer is yes,
and we could in principle observe such an effect in the lab.
OTOH there is no way that we would interpret such an observation
as a variable speed of light, supposing we knew what that meant.
The prime suspect will be \alpha. (but it might be something higher up)
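To make the laboratory signature concrete, in the resonator picture
above (again to lowest order only, everything of order one dropped):
the length L of the block scales like the Bohr radius,
the wavelength \lambda like the Rydberg wavelength,
so the number of half-waves that fits,
  N = 2 L / \lambda ~ \alpha x (a number of order one),
and a slow drift of \alpha would show up directly as a fractional
drift of the fringe count, dN/N = d\alpha/\alpha.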
And for Philip: I hope that this provides the explanation you asked for,
Jan
(about time in #2)