Discussion: confirmation of undisputed results
Phillip Helbig (undress to reply)
2021-01-04 09:49:07 UTC
Not much effort is put into confirming or refuting undisputed results or
expectations, but occasionally it does happen. For example, according
to theory, muons are supposed to be essentially just like electrons but
heavier, but there seems to be experimental evidence that that is not
the case, presumably because someone decided to look for it.

What about even more-basic stuff? For example, over what range (say,
multiple or fraction of the peak wavelength) has the Planck black-body
radiation law been experimentally verified? Or that radioactive decay
really follows an exponential law? Or that the various forms (weak,
strong, Einstein) of the equivalence principle hold?

I realize that it is difficult to get funding for things like those, but
at least in some cases the corresponding experiment shouldn't be too
expensive.
Jos Bergervoet
2021-01-05 04:19:28 UTC
On 21/01/04 10:49 AM, Phillip Helbig (undress to reply) wrote:
> Not much effort is put into confirming or refuting undisputed results or
> expectations, but occasionally it does happen. For example, according
> to theory muons are supposed to be essentially just like electrons but
> heavier, but there seems to be experimental evidence that that is not
> the case, presumably because someone decided to look for it.

Are you referring to the muon g-2 experiment? Or what other results
are there to indicate this?

> What about even more-basic stuff? For example, over what range (say,
> multiple or fraction of the peak wavelength) has the Planck black-body
> radiation law been experimentally verified? Or that radioactive decay
> really follows an exponential law? Or that the various forms (weak,
> strong, Einstein) of the equivalence principle hold?

Einstein's GR predictions have had attention, but mainly at large scale.
Testing the short-range part of gravity at the lab-experiment scale
would basically be testing Newton's theory, and departures from 1/r^2
have been looked for. Also Eötvös' experiment has often been checked.
I think we need even more basic examples to find something new!

> I realize that it is difficult to get funding for things like those, but
> at least in some cases the corresponding experiment shouldn't be too
> expensive.

But what can we still do?
Ohm's law? Has been done.. (Hall effect, SQUIDs, tunneling, "break
junctions" etc..)
Maybe Maxwell?! Non-linearity at high field-strength is predicted by
QED but has it been tested? And coupling to the axion might also give
low-energy departures (but ADMX is in fact looking for that..)

Conservation of energy, then? Departure from unitarity in QM?
Flatness/isotropy of space at the lab scale? That's all really basic
but I think it is already addressed by some existing experiments. The
real problem here seems to be finding something that is overlooked!

--
Jos
Richard D. Saam
2021-01-06 21:06:01 UTC
On 1/4/21 10:19 PM, Jos Bergervoet wrote:
> The real problem here seems to be finding something that is overlooked!
>

Here is an observation to be explained, using all 1,800 available
asteroid rotations from
the International Astronomical Union Minor Planet Center
Lightcurve Parameters (2006 Mar. 14):
https://minorplanetcenter.net//iau/lists/LightcurveDat.html
The database, sorted and plotted (100-point moving average),
indicates a minimum rotation (delta hour) per rotation (hour) near
asteroids with an 8-hour rotation.
Why is there a minimum,
and why is it of this magnitude?
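
Since the plotted quantity is described only loosely, here is a
minimal sketch of one reading of the procedure (the input file
periods_hours.txt, one period per line, is an assumed pre-parsed
extract of the list):

  import numpy as np
  import matplotlib.pyplot as plt

  periods = np.sort(np.loadtxt("periods_hours.txt"))
  delta = np.diff(periods)        # spacing of successive periods (delta hour)
  ratio = delta / periods[1:]     # rotation (delta hour) per rotation (hour)

  window = np.ones(100) / 100.0   # 100-point moving average
  smoothed = np.convolve(ratio, window, mode="valid")

  plt.semilogx(periods[1:len(smoothed) + 1], smoothed)
  plt.xlabel("rotation period (hours)")
  plt.ylabel("smoothed delta period / period")
  plt.show()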
Richard Livingston
2021-01-10 19:44:05 UTC
On Wednesday, January 6, 2021 at 3:06:04 PM UTC-6, Richard D. Saam wrote:
> On 1/4/21 10:19 PM, Jos Bergervoet wrote:
> > The real problem here seems to be finding something that is overlooked!
> >
> Here is an observation to be explained for
> all available 1,800 asteroid rotations from
> The International Astronomical Union Minor Planet Center
> Lightcurve Parameters (2006 Mar. 14)
> https://minorplanetcenter.net//iau/lists/LightcurveDat.html
> The data base sorted and plotted(100 point moving average)
> indicates a minimum rotation(delta hour) per rotation(hour) near
> asteroids with 8 hour rotation.
> Why is there a minimum
> and why is it of this magnitude?

Richard,

I plotted the period data and I don't see anything peculiar at 8 hours.
What I did was import all that data into Excel and sort the period
column. I then plotted the result. I'm getting a smooth curve from
about 2.3 hours up to over 200 hours, gradually increasing in slope
(i.e. fewer and fewer examples) at the high end. There is a clear step
at about 2 hours however, with relatively few examples between 0.9 and
2.2 hours.
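
The sort-and-plot step is easy to reproduce outside Excel as well; a
sketch (periods_hours.txt, one period per line, is an assumed
pre-parsed extract of the list):

  import numpy as np
  import matplotlib.pyplot as plt

  # Rank-ordered periods: a jump in this curve corresponds to a gap in
  # the period distribution, e.g. the step near 2 hours noted above.
  periods = np.sort(np.loadtxt("periods_hours.txt"))
  plt.semilogy(np.arange(1, len(periods) + 1), periods)
  plt.xlabel("rank (sorted)")
  plt.ylabel("rotation period (hours)")
  plt.show()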

I think there is good reason for there to be discontinuities in this
data. One is observational: The measurement techniques might not work
well for very slow or very fast rotations. Another is physical: Many
of these objects are "dirty snowballs" and very fast rotation would
cause them to fly apart. This data set does not give insight into the
sizes of the objects, but I would expect very small objects to be
under-represented, as they are dim; and as there might very well be a
correlation between size, brightness, and rotation speed, there might be
artifacts in the data set that aren't real.

If this problem interests you, you are encouraged to investigate it!
There might be something of interest there.

Rich L.
Jos Bergervoet
2021-01-10 16:17:11 UTC
On 21/01/06 10:06 PM, Richard D. Saam wrote:
> On 1/4/21 10:19 PM, Jos Bergervoet wrote:
>> The real problem here seems to be finding something that is overlooked!
>>
>
> Here is an observation to be explained for
> all available 1,800 asteroid rotations from
> The International Astronomical Union Minor Planet Center
> Lightcurve Parameters (2006 Mar. 14)
> https://minorplanetcenter.net//iau/lists/LightcurveDat.html
> The data base sorted and plotted(100 point moving average)

I don't see a plot in your link.. (and reading it gives errors
at e.g. Weringia and Oppavia where numbers are missing..)

> indicates a minimum rotation(delta hour) per rotation(hour) near
> asteroids with 8 hour rotation.

Do you mean a minimum in the *variation* of the rotation? Or
else what quantities is the sentence trying to describe?

> Why is there a minimum
> and why is it of this magnitude?

OK, we first need a plot.. If I plot the listed variation vs. the
period (for those that are readable, and using the geometric mean
where a range for the variation is given) then this is the result:
<http://bergervo.home.xs4all.nl/out5.png>
Does this show what you mean?
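
For reference, the plotting step can be sketched as follows (the
input arrays are assumed to be pre-parsed, since the list itself
needs cleaning first; the file names are placeholders):

  import numpy as np
  import matplotlib.pyplot as plt

  # Period (hours) and the low/high ends of the quoted amplitude range
  # (magnitudes); for single-valued amplitudes, lo == hi.
  period = np.loadtxt("period_hours.txt")
  amp_lo = np.loadtxt("amplitude_lo.txt")
  amp_hi = np.loadtxt("amplitude_hi.txt")

  amp = np.sqrt(amp_lo * amp_hi)   # geometric mean of the quoted range
  plt.loglog(period, amp, ".")
  plt.xlabel("rotation period (hours)")
  plt.ylabel("lightcurve amplitude (mag)")
  plt.show()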

--
Jos
Douglas Eagleson
2021-01-18 18:16:13 UTC
On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
> Not much effort is put into confirming or refuting undisputed results or
> expectations, but occasionally it does happen. For example, according
> to theory muons are supposed to be essentially just like electrons but
> heavier, but there seems to be experimental evidence that that is not
> the case, presumably because someone decided to look for it.
>
> What about even more-basic stuff? For example, over what range (say,
> multiple or fraction of the peak wavelength) has the Planck black-body
> radiation law been experimentally verified? Or that radioactive decay
> really follows an exponential law? Or that the various forms (weak,

Given a single neutron creating a single radioisotope atom,
the question becomes "can it never decay?" Meaning: does
decay have a probability distribution?

The rate of decay in an exponential function leads to a
non-converging function. I might submit that it is exponential,
but has a time variable called "last atom decayed".

The natural existence of a characteristic decay rate implies
an atom set lifetime. Now a convergent?

But, at some time the last atom.

Given a set of atoms and a 100 percent counting efficiency,
will the number of counts ever equal the number of
atoms?

This basically needs a mathematical solution. How to solve
this dilemma? I am still open on this question, but
submit it as a version of the halving-distances
dilemma: "If you halve the distance to an object forever,
do you ever finally reach the object?"

Or attack it by doing an axis or time transform.
Jos Bergervoet
2021-01-18 21:34:32 UTC
On 21/01/18 7:16 PM, Douglas Eagleson wrote:
> On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
>> Not much effort is put into confirming or refuting undisputed results or
>> expectations, but occasionally it does happen. For example, according
>> to theory muons are supposed to be essentially just like electrons but
>> heavier, but there seems to be experimental evidence that that is not
>> the case, presumably because someone decided to look for it.
>>
>> What about even more-basic stuff? For example, over what range (say,
>> multiple or fraction of the peak wavelength) has the Planck black-body
>> radiation law been experimentally verified? Or that radioactive decay
>> really follows an exponential law? Or that the various forms (weak,
>
> given a single neutron creating a single radioisotope atom
> the question becomes "can it never decay?" Meaning does
> decay have a probability distribution.

"Probability" is only required if you insist upon a "collapse"
of the state in QM. But that is now an almost untenable view.
If you just accept that the universe is a superposition of
different branches, as QM literally describes it, then there
is no randomness and "probability" will play no fundamental
role. You just will have the amplitude of one branch decaying
exponentially (and never becoming zero).
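
In formulas (the standard exponential-decay form): the amplitude of
the undecayed branch evolves as c(t) = c(0)*exp(-t/(2*tau)), so its
weight |c(t)|^2 = exp(-t/tau) decreases smoothly and is still
positive at every finite time t; there is no definite moment at
which the decay "happens".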

NB: of course probability would still be a useful concept for
describing large collections of objects or events, just like
it was in classical physics, but no fundamental need for it
would exist.

> ...
> The natural existence of a characteristic decay rate implies
> an atom set lifetime. Now a convergent?

I don't see how it necessarily "implies" that. It simply states
that the amplitude of the state with an excited atom gradually
decreases in the total superposition of the state of the
universe, while that of the state with the decayed atom
increases.

> But, at some time the last atom.

Only if you believe in a "collapse"! Otherwise no such time
exists.

> Given a set of atoms and a 100percent counting efficiency
> will the number of counts ever equal the number of
> atoms.

In those branches of the total superposition describing the
universe where all atoms have decayed, there it equals that
number! Already at the beginning of the counting (but at the
beginning the amplitude of that component in the superposition
is very low.)

> basically needing mathematical solution. How to solve
> this dilemma?

Easy: forget the Copenhagen "interpretation" (which isn't
an interpretation, but a pure *rejection* of the gradual,
unitary time-evolution described by the equations of QM.)

> ... I am still open on this question but
> submit it as a version of the halving distances function
> dilemma. "If you halve the distance to an object forever
> do you ever finally reach the object?"

That answer is known: you do reach it if your halving of the
distance becomes faster at a sufficient rate every time you
do it. And otherwise you don't reach it. Just sum the times..
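
To spell that out: if every halving takes a fixed time tau, the total
time is tau + tau + ... , which diverges, and you never arrive. If
instead you approach at constant speed v from initial distance d, the
n-th halving takes d/(2^n * v), and

  sum_{n>=1} d/(2^n * v) = d/v,

which is finite: you arrive after time d/v even though the number of
halvings is infinite.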

> Or attack it by doing axis or time transform.

Attacking the description of exponential decay is indeed an
interesting field of study, especially the cases where the
time-span is billions of years. How can QM describe such a
slow process, given all the influence from the environment..
Why isn't the transition stimulated by external radiation,
etc.? But those are just questions within the gradual change
mechanism of the Hilbert space state.

See the references given below Matt O'Dowd's latest video:
<https://www.youtube.com/watch?v=j5HyMNNSGqQ>

--
Jos
Phillip Helbig (undress to reply)
2021-01-23 17:50:06 UTC
In article <d91462b6-bf4d-422c-ba34-***@googlegroups.com>, Douglas Eagleson <***@gmail.com> writes:

> On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
>> Not much effort is put into confirming or refuting undisputed results or
>> expectations, but occasionally it does happen. For example, according
>> to theory muons are supposed to be essentially just like electrons but
>> heavier, but there seems to be experimental evidence that that is not
>> the case, presumably because someone decided to look for it.
>>
>> What about even more-basic stuff? For example, over what range (say,
>> multiple or fraction of the peak wavelength) has the Planck black-body
>> radiation law been experimentally verified? Or that radioactive decay
>> really follows an exponential law? Or that the various forms (weak,
>
> given a single neutron creating a single radioisotope atom
> the question becomes "can it never decay?" Meaning does
> decay have a probability distribution.
>
> The rate of decay in an exponential function leads to a
> non-converging function. I might submit that it is exponential,
> but has a time variable called "last atom decayed".
>
> The natural existence of a characteristic decay rate implies
> an atom set lifetime. Now a convergent?
>
> But, at some time the last atom.
>
> Given a set of atoms and a 100percent counting efficiency
> will the number of counts ever equal the number of
> atoms.
>
> basically needing mathematical solution. How to solve
> this dilemma? I am still open on this question but
> submit it as a version of the halving distances function
> dilemma. "If you halve the distance to an object forever
> do you ever finally reach the object?"
>
> Or attack it by doing axis or time transform.

The probability per unit time that an atom decays is constant. That leads
directly to a declining exponential function for the number of atoms
which have not yet decayed. Of course, that is exactly true only in the
limit of an infinite number of atoms. If the number becomes too small,
then the noise in the function becomes large enough to obscure the
behaviour in the limit. When you are down to one atom, it is still the
case that the probability that it will decay is independent of time. So
you have no idea when it will decay.
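
In symbols: with a constant decay probability per unit time lambda,
dN/dt = -lambda*N, so N(t) = N(0)*exp(-lambda*t), and the survival
probability of a single atom is exp(-lambda*t), which is memoryless.
A minimal Monte Carlo sketch of the small-N noise (numpy assumed;
the numbers are purely illustrative):

  import numpy as np

  rng = np.random.default_rng(0)
  lam = 1.0                       # decay probability per unit time
  t = 1.0                         # observation time

  for n0 in (10, 1000, 100000):
      # Exponentially distributed lifetimes; count survivors at time t
      # and compare with the deterministic limit n0*exp(-lam*t).
      lifetimes = rng.exponential(1.0 / lam, size=n0)
      survivors = np.sum(lifetimes > t)
      print(n0, survivors, n0 * np.exp(-lam * t))

The relative scatter around exp(-lam*t) shrinks roughly like
1/sqrt(n0): the exponential law is exact only in the infinite-N
limit, as stated above.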
J. J. Lodder
2021-01-24 20:31:02 UTC
Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
wrote:

> Not much effort is put into confirming or refuting undisputed results or
> expectations, but occasionally it does happen. For example, according
> to theory muons are supposed to be essentially just like electrons but
> heavier, but there seems to be experimental evidence that that is not
> the case, presumably because someone decided to look for it.
>
> What about even more-basic stuff? For example, over what range (say,
> multiple or fraction of the peak wavelength) has the Planck black-body
> radiation law been experimentally verified?

Very well, given that the cosmic black body radiation has been measured
in great detail to better than a millikelvin.

> Or that radioactive decay
> really follows an exponential law? Or that the various forms (weak,
> strong, Einstein) of the equivalence principle hold?

Eötvös also has been verified to great precision.

> I realize that it is difficult to get funding for things like those, but
> at least in some cases the corresponding experiment shouldn't be too
> expensive.

You should realise that a lot of that testing is implicit.
The design of all experiments takes the laws of physics,
as we know them, for granted.
If there really is something wrong with those laws
the experiments would not behave as expected,
and then people would start to search for causes.

For example, LIGO takes general and special relativity for granted.
So there really is no point in wringing yet another verification
of Michelson-Morley out of it. (and others can do it much better)
A mention in the Guinness Book of Records as the largest M&M experiment ever
really isn't worth the trouble.

Moreover, confirming the well-known is not without risk.
If you fail to obtain the 'right' result
people will not doubt the result,
they will doubt your competence as an experimentalist.

You can think of the Italian 'speed of neutrinos' experiment
that found greater than light speeds from CERN to Gran Sasso
as a particularly sad example.
'Everybody' with standing told them that this just cannot be right.
And indeed it wasn't, and the team leader resigned in disgrace,

Jan
Douglas Eagleson
2021-01-25 23:02:32 UTC
On Monday, January 18, 2021 at 4:34:35 PM UTC-5, Jos Bergervoet wrote:
> On 21/01/18 7:16 PM, Douglas Eagleson wrote:
> > On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
> >> Not much effort is put into confirming or refuting undisputed results or
> >> expectations, but occasionally it does happen. For example, according
> >> to theory muons are supposed to be essentially just like electrons but
> >> heavier, but there seems to be experimental evidence that that is not
> >> the case, presumably because someone decided to look for it.
> >>
> >> What about even more-basic stuff? For example, over what range (say,
> >> multiple or fraction of the peak wavelength) has the Planck black-body
> >> radiation law been experimentally verified? Or that radioactive decay
> >> really follows an exponential law? Or that the various forms (weak,
> >
> > given a single neutron creating a single radioisotope atom
> > the question becomes "can it never decay?" Meaning does
> > decay have a probability distribution.
> "Probability" is only required if you insist upon a "collapse"
> of the state in QM. But that is now an almost untenable view.
> If you just accept that the universe is a superposition of
> different branches, as QM literally describes it, then there
> is no randomness and "probability" will play no fundamental
> role. You just will have the amplitude of one branch decaying
> exponentially (and never becoming zero).
>


> NB: of course probability would still be a useful concept for
> describing large collections of objects or events, just like
> it was in classical physics, but no fundamental need for it
> would exist.
>
I am an experimentalist, btw. Well, my interpretation of QM
is Heisenberg's. It is a complete statement when all
things are considered an abstract reservoir. Here is the
meaning of superposition. I went so far as to consider the
abstract dam. And here is the meaning of all transformations
being the outcome of QM tunneling. Is tunneling always
probabilistic, or is it sometimes an analytic function?
The reservoir interpretation is a theorist's verbal
communication.

> > ...
> > The natural existence of a characteristic decay rate implies
> > an atom set lifetime. Now a convergent?
> I don't see how it necessarily "implies" that. It simply states
> that the amplitude of the state with an excited atom gradually
> decreases in the total superposition of the state of the
> universe, while the that of the state with the decayed atom
> increases.
I was trying to state the dichotomy of the non-convergent
exponential decay function with a convergent decay.
Given a set of atoms of a certain decay rate, can you
detect the decay of all the atoms? Or is there a probability
of detection where sometimes you detect all the atoms decay,
while sometimes you do not detect all transformations?
This being the origin of atom-decay detection statistics.


> > But, at some time the last atom.
> Only if you believe in a "collapse"! Otherwise no such time
> exists.
Again, I was commenting on a comment.
I am not a theorist so I can't reply.
> > Given a set of atoms and a 100percent counting efficiency
> > will the number of counts ever equal the number of
> > atoms.
> In those branches of the total superposition describing the
> universe where all atoms have decayed, there it equals that
> number! Already at the beginning of the counting (but at the
> beginning the amplitude of that component in the superposition
> is very low.)
Again: I am not a theorist so I can't reply.

> > basically needing mathematical solution. How to solve
> > this dilemma?
> Easy: forget the Copenhagen "interpretation" (which isn't
> an interpretation, but a pure *rejection* of the gradual,
> unitary time-evolution described by the equations of QM.)
>
Yes, we do have a difference in QM outlook.
> > ... I am still open on this question but
> > submit it as a version of the halving distances function
> > dilemma. "If you halve the distance to an object forever
> > do you ever finally reach the object?"
> That answer is known: you do reach it if your halving of the
> distance becomes faster at a sufficient rate every time you
> do it. And otherwise you don't reach it. Just sum the times..
I am not sure if this is an allowed transformation of time.
> > Or attack it by doing axis or time transform.
> Attacking the description of exponential decay is indeed an
> interesting field of study, especially the cases where the
> time-span is billions of years. How can QM describe such a
> slow process, given all the influence from the environment..
> Why isn't the transition stimulated by external radiation,
> etc.? But those are just questions within the gradual change
> mechanism of the Hilbert space state.
The existence of a decay rate this slow is a testament to the
dynamic range of measurement. This is like the mystery of the
clarity of the heavens, or of DNA.

Transforming exponential decay might be possible. The
atoms always have an integer value and commonly have a real
x-axis time value. Is this allowed? The time to the last
atom just might be termed convergent. So maybe take this time
and divide it into integers? Use a time transform to
ensure time units greater than one. I have no clue mathematically
about a legal restating.

You do have to consider here the
distinction of a stochastic measure as opposed to non-stochastic
decay constants. It is basically the origin of the implication
of atom-set lifetimes. You can have a sample created by a fast
pulse of, say, neutrons, or a case of a constant rate of neutrons
for a period, or a case of non-uniform irradiation. I would
submit that a priori knowledge of the production function is
demanded to measure the decay constant. Just think of knowing such
sets distributed in the whole sample of production. There is
no a priori knowledge of location. A set of atom sets confounds
the system's decay-function measure.

So waiting around for N=1 to decay is an important interpretation
to consider. Given N=1 you cannot measure the decay constant. You
cannot measure a time of creation. You cannot infer the existence
of a decay product's causal event. A single atom has no measurable
statistical event distribution. Maybe waiting around for a zero event
is a waste of time.


>
> See the references given below Matt O'Dowd's latest video:
> <https://www.youtube.com/watch?v=j5HyMNNSGqQ>
>
> --
> Jos
Thanks, the site has a great recitation of the outlooks of
QM in its early origins. One of my points of view is that
collapse allows a very special class of information.
It cannot alter the thermodynamics of the system other
than a spatial distribution. A human using the knowledge
is not subject t
Phillip Helbig (undress to reply)
2021-01-25 23:03:23 UTC
In article <1p3imn8.10geog6da220gN%***@de-ster.demon.nl>, "J. J.
Lodder" <***@de-ster.demon.nl> writes:

> Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
> wrote:
>
> > Not much effort is put into confirming or refuting undisputed results or
> > expectations, but occasionally it does happen. For example, according
> > to theory muons are supposed to be essentially just like electrons but
> > heavier, but there seems to be experimental evidence that that is not
> > the case, presumably because someone decided to look for it.
> >
> > What about even more-basic stuff? For example, over what range (say,
> > multiple or fraction of the peak wavelength) has the Planck black-body
> > radiation law been experimentally verified?
>
> Very well, given that the cosmic black body radiation has been measured
> in great detail to better than a millikelvin.

Yes, but at what frequencies? As the name indicates, the CMB peaks in
the microwave region. It is well measured there, and a good way in
either direction, but towards higher frequencies the intensity drops
sharply. Even ignoring confusion by other sources and so on, I doubt
that it has been measured to any significant accuracy in the
ultraviolet, not to mention the gamma-ray region. (Photons here will be
few and far between.)

Yes, it looks like a perfect black body, no-one has convincingly argued
that it should be otherwise, and so on, but the question remains over
what range has that been verified.

Discussing the CMB is a bit of a red herring, because if one saw
departures from the black-body spectrum, one would suspect some
astrophysical cause. So think of lab measurements of black bodies: over
what range in frequency have they been made and to what precision?

> > Or that radioactive decay
> > really follows an exponential law? Or that the various forms (weak,
> > strong, Einstein) of the equivalence principle hold?
>
> Eotvos also has been verified to grat precision.

That is just the weak equivalence principle.

> You should realise that a lot of that testing is implicit.
> The design of all experiments takes the laws of physics,
> as we know them, for granted.
> If there really is something wrong with those laws
> the experiments would not behave as expected,
> and then people would start to search for causes.

Right, but a tiny deviation from a black-body spectrum at a
frequency where no-one is looking would go unnoticed.

> For example, LIGO takes general and special relativity for granted.
> So there really is no point in wringing yet another verification
> of Michelson-Morley out of it. (and others can do it much better)
> A mention in Guiness book of records as the largest M&M experiment ever
> really isn't worth the trouble.

Right, to some extent. One can actually use LIGO to constrain
alternatives to GR, which implies that one does not assume GR from the
ground up. For example, Bekenstein's TeVeS theory was ruled out by LIGO
(and the follow-up observations), because it predicts significantly
different Shapiro delays for gravitational and electromagnetic
radiation.

> Moreover, confirming the well-known is not without risk.
> If you fail to obtain the 'right' result
> people will not doubt the result,
> they will doubt your competence as an experimentalist.

Maybe, but that is not good science, especially if someone confirms an
unexpected result.

> You can think of the Italian 'speed of neutrinos' experiment
> that found greater than light speeds from CERN to Gran Sasso
> as a particularly sad example.
> 'Everybody' with standing told them that this just cannot be right.
> And indeed it wasn't, and the team leader resigned in disgrace,

IIRC, they didn't actually believe that their neutrinos went faster than
the speed of light, but said that that was the result of their analysis
and solicited better explanations, which they ultimately got.
Jay R. Yablon
2021-01-28 12:29:56 UTC
Phillip Helbig (undress to reply) <***@asclothestro.multivax.de> wrote:

> In article <1p3imn8.10geog6da220gN%***@de-ster.demon.nl>, "J. J.
> Lodder" <***@de-ster.demon.nl> writes:
>
> > Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
> > wrote:
> >
> > > Not much effort is put into confirming or refuting undisputed results or
> > > expectations, but occasionally it does happen. For example, according
> > > to theory muons are supposed to be essentially just like electrons but
> > > heavier, but there seems to be experimental evidence that that is not
> > > the case, presumably because someone decided to look for it.
> > >
> > > What about even more-basic stuff? For example, over what range (say,
> > > multiple or fraction of the peak wavelength) has the Planck black-body
> > > radiation law been experimentally verified?
> >
> > Very well, given that the cosmic black body radiation has been measured
> > in great detail to better than a millikelvin.
>
> Yes, but at what frequencies? As the name indicates, the CMB peaks in
> the microwave region. It is well measured there, and a good way in
> either direction, but towards higher frequencies the intensity drops
> sharply. Even ignoring confusion by other sources and so on, I doubt
> that it has been measured to any significant accuracy in the
> ultraviolet, not to mention the gamma-ray region. (Photons here will be
> few and far between.)
>
> Yes, it looks like a perfect black body, no-one has convincingly argued
> that it should be otherwise, and so on, but the question remains over
> what range has that been verified.

All Planck radiation has a peak wavelength given by the Wien
displacement law
(https://en.wikipedia.org/wiki/Wien%27s_displacement_law). The median
wavelength, with half the radiated energy on either side, is at about
1.4182 times the peak. Using Planck's law
(https://en.wikipedia.org/wiki/Planck%27s_law) there is an
extremely rapid drop-off on either side of this peak, whether one moves
toward the UV or the IR end of the spectrum. It is assumed (as an
`undisputed result') that Planck's law will govern without limit even at
the extreme ends of the spectrum. But does anybody know whether anybody
has ever thought to test for this (find `confirmation')?

For example, if one does the calculation, Planck's law predicts that a
single photon will be emitted with a wavelength shorter than 10% of the
Wien peak for every 3.43x10^18 photons emitted over the entire spectrum
of photons with wavelengths longer than 10% of the peak. So, whether
these UV-end photons really are emitted, or whether there is some type
of emission cutoff (not dissimilar to the photoelectric effect), is not
something anybody would stumble upon casually. Such extreme-UV photons
are needles in quintillion-photon haystacks. Somebody would have to
design and conduct a special experiment to look for them.
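
The quoted ratio can be checked numerically. In the dimensionless
variable x = h*c/(lambda*k*T), the black-body photon emission rate
per unit x is proportional to x^2/(exp(x) - 1), and a wavelength of
10% of the Wien peak corresponds to x = 10*4.9651. A minimal sketch
(scipy assumed):

  import numpy as np
  from scipy.integrate import quad

  def photon_density(x):
      # Dimensionless black-body photon-number spectrum per unit x
      return x**2 / np.expm1(x)

  x_peak = 4.9651142              # Wien peak of the wavelength spectrum
  x_cut = 10.0 * x_peak           # wavelength = 10% of the Wien peak

  # Short wavelengths (lambda < 0.1*peak) correspond to x > x_cut.
  short, _ = quad(photon_density, x_cut, np.inf, epsabs=0, epsrel=1e-10)
  rest, _ = quad(photon_density, 0.0, x_cut)
  print(rest / short)             # ~3.4x10^18, consistent with the above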

My question is simple: Does anybody know if anybody has ever looked for
some sort of cutoff at the UV end of the blackbody spectrum, and if so,
what did they find?

[Moderator's note: That is a good question. A brief web search turned
up nothing. :-( -P.H.]
Richard Livingston
2021-01-31 15:42:44 UTC
On Thursday, January 28, 2021 at 6:29:59 AM UTC-6, Jay R. Yablon wrote:

> My question is simple: Does anybody know if anybody has ever looked for
> some sort of cutoff at the UV end of the blackbody spectrum, and if so,
> what did they find?
>
> [Moderator's note: That is a good question. A brief web search turned
> up nothing. :-( -P.H.]

The real issue here is motivation. Doing an experiment has cost, in
time and money. A professional scientist must try to maximize his/her
impact while minimizing cost and time. If you don't generate results
that people are interested in you don't make tenure and you don't get
the research grants.

So there has to be some reason to expect an interesting result. That
means either a totally unexpected result or one that confirms or
disproves a theory that many people are interested in. You can spend an
entire career testing limit cases in well established theories and never
get an interesting result. That would be a formula for a short and
pointless career. Unless you are very lucky.

What people do is use current theories and problems/questions to guide
their research. This makes perfect sense. You don't look for gold in
Antarctic glaciers, you look in the same sorts of rock formations that
others have found gold.

Of course there is nothing stopping you from looking wherever you like
for an interesting result. You might be lucky, and then be famous. But
don't expect generous funding for your experiments.

Rich L.
Douglas Eagleson
2021-02-06 02:30:39 UTC
On Sunday, January 31, 2021 at 10:42:46 AM UTC-5, ***@gmail.com wrote:
> On Thursday, January 28, 2021 at 6:29:59 AM UTC-6, Jay R. Yablon wrote:
>
> > My question is simple: Does anybody know if anybody has ever looked for
> > some sort of cutoff at the UV end of the blackbody spectrum, and if so,
> > what did they find?
> >
> > [Moderator's note: That is a good question. A brief web search turned
> > up nothing. :-( -P.H.]
> The real issue here is motivation. Doing an experiment has cost, in
> time and money. A professional scientist must try to maximize his/her
> impact while minimizing cost and time. If you don't generate results
> that people are interested in you don't make tenure and you don't get
> the research grants.
>
> So there has to be some reason to expect an interesting result. That
> means either a totally unexpected result or one that confirms or
> disproves a theory that many people are interested in. You can spend an
> entire career testing limit cases in well established theories and never
> get an interesting result. That would be a formula for a short and
> pointless career. Unless you are very lucky.
>
> What people do is use current theories and problems/questions to guide
> their research. This makes perfect sense. You don't look for gold in
> Antarctic glaciers, you look in the same sorts of rock formations that
> others have found gold.
>
> Of course there is nothing stopping you from looking where ever you like
> for an interesting result. You might be lucky, and then be famous. But
> don't expect generous funding for your experiments.
>
> Rich L.


Try the Optics Group at the National Institute of Standards and
Technology (NIST). Have them review your experiment and
have them positively recommend applying for a research grant.
They do allow retrials of old experiments.

Basically, think big. For example: design a compound radiation
detector, a single-photon crystal detector with an implanted
temperature sensor. Here your sensor is a black body also.
J. J. Lodder
2021-01-29 16:08:22 UTC
Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
wrote:

> In article <1p3imn8.10geog6da220gN%***@de-ster.demon.nl>, "J. J.
> Lodder" <***@de-ster.demon.nl> writes:
>
> > Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
> > wrote:
> >
> > > Not much effort is put into confirming or refuting undisputed results or
> > > expectations, but occasionally it does happen. For example, according
> > > to theory muons are supposed to be essentially just like electrons but
> > > heavier, but there seems to be experimental evidence that that is not
> > > the case, presumably because someone decided to look for it.
> > >
> > > What about even more-basic stuff? For example, over what range (say,
> > > multiple or fraction of the peak wavelength) has the Planck black-body
> > > radiation law been experimentally verified?
> >
> > Very well, given that the cosmic black body radiation has been measured
> > in great detail to better than a millikelvin.
>
> Yes, but at what frequencies? As the name indicates, the CMB peaks in
> the microwave region. It is well measured there, and a good way in
> either direction, but towards higher frequencies the intensity drops
> sharply. Even ignoring confusion by other sources and so on, I doubt
> that it has been measured to any significant accuracy in the
> ultraviolet, not to mention the gamma-ray region. (Photons here will be
> few and far between.)
>
> Yes, it looks like a perfect black body, no-one has convincingly argued
> that it should be otherwise, and so on, but the question remains over
> what range has that been verified.
>
> Discussing the CMB is a bit of a red herring, because if one saw
> departures from the black-body spectrum, one would suspect some
> astrophysical cause. So think of lab measurements of black bodies: over
> what range in frequency have they been made and to what precision?

The fact that they can measure deviations of the CMB
from the ideal black body spectrum implies
that they can verify the black body spectrum
for a laboratory black body to greater accuracy.
(they use one for calibration, iirc)

Asking about the high end tail is not very useful,
for there will always be a higher point
where it is not verified, so you can go on asking forever,

Jan
Phillip Helbig (undress to reply)
2021-01-29 18:38:41 UTC
In article <1p3rzqg.11e6pqz12s5pn3N%***@de-ster.demon.nl>,
***@de-ster.demon.nl (J. J. Lodder) writes:

>> Discussing the CMB is a bit of a red herring, because if one saw
>> departures from the black-body spectrum, one would suspect some
>> astrophysical cause. So think of lab measurements of black bodies: over
>> what range in frequency have they been made and to what precision?
>
> The fact that they can measure deviations of the CMB
> from the ideal black body spectrum implies
> that they can verifiy the black body spectrum
> for a laboratory black black body to greater accuracy.
> (they use one for calibration, iirc)

I'm pretty sure that any deviations are from the theoretical curve, not
from a lab measurement. Certainly the theoretical curve and the lab
measurements agree over the range in which they have been compared. But
at really high frequencies, the CMB signal isn't strong enough to
detect, and, as far as I know, no-one has measured at really high
frequencies in the lab either.

> Asking about the high end tail is not very useful,
> for there will always be a higher point
> where it is not verified, so you can go on asking forever,

That is true. But the original question was to what multiple of the
peak has it been measured in the lab?
J. J. Lodder
2021-02-02 10:07:34 UTC
Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
wrote:

> In article <1p3rzqg.11e6pqz12s5pn3N%***@de-ster.demon.nl>,
> ***@de-ster.demon.nl (J. J. Lodder) writes:
>
> >> Discussing the CMB is a bit of a red herring, because if one saw
> >> departures from the black-body spectrum, one would suspect some
> >> astrophysical cause. So think of lab measurements of black bodies: over
> >> what range in frequency have they been made and to what precision?
> >
> > The fact that they can measure deviations of the CMB
> > from the ideal black body spectrum implies
> > that they can verifiy the black body spectrum
> > for a laboratory black black body to greater accuracy.
> > (they use one for calibration, iirc)
>
> I'm pretty sure that any deviations are from the theoretical curve, not
> from a lab measurement. Certainly the theoretical curve and the lab
> measurements agree over the range in which they have been compared. But
> at really high frequencies, the CMB signal isn't strong enough to
> detect, and, as far as I know, no-one has measured at really high
> frequencies in the lab either.
>
> > Asking about the high end tail is not very useful,
> > for there will always be a higher point
> > where it is not verified, so you can go on asking forever,
>
> That is true. But the original question was to what multiple of the
> peak has it been measured in the lab?

That is also not a very useful question,
for there is no such thing as a 'black body' in the lab.
A black body is an idealisation.
So any observed deviations are going to be ascribed
to the laboratory 'black body' not being ideal,
rather than to errors in the theoretical black body formula,

Jan
Jos Bergervoet
2021-02-04 09:33:52 UTC
On 21/02/02 11:07 AM, J. J. Lodder wrote:
> Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
> wrote:
>
>> In article <1p3rzqg.11e6pqz12s5pn3N%***@de-ster.demon.nl>,
>> ***@de-ster.demon.nl (J. J. Lodder) writes:
>>
>>>> Discussing the CMB is a bit of a red herring, because if one saw
>>>> departures from the black-body spectrum, one would suspect some
>>>> astrophysical cause. So think of lab measurements of black bodies: over
>>>> what range in frequency have they been made and to what precision?
>>>
>>> The fact that they can measure deviations of the CMB
>>> from the ideal black body spectrum implies
>>> that they can verifiy the black body spectrum
>>> for a laboratory black black body to greater accuracy.
>>> (they use one for calibration, iirc)
>>
>> I'm pretty sure that any deviations are from the theoretical curve, not
>> from a lab measurement. Certainly the theoretical curve and the lab
>> measurements agree over the range in which they have been compared. But
>> at really high frequencies, the CMB signal isn't strong enough to
>> detect, and, as far as I know, no-one has measured at really high
>> frequencies in the lab either.
>>
>>> Asking about the high end tail is not very useful,
>>> for there will always be a higher point
>>> where it is not verified, so you can go on asking forever,
>>
>> That is true. But the original question was to what multiple of the
>> peak has it been measured in the lab?
>
> That is also not a very useful question,

Then you say that it "should not be disputed" while Phillip's
header presumably meant results that "have not been disputed"
(bringing to mind one of the prestigious prizes in physics..
<https://archive.vn/DTv1N#History> )

> for there is no such thing as a 'black body' in the lab.
> A black body is an idealization.

But all our laws of physics are probably idealizations, and the
concepts they use likewise. So then nothing about them would be
a useful question since ideal things do not exist in the lab.
Your reasoning is too restrictive.

> So any observed deviations are going to be ascribed
> to the laboratory 'black body' not being ideal,

Possibly at first they will, and that by itself can be useful.
Improvements of experimental techniques are often obtained by
trying to measure the (almost) impossible. But there may also
remain some real result after measurements have been improved.

> rather than to errors in the theoretical black body formula,

But eventually we *do* expect deviations. The black-body curve
at least might have deviations at the Planck energy, but why
not earlier? Maybe at twice the electron mass, where a pair
creation channel opens? Or there may be a cusp or peak at the
axion mass?! (In a sense, the latter is what ADMX is looking
for, although they probably don't care about the CMB shape..)
Anyhow, I'd think something unexpected certainly is possible.

--
Jos
Phillip Helbig (undress to reply)
2021-02-06 02:30:09 UTC
In article <1p3xbnf.1slwrdw1lwxd92N%***@de-ster.demon.nl>,
***@de-ster.demon.nl (J. J. Lodder) writes:

> Phillip Helbig (undress to reply) <***@asclothestro.multivax.de>
> wrote:
>
> > In article <1p3rzqg.11e6pqz12s5pn3N%***@de-ster.demon.nl>,
> > ***@de-ster.demon.nl (J. J. Lodder) writes:
> >
> > >> Discussing the CMB is a bit of a red herring, because if one saw
> > >> departures from the black-body spectrum, one would suspect some
> > >> astrophysical cause. So think of lab measurements of black bodies: over
> > >> what range in frequency have they been made and to what precision?
> > >
> > > The fact that they can measure deviations of the CMB
> > > from the ideal black body spectrum implies
> > > that they can verifiy the black body spectrum
> > > for a laboratory black black body to greater accuracy.
> > > (they use one for calibration, iirc)
> >
> > I'm pretty sure that any deviations are from the theoretical curve, not
> > from a lab measurement. Certainly the theoretical curve and the lab
> > measurements agree over the range in which they have been compared. But
> > at really high frequencies, the CMB signal isn't strong enough to
> > detect, and, as far as I know, no-one has measured at really high
> > frequencies in the lab either.
> >
> > > Asking about the high end tail is not very useful,
> > > for there will always be a higher point
> > > where it is not verified, so you can go on asking forever,
> >
> > That is true. But the original question was to what multiple of the
> > peak has it been measured in the lab?
>
> That is also not a very useful question,
> for there is no such thing as a 'black body' in the lab.
> A black body is an idealisation.
> So any observed deviations are going to be ascribed
> to the laboratory 'black body' not being ideal,
> rather than to errors in the theoretical black body formula,

That is another way of formulating the question. The theoretical curve
is understood. If there is any deviation for a physical black body,
then those deviations must be understood (and presumably usually or
always are). But at a frequency a few times higher than the peak
frequency, I doubt that ANY physical black body has been observed. IF
any deviation is seen, then one cannot just do some hand waving and say
that it is due to an imperfect black body; the deviation has to be
explained quantitatively. And if there is any deviation which is the
same for many different substances, then Occam's razor would suggest
that the theory is wrong, rather than several imperfect black bodies
showing the same deviation.

I don't have any reason to believe in such a deviation, but that is
different from knowing that it has been confirmed.
Douglas Eagleson
2021-01-27 23:24:22 UTC
On Saturday, January 23, 2021 at 12:50:10 PM UTC-5, Phillip Helbig (undress to reply) wrote:
> In article <d91462b6-bf4d-***@googlegroups.com>, Douglas Eagleson <***@gmail.com> writes:
>
> > On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
> >> Not much effort is put into confirming or refuting undisputed results or
> >> expectations, but occasionally it does happen. For example, according
> >> to theory muons are supposed to be essentially just like electrons but
> >> heavier, but there seems to be experimental evidence that that is not
> >> the case, presumably because someone decided to look for it.
> >>
> >> What about even more-basic stuff? For example, over what range (say,
> >> multiple or fraction of the peak wavelength) has the Planck black-body
> >> radiation law been experimentally verified? Or that radioactive decay
> >> really follows an exponential law? Or that the various forms (weak,
> >
> > given a single neutron creating a single radioisotope atom
> > the question becomes "can it never decay?" Meaning does
> > decay have a probability distribution.
> >
> > The rate of decay in an exponential function leads to a
> > non-converging function. I might submit that it is exponential,
> > but has a time variable called "last atom decayed".
> >
> > The natural existence of a characteristic decay rate implies
> > an atom set lifetime. Now a convergent?
> >
> > But, at some time the last atom.
> >
> > Given a set of atoms and a 100percent counting efficiency
> > will the number of counts ever equal the number of
> > atoms.
> >
> > basically needing mathematical solution. How to solve
> > this dilemma? I am still open on this question but
> > submit it as a version of the halving distances function
> > dilemma. "If you halve the distance to an object forever
> > do you ever finally reach the object?"
> >
> > Or attack it by doing axis or time transform.
> The probability that an atom decays is constant in time. That leads
> directly to a declining exponential function for the number of atoms
> which have not yet decayed. Of course, that is exactly true only in the
> limit of an infinite number of atoms. If the number becomes to small,
> then the noise in the function becomes large enough to obscure the
> behaviour in the limit. When you are down to one atom, it is still the
> case that the probability that it will decay is independent of time. So
> you have no idea when it will decay.

Sorry for the confusion.

The Wikipedia articles on T1/2 and exponential decay express my
concern. There is a well-stated law for large samples, i.e. large N.

That leaves the issue of the mean lifetime of an atom. I just need
to study the issue of mixed-sample kinetics. Also, would it not be
interesting to measure the mean lifetime of decay relative
to the time of atom production?
Jos Bergervoet
2021-01-29 21:32:25 UTC
On 21/01/26 12:02 AM, Douglas Eagleson wrote:
> On Monday, January 18, 2021 at 4:34:35 PM UTC-5, Jos Bergervoet wrote:
>> On 21/01/18 7:16 PM, Douglas Eagleson wrote:
>>> On Monday, January 4, 2021 at 4:49:11 AM UTC-5, Phillip Helbig (undress to reply) wrote:
>>>> Not much effort is put into confirming or refuting undisputed results or
>>>> ...
...
>>> given a single neutron creating a single radioisotope atom
>>> the question becomes "can it never decay?" Meaning does
>>> decay have a probability distribution.
>>
>> "Probability" is only required if you insist upon a "collapse"
>> of the state in QM. But that is now an almost untenable view.
>> If you just accept that the universe is a superposition of
>> different branches, as QM literally describes it, then there
>> is no randomness and "probability" will play no fundamental
>> role. You just will have the amplitude of one branch decaying
>> exponentially (and never becoming zero).
>>
>> NB: of course probability would still be a useful concept for
>> describing large collections of objects or events, just like
>> it was in classical physics, but no fundamental need for it
>> would exist.
>>
> I am an experimentalist btw. Well my interpretation of QM
> is Heisenberg's. It is a complete statement when all
> things are considered an abstract reservoir. Here is the
> meaning of superposition. I went so far to consider the
> abstract dam. And here is the meaning of all transformations
> being the outcome of QM tunneling. Is tunneling always
> probalistic or is it sometimes an analytic function.

Neither. It is *always* an analytical function! The wavefunction
gradually changes from one which only has a high amplitude in one
region to one with the high amplitude in the other region (at the
other side of the barrier). There is nothing probabilistic about
that, not even in the most hard-core Copenhagen picture. Even
there, the game of chance would only start when you 'measure'
what has happened to the wavefunction; the tunneling process
itself would still be governed by the analytical Schrodinger equation.
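
A minimal two-state sketch of that deterministic evolution (a
double-well model with an assumed tunneling matrix element Delta,
hbar = 1):

  import numpy as np

  Delta = 1.0
  H = np.array([[0.0, -Delta],          # basis: {left well, right well}
                [-Delta, 0.0]])

  eigvals, eigvecs = np.linalg.eigh(H)
  psi0 = np.array([1.0, 0.0])           # start entirely in the left well

  for t in np.linspace(0.0, np.pi / (2 * Delta), 5):
      # psi(t) = exp(-i*H*t) psi(0): smooth, unitary, nothing random.
      U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.T
      psi = U @ psi0
      print(round(t, 3), np.abs(psi) ** 2)

The occupation flows from left to right as sin^2(Delta*t), with no
probabilistic jump anywhere in the evolution.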

> The reservoir interpretation is a theorist's verbal
> communication.

Unclear where in the discussion we had a reservoir interpretation
(but I'm no fan of interpretations anyway, let's just stick to the
description as it is, given by the Hamiltonian!)

>>> ...
>>> The natural existence of a characteristic decay rate implies
>>> an atom set lifetime. Now a convergent?
>> I don't see how it necessarily "implies" that. It simply states
>> that the amplitude of the state with an excited atom gradually
>> decreases in the total superposition of the state of the
>> universe, while the that of the state with the decayed atom
>> increases.
> I was trying state the dichotomy of the non-convergent
> exponential decay function with a convergent decay.
> Given a set of atoms of a certain decay rate can you
> detect the decay of all the atoms?

I agree that the question is interesting, for several reasons.
The following three possibilities could perhaps be tested:

1)
It is all governed by the initial state of the decaying
atom, so slight deviations in its wavefunction from the ideal,
pure, excited state, will determine how long it takes before
the decay occurs. Much like when you put a pencil on its tip:
the amount of deviation from the pure vertical determines how
long it takes before it topples over.

2)
It is governed by external influences. Like the pencil
again, but now in a drafty room (or with vibrations in the
building) where those external influences create the slight
deviations and then it's back to the previous situation.

3)
Something inherently probabilistic is happening. Even if
the initial state is pure, and external influences are absent,
the decay will still occur.

If it is 1), then a special procedure, or special treatment
of the atoms (to make the excited state extremely pure, like
putting the pencil very close to vertical) should suppress
all 'quick-decay' cases.
If it is 2) then extra environmental disturbances should
lead to a quicker decay.
If it is 3) then the only proof for it would be to rule out
(without any loopholes) that it is 1) or 2).

Of course we know that in many cases 2) will occur, stimulated
emission can easily be shown. Also 1) is in fact observed as
the Quantum Zeno effect by which you can 'freeze' a system
for some limited time, so indeed the 'quick-decay' cases are
then suppressed.

Still there may be cases where it is in fact possible to show
experimentally that it can't be either 1) or 2). Personally I
wouldn't mind if such cases can *not* be found (and QM is
simply deterministic, and no interpretation or augmentation
is needed). But it's certainly something to search for..

> ...
> You do have to consider here the
> distinction of stochastic measure as opposed to non-stochastic
> decay constants.

We seem to agree on the thing to look at! (Although it could be
that you will perhaps prefer another outcome, but in science that
does not matter..)

--
Jos
p***@ic.ac.uk
2021-02-10 18:31:57 UTC
Jos Bergervoet <***@xs4all.nl> wrote:
> > And here is the meaning of all transformations
> > being the outcome of QM tunneling. Is tunneling always
> > probalistic or is it sometimes an analytic function.

> Neither. It is *always* an analytical function! The wavefunction
> gradually changes from one which only has a high amplitude in one
> region to one with the high amplitude in the other region (at the
> other side of the barrier). There is nothing probabilistic about
> that, not even in the most hard-core Copenhagen picture.

What sorts of things are called "tunneling" is often a matter
of usage; and my experience differs. Whilst doing my PhD,
for example, I had cause to make a clear distinction between
"coherent tunneling" of the kind you describe, and other
tunneling between two states, which *was* statistical, and
driven by quantum noise (see e.g. doi:10.1103/PhysRevA.40.4813
or doi:10.1103/PhysRevA.43.6194).

Now it might be that you are horrified that such processes
could be called "quantum tunneling", but to people working
in the area, it was unremarkable. Not all terminology is
always used in the same way.


#Paul
Jos Bergervoet
2021-02-10 20:23:03 UTC
On 21/02/10 7:31 PM, ***@ic.ac.uk wrote:
> Jos Bergervoet <***@xs4all.nl> wrote:
>>> And here is the meaning of all transformations
>>> being the outcome of QM tunneling. Is tunneling always
>>> probalistic or is it sometimes an analytic function.
>
>> Neither. It is *always* an analytical function! The wavefunction
>> gradually changes from one which only has a high amplitude in one
>> region to one with the high amplitude in the other region (at the
>> other side of the barrier). There is nothing probabilistic about
>> that, not even in the most hard-core Copenhagen picture.
>
> What sorts of things are called "tunneling" is often a matter
> of usage; and my experience differs. Whilst doing my PhD,
> for example, I had cause to make a clear distinction between
> "coherent tunneling" of the kind you describe, and other
> tunneling between two states, which *was* statistical,

I'm pretty sure you cannot prove that!

> and
> driven by quantum noise (see e.g. doi:10.1103/PhysRevA.40.4813
> or doi:10.1103/PhysRevA.43.6194).

Then the question is which things you call "quantum noise". If
you just mean all the degrees of freedom of the surroundings
then it is still deterministic quantum mechanical time evolution,
so that is not what I would call statistical. Likewise, if it is
determined by very fine details of the initial state then it is
again not statistical, at least not in the "playing with dice"
sense. Those things are merely intractable (and of course in
that sense can be called statistical).

So did you have proof that there exist cases where tunneling
(or anything that happens to a quantum state) can *not* be
explained by the initial state and the coupling to surroundings?

> Now it might be that you are horrified that such processes
> could be called "quantum tunneling", but to people working
> in the area, it was unremarkable. Not all terminology is
> always used in the same way.

Still I don't think this is just a discussion about terminology.
It's the old question whether QM is deterministic or not! And
if you can give proof that it isn't, I won't be too horrified,
just very surprised. Actually, the deterministic aspect is of
course horrifying in its own way.. :-)

>
> #Paul

--
Jos
Jay R. Yablon
2021-02-11 10:58:33 UTC
On Friday, February 5, 2021 at 9:30:43 PM UTC-5, ***@gmail.com wrote:
> On Sunday, January 31, 2021 at 10:42:46 AM UTC-5, ***@gmail.com wrote:
> > On Thursday, January 28, 2021 at 6:29:59 AM UTC-6, Jay R. Yablon wrote:
> >
> > > My question is simple: Does anybody know if anybody has ever looked for
> > > some sort of cutoff at the UV end of the blackbody spectrum, and if so,
> > > what did they find?
> > >
> > > [Moderator's note: That is a good question. A brief web search turned
> > > up nothing. :-( -P.H.]
> > The real issue here is motivation. Doing an experiment has cost, in
> > time and money. A professional scientist must try to maximize his/her
> > impact while minimizing cost and time. If you don't generate results
> > that people are interested in you don't make tenure and you don't get
> > the research grants.
> >
> > So there has to be some reason to expect an interesting result. That
> > means either a totally unexpected result or one that confirms or
> > disproves a theory that many people are interested in. You can spend an
> > entire career testing limit cases in well established theories and never
> > get an interesting result. That would be a formula for a short and
> > pointless career. Unless you are very lucky.
> >
> > What people do is use current theories and problems/questions to guide
> > their research. This makes perfect sense. You don't look for gold in
> > Antarctic glaciers, you look in the same sorts of rock formations that
> > others have found gold.
> >
> > Of course there is nothing stopping you from looking wherever you like
> > for an interesting result. You might be lucky, and then be famous. But
> > don't expect generous funding for your experiments.
> >
> > Rich L.
> Try the Optics Group at the National Institute of Standards and
> Technology (NIST). Have them review your experiment and
> have them positively recommend applying for a research grant.
> They do allow retrial of old experiments.
>
> Basically think big. For example: Design a compound radiation
> detector. A single-photon crystal detector with an implanted
> temperature sensor. Here your sensor is a blackbody also.

Rich L. is exactly right: The real issue here is motivation. Let me
suggest one motivation:

Planck's Law (https://en.wikipedia.org/wiki/Planck%27s_law) applies to a
perfect blackbody which does not occur in nature, but is only an
idealization for what occurs in the natural world. Physical observations
of blackbody radiation are at best close approximations. So, let's work
with those close approximations.
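
For reference, the idealized law in question, in the standard form given
on the Wikipedia page just cited, is

  B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5}
                          \frac{1}{e^{h c / (\lambda k_B T)} - 1},

with no cutoff anywhere: the exponential tail extends to arbitrarily
short wavelengths.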

What Hawking discovered (https://en.wikipedia.org/wiki/Hawking_radiation)
based on Bekenstein
(http://www.scholarpedia.org/article/Bekenstein-Hawking_entropy) is
that perfect black holes emit the same Planck radiation spectrum as
perfect blackbodies. But if you listen to Susskind's lecture
(https://www.cornell.edu/video/leonard-susskind-2-black-holes-conservation-of-information-holographic-principle)
at about 32 minutes (or refer to another source which makes similar
points), it is clear that photons above a certain cutoff energy, near a
black hole, will be captured by the black hole and unable to escape to
be seen by a distant observer, while others below that energy will
bounce off and will be able to escape.

If you want to find this energy boundary, you can use a particle in a
box (https://en.wikipedia.org/wiki/Particle_in_a_box) approach, whereby
a photon particle with a wavelength smaller than the Schwarzschild
diameter of the black hole box will become trapped, while a photon with
a larger wavelength will escape and so can be observed from afar. If you
do the calculation for a black hole, you will find that this boundary
occurs at slightly longer than 1/8 of the Wien peak wavelength. So, for
black holes, there is an ultraviolet (UV) cutoff in the Planck spectrum,
which we can ascribe physically to the black hole gravitational field
holding back the highest energy photons. In fact, Susskind as referenced
above uses this approach to derive the temperature of Hawking radiation
from Bekenstein's black hole relation.
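
As a quick sanity check on that figure (my own back-of-envelope numbers,
not taken from Susskind's lecture), note that the ratio of the Wien peak
of the Hawking spectrum to the Schwarzschild diameter is independent of
the black hole mass:

import math

G, hbar, c, kB = 6.674e-11, 1.0546e-34, 2.9979e8, 1.3807e-23  # SI values
b = 2.8978e-3                       # Wien displacement constant, m*K

M = 1.0                             # any mass; it cancels in the ratio
T_hawking = hbar * c**3 / (8 * math.pi * G * M * kB)
lam_peak = b / T_hawking            # Wien peak of the Hawking spectrum
d_schw = 4 * G * M / c**2           # Schwarzschild diameter, 2 r_s
print(lam_peak / d_schw)            # ~7.95

Algebraically the ratio is 2 pi b kB / (hbar c) ~ 7.95, which is where
the "slightly longer than 1/8 of the Wien peak" figure comes from.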

The question then arises whether the same cutoff exists for an *ordinary
blackbody*, which is *not* a black hole (so far as we know based on
present theory). There are two possible answers: yes or no.

If no, then perfect black holes and perfect blackbodies emit the same
spectrum above ~1/8 of the Wien peak wavelength, but do NOT emit the
same spectrum at shorter wavelengths. Rather, blackbodies still have a
spectrum over this domain, while black holes do not. Black holes revert
over this high-UV domain to being truly black. This now breaks the
spectral identity between blackbodies and black holes at very short
wavelengths / high (UV) energies.

If yes, then the spectral identity between black holes and ordinary
blackbodies remains intact over the entire spectrum domain. But, if yes,
then we have to explain how the statistical thermodynamics underlying an
ordinary blackbody spectrum can give rise to such a UV cutoff without
the *apparent* involvement of black holes to trap photons with
wavelengths shorter than the black hole Schwarzschild diameter.

Jos Bergervoet makes the very astute observation that eventually we *do*
expect deviations. The black-body curve at least might have deviations
at the Planck energy, but why not earlier? Let's flip this a bit: We
know that the fluctuations at the Planck energy are so dense (Wheeler
1957, 1962) that the Planck vacuum will be filled with a sea of
ultra-tiny black holes. Accordingly, these will emit Hawking blackbody
radiation and there *will* be a UV cutoff because the entire vacuum
across the whole sea of fluctuations acts as one omnipresent
photon-trapping box. The question is whether we can ever observe this
from where we sit in the natural order.

As to this question: when we observe an ordinary blackbody, what we are
really doing is emptying out a cavity as best we can, then heating that
empty space using the experimental tool of a physical housing
surrounding the cavity, and observing the spectrum coming out of a hole
in the wall of the housing. So, we are really observing the Planck vacuum
by probing it with thermal energy, but at temperatures many orders of
magnitude removed relative to what would be the intrinsic temperature of
that vacuum. This remoteness of our observation simply damps the spectral
curve of the vacuum down to lower temperatures, because of redshifting
and screening effects.

So, if we were to observe this cutoff in an ordinary blackbody, there
would appear to be *no other explanation* for this but that we are
observing the Planck vacuum from a relativistically very remote frame of
reference. And, for all we do not know about quantum gravity, what we
*do* know is that the Planck vacuum is a place where quantum gravity
*does* come into play. So, by observing such a cutoff, we would for the
first time be observing a phenomenon directly rooted in and attributable
to quantum gravity.

Motivation: check.

Next, over to Douglas Eagleson's suggestion to try the NIST optical
group, which I suspect would not be averse to bagging the first
experimental observation of a quantum gravitational effect.

PS: I will also note without elaboration unless someone asks for it, that
one can combine the Bekenstein bound
(https://en.wikipedia.org/wiki/Bekenstein_bound) with the Wien
displacement law
(https://en.wikipedia.org/wiki/Wien%27s_displacement_law) to
deductively arrive at the above black hole-based cutoff near 1/8 of the
Wien peak. This is a second motivation which independently supports the
first motivation detailed above.
p***@ic.ac.uk
2021-02-19 08:45:35 UTC
Permalink
Jos Bergervoet <***@xs4all.nl> wrote:
> > What sorts of things are called "tunneling" is often a matter
> > of usage; and my experience differs. Whilst doing my PhD,
> > for example, I had cause to make a clear distinction between
> > "coherent tunneling" of the kind you describe, and other
> > tunneling between two states, which *was* statistical,

> I'm pretty sure you cannot prove that!

I presume you are not actually asking me to prove my experience
as a grad student actually existed. :-)

Any other relevant proof - such as it is - could have been fairly
easily found by following the doi's (and references therein) in my
post. So, in answer, what I might claim to be "pretty sure" of is
not an opinion, but actually derivations you can go check. Feel
free to raise any queries (or disagreements) here and I'll
try to answer them.



#Paul
Jos Bergervoet
2021-02-20 21:21:53 UTC
Permalink
On 21/02/19 9:45 AM, ***@ic.ac.uk wrote:
> Jos Bergervoet <***@xs4all.nl> wrote:
>>> What sorts of things are called "tunneling" is often a matter
>>> of usage; and my experience differs. Whilst doing my PhD,
>>> for example, I had cause to make a clear distinction between
>>> "coherent tunneling" of the kind you describe, and other
>>> tunneling between two states, which *was* statistical,
>
>> I'm pretty sure you cannot prove that!
>
> I presume you are not actually asking me to prove my experience
> as a grad student actually existed. :-)

No, the only thing that would help is to explain what your sentence
meant by 'statistical'.

>
> Any other relevant proof - such as it is - could have been fairly
> easily found by following the doi's (and references therein) in my
> post. So, in answer, what I might claim to be "pretty sure" of is
> not an opinion, but actually derivations you can go check.

If your claim is to have settled the dispute whether QM is deterministic
or stochastic, then this should have been common knowledge by now (I
think that whoever can give a proof either way will be the most famous
physicist of the century!) It is just not clear if that is what your
sentence intended to say.

> Feel
> free to raise any queries (or disagreements) here and I'll
> try to answer them.

If you really claim to have the answer to the dispute mentioned, there
are other people much more qualified than me to challenge you (and I'm
sure they will). If, on the other hand, you merely mean it is intractable
due to many dependencies on initial- and boundary conditions, then it
was just not addressing the point in my post you responded to, where I
wrote that the QM description of a tunneling process is deterministic.

So you first need to clarify whether you actually disagree with me
on that (by clarifying 'statistical') before I can raise any queries.

>
> #Paul

--
Jos
p***@ic.ac.uk
2021-02-21 20:01:34 UTC
Permalink
Jos Bergervoet <***@xs4all.nl> wrote:
> So you first need to clarify whether you actually disagree with me
> on that (by clarifying 'statistical') before I can raise any queries.

We are at cross purposes; I was reporting a usage of the terminology
"quantum tunneling" with an explicitly statistical meaning. I was not
making a claim about the fundamental properties of quantum mechanics.

If you want to dispute the sense of the usage I reported, and (e.g.)
claim that it is not statistical, then I have provided perfectly
adequate references that specify the model; and using which you can
pick apart the mathematics and physics if you so desire. But the usage
exists, whether you like it or not, and whether you personally think
it suitable or not.

#Paul
George Hrabovsky
2021-02-25 08:20:27 UTC
Permalink
On Saturday, February 20, 2021 at 3:21:56 PM UTC-6, Jos Bergervoet wrote:
> On 21/02/19 9:45 AM, ***@ic.ac.uk wrote:
> > Jos Bergervoet <***@xs4all.nl> wrote:
> >>> What sorts of things are called "tunneling" is often a matter
> >>> of usage; and my experience differs. Whilst doing my PhD,
> >>> for example, I had cause to make a clear distinction between
> >>> "coherent tunneling" of the kind you describe, and other
> >>> tunneling between two states, which *was* statistical,
> >
> >> I'm pretty sure you cannot prove that!
> >
> > I presume you are not actually asking me to prove my experience
> > as a grad student actually existed. :-)
> No, the only thing that would help is to explain what your sentence
> meant by 'statistical'.
> >
> > Any other relevant proof - such as it is - could have been fairly
> > easily found by following the doi's (and references therein) in my
> > post. So, in answer, what I might claim to be "pretty sure" of is
> > not an opinion, but actually derivations you can go check.
> If your claim is to have settled the dispute whether QM is deterministic
> or stochastic, then this should have been common knowledge by now (I
> think that whoever can give a proof either way will be the most famous
> physicist of the century!) It is just not clear if that is what your
> sentence intended to say.
> > Feel
> > free to raise any queries (or disagreements) here and I'll
> > try to answer them.
> If you really claim to have the answer to the dispute mentioned, there
> are other people much more qualified than me to challenge you (and I'm
> sure they will). If, on the other hand, you merely mean it is intractable
> due to many dependencies on initial- and boundary conditions, then it
> was just not addressing the point in my post you responded to, where I
> wrote that the QM description of a tunneling process is deterministic.
>
> So you first need to clarify whether you actually disagree with me
> on that (by clarifying 'statistical') before I can raise any queries.
>
> >
> > #Paul
>
> --
> Jos
This last post contains a common misconception, and is almost a
straw-man kind of argument. The rules of quantum mechanics actually
allow you to calculate the probability distributions from which the
results of measurements are taken. These are completely precise to
our ability to measure. Just because the results are probabilistic
(not statistical) does not mean the predictions cannot be made
precisely. What is determined is a distribution rather than a number.
Here is the misconception: the prediction does not allow any more
precise calculation than the distribution--it does not allow you to
know the actual number being measured.
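
A two-outcome toy example (the amplitudes here are assumed purely for
illustration) makes the point:

import numpy as np

psi = np.array([3/5, 4j/5])         # assumed normalized amplitudes
probs = np.abs(psi)**2              # Born rule: exactly [0.36, 0.64]
print("predicted distribution:", probs)

rng = np.random.default_rng(0)
samples = rng.choice(2, size=10_000, p=probs)
print("observed frequencies:", np.bincount(samples) / samples.size)

The distribution is predicted exactly, and the observed frequencies
converge to it, yet nothing in the calculation tells you which outcome
any single run will give.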

In addition, Gerard 't Hooft has recently suggested the beginnings
of a theory positing that quantum mechanics can be founded upon a
deterministic basis drawn from cellular automata theory. While this
is by no means settled, the fact that it is not automatically insane
makes it worth a bit of study. I do not find the 't Hooft arguments
compelling, but I cannot dismiss the results out of hand.

George
Jos Bergervoet
2021-02-27 11:11:43 UTC
Permalink
On 21/02/25 9:20 AM, George Hrabovsky wrote:
> On Saturday, February 20, 2021 at 3:21:56 PM UTC-6, Jos Bergervoet wrote:
>> On 21/02/19 9:45 AM, ***@ic.ac.uk wrote:
>>> Jos Bergervoet <***@xs4all.nl> wrote:
...
>> ... If on the other hand, you merely mean it is intractable
>> due to many dependencies on initial- and boundary conditions, then it
>> was just not addressing the point in my post you responded to, where I
>> wrote that the QM description of a tunneling process is deterministic.
>>
>> So you first need to clarify whether you actually disagree with me
>> on that (by clarifying 'statistical') before I can raise any queries.
>
> This last post contains is a common misconception,

Which one is that? I'm merely asking a question: whether 'statistical'
was meant as intractable due to many dependencies or as an inherent
stochastic mechanism in the laws of nature..

> ... and is almost a
> straw-man kind of argument.

You mean that my requirement for proof (of either of the two options
mentioned above) is inappropriate?

> The rules of quantum mechanics actually
> allow you to calculate the probability distributions from which the
> results of measurements are taken. These are completely precise to
> our ability to measure. Just because the results are probabilistic
> (not statistical)

What do you mean by "probabilistic (not statistical)"? Do you mean
intractable due to many dependencies? Or an inherent stochastic
mechanism in the laws of nature? That remains just as unanswered
as it was..

> ... does not mean the predictions cannot be made precisely. What
> is determined is a distribution rather than a number.

A distribution of complex amplitudes. In Maxwell theory the E-field
was also a distribution (of real numbers as a function of space). So
nothing essentially new here, those amplitudes may be all that exists..

> ... Here is the
> misconception: the prediction does not allow any more precise
> calculation than the distribution--it does not allow you to know
> the actual number being measured.
> ...

Why do you believe there is an 'actual number'? If the distribution
of complex amplitudes is the full description of nature, then we do
not need that. And 'being measured' does not help, we can describe
everything by the complex amplitude distribution of QM: experimental
equipment, the experimenter, his friends, the books he writes..
There will never be a moment when your "actual number" would be
needed. Also the phrase "probability distribution" for the amplitudes
is then misleading, since there never is any mechanism in the laws of
physics that uses those amplitudes to generate things with certain
probabilities.

It seems that you do not want to accept this description of physics,
but then the burden of proof is on you! Note that I do not claim that
it is true, I only point out that it is possible. If you want to deny
that, you have to give proof, and until now all experiments have failed
to do that.

The initial attitude of a century ago was that there may be a wave
function describing the electron in hydrogen but that there should
still be a "true" position as well; that the theory was just
"incomplete" by not yet describing that additional part; and that for
macroscopic systems it would become "obvious" that wave functions are
not the full description. None of that has ever been proven. On the
contrary: experiments with bigger and
bigger molecules are still consistent with the other view, that the
wave function is the full description (which of course is then a big,
entangled wave function of many degrees of freedom.. So where's your
proof that this cannot be the full description of physics?)

--
Jos