
Research, Volume: 13(11)

The physics of a universe with a variable Planck constant

*Correspondence:
En Okada
Independent Researcher, Japan
E-mail: enokada1985@gmail.com

Received date: Nov-8-2024, Manuscript No. tsse-24-156225; Editor assigned: Nov-10-2024, Pre-QC No. tsse-24-156225 (PQ); Reviewed: Nov-12-2024, QC No. tsse-24-156225 (Q); Revised: Nov-14-2024, Manuscript No. tsse-24-156225 (R); Published: Nov-30-2024, DOI: 10.37532/2320-6756.2024.13(11).372

Citation: Okada E. The Physics of a Universe with a Variable Planck “Constant”. J Space Explor. 2024;13(11):372.

Abstract

I propose a novel theoretical paradigm in which all physical realities can be concretely defined by the degree of spontaneous symmetry breaking in a binary field, providing an alternative interpretation of the Higgs mechanism. Together with a newly proposed hypothesis that the Planck constant evolves in accordance with the cosmic scale factor, which drives an evolution of the mass and electric charge of elementary particles, my model solves all the hierarchy problems of theoretical physics in one shot, demystifying all four fundamental interactions as different aspects of a single consistent story.

Keywords

Physics beyond standard model; Super unified theory

Introduction

I would like to dedicate my paper to Dr. John Archibald Wheeler, the man who conceived the slogans “It from bit” and “Law without law”. He was undoubtedly the figure who came closest to reaching exactly the same conclusions, 40 years ahead of me [1-5]. It is no exaggeration to say that this paper, from start to finish, reaffirms the greatness of his insights, which sharply hit the deepest truths of our mother nature. Wheeler was right to realize that all kinds of existence can only be defined in contrast to non-existence, and therefore all physical realities can be reduced down to a collection of binary choices between 0 and 1 of supreme and ultimate abstraction. However, it is not the bit information carried in the binary digits, but instead the degree of asymmetry of a digital field, that defines and gives birth to all physical realities. Suppose a field composed of discrete “atoms” or “cells” that can each take a value of either +1 or −1 (namely two opposite states) with equal probability (1/2 each), and that adjacent cells “annihilate”, erasing both of their values back to zero. The well-established mathematics of Bernoulli trials tells us that we can reasonably expect a surplus favoring one side over the other, whose members cannot find partners for annihilation, with a magnitude proportional to the square root of the total number of cells. Let us arbitrarily denote the surplus side as 1, hereafter. A field with perfect symmetry, namely one in which all of its constituent cells (“spatial quanta”, hereafter) take the value 0 instead of 1, has nothing existent in it. We physicists do not have to worry about such a deadly quiet universe with no subjects at all, and thus no room for physics in the first place. “Why does such a symmetry breaking have to occur?” is a question of interest rather to theologians. Therefore, it is safe enough for us to take the occurrence of such a spontaneous symmetry breaking for granted, instead of recklessly stepping into their sanctuary (FIG. 1).
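As a minimal illustration of the square-root surplus, a toy simulation of my own (not part of the original derivation) samples such a ±1 field directly:

```python
import random

# Toy sketch of the Bernoulli-surplus argument: N cells independently take +1 or
# -1 with probability 1/2; after pairwise annihilation the leftover surplus is
# |#(+1) - #(-1)|, whose average magnitude grows like sqrt(N).
def average_surplus(n_cells, trials=500):
    total = 0
    for _ in range(trials):
        s = sum(random.choice((+1, -1)) for _ in range(n_cells))
        total += abs(s)
    return total / trials

for n in (64, 1024, 16384):
    print(n, round(average_surplus(n), 1), round((2 * n / 3.14159) ** 0.5, 1))
    # the simulated surplus tracks sqrt(2N/pi), i.e. roughly 0.8 * sqrt(N)
```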


Figure 1: Illustration of the discrete digital field model with spatial quanta (cells) taking values of +1 or −1 with equal probability. Adjacent cells annihilate each other, leaving behind a surplus of one state (1) that defines the asymmetry of the field. This surplus serves as a fundamental contributor to the formation of physical realities through the process of spontaneous symmetry breaking.

It is essentially equivalent to regard the stochastic surplus as a probabilistic “flipping” of a spatial quantum from the ground state to an excitation state, which is more convenient for our discussion hereafter. The field should have averaged 3-D, 2-D and 1-D probability densities of quanta flipping from 0 to 1; namely, how many additional spatial quanta, on average, we must search through in 3-D, 2-D or 1-D space respectively before we hit another 1 as the closest neighbor of the existing 1 in question.

With the 1-D probability density, we can quantify the degree of symmetry breaking in any specific area of the field, based on the mathematics of the exponential distribution. No matter how complicated a specific configuration might be, and regardless of the number of spatial dimensions, it can ultimately be broken down into a collection of bilateral pairs of 1 (“quanta pairs” hereafter).

The mathematics of the exponential distribution tells us that the distance (“length”, hereafter, which is measured in multiples of a unit length as will be calculated later) between any two spatial quanta shall have an expectation of L and an upper cumulative probability of P(R ≥ r) = e^(−r/L).

We can make use of this feature to quantify the “rareness” of a specific quanta pair out of the total population, as the asymmetry the pair has added to the field. By further taking the natural logarithm of that cumulative probability, a manipulation that qualitatively resembles the calculation of entropy, we obtain a linear function of R: the larger, the rarer.
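For concreteness, the entropy-like step can be written out explicitly (in my notation, using the exponential tail quoted above):

\[
-\ln P(R \ge r) \;=\; -\ln e^{-r/L} \;=\; \frac{r}{L},
\]

a quantity that grows linearly with the pair length, so that a longer (rarer) pair contributes more asymmetry.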

The extreme abstractness of the flipped quanta assures that all of them equally take the state 1 (there are no values such as 1.5 or 2 or π). Therefore, length is the only variable that can distinguish the pairs, since every pair has two flipped quanta alike. With a measure proportional to the length, it is now quite natural to define another measure which is inversely proportional to the length. Let us assign coefficients to them as below, somewhat all of a sudden, though of course they did not come out of nowhere.

As we shall shortly see from the evidence supporting my hypothesis, E(R) and T(R) are exactly the concrete definitions of time and energy respectively. We have been using the notions of time and energy totally intuitively, without noticing the profound meanings behind them. Moreover, it is exactly this probability density of quanta flipping that quantum mechanical wave functions describe. The reason why the squared amplitude of a quantum mechanical wave function gives the probability density to detect a Fermion is rightly because Fermions are defined by two flipped quanta: squaring means that two flips occur simultaneously in the vicinity of a specific locus. After all, we cannot define the degree of asymmetry with a singular spatial quantum; a pair is the minimal and most reasonable unit to focus on. Given that the energy of a pair is defined proportional to the inverse of its length, the force acting between the quanta, from its definition as the derivative of energy with respect to distance, must be proportional to the inverse square of the length, regardless of the number of spatial dimensions. It is this very nature of the interaction (force) between spatial quanta, namely being generally proportional to the inverse square of distance, that dictates that only a 3-dimensional field can stably and self-consistently exist, rather than the converse logic that has long been wrongly believed.

The negative sign shows that the force is universally attractive. As the final result of such attraction, we may naturally expect a situation in which two quanta are back-to-back, forming a “binary star system” with a diameter twice the expanse of a spatial quantum. The diameter of spatial quanta (2R̂), which serves as the minimal length unit in our binary field, can be reasonably calculated by supposing that when two spatial quanta are brought within a spherical region of diameter 2R̂, they will instantly form a mini-black hole according to our definition of energy. (The situation illustrated below does not really occur; it is only hypothetical, in order to calculate R̂.) (FIG. 2).


Figure 2: The figure depicts a theoretical scenario to calculate the minimal length unit (2R̂) of the spatial quanta, where the quanta are positioned back-to-back. When brought within the defined region, these quanta would theoretically form a mini-black hole, serving as the basis for the calculation of the spatial quantum's diameter. The scenario shown is hypothetical and does not occur in practice.

From now on, 2R̂ serves as the yardstick in our binary field, together with a series of quantized masses of the Planck scale, corresponding to discrete states that can only have diameters in multiples of 2R̂.

Particle physics

The hierarchy gap between the magnitudes of gravity and the electromagnetic force, in the extreme case, namely for electron-electron interactions, is (e²/4πε₀)/(G mₑ²) ≈ 4.16 × 10⁴².
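This figure is easy to reproduce numerically; the following is a short check of my own, using CODATA-style constants:

```python
import math

# Electrostatic vs gravitational force between two electrons: the r^2 cancels,
# so the hierarchy gap is (e^2 / 4*pi*eps0) / (G * m_e^2).
e, eps0 = 1.602176634e-19, 8.8541878128e-12
G, m_e  = 6.67430e-11, 9.1093837015e-31

gap = (e**2 / (4 * math.pi * eps0)) / (G * m_e**2)
print(f"hierarchy gap ~ {gap:.3e}")              # ~4.17e42
print(f"square root   ~ {math.sqrt(gap):.3e}")   # ~2.04e21, reappears later as L
```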

On the other hand, the aforementioned binary star system shall have a diameter of 4R̂ and thus a corresponding combined mass, where the two spatial quanta shall rotate at a velocity of c/4 according to a simple Newtonian calculation.

Suppose there is a tiny “seed” in each of the “mass lumps”, upon which a much stronger repulsive force (whose magnitude exactly equals that of electromagnetism) acts to balance the universal gravitation between the spatial quanta. Then we may find that the mass of the seeds, m̃, is close enough to the electron mass mₑ inversely adjusted by the Lorentz factor of c/4, which is too unlikely to be just a coincidence (FIG. 3).


Figure 3: The repulsive force is assumed to have a magnitude equivalent to that of electromagnetism. The figure depicts how the mass of the seeds (m̃) is found to closely approximate the mass of the electron (mₑ), adjusted by the Lorentz factor of c/4, suggesting a remarkable correspondence between the two.

(The reason why our detectable rest mass of the electron needs an inverse Lorentz adjustment from m̃ will be revealed in later chapters.) Though I am unable to specify the exact physics behind the calculation with enough confidence at this stage, it strongly implies that electrons (and thus positrons as well) are highly likely to be churned out from such an equilibrium, and that the origin of electric charges as vectorial physical properties is highly likely to be a reflection of the rotational motion of spatial quanta within the binary star system. If the three spatial dimensions are not perfectly homogeneous in nature, then there will be two intrinsically different permutations, as shown below. (Actually, there is nothing that dictates that the three spatial dimensions must be homogeneous, and such a heterogeneity is highly likely to be the very origin of the three generations of elementary particles and the cause of P-symmetry breaking in the weak interaction, as we will see later.) This very feature provides us with a unique way to define two distinct modes of rotation whose axis may be arbitrarily tilted in three-dimensional space. (For example, the permutation can be defined by first taking a unit vector pointing to the north pole of the rotation axis, in whose eyes the rotation looks anti-clockwise, and then ordering X, Y, Z by the coordinates of the unit vector in descending order (FIG. 4).)


Figure 4: The figure suggests that this heterogeneity may be responsible for the origin of the three generations of elementary particles and the P-symmetry breaking in weak interactions.

As we shall see later, the magnitude gap between gravity and the electromagnetic force is nothing but a reflection of the very fact that the current 1-D probability density of quanta flipping is 1/(2.04 × 10²¹), the inverse square root of 4.16 × 10⁴². In other words, the L in the aforementioned exponential distribution is currently 2.04 × 10²¹.

Moreover, as will be further addressed in the cosmology part, this figure evolves with the cosmic scale factor (or the age of our universe), which in turn provides an astoundingly beautiful and persuasive solution to all of the so-called hierarchy problems. Note that the hierarchy gap I adopted here was actually the extreme case, namely between electron-electron interactions, instead of electron-proton or proton-proton interactions. Similar calculations do hold in the latter two cases; however, as we shall see shortly, it is nothing but the hierarchy gap of electron-electron interactions that serves as the key to unveil the secret behind gravity and electromagnetism, just as I could never have found the astounding relationship between the Planck scaled physical quantities and those of elementary particles as long as I paid attention only to the Planck length and mass without the critical divisor √2.

As mentioned earlier, the flipped spatial quanta can be regarded as unannihilated surpluses after a series of Bernoulli trials choosing one out of two opposite states with 50%:50% probability. Keeping in mind that the increase of the total number of spatial quanta within our universe is a gradual process, layer by layer as the cosmological event horizon expands, let us see how the probability of quanta flipping may evolve over time. If the size of spatial quanta remains unchanged throughout the history of the universe, on the surface of the event horizon of each cosmic age there should be spatial quanta in a number proportional to the square of the cosmic radius, and thus the expected surplus shall be proportional to the radius. An integral along the radius gives a total number of flipped quanta proportional to the square of the cosmic radius; thus the averaged 3-D probability density of quanta flipping shall be inversely proportional to the cosmic radius (square divided by cube), and the 1-D probability density shall be inversely proportional to the cubic root of the cosmic radius.

We may take another approach. Suppose there is a primordial wave propagating at the speed of light, all the way from the singularity (we now should say a spatial quantum, instead of a point without volume), which literally pioneers the frontier of the universe. The amplitude of this wave in turn governs the 3-D probability density of quanta flipping. Whichever explanation you may prefer, either by the conservation of energy or by the mathematical property of the three-dimensional Laplacian, the amplitude of the wave shall be inversely proportional to the distance from its origin. If we take the extension of the frontier of the wave as the expansion of space, we may find that the effect of space expansion in damping the amplitude of the wave rightly makes ends meet, such that the probability density settles to exactly the same magnitude everywhere in the universe. This wave can be regarded as the Master or Mighty or Mother wave for all subsequent wave functions of specific particles. I shall revisit the essence of such a wave when the time is ripe. (As we will see later, the size of spatial quanta does change over time, such that the 1-D probability density of quanta flipping is actually inversely proportional to the cosmic radius. Nevertheless, the above logic is qualitatively true, demonstrating that the probability density of quanta flipping should evolve with the cosmic age.)

Next, let me present some interesting calculations that may strongly support my hypothesis, albeit with less clarity in their respective physical images at this stage.

Mean lifetime of free neutron

Taking the figure τbeam ≈ 880 s measured by the so-called beam method, it shall hardly be a coincidence that

where m_n is the rest mass of the neutron and m̃_e is the relativistic electron mass adjusted by the Lorentz factor of c/4. This calculation implies that β-decay (and maybe the weak interaction in general) might be a phenomenon that is rightly characterized by the stochasticity of the spontaneous symmetry breaking in the binary field. Moreover, there is a well-known conundrum that the mean lifetime of free neutrons measured by the so-called bottle method is τbottle ≈ 887 s, a gap that cannot be explained within the range of experimental error. I noticed that this gap can be well enough approximated by the Lorentz factor of c/8,

a velocity we may obtain supposing two quanta are rotating with a diameter of 8R̂ instead of 4R̂ (FIG. 5).
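The arithmetic behind this claim is quick to verify; here is my own check, using the two lifetimes as quoted in the text:

```python
import math

gamma_c4 = 1 / math.sqrt(1 - (1/4)**2)   # Lorentz factor at v = c/4
gamma_c8 = 1 / math.sqrt(1 - (1/8)**2)   # Lorentz factor at v = c/8

tau_beam, tau_bottle = 880.0, 887.0      # lifetimes as quoted above, in seconds
print(f"gamma(c/4)          = {gamma_c4:.5f}")                # ~1.03280
print(f"gamma(c/8)          = {gamma_c8:.5f}")                # ~1.00790
print(f"tau_bottle/tau_beam = {tau_bottle / tau_beam:.5f}")   # ~1.00795
```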


Figure 5: Illustration of the relationship between the rest mass of the neutron (m_n) and the relativistic electron mass (m̃_e) adjusted by the Lorentz factor of c/4. This figure suggests that β-decay, and possibly the weak interaction in general, may be characterized by the stochasticity of spontaneous symmetry breaking in the binary field. Additionally, it addresses the conundrum of the neutron's mean lifetime (τbottle ≈ 887 s), proposing that this gap can be approximated by the Lorentz factor of c/8, corresponding to the rotational velocity of quanta with a diameter of 8R̂.

It may have something to do with the participation of the two additional quanta, which cannot get closer than a certain distance due to the existing stand-by free neutron. The case may be that in the bottle method the focus is on those undecayed stand-by neutrons without additional quanta, while in the beam method the focus is on the decayed neutrons, thus with the disturbance from the additional quanta with velocity c/8, which may in turn bring an extra mass into the system and prolong its lifetime. In short, the mysterious discrepancy may be caused by the fact that we were actually observing two slightly but intrinsically different phenomena.

Mean lifetime of the Higgs boson

As the latest figure, τHiggs = 2.1(+2.3/−0.9) × 10⁻²² s agrees well enough with the calculation below, which has a clear similarity to the case of the free neutron.

This calculation strongly suggests that the Higgs mechanism could be exactly a rephrasing of the spontaneous symmetry breaking of the binary field. Next, let me move on to the implications of my hypothesis for the strong force. In particular, I will first mathematically calculate the masses of the up quark and down quark, and then reveal the physics behind the color charges, together with the true mechanism of asymptotic freedom and the so-called quark confinement. As mentioned earlier, the calculation that implied the possible mechanism of electron-positron pair production applies to proton-antiproton pair production as well, by adopting the hierarchy gap of proton-proton interactions. This fact implies a mass-independent general relationship, which can be put into the equation below.

It means that the electromagnetic repulsion between two elementary charges separated by a distance corresponding to the energy required for a pair production of two Lorentz-adjusted masses (by the factor of c/4) structurally balances with the gravitational attraction between the rest masses sitting at a corresponding distance. Such a mass-independent relationship can be simplified into a more familiar form, which may have revealed the secret behind the fine structure constant: 1/α ≈ 137.036 may be 128 adjusted by the square of the Lorentz factor of c/4, plus some higher order refinements.
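A one-line check of the stated approximation (my own arithmetic):

```python
import math

gamma_c4 = 1 / math.sqrt(1 - (1/4)**2)                   # = sqrt(16/15)
print(f"128 * gamma(c/4)^2 = {128 * gamma_c4**2:.3f}")    # 136.533...
print("1/alpha (measured)  = 137.036")
# the ~0.4% residual is what the text calls "higher order refinements"
```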

Another implication of the equation is somewhat latent, but the profoundness of its impact may be far beyond our first impression. The inversely proportional relationship

hints that if the 4R̂ part of a particle were altered for some reason, the remaining e²/4πε₀ part that governs the strength of the electromagnetic force exerted by the particle should also change accordingly to make ends meet. The electron and the proton have different mass-to-charge ratios. What if the mass carried by the proton, due to some unknown mechanism, has a weaker reactivity to electromagnetism, so that the proton needs more mass than the electron to behave as an electric charge? As for how the 4R̂ part could be variable, the first idea that came to my mind was that the mechanics of a rigid body is much richer than that of a mass point. What if the proton and neutron are a kind of rigid-body-type particle while the electron is a mass-point-type particle? Note that I have already excluded the concept of zero distance in our binary field; thus, even mass points still have a minimal diameter of 4R̂ (as they consist of two spatial quanta). “Point” just means they do not have any rotational degree of freedom. Moreover, we may reasonably postulate that the “spatial span” of a rigid-body-type degree of freedom is π times that of a mass-point-type one, as the latter is “locked” in its diameter instead of its circumference (FIG. 6).


Figure 6: The figure suggests that the 4R̂ part of a particle could be variable, potentially explaining the differences in behavior between rigid-body-type particles (proton and neutron) and mass-point-type particles (electron). It is also postulated that the "spatial span" of a rigid-body-type particle is π times that of a mass-point-type particle, due to the difference in rotational degrees of freedom and minimal diameters (4R̂ for mass points).

In high energy hadron collisions, suppose that smashed nucleons may instantly degenerate one or two of their rigid-body-type degree(s) of freedom to mass-point-type. If we define the “effective span” of a particle by taking the geometric average of the spatial span over all three dimensions, we shall obtain the fractional powers of π shown below, indicating how “bulgy” the partially degenerated rigid-body-type particle still is, compared to a genuine mass-point-type counterpart (FIG. 7).


Figure 7: The fractional powers of π represent how "bulgy" a partially degenerated rigid-body-type particle remains in comparison to a genuine mass-point-type counterpart.

This inversely proportional relationship implies that a larger effective span should diminish the electromagnetic reactivity of a particle. Therefore, each partially degenerated state of nucleon shall respectively have an inferior electromagnetic reactivity by a factor of

compared with the mass-point-type electron. Supposing the two spatial quanta contribute evenly to the electromagnetic reactivity of the Fermion that they collectively define, each spatial quantum would have an electromagnetic reactivity further inferior to that of the electron, by factors of 1/9.2 and 1/4.3 respectively. A lesser electromagnetic reactivity means a larger mass is required to behave as an electric charge; therefore, the theoretical mass of a singular spatial quantum in the partially shrunk rigid-body-type particles shall be 9.2 and 4.3 times the mass of the electron, respectively. These are exactly the theoretical masses of the down quark and the up quark. The discussions so far strongly suggest that the entity of a quark is actually one of the two spatial quanta within a partially degenerated rigid-body-type particle (hadron) in high energy collision experiments. They transiently interact with one of the two spatial quanta that collectively define the incoming bullets (leptons), and let them scatter. A quark, as a singular spatial quantum in a transiently degenerated hadron, can only exist together with its partner spatial quantum; it does not make sense outside of hadrons. After all, detectable Fermions are defined by a pair of spatial quanta, thus the imaginary state of “quark” does not independently exist. This shall be the secret of the so-called quark confinement. The color charge of a quark is highly likely to be a reflection of the details of the shrunk dimension(s), as shown in the schematic figure and tables below, just as an example (FIG. 8 and 9).
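The quoted factors can be checked against measured values; one simple reading of the elided reactivity factors (a reconstruction of my own, not spelled out in the text) even reproduces them from powers of π:

```python
import math

m_e = 0.511  # MeV
print(f"9.2 * m_e = {9.2 * m_e:.2f} MeV   (PDG down-quark mass ~ 4.7 MeV)")
print(f"4.3 * m_e = {4.3 * m_e:.2f} MeV   (PDG up-quark mass   ~ 2.2 MeV)")

# One possible reading (an assumption of mine): twice the squares of the
# effective spans pi^(2/3) and pi^(1/3) quoted earlier.
print(f"2 * pi^(4/3) = {2 * math.pi**(4/3):.2f}")   # ~9.20
print(f"2 * pi^(2/3) = {2 * math.pi**(2/3):.2f}")   # ~4.29
```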


Figure 8: The figure further illustrates how quarks are transiently defined by a pair of spatial quanta in hadrons, explaining the concept of quark confinement and suggesting that quark color charge reflects the shrunk dimensions of these quanta.


Figure 9: The figure shows how the spatial quanta, when compared with the mass-point-type electron, contribute differently to the particle's electromagnetic reactivity. This leads to a larger required mass for these quanta to behave as electric charges, specifically 9.2 and 4.3 times the mass of the electron, corresponding to the theoretical masses of the down and up quarks, respectively.

If one dimension has degenerated, the transient state would have an imaginary electric charge of ±1/3, reflecting the fact that one of three dimensions is mass-point-type. If two dimensions have degenerated, the charge would be ±2/3 by similar logic. The down quark (1-D degenerated) has a larger geometric mean of spatial span than that of the up quark (2-D degenerated). This may well explain why nucleons have their respective charge radii and mass radii, since the superposition of mass and the offset of electric charges take place independently of each other (just as my model predicts). The sign of the fractional charge of quarks may reflect the rotation mode of the degenerated particle seen from each dimension. By the aforementioned definition of the rotation mode, we can define not only the mode of the entire particle, but also the mode seen from each axis of spatial dimension. The illustration below is an example (FIG. 10).


Figure 10: The figure shows how the degeneration of one or two dimensions results in quarks carrying electric charges of ± 1/3 and ± 2/3, respectively. It explains the differences in geometric mean spatial spans of down and up quarks, as well as the relationship between nucleon charge radii and mass radii. The sign of the fractional charge of quarks is also linked to the rotational mode of the degenerated particle, which can be observed from each spatial dimension.

Now it is almost needless to say that the information about the rotation mode is conveyed by the quantum mechanical wave function. More importantly, this rotation of spatial quanta beneath the detectable particle they define is exactly the reason why quantum mechanical waves must be defined as complex functions. The necessity to use complex numbers in describing the dynamics of the spatial quanta might be due to the historical inevitability that we had chosen (of course not by accident) real numbers to construct the physics of the detectable particles with which we are much more familiar. Moreover, the mathematical property of imaginary numbers turns out to be the final verdict on the theory of QCD, as I have found a strikingly simple solution for how protons could be held harmoniously together within atomic nuclei.

Before moving on to the highlight of this paper, let me add some complementary comments about the mass of the proton. The proton has a mass about 1836 times that of the electron. It is a well-known fact that 6π⁵ is a good approximation of 1836. Out of the 6π⁵, the proton as a genuine rigid-body-type particle shall be a weaker reactor to the electromagnetic force than the electron by a factor of 1/π² (as all three dimensions are rigid-body-type). The remaining 1/(6π³) might be a factor reflecting a qualitative leap from mass-point-type to rigid-body-type. In other words, the logic of my calculation of the theoretical masses of the u quark and d quark compared with the electron may only apply to particles that have at least one mass-point-type dimension. Although its mechanism needs to be further elucidated, the assumption does not sound so unreasonable either, as 6 is the degree of freedom of a three-dimensional rigid body, while π³ could be the ratio of “effective volume” (the product of spans over all three dimensions) between rigid-body-type and mass-point-type particles. It is interesting to note that the theoretical mass of the strange quark is ~186 times the electron mass, and 186 is a good enough approximation of 6π³.

Next, let me unveil the secret of the nuclear force that holds protons and neutrons together. Imagine a homogeneous sphere with a uniformly positive charge density, as a good approximation of an atomic nucleus. The below equation of radial motion, with a fairly simple integration, gives us an equation for the velocity. Let the constant of integration be zero, which is equivalent to setting the conserved mechanical energy to zero, as the most simplified situation for my thought experiment.

The equation looks like nonsense in the conventional context, since the square of the velocity is negative. However, being free from all kinds of prejudice, what will happen if imaginary velocities are allowed? A direct consequence shall be that the Lorentz factor turns out to be smaller than one. Having paved the way for quite a while, I believe the idea that this imaginary velocity represents the motion of the spatial quanta may no longer sound abrupt, and it is highly likely to be the case. Substitute into the equation the actual figures of the unit charge, the mass of the nucleon, the permittivity of vacuum, and the charge density of the proton as the average charge density of atomic nuclei. Then multiply the resulting “Hubble constant” of atomic nuclei by ~10⁻¹⁵ m as the order of their radii. Surprisingly, we may notice that the imaginary velocity falls within the same order of magnitude as the speed of light. At such a non-negligible velocity, the aforementioned smaller-than-one Lorentz factor would rightly result in a relativistic mass lighter than the rest mass by a few percentage points, which matches well enough with the binding energy per nucleon for elements with double-digit atomic numbers.
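An order-of-magnitude version of this substitution follows; the choice of inputs is my own, since the author's exact figures are not reproduced here:

```python
import math

e, eps0 = 1.602e-19, 8.854e-12
m_n     = 1.675e-27                        # nucleon mass, kg
r_p     = 0.87e-15                         # assumed proton charge radius, m
rho     = e / ((4/3) * math.pi * r_p**3)   # proton charge density, taken as the
                                           # assumed average nuclear density

# From m*v*dv/dr = -(e*rho/(3*eps0))*r with zero integration constant:
# v^2 = -(e*rho/(3*m*eps0)) * r^2, i.e. an imaginary velocity growing with r.
H_nuc = math.sqrt(e * rho / (3 * m_n * eps0))    # the nuclear "Hubble constant", 1/s
r_nuc = 5e-15                                    # radius of a mid-sized nucleus, m
v = H_nuc * r_nuc
print(f"|v| ~ {v:.2e} m/s ~ {v/2.998e8:.2f} c")
print(f"mass reduction ~ {(1 - 1/math.sqrt(1 + (v/2.998e8)**2)) * 100:.1f} %")
```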

The above approximation may not work well enough in the atomic nuclei of light elements, as their charge density shall be far from homogeneous. The imaginary velocity is proportional to the distance of nucleons from the center of the atomic nucleus; thus, the magnitude of relativistic mass reduction (namely the level of binding energy) increases with the radius. This should be exactly the underlying mechanism of the so-called nuclear force that shares similarities with the asymptotic freedom of quarks, a concept that was earlier demonstrated to be unnecessary in explaining the quark confinement and therefore could now be completely abandoned. As for the motion of spatial quanta within atomic nuclei, the most reasonable explanation should be that they switch their “owners” just like the free electrons in metals. It is by such a sharing of spatial quanta that nucleons reach a less massive and thus more stable state. Due to the limitation of space, I will not go any further into detailed quantitative discussions of specific atomic nuclei. However, this does not diminish the persuasiveness of my hypothesis by one bit, I firmly believe.

The discovery of imaginary velocities urges me to slightly correct my previous equations. Instead of multiplying by the Lorentz factor of c/4 or c/8, I should divide by that of ci/4 or ci/8, which does not make too much difference except that we may obtain a closer approximation of the fine structure constant. I have intentionally ignored the slight difference, since only at this stage is the time ripe to reveal the secret behind it. In short, the rest mass of elementary particles needs an inverse adjustment from the pure theoretical calculation precisely because the rest mass in our perspective is the relativistic mass from the viewpoint of the spatial quanta, and the former is always lighter than the latter due to the imaginary velocity.

The above discussions have revealed almost all the major secrets of the strong force. However, without explaining the origins of so many exotic baryons and mesons, my hypothesis may not acquire full credit. So let me now deal with it. Adopting the Lorentz-factor-adjusted proton mass as one unit of standard nucleon mass in collision experiments, we may find with great surprise that the masses of the 16 baryons (other than the proton and neutron) that are supposed to consist of only u, d or s quarks align in an extremely elegant pattern.

It strongly suggests that exotic baryons are actually transient figures of nucleons during high energy collisions, expanding one of their three spatial dimensions in a discrete manner (FIG. 11).


Figure 11: The figure highlights the transformation of nucleons into exotic baryons and the dynamic nature of their spatial dimensions under extreme conditions.

In contrast to deep inelastic scattering, in which we can only indirectly infer that nucleons have inner structures from the scattering pattern of electrons, hadron collision experiments do actually churn out numerous detectable baryons and mesons. The difference is that exotic hadrons, though very short-lived, are nonetheless made of spatial quanta pairs, and thus are genuine rigid-body-type particles as carriers of electric charge. Compared with the proton, the electromagnetic reactivity of each excitation state should be, by the same logic as in my calculation of the quark masses, inversely proportional to its effective span, which explains its increased mass. The effective span shall be reasonably calculated by equally distributing the span of the expanded dimension onto all three dimensions, hence the cubic roots of half-integers or integers. (The reason why the square root of 1.5 gives rise to sigma baryons remains to be studied. The case might be that one of the three dimensions had collapsed first, and then the two-dimensional “disk” expands one of the remaining two.) The reason why the cubic roots of integers correspond to spin 3/2 baryons while the cubic roots of half-integers give rise to spin 1/2 baryons may be due to the fact that the former are expansions in whole multiples, which may render the baryons an additional integral spin by a mechanism that awaits further study.

In summary, a quark is one of the two quanta that define a partially shrunk nucleon in which one or two of its three spatial dimensions transiently degenerate from rigid-body-type to mass-point-type degrees of freedom. A gluon is the spatial quantum exchanged in the transitions between the different states of degeneration. Exotic baryons are nucleons transiently expanded along one of their three dimensions, while mesons are the energy exchanged during the transition between these different states of expansion. The dazzling varieties of the cascades in hadron decay are probably reflections of the probable transitions among all possible states, which shall, in time, be explained without much difficulty in the context of our model.

Moreover, it is interesting to note that the masses of baryons with c quark substitution and b quark substitution are generally and roughly twice and five times heavier than their counterparts made of only u/d/s quarks, respectively. It implies that nucleons may have three intrinsically distinctive modes for the transient expansion of their spatial dimensions, namely which one of the three dimensions is to be expanded. One natural explanation could be that compared with the first and easiest choice that gives rise to those baryons supposed to be made of u/d/s quarks, the second and third harder choices may, for some unknown reason, result in a much weaker electromagnetic responsiveness by factors of ~1/8 and ~1/125 respectively, which in turn generate baryons roughly twice and five times as massive as those generated by the first mode. It may not be meaningless to point out that the ratio between the masses of the tauon and the muon is roughly 136:8, though we have no idea where the residual ~8π (after dividing the mass of the muon by 8 mₑ) comes from (FIG. 12).


Figure 12: It explains the correspondence between the U(1), SU(2), and SU(3) Lie groups in Yang-Mills gauge theory and their relation to the rotations of spatial quanta. The figure also hints at the deeper layer of real and complex space, offering a unified view of electromagnetism, weak force, and strong force, while addressing the discrepancies between theoretical predictions and experimental data as minor disturbances yet to be fully accounted for.

The heterogeneity among the three spatial dimensions is likely to be the reason why P-symmetry is broken in the weak interaction. If the three dimensions were homogeneous, we would no longer be able to distinguish the two intrinsically different modes of rotation proposed earlier. From this point of view, it is rather natural that right-handed spin and left-handed spin should differ from each other in an inherently distinguishable fashion. Note that the weak force is the only interaction in which the number of participating spatial quanta is not conserved before and after the process. In other words, it could rather be a phenomenon that only becomes noticeable to us because of the addition of newly flipped spatial quanta to the pre-existing physical system we had been observing. The reason why only the Bosons of the weak interaction possess mass might be the very reflection of this non-conservation of the number of flipped quanta before and after the interaction.

After all, electric charge is a vectorial property generated out of the rotation of spatial quanta as a culmination of the universal attraction between them. The strong force can be bisected into two parts. The binding of nucleons within atomic nuclei can be explained by their sharing of spatial quanta, just like the free electrons in metals, where the motion of spatial quanta with imaginary velocities contributes to a relativistic mass reduction, stabilizing the atomic nucleus in the form of binding energy. Those phenomena that imply any inner structures of hadrons are indeed transient snapshots of them, which would not even have been noticed unless they were smashed into each other in ultra-high energy colliders. The mathematical structure of QCD exactly reflects the fact that the three rigid-body-type dimensions of nucleons may randomly change their type or extend their spatial span under high energy conditions. The weak interaction shall rather be regarded as an inevitable consequence of the spontaneous symmetry breaking of the binary field, which occurs whenever two additional flipped spatial quanta are brought by the universal attraction to the vicinity of an existing particle.

My theory vividly explains, with clear-cut physical images, why electromagnetism, the weak force and the strong force are respectively linked with the U(1), SU(2) and SU(3) Lie groups in Yang-Mills gauge theory. The correspondence of these groups to rotations in complex space is rightly a reflection of the fact that they describe the motion of spatial quanta, which live in another layer deeper than that of the real-number-based detectable particles. The meaning of the dimension number in each of the Lie groups is now rather trivial, we believe, after the revelation of the underlying physics behind each force. Hereby, all the four fundamental interactions are unified as four aspects of a singular story based on a self-consistent theoretical paradigm, namely the spontaneous symmetry breaking of a binary field (space). As for why there are certain errors, though very slight, between the experimental data of baryon masses and my simple calculation, the main contributor should be some minor disturbances from those factors I am not yet able to fully take into consideration at this stage. It is well known that the construction process of the standard model of particle physics was indeed a series of hindsights, through which tens of artificial parameters have been added for fine tuning with existing experimental data.
Thus, it has good reason to “predict” the outcome of “newly” designed experiments, which are actually nothing but reconfirmations of the model's reproducibility by thousands of slightly tinkered versions of similar conditions, without harshly challenging its credibility. Luckily enough, though, more and more clues have been piling up recently, indicating that the model is far from complete or even correct. Shall we be satisfied with a 21st century version of the Ptolemaic epicycle theory, or had we better pursue the possibility of a Copernican revolution (even though not yet as sophisticated as Newton or Einstein)? In front of us is a vital choice between self-satisfaction with blind precision and an aesthetic/philosophical awakening. At the end of the particle physics part, let me point out, by the way, that the “spooky action at a distance”, namely quantum entanglement, is nothing spooky. The key is the super-luminal phase velocity of the de Broglie wave, plus the relativity of simultaneity, which was ironically found by Einstein himself.

Suppose an observer sitting at the center of mass carries out a set of “simultaneous” measurements on the spin of the electron and positron, momentarily separated by 2vt₀ on his own ruler. This distance would appear to be

on the rulers of the electron and positron due to the Lorentz contraction. In the inertial frame moving together with the electron, considering the relativity of simultaneity, the measurement of the electron itself would appear to be earlier than that of the positron by a time window of

Converting this time window to the clock of the observer at the center of mass,

is exactly the time required for the de Broglie wave of the electron to catch up with the positron. In other words, the electron will “see” the measurement of the positron carried out exactly upon the arrival of its de Broglie wave, which conveys the influence of its own measurement. Therefore, the seemingly instant transmission of quantum states over light years is indeed a perfectly legitimate propagation of the phase change at the super-luminal phase velocity of the de Broglie wave. The causal relationship was simply hidden by a strangely overlooked fact: there is actually a time window exactly sufficient for the transmission from one particle to the other. In the reference frame comoving with the “influencer”, the “influenced” receives the phase change unsurprisingly, with no violation of any physical laws.
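In my own notation, the bookkeeping behind this claim runs as follows: with the electron and positron receding at ±v in the center-of-mass frame and measured simultaneously there at separation 2vt₀, the simultaneity offset seen in the electron's frame, converted back to the center-of-mass clock, is

\[
\frac{1}{\gamma}\cdot\frac{\gamma\, v\,(2vt_0)}{c^2} \;=\; \frac{2v^2 t_0}{c^2} \;=\; \frac{2vt_0}{c^2/v},
\]

which is precisely the time a disturbance travelling at the de Broglie phase velocity c²/v needs to cross the separation 2vt₀.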

Cosmology

It is reasonable to assume that the universe has evolved all the way from a Planck scaled stage to its current state. The fact that our observable universe has a radius ~10⁶⁰ times the Planck length while its energy density is ~10⁻¹²⁰ times the Planck density strongly implies that the effective pressure realized at the cosmological event horizon of our universe shall be structurally fixed, by some hitherto unnoticed mechanism, at negative 1/3 of its energy density, such that the event horizon expands at a constant velocity, namely the speed of light.

Such a relationship, that energy density is proportional to the inverse square (instead of the inverse cube) of the radius, applies to black holes as well, and therefore shall rather be a general feature of phenomena governed by gravity. It is too naïve to believe we are living in a special era when the densities of ordinary matter, dark matter and dark energy miraculously meet at roughly the same order. Note that some integral powers of ~10²⁰ repetitively show themselves in the so-called hierarchy problems of fundamental physics. Firstly, ~10⁴⁰ is the magnitude gap between electromagnetism and gravity (which is actually a consequence of the very fact that ~10²⁰ itself is the ratio between the Planck mass and the mass of the electron or proton). Secondly, ~10⁶⁰ is the multiple of the current Hubble radius, mass and time compared with their Planck scaled counterparts. And lastly, ~10⁸⁰ is the notorious Eddington number. Instead of asking why the Planck units are so distant from our ordinarily observable realities by those specific magnitudes, or trying to find special meaning in those figures, the right question shall be what kind of mechanism may assure them to evolve synchronously such that their current values are not special. There were some physicists, with Sir Paul Dirac as the most renowned, who had sought the possibility that some physical constants might actually be time-dependent variables, unfortunately without success to date. Here I present a hitherto unfalsified hypothesis in which a cosmic-scale-factor- or time-dependent decrease of the Planck constant drives an evolution of the scale of the Planck units and the properties of elementary particles (mass and electric charge), which turns out to be able to persuasively solve the most profound conundrums in both cosmology and particle physics in one shot.
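The orders of magnitude quoted in this paragraph can be reproduced with conventional present-day values; the following check is my own, and the exponents land near, though not exactly on, the round figures quoted:

```python
import math

hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8
l_P   = math.sqrt(hbar * G / c**3)       # Planck length  ~1.6e-35 m
m_P   = math.sqrt(hbar * c / G)          # Planck mass    ~2.2e-8 kg
rho_P = c**5 / (hbar * G**2)             # Planck density ~5e96 kg/m^3

R_H, rho_c = 1.4e26, 8.5e-27             # conventional Hubble radius and critical density
m_e, m_p   = 9.109e-31, 1.673e-27

for label, x in [("R_H / l_P", R_H / l_P), ("rho_c / rho_P", rho_c / rho_P),
                 ("m_P / m_p", m_P / m_p), ("m_P / m_e", m_P / m_e)]:
    print(f"{label:14s} ~ 10^{math.log10(x):.0f}")
```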

As a pivotal figure, ~10²⁰ (its precise value is 2.67 × 10²⁰, as will be demonstrated later) is exactly the expansion rate of the cosmic scale factor from the very beginning up until today. The evolution of the key parameters that characterize our universe is shown below.

Planck constant

Gravitational constant

G = const.

Speed of light

c = const. (so do ε0 and μ0)

Mass of elementary particles (mass ratio between proton and electron remains constant, mp~1836me )

Elementary charge

Mass-to-charge ratio of electron and proton

Fine structure constant

Planck length

Planck time

Planck mass

Planck density

Rydberg constant

A smaller Rydberg “constant” in the past (proportional to the cosmic scale factor) implies that the redshift we observe today may not correctly reflect the true expansion of the universe: 1+z has to be reinterpreted downward to its square root. For example, a seemingly 4-fold redshift (raw z = 3) is in fact a spectrum emitted when the Hubble radius was 1/2 of the current length (already “redshifted” twofold judged by today's knowledge of spectrometry), being actually redshifted twofold (true z = 1). For small z, by simple math, the true redshift shall be 1/2 of the raw redshift.
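In code form, the reinterpretation and its small-z limit look like this (a trivial helper of my own):

```python
# The proposed reinterpretation 1 + z_true = sqrt(1 + z_raw), illustrated:
def z_true(z_raw):
    return (1 + z_raw) ** 0.5 - 1

print(z_true(3.0))     # 1.0    -> a seemingly 4-fold stretch is really 2-fold
print(z_true(0.1))     # 0.0488
print(z_true(0.2))     # 0.0954 (note: < 2 * 0.0488, the relation is non-linear)
print(z_true(0.02))    # ~0.00995, i.e. ~z_raw/2 for small z
```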

By reducing the Hubble's constant by 1/2 (as it is calculated at z ≪ 1), our hypothesis drastically downsizes the critical density to 1/4 of its current figure. Such an egg-of-Columbus solution to the conundrum of dark energy of course has a few minor issues to be addressed. Firstly, how could the weighted average pressure of the various contents of our universe be negative 1/3 of their energy density without the contribution from dark energy? Recall that the pressure of a photon gas with a fixed boundary is 1/3 of its energy density. By a reverse logic, the effective pressure realized at a spatial boundary that is expanding at the speed of light (while its constituents remain virtually static with respect to the spatial fabric) should naturally and reasonably be negative 1/3 of the energy density. This is a good example of how a strikingly simple fact can be overlooked for decades. In the normal context, pressure should of course be calculated locally. However, in studying the dynamics of the entire universe as a singular physical system (ignoring all internal forces) and tracing the energy income and expenditure between our observable universe and its exterior, a “glocal” (global + local) view is rather needed. For a dynamic spatial region being stretched uniformly and isotropically (while its boundary expands at a special velocity, namely the speed of light), the only meaningful and legitimate local pressure exerted by each of its constituent particles (either matter or radiation) has to be measured with regard to the boundary receding at the special velocity, c.

Secondly, it seems to be rather an overshoot compared with Ωm ~ 0.315, the latest estimate by the PLANCK satellite. Note that the above discussion has better affinity with the Cepheid/supernova-based straightforward measurement of the Hubble's constant, which gives a ~10% larger figure compared with that drawn from the observation of the CMB. Should we agree on the matter density (ordinary plus dark), which is confirmed by other methods such as gravitational lensing as well, a 10% increase of the Hubble's constant gives a 21% larger critical density that is further subject to the 1/4 reduction, and 0.315/1.21 = 0.26 is now closer to 0.25.

Lastly, if the universe is expanding at a constant velocity, then a non-linear downward reinterpretation of zraw to ztrue should result in brighter-than-expected supernovae (compared with the theoretical simulation of the redshift-luminosity plot in the case of constant-velocity expansion) instead of dimmer-than-expected as actually observed, as zraw increases (e.g. zraw = 0.1 is equivalent to ztrue = 0.0488, zraw = 0.2 is equivalent to ztrue = 0.0954, and 0.0954 < 2 × 0.0488). As will be quantitatively explained, the ~20% (+0.2 in luminosity magnitude) dimmer-than-expected supernovae at zraw ~ 0.5 are actually due to an underestimate of the Hubble's constant (as will be demonstrated later, the theoretical value of the observable raw H0 should be 85 km/s/Mpc, with half of this figure as the true rate of expansion). We will also provide a persuasive calculation as to how the supernovae with zraw > 1 turn out to be brighter-than-expected again, as actually observed.

Recall that the two main pillars of my proposal are:

• the spontaneous symmetry breaking in a binary field gives rise to all physical realities;

• the Planck constant evolves with the cosmic scale factor, which further drives the time-dependent evolution of the mass and electric charge of elementary particles.

Here comes an astounding integration of the two hypotheses. The calculations below strongly indicate that the Big Bang was indeed the breaking of an equilibrium, in which the probability that two quanta flips occur simultaneously was surpassed by the figure of a fractional nucleon mass, which was equivalent to the mass of the entire universe back then. The raw Hubble's constant from the latest HST result is

The true Hubble’s constant (=H0/2) is

The current Hubble radius is

The current mass density of the universe is

The current Hubble mass is

equivalent to 1.02 × 10⁸⁰ times the neutron mass (as the precursor of a proton and an electron through beta decay). Since the Planck length is itself proportional to the inverse square of the cosmic scale factor, the cubic root of the ratio between the current Hubble radius and the current

which is 2.80 × 10²⁰, reflecting the true increase of the cosmic scale factor. According to the previous discussions, the Eddington number should have increased by a factor of 6.14 × 10⁸¹ (the 4th power of 2.80 × 10²⁰) from its initial value, thus the initial Eddington number was 1.66 × 10⁻². On the other hand, the inverse of 2.80 × 10²⁰ is the decreasing factor of the probability density of quanta flipping. The current probability density is the inverse square root of the magnitude gap between gravity and electromagnetism of electron-electron interactions, namely the square root of 1/(4.16 × 10⁴²), which is equal to 1/(2.04 × 10²¹). This means the initial probability density should be 1/7.29. And astonishingly,

The calculation exactly equals 1 when

Here comes H0 = 85.2 km/s/Mpc as my theoretical observable Hubble's constant. Let me now carry out a quantitative simulation of the luminosity of supernovae compared with the theoretical brightness in the case of constant-speed expansion (h ~ 0.7).

Firstly, the Chandrasekhar limit remains unchanged in my hypothesis, which assures that both the mass and absolute luminosity of type-Ia supernovae should be basically time-independent.

For those supernovae with zraw ~ 0.5 (ztrue ~ 0.225) and hraw ~ 0.85 or htrue ~ 0.43 (which was wrongly believed to be htrue ~ 0.7), the luminosity curve we used as the theoretical standard was equivalent to zeffective ~ 0.25 & hraw ~ 0.7, or zraw ~ 0.5 & htrue ~ 0.35. (Note that we have to down-scale either zraw or hraw by one half, to prevent double-counting.)

The apparent flux is proportional to the inverse square of hz. Thus, the observed luminosity is ~83.7% of its theoretical standard.
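One way to reproduce the quoted ~83.7% is the following bookkeeping of the numbers above (my own reading, with the flux taken as proportional to 1/(hz)²):

```python
hz_standard = 0.70 * 0.250    # the reference curve: h ~ 0.7 with z_effective ~ 0.25
hz_actual   = 0.85 * 0.225    # raw h ~ 0.85 with the reinterpreted z_true ~ 0.225
ratio = (hz_standard / hz_actual) ** 2
print(f"observed / expected flux = {ratio:.3f}")   # ~0.837, i.e. ~0.2 mag dimmer
```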

The dimmer-than-expected luminosity shall end at zraw~1 where the effect of the non-linear revision of zraw down to ztrue balances with that of the underestimation of H0.

The underestimation of the Hubble's constant by the HST should be largely due to the intrinsic non-linearity of ztrue against zraw. Since the effect of space expansion is largely hidden by the proper motion of those closest supernovae with zraw ≪ 1, the redshift-luminosity plots astronomers adopted to calculate the Hubble's constant (Hraw) are already subject to a substantial downward bending, resulting in a smaller Hraw.

With such a cosmic-scale-factor- or age-dependent evolution of the mass and charge of the electron and nucleons, chemistry should have been quite different in the ancient universe. It urged me, above all, to re-examine the well-established theory of Big Bang nucleosynthesis. As for the reproducibility of the relative abundance of helium-4: since the energy gap between the neutron and proton (Q) and the energy level of the freeze-out temperature (kBTfreeze) are both proportional to the inverse cube of the scale factor, the neutron-proton ratio should remain unchanged from the conventional theory.
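For reference, the conventional freeze-out estimate that the above argument carries over unchanged (standard textbook numbers, in a short script of my own):

```python
import math

Q, kT_freeze = 1.293, 0.8                 # MeV: n-p gap and weak freeze-out scale
n_over_p = math.exp(-Q / kT_freeze)       # ~0.20 at freeze-out
n_over_p_bbn = 1 / 7                      # after subsequent neutron decay
Y_He4 = 2 * n_over_p_bbn / (1 + n_over_p_bbn)
print(f"n/p at freeze-out ~ {n_over_p:.2f},  Y(He-4) ~ {Y_He4:.2f}")   # ~0.25
```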

Should the mass of elementary particles evolve with the cosmic age as we have hypothesized, the validity of those cosmological parameters drawn from the observation of the CMB needs to be re-examined from scratch. No matter how elaborate or resilient the ΛCDM model may look, after all, precision cosmology does not equal accurate cosmology. Given that the energy densities of both non-relativistic matter and radiation are proportional to the inverse square of the scale factor, there should be no such thing as a “radiation dominant” era in the history of our universe, but instead

On the other hand, kBT has to be inversely proportional to the scale factor to the 7/2 power (it is not only impossible but also unnecessary for our purpose to further identify the scale factor dependence of kB and T separately) in order to let the energy density of blackbody radiation

be proportional to the inverse square of the scale factor. Moreover, the number density of photons shall now be

thus, the total number of photons increases with the 9/2 power of the scale factor. Recall that the mass of individual Fermions is proportional to the inverse cube of the scale factor, while their total mass is proportional to the scale factor (since ρ ∝ a⁻²). It implies that the total number of Fermions (and thus baryons as well) shall be proportional to the 4th power of the scale factor, which is the very reason why the current Eddington number is ~10⁸⁰ while the universe has expanded by a factor of ~10²⁰.
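The exponent bookkeeping in this passage can be checked mechanically. Taking as inputs only the scalings stated in the text (Planck length ∝ a⁻², G and c constant, particle mass ∝ a⁻³, kT ∝ a^(−7/2), energy densities ∝ a⁻²), a short script of my own recovers the quoted powers:

```python
from fractions import Fraction as F

lP, m, kT = F(-2), F(-3), F(-7, 2)    # exponents of the cosmic scale factor a
hbar   = 2 * lP                 # l_P = sqrt(hbar*G/c^3), G and c fixed  -> a^-4
u_rad  = 4 * kT - 3 * hbar      # u ~ (kT)^4 / (hbar*c)^3                -> a^-2
n_phot = 3 * (kT - hbar)        # n ~ (kT / hbar*c)^3                    -> a^(3/2)
N_phot = n_phot + 3             # total photons: n * a^3                 -> a^(9/2)
N_ferm = -2 + 3 - m             # (rho * a^3) / m with rho ~ a^-2        -> a^4
print(hbar, u_rad, n_phot, N_phot, N_ferm)    # -4  -2  3/2  9/2  4
# and (10^20)^4 = 10^80, linking the expansion factor to the Eddington number
```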

Now the baryon-to-photon ratio shall be

whose initial value was

which now settles to a much more reasonable order. My assumption that the number ratio between matter and photons changes over time is rather natural, considering that both of them are constantly and newly generated via the spontaneous symmetry breaking of the binary field. Having got rid of the notion of a radiation dominant era, plus the increasing number density of both baryons and photons as the universe expands, the CMB radiation should rather be interpreted as a mixture of photons and neutrinos emitted not only during but also right after, and even long after, the recombination, approximately in thermal equilibrium. If the CMB were a snapshot of photons at a particular cosmic age without any subsequent interaction with other particles, its temperature would be inversely proportional to the 5th power of the cosmic scale factor (E = hc/λ ∝ a⁻⁴/a = a⁻⁵) instead of a^(−7/2). There should be no sharp or qualitative transition from the state of baryon-photon fluid to the so-called dark era, which is a key insight for our discussion hereafter. With all preparations done, let us recalculate the redshift of the recombination era. Saha's ionization equation shall now look like

where Q is the binding energy of Hydrogen atoms, which is proportional to the inverse cube of the scale factor, since

Interestingly, there are two solutions that give X=1/2.

The former is shortly after the birth of our universe that began with

but would have swiftly reached a fully ionized state (X = 1) and kept it all the way until 1 + z = 2.87 × 10⁶, when the recombination occurred. When 1 + z = 2.24 × 10¹⁸, the baryon-to-photon ratio was

which is fairly close to two, where the Hydrogen atoms, photons, free protons and electrons were fully engaged in the reversible reaction. Recall that my theoretical Hubble's constant was h = 0.426 (half of 0.852); on the other hand, the PLANCK satellite gave h ~ 0.67. It means that my theory implies the critical density of the universe to be 40% (= (0.426/0.67)²) of the PLANCK result, where Ωb was 0.048 (~1/8 of 40%). Suppose the true baryon-to-photon ratio is actually eight times our current belief; the Saha equation will then give an initial degree of ionization as

while η0 ~ 80. This is saying that at the very beginning of our universe there were, on average, 79 Hydrogen atoms per pair of proton and electron, and only one Hydrogen atom among them reacts with a photon, in equilibrium with its reverse reaction. The above calculation beautifully explains the asymmetry between the abundances of matter and antimatter. After all, matter in our definition is nothing but a materialization of a pair of +1 (instead of −1), which was, in the first place, a totally stochastic surplus out of perfectly even Bernoulli trials. If this is not persuasive enough, let me bring out the more decisive one. Recall that I have denied both the concept of a snapshot CMB and the termination of the baryon-photon fluid state. As a result, the horizon of the CMB shall always be the same as the cosmological event horizon, and the sound wave horizon shall be solely determined by the speed of sound in the everlasting baryon-photon fluid. The energy density of CMB photons is

which is equivalent to Ω_γ = 5.50 × 10^(-5) (h = 0.67 → ρ_c = 8.43 × 10^(-27) kg/m^3), while the energy density of CMB neutrinos is

Thus, Ω_radiation = Ω_γ + Ω_ν = 1.69 Ω_γ.
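These numbers can be checked against standard textbook relations. The short script below is only a consistency check of the arithmetic; it assumes the conventional T_CMB = 2.725 K blackbody formula and the usual three-species neutrino-to-photon relation, which are not ingredients of my model:

import math

# Constants (SI)
G      = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c      = 2.998e8        # speed of light, m/s
a_rad  = 7.566e-16      # radiation constant, J m^-3 K^-4
T_cmb  = 2.725          # CMB temperature today, K (conventional value, assumed)

# Critical density for h = 0.67
H0    = 67.0 * 1000 / 3.086e22                 # Hubble constant in s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_c = {rho_c:.2e} kg/m^3")           # ~8.4e-27, as quoted above

# Photon density parameter
rho_gamma = a_rad * T_cmb**4 / c**2
print(f"Omega_gamma = {rho_gamma / rho_c:.2e}")  # ~5.5e-5

# Standard neutrino contribution: 3 * (7/8) * (4/11)^(4/3) of the photon value
nu_factor = 3 * (7/8) * (4/11)**(4/3)
print(f"Omega_radiation / Omega_gamma = {1 + nu_factor:.2f}")  # ~1.68 (≈1.69 with N_eff = 3.046)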

The sound-wave horizon in the baryon-photon fluid (which is still alive today), if the curvature of the universe is zero, shall subtend a visual angle of 1/96.4 rad = 0.594° on the celestial sphere. This angle is equivalent to l = 303 as the theoretical major peak in the TT power spectrum of the CMB, which is further equivalent to an observational l ~ 220, exactly as the Planck result shows.
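For reference, the conversion from angle to multipole used here is the usual l ≈ π/θ correspondence of CMB phenomenology (quoted only to check the arithmetic, not a new ingredient of my model):

import math

theta = 1 / 96.4                                    # visual angle of the sound horizon, rad
print(f"theta = {math.degrees(theta):.3f} deg")     # ~0.594 deg
print(f"l ~ pi / theta = {math.pi / theta:.0f}")    # ~303, the theoretical peak position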

Here, I would like to ask experimentalists to verify my hypothesis. With 22.9 billion years as the current cosmic age (R_H ~ 2.17 × 10^26 m), an experiment measuring the elementary charge or the electron mass over a time span of one year should yield a detectable difference in the 9th digit after the decimal point. Note that the 2019 revision of the SI base units happened to have wrongly tied the standards of the second, meter, kilogram and ampere to physical properties that actually evolve with the cosmic scale factor according to my hypothesis. More specifically,

Experimental verification of my hypothesis should take these effects into consideration. Finally, let me present an alternative mathematical model, translating the Riemannian geometry of general relativity into a Maxwellian field language. It is known that in the Parameterized Post-Newtonian (PPN) formalism, the centripetal gravitational force in a Schwarzschild-type mass distribution can be simply written as

where γ indicates how much space curvature is produced by a unit rest mass, and β indicates how much nonlinearity there is in the superposition law for gravity: γ = β = 0 and c = ∞ for Newtonian mechanics, γ = β = 0 and c finite for special relativity, and γ = β = 1 and c finite for general relativity. This means that only one additional factor of (v/c)^2 is needed to go from SR to GR; in other words, SR by itself could explain 2/3 of the GR corrections (e.g., apsidal precession) to Newtonian mechanics. It is then not hard to notice that, if we define a momentum density 4-vector (as the counterpart of the 4-current vector in special relativistic Maxwellian electromagnetism), the GR correction to SR is nothing but a Lorentz-type gravitational force generated by the field of momentum. Just as an electric current yields a magnetic field, mass in relative motion can also give rise to an orthogonal field whose strength and direction depend on the curl of the momentum. It is not hard to write down field equations and an equation of motion that perfectly reproduce the GR effects and look almost the same as those of Maxwellian electromagnetism, with the minus sign before the mass of the test particle in the equation of motion as the only difference.

Field equations

Equation of motion

The concept of space-time curvature in response to the energy-momentum tensor is an intuitive way to interpret gravitational phenomena, further elaborated in a sophisticated analytical language. However, this fact alone does not guarantee that it must be the only feasible mathematical model of gravity. Unveiling the Maxwellian structure beneath general relativity, in other words becoming free from the fetters of the continuous space-time dogma, shall be a critical step toward a unified understanding of gravity and the other forces.
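To make the 2/3 statement above concrete, one can evaluate the standard PPN perihelion-shift formula Δφ = (6πGM/(c^2 a(1 − e^2))) × (2 + 2γ − β)/3 for Mercury. The short script below is only an illustration with textbook orbital values (my own numbers, not taken from this paper); the identification of γ = β = 0 with the SR limit follows the classification given above:

import math

GM_sun = 1.327e20        # m^3 s^-2
c      = 2.998e8         # m/s
a      = 5.79e10         # semi-major axis of Mercury, m
e      = 0.2056          # orbital eccentricity of Mercury
orbits_per_century = 36525 / 87.969

# Perihelion shift per orbit for gamma = beta = 1 (the full GR value)
base = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

for label, gamma, beta in [("GR (gamma = beta = 1)", 1.0, 1.0),
                           ("SR-limit (gamma = beta = 0)", 0.0, 0.0)]:
    shift  = base * (2 + 2 * gamma - beta) / 3      # PPN correction factor
    arcsec = shift * orbits_per_century * 206265
    print(f"{label}: {arcsec:.1f} arcsec/century")
# GR gives ~43.0 arcsec/century; gamma = beta = 0 gives ~28.7, i.e. 2/3 of the GR value.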

Discussion

In the end, let me address the pending question I raised earlier in this paper, namely the entity of the master wave function that governs the entire universe. My answer is that in the deepest layer of mother nature there are no laws at all. All regularities or physical laws are nothing but statistically correct patterns or statements. The law of large numbers and the central limit theorem tell us that even out of complete randomness we may still expect certain patterns to appear, as long as our sampling procedures are consistent. In other words, order comes not from nature itself, but from the ordered actions of its observer.

It is not abstract mathematics that ultimately governs the universe. "The unreasonable effectiveness of mathematics in the natural sciences", as admired by Eugene Wigner, is in my view not due to any divine power of mathematics, but because it is the only language that we human beings can make use of to describe nature. The fact that some mathematical theories are extremely powerful in physical studies is simply because they happen to share certain structural similarities with the physical phenomena in question. All successful scientific theories are nothing but sets of self-consistent logical statements, including but not limited to our definitions of time and energy. Any theory that first seemed perfect but was later proven incomplete, for example Newtonian mechanics, failed because its seemingly "perfect" logical structure had not yet been challenged by the hardest test. In the case of Newtonian mechanics, the problem was that the Galilean transform was not consistent with our definition of time (whereas the Lorentz transform was). The invariance of the speed of light rightly complies with our definition of time. As I have revealed in this paper, the velocity of light, as the conversion coefficient between spatial separations and temporal progressions (namely the degree of spatial asymmetry), deservedly has to be constant.

Back to the entity of the master wave of probability density: it is probably no more than an imaginary construct that best explains the phenomena occurring on sufficiently macroscopic scales. In this sense, even the fundamental physical constants may only seem invariant because we always measure them over a huge number of trials. The completely random and stochastic nature of the quantum mechanical world can alternatively be interpreted as if it were the basic constants that are wandering, and vice versa; after all, how to interpret nature is a matter of subjective decision. The uncertainty principle tells us that only when we have carried out a sufficient number of trials may we obtain a result with higher certainty. In his famous book "What is Life?", Erwin Schrödinger sharply pointed out that all physical laws become reliable only when they are judged by the average behavior of a huge number of atoms, which is the very reason why all living creatures have to acquire a certain macroscopic size.

A search for the ultimate law of nature will necessarily end up with "law without law". This conclusion can be drawn from repeated rounds of logical reasoning: if we worship some deterministic rule as the final destination of scientific exploration, then what renders the deterministic character to that rule? The only way to escape such an infinite regress is to accept Wheeler's slogan, "law without law".
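As a minimal numerical illustration of this statistical point (a toy demonstration of my own, not part of the model), consistent sampling of pure randomness already produces a stable, reproducible pattern:

import random

# Toy illustration (mine): the mean of N fair coin flips concentrates ever more
# tightly around 1/2 as N grows, with spread shrinking like 1/sqrt(N).
random.seed(0)
for n in (100, 10_000, 1_000_000):
    mean = sum(random.randint(0, 1) for _ in range(n)) / n
    print(f"N = {n:>9,}: sample mean = {mean:.4f}")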
We can also reach this conclusion from yet another approach. The concept of entropy and the 2nd law of thermodynamics as the explanation of the arrow of time, when inspected carefully enough, actually contradict the notions of trajectory, history and even physical law. It is not hard to realize that if we could distinguish particles of the same type by the tiniest difference, there could no longer be any high-entropy or low-entropy states. If, on the other hand, we are totally unable to relate two particles across two time-lapse snapshots of reality, one as precursor and the other as descendant, then the notions of the trajectory or history of a specific particle become nonsense. Thus, any physical law that is believed to "govern" the motion of particles must be nothing but our wishful illusion.

Eventually, our newly proposed theoretical paradigm may rephrase the almighty principle of least action as a principle of most probability, which is equivalent to least asymmetry in space. Physical action has the unit of angular momentum, whose conjugate quantity in Heisenberg's uncertainty principle is dimensionless and can be understood either as an angle or as a probability. In the latter context, a larger quantum angular momentum is equivalent to a lesser probability of occurrence of that situation. Thus, the principle of least action is a rephrasing of the trivial fact that the event with the highest probability is the most likely to happen (what a statement of zero-bit information!). As the scale of our observation grows to the macroscopic, the predominance of the highest probability over the second highest becomes exponentially more overwhelming, owing to the multiplicative nature of probability. Therefore, even a mere probabilistic pattern may well look like a virtually deterministic law.

Unlike the speed of light and the gravitational constant, the Planck "constant", as the rate-limiting factor between deterministic classical physics and the probabilistic quantum mechanical world, now has good reason to be a variable that depends on the size of the universe. A cosmological event horizon that encloses a larger volume may contain more spatial quanta within it (even without the scale-factor-dependent decrease of the diameter of the quanta), and thus quite understandably corresponds to a much more deterministic universe. It is rather natural that the Planck scale is a set of evolving standards instead of rigid rulers. Its evolution in accordance with the size or age of our universe (every possible value of the Planck "constant" shall eventually be realized at some cosmic stage) mercifully frees us from the "mission impossible" of drawing specific meaning from the current values of many physical constants, or of "glimpsing the mind of God" who governs the entire universe.
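The claim that the multiplicative nature of probability makes the dominant outcome exponentially more dominant can be illustrated with a trivial numerical sketch (my own toy numbers, purely for illustration):

# Toy illustration (mine): per-step probabilities 0.51 vs 0.49 look almost identical,
# but over many multiplicative steps the more probable branch dominates exponentially.
p_best, p_next = 0.51, 0.49
for steps in (1, 100, 10_000):
    ratio = (p_best / p_next) ** steps
    print(f"after {steps:>6} steps, probability ratio = {ratio:.3g}")
# The ratio grows roughly like exp(0.04 * steps): ≈1.04, ≈55, ≈5e173.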

Conclusion

In response to Einstein’s famous quote “God does not play dice”, Bohr warned him “Don’t tell the God what to do, Einstein.” Today, we have found a better reply: “Yes, you are right, Dr. Einstein, but in the sense that the dicey character of our mother nature is the very proof that there is no God at all.” As for the possibility that there exists another universe (or other universes) that obeys a set of totally different physical laws (I mean a completely different mathematical logic, instead of just the value of physical constants differ), from my aforementioned view that all the theories of natural sciences are eventually a set of mathematically self-consistent (in the most rigorous sense) logical architecture, I believe that the current one we have is the only solution that does not have any self-contradiction. However, as Kurt Gödel proved, this self-consistency may not be provable in the end. If you ask me why the Planck “constant” must be inversely proportional to the 4th power of the cosmic scale factor, my best answer (not only now, but forever) might be “Because it keeps the whole story consistent.”

References