Case Report
J Phys Astron, Volume 6(1)

Universal Binary Design and Curved Classical Mechanics

*Correspondence:
Mark T, Department of Astrophysics, University of Canada, Canada
Tel: 33662922853; E-mail: [email protected]

Received Date: December 08, 2017; Accepted Date: March 21, 2018; Published Date: March 28, 2018

Citation: Mark T. Universal Binary Design and Curved Classical Mechanics. J Phys Astron. 2018; 6(1):140

Abstract

This paper asserts that the universe operates in a binary manner whose measurability is limited by the information processing ability of the observer in all dimensions: universally, at the large scale, at the small scale, and through time. This has interesting consequences, such as raising the speed limit of the universe to √2c, doing away with the need for a space-time fabric, giving parsimonious accounts of dark matter and dark energy, and accounting for theories such as MOND. The assertion is arrived at on the basis of an original cosmology and mathematical representations of universal rudiments and gravity. Newton’s gravitational constant is given a range of 0.869G–G that should be used at long observational distances.

Keywords

Dark matter; Dark energy; Gravitation; Entropy; MOND

Introduction

This paper claims that the universe operates in a binary manner and that √2c is the speed limit of the universe. This conclusion is arrived at on the basis of an original cosmology centered on a full account of dark matter and dark energy, together with a new paradigm for observing phenomena via a speed- and energy-dependent observational field. The cosmology accounts for a maximum speed of √2c by linking dark matter, one binary output of the universe, directly to dark energy. The new paradigm creates a basic framework for limiting measurement possibilities classically, based on speed- and energy-dependent observational fields, that can be applied in cases of long observational distances, large-scale phenomena visible at the speed of light, and quantum uncertainty. √2c is again asserted as the universal speed limit on the basis of the mathematical representation of E = mc² (with both sides taken as scalars) as a universe-containing sphere with a conic ratio of √2, all four properties (energy, mass, the speed of light, and entropy) being scalar and time-invariant. Newton’s gravitational constant is given a range of 0.869G–G that should be used at long observational distances such as those of MOND. An account of gravity is provided [1-3].

Case Report

Universal binary: Observational fields and a universal speed limit based on finite entropy

The observational field of an observer has the following rules:

1. It is related proportionally to increases in the energy proportionality of the observer: c at space-time (E = pc), c² within space-time (E = mc²), and c⁴ as pure matter (dE/dp = 0).

2. The range of speed affecting observational fields is c^(−1.4) to 1.4c, with measurement possibilities decreasing at higher speeds.

3. It is characterized as either x (classical), x² (quantum), or x⁴ (entropically spent), with only one measurement possibility available when x and c match: spin. The measurement of space-time from within space-time (xc²) requires a composite measurement construction.

4. Within the classical paradigm x, uncertainty within an observational field depends on the angular frequency of the observer and of the observed [4], serving the same role that the reduced Planck constant serves within x².

5. Silver spirals would be used to form the observational field: speed would increase the size of the rectangle proportionally, and the energy of the observer would increase the depth of the cylinder proportionally.

Measurements appear to be limited by the information processing ability of the observer in a permutable universe composed of binary outputs. The best approach to ascertaining truth is therefore to set unfalsifiable bounding conditions that hold regardless of the information processing ability of the observer. Considering that the teleological and universal law of entropy must be fulfilled by the existence of mass, and that the speed of light and mass must remain in a constant ratio according to E = mc², these proportional scalar quantities are representable in polar coordinates. Since these four scalar quantities are so related exclusively, they can acquire physical meaning as an entropy vector by being represented on a sphere with a conic ratio of √2, energy thereby being both subtended and angular. Moreover, since this combination would be sufficient to create expansion of the universe rather than reducing to one, the limiting feature c of this universe as measured from within space-time would be multiplied by √2 to obtain the scalar true value c_T. v_T would be limited at the low end to roughly 10⁻¹¹, i.e. c^(−1.4), because there must always be an area subtended (Figure 1).
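As a quick numeric check of these limits, the following is a minimal Python sketch assuming SI units and a literal reading of c^(−1.4) as the speed of light raised to the power −1.4 (the text rounds that scale to about 10⁻¹¹):

```python
import math

c = 299_792_458.0        # speed of light, m/s

c_T = math.sqrt(2) * c   # the scalar "true" speed limit claimed in the text
v_low = c ** -1.4        # literal low-end value; the text rounds this scale to ~1e-11

print(f"sqrt(2) = {math.sqrt(2):.3f} (the '1.4' used throughout the text)")
print(f"c_T = sqrt(2)*c = {c_T:.3e} m/s")
print(f"c^(-1.4) = {v_low:.3e}")
```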


Figure 1: Solar reactions create entropic wave dynamics.

From an abstract perspective, one can see that motion is limited not by a space-time fabric but by the limited information processing ability of the observer relative to quantum behavior and entropy. What natural phenomena account for our limited ability to process information about motion?

Discussion

Large scale binary: Entropic relativity amidst large galactic masses

From the point of view of an observer with the energy and speed of a gamma wave, a galaxy appears as three horizontal sheets. The top and bottom sheets represent dark matter, while the middle sheet contains the stars and an extremely prominent black hole. This is ultimately an efficient and efficiently permutable system, much in the same way as a quantum system (Figure 2).


Figure 2: Effect of dark matter on its containing galaxy.

Dark matter, seen as two of the three sheets at the speed of light and the energy of a gamma wave, appears to accrue around galactic masses to prevent spin-off [1] according to different physical laws. It forms these two sheets when observed at a speed approaching that of light and becomes slower and more diffuse as the observer decelerates. This indicates that dark matter abides by principles within a different dimension: a mass-energy dimension, in which dark matter is measured in terms proportionally related to the speed of the observer. As will be explained, this is ultimately the efficient result of equilibrating the net flow of entropy in response to light and gravity in the universe. Condensed matter would be composed of many 2×3 grids of quarks alternating through time between their quark, antiquark, and deactivated states, whereas dark matter would involve only one quark alternating randomly and increasingly slowly between its quark and antiquark; a toy sketch of this alternation follows below. Due to the infinitesimally small and fluid nature of dark matter particles, the two viewed sheets of dark matter represent all the dark matter in the universe acting together on the individual galaxy. The top layer contains all universal entropy deleted before that moment and the bottom layer contains all potential universal entropy, with a varying time vector incorporated within entropy itself.
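To make the alternation concrete, here is a minimal toy sketch in Python. It assumes each cell of a 2×3 condensed-matter grid simply cycles through the three states named above; the cycle order and timing are illustrative assumptions, not specified by the text:

```python
import numpy as np

rng = np.random.default_rng(1)
STATES = ("quark", "antiquark", "deactivated")  # the three states named in the text

def step(grid):
    """Advance every cell one tick, cycling
    quark -> antiquark -> deactivated -> quark (assumed order)."""
    return (grid + 1) % 3

grid = rng.integers(0, 3, size=(2, 3))  # random initial 2x3 configuration
for t in range(3):
    print(f"t={t}: " + " | ".join(STATES[s] for s in grid.flat))
    grid = step(grid)
```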

The entropic deletion process (irreversible energy transfer) occurs at 1.4c and is the result of black hole gravitation stripping all energy from mass itself. This speed is consistent with the abstract scalar sphere and with the fact that universal mass remains constant while 2/3 of universal energy is dark energy [2] resulting directly from the entropic deletion process: 2E = 2mc². This parsimoniously links two previously mysterious phenomena. Moreover, this speed is made possible by the rate of time reaching almost zero in a black hole. This phenomenon accounts for the accelerated expansion of the universe from dark energy, which counters the effects of gravity in an effort to maximize entropy.
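One way to read the identity above is as an algebraic link between the factor of 2 and the √2c deletion speed; the substitution below is an editorial observation offered as a sketch, not a derivation spelled out in the source:

```latex
2E \;=\; 2mc^{2} \;=\; m\left(\sqrt{2}\,c\right)^{2}, \qquad \sqrt{2}\,c \approx 1.4c
```

That is, doubling the energy at constant mass is formally equivalent to replacing c with √2c, the speed at which the text places the entropic deletion process.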

Bidirectional time: Gravity as negative volume

Gravity is ultimately a consequence of the existence of light in a universe that maximizes the rate of entropy as physical processes are undertaken.

Cosmologically it can be imagined that there was no gravity at all in the primordial universe, and that small masses operated only according to the other three fundamental forces. As primordial stars emitted a great deal of high-energy radiance, vastly increasing the scope of time and the limit of space in the universe, huge entropic dead-ends formed as the stars naturally died out. The major effect of the first black holes was to counteract these entropic dead-ends. Black holes achieve this by offering a time gateway in which mass-energy is stripped of all energy, thus eliminating entropy and reducing the rate of time to zero.

This can be understood from the rudiments of the universe and their relations. Assume that there is, initially, an abstract constant time, adjustable only according to the speed and energy of the observer, the latter owing to increasing quantum mechanical possibilities. Assume that the universe contains space, mass, and entropy, and that a maximized rate of entropy is ideal. Entropy clearly depends on the speed and energy of the observer as well as on universal mass and space. As such, one can deduce that maximum entropy is some ratio of mass, space, and time rate. The existence of much high-energy radiance within a hugely enlarged universe would drastically lower this ratio, as space increases without a commensurate increase in mass. The rate of time and observable entropy would then vary too greatly over incredibly large distances. Considering that entropy uses mass-energy within space and time, the solution for lowering time variability and reducing space must involve the elimination of existing entropy without disobeying conservation of mass.

How is this achieved? Consider a vector through space and time as velocity, and a vector through mass and energy as entropy. Time varies inversely with velocity and proportionally with energy. In a black hole, however, time (the vector) depends solely on space; mass, entropy, and time (the scalar) are irrelevant. As an object enters the event horizon, time uniformly and continuously slows with position as mass transfers energy (−mass/entropy). The gravity in a black hole is therefore simply dt/dr, and it provides a uniform dt/dr gradient operating irrespective of the motion and energy of the object. A third property, the volume of the object, is also translated as a uniform dt/dr gradient forms.

One may assume that, just as within a black hole, any uniform dt/dr gradient creates a curvature in space that is acted upon as gravity if energy is transferred. Yet these conditions simply cannot occur outside of a black hole. One can conclude that a uniform dt/dr gradient caused by irreversible energy transfer will necessarily lead to a black hole. Moreover, one can conclude that the existence of irreversible energy transfer and black holes would simultaneously create a disturbance in the original scalar flow of time. Since time has now become a vector with coordinates of volume in addition to motion and energy (Figure 3), rudimentary space-time and gravity are born. Space-time fabric therefore forms from other naturally occurring phenomena in the universe seeking equilibrium. Gravity can be understood more generally as a time vector that calibrates to absorbed volumes and creates negative volume. A gravitational wave would occur as a shockwave in the collective mass of dark matter due to the increase (merging black holes) or appearance (merging binary system) of the entropic deletion process [2].
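For a concrete sense of what a dt/dr gradient looks like, the following minimal Python sketch evaluates the standard Schwarzschild time-dilation factor and its radial gradient outside a solar-mass black hole. This is ordinary general relativity, included only as a point of comparison with the dt/dr notion above, not as the paper's own formalism:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def time_dilation(r, M):
    """Schwarzschild factor dtau/dt = sqrt(1 - r_s/r) for radius r > r_s."""
    r_s = 2 * G * M / c**2          # Schwarzschild radius (~2.95 km for the Sun)
    return np.sqrt(1 - r_s / r)

r = np.linspace(3.0e3, 1.0e5, 50)   # radii just outside the horizon, m
dt = time_dilation(r, M_SUN)
grad = np.gradient(dt, r)           # numerical dt/dr gradient

print(f"dt/dr near the horizon: {grad[0]:.3e} per metre")
print(f"dt/dr far from horizon: {grad[-1]:.3e} per metre")
```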


Figure 3: Thick lines represent the appearance of dark matter and can be extrapolated on the graphs to the left.

This theory would explain observations such as MOND at long distances and the precession of the orbit of Mercury. At the long distances presented by distant galaxies, an infinitesimally small dt/dr gradient appears within our observational field, distorting the field in much the same way that a black hole distorts space. Additionally, at the volatile energy transfer rates of the orbit of Mercury, a new degree of negative volume is created.

Calculation of G

[Equation image in the source, not reproducible here: the calculation of G yielding the 0.869G–G range.]

The equation corrects for the difference: the denominator r² reduces to r at high values in Newton’s law of universal gravitation [3]. While other values, such as √3x with DG, would also work for determining G, the overall theory relies on universal limiting values such as c being maximal at their scalar true values of √2x, and may therefore require using only √2 multipliers for all values. Moreover, while G is the specific limiting value of v and can be placed on another scalar sphere that can be reduced infinitesimally to 1, the universal scalar sphere cannot be reduced to one, as it is sufficient to create expansion.
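One observable consequence of letting the denominator fall from r² to r at long range is a flat rotation curve, since the circular speed obeys v² = Fr/m. The Python sketch below compares the two force laws; the galactic mass M, the crossover scale R0, and the 1/(R0·r) form of the modified law are illustrative assumptions, with only the 0.869G–G range taken from the text:

```python
import numpy as np

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
M = 1.0e41      # assumed galactic mass, kg (illustrative)
R0 = 3.0e20     # assumed crossover scale, m (illustrative)

def v_newton(r):
    """Circular speed under F = G M m / r^2, i.e. v = sqrt(G M / r)."""
    return np.sqrt(G * M / r)

def v_modified(r, g_factor=0.869):
    """Circular speed under a hypothetical long-range law
    F = (g_factor G) M m / (R0 r), i.e. v = sqrt(g_factor G M / R0),
    which is independent of r (a flat rotation curve).
    g_factor spans the text's 0.869G-G range."""
    v = np.sqrt(g_factor * G * M / R0)
    return np.full_like(np.asarray(r, dtype=float), v)

radii = np.array([1e20, 5e20, 2e21])       # sample galactocentric radii, m
print("Newtonian v (m/s):", v_newton(radii))
print("Modified  v (m/s):", v_modified(radii))
```

Under the Newtonian law the speed falls as 1/√r, while the modified law holds it constant, which is the MOND-like behaviour the text invokes.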

Small scale binary: Energy of the observer in relation to quantum entanglement

Quantum entanglement is a result of the mathematical idea, rooted in nature, that a bit of data (a qubit) with two possible outcomes only acquires an actual outcome upon measurement. In nature, this would mean that the outcomes are constantly interchanging as a universal law. Quantum entanglement arises because the system in question acquires a probability distribution for the outcome of a measurement of a property, such as spin along an axis of the second particle, only upon measurement of the first particle.
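The standard prediction being invoked can be illustrated with a minimal Python sketch: for a spin singlet measured along a shared axis, each side looks random alone, yet the outcomes are perfectly anti-correlated once compared. This is textbook quantum mechanics rendered as a sampling exercise, not the paper's binary mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_singlet_pairs(n=100_000):
    """Simulate spin measurements on singlet pairs along a shared axis.
    Each first-particle outcome is +1 or -1 with equal probability;
    the second particle's outcome is then the opposite value."""
    first = rng.choice([+1, -1], size=n)
    second = -first     # fixed only relative to the first outcome
    return first, second

a, b = measure_singlet_pairs()
print("mean(a)     =", a.mean())        # ~0: each side looks random alone
print("correlation =", (a * b).mean())  # -1: perfectly anti-correlated
```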

What are the universal laws that govern qubit outcome interchangeability? The answer may lie in what energy “senses”. Different energies sense differently, with “sense” here referring to that with which the energy can actually interact via energy transfer. Notably, the interface between baryonic and dark matter is dual-outcomed because its measurement depends on the speed and energy of the observer. This may apply to qubits as well (Table 1).

Property              Classical                Quantum
Qubits                n²             →         n
Uncertainty           0              →         Heisenberg uncertainty
Energy of observer    xc²            →         x²c²

Table 1. Classical-to-quantum shifts in qubits, uncertainty, and energy of the observer at the interface between baryonic and dark matter.

These account for differences in certainty. For example, since uncertainty increases as the number of qubits is reduced by a square-root factor, moving from classical theory to quantum theory the identifier must increase by that same factor, x → x². Measurement possibilities reduce when x and c match, without accounting for a composite measurement construction such as space-time. Full interaction on the scale of x⁴c⁴ would require an energy increase only attainable within a singularity. Nonetheless, these parameters may enable the construction of speed- and energy-dependent observational fields [3-6].

Conclusions

From these ideas combined, one can formulate the following theory:

1. Uncertainty is caused not by a reduction from classical bits to qubits, but by the information processing ability of the observer and randomness.

2. The contents of the universe operate according to binary combinations of the smallest kind that can only be interacted with at impossibly high energy levels proportional to c⁴ and low speeds of approximately c^(−1.4) [5].

3. The entropic deletion process at the interface of dark matter and baryonic matter occurs at 1.4c and can only be measured at impossibly high speeds of c, yet at energy that is proportional to c. This process is responsible for gravity.

4. Considering points (2) and (3), one can deduce that quantum mechanical possibilities increase as an action is taken, according to the energy of the observer rising from being proportional to c to being proportional to c⁴, and according to the speed of the observer decreasing from approximately 1.4c to approximately c^(−1.4).

5. Values above c^(−1.4) can be rounded to 10⁻¹¹ and, as such, the same scale can be used for the limiting value of G. This provides a mathematical solution to MOND based on the distortion of an observational field by a time gradient dt/dr, much in the same way that a uniform dt/dr gradient distorts space in a black hole.

References