Eliza Donavan Blog

Gravity Simplified

2021 Eliza Donavan

Gravity is a touchy subject for some. It elicits knee-jerk reactions, especially from those in the physics and cosmology community, as there are assumptions embedded in fossilized theories.

And then there is the New Age Community…ai yi yi!

That one is postmodernist in the extreme, and the flat earthers are in with them. Most have no knowledge of physics and are proud of it. Even worse, they denigrate those who do have knowledge of those fields, with a kind of inverse snobbery that is beyond belief or even common sense. Sometimes it is good to take a fresh look at something without presuppositions, but with some knowledge of how the universe works and of physical laws. These guys thumb their noses at that and think that all they have to do is think about it, manifest it, and poof! It’s real. If wishes were horses, beggars would ride.

So first let’s set some ground rules: it makes sense to look at certain devices that work, yet seem to have a strange kind of behavior that is not covered in mainstream physics, or post-Heaviside electromagnetics. Of course, that means we have to use the original Maxwell electromagnetics, in quaternions, which has not been made politically correct or lobotomized to exclude gravitational interactions.

So which ones are these?

1) The N machine of Bruce DePalma

2) The Searl generator/disk

3) The OTC-X1 of Otis T. Carr

4) The Nassikas Type 1 Thruster

5) The Podkletnov YBCO disk

6) The Biefeld-Brown disk

7) The Naudin lifter

8) The Kowsky Frost Crystal Experiment/ Deflexion crystal

Each of these is a working device.
Of these, I don’t think the Naudin lifter has been tested in a vacuum chamber the way the Townsend Brown disk has. It was theorized that the Brown disk worked due to an ion wind, but that theory fell apart when it still worked in a vacuum. No air, no ions.

First postulate: The ratio between the electric and gravitational “fields” is about 4.1 × 10^42. (http://www.batesville.k12.in.us/physics/phynet/e%26m/electrostatics/michaels_question.htm)
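
For the curious, this ratio is easy to check numerically. A minimal Python sketch, using standard CODATA constants, compares the Coulomb and Newtonian forces between two electrons; the separation cancels out, since both forces fall off as 1/r^2:

```python
# Ratio of electrostatic to gravitational force between two electrons.
# Constants are CODATA values; no particular separation is needed,
# because r^2 appears in both numerator and denominator and cancels.
k_e = 8.9875517923e9    # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11       # Newtonian gravitational constant, N m^2 / kg^2
q_e = 1.602176634e-19   # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

ratio = (k_e * q_e**2) / (G * m_e**2)
print(f"{ratio:.3e}")  # on the order of 4.17e42
```

The exact figure depends slightly on the constants used, but it always lands in the low 10^42 range quoted above.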

Yes, I know… we have a semantic problem with fields and potentials. The best definition is that time is the engine that turns potentials into fields. If there is no delta, no variation in amplitude over time, no vector in 3-space, then we have a potential. That potential would appear to be static on the surface, but internally we can have engines operating, varying in amplitude over time, that give the appearance of a potential. Gravity, electric, and magnetic forces have been called both fields and potentials interchangeably, probably because the actual definition has been forgotten, or perhaps more importantly suppressed or ignored.

First let’s look at the N machine. This piece of hardware can give us the first clue. We have a disk with an axial magnetic field; when spun, it produces an electric potential from hub to rim, but only in the frame of the non-moving observer. Why is that? If one were sitting on the disk, there would be no motion of the “field” and therefore no voltage from hub to rim. Actually, the potential varies from zero at the exact center of the hub to a maximum at the rim.
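
The lab-frame voltage of such a disk is standard textbook physics: integrating the motional EMF from the axis outward gives V(r) = (1/2)·B·ω·r², zero at the center and maximum at the rim, just as described. A small Python sketch, with purely illustrative numbers (the field strength, rpm, and radius below are assumptions, not measurements from any particular N machine):

```python
import math

# Hub-to-rim potential of a spinning Faraday/N-machine disk, lab frame.
# Integrating the motional EMF omega*B*r dr from 0 to r gives
# V(r) = 0.5 * B * omega * r**2.
B = 1.0                            # axial field, tesla (illustrative)
rpm = 6000.0                       # spin rate (illustrative)
omega = rpm * 2.0 * math.pi / 60.0 # angular speed, rad/s
R = 0.15                           # disk radius, m (illustrative)

def hub_to_rim_voltage(r, B=B, omega=omega):
    """Potential between the axis and radius r of the disk, in volts."""
    return 0.5 * B * omega * r**2

print(hub_to_rim_voltage(0.0))  # zero at the exact center
print(hub_to_rim_voltage(R))    # maximum at the rim, ~7 V here
```

The quadratic dependence on r is why homopolar machines deliver low voltage at enormous current.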

But we also have another effect. When the disk is spun, there is an apparent reversal of the Earth’s gravitational field every 180 degrees. Imagine someone mounted on a wheel, inverted every 180 degrees, and you get the idea. Why is this important? Well, in the original Maxwellian electromagnetics, when the gravitational potential varies, an electric field is seen: in one dimension del cross G, and in a higher space delta G. So what we are calling a pure electric field is actually a gravitational potential with an incredibly high delta, where the apparent gravitational component is minimized but never completely eliminated. With enormously high electric field tensions, we will see higher shadow vector gravitational potentials “pop out” of that high electric field, and the lifters seem to confirm that empirically.

Second postulate: The ratio between the electric and gravitational potentials is actually a delta, or a frequency.

Empirical formulas are worth their weight in gold. If one has a theory, then an experiment has to be performed to confirm that theory, to prove it. Sometimes the experiment is flawed: it seems to confirm part of the theory but becomes problematic with the other parts. But if we have a piece of hardware that is doing something, and we do not have a mathematical model of how it’s doing it, then it behooves us to backtrack and produce one based on its behavior.

I had the beginnings of one years ago, but it lacked critical elements. It’s time to look at that one again, but first let’s look at the other side of the coin, as what is termed “free energy” and “antigravity” are on opposite sides of that coin. It has been known that FE (Free Energy) devices, such as the Sparky Sweet VTA, seem to get lighter the more power is drawn from them. Remember that it’s the delta that we’re after, and time can affect gravitational attraction between masses. Change the time metric, and you change pretty much everything. The matter waves that make up mass are dependent on that delta as well, for the magnetic and electric dipole moments of the particle/wave structure. If time were reduced to zero, mass would also be reduced to zero. So much for the Higgs boson (actually, that was “discovered” through some statistical sleight of hand, when the funding was about to run out, but that’s another story). Also, changing the local time metric, in a bubble fashion, could cause the local volume of spacetime inside the bubble to have a lower energy density than the outside, and then the whole thing would rise like a helium balloon. It was called counterbary in the ’50s and ’60s, because even then antigravity was becoming a dirty word.

In the late ’50s a concerted effort was made to understand the connection between gravity and electromagnetics. A term was even invented for it: electrogravitics. A paper called Electrogravitic Systems was authored along this line. Many patents were granted in the field, including some to Townsend Brown, as well as others. So what happened? Sometime in the early ’60s the lid slammed down on all things gravitic, and not much more was mentioned. Some things did leak out, of course, and these were in the sci-fi venue. The ship that Klaatu came to Earth in, in 1951’s The Day the Earth Stood Still, was described as using a modified application of electromagnetic and nuclear energy. This was a tongue-in-cheek clue, as a few physicists know that the strong nuclear force can be modeled as a gravitational effect with local curvature among the nucleons. The Jupiter 2 of the original Lost in Space series, whose original script was written around the time the suppression was kicking in, had a gravimetric drive. The ridiculous attempt at a remake used rockets, and the joke was that the next remake would have the ship shot out of a cannon. Star Trek used warp drive, now termed the Alcubierre drive, which collapses space in front of the ship as it expands it in the back, for faster-than-light travel.

Despite the suppression, a few things did leak out. Robert L. Forward wrote articles on gravitational engineering, and even one called “The Ultimate Fuel,” about antimatter. The suppression was relentless, and after the turn of the 21st century very little information could be seen “in the open” about the subject, to the point where we are now on the verge of a dark age of science, in which the postmodernist mindset, created by those who engineered the suppression, threatens to collapse civilization. Yes, it is that serious.

So much for history.

I might get back into this, as the only thing we have seen recently is the Nassikas Lorentz force thruster. The heady days of free experimentation are long gone, and pressure must be applied to bring them back.

So we see that the electric field is related to the gravitational potential, and modeling the electromagnetic wave as a G x A wave when one is riding with it is perfectly reasonable; it is also the reason why they don’t do thought experiments about riding a beam of light. When we ride with it, time stops, and the deltas cease to exist. However, we know that energy can curve spacetime just as large collections of matter can. But these are just potentials, as we see when riding the wave, so potentials can curve spacetime just as matter can. It all depends on your perspective. Matter itself acts like a trapped potential, and can be charged or discharged. That matter has a high internal delta, and as a result only a small “leak” of gravitational potential, which is a good thing, as a high leakage would mean that the matter itself is unstable.

Is that a clue? Yes!

High atomic numbers, properly stimulated, can be teased to release higher than normal amounts of gravitational flux. This is why in the OTC-X1 the double cones were filled with mercury. In theory, even higher atomic numbers would be better, but the transuranics are far too unstable for this, at least here on Earth. Filling our emitters with stuff so unstable it can go into spontaneous fission is a really bad idea, at least until we can find nuclei at the far end of the island of stability, way up in an extended periodic table. In all deference to Bob Lazar’s element 115, I think he was probably confusing the element number with an atomic weight: indium’s atomic weight is about 115, though its element number is 49. Indium has some rather strange properties anyway, being the one element that “sings” when bent, and it interacts oddly with sound, which consists of longitudinal waves. Perhaps we should try Galinstan in those OTC-X1 cones; it is used as a mercury replacement in thermometers, and is an alloy of gallium, indium, and tin. I predict it will have some unusual and surprising properties when stimulated in an acoustic field along with a magnetic field bias. Why is this? Sound in a resonant cavity is not only a trapped potential, but a trapped longitudinal potential. A clue for this is Ed Leedskalnin, of Coral Castle fame, who was said to sing to his stones until they “sang with him” in a sympathetic resonance, after which he could move them around in a near weightless condition. There was also John Keely, who had devices with antigravity properties, built on what he called sympathetic vibratory physics. Tesla, too, had a fascination with sympathetic resonance in electromagnetics, and built several inventions based on it, most notably the Tesla coil.

Some Thoughts on Engines

So what do I mean by engines? I read about things such as vacuum engines in Bearden’s writings, but there were no real explanations for them. It is a bit irritating, like looking at a complex equation with no explanation of the unfamiliar terms. So here goes: say there is a car, and we know what comes out the exhaust pipe, and what goes in for fuel and air. Without taking the engine apart, we have to make assumptions about what goes on in there without direct observation. We know heat is developed, and mechanical energy output, and with knowledge of chemistry we can deduce the chemical reactions of the fuel and air. The same is true of vacuum engines, or engines in general. So with the B field, we know that we can take the magnetic vector potential, apply a delta to it, and the cross component is the B field. Time in that case is the engine that turns the potential into the field. The same is true for any potential. Now here’s the kicker: does time have energy? If the answer is yes, then the potential does not necessarily have to have energy in the normal sense. But we know that a pure static potential cannot do work, and must have a time-varying aspect in order to do so. There is a term, “potential energy,” which exists in the virtual realm and is released through interaction, either through motion, which requires time, or through variation in amplitude standing in one place, which also requires time. So if we are correct that potentials can curve space, and have potential energy, then it is the interaction of potentials with time, like the fuel and air in the auto engine, that creates the field. The interactions within that framework are the engine, and so we can interact enormously high potentials with small bits of time, or smaller potentials with large chunks of time, and get the same result.
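
In standard notation, that “cross component” step is just B = curl A. A minimal Python sketch: a numerical central-difference curl applied to the textbook vector potential A = (−B0·y/2, B0·x/2, 0) recovers the uniform axial field (0, 0, B0). (This shows the spatial curl only; the time-derivative “engine” would enter through dA/dt in the E field.)

```python
def A(x, y, z, B0=2.0):
    """Textbook vector potential whose curl is a uniform axial field B0."""
    return (-B0 * y / 2.0, B0 * x / 2.0, 0.0)

def curl(F, x, y, z, h=1e-6):
    """Numerical curl of a 3-vector field F at (x, y, z), central differences."""
    dFz_dy = (F(x, y + h, z)[2] - F(x, y - h, z)[2]) / (2 * h)
    dFy_dz = (F(x, y, z + h)[1] - F(x, y, z - h)[1]) / (2 * h)
    dFx_dz = (F(x, y, z + h)[0] - F(x, y, z - h)[0]) / (2 * h)
    dFz_dx = (F(x + h, y, z)[2] - F(x - h, y, z)[2]) / (2 * h)
    dFy_dx = (F(x + h, y, z)[1] - F(x - h, y, z)[1]) / (2 * h)
    dFx_dy = (F(x, y + h, z)[0] - F(x, y - h, z)[0]) / (2 * h)
    return (dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy)

B = curl(A, 0.3, -0.7, 1.2)  # any point works; the field is uniform
print(B)  # approximately (0.0, 0.0, 2.0)
```

Because A is linear in the coordinates, the central differences are essentially exact, and the recovered field is the same at every point.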

Commonalities

So where do we go with this? If one examines every working device, there are commonalities between them. Those dependent upon electric field interactions will have common performance characteristics, such as the Naudin lifter and the Biefeld-Brown effect disks. Both work well with asymmetric waveforms at a moderately high frequency, and high voltage. Brown went further than that when he made one electrode substantially larger than the other, giving an asymmetric density to the electric field gradient. So the key word here is “asymmetry.” Now let’s look at the Nassikas type 1 thruster, also called the Lorentz force thruster. Here we have a magnet placed into a YBCO collar that is supercooled in liquid nitrogen. What does this do? Due to the Meissner effect, it forces the magnetic field lines to go around the collar on one side, while the flux density remains tight on the other. We have a magnetic field density asymmetry, causing greater local curvature on one side and attenuating that curvature on the opposite side. And sure enough, the magnet moves toward the narrow, high flux density end. Now how about the N machine? Well, here we have a velocity asymmetry for the magnetic field, with the rim side of the magnet at a high rotational speed and the center nearly standing still: another asymmetry. The Podkletnov disk? Same thing, with an external magnetic field, a nearly nonrotating frame near the hub, and high velocity near the rim. The Searl disk? Same thing again, and this time more so, with the rollers at an extremely high speed and several layers of progressively larger rings, causing an even greater speed differential between the rollers of the ring assemblies. The OTC-X1 of Otis T. Carr? Here we have another dimension of asymmetry, this time with the gyroscopic component, and the vector converging above the disk and diverging below it.

It is important to note that for any field gradient to actually be measured, it must have an asymmetric component. If there were no gradient, it would be scalar by definition. So if we have a gradient, it also has a gradient of energy density, and thus local curvature of the vacuum. If we are to imitate nature, then that gradient, and preferably an asymmetry, would also be needed to observe an effect. So if we have field gradients with the required asymmetric component, then there should be either a propulsive effect, or a local curvature that would cause current to flow toward that volume of localized curvature, and thus apparent “free energy.” It’s not actually free at all; one is removing from the local vacuum a tiny amount, which changes the energy density locally and produces an effect on the masses and the time metric. But since the energy density has been estimated by Shinichi Seike at 10^92 ergs per cubic centimeter, we can draw enormous amounts without significantly destructive effects.

So we see that what we observe as fields, unlike Democritus’s atom so long ago, is divisible. In fact, it will probably be seen as an infinitely recursive fractal structure. The farther down we go in scale, the more dimensions open up, at least until we hit the Planck level, 10^-33 cm in size. Even that number might be debatable, as dimensions change as we go down in scale. So in higher dimensions, the finer the divisions become, and recently an octonion model has been proposed that is promising, using 8 dimensions out of a possibly huge number to unify all the forces seen in 3-space. This might be how to explain the impossibly huge delta, expressed as a rate of change over time, between the gravitational “field” and the electric: about 4.1 × 10^42 Hz. This would be expressed as an upper limit. However, here’s where things get really, really interesting. The period of that frequency, 2.439 × 10^-43 seconds, is close to the Planck time of 5.391247 × 10^-44 seconds. Then there is the wavelength, 7.3120 × 10^-35 meters, close to the Planck length of 1.616255 × 10^-35 meters. The ratio in both cases works out to about 4.52, without exponents (the two ratios have to match, since the wavelength is just the period times c, and the Planck length is likewise essentially the Planck time times c). Why is that ratio there, and what is the significance?
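
The arithmetic above is easy to reproduce. A quick Python sketch treats the ~4.1 × 10^42 ratio as a frequency, derives its period and wavelength, and compares them to the Planck time and Planck length:

```python
c = 2.99792458e8         # speed of light, m/s
f = 4.1e42               # the electric/gravitational ratio read as a frequency, Hz

period = 1.0 / f         # ~2.439e-43 s
wavelength = c / f       # ~7.312e-35 m

t_planck = 5.391247e-44  # Planck time, s
l_planck = 1.616255e-35  # Planck length, m

# Both ratios land near 4.52. They must agree, because
# wavelength / l_planck = (c * period) / (c * t_planck).
print(period / t_planck)
print(wavelength / l_planck)
```

Note that using the more precise 4.17 × 10^42 value for the force ratio shifts the number to about 4.45, so the ~4.52 figure depends on which rounding of the ratio one starts from.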

Then there is the Chandrasekhar limit. This is the temperature limit for photons in the universe: the “glue” that holds a photon together gives way at 6 billion kelvin, which is way, way below what its blackbody temperature might be. The particles making up this wave would be far too unstable to last any significant length of time. Unless it’s something else… So the next question would be: what particle/wave structure is responsible for propagating that? In “A Dual Ether Universe,” Sokolow mentions that the photon is composed of a neutrino/anti-neutrino pair bound together by a specific binding energy, which makes sense; when that binding energy is exceeded, it falls apart, causing the neutrino flash horizon seen in supernova blasts. We also know from quantum and wave mechanics that the higher the energy, the smaller the quantum, and therefore the greater the energy density per unit volume of that quantum. So here’s a stretch: what if the Chandrasekhar limit on temperature is like the periodic table’s island of stability, the low end of a bell curve that swings back up again as the energy is increased, except that in this case it is the curvature of spacetime keeping the pair together? If the spin 2 graviton is composed this way, then it would be nearly impossible to pry apart, and would appear to be a fundamental force or potential that cannot be reduced to smaller components. It would have incredibly high energy, and at those almost unimaginably small scales it would be operating in higher dimensions at the same time, and like the blind men and the elephant, would appear to be different things depending on where our dimensional perspective happened to be. This is a tenet of M theory as well, but I’m extending it to gravity in this case. If that is true, can the magnetic vector potential “A” be such a case?
And if that is true, then the model of the electromagnetic wave in the static sense would be two gravitational ripples in the active vacuum, caused by two interacting potentials out of phase in that dimensional spacetime. This might make modeling things like longitudinal or torsion wave propagation, or circular polarization, easier to understand.

Usually at this point I try to make a joke, like gravity being a really heavy concept to understand, but if it is really a fundamental force in the universe, not only attracting masses together but responsible for mass itself as well as the spacetime it exists in, then we are really on to something. It is best to think of a primary particle as a balloon, with the skin of the balloon as a Casimir boundary between the pressure outside and the pressure inside. Too much pressure inside and the balloon pops; the particle is unstable, as with the hyperons and superheavies. But the strange part is that it’s a crazy kind of gas, in that it can change densities, and implode as well, to create stable gravitons.

Conclusion

The normal attitude when one is confronted with weird phenomena that cannot be readily explained is “Hey, that is really strange…” followed by forgetting about it, because it can’t fit inside a ready-made cubbyhole in the mind. Other people, myself included, pop it into a cubbyhole marked “Better take a look at this later,” and see if there is a connection somewhere else that has not been seen before. Good scientists think like that, including everything, ignoring nothing. Of course, most of us are guilty of assumptions; those are merely tools to get us where we want to go, and stopping or ending with assumptions is a bad idea and an indication of faulty logic.

So we can begin with assumptions, as long as they lead somewhere, preferably to a working theory. It’s the beginning of the journey, much like opening the front door of your apartment or house. The theory is analogous to the path you walk to get where you’re going. And if others use that path and arrive at the same place, that repeatability is what makes science. After the path has been trodden often enough, perhaps a road or a walkway is built, and then the theory is proven and established. The problem for those of us on the path of gravity and similar forces is that so few follow. The path becomes overgrown and forgotten, and then others need to rediscover it, as with the decades-long gap between Tesla’s passing and the rediscovery of his work by the International Tesla Society, now defunct.

It is my sincerest desire that there will be others to follow on this path.

Peace!

Eliza Donavan
