Frontiers in Propulsion Science Outline

This article is under reconstruction on December 30, 2021.

Purpose of this Forum. This forum will serve as my (David Roffman’s) notes on the textbook, Frontiers in Propulsion Science (edited by Marc G. Millis and Eric W. Davis; published by the American Institute of Aeronautics and Astronautics, Inc.). The book, hereafter referred to only as The Textbook, provides the most up-to-date information on engineering physics as it relates to spaceship propulsion. Originally I posted questions (highlighted in red) to pursue at Embry-Riddle Aeronautical University, where I earned a B.S. in space physics. However, it only took me 5 semesters to earn my B.S., and much of that time was spent researching the density of the Martian atmosphere. This means that some of the questions remain to be addressed even though I now have my PhD in physics with a specialization in Computational Condensed Matter Theory.

My notes are broken up into sections that match the chapters of The Textbook.  Comments and/or corrections to my notes and questions by the AIAA textbook authors or other knowledgeable authorities are most welcome and will be published in the appropriate sections unless they are of a private or potentially classified nature.  Originally I wrote most of this outline in 2011. Back then our Pentagon had not yet admitted to frequent encounters between Navy FA-18 fighters and UFOs known as “Tic Tacs.” While the Pentagon does not confirm that the objects are piloted by aliens, Dr. Michael Salla makes a good case for Germans using alien technology. This is covered in some detail on my father’s website at ArkCode.com. While we may not yet know which type of breakthrough propulsion technology powers the Tic Tacs, as 2021 comes to an end we do know that there is real technology currently operating far in advance of known physics. Moreover, our late uncle, Eugene Roffman, played a major role in back-engineering what was found at Roswell, New Mexico in 1947. Between what he left us, and actions we have seen taken by American and British intelligence officials, it is only a matter of time until we are permitted to have a true understanding of our place in the universe.

FRONTIERS COVER


CHAPTER   TITLE                                                                                        STATUS OF NOTES
Preface   Preface for Frontiers in Propulsion Science                                                  Updated 2/12/2012
1         Recent History of Breakthrough Propulsion Studies                                            Updated 2/12/2012
2         Limits of Interstellar Flight Technology                                                     Updated 2/12/2012
3         Prerequisites for Space Drive Science                                                        Updated 2/12/2012
4         Review of Gravity Control Within Newtonian and General Relativistic Physics                  Updated 2/10/2012
5         Gravitational Experiments with Superconductors: History and Lessons                          Updated 2/10/2012
6         Nonviable Mechanical “Antigravity” Devices                                                   Updated 2/10/2012
7         Null Findings of Yamishita Electrogravitical Patent                                          Updated 6/28/11
8         Force Characterization of Asymmetrical Capacitor Thrusters in Air                            Updated 5/12/11
9         Experimental Findings of Asymmetrical Capacitor Thrusters for Various Gasses and Pressures   Updated 2/12/12
10        Propulsive Implication of Photon Momentum in Media                                           Updated 5/13/11
11        Experimental Results of the Woodward Effect on a Micro-Newton Thrust Balance                 Updated 2/3/12
12        Thrusting Against the Quantum Vacuum                                                         Updated 6/20/11
13        Inertial Mass from Stochastic Electrodynamics                                                Updated 2/3/12
14        Relativistic Limits of Spaceflight                                                           Updated 5/16/11
15        Faster-than-Light Approaches in General Relativity                                           Updated 2/10/12
16        Faster-than-Light Implications of Quantum Entanglement and Nonlocality                       Updated 2/9/12
17        Comparative Space Power Baselines                                                            Updated 2/7/12
18        On Extracting Energy from the Quantum Vacuum                                                 Updated 2/7/12
19        Investigating Sonoluminescence as a Means of Energy Harvesting                               Updated 2/7/12
20        Null Tests of “Free Energy” Claims                                                           Text Repaired and Updated 2/7/12
21        General Relativity Computational Tools and Conventions for Propulsion                        Updated 2/7/12
22        Prioritizing Pioneering Research                                                             Updated 5/16/11

PREFACE

This book focuses on the science of concepts, not the technology.  That doesn’t mean that technological approaches are not mentioned.  While higher-end breakthroughs may prove too difficult to achieve, the text explores each concept from a theoretical as well as a feasibility viewpoint.  The preface points out that the questioning itself increases understanding, even if there is no success.  It is important to remember that this was the first book of its kind.

Frontiers in Propulsion Science is not favorable towards Podkletnov’s gravity shields, T.T. Brown’s work, or Yamishita’s electrogravitics.  Anti-gravity is not endorsed as a legitimate discovery at the moment (despite the claims made in Nick Cook’s book, THE HUNT FOR ZERO POINT, and by a number of associates who have contacted my family on seemingly unrelated archeological issues). It is shown in chapters 8 and 9 that oscillators/lifters do not constitute anti-gravity devices.  Chapter 15 indicates that warp drives and wormholes are theoretically possible but would be incredibly difficult to engineer, with warp drives deemed infeasible at present.

The book also considers tapping the quantum vacuum, sono-fusion, causality, and faster-than-light travel methods.

       FRONTIERS IN PROPULSION SCIENCE  is an outstanding place to acquire a survey of unclassified work pertaining to Breakthrough Propulsion Physics.  But as the Preface acknowledges on page xxv, it “should not be interpreted as the definitive last word on the topic of seeking spaceflight breakthroughs.”  Indeed, at least one of the authors (Dr. Hal Puthoff) has had very public associations with key members of the UFO community, and eight of the 22 chapters are the property of the U.S. Government, meaning that they are subject to Government censorship.

My opinion of this book, after earning my bachelor’s degree in space physics and my Masters and PhD in physics, is as follows. The book covers many subjects, but the math to back the assertions is patchy.  While there may be some complex equations, there are few attempts to show derivations, as is done in almost all textbooks. The difficulty level is also highly variable between chapters. This book should really be classified as an overview or reference book for the field and not as a textbook (there are no questions/problems to solve and possible answers). In producing the write-ups I had to do further research to understand/elaborate on the content. Updated 1/9/2011

CHAPTER 1 (Recent History of Breakthrough Propulsion Studies). Chapter by Paul A. Gilster of the Tau Zero Foundation. Notes by David A. Roffman.

The revolutionary field of Breakthrough Propulsion Physics is dedicated to advancing our knowledge of theoretical propulsion science and novel power sources.  It entertains ideas from warp drives to the Quantum Vacuum.  For the time being, this new discipline is relatively underfunded and has scattered scientists who may or may not share work.  Those who do not share are somewhat of a burden to the whole because they force others to reinvent the wheel and drain limited research funds.  However, the “think-tank” has persevered in the face of budget crises, and has skillfully used its limited funds to test (and, as of February 2009 when this book was published, mostly disprove) ideas posited by many other researchers.

The textbook covers a multitude of propulsion proposals, including Space Drives, Wormholes, Faster-than-Light Travel (FTL), Anti-Gravity, Warp Drives, Zero Point Energy (ZPE), and so forth.  In order to define some of the terms, a Space Drive is a propulsion device that relies on the use of indigenous matter as a power source for motion.  However, since hydrogen and other gases in interstellar space are relatively scarce, this idea may not have merit unless the Quantum Vacuum is tapped.  The vacuum of space is not at all empty.  In fact, scientists wish to procure a variety of exotic particles and unusual “matter” from this un-voidly void.  Particle creation is governed by the uncertainty principle: ΔE*Δt ≥ (h-bar)/2.  This version is the energy-time uncertainty principle.  It allows for the creation of a particle of a specified energy for a specified amount of time.  Since h-bar (Planck’s constant divided by 2π) is so small (~10^-34 in SI units), large amounts of energy are only available for a very short time; plug in numbers to see what happens.
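To make that concrete, here is a quick plug-in of numbers (my own sketch, not from The Textbook), evaluating ΔE ≈ ħ/(2Δt) for a few borrowing times:

```python
# Energy-time uncertainty: Delta_E * Delta_t >= hbar/2, so the maximum energy
# "borrowable" from the vacuum for a time dt is about hbar / (2 * dt).
hbar = 1.054571817e-34  # reduced Planck constant, J*s

for dt in (1e-21, 1e-15, 1e-9):  # seconds
    dE = hbar / (2 * dt)
    print(f"dt = {dt:.0e} s  ->  dE ~ {dE:.2e} J")
# Even at 1e-21 s the borrowed energy is only ~5e-14 J -- the shorter the
# time, the larger the energy, and these fluctuations are fleeting indeed.
```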

The hopeful prospect in the vacuum is ZPE.  This is the lowest allowable energy state.  Currently, it can be drawn from the vacuum and used as an inefficient power source via the Casimir Effect – what happens when two uncharged conducting plates are placed in extremely close proximity and an attractive force arises from the vacuum between them.  Currently, the whole idea of the vacuum as a power source is listed as nonviable, but more testing is needed.
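For scale, the standard ideal-plate Casimir formula (a textbook result, not specific to this chapter) gives the attractive pressure P = π²ħc/(240a⁴) between two uncharged conducting plates a gap a apart:

```python
import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def casimir_pressure(a):
    """Attractive Casimir pressure (Pa) between ideal parallel plates a meters apart."""
    return math.pi**2 * hbar * c / (240 * a**4)

for a in (1e-6, 1e-7, 1e-8):
    print(f"gap {a:.0e} m  ->  pressure {casimir_pressure(a):.2e} Pa")
# ~1.3e-3 Pa at a 1-micron gap: measurable, but hardly a power plant, which
# is why the book lists vacuum power extraction as nonviable for now.
```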

Another bizarre quantum idea is quantum tunneling.  This uses a wormhole to travel faster than light in terms of distance traveled in a given unit of time (not the actual speed of the wave/particle in question).  Tunneling is also deemed nonviable by the textbook, because only huge wavelengths (allegedly too large to effectively carry information) can be passed through the “vortex.”  Consider the energy-time uncertainty principle and the fact that E = h*c/λ: larger wavelengths mean less energy, meaning more time is allowed to perhaps send a signal.  Once again though, more testing is needed to see if this path leads to a dead end.
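Combining those two formulas gives a rough heuristic (my own, and only a heuristic) for how the allowed time scale grows with wavelength:

```python
h = 6.62607015e-34      # Planck constant, J*s
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # m/s

for lam in (1e-9, 1e-6, 1e-3):      # wavelength, m
    E = h * c / lam                 # photon energy, E = h*c/lambda
    dt = hbar / (2 * E)             # time scale from Delta_E * Delta_t ~ hbar/2
    print(f"lambda {lam:.0e} m -> E = {E:.2e} J, dt ~ {dt:.2e} s")
# Larger wavelength -> smaller energy -> longer allowed time, as stated above.
```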

By the book’s publication, there had been several attempts to research the field of Breakthrough Propulsion Physics, including Vision-21, the Breakthrough Propulsion Physics Project (NASA based), Project Greenglow, and ESA’s General Studies Program and Advanced Concepts Team.  NASA’s project had a total of 1.6 million dollars to fund its research over a seven-year period.  Given that small amount of money, it is doubtful that the funds paid for more than the AC bill.  The program was eventually cut, and thus went nowhere (except, perhaps, for disproving Podkletnov’s Gravity Shield claims, amongst others).

GRAVITOMAGNETIC FIELDS IN ROTATING SUPERCONDUCTORS

“The coupling of electromagnetism, gravity, and spacetime,” according to Gilster, “offers ground for continuing interest.”  This raises questions about Eugene Podkletnov’s claims, which will be discussed in my review of Chapter 5.  Martin Tajmar and his associates used a spinning superconductor ring to detect what looked like a frame-dragging force that was much stronger than theory would have predicted.  Such effects showed up below a critical temperature for a number of materials, including a nonsuperconducting ring of aluminum.

CHAPTER 2 – Limits of Interstellar Flight Technology. Chapter by Robert H. Frisbee, Jet Propulsion Laboratory, Caltech.  Notes by David A. Roffman.

Voyages to other stars are currently science fiction, and it appears that they will stay that way for a long time.  Proposed (and approved by the Textbook) methods for propulsion are: Light-Sails, Matter-Antimatter annihilation rockets, and the Fusion Interstellar Ramjet.  Of course, these methods are currently infeasible, and would require much work and funding to develop, both of which are in great shortage – unless there are efforts in this field backed by so-called black budget funds, as was the case with stealth aircraft development.

LIGHT SAILS

The first of these three proposals (Light-Sails) relies on the use of a laser to fire a high-energy beam at a “sail” for propulsion.  A laser must be constructed on or around Earth for this plan to work, and there are no practical applications of this laser except for space travel.  This method does not involve solar energy, which is important because the available solar radiation decreases in accordance with the inverse square of the distance from the sun.

The laser-receiving sails would only be 63 atoms of aluminum thick (if that element is selected).  Such a thin layer couldn’t easily be manufactured on Earth, so the best approach may be to construct the sails in zero-G space.  The Textbook proposes that Al be sprayed onto a plastic sheet, and that the sun be allowed to burn away the plastic template, leaving only the Al sheet behind.  The sails’ low density would stand in complete antithesis to their size (i.e., length and width): total length might be as great as the distance between the Earth and Moon.

Size “impossibility” is also coupled with the heat produced by the laser on Earth.  Too much intensity could cause agglomeration (droplet formation, of Al in this case, on the surface).  Evaporating Al could prove unpleasant for any mission itinerary.  Furthermore, the laser’s effectiveness would be reduced by relativistic effects (i.e., the relativistic mass increase is a big problem).  Also, the redshift that follows would alter the reflectivity and absorbance of the sail.

The craft as a whole could be a two-stage adventure, with one stage reflecting laser light to another.  Although this design is a possibility, it seems to demand far too much work and too many “impossible” designs (even with luck).  Also, the “sails” would be subject to tearing by interstellar particles.  Things could also get in the way of the laser; the question is what effects they would have on propulsion.

THE FUSION RAMJET

The interstellar particles are some of the same “dust” that can be harvested for the Fusion Ramjet.  The ramjet approach utilizes hydrogen (Deuterium specifically – it has one proton and one neutron) and Helium-3 in a reactor, fusing the Deuterium to create Helium.  All of the fuel is gathered in situ; that is, it is collected from the environment.  There is a problem though: efficient fusion is currently impossible, except in stars.  There are fusion reactors today, but they require a much greater input of energy than they return as output.

Impossibility also dogs the anti-matter approach.  Antimatter can barely be harvested today.  However, if production obstacles can be overcome, it would be possible to generate enough energy to travel at least 50% of the speed of light.  There are two key reactions.  The first is where a proton and anti-proton are combined to yield, on average, two uncharged pions, 1.5 positive pions, and 1.5 negative pions.  The uncharged particles transform into deadly gamma rays; the charged particles become muons and neutrinos.  The second reaction involves a positron (positively charged electron) and an electron.  This yields two gamma rays that are 355 times less deadly than those produced in the first reaction.
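A quick E = mc² tally (my own check, not the book’s numbers) shows why the two reactions differ so much in gamma-ray harshness:

```python
c = 2.99792458e8          # m/s
m_p = 1.67262192e-27      # proton (and antiproton) mass, kg
m_e = 9.1093837e-31       # electron (and positron) mass, kg
MeV = 1.602176634e-13     # joules per MeV

E_ppbar = 2 * m_p * c**2  # proton-antiproton rest energy released
E_ee = 2 * m_e * c**2     # electron-positron annihilation energy

print(f"p + pbar: {E_ppbar:.2e} J = {E_ppbar/MeV:.0f} MeV total")
print(f"e+ + e-:  {E_ee:.2e} J = {E_ee/MeV:.3f} MeV (two 0.511 MeV gammas)")
# Each e+e- gamma carries only 0.511 MeV, a few hundred times softer than the
# gammas from neutral-pion decay, consistent with the chapter's comparison.
```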

A problem associated with all types of non-space bending propulsion is the amount of time required to reach peak velocity.  In many of the theoretical cases, deceleration after reaching that velocity also poses a problem.  Thus, the speed up and slow down periods would require too much time.

The desired great speeds also have another weakness: interstellar dust.  It is proposed to build a dust shield, but how effective can it be?  Although the impact speed is great, the thin profile of the sail (again, only 63 atoms of aluminum) does not allow for great heat generation in the short period of time involved during the impact (at 0.1c, just 5 X 10^-16 seconds).
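Checking that transit-time figure (my own arithmetic, assuming an aluminum atomic diameter of about 0.286 nm):

```python
c = 2.99792458e8
d_Al = 2.86e-10               # approx. diameter of an aluminum atom, m (assumption)
thickness = 63 * d_Al         # a 63-atom-thick sail
v = 0.1 * c                   # impact speed of interstellar dust

print(f"sail thickness ~ {thickness:.1e} m")
print(f"transit time at 0.1c ~ {thickness / v:.0e} s")
# ~6e-16 s, in line with the ~5e-16 s quoted above -- too brief for much heating.
```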

This dust would also impede the ramjet: interstellar dust would slow down the craft with drag.  The hydrogen also may not be useful for the ramjet.  Deuterium (a form of hydrogen) is scarce compared to regular hydrogen, so much of the impacting dust is “waste.”

A more conventional, but no less wasteful, idea for transport is pulsed fission propulsion.  This process releases explosive charges behind the craft for acceleration.  These charges are nuclear devices, providing an opportunity of a lifetime to surf the “big one.”  They would allow speeds of 3.3% of light to be reached.  However, the number of nukes required would be in the hundreds of thousands, with a one-megaton yield each.  Unfortunately for this craft’s bold designer (Freeman Dyson), the world does not possess that many warheads.  Also, one warhead must be released every few seconds.  Another complication is that the propulsion would be too slow, taking 130 years to reach Alpha Centauri.  On the positive side, Dr. Strangelove (see Part 10 of 10 for the movie) would love such an idea.
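The 130-year figure is easy to verify (a back-of-the-envelope check of my own, ignoring the acceleration phase):

```python
c = 2.99792458e8
ly = 9.4607e15                # meters per light year
yr = 3.156e7                  # seconds per year

d = 4.37 * ly                 # distance to Alpha Centauri
v = 0.033 * c                 # 3.3% of light speed
print(f"cruise time ~ {d / v / yr:.0f} years")   # ~132 years
```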

FISSION FRAGMENT PROPULSION

Yet another nuclear idea is to shoot waste out of the pipe of a rocket (fission fragment propulsion).  As the name suggests, fragments of U-235 fission are used as exhaust.  Some of the favorites in the book are Sr-90 (Strontium) and Xe-136 (Xenon).  Like all “reasonable” propulsion options, the maximum velocity is low (in this case about 0.03c).  The “waste” is “excreted” via magnetic fields.

A variant of the fission fragment propulsion rocket is fission fragment “Sails.”  A fissionable sail is used here in which the layer that “burns” releases particles to be intercepted by the absorbing layer.  The sail will degrade and become less efficient over time.

ANTI-MATTER

The anti-matter approach can yield much energy, but currently only about 10 nanograms of antimatter are produced a year.  Storage is also a problem.  An anti-proton is subject to the space-charge limits imposed on any ion plasma; with current magnet technology this equals about 10^10 to 10^12 ions/cm^3, or 10^-14 to 10^-12 g/cm^3.  Electromagnets make storage an expense too.  Due to these problems, a rocket may seek to use liquid hydrogen and solid anti-hydrogen (in floating pellet form).  The anti-hydrogen must be kept at 1 K to avoid contact with the container (destruction).
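Those two density ranges are consistent, as a one-line conversion shows (my own check):

```python
m_pbar = 1.6726e-24           # antiproton mass in grams

for n in (1e10, 1e12):        # ions per cm^3, the space-charge-limited range
    print(f"{n:.0e} ions/cm^3  ->  {n * m_pbar:.1e} g/cm^3")
# 1e10 -> ~1.7e-14 g/cm^3 and 1e12 -> ~1.7e-12 g/cm^3, matching the text.
```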

Although anti-matter production is not currently efficient, we can use anti-matter as a catalyst.  It can be used to lower the activation energy of fission, which in turn can be used to trigger fusion.  Negative muons could be used to generate 16 neutrons per fission rather than the standard 2 to 3.  This hybrid approach could allow a 130-day Mars mission (round trip with a 30-day stopover).  It would also be possible to make a round trip to Jupiter in 1.5 years (with a 30-day stopover).  These approaches (technologies) are currently available and quite feasible.  This would be useful for travel within the solar system, but isn’t it a little dangerous (nuclear, fission, and anti-matter risks)?

The chapter discusses how human habitation of other worlds may require journeys lasting hundreds of years.  If so, multiple generations must be born, raised, and die on board the ships.  The minimum initial crew size must be 50 in order to have a genetically stable crew over 2,000 years (to prevent genetic drift and excessive inbreeding).  However, more realistic mission plans should probably span no more than 40 years, so that the results can be viewed within the lifetimes of the scientists working on the project.  The ethical issues of multiple-generation ships are problematic: kids would be denied a home to remember or to look forward to occupying.

Ships of this size, without a dramatic breakthrough, are probably going to be impossible for centuries.  The amount of power and/or antimatter required would take anywhere from hundreds to trillions of years to produce at our current rates.  In the short term, high specific impulse (Isp) comes at the cost of slow velocities; over the long term there is no such tradeoff.  Because of the short-term tradeoff, high-Isp systems have not been used.  Furthermore, such spaceships would also require enhanced communications technologies, and improved navigation systems are required.  Slowing the rate of damage (longer life spans for equipment) may require less efficient, but more reliable, technology.

With the anti-matter rocket, we must first build an antimatter factory.  For laser-sail power it is possible to use the sun at close range as a power source (for the laser), but this would require a huge sail.  With the laser sails, our lasers aren’t accurate enough over the very long distances out to the stars.  The solar sail is the simplest design in concept, yet it is among the hardest to build, and a dust shield must also be built for effective use.  In conclusion, we do not know if the technologies mentioned here will ever be used, but they are possible from a physics standpoint.  From an engineering and practicality standpoint, all of the propulsion systems mentioned may prove impossible.  But, as we shall explore, there are other options covered in Frontiers in Propulsion Science.

ACCELERATION NOTES

A light sail that accelerates at 0.0488 g (a g is about 9.81 m/s^2) and reaches a cruise velocity of 0.5c on a 40-light-year flyby would take 84.97 years to reach its target star, but the signal would not reach the Earth for another 40 years.  Thus the mission would take 124.97 years to get the signal back to Earth.  There are about 1,000 stars within 40 LY of Earth.
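Those mission numbers check out under constant acceleration followed by a cruise (a non-relativistic sketch of my own, which is rough at 0.5c but reproduces the book’s figures):

```python
g = 9.81
c = 2.99792458e8
ly = 9.4607e15                 # meters per light year
yr = 3.156e7                   # seconds per year

a = 0.0488 * g                 # sail acceleration
v = 0.5 * c                    # cruise velocity
d_total = 40 * ly              # flyby distance

t_acc = v / a                  # time to reach cruise speed
d_acc = v**2 / (2 * a)         # distance covered while accelerating
t_trip = (t_acc + (d_total - d_acc) / v) / yr

print(f"accel phase: {t_acc/yr:.1f} yr over {d_acc/ly:.2f} ly")
print(f"one-way trip: {t_trip:.2f} yr; signal home after {t_trip + 40:.2f} yr")
# ~85 yr out and ~125 yr for the data round-trip, matching the text.
```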

CHAPTER 3 – Prerequisites for Space Drive Science

Chapter by Marc G. Millis, NASA Glenn Research Center, Cleveland, Ohio. Notes by David A Roffman

A space drive is a device that uses indigenous mass as a means to propel itself.  With that in mind, the main issue with a space drive is: what is the reaction mass?  The answer to this question can vary, but there is no apparent winning solution at the moment.  Galactic Hydrogen, Dark Matter, Dark Energy, Cosmic Background Radiation, Quantum Vacuum Fluctuations, and other media are possibilities.  However, the mass densities for many of these are minute.  Due to our lack of knowledge, the values (energy densities) provided in the book may be off by as much as 120 orders of magnitude (J/m^3).  Ever hear of a googol (a one with 100 zeros after it)?  Well, imagine being off by more than that.  See Table 1 from Chapter 3 below:

Frontiers in Propulsion Science, Chapter 3, Table 1

KNOWN INDIGENOUS SPACE PHENOMENA

Known forms of mass and energy                                Mass density (kg/m^3)   Energy density (J/m^3)
Total matter in the universe (critical density), Ω = 1.00     9.5 X 10^-27            8.6 X 10^-10
Dark energy, Ω_Λ = 0.73                                       6.9 X 10^-27            6.2 X 10^-10
Dark matter, Ω_DM = 0.22                                      2.1 X 10^-27            1.9 X 10^-10
Baryonic (normal) matter, Ω_B = 0.04                          3.8 X 10^-28            3.4 X 10^-11
Photons and relativistic matter, Ω_rel = 8.3 X 10^-4          7.9 X 10^-31            7.1 X 10^-14
Cosmic background radiation, Ω_CMB = 10^-5                    10^-31                  10^-15
Quantum vacuum fluctuations:
  Inferred as dark energy                                     10^-26                  10^-9
  Up to nucleon Compton frequency (10^23 Hz)                  10^18                   10^35
  Up to Planck limit (10^43 Hz)                               10^98                   10^113
Galactic hydrogen                                             3.3 X 10^-21            3.0 X 10^-4
Spacetime itself:
  In terms of mass density                                    9.5 X 10^-27            8.6 X 10^-10
  General Relativity analogy to Young’s modulus               5.3 X 10^25             4.8 X 10^42

Ignorance is not the only problem.  A major issue with space drives is conservation of momentum.  With space nearly empty according to our incomplete measurements, we probably cannot easily use it as a medium.  Proposals for how to circumvent this involve using space-time itself as a means of propagation, but not much is known here.  Further problems are the inability to incorporate general relativity into a frame-dependent version of Mach’s principle, and not knowing whether the geometric or the Euclidean view of relativity is correct.  Mach’s principle states that an inertial frame is created by, and connected to, the surrounding matter in the universe.  If there is a universal reference frame, then it might be possible to use it as a propulsion source.

When considering using space-time itself as a means of propulsion, one must consider how inflexible space-time is.  According to a calculation in the book, the stiffness (a measure of how much an object is strained as a result of distortion) is about 4.8 X 10^42 N/m^2 (Pa).  This very high figure indicates that space-time is not easily bent, and that bending it requires an enormous amount of energy or mass.  The difficulties implied by our ignorance have left us in the “questioning” stage of the scientific method.

There are some common concepts that must be discarded when considering the space drive.  One such notion is infinite specific impulse (Isp).  Just because the fuel supply is unlimited doesn’t mean the same for Isp.  Isp = F/[g X (dm/dt)].  The derivative here is the propellant mass flow rate.  What propellant is flowing out of the original wet mass?  None.  A space drive uses exterior resources as “fuel,” not interior (onboard) ones that deplete (as in the standard rocket equation).  Because the denominator of the Isp equation vanishes, the formula becomes meaningless.
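To see the breakdown concretely, here is the Isp formula in a few lines (my own illustration):

```python
def isp(thrust_N, mdot_kg_s, g0=9.81):
    """Specific impulse in seconds: Isp = F / (g0 * dm/dt)."""
    return thrust_N / (g0 * mdot_kg_s)

print(f"rocket: Isp = {isp(1000.0, 0.5):.0f} s")   # well defined: ~204 s
# A space drive expels no propellant, so dm/dt -> 0:
# isp(1000.0, 0.0)  ->  ZeroDivisionError: the metric simply does not apply.
```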

Another standard equation that becomes meaningless is E = 0.5 X Mp X (Isp X g)^2.  It would state that a space drive requires infinite energy, or that zero energy would be required if there were no propellant.  This makes no sense, so the equation must go.  Cancelling out the effects of inertia on a craft is currently not possible, so inertial dampeners as seen in Star Trek and Stargate are not possible.  The positive uses of altering inertia are not yet fully known.

The gravity shield is asserted to be an infeasible option.  It has failed numerous tests, and it would defy the Equivalence Principle, which states that if gravitational mass is modified, then inertial mass must be modified too.  Gravity would thus not simply be reduced by such a device: the inertial mass of the rocket above any such g-shield would be altered along with its gravitational mass.

Maneuvers with a space drive are thought of in terms of kinetic energy (a space drive converts potential energy to kinetic), while a rocket’s maneuvers are thought of in terms of velocity increments.  The space drive hence has a different equation for energy input.  It is also true for non-relativistic flight (speeds less than 10% that of light) that a space drive uses two to three times less energy than a rocket.  In terms of specific impulse, the space drive wins by 150 orders of magnitude for a rendezvous mission, and 72 orders of magnitude for a one-way trip.  This data is for hypothetical discoveries pertaining to deep space flight; space drive propellant type is not considered.

Instead of pondering deep space flight like the last paragraph, Earth-to-orbit energies will be discussed.  The book is very brief here, with mostly equations, but the bottom line is that the space drive would be 3.65 times more energy-efficient than rockets.  Thus the space drive scores another win here.

In comparing the space drive and the rocket engine for levitation, the space drive has an advantage because it expends no propellant and doesn’t fall when it runs out of fuel.  How to view this levitation concept depends on how “force” is used in the equations, amongst other factors.  One approach is to compute the energy needed to remove an object from a gravitational field (i.e., as if moving the object to infinity).  The book says that to levitate a 1 kg object near the Earth’s surface in this sense would take 62 megajoules (about twice the requirement for low Earth orbit).  But the book doesn’t specify how to do it, as the method is unknown.
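The 62 MJ figure is just the gravitational binding energy GM/R per kilogram, and the factor of two versus LEO follows from orbital mechanics (my own check; the exact ratio depends on the orbit assumed):

```python
G = 6.674e-11
M = 5.972e24                   # Earth mass, kg
R = 6.371e6                    # Earth radius, m
h = 4e5                        # ~400 km LEO altitude (assumption)

E_inf = G * M / R                                   # J to move 1 kg to infinity
v_orb = (G * M / (R + h))**0.5                      # circular orbital speed
E_leo = 0.5 * v_orb**2 + G * M * (1/R - 1/(R + h))  # climb + orbital KE per kg

print(f"remove 1 kg from Earth's field: {E_inf/1e6:.1f} MJ")
print(f"place 1 kg in ~400 km LEO:      {E_leo/1e6:.1f} MJ (ratio {E_inf/E_leo:.2f})")
# ~62.6 MJ vs ~33 MJ: roughly the factor of two quoted above.
```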

There are several hypothetical space drives; FRONTIERS discusses ten versions in this chapter.  The Bussard Interstellar Ramjet collects galactic hydrogen for fusion use.  Two problems are introduced (aside from the fact that practical fusion is currently infeasible): the large volume of space that must be swept to collect hydrogen, and the need for a laser to start the craft accelerating.  The latter is needed because the jet cannot collect adequate hydrogen until it is at the appropriate speed.  For the former, about 10^24 m^3 of space must be swept in order to collect 1 metric ton of hydrogen (Table 7 on page 152 of Chapter 3).
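That swept-volume figure follows directly from the galactic hydrogen density in Table 1 (a one-line check of my own):

```python
rho_H = 3.3e-21        # galactic hydrogen mass density, kg/m^3 (Table 1)
m_fuel = 1000.0        # one metric ton of hydrogen, kg

print(f"swept volume ~ {m_fuel / rho_H:.0e} m^3")
# ~3e23 m^3, i.e. on the order of the 1e24 m^3 quoted from Table 7.
```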

The next section deals with multiple types of “sail” drives: the differential sail, the induction sail, and the diode sail.  A concept common to all variations is the maximum speed of 0.99997c due to drag forces in space.  The differential sail uses the concept of an ideal radiometer, a device in which photons push on the sails in a vacuum.  In a pure vacuum (hence ideal) the sail is pushed from the white side toward the black side, with the white side reflecting two units of momentum and the black side absorbing one.  However, if the environment has some air left over, then the situation is that of a real radiometer: the sails (in this case, paddles) are pushed the opposite way due to interactions with air particles.  The induction sail uses the latter principle, altering the energy density of the medium.  Unlike the differential sail, which becomes useless as equilibrium approaches, the induction sail does not, due to continuous energy flow.  The last sail is the diode sail.  It doesn’t have a problem with removing absorbed energy, because it is a one-way mirror.  However, the sail for this would be massive, about 100 million square kilometers.  As a final note, quantum energy might be used, but not much is known here, whether pertaining to sails or to quantum energy in general.

The book discusses inertia modification, but the main point is that the equivalence principle is the problem.  The main inertial-modification device proposal is the oscillatory inertia thruster.  In the Woodward approach, inertia changes, not just position or velocity.  In this system a device “cyclically changes the distance between two masses, while the inertia of each mass is oscillating about its nominal mean so that the system as whole shifts its position to the right.”  There is an issue with conservation of momentum.  Millis asks whether inertia is an intrinsic property of matter only, or whether it measures a relationship between matter and space-time.  He points out the need to revisit Mach’s principle.

FIELD DRIVES

The field drive idea uses a field (gravitational, electromagnetic, etc.) to propel the craft.  There are two issues here.  Since the field completely surrounds the craft, all the forces would seem to be internal, and thus useless.  Also, conservation of momentum is a problem yet again.  There are four variants: (1) Diametric, (2) Disjunction, (3) Gradient Potential, and (4) Bias Drives.

  1. The Diametric Drive uses negative mass for propulsion.  This hypothetical substance is defined as matter that travels in the opposite direction of the force exerted on it; it can have negative inertia.  To continue, some terms must be defined:

Inertial mass is a characteristic that defines the force and acceleration relationships.

Active gravitational mass creates a gravitational field only.

Passive gravitational mass reacts to a gravitational field.

With regard to the diametric drive, electric charges are incompatible with it, because all charges have positive inertia.  Furthermore, the magnitude of a charge’s inertia is not directly linked to the magnitude of the electrical charge.  The drive would work by creating a gradient between positive and negative point sources (normal mass and negative mass).

  2. The Disjunction Drive relies on separating passive and active mass.  To conserve momentum, the active and passive masses will be accelerated toward each other.  However, if the two are separated by a sturdy device (so they can’t collide), then the whole system (both masses) will accelerate due to the gravitational force of the active mass on the passive mass.
  3. The Gradient Potential Drive, unlike the last two, relies on altering a whole field, not just point sources.  The first problem is the net-external-force requirement: since the field in question acts on the device that generates it, it seems as if all forces are internal and nothing would happen.  However, if a gradient can be created, then the craft would travel across it, and hence move.
  4. The Bias Drive alters the properties of space-time itself.  The textbook provides the “soap boat” example: simply adding soap to water behind a toy boat makes the boat move forward.  In this model, space itself is the reaction mass, much as the water is in that example.

It is important to note that not much is known in any of these areas, and that the conservation-of-momentum and net-external-force requirements are impediments.  Energy must also be conserved.  Flight must be stable, and the craft must be controllable.  Space-time itself must be researched further, as must inertial frames, and levitation must be studied further as well.  To summarize, there is still a very long way to go in this area.

Additional Notes: Review paper on Negative Matter, Repulsion Force and Dark Matter by Yi-Fang Chang, Department of Physics, Yunnan University, Kunming, 650091, China

Note: There is no copyright on this chapter because it is a work of the U.S. Government.

CHAPTER 4 – Review of Gravity Control Within Newtonian and General Relativistic Physics. Chapter by Eric W. Davis, Institute for Advanced Studies at Austin, Austin, Texas. Notes by David A Roffman.

This chapter deals with antigravity, more formally known as gravity control.  It is well known that gravity is the weakest of the four fundamental forces.  Unlike the electromagnetic force, which we can control, unclassified literature indicates that we cannot yet control gravity.  Gravity control is admissible under quantum gravity, cosmological vacuum energy, and quantum field theories.  Possibilities for manipulation include high-intensity magnetic and electric fields.

NEGATING GRAVITY THE HARD WAY

One known way to cancel gravity is impractical: find another planet the size of Earth and drag it near Earth.  Another approach is to pull a dwarf star or neutron matter close to Earth.  Both approaches rely on a canceling effect from the equal gravity of each object.  A more realistic method is to create an ultra-dense disk (as suggested by Robert Forward) and use that to negate gravity over an area.  However, we do not yet have the technology to do this.

APPROACHES FOR ORBITAL STABILITY

The Six-Mass Compensator (from Forward) is a practical way to cancel tidal forces (minor forces that are almost ignorable) in orbit.  It uses six 100 kg tungsten or lead spheres, 20 cm across, evenly spaced.  The gravitational attraction of the spheres negates the small tidal forces almost completely.  Due to its weakness, though, it is ruled out as useful for propulsion.  Tidal forces are due to unequal gravitational forces being applied from one side of a body to the other (or throughout the body).  The tidal force is equal to 2*G*M*m*dr/r^3, where dr is the distance across the object (it could be R, the radius of the object) and r is the center-to-center distance of the two masses.
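Plugging numbers into that formula shows just how weak orbital tidal forces are (my own example, for a 1 kg mass spanning 1 m in an assumed ~400 km orbit):

```python
G = 6.674e-11
M = 5.972e24           # Earth mass, kg
r = 6.771e6            # center-to-center distance for a ~400 km orbit, m
m, dr = 1.0, 1.0       # 1 kg test mass spanning 1 m

F_tidal = 2 * G * M * m * dr / r**3
print(f"tidal force ~ {F_tidal:.1e} N")   # ~2.6e-6 N: minor indeed
```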

NEWTONIAN LEVITATION ENERGY ESTIMATE

The energy required for levitation of 1 kg is 62.5 MJ (2.05 times the energy required for low Earth orbit (LEO) of the same mass).  To circumvent annoyances like this, the negative mass approach is mentioned.  Negative mass repels other negative mass, baryonic matter (normal matter), and mostly anything else.  By creating a system with near-equal amounts of matter and negative matter, propulsion is possible.  Normal matter attracts negative mass (note: negative mass is not antimatter), so both will stay apart while moving forward.

With a tether between the two, the forces will act appropriately.  Momentum is conserved, as negative mass places a minus sign in the momentum equation, making the total zero.  It is important to know that both the electromagnetic and gravitational forces are reversed for negative matter.  Kinetic energy is also conserved, even if the craft started at a standstill, since one side of the conservation-of-energy equation equals zero due to the minus sign of negative mass.

How to create, or where to find, negative matter is unknown.  Forward proposed that some negative matter might be found in the voids of space, as it is repelled by regular matter.  Whenever negative matter is created (if that is possible), an equal amount of regular matter is created too; antimatter is created in a similar way.  The major objection to negative matter is causality, which is linked to the second law of thermodynamics.  However, it is allowed by other scientific principles: Murray Gell-Mann’s Totalitarian Principle states that in physics, “anything which is not prohibited is compulsory.”

An idea by Forward to create an antigravity field is to use a dipole electric field generator.  The concept creates mass flow by moving mass around the inside of a torus (a donut-shaped ring) via electric flow.  The magnetic field will fluctuate due to time lag, and this field will generate an electric field.  This accelerator would be “antigravity,” causing objects on top of the device to levitate.  However, the density of the matter to be accelerated would be equivalent to that of a dwarf star, and the torus would be about as wide as a football field, with kilometer dimensions.  The acceleration produced by this device would be about 10^-10 m/s^2; to counteract the Earth’s gravity, the matter in the pipes would have to be accelerated at 10^11 m/s^2.  Such a device would be too massive to build.

Forward has another torus design (an inside-out whirling dense-matter torus) which would act like a series of catapults for space travel.  Some problems with gravitomagnetic antigravity are the lack of technology to achieve enormous mass densities, the extreme speeds and accelerations required, and the large device dimensions.  Some countermeasures, as suggested by Forward, are cooling a gas of neutrons from a nuclear reactor to extremely low temperatures using magnetic forces, or using magneto-gravitational traps to form tetraneutrons (four neutrons bound together).  Another work by Forward was an unsuccessful attempt to transform time-varying electromagnetic fields into time-varying gravitational fields.  In Maxwell’s equations a time-varying electric field generates a magnetic field and a time-varying magnetic field generates an electric field; however, there is no such relationship for gravity.

A strong gravitational field is not required for antigravity.  The one who discovered this (Felber) also noticed that an antigravity field repels objects in the backward direction with strength equal to one-half that of the antigravity field in the forward direction.  Because negative energy and negative pressure are acceptable under General Relativity, it is possible to use negative energy as a source of propulsion; it could be used as a bubble around the craft.  The negative energy density required to overcome Earth’s gravity is on the scale of dwarf star to neutron star density.

There is a natural source of antigravity: the one that causes the universe to continually expand rather than contract.  This sounds like dark energy, but the book labels it cosmological antigravity.  It can act as pressure according to one model: negative pressure takes energy to expand, not to compress.  Essentially, the cosmological vacuum has unlimited energy and thus can produce unlimited expansion.

Dark energy presents another opportunity to be exploited.  This is what composes 74% of the universe, and it is responsible for the universe’s accelerating expansion.  It was first discovered while observing the redshifts of supernovae.  The properties mentioned here should make its inclusion in the book obvious.  Dark energy is almost certainly Einstein’s cosmological constant.  The expansion of the universe can also be cast as propulsion, which I find somewhat amusing: technically we are moving away from distant objects due to the expansion, but I wouldn’t call that propulsion.  This force is Λ*r, where Λ is the cosmological constant and r is the separation of the objects.

An energy source is only as useful as its accessible quantity.  Dark energy available in a volume the size of our solar system amounts to the mass equivalent of a small asteroid.  However, there was an idea put forth by White and Davis to generate dark energy in a laboratory for the purpose of designing warp drives.  It was based on D-Brane quantum gravity theory – to be covered in Chapter 15.

The next section of the chapter is Miscellaneous Gravity Control Concepts.  Dr. Puthoff worked on his own system of space-time coordinates (cylindrical), relying on the Levi-Civita effect.  Puthoff’s device would be able to cut the speed of light in half by using enormous amounts of power.

Another idea relates to pulsed power, and it is not too far off in the future; it uses a laser to generate acceleration.  A more interesting innovation would be gravitational-wave (GW) rockets.  Beams of GWs are allowed under the General Theory of Relativity.  The system works by ejecting GWs into space-time, thus generating propulsion.  This has been shown to work when a star undergoes asymmetric octupole collapse, achieving a velocity boost of 100-300 km/s.

Baker proposed a method to produce GWs in the lab by using a system of small masses (smaller than the length of a GW) and oscillating them with a rapid change in acceleration, a jerk (the third time derivative of position).  The “rapid jerks” would last picoseconds or less, relying on powerful electric, magnetic, and other forces.  This device would produce GWs with frequencies of 10^12 Hz and above.

Another opportunity to make GWs is to produce gravitons in the laboratory via unique electromagnetic fields.  It is conceivable that a photon could decay to yield a graviton.  However, this may be wrong, as the photon and graviton are exchange particles for different forces.  According to the book, we have the technology today to detect gravitons if we want to (I have doubts, as we have never detected gravitons, which should be emitted from matter since it interacts gravitationally).  However, to produce gravitons in the lab would require high-intensity lasers delivering 10^19 to 10^34 W/m^2.  The current device that comes closest to the desired goal is the Z Machine at Sandia National Laboratories.

There is a caveat, though, in that it can only generate 0.1% of the energy required (10^9 J) to produce gravitons.  The follow-up is the X-1 Machine, which would be 10 times stronger than the Z Machine.  This shouldn’t be disappointing, because in practice these types of machines exceed their design potential (power output is far greater).  Therefore, it is conceivable that the next generation of machines (after the X-1) would be able to make gravitons.

Graviton production can be furthered by second-harmonic photons.  The theoretical propulsion system would look like a set of linear arrays, about 500 m long, composed of multiple implosion hohlraum segments.  All the linear arrays form cylindrically concentric super-arrays.  The system works by chain reaction: each laser fires at the linear arrays, causing the hohlraums to implode, thereby creating a tsunami of collimated high-frequency gravitons.  The gravitons are expelled out the back end of the rocket.

Another method of procuring gravitons is to use a particle accelerator.  There are already portable electron accelerators, so it is possible to use the photon-graviton transformation.  However, the power of the beam may not be enough for propulsion.

Yet another way to make GWs is via Gertsenshtein waves, which are formed by very high light and magnetic field intensities.  They were initially thought to be produced only through astronomical processes, but the prerequisite magnetic intensities became available with the high-intensity lasers of the 1990s.  There are several variations of this idea.

In terms of performance, the GW rocket would have an exhaust velocity equivalent to the speed of light, as the particles propagate outwards at that speed.  This is the same speed as a photon rocket.  There is one problem though.  GW rockets require far more jet power than conventional rockets to deliver the same thrust.

A Casimir device in a weak gravitational field will slightly counteract gravity.  This helps show that negative Casimir energy in a gravitational field acts like negative mass.  The force exerted by zero-point fluctuations is too difficult, if not impossible, to measure with current technology.  Despite this, E. Calloni has devised a possible way to use the Casimir effect.  He would use multiple rigid Casimir cavities, each with a dielectric material (preferably silicon dioxide) separating two thin metal disks.  There is more to this, but technical difficulties currently place the idea on the back burner.  It turns out that atomic transitions can be caused by zero-point fluctuations, as these perturb the Hamiltonian.

A proposal by F. Pinto calls for the use of Van der Waals forces.  These forces are only important at the molecular level, in atoms with covalent bonds, but he has an idea of how to boost the strength of these normally weak forces (with polarization, lasers, etc.).  There are enormous technical difficulties here too.

The next item is a quantum field theory relevant to space propulsion, though it is not in itself a propulsion concept.  Heim devised a strange model of the universe with two additional fundamental forces.  He proposed multiple new dimensions, among other ideas.  In terms of propulsion, it is predicted that Heim-Lorentz forces (see section 3.3 of this link) would produce substantial force, and that the gravitophoton interaction could reduce inertial mass by as much as a factor of 10^4.

For the final method, an old proposal by F. E. Alzofon of Boeing Aerospace is mentioned.  He wished to use Al-27 and Cr with Mg (or Fe) to decrease weight via a static magnetic field and pulsed microwave radiation.

To conclude, it seems that dark energy and matter have no immediate propulsion possibilities.  However, with major breakthroughs (preferably more compact technology), it is possible to finally use many of the concepts mentioned here (if they would work in the first place).  Some things, like tabletop lasers are almost within our reach.  Only time will tell if we will ever conquer (negate) gravity.

CHAPTER 5 – Gravitational Experiments with Superconductors: History and Lessons.

Chapter by George D. Hathaway, Director, Hathaway Consulting Services, Toronto, Ontario, Canada. Notes by David A Roffman.

A superconductor is a material that conducts electricity with virtually no resistance.  The major development in this area was the yttrium barium copper oxide (YBa2Cu3O7, often abbreviated as YBCO) ceramic superconductor.  This material is important because it allows cheap liquid nitrogen to be used as a coolant, rather than the more expensive liquid helium.  Normally, superconductors work only at very cold temperatures, requiring helium, which boils at 4.2 K; YBCO works with liquid nitrogen, which boils at 77 K.  The goal of this chapter is to review the past, current, and future status of superconductors with respect to antigravity and propulsion.

It must be stressed that work in this area is very difficult and susceptible to many errors, as the low temperatures and required levels of accuracy are unforgiving.  Carelessness on a scientist’s part can result in the scientific version of political suicide.  We first must consider thermal effects that can cloud results.  Temperature differences (including in the scale apparatus and the air) can alter the recorded mass.  Materials can condense onto the test mass and thus alter the perceived results.  Buoyancy can also play a role in apparent mass change due to gases in the test area.  Vibrations can be detrimental too: with wires frozen over by the cryogenic agent, the slightest shock from outside could skew the already small mass change likely to be observed, and the cryogen itself can create little tremors that impede analysis of the results.

Even with a vacuum, the results aren’t safe.  Any residual gases can produce pressure gradients and become a problem.  Free electrical charges can then propagate through the leftover particles and cause charge (electrical) gradients.  Stray magnetic and electric fields (no matter how small) can further alter the apparent weight.  Finally, if the scientist defies the odds of experimental error, human error will most likely deny him or her fame.  Bias error, or the will to see the hypothesis proven correct, can cause an individual to ignore negative results and only “see” what is wanted to be seen.

PODKLETNOV GRAVITY SHIELDING CLAIMS

Rather than discuss limited success, it is better to discuss alleged failure by the book’s least favorite scientist, Eugene Podkletnov.  He ran an antigravity experiment with a massive superconductor (YBCO) and with primitive safeguards for accuracy.  He provided vague diagrams of how he performed his experiment, along with no precautions against the errors mentioned in the past few paragraphs.  Since his procedure was so flawed, the 0.3% loss in mass he “discovered” was put down by Frontiers.  His claims were tested in later experiments but never confirmed.  However, those who tested the designs didn’t use as big a superconductor (less than half the size) and declined to use the required liquid He with high-frequency magnetic fields (no permanent magnets).

PODKLETNOV FORCE BEAM CLAIMS

Podkletnov also researched force beams.  His idea here was to use discharges from a YBCO superconductor to move objects that were far away and separated by walls.  A pencil standing upright on a table in an adjoining room reportedly fell over just as a blue planar discharge moved from the superconductor to the annulus.  The review of the experiment is unfavorable because of a lack of supporting evidence, although this was a better apparatus than those in the previous experiments.

GRAVITY WAVE TRANSDUCERS

The next experiment (not done by Podkletnov) is about gravitational-wave thrusters.  It was proposed to use superconductors as gravitational wave transducers for RF radiation.  Chiao failed here.  Harris then argued that this was because neither gravitoelectric nor gravitomagnetic fields accompany gravitational waves.  Many others tried and failed here too.

TAJMAR EXPERIMENTS                

In 2001 Tajmar and de Matos argued that every electromagnetic field is linked to a gravitoelectric and gravitomagnetic field.  They also said that the coupling is generally valid but can be increased by using massive ion currents.  This can be accomplished by rotating mass or a dense plasma, and by aligning electron and nuclear spins.  Thus any substance set in rotation becomes the seat of a uniform intrinsic gravitomagnetic field.

In a 1950 text on superfluids, London came up with an expression for the magnetic field produced by a spinning superconductor or superfluid.  It is proportional to the Cooper-pair mass-to-charge ratio and the angular velocity.  This London moment is used to determine the Cooper-pair mass, which was predicted to be slightly less than twice the mass of an electron but was measured to be slightly larger.

In 2003 Tajmar and de Matos found that a huge internal gravitomagnetic field was needed to explain the mass anomaly, and that it could be measured in a lab.  The field was predicted to have the form Bg = 2ω(ρ*/ρ), with ρ* the Cooper-pair mass density, ρ the classical bulk mass density of the superconducting material, and ω the superconductor’s rotational speed.  In 2006 Tajmar did an experiment with mechanical spinning of niobium and high-temperature ceramic superconductor rings at LHe (liquid helium) temperature.  He did not apply an external magnetic field, and he used sudden acceleration and deceleration of the superconducting rings to produce the acceleration required to see the anticipated field.  Note that moving charges create magnetic fields.

Tajmar thought he had found the expected large gravitomagnetic field, detected by sensors such as accelerometers and laser gyroscopes, but he backed off the claim in 2007.  Still, there was a residual signal with a large coupling constant of 10^-8 between the observed acceleration effect and the applied angular velocity.  It was proportional to temperature after passing a critical temperature that depended on the spinning ring.  It was more pronounced in the clockwise direction as viewed from above, and it did not decay as a dipole field would.

An experiment under different conditions at the University of Canterbury in New Zealand showed less impressive results, but did indicate a possible effect like Tajmar’s observations.

The search for frame dragging effects speculated by Tajmar and his team goes on, just like the research in the field of superconductors and gravity.  It is important to note that not too many scientists are researching in this area, and that cooperation and coordination are needed.

CHAPTER 6 – Nonviable Mechanical “Antigravity” Devices.

Chapter by Marc G. Millis, NASA Glenn Research Center, Cleveland, Ohio. Notes by David A Roffman

This chapter is about nonviable antigravity devices, and it is a work of the U.S. Government.  The first of many archetypes is the oscillation thruster (see Figures 1 and 2 below).  It is a device that uses internal mass movement to try to create net thrust.

All drives have three primary components:

  • Chassis to support masses,
  • Cycler to move the masses in asymmetric motion, and
  • An energy source.

Such a device moves forward because the internal masses travel faster in one direction than in the other.  This would seem to be a breakthrough (no propellant), but these devices require contact with the ground (which effectively serves as the propellant) to function.  If left to dwell in space, or used for upward motion, nothing special will happen (the device won’t work).  On the ground, the device moves in jolts, as the friction of the ground eventually overcomes the vehicle (the masses eventually stop moving forward faster); then the process starts anew.  A toy simulation below illustrates the point.
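Here is that toy simulation (entirely my own construction, assuming direction-dependent sliding friction as a stand-in for the “differential friction” these devices exploit): the same internal oscillation that does nothing in free space produces a steady creep on the ground.

```python
import math

# Chassis of mass M with an internal mass m shuttled sinusoidally inside it.
M, m, g = 10.0, 1.0, 9.81
A, w = 0.5, 2 * math.pi          # stroke amplitude (m), drive frequency (rad/s)
mu_fwd, mu_back = 0.005, 0.05    # assumed: easier to slide forward than backward
dt = 1e-4

def simulate(ground_contact, T=20.0):
    p = x = t = 0.0              # total momentum, chassis position, time
    while t < T:
        u = A * w * math.cos(w * t)        # internal-mass velocity rel. to chassis
        v = (p - m * u) / (M + m)          # chassis velocity from p = (M+m)v + m*u
        if ground_contact and abs(v) > 1e-6:
            mu = mu_back if v < 0 else mu_fwd
            p -= mu * (M + m) * g * math.copysign(1.0, v) * dt  # ground friction
        x += v * dt
        t += dt
    return x

print(f"free space: net chassis travel {simulate(False):+.3f} m")  # ~0: CoM stays put
print(f"on ground:  net chassis travel {simulate(True):+.3f} m")   # steady forward creep
```

Strip out the friction term and conservation of momentum pins the center of mass in place, which is exactly what the pendulum test below checks for.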

A simple test to disprove such a device is the pendulum test: hang the device from a pendulum and turn it on.  The pendulum should merely oscillate rather than stay pushed out.  If there is only oscillation, then no net thrust is produced (a genuine thruster would hold the pendulum displaced against gravity instead of just wobbling about its rest position).  It is important to have the device and power supply mounted on the pendulum, and for the pendulum to be tall.  The reasons are to avoid power-cable annoyances and to keep the pendulum’s natural oscillation frequency below that of the device (a tall pendulum also makes any lateral force more pronounced).  It must be a level pendulum so as not to yield false impressions from the tilting base of a simple pendulum.  For the same reason, an air track is not recommended; an air track would also create an initial velocity that needs to be accounted for.  The “jerk” effect has been shown to be baseless.  It is important to have an open mind though.

The next device is the gyroscopic antigravity device (see Figure 9 below).  Examples such as Eric Laithwaite’s in 1973 do not demonstrate antigravity.  Devices like his lift themselves based on a change in the axis of the gyros; they are completely dependent on an external gravitational field and their own torques.  A typical machine consists of gyros, a main spindle, and pivots.  In order to test such a device, it is necessary to measure the weight, not the thrust, of the machine (as there is none).  The suggested test is to place an on and an off device on opposite sides of a scale and see if the scale tips (see Figure 11 below).  Note that there can be error here, as fluctuations and oscillations exist.  A more rigorous approach is to analyze all external forces acting on the “antigravity” device and then search for anything crossing the surface of the test subject that carries momentum.

Other false proposals include reaction/momentum wheels.  No device, despite the claims, can change the position of its system’s center of mass using only internal forces.  Some have tried to use frame-dragging effects, but with no real success.  It is important to note that this chapter was designed to show nonviable devices, not viable ones.

Although patents were issued for the linear oscillation thruster (see Figure 1 below; Patent 5,685,192) and the Laithwaite propulsion system (see Figure 7; Patent 5,860,317), this does not mean that the devices do what they claim to do.  Oscillation thrusters are misinterpretations of differential friction, and gyroscopic devices misinterpret torques as linear thrust.  In my opinion, the only way to achieve antigravity is to use exotic matter, and to actually observe gravitons to get a better understanding of gravity.

Note: There is no copyright on this chapter because it is a work of the U.S. Government.

CHAPTER 7 – Null Findings of Yamishita Electrogravitical Patent.

Chapter by Kenneth E. Siegenthaler, Professor of Astronautics, Department of Astronautics,  and Timothy J. Lawrence, Director, Space Research Center, U.S. Air Force Academy, Colorado Springs, Colorado.  Notes by David A Roffman.

Chapter 7 is about claims of antigravity based on the Electrogravitational Theory, which is not necessarily correct.  The theory states that moving electrical charges are the cause of gravity, and that gravity is as such only a residual effect.  Electrons are the key to what we call gravity.  It is argued that because the Earth possesses more electrons than the moon, the Earth is heavier than the moon.  These are but the basic tenets of the theory.  I disagree with this theory for many reasons.  Photons are spin-1 particles and gravitons are spin-2; they are hence not the same.  Neutrinos interact gravitationally but have no charge and are not composed of other particles that do have charge (unlike neutrons).  Gravity therefore cannot be a residual electromagnetic effect.

Yamishita is the scientist whose experiment will be discussed.  His device consisted of a charged rotating cylinder with components that weighed 1,300 g (1.3 kg).  The four primary parts were: base plate, electrode, rotor, and electric motor.  He was far more specific in his experimental setup than Podkletnov, giving exact rotor dimensions, most of the equipment used, and the exact rotation rate (3,000 RPM).  However, the dielectric (non-conducting insulator) material used was not specified.  In his unreproducible experiment, by use of a Van de Graaff generator, he reduced his device's weight by 11 g (the scale is accurate to within 1 g).  By reversing the polarity of the electrical input, he increased the weight of the machine by 4 g.  The weight loss was very significant, being nearly 1%.

A first attempt was made to replicate his experiment with a motor capable of 19,500 RPM.  The motor used was different, as was the dielectric material, but that shouldn't matter: if the claims are true, the underlying physics should be the same.  The book was very specific as to what they used, naming nearly every part's dimensions and product name.  This test allowed a third-degree polynomial to be fit relating rotor speed and supply current.  Note that the motor used couldn't spin backwards like Yamishita's.  Every precaution was taken to verify that the scale was accurate, and that the spinning inside the machine wouldn't disturb the scale (before the electrical current was added).

Tests were initiated every few seconds with the machine being charged for a few seconds (in intervals).  The results match the title of this chapter.  Changes in weight were between one and two grams out of 1,315 grams.  Various experiments were done with electrical charge and RPM.  Note that this first experiment was done at the Air Force Academy.  This analysis is not sufficient to debunk Electrogravitational Theory, as the results are not statistically significant.  Furthermore, the motor used in the experiment broke after one of the last tests.

The second test was performed by a separate group.  Its goal was twofold: first, to repeat the experiment of Yamishita, and second, to discover why the motor in the first experiment malfunctioned.  The reasons for the motor damage could discredit the earlier experiment.  The motor was dogged by balance problems.  At first the paint was blamed, but that proved false.  The hole designed for the motor-shaft interface was off center too.  As for the motor mount, it wasn't properly secured.  The electrode didn't fully enclose the rotor (yet another hole).

It is important to know that the second experiment had funding issues.  Parts were improvised, and the device looked, frankly, not good, but it functioned properly.  Motor shaft problems were discovered, but fixed with improvised methods.  The motor was far slower than the one used for the Air Force Academy trial, but it still met Yamishita's requirements.

So the experiment went forward.  As an insulating material, the machine used a different dielectric (Vanguard Class F Red VSP-E-208) than the first experiment (which used Xylene), and the machine weighed almost twice as much.  Because of this, the results could have been inaccurate (the test didn't follow Yamishita's procedure fully).  The results of this experiment weren't statistically significant.  The weight of the machine varied between 2,584.99 and 2,585.56 g (with variation due to reliability issues pertaining to the scale).  When the machine was turned on, weights varied by 0.01 to 0.03 grams.  There were some errors with the positive-charge test (i.e., the test apparatus was touched during operation), but the weight change was so small that it is impossible to prove Yamishita right.  Details will not be elaborated for this test, as the machine was in no way like Yamishita's (shape, etc.).

This second experiment at the Air Force Academy is classified as inconclusive, just like the first.  The equipment and tests were not sufficient.  Yamishita's idea has been ruled out for propulsion, as it would take 830 years for a 500 kg craft to get from Earth to Mars with the method he provided.  It may merit more research, but not for propulsion.  A final note is that a negative charge through the pseudo-Yamishita machine produced a slight weight loss, and a positive charge a slight weight gain.  However, these results are too small to be statistically significant.

Note: There is no copyright on this chapter because it is a work of the U.S. Government.

CHAPTER 8 – Force Characterization of Asymmetrical Capacitor Thrusters in Air.

Chapter by William M. Miller, Sandia National Laboratories, Albuquerque, New Mexico, Paul B. Miller, East Mountain Charter High School, Sandia Park, New Mexico, and Timothy J. Drummond, Sandia National Laboratories, Albuquerque, New Mexico.  Notes by David A Roffman.

The old antigravity claim is that by using an Asymmetrical Capacitor Thruster (a "lifter"), levitation can be achieved.  By applying charges using the aforementioned device, forces were measured to within +/- 100 nN.  Forces created this way are independent of polarity.  Uniform magnetic fields have no effect on the force.  Geometrical variations in asymmetry caused little change in force.  These results best match the corona discharge effect.  With this in mind, there have been some wild claims about complete control of gravitation over the last 80 years (such as those by Thomas Townsend Brown), although most claims do not go that far.  The Office of Naval Research (ONR) debunked Brown's claim back in 1952, attributing the effects to lunar tides.  Without a fifth fundamental force, the measured changes must be attributed to some known effect.

Typical reports state a need for high (uncalibrated) voltage to achieve levitation.  To verify these claims, an experiment was devised by the authors of this chapter (such experiments are rare).  A lifter was constructed, and a test mass composed of balsa wood was used.  The procedure of this experiment is at the bottom of this page (will be attached later).  Results from this experiment yielded a force that was proportional to applied current.  Current was a requirement for the device to work.  Because the ONR claimed corona effects, in the second part of the experiment the apparatus was coated with a non-conducting material (Glyptal).  This greatly reduced the force measured at mid-range voltages.  For high and low voltages, not much change was observed, most likely due to "burn out" of the coating.

The key experimental findings were:

- The magnitude and direction of the force were independent of the polarity of the excitation voltage.
- The force was not related to Earth's gravitational field; it depended solely on the geometry of the asymmetric capacitor and was always directed along that capacitor's symmetry axis.
- The force exhibits power-law dependences on voltage (proportional to the square of the voltage) and on current (proportional to the current raised to the 2/3 power).
- Current must flow for there to be force, so the effect is a current-flow phenomenon, not an electric-field phenomenon.

Sinusoidal excitation experiments were conducted.  Canning thought that even with uniform voltages applied, leakage does not occur in a steady manner but in pulses, implying that neither DC nor AC voltage is truly uniform.

The presence of electrical forces was tested for under the uniform application of negative DC voltage.  The apparatus was placed on a PVC stand with a probe positioned securely at a constant distance from the wire and plate.  No noticeable results were observed until reaching 8.3 kV.  There were many components, but what matters here are the results.  Trichel pulses were detected.  They increased in frequency as more voltage was applied, but oddly the frequency could be very erratic.  So force and voltage measurements were taken.  Even with AC running through the device, the lifter can still generate force (AC or DC is irrelevant).

Another test examined geometric variations.  This setup utilized two hollow, cylindrical plastic supports that were each 130 mm tall.  In the center of each lay an approximately 60 mm plastic support tube positioned such that the support could rotate.  This was placed on an aluminum plate, and the wire's distance from the plate could be varied.  Results were taken.

Generally, the shorter the wire, the more force.  In these tests force varied as voltage cubed, while force varied as the 2/3 power of current.  For certain test points, there was a linear result.  The authors identified three regimes.  The "non-self-sustaining current regime" runs from low voltages up to 6,000 V.  Next is the "corona discharge regime," which spans from 6,000 V to 14,000 V.  Finally comes the "Fowler-Nordheim regime," above 14,000 V.  It is noted that the relation F = C√P appeared, where F is the force in µN, C is a constant (1 in units of µN/√µW), and P is the power in µW.  Therefore, the force will be proportional to the length of the asymmetric capacitor.
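
As a quick illustration of that empirical scaling (my own sketch; the 20 kV / 50 µA operating point is an assumed example, not a value from the chapter):

```python
import math

def lifter_force_uN(power_uW, C=1.0):
    """Empirical scaling F = C * sqrt(P) quoted in the chapter, with F in
    micro-newtons, P in micro-watts, and C = 1 uN/sqrt(uW)."""
    return C * math.sqrt(power_uW)

# Example: 20 kV at 50 uA is 1 W = 1e6 uW of input power.
P = 20_000 * 50e-6 * 1e6    # micro-watts
print(f"predicted force ~ {lifter_force_uN(P):.0f} uN")   # ~1000 uN = 1 mN
```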

The change in force during the geometric experiments was small.  Once again, only current mattered.  There were no major problems with the experiment.  More efficient generation of large currents with moderate voltages could improve efficiency.

For the final experiment, magnetic fields were tested.  The setup will be shown in a picture to be posted at the bottom of the page.  To summarize, magnetic fields are irrelevant to the force exerted.  Trichel pulses form when electrons leave the plasma and attach to oxygen, forming a negative ion sheath.  This plasma sheath collapses at higher currents, causing an instability that results in periodic collapses, or Trichel pulses (first noticed at 6,000 V).  They occur only for negative corona discharge, but comparable forces exist for positive corona discharge.  A corona discharge in general is sustained by an avalanche of electrons accelerated by high electric fields.  There are second-order effects that must be accounted for.

This chapter also mentions quantum tunneling through Fowler-Nordheim tunneling (field emission).  In this kind of tunneling, current passes through an insulating barrier (air for this experiment).  Supposedly, according to page 16, quantum tunneling is unviable.  Is the inclusion of this topic here again completely consistent with the earlier statement (earlier question)?  Quantum tunneling occurs naturally, when an object crosses a potential barrier that it could not classically overcome.  On the macroscopic scale this is essentially never observed.  An example is rolling a ball up a hill with almost enough energy to get to the other side.  Classically the ball will come back, but quantum mechanically there is a finite (but very small) chance that the ball will appear on the other side of the hill and start rolling down.  Returning to Fowler-Nordheim tunneling, there is an equation that describes it.

J = A(y)·(E^2/Φ)·exp[−B(y)·Φ^(3/2)/E].  Here E is the electric field, Φ is the work function, J is the current density, and A(y) and B(y) are correction functions.  Since the geometry is asymmetrical, it is not possible to apply the integral form of Gauss's law once to get the electric field from the charge distribution.  Lifters do indeed fly by creating a unipolar ion current at the wire electrode.  The air velocity of a device is proportional to the square root of the current.  Lifters can produce this desired effect.  A final note is that there are no special new forces created by lifters, as Tajmar proved.
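
A sketch of that scaling is below (my own, not the chapter's calculation), using the common simplified Fowler-Nordheim form with the standard approximate constants and the correction functions A(y), B(y) set to 1; the field and work-function values are illustrative.

```python
import math

# Simplified Fowler-Nordheim law, J = a*(E^2/phi)*exp(-b*phi^1.5/E), with
# standard approximate constants and A(y) = B(y) = 1 (image-charge
# corrections ignored).  This shows the scaling, nothing more.
A_FN = 1.54e-6    # A * eV / V^2
B_FN = 6.83e9     # V / (m * eV^1.5)

def fn_current_density(E_V_per_m, phi_eV=4.5):
    """Field-emission current density (A/m^2) for field E and work function phi."""
    return A_FN * E_V_per_m**2 / phi_eV * math.exp(-B_FN * phi_eV**1.5 / E_V_per_m)

for E in (1e9, 3e9, 5e9):   # fields in V/m; emission needs ~GV/m local fields
    print(f"E = {E:.0e} V/m -> J = {fn_current_density(E):.2e} A/m^2")
```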

CHAPTER 9 – Experimental Findings of Asymmetrical Capacitor Thrusters for Various Gasses and Pressures.

Chapter by Francis X. Canning, Simply Sparse Technologies, Morgantown, West Virginia.  Notes by David A Roffman on Chapter 9 of The Textbook.

Chapter 9 again deals with asymmetric capacitors, but considers the environment, thruster application, and configuration of the devices.  Following in the footsteps of T.T. Brown, enthusiasts have been able to create levitating devices.  But the question is: how do these devices work?  This chapter furthers attempts to answer that question.  It states up front that the current involves charged ions which experience multiple collisions with air, collisions that transfer momentum.  All measured data was found to be consistent with this model.  This chapter investigates several alternative explanations (including a number of options not discussed in Chapter 8), but dismisses them all.

The first capacitor discussion focuses on geometry.  There were four devices made, and their circuit diagrams are shown below.  Devices 3 and 4 have greater asymmetry than devices 1 and 2, and consisted of a cylinder and disk.  When all of these tests were made, the location of the ground was kept constant, so that the comparison was legitimate.  Many other tests were made by others, but they rarely kept the ground location consistent.  The chapter emphasizes that differing results in previous experiments were due to this problem with the location of the ground.

Several devices use aluminum and have sharp edges.  Sharp edges and high voltage create strong electric fields, which may help with exerting a force.  A typical lifter is made of aluminum foil and has the wire near the foil on the upper side (see Figure 1).  The wire is a sharper surface than the Al foil, with the wire charged to a different potential.  Likewise, disks were considered sharper objects than cylinders.

To summarize this test, devices 1 and 2 produced forces on the capacitor toward the non-grounded surface.  Devices 3 and 4 (4 had wires pointing away from the disk) produced forces on the capacitor away from the cylinder and toward the disk.  Devices 1 and 2 created a larger force when the disk was the non-grounded surface.  Polarity had virtually no effect on force.  Whenever the cylinder was grounded, devices 3 and 4 were more powerful, but device 3 produced more force than 4.  The current to the non-grounded portion of each device was stronger than to the grounded side.  People could feel the electric force on their hair.  Note: the chapter had photos for devices 1, 2, and 3, but this chapter is copyrighted, so the photos will not be included here.  There was some confusion as to whether circuits A, B, C, and D corresponded to devices 1, 2, 3, and 4, but it eventually became clear that they were not synonymous.

The tests to follow were atmospheric in nature, with different gasses and pressures used.  For these tests, bursts of current (Trichel pulses) were observed in all but the argon and nitrogen tests.  Very high frequency (VHF) emissions of radiation were also observed in the non-argon, non-nitrogen environments.  The air became ionized when it was available.  Radiation is just photons; when these hit an atom they can excite or liberate its electrons.  For excitation, a specific photon energy (frequency) is required that varies with the atom.  Devices 1 and 2 produced a force dependent on the location of the ground in this experiment.  However, devices 3 and 4 made a force determined by asymmetry.  When a complete vacuum was used, there was only a brief flash of light and nothing more of interest.  The flash can most likely be explained by residual water droplets that had condensed onto the device.

It is unlikely that mass was lost (ejected) from the machine over the course of the experiment.  Image charges could have affected this experiment, but they would pull the device down toward the Earth rather than push it away (as with antigravity), so they cannot explain the lift.  The effects in this experiment were most likely caused by ion drift.  The lag time of the force supports this: it took time for ions to cross the gap, and thus time for force to be generated.  77% of the predicted force was observed, which supports this model.  To conclude on gasses and forces: the lower the atmospheric pressure, the weaker the force becomes.
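
For a rough sense of the ion-drift picture, the standard one-dimensional estimate gives F = I·d/μ, where μ is the ion mobility in air (about 2×10^-4 m²/V·s).  The sketch below uses illustrative numbers of my own, not the chapter's:

```python
# One-dimensional ion-drift estimate of lifter thrust: ions carrying total
# current I drift across a gap d with mobility mu, transferring momentum to
# the air at a rate F = I * d / mu.  Numbers are illustrative only.
MU_AIR = 2.0e-4   # ion mobility in air, m^2/(V*s), approximate

def ion_drift_thrust(current_A, gap_m, mobility=MU_AIR):
    """Thrust (N) predicted by the simple ion-drift model."""
    return current_A * gap_m / mobility

I, d = 1e-3, 0.03            # 1 mA of corona current across a 3 cm gap
print(f"F ~ {ion_drift_thrust(I, d)*1e3:.0f} mN")   # ~150 mN
```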

CHAPTER 10 – Propulsive Implication of Photon Momentum in Media.

Chapter by Michael R. LaPointe, Project Manager, Science Research and Technologies Project Office, NASA Marshall Space Flight Center, Huntsville, Alabama.  Notes by David A Roffman.

There has been a major debate in science for about 100 years over the momentum density of an electromagnetic field propagating through a medium with index of refraction greater than unity.  There have been two theories for how to handle the situation, and they are direct opposites.

One alternative is the Minkowski formulation and the other is the Abraham formulation.  In the first, the momentum density increases when passing into a medium of higher index of refraction (by a factor of n^2).  In the second, the opposite happens: the momentum density decreases (by a factor of 1/n).  It is important to know that both are attempted extensions of the Maxwell field equations, which were formulated for a vacuum.

Attempts to prove either theory have ranged from thought experiments to physical experimentation.  In 1935, Halpern showed that a symmetric energy-momentum tensor (given certain prerequisites) will not satisfy Minkowski's equation.  So Halpern was an Abraham fan.  Balasz used a thought experiment to buttress Abraham.  He proposed two enclosures in uniform motion without external forces.  Both enclosures contain a dielectric rod, but in only one does the electromagnetic wave pass through the rod, slowing it down.  The wave moving through vacuum will be moving faster than the one in the rod.  To conserve the center of mass and momentum, the Abraham (slowing-down) tensor must be used.

While thought experiments are nice, they are not real physical experiments.  The first physical experiment took place in 1954 with Jones and Richards.  Although it wasn't well known, it did demonstrate one thing.  In this experiment, metallic reflectors in air and in dielectric media were used to test radiation pressure.  The results backed Minkowski, but they were largely ignored.

Another experiment was carried out in 1973 by Ashkin and Dziedzic.  This one used laser light to measure the radiation pressure on an air-water interface.  A net outward force was detected, lending more credit to Minkowski.  The same year a follow-up was done: Gordon published a report on pseudomomentum.  He concluded that Abraham was correct for nondispersive dielectric media, while Minkowski was correct for determining the radiation pressure on an object in a medium.  Using time-varying voltage, Walker helped to show that Abraham was right.

Later experiments by Brevik and Gibson (using a photon experiment) suggested that both viewpoints are correct: for high frequencies Minkowski's tensor should be used, while for low frequencies Abraham's tensor is appropriate.

Other researchers have proposed taking the average of the two tensors.  The Bose-Einstein condensate experiment used rubidium gas to check the tensor theories.  Based on the refraction and momentum, Minkowski's tensor seemed correct.  To summarize this cyclical debate, there is still no answer as to which tensor is right.  The book mentions on page 356 that a narrow-band optical pulse of 600 fs duration and 1 MW/cm^2 peak power, incident upon a multilayer photonic bandgap structure with a mass of 10^-5 grams, may produce accelerations up to 10^8 m/s^2.  (In optics, an ultrashort pulse is one whose duration is on the order of a femtosecond, which is 10^-15 seconds.)  There is an indication that the short interaction time limits displacement and velocity, but such experiments could be used to test electromagnetic momentum transfer to large objects (like a spaceship).  Using the relativistic acceleration formula, it is clear that the speed of light cannot be reached or surpassed.
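
That last claim can be made concrete with the standard constant-proper-acceleration result, v(t) = at/√(1 + (at/c)²).  The sketch below (my own, reusing the 10^8 m/s^2 figure quoted above) shows v approaching but never reaching c:

```python
import math

C = 2.998e8   # speed of light, m/s

def coord_velocity(a_proper, t_coord):
    """Velocity after coordinate time t under constant proper acceleration a:
    v = a*t / sqrt(1 + (a*t/c)^2), which approaches but never reaches c."""
    at = a_proper * t_coord
    return at / math.sqrt(1.0 + (at / C) ** 2)

a = 1e8   # m/s^2, the order of magnitude quoted for the photonic-bandgap pulse
for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} s -> v/c = {coord_velocity(a, t) / C:.6f}")
```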

While all of the previous discussion is nice, propulsion is what really matters.  Here the chapter starts with Slepian's Electromagnetic Space Ship.  He used an oscillating magnetic field for propulsion.  He asks two questions: Is there an unbalanced force acting on the material system of the spacecraft, and can the unbalanced force be used to propel the ship?  (Answers: yes to the first, no to the second.)  This was not to be taken seriously, as it was published with a rebuke by him a month later; it was designed only to provoke thought.  The idea is nonsense, as the thruster would go forward and back an equal distance (no net movement).  However, Corum et al. say that unidirectional motion is possible.  This claim has been shown to be false, but the time derivative of the electromagnetic energy density might alter space, causing some acceleration (this was examined by the U.S. Air Force Academy).

Brito tried to use the principles discussed in this chapter to create a propulsion device (Electromagnetic Inertia Manipulation Propulsion), but it failed.  However, some force was observed that may not have been due to error.  Another idea is Feigel's hypothesis: that zero-point vacuum fluctuations move dielectric objects in crossed electric and magnetic fields.

Working with the European Space Agency, van Tiggelen et al. believe the Feigel hypothesis to be measurable and quite real.  They calculate that a small magneto-electric object inside an isotropic (and monochromatic) radiation field could move if external fields were switched on.  For a field intensity of 10 kW/cm^2, a velocity of approximately 10^-5 cm/s could result for an object made of FeGaO3.  This could be verified with a 10-microgram crystal of the same substance mounted at the end of a piezo-resistive cantilever (but there is no more data in this area).

The real question is how to apply the two tensors to space travel.  This is still a mystery, and many more tests are needed.  A breakthrough here would not only finish a hundred-year battle in physics, but might open up new worlds for us to explore.  Note that p = h/λ, where "p" is the momentum, "h" is Planck's constant, and "λ" is the wavelength.  This applies to photons and to particles with mass.
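
A small sketch (my own, not the book's) of the per-photon versions of the two formulations, p_Minkowski = n·h/λ0 and p_Abraham = h/(n·λ0), with illustrative values for green light in water:

```python
H = 6.626e-34   # Planck's constant, J*s

def photon_momentum(lambda_vac_m, n=1.0, form="minkowski"):
    """Single-photon momentum in a medium of index n.
    Minkowski: p = n*h/lambda0 ; Abraham: p = h/(n*lambda0)."""
    p_vac = H / lambda_vac_m
    return n * p_vac if form == "minkowski" else p_vac / n

lam, n = 532e-9, 1.33   # green light in water -- illustrative values
print(f"vacuum:    {photon_momentum(lam):.3e} kg*m/s")
print(f"Minkowski: {photon_momentum(lam, n):.3e} kg*m/s")
print(f"Abraham:   {photon_momentum(lam, n, 'abraham'):.3e} kg*m/s")
```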

CHAPTER 11 – Experimental Results of the Woodward Effect on a Micro-Newton Thrust Balance.

Chapter by Nembo Buldrini (Research Scientist, Space and Propulsion Advanced Concepts) and Martin Tajmar (Head, Space and Propulsion Advanced Concepts), Austrian Research Centers GmbH-ARC, Seibersdorf, Austria.  Notes by David A Roffman.

Ten years before the new millennium, a scientist named James F. Woodward began to see signs that all matter is connected and can interact over astronomical distances instantly.  Mach's principle (in a theoretical extension) was used to support this claim.  Chapter 11 is all about testing Woodward's thrusters on a µN thrust balance in vacuum in order to find out the truth of his experiments.  Buldrini and Tajmar report results about an order of magnitude below Woodward's past claims.  Such claims of linked matter have been put forth for some time.

In 1953, Dennis Sciama proposed that the inertial reaction force can be seen as a kind of radiative action of distant matter on local objects.  Woodward followed with equations to describe such behavior.  In the search for the mass fluctuation effect, no solid data has been found; predictions from the equations typically do not match experimental results.  Alongside the efforts to use Woodward's equations for propulsion, other ideas have come forth.  Brito and Elaskar decided to rely solely on electromagnetic effects.  These two believe in "hidden momentum," and built a thruster to check Woodward's thruster.

The balance used in this experiment was originally built to measure an indium field-emission electric propulsion thruster, which produces forces in the micro-Newton range.  Two sensors were used: one had longer range (but less accuracy), and the other the opposite.  Dampers were needed, and there was a counterweight to level the device.  The setup is shown with a picture, but this chapter is not government-owned (it is copyrighted).  All devices tested were based on Woodward's equations, in which a unidirectional force should be achieved by exploiting the mass fluctuations of the dielectric capacitor via the Lorentz force.  Whereas Woodward and Vandeventer recorded a thrust of 50 micro-Newtons, this was not seen in this experiment.  However, there were errors due to thermo-mechanical bending.  The device in this experiment produced 2 micro-Newtons, not the 5 predicted by Woodward's equations.  These results were for the Mach-5C device.

The next device to be tested was the Mach-6C.  It was more controllable than the 5C.  Woodward claimed it could generate 150 micro-Newtons of thrust.  Thermal errors were eliminated in this experiment.

In another device, the Mach-6CP, the power cables were arranged differently, which may have caused trouble in measurement.  Woodward said that a thrust of between 100 and 200 micro-Newtons should be observed.  It was not.  The actual force may be an order of magnitude lower than Woodward said.  The "error" in this experiment (perhaps electrical leakage) is not Machian in nature.

A separate device was later built to test frequency and force.  The 2-MHz Breadboard Device had a dissipation factor one order of magnitude less than the devices in the previous paragraphs, and it was less vulnerable to overheating.  Also, the coils had a relative phase shift of 90 degrees, which allowed for maximum thrust (180 degrees gives minimum thrust).  Here, a thrust of 1-6 mN was calculated, but none was observed.  However, the dielectric material used in this experiment may have been unsuitable.  While there are some interesting results for some tests, further investigation is needed in this general area.  Recently Woodward has come to agree, but says that his force was smaller than he first claimed.

 

CHAPTER 12 – Thrusting Against the Quantum Vacuum.

Chapter by G. Jordan Maclay (Professor Emeritus, University of Illinois), Quantum Fields LLC, Richland Center, Wisconsin. Notes by David A Roffman.

Chapter 12 explores how quantum vacuum properties may be applied to propulsion.  Quantum electrodynamics (QED) is the theory of how light and matter interact (the photon is the force-exchange particle for electromagnetism).  It has been verified to about 1 part in 10 billion.  QED predicts that the quantum vacuum (the lowest state of the electromagnetic field) holds a fluctuating virtual photon field.  Although the vacuum is everywhere, currently very little force can be derived from it.

Most efforts in this area revolve around Casimir forces, which arise from quantum fluctuations.  These forces have recently been used in microelectromechanical systems.  The chapter considers spacecraft that use the vacuum as hypothetical possibilities, not engineering feasibilities.  For effective propulsion, breakthroughs in materials, methods, and understanding will be necessary.  There are no simple mathematical routes to success.

The field of quantum mechanics is the key to understanding Casimir forces.  It states that the lowest state of a field is the quantum vacuum.  Particles and light are both quantized fields that are fully relativistic.  Any number of photons can exist, and the fields transform readily between coordinate systems (via Lorentz transformations).  In the vacuum, pairs of particles (photons, electron-positron pairs, etc.) appear and disappear almost instantaneously (virtual pairs).  The fluctuating electromagnetic fields that compose the vacuum are quantized.  The vacuum fluctuations parallel Heisenberg's Uncertainty Principle, under which momentum and position cannot both be certain.  Note: Heisenberg's Uncertainty Principle is Δp·Δx ≥ ħ/2, where Δp is the uncertainty in the momentum and Δx is the uncertainty in the position (Δp and Δx are actually standard deviations).  In the lowest state, an oscillator is still vibrating, with an energy of ½ħω.

A zero-point electromagnetic field is an isotropic fluctuating electromagnetic field that occurs in a particle field and is present everywhere, even at zero K with all electromagnetic sources removed.  These fluctuations affect everything in the universe.  The energy comes from virtual photons.  For the quantum vacuum, frequencies corresponding to wavelengths below the Planck length (about 10^-35 m) are ignored.  The energy density is predicted to be of order 10^114 J/m^3.  However, observations show it to be hundreds of orders of magnitude less.  This is the greatest discrepancy in scientific history.  Some proposed solutions to this cosmological-constant problem are renormalization, supersymmetry, string theory, and quintessence.  In this system, real photons have less energy than virtual photons.
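
The size of that discrepancy can be reproduced with a back-of-envelope estimate (mine, not the chapter's, with all order-one prefactors dropped): summing ½ħω over modes up to a Planck-scale cutoff gives an energy density of order ħc/l_p^4.

```python
import math

# Back-of-envelope zero-point energy density with a Planck-scale cutoff:
# integrating (1/2)*hbar*omega up to the Planck frequency gives, up to
# factors of order one, rho ~ hbar*c / l_p^4.
HBAR = 1.055e-34      # J*s
C = 2.998e8           # m/s
L_P = 1.616e-35       # Planck length, m

rho_cutoff = HBAR * C / L_P**4       # J/m^3
rho_observed = 1e-9                  # J/m^3, rough dark-energy scale (assumed)
print(f"cutoff estimate : {rho_cutoff:.1e} J/m^3")
print(f"observed (rough): {rho_observed:.0e} J/m^3")
print(f"mismatch: ~10^{round(math.log10(rho_cutoff / rho_observed))}")
```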

Casimir forces were predicted by Hendrik Casimir in 1948.  An important note is that modes with frequencies greater than the plasma frequency aren't really affected by metal surfaces, due to the transparency of metal at those frequencies.  To avoid infinite quantities, the finite change in the energy of the vacuum due to the surfaces must be computed.  These forces act differently for differently shaped configurations.  For a cube or sphere, Casimir forces push outward.  However, for a rectangular cavity, the forces may be outward, inward, or zero.  For application to space travel, it is hoped that by transferring energy (arising from radiation pressure) from virtual photons to surfaces, net propulsion can be generated.

In the dynamic Casimir effect, parallel plates move rapidly, which can excite the vacuum between them (creating real photons).  Unfortunately, this has yet to be observed through experimentation.  A vibrating mirror could be used here for space propulsion.

Even though the Casimir effect is well known, there are still alternative explanations.  The observed effects could just be derivatives of Van der Waals forces.  They could also be interpreted in terms of source fields.

Despite all of our knowledge, this field, like many others, has limitations on what can be calculated.  Parallel-plate geometry (and, nearly, sphere-flat plate geometry) is the only one for which results have been calculated.  Other surfaces are too difficult.  Right angles provide a real source of trouble.  Properties of binding energies are typically ignored.

The force wasn’t accurately measured until 1998.  Typically measurements are made by having one surface flat, and the other curved.  Recent work has confirmed effects and predictions for finite conductivity, surface roughness, temperature, and uncertainty in dielectric functions.

For conducting surfaces, the parallel-plate Casimir force scales as the inverse fourth power of the plate separation.  The sticking of micromachined membranes to each other may be caused by these forces.  For semiconductor surfaces, however, the equation for the force is more complicated.  In that situation it is possible to tune the plasma frequency with light, temperature, or voltage.  Arnold et al. were able to see an increase in Casimir forces due to light, though this has yet to be repeated.
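
For reference, the standard ideal-conductor parallel-plate result is F/A = π²ħc/(240·d⁴); the sketch below (my own illustration, not from the chapter) shows the steep inverse-fourth-power growth as the gap shrinks.

```python
import math

HBAR = 1.055e-34   # J*s
C = 2.998e8        # m/s

def casimir_pressure(d_m):
    """Attractive pressure between ideal conducting parallel plates:
    F/A = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi**2 * HBAR * C / (240 * d_m**4)

# Inverse-fourth-power scaling: halving the gap raises the pressure 16-fold.
for d in (1e-6, 500e-9, 100e-9):
    print(f"d = {d*1e9:5.0f} nm -> P = {casimir_pressure(d):.3g} Pa")
```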

As for space propulsion, it is possible but not efficient.  An important fact in this area is that if the vacuum energy is independent of craft position, then energy and momentum are conserved.  The spaceship mentioned in the book uses "quantum sails."  One would think the symmetry of radiation pressure must be broken: equal virtual-photon impacts on both sides of a sail will produce no net force, and different materials on each side of the sail will make no difference.  Temperature gradients, however, may cause a force to be exerted.  Invariance of zero-point fluctuations is a precept of the quantum vacuum; if it didn't hold, it would be possible to find a universal rest frame for the universe, and special relativity would be false.  While it would seem that thermal effects could generate propulsion, causality may throw this pleasant result out.  The real question is how to remove energy from the vacuum.

There have been ideas about using negative vacuum energy density to assist in propulsion.  This would make negative mass (or so it is hoped).  As discussed in earlier chapters, negative mass has repulsive properties and could provide endless propulsion.  However, no negative vacuum energy density has ever been produced (it is always positive).  If success occurs, it may also be possible to generate wormholes.  It may be possible to reduce mass through this approach.

A dynamic system is another possibility for propulsion.  The vibrating mirror is one such approach: a mirror is driven so as to generate radiation.  The vibration rate starts at zero, increases, and then returns to zero.  The book considers ideal conditions and efficiency, with all photons produced assumed to fly off in one direction (not all over the place).  Efficiency is still very low, with a momentum-to-energy ratio of about 1/c (c being the speed of light).  The following are ignored in this setup: mass change in the craft, radiative mass shifts, fluctuation and divergence issues, and the dissipative force that makes the mirror vibrate.  Based on the dynamic Casimir effect and known science, about 10^-5 photons would be emitted per second.

Chapter 12 goes further by proposing a craft that relies 100% on the quantum vacuum.  The motor that powers the mirrors would run off quantum vacuum energy, collected via perfectly conducting (uncharged) parallel plates.  Casimir forces would do work on the plates, and with a reversible isothermal process the mirrors could be driven.  A best-case version of such a rocket would peak at 8 m/s, about 10^3 times less than a chemical rocket.  While propulsion is possible, an auxiliary power source would be a good idea.

The mirrors could produce an acceleration of 3 x 10^-20 m/s^2.  This is inefficient and slow.  Not all hope is lost, as the chapter provides ideas to increase acceleration.  A typical response to this problem is to use the dynamic Casimir effect, which hasn't been demonstrated yet.  In 1994, Law predicted a resonant response of the vacuum to an oscillating mirror in a one-dimensional cavity.

If the oscillation frequency is an odd-integer multiple of the fundamental optical resonance frequency, then (for the GHz range) it is possible to increase the acceleration of the theoretical craft by a factor of 10^9 (to 3 x 10^-11 m/s^2).  By raising the temperature of a 1 cm cavity to 290 K, it is possible to gain another factor of 10^3.  This means that after ten years a velocity of 10 m/s is reached (three orders of magnitude less than Voyager).

Results are very dependent upon assumptions.  The book chose plate mass/area and system parameters liberally; for the oscillation amplitudes, however, conservative estimates were used.  To create large oscillation amplitudes it may be necessary to use carbon nanotubes.  Another approach is creating a large gradient in the index of refraction using a plasma front.  With the gas and the semiconductor in this approach, the acceleration of the mirror can reach 10^20 m/s^2.  There will be Fourier components, and there is still much work to be done in this area.  It may also be possible to focus the fluctuations of the vacuum electromagnetic field.

While all this may seem great, there is still too much unknown in physics.  We still cannot magnify Casimir forces to macroscopic levels.  Complex geometries, and the facts of interest that follow from them, are still in the dark.  There is no consensus on the general outlook or on the effects of materials.  Numerous tests are needed to find closure on negative-mass claims and the dynamic Casimir effect.  The only real use of Casimir forces so far has been in micro- and nano-electromechanical systems.  There are more possibilities to be discovered here, including quantum torque.  Only time will tell if success awaits, but miniaturization and progress are expected.

 

CHAPTER 13 – Inertial Mass from Stochastic Electrodynamics.

Chapter by Jean-Luc Cambier, Senior Research Scientist, Propulsion Directorate – Aerophysics Branch U.S. Air Force Research Laboratory, Edwards Air force Base, Edwards, California. Notes by David A Roffman.

Chapter 13 focuses on inertial mass, that is, an object's tendency to resist a change in velocity.  This resistance means that a force must be applied, with the inertial mass as the constant of proportionality.  There may be a relation here to Mach's principle (all matter in the universe is connected, if the principle is extended that far).  Some believe that mass is the result of the interaction of quantum background fields.  If this is true, then by manipulating the background, we can change mass.  While the relation of matter to mass is still not understood, reducing mass would allow higher acceleration for a given force, thus making near-light-speed travel more feasible.  The proponents of this idea listed in Chapter 13 are initially Haisch, Rueda, and Puthoff (HRP).  This chapter probes the idea of stochastic electrodynamics (SED) in relation to other physics.  But as we follow the flow of the chapter, we find criticism of SED and of Haisch and Rueda.  Question: Does this mean that Dr. Puthoff withdrew his support of Haisch and Rueda?

SED is the theory of the interaction of point-like charged particles with fluctuating electromagnetic fields in a vacuum (zero-point fields, ZPF).  The vacuum doesn't exert any force in an inertial frame (this is key, as inertial frames cannot be subject to net forces and accelerations).

Of note is the equivalence principle.  It follows that any freely falling lab that is small enough (so gravity at the top and bottom is the same) is an inertial reference frame.  I mention this to clarify what is meant by an inertial reference frame: even though there is a gravitational field, inside the freely falling lab objects do not accelerate relative to the floor of the lab.

The main goal of HRP was to bring the Einstein and Hopf results down to non-inertial frames, as well as to obtain a retarding force proportional to acceleration.  While it may appear that mass originates 100% from the ZPF, there are still many problems.

HRP proposed an extension of the classical oscillator model, with an oscillating electromagnetic field (with radiation reaction).  They consider the ZPF to be composed of many frequencies, all the way up to the Planck frequency of 1.8 x 10^43 rad/s.  In my view the Planck quantities are really not as important as people make them out to be.  They are all found by simply taking the fundamental constants and performing algebraic operations on them; it is simple dimensional analysis.  There could be a dimensionless number of any size in front of the Planck length, mass, frequency, etc.  In the model, force is caused by dephasing between the oscillating velocity and the oscillating magnetic field.  Radiative damping is nonexistent in this model.  There is no natural cutoff for frequencies.  There is no way to compute mass in an inertial frame (the model is designed for non-inertial frames); inertial mass can only be computed when a frame accelerates while oscillating.  It is hard to believe that a parton (sub-atomic particle) can lose energy when subjected to the intense high frequencies induced by the ZPF.
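
To illustrate that point (my sketch, not the chapter's): the Planck quantities follow from G, ħ, and c by dimensional analysis alone, and the frequency so obtained matches the 1.8 x 10^43 rad/s quoted above.

```python
import math

# Planck quantities are just algebraic combinations of G, hbar, and c;
# dimensional analysis fixes them only up to a dimensionless prefactor.
G = 6.674e-11       # m^3/(kg*s^2)
HBAR = 1.055e-34    # J*s
C = 2.998e8         # m/s

l_p = math.sqrt(HBAR * G / C**3)     # Planck length,  ~1.6e-35 m
t_p = math.sqrt(HBAR * G / C**5)     # Planck time,    ~5.4e-44 s
m_p = math.sqrt(HBAR * C / G)        # Planck mass,    ~2.2e-8 kg
w_p = 1.0 / t_p                      # Planck angular frequency, rad/s

print(f"l_p = {l_p:.3e} m, t_p = {t_p:.3e} s, m_p = {m_p:.3e} kg")
print(f"omega_p = {w_p:.2e} rad/s")  # close to the 1.8e43 rad/s in the text
```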

There are more problems with this model: either a rest mass results that is too high and unphysical, or an ad hoc particle must be introduced to match observations (and if so, the model accomplishes nothing).  Negative mass was considered, but this is too unconventional.  Chapter 13 next mentions only Haisch and Rueda (HR).  They changed the model, but without much success; their results contradicted Boyer (who worked on the same problem).  A note here is that Sunahata is a major proponent of the ZPF approach.

Quantum field theory is the framework in which matter and fields are described at the quantum level.  The ZPF is a tenet of this theory.  The rules and diagrams will be copied verbatim.  There are an infinite number of such diagrams.

There are differences between SED and quantum electrodynamics (QED).  They are competing theories, and the latter is mainstream physics (and more accurate) and relativistic.  SED is based upon assumptions and odd math.  While both rely on the concept of bare parameters, the radiative correction of QED leads to an additive term, whereas in SED a term is multiplied.  QED has logarithmic divergences, while SED has severe quadratic divergences.  Corrections in QED require renormalization of the rest mass, while SED requires acceleration for it to be correct.

An important concept in this area is the Unruh-Davies temperature.  Acceleration is equivalent to heat, at a very slow rate (about 2.5 x 10^20 m/s^2 is needed to produce 1 K).  Of note is that temperature classically is proportional to the mean squared velocity of particles, and hence doesn't depend on acceleration.  Despite the arguments made for SED, QED seems the better theory, as it is more complete and has extremely accurate results.  SED needs to be refined.  It is quite possible that with corrections SED will disappear and become QED.  Granted, SED may describe some results that QED cannot; we cannot be sure.  SED must also be computable, and not make ad hoc decisions.  While SED will remain a "radical" theory, it could have applications to plasma physics.
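
The figure quoted follows from the Unruh-Davies relation T = ħa/(2πck_B); a quick check (mine, not the chapter's) is below.

```python
import math

HBAR = 1.055e-34    # J*s
C = 2.998e8         # m/s
K_B = 1.381e-23     # J/K

def unruh_temperature(a):
    """Unruh-Davies temperature T = hbar*a / (2*pi*c*k_B) for proper acceleration a."""
    return HBAR * a / (2 * math.pi * C * K_B)

# Acceleration needed for T = 1 K:
a_1K = 2 * math.pi * C * K_B / HBAR
print(f"a(1 K) = {a_1K:.2e} m/s^2")        # ~2.5e20, matching the chapter
print(f"T at 1 g: {unruh_temperature(9.81):.2e} K")
```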

CHAPTER 14 – Relativistic Limits of Spaceflight.

Chapter by Bruce N. Cassenti, Associate Professor, Department of Engineering and Science, Rensselaer Polytechnic Institute,  Hartford, Connecticut. Notes by David A Roffman.

Einstein's Special Theory of Relativity is a hallmark of physics, and is incredibly accurate.  Even though it is designed for inertial frames, it can still be applied to spacecraft if the frames are continuously updated.  A tenet of this theory is that the speed of light in a vacuum is constant for all inertial frames, regardless of the observer's motion.  Maxwell's equations are assumed to be perfectly accurate.  Also, physical laws do not break down in any inertial frame.  Time is dilated for an object moving close to the speed of light.  Specifically, this is time dilation due to relative motion, whereas general-relativistic time dilation is due to gravity.  I am not an expert in the latter; however, I am quite familiar with the former.  An analogy that parallels the derivation of the relativistic factor is below.

Consider a river that flows downstream with speed v, as shown in Figures 1 and 2.  How long would it take to swim (at speed c) up a distance d against the current and back?  This is shown in Figure 1.  The time up is d/(c-v) and the time back is d/(c+v), so the total time is 2*d*c/(c^2-v^2).  If one were to swim across the river and back (a total distance of 2*d, as in Figure 2), the total time would be 2*d/sqrt(c^2-v^2).  Dividing the first time by the second gives t1/t2 = c/sqrt(c^2-v^2), so t1 = t2/sqrt(1-v^2/c^2).  These two times over the same distance are equal only if v = 0, and nearly equal as long as v << c.  Notice that if v is not zero, it always takes longer to swim up and back than across and back.

There are a number of paradoxes generated by special relativity, such as the pole vaulter, the twins, faster-than-light travel, and instant messaging.  The pole-vaulter paradox is one of lengths and causality.  A pole vaulter traveling very fast enters a barn and bursts through the front door.  To an outside observer, it appears that the pole (which is longer than the barn) is at one moment completely enclosed in the barn.  To the pole vaulter, though, the barn appears half as long, and the back door opens before the front door closes.  This seems illogical, but when one considers that light from the doors takes time to reach each observer, all is clarified.  The closing of the front door is independent of the back door's opening (if the speed of light is the ultimate speed in existence).

While the pole-vaulter paradox may seem odd, the next one (the twin paradox) is the most commonly discussed.  One twin waits on Earth while the other travels on a relativistic spacecraft.  Each twin sees the other's clock dilate at the same rate.  Because of this symmetric slowdown, each twin expects the other to have aged less, and both cannot be right.  This situation is the paradox.

As for faster-than-light travel, it is possible in a moving inertial frame for an event to precede its cause.  Many physicists like to take the easy way out by ruling out faster-than-light travel, rather than trying to find out how to go that fast.  Instant messaging is, quite simply, a looping scheme in which people receive data before it is sent, which can go on forever.  It is very confusing, so wave-function collapse may help to explain it.  I rule out FTL in another write-up.

With all of these paradoxes, special relativity may seem discredited, but the experimental evidence for it is almost flawless.  The Michelson-Morley experiment splits a beam of light into two and then recombines the beams; the measured speed doesn't change at any point in this process.  Relativistic particles and Doppler shifts have also helped to support special relativity.  Cerenkov radiation occurs when something moves through a substance faster than light would.  A "light boom" follows, releasing radiation at an angle to the moving object.  This radiation is consistent with the theory.

Searches for faster than light particles (Tachyons) have not yet been proven successful.  If they were real, they would not have the time properties expected.

Relativistic rockets are the next subject, and with an acceleration of 1g, it is quite possible for a human to circumnavigate the universe in one working lifetime.  In terms of the photon rocket, it is possible to build, but would be incredibly difficult.  A last note is that special relativity doesn’t allow for faster than light travel.

Fraction of light speed (v/c)     Years that pass for a stationary observer per ship year (γ)
0.5                               1.1547
0.9                               2.29416
0.99                              7.08881
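
The table values are just the Lorentz factor γ = 1/√(1 - (v/c)²); a few lines of code (my own check, not from the chapter) reproduce them:

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta*c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Reproduces the table: outside years elapsed per ship year at each speed.
for beta in (0.5, 0.9, 0.99):
    print(f"v/c = {beta:4.2f} -> gamma = {gamma(beta):.5f}")
```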

CHAPTER 15 – Faster-than-Light Approaches in General Relativity.

Chapter by Eric W. Davis, Institute for Advanced Studies at Austin, Austin Texas. Notes by David A Roffman.

This chapter considers some space-travel options common in science fiction, such as warp drives, wormholes, and faster-than-light (FTL) travel.  While all of these options may be possible, some are more viable than others.  To achieve any of them, it is necessary to create very specialized local geometries and to obtain negative energy.  Negative energy has a negative energy density, and is of interest for its special ability to create wormholes.  Despite its having been created in the lab, physicists shy away from the name because of its sensationalist associations.

To utilize any of the advanced technology in the last paragraph, violations of energy conditions, among other things, must occur.  One way to obtain negative energy is to squeeze quantum vacuum states.  A few more methods are the Casimir effect, static radial electric (and magnetic) fields (from high-intensity tabletop lasers), and gravitationally squeezed electromagnetic zero-point fluctuations (ZPF).  The "static" fields are static while at peak intensity.

By reducing the energy below the ZPF level, the vacuum becomes squeezed, resulting in negative energy.  This is because the vacuum is defined as the state of vanishing energy; a state with less energy than this has a renormalized (negative) expectation value of energy density.  The squeezed vacuum oscillates between negative and positive energy density, being on average positive.  As a side note, zero-point energy is the lowest energy an object can have, as given by the Uncertainty Principle.

Energy could be extracted from this squeezed vacuum by use of an ultrahigh-intensity laser coupled with fast-moving mirrors.  In the book's example, the positive and negative pulses are released at equal time intervals.  Rapidly rotating mirrors in this setup would serve to separate the positive and negative energy, if the beams hit the mirrors at a very shallow angle.  Another method would be to superimpose photons upon each other to create a beam of negative energy.  The tools for understanding negative energy have only just become available.

Negative energy is produced naturally by gravity, which drags the ZPF downward.  There is supposedly a halo of negative energy around the Earth and other astronomical bodies.  So gravity naturally creates the squeezed vacuum states needed to create wormholes.  We have no way to gravitationally squeeze fields in the lab.

The most renowned way to produce negative energy is the Casimir effect.  This effect can be extended via a moving (electrically conducting) mirror.  Accelerating such a mirror creates a negative energy flux, and the frequency distribution changes with acceleration.  However, the mirror effect is not very significant for an operational system, as there are better ways to produce negative energy.  Side note: accelerating charges create radiation (photons).

The electromagnetic Casimir effect can be used to generate a wormhole, but there is a catch.  Very small cavity separations are needed to create a decent-sized wormhole, but such small plate separations destroy the Casimir effect, as Van der Waals forces take over at those distances.

So far traversable wormholes and warp drives have been discussed.  All credible theories in this area are full of wormholes, time machines, and warp drives of sorts.  Most faster-than-light schemes involve the general theory of relativity, some alterations of spacetime, and mass.

Traversable wormholes must use exact metric solutions.  The following are some desired requirements: travel through the wormhole must take less than one year as seen by travelers and outside observers alike; little time dilation for travelers from relativistic effects; no more than 1 g of acceleration for travelers; travel through the wormhole must not locally exceed the speed of light; travelers must not be torn to shreds by the wormhole walls; no event horizon; and no singularity (as in a black hole).  With a wormhole, only the wormhole mouth is of importance (as far as the physics is concerned).

Alcubierre derived a four-dimensional warp-drive metric.  A spacecraft within such a warp bubble never locally exceeds the speed of light, although it may look that way from outside.  Others have devised warp-tube concepts.  However, any realistic warp drive would actually be quite slow, and would require immense amounts of negative energy even to achieve that.

Wormholes, while possible, still require quite a bit of negative energy, and the amount rapidly increases as the wormhole widens.  But the warp drive requires even more energy than the wormhole.  As such, warp drives will never be technologically feasible unless new geometries and ways to generate massive amounts of negative energy are found.

Quantum inequalities (QI) are conjectures extending from the Heisenberg uncertainty principle.  Some of their postulates are: (1) the longer a negative energy pulse lasts, the weaker it must be; (2) the positive pulse that follows must exceed the strength of the negative pulse; and (3) the longer the time interval between the two pulses, the larger the positive pulse must be.  These conditions are all violated by the Casimir effect (and other effects).  This has not been verified by lab experimentation.

The net energy stored in a warp bubble should be less than the total rest energy of the spacecraft (this imposes a speed limit).  Warp bubbles would be slow; in fact, some may move at a literal snail's pace.  Side note: if the kinetic energy of an object is much less than its rest energy, it is non-relativistic.  Some equations yield results showing that a wormhole requires comparatively little negative energy.  While wormholes are the most plausible route to interstellar space (and general relativity provides a recipe for their geometric and material components), no one knows how to build one.  It is of great curiosity whether a wormhole could be found and enlarged, or whether one must be built from scratch.

Negative energy is predicted to produce distinctive lensing, chromaticity, and micro- and macro-lensing events in nature.  Spectral analysis can distinguish between these negative-energy effects and those of positive energy.  Gamma-ray bursts have been suggested to involve negative energy.  It is suggested that negative energy may have existed in amounts comparable to regular energy in the moments after the big bang.

Whenever light strikes a negative energy region, the light rays diverge outward, leaving a zero-intensity umbra region.  This is unlike ordinary divergence, as the light enhancement at the edges can be far greater.  We have the technology today to detect any abnormalities produced in a lab by negative energy.  The results of Davies and Ottewill show that temperature drops (in the kinetic-energy sense, if things with rest mass are being discussed) result from negative energy.  Wormholes directly imply time machines, which could in principle be built, though with much effort.  There are opportunities to break causality in general relativity.  Some have put forth that there is no predestined time line (general relativity included).  Time machines are also known as closed timelike curves.  They could exist at least at the semi-classical quantum-gravity level.  Local chronology doesn't imply a global one.  Some say that temporal paradoxes are nonexistent.  Causality may be maintained in relativistic field theories even if there is faster-than-light travel.

In terms of conservation of momentum, a warp drive may emit radiation.  However, much more research is needed; conservation of momentum for faster-than-light travel has yet to be treated in the literature.  I do not believe that FTL is possible, as anything that has mass cannot exceed or even reach the speed of light due to the relativistic factor, which blows up to infinity (and hence so does the energy, which is not allowed) as the speed approaches the speed of light.  Only massless particles (such as photons and gravitons) may travel at the speed of light in a vacuum.  The only way to have FTL is to use a negative mass squared, which means the mass is imaginary.  This is nonsense.  If the mass were merely negative, it would correspond to mass having a sort of "charge," like the Coulomb force or the strong force (a minus sign is not a complex number), and I could understand it.  I do, however, believe wormholes are possible, as negative mass is involved there, not imaginary mass.

CHAPTER 16 – Faster-than-Light Implications of Quantum Entanglement and Nonlocality.

Chapter by John G. Cramer, University of Washington, Seattle, Washington State. Notes by David A Roffman.

Two particles are typically thought to be independent of one another, even if they emerged from the same event.  However, this is wrong: the two particles still interact through quantum entanglement.  This action-at-a-distance connection is known as nonlocality.  Einstein, Podolsky, and Rosen argued that nonlocal connections would require faster-than-light influences (which is odd, since Einstein invented special relativity; the two appear to be in direct conflict).  Their work became known as the EPR paper, and the tests of it as EPR experiments.  What must be unraveled is whether this concept exists in nature only, or whether it can be extended to manipulation.

Whatever happens to one photon must happen to the other.  Entanglement is typically the result of a conservation law within a system.  The Bell inequalities deal with these topics when polarization is considered.  They demonstrate that semi-classical, local, hidden-variable theories are in conflict with standard quantum mechanics.  The EPR experiments have validated quantum mechanics and helped to refute those other theories.  When the inequalities are violated, it indicates that either parameter independence or outcome independence is broken.

While outcome independence is evident in the quantum formalism, parameter independence is not necessarily evident.  These concepts become important in "no-signal" theorems.  Results such as those of the EPR experiments lead some to conclude that faster-than-light effects cannot be used for communication.  This rests on the claim that a change in one subsystem will not be apparent in another.  These are "proofs," but they are not necessarily solid.  Nonlocal signaling would only be possible if parameter independence is violated.

Nonlocal communication is not against special relativity.  Faster-than-light travel in special relativity is banned because it un-levels the playing field among reference frames.  There can be no fixed simultaneous action (even for time-traveling signals), as signals have no fixed timing or order.  Transmission and arrival cannot be synchronized, as both paths are delay-dependent.  Special relativity is maintained because the light-like lines fall into place under Lorentz invariance.

If nonlocal signaling is possible, then causality is violated.  There might be a universal reference frame, but nonlocal signaling cannot verify this.  Einstein objected to quantum mechanics via a gedanken (thought) experiment, which pertained to the momentum domain.  Nonlocal signaling (if possible) would most likely occur in Einstein's original domain.  One way of producing entangled photon pairs employs the optical process of spontaneous parametric down-conversion.  This uses a laser as a pump and a nonlinear crystal to transform one photon into two photons whose energies and momenta add up to those of the original photon.
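
Energy conservation fixes the down-converted frequencies: omega_pump = omega_signal + omega_idler, or in wavelengths 1/lambda_p = 1/lambda_s + 1/lambda_i.  A quick check with illustrative numbers (the 405 nm pump is a common laboratory choice, not a value from the book):

# Degenerate down-conversion: one pump photon splits into two photons
# of equal energy, each with half the frequency (twice the wavelength).
lambda_pump = 405e-9  # m
lambda_signal = lambda_idler = 2 * lambda_pump  # 810 nm each
# Verify 1/lambda_p = 1/lambda_s + 1/lambda_i (energy conservation):
assert abs(1/lambda_pump - (1/lambda_signal + 1/lambda_idler)) < 1e-6
print("Each down-converted photon:", lambda_signal * 1e9, "nm")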

A famous experiment in this area is the Shih group's "ghost interference" experiment.  Vertically polarized photons that pass through a single or double slit produce interference.  There are some nonlocal signals, but this is not communication, because a classical communication link is used to impose coincidence counting on the photons.  Coincidences are needed to preserve the interference.  It is still not known with certainty whether the coincidence requirement can be removed.  Coherence and entanglement are both important for nonlocal communication; however, an increase in one cuts down the other.  A trade-off point is being sought to optimize the situation.

To send a meaningful signal, binary code could be used.  Multiple photons would have to be sent in order to make progress in communication; at least 10-100 photons would be needed.  This is very difficult, so perhaps signals could be transmitted in bursts.  Signals could be received before they are sent if nonlocal communication is possible.
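
A back-of-the-envelope sketch of what such a signal would cost in photons, assuming (my assumption, for illustration) that the 10-100 photons are needed per binary digit:

message = "GO"
bits = ''.join(format(ord(ch), '08b') for ch in message)  # ASCII -> binary
photons_per_bit = (10, 100)  # range quoted in the notes above
print(f"'{message}' -> {bits} ({len(bits)} bits)")
print(f"Photon budget: {len(bits)*photons_per_bit[0]} to {len(bits)*photons_per_bit[1]} photons")
# Sending a burst of photons per bit, rather than one photon at a time,
# is what would make the scheme even remotely practical.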

If the previous situation occurs, a time loop could be created by resending the message back after it is received.  If one sends a book's manuscript back in time by years and the receiver publishes it, then the book was never written.  Information was sent, but in the new reality (on the receiving end), the book wasn't written, only received.  This is a paradox.

Weinberg and Polchinski have published ideas that may allow the previous situation to work.  Their work is damaging to the Copenhagen interpretation of the wave function as "observer knowledge."  Polchinski has shown that a small nonlinear change turns the hidden nonlocality of ordinary quantum mechanics into one that can be used for nonlocal communication.  Quantum mechanics may be slightly nonlinear, but because gravity is weak and space is relatively flat, any such effect would be very small.  More experiments are needed in this area.  There is one experiment in progress that might produce a coincidence-free version of the ghost interference experiment.  An important note is that quantum mechanics ignores gravity's curvature of space.  That doesn't mean one can't plug in a gravitational potential: one can insert it into the Schrödinger equation, but the results are only an approximation.

CHAPTER 17 – Comparative Space Power Baselines.

Chapter by Gary L. Bennett, Director, Metaspace Enterprises, Emmett, Idaho. Notes by David A Roffman.

This chapter discusses spacecraft power sources and energy storage options.  We mostly use chemical approaches to space travel; however, there are ways to improve.  Free-radical propulsion utilizes neutral atomic fragments produced by the dissociation of molecules.  Another approach is metastable propellants: atoms or molecules in excited states with radiative lifetimes of more than a microsecond.  Multiple kinds of propulsion systems may be needed for a long mission.  There is a proposal to use the Sun as a slingshot for extra-solar missions.

A common space power system employs solar panels.  NASA has the goal of increasing solar panel efficiency.  Solar panels will not work well at distances past Mars.  Panels around Jupiter would have to be 25 times bigger to produce the same power as they do around Earth.  For the dwarf planet Pluto, they would have to be over 1,500 times as large.
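
The scaling comes straight from the inverse-square law: sunlight intensity falls as 1/d^2, so panel area must grow as d^2 to deliver the same power.  A quick check (mean orbital distances in AU; the small differences from the book's figures just reflect where in its orbit each body is assumed to be):

# Required panel area relative to Earth scales as (distance in AU)^2.
for body, d_au in [("Mars", 1.52), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(f"{body}: panels ~{d_au**2:.0f}x larger than at Earth")
# Jupiter: ~27x (book: 25x); Pluto: ~1560x (book: >1500x)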

Temperature, radiation fields, and available light must all be considered.  Primary batteries can provide short-term power boosts, while rechargeable ones deliver weaker but steadier power.  Radioisotope power relies on converting heat from radioactive decay into electrical energy.  Nuclear power has been used in space for quite some time, including in the first military GPS satellites, weather satellites, Apollo lunar data collectors, the Voyager probes, the Mars rovers, and Galileo.  Some of these power sources performed so well that they lasted decades after they should have stopped working.

The New Horizons mission, currently en route to Pluto, uses nuclear power.  The mission will rely on this power source all the way out to certain parts of the Kuiper Belt.  Nuclear power has been used in space for decades (since the 1960s), and will continue to be used for probes exploring our solar system (and perhaps beyond).  For the record, the first nuclear reactor in space was the SNAP-10A, launched in 1965.

When spacecraft do fail, a significant number of malfunctions are due to other system errors, not power shortfalls.  America wasn't the only country to have used fission power.  The former Soviet Union launched perhaps as many as 33 spacecraft using this power source (31 have been confirmed).  For nuclear propulsion, the standard proposed method is to heat hydrogen gas with the reactor and shoot it out the back end of the craft.

Nuclear propulsion promises to be two to three times faster than chemical propulsion, although materials may limit its efficiency.  There are a variety of designs, but those were discussed in earlier chapters.  While fusion and antimatter processes are best for space travel, they are currently out of reach.  A common misconception is that rockets push against something to move.  In reality, Newton's 2nd Law for a rocket is F = dp/dt = m*(dv/dt) + v_exhaust*(dm/dt), where v_exhaust is the exhaust velocity relative to the rocket and dm/dt is negative.  Rockets move because momentum is carried away by the propellant: as the rocket's mass decreases, the exhaust leaving at high velocity pushes the vehicle forward.
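
The payoff of a higher exhaust velocity follows from the Tsiolkovsky rocket equation, delta-v = v_exhaust*ln(m0/mf), which is what the momentum balance above integrates to.  A sketch with representative exhaust velocities (roughly 4.4 km/s for hydrogen/oxygen chemical engines and about twice that for proposed nuclear-thermal rockets; these are common textbook values, not numbers from this chapter):

import math

def delta_v(v_exhaust, m0, mf):
    """Tsiolkovsky rocket equation: ideal velocity change, no gravity or drag."""
    return v_exhaust * math.log(m0 / mf)

m0, mf = 100.0, 20.0  # initial and final mass (arbitrary 5:1 mass ratio)
for name, ve in [("chemical (LH2/LOX)", 4400.0), ("nuclear thermal", 8800.0)]:
    print(f"{name}: delta-v = {delta_v(ve, m0, mf)/1000:.1f} km/s")
# Doubling the exhaust velocity doubles the delta-v for the same mass ratio.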

CHAPTER 18 – On Extracting Energy from the Quantum Vacuum.

Chapter by Eric W. Davis, Senior Research Physicist, and H. E. Puthoff, Director, Institute for Advanced Studies at Austin, Austin, Texas. Notes by David A Roffman.

The quantum vacuum supposedly contains massive amounts of energy.  This chapter considers possible ways to extract that energy.  If the vacuum is considered to contain energy at frequencies up to the Planck level (10^43 Hz), then the energy density is 10^113 J/m^3.  Chapter 18 ignores the coupling of the electromagnetic vacuum to the quantum chromodynamic (QCD) vacuum; the two are treated separately.  In the QCD picture, the region exterior to a hadron is ordinary vacuum and excludes quark color; the interior has gluons that allow color to exist.  By color it is not meant that quarks actually have a color like red or blue.  Instead, color refers to charge.  This is not electrical charge (for which there is positive and negative), but color charge (red, blue, and green).
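
The quoted density follows from integrating the zero-point energy (hbar*omega/2 per mode) over all electromagnetic modes up to a cutoff, which gives an energy density of hbar*omega_max^4/(8*pi^2*c^3).  A sketch reproducing the order of magnitude, taking the cutoff from the 10^43 Hz figure above:

import math

hbar = 1.0546e-34  # J*s
c = 2.998e8        # m/s
nu_max = 1e43      # Hz, Planck-scale cutoff from the text
omega_max = 2 * math.pi * nu_max
# Energy density of the EM zero-point field up to the cutoff frequency:
rho = hbar * omega_max**4 / (8 * math.pi**2 * c**3)
print(f"ZPE density ~ {rho:.1e} J/m^3")  # ~10^113-10^114 J/m^3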

Zero-point field (ZPF) energy causes the Lamb shift, spontaneous atomic emission, cold van der Waals forces, the Casimir effect, radiation pressure/noise, and (according to some) the so-called "cosmological constant."  This chapter considers how to extract and use zero-point energy (ZPE) for propulsion.  A vacuum battery can supposedly be built by using the Casimir force to do work on a stack of charged conducting plates.  By giving both Casimir plates the same polarity, an electrostatic force can oppose the Casimir force.  Making the opposing force slightly weaker than the Casimir force allows energy to flow out.  Such a battery would need to be recharged at a cost in energy.

Any such battery would require more energy input than it yields as output, although it does demonstrate the ZPF doing work on matter.  Mead and Nachamkin proposed that resonant dielectric spheres be used to obtain electrical power from the vacuum.  If the spheres are slightly detuned from each other, a beat-frequency downshift of the higher vacuum frequencies should yield usable energy.  The converter in this scheme must have a tuner, a transformer, and a rectifier.

Although this scheme has been shown to work in Air Force simulations, no one has tested the idea experimentally yet.  The ZPF is usually attributed to the Heisenberg uncertainty principle, which is what causes the fuzziness in measurement.  Stochastic electrodynamics (SED) is an alternative theory that helps explain the ZPF; however, it has many problems, as mentioned in earlier chapters.  ZPF-induced voltage fluctuations have been searched for, and can be detected at frequencies in the 100 GHz range (perhaps even the 100 MHz range).

With high voltages, high frequencies, and coils at cold temperatures, it may be possible to gain energy.  Another proposal is to use ground-state energy reduction.  There is a paradox (as identified by Boyer and Puthoff) in this topic: ground-state atoms emit no radiation, yet classical electrodynamics says that accelerating charges must radiate.  How can this be?  Quantum mechanics holds that the ground-state electron has zero orbital angular momentum, while SED says its classical orbital velocity is c/137.  The plan here is to suppress the vacuum modes around atoms in a microcavity, which may release energy.

Heavier elements in this scheme would be able to use larger Casimir cavities.  These elements may help to reduce zero-point radiation.  A device could use a cube riddled with billions of tunnel cavities.  The Casimir forces would build up and be used to generate power.  It's hard to engineer such a device; careful cross sections and microchip fabrication procedures would have to be used.  Gas passing through the cavities would carry electrons (and thus energy).  Hydrogen gas might be used to drive this process.

Although the Casimir force may seem analogous to a non-rechargeable battery, there may be a way around this problem.  Thin-film switchable mirrors may make a rechargeable battery possible.  Another tunable device would use photonic crystals, whose band gap determines which photons can be transmitted through the tunable structure.  Tests performed separately in a hydrogen atmosphere showed no noticeable difference, and changes in dimensions had no effect on energy production.

An interesting phenomenon mentioned in this chapter is the electromagnetic vortex.  Plasma vortices are thought to be related to ball lightning.  In experiments, electrons became so concentrated that they seemed to violate the space-charge law.  These vortices could pierce metal.  They were later named EVs (electrum validum, "strong electron").  Ken Shoulders is a key researcher in this area.  Puthoff and Piestrup have proposed that the ZPF (Casimir forces in this case) could be holding the electrons at that density, via an inverse fourth-power law.  Some corrections were needed, and they were resolved.

Although EVs can be easily produced in the lab, the claim that they extract vacuum energy has yet to be proven; more research is needed in this area.  The second-quantized QED theory applies to the electromagnetic vacuum, and it describes fluctuations as standing or traveling wave modes.  A problem with the vacuum energy is that it can be 120 orders of magnitude greater than the value inferred from the cosmological constant.  People have tried to ignore vacuum energy, as it is so huge and difficult to reconcile, but it must be dealt with.

Vacuum energy must be degradable in order to be usable for a decent battery.  There are claims against the quantum vacuum; some say its supposed effects can be explained as self-fields generated by matter.  Degradable forms of the vacuum include those that are gravitationally squeezed and those that are red-shifted.  Another option is "melting" the vacuum.  Protons and neutrons are each composed of three quarks, and quark-gluon plasma collisions could yield enormous amounts of energy.  These schemes are not supported by solid evidence.  Only new particle accelerators and time will tell what works and what doesn't.

CHAPTER 19 – Investigating Sonoluminescence as a Means of Energy Harvesting.

Chapter by John D. Wrbanek, Gustave C. Fralick, Susan Y. Wrbanek, and Nancy R. Hill, NASA Glenn Research Center, Cleveland, Ohio. Notes by David A Roffman.

Sonoluminescence uses sound to create light in cavitating fluids.  The process generates bubbles that can produce temperatures of thousands of degrees Celsius or more.  A spherical collapse of a bubble could have a peak release temperature of 3 x 10^8 K.  The bubble heat/light flashes were discovered in the 1930s during sonar research.  A possible model for sonoluminescence is a two-stage plasma bubble with a low-density halo and a high-density core.
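
One standard way to see how sound can produce such temperatures is adiabatic compression: for an ideal gas, T scales as (R0/Rmin)^(3*(gamma-1)) when a bubble of radius R0 collapses to Rmin.  This is a generic textbook estimate of my own, not the two-stage plasma model mentioned above:

T0 = 300.0    # ambient temperature, K
gamma = 5/3   # monatomic gas (e.g., argon dissolved in the liquid)
for ratio in [5, 10, 20]:  # illustrative compression ratios R0/Rmin
    T_peak = T0 * ratio**(3 * (gamma - 1))
    print(f"R0/Rmin = {ratio}: T_peak ~ {T_peak:,.0f} K")
# Even modest compression ratios reach thousands to tens of thousands of
# kelvin; far more extreme, non-adiabatic collapses are invoked for the
# 10^8 K figures.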

It may be that fusion can be achieved through sonoluminescence.  The heat associated with the process may be enough, although fusion has yet to be observed.  It has been noted that different liquids produce bubbles of varying temperature.  Some have projected that temperatures of one million kelvin can occur in acetone, whereas only half a million kelvin is projected for water.  Deuterated (heavy) water typically results in better bangs from sonoluminescence.

Taleyarkhan's group has recorded neutron and gamma-ray flux.  These results have been confirmed by others, although there is not enough evidence to say that nuclear reactions are taking place.  To be effective for power generation, devices need to be compact.  Calculations in the book show that the minimum cell size is 4.6 mm in diameter.

NASA has launched broad-ranging experiments to study sonoluminescence.  One of their tests confirmed that brightness increases by 20% in zero gravity.  Standard equipment includes a flask containing the liquid, ultrasonic transducers, a piezoceramic amplifier, and a generator.  If the gas is not saturated in the liquid, a single bubble is produced; if the gas is saturated, multiple bubbles are produced.

In experiments with varying flask sizes, the smaller containers worked best.  This may be because there was less opportunity for dissipation.  As for the experiments with water versus heavy water, heavy water was needed to observe what is described in the next paragraph.

Plates that combined different materials were used in the experiment.  Under magnification, it became apparent that some metals had fused over the course of the experiment.  The temperatures required for this welding were around several thousand kelvin, though it cannot be ascertained with certainty that temperature was the cause.

In the case of fusion, it is necessary to find its products; scintillation detectors may be used for this.  To make use of the energy produced, it is a good idea to transform thermal energy into electrical energy.  A heat gradient can produce electricity via the Seebeck effect.  Thermoelectric power has been used in space missions: the hot side is heated by a radioisotope source, while the cold side radiates to space.
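
In the simplest model, a thermocouple leg develops an open-circuit voltage V = S*deltaT and delivers a maximum power of V^2/(4*R) into a matched load.  The numbers below are generic illustrations of my own choosing, not device values from the chapter:

S = 200e-6   # Seebeck coefficient, V/K (typical thermoelectric material)
dT = 500.0   # temperature difference, K (hot radioisotope side vs. cold space side)
R = 0.01     # internal resistance of one leg, ohms
V = S * dT              # open-circuit voltage
P_max = V**2 / (4 * R)  # power delivered into a matched load
print(f"V = {V*1000:.0f} mV, P_max = {P_max:.2f} W per element")
# Many elements must be stacked in series, and every extra milliohm of
# resistance eats into the output -- hence the drive to reduce resistance.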

NASA is currently researching thin-film ceramic thermocouples for hot environments.  There needs to be a way to reduce resistance in order for a device to work, as each individual unit (as shown in the book) loses a lot of power to it.  Despite successes with some applications of sonoluminescence, there is still much to be learned.  NASA and (more likely) other organizations will continue to study this curious phenomenon.

CHAPTER 20 – Null Tests of “Free Energy” Claims

Chapter by Scott R. Little, EarthTech International, Austin, Texas. Notes by David A Roffman.

The first energy source mentioned in this chapter is zero-point energy.  All the term means is that the lowest energy state of a system is non-zero.  An example is the quantum harmonic oscillator, for which E = hbar*omega*(n + 1/2).  Here hbar is Planck's constant divided by 2*π, omega is the angular frequency of oscillation, and n is an integer greater than or equal to zero.  Notice that the lowest possible energy is not zero, but hbar*omega/2.  In this chapter the electromagnetic zero-point field, which arises from the uncertainty principle, is discussed.  The force per unit area for the Casimir force is -hbar*c*π^2/(240*a^4), where "a" is the plate separation.  This relationship holds for perfectly conducting plates.  The chapter discusses several failed attempts to extract net energy, including Ken Shoulders' charge clusters, the Potapov device, and sonofusion by Roger Stringham.
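
Plugging numbers into the two formulas above (illustrative values of my own choosing: an optical-frequency oscillator and a 100 nm plate gap):

import math

hbar = 1.0546e-34  # J*s
c = 2.998e8        # m/s

# Zero-point energy of a quantum harmonic oscillator: E0 = hbar*omega/2.
omega = 2 * math.pi * 5e14  # rad/s, an optical-frequency oscillator
print(f"E0 = {hbar * omega / 2:.2e} J")

# Casimir pressure between ideal plates: P = -hbar*c*pi^2 / (240*a^4).
a = 100e-9  # plate separation, m
P = -hbar * c * math.pi**2 / (240 * a**4)
print(f"Casimir pressure at a = 100 nm: {P:.1f} Pa (attractive)")
# The a^-4 dependence is why the force only matters at sub-micron gaps.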

Of interest is H. E. Puthoff's explanation of the ground state of hydrogen as a dynamic balance between energy lost to acceleration radiation and energy absorbed from the zero-point field.  In my view this explanation is not necessary.  The ground state of hydrogen is calculated by inserting the Coulomb potential into the Schrödinger equation and then using Frobenius' method to solve it.  The only thing special about this problem is that it can be solved by hand.  The state just is; there is no allusion to anything else happening.  There were several experiments to test Puthoff's idea: if he were right, then hydrogen placed between Casimir plates should have a lower ground-state energy.  The energy of a hydrogen atom is normally given by approximately -13.6 eV/n^2.  Most experiments tested for heat released by hydrogen passing through the cavity; however, the temperature increases observed could be explained by the Joule-Thomson effect.  In some runs with powdered metals, the hydrogen reacted with oxygen gas.  A different approach was taken after these failures: absorption spectroscopy.  In this experiment (conducted at the Synchrotron Radiation Center of the University of Wisconsin-Madison), hydrogen molecules were used and the species was probed with high-energy UV radiation.  The results were negative.
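
The -13.6 eV figure itself falls out of fundamental constants via the Bohr formula, E_n = -m_e*e^4/(8*epsilon_0^2*h^2*n^2), so any Casimir-cavity shift would have to show up as a deviation from these values.  A quick check:

m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
h = 6.626e-34     # Planck constant, J*s

def E_n(n):
    """Bohr energy level of hydrogen, in eV."""
    return -m_e * e**4 / (8 * eps0**2 * h**2 * n**2) / e

for n in (1, 2, 3):
    print(f"E_{n} = {E_n(n):.2f} eV")  # -13.60, -3.40, -1.51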

Electromagnetic devices are briefly mentioned; they fail to produce net energy, and the claims of net gain arise only from the difficulty of correctly measuring power output (even with expensive equipment).  Cold fusion is mentioned next.  Many readers probably know of the claim of success some decades ago, but that was just fraud; the results of cold fusion are not reproducible.  Here's the problem I have with it.  Protons repel one another through the Coulomb force.  The attractive strong nuclear force acts only at separations of about 10^-15 m, which means the particles must have a lot of energy to get that close.  Temperature is classically proportional to the mean square velocity of the particles.  Low (cold) temperatures mean low velocities, which in turn mean low energies.  Such particles should never overcome the Coulomb barrier.
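
To make the argument quantitative: the Coulomb barrier between two protons at the range of the strong force is tens of millions of times larger than a room-temperature thermal energy.  A sketch using standard constants, nothing chapter-specific:

e = 1.602e-19     # elementary charge, C
k = 8.988e9       # Coulomb constant, N*m^2/C^2
kB = 1.381e-23    # Boltzmann constant, J/K

r = 1e-15  # m, range of the strong nuclear force
barrier = k * e**2 / r          # Coulomb potential energy at that separation
thermal = 1.5 * kB * 300.0      # mean thermal kinetic energy at room temperature
print(f"Coulomb barrier: {barrier/e/1e6:.2f} MeV")
print(f"Thermal energy at 300 K: {thermal/e:.3f} eV")
print(f"Ratio: ~{barrier/thermal:.1e}")
# Room-temperature particles fall short by a factor of ~10^7, which is
# the author's objection to 'cold' fusion (quantum tunneling aside).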

CHAPTER 21 – General Relativity Computational Tools and Conventions for Propulsion.

This chapter is about notation, conventions, and computer programs.  The math can be long and tedious.  While multiplying through by a negative sign is normally not a big deal, physicists, engineers, and mathematicians will be at each other's throats over which version of an equation is best (all versions are really different facets of one another).  One major difference in notation between physics and math people is in the spherical coordinate system: the zenith angle is phi for math majors and theta for physics majors.  This matters because the roles of phi and theta are switched between the two fields.  My personal belief is that theta should be used for the x-y plane (I side with the math majors), because in polar coordinates the x-y plane's angle is always called theta, and adding one more dimension shouldn't mean renaming what was theta to phi.  However, I will use the physics standard, because that is what others in my field are used to.
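
The practical consequence of the two conventions shows up the moment you convert to Cartesian coordinates, as in this small sketch (both conventions implemented side by side; the sample point is arbitrary):

import math

def physics_to_cartesian(r, theta, phi):
    """Physics convention: theta = zenith (from +z), phi = azimuth (in x-y plane)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def math_to_cartesian(r, theta, phi):
    """Math convention: theta = azimuth, phi = zenith -- the roles are swapped."""
    return physics_to_cartesian(r, phi, theta)

# The same (r, angle1, angle2) triple names a different point in each convention:
print(physics_to_cartesian(1.0, math.pi/2, 0.0))  # approx (1, 0, 0)
print(math_to_cartesian(1.0, math.pi/2, 0.0))     # approx (0, 0, 1)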

The book has its own set of recommendations on how to write equations for spacecraft work: SI units, positive time-like displacements, the more common metric signature, the Riemann tensor computed through Christoffel symbols, and various other conventions favorable to spaceflight.  While no single piece of software is endorsed, pros and cons of each are made available.

Although the details are in the book, readers of this article are encouraged to research each kind of software themselves.  The recommended programs for spaceflight work (those that have been tested so far) are Maxima, Mathematica, and Maple, each of which has its own add-ons.  Note that there is no perfect piece of software for space applications.  I do quite a bit of numerical modeling myself and write my own programs rather than use someone else's, so I know exactly what's going on.
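
As a flavor of what these packages automate, here is a minimal sketch in Python with SymPy (my choice for illustration; it is not one of the packages reviewed in the book) that computes Christoffel symbols directly from a metric, using the Schwarzschild solution as a test case:

import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
M = sp.symbols('M', positive=True)
x = [t, r, th, ph]

# Schwarzschild metric, signature (-,+,+,+), with G = c = 1:
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def christoffel(a, b, c):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[b])
                                         + sp.diff(g[d, b], x[c])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(4)) / 2)

print(christoffel(1, 0, 0))  # Gamma^r_tt = M*(r - 2*M)/r**3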

People have been attempting to write programs for physics yet to be fully worked out, such as exotic Casimir geometries.

CHAPTER 22 – Prioritizing Pioneering Research.

[Figure: Chapter 22 notes]
