I think what I most enjoy about the possibility of the EmDrive is that it's similar to how things were often discovered during the "golden age" of science.
Often, engineers or inventors would create something and then scientists would have to explain why it happened. In the last few generations, physicists and mathematicians would come up with theories and engineers would have to build equipment to test those theories.
The EmDrive is one of the rare modern situations where someone has engineered a device that shouldn't work according to what we know and the scientists are having to come up with the explanation.
Personally, I find solving a mystery more exciting than purely intellectual theorizing, and the EmDrive has created a very interesting mystery.
> I think what I most enjoy about the possibility of the EmDrive is that it's similar to how things were often discovered during the "golden age" of science.
There were also a lot of false discoveries. Someone would build a machine that did something, and after a few iterations it would turn out to be only a bad measurement (or a scam). It happened with theoretical ideas too; the most famous are the aether and phlogiston. They were good ideas that explained a lot of experiments, but after a time it became obvious that they were wrong.
> The EmDrive is one of the rare modern situations where someone has engineered a device that shouldn't work according to what we know and the scientists are having to come up with the explanation.
The obvious explanation is that they have an error in the measurement. Probably something gets very hot and the thermal effect produces a tiny force; some experiments show that it needs some time to build up and continues a little after the current is off, like a thermal effect would. Another possible explanation is that the high currents produce a magnetic field that produces a tiny force. They are putting a lot of electricity and energy into a small device, and they only measure a tiny force, so it's very difficult to rule out experimental errors. A similar case is the faster-than-light neutrinos, which turned out to be the product of a bad measurement.
It's a very, very small effect right now, near the noise threshold. Magnetic induction from the power leads can be bigger than the effect. Testing in vacuum eliminated some other sources of noise. That's why there's such skepticism.
The encouraging thing about having some theory now is that it gives some insights into what to do to get a bigger effect. Lilienfeld built a field effect transistor in 1925. This was a major breakthrough, but wasn't pursued. He was using a copper/sulfur oxide on aluminum, sort of like a copper oxide rectifier, a known device at the time. That oxide apparently has some semiconducting properties. But lacking any theory, there was no clear way to make it better.
There was nothing to indicate that highly refined silicon (something nobody used back then) was the material to use instead of aluminum. Germanium diodes were known to rectify, but nobody understood why. Not until Bell Labs tried to figure out how germanium diodes worked and some semiconductor device physics was discovered was there forward progress.
Once the underlying mechanism started to be understood, progress was fast.
They did it last year and also found some thrust signals, although they were very careful to leave the door open to measurement errors, like Lorentz forces.
The Eagleworks lab will also attempt a vacuum test in a better-equipped NASA lab.
I tend to think it's all vapourware, since it goes against so many well-founded rules that it's more likely just pathological science. The theory in the OP is also likely to be rubbish. The author of the paper has been challenged many times and failed to provide sound answers. Furthermore, he supports a telepathy "researcher", which is also a red flag.
Well, it's not just a friend, it's something he supports.
There's something to be said for Bayesian updating, and in this case I'm willing to say that people who have the right of it scientifically are less likely to support telepathy.
I haven't read any previous studies of telepathy, so I have no informed opinion on it. Which seems to be a minority position: the majority position seems to be to assume that not only is telepathy "not real" but also anyone studying it is clearly a "not real" scientist.
Is there any data to support your theory that people who are willing to study telepathy are less mathematically able than those who are not?
Because it seems to me that people who are willing to rule out an entire area of study for social reasons are less able scientists than those who are willing to go where the data leads regardless of reputational consequences.
This EM drive is a case in point. Studies seem to suggest there is a real thing happening here, but scientists are not willing to study it because of the reputational risk. Which is clearly bad science.
If you're interested, then go ahead and read about the history of telepathy research so you can have a (more) informed opinion about it. Numerous well-performed and controlled experiments have not found any evidence for it, and in this case, absence of evidence is reasonably strong evidence of absence. Since there are so many other interesting (real) things in the universe to study, I conclude that people who continue to study it are doing so for reasons unrelated to the merits. It doesn't make someone "not a real scientist", but I sure would look askance at their critical thinking skills.
> This EM drive is a case in point. Studies seem to suggest there is a real thing happening here ...
That's not what I see. Every time the experiment is done with more precision and less noise, the effect diminishes again to near the limits of experimental error. That's exactly what you would see if the force was due to thermal (or other electromagnetic) effects rather than novel engineering principles. Nothing wrong with studying it, though. But I'd be willing to bet real money it all comes to nothing.
Edit:
> Is there any data to support your theory that people who are willing to study telepathy are less mathematically able than those who are not?
No, of course there is no such data. And mathematical ability is only tangentially related to critical thinking in the real world.
You keep saying "go where the data leads", but the data is leading in circles so far. What data would lead you to study telepathy? There's data, and then there's wishful thinking.
I wasn't suggesting that the data leads to telepathy. It may, it may not. I don't know. I haven't studied it. But dismissing anyone that studies it as a crank, and by implication anyone that supports them in that study as also a crank, is bad for all of us.
I'm less concerned about telepathy per se than with "science as politics". The social sciences are seeing some serious problems because there are things that scientists cannot say or talk about because of their political implications. Climate science is hampered by the problem of having to be constantly aware of the political ramifications of their research: are they "helping the deniers" if they publish a result?
If we start bringing this bullshit into hard science then we'll break that too. This is maths and physics. There's the hypothesis, there's the experiment. Do the experiment and validate the hypothesis. If the experiment isn't working, devise a better one. If the new theory leads to predictions then test those predictions. It's all objective, it can be tested, and those tests can be reproduced.
It's like free speech. I may not like the things you say, but I will absolutely support your right to say them. If you find telepathy interesting enough to study, then that's awesome. If you think you've got a result, and I'm interested enough, then I'll look at your methods and your data and work out if I agree with you or not. But dismissing you as a crank just because you're studying it is anti-scientific.
And just like free speech is being threatened by politics and there are now things that people must not say, science is becoming threatened and there are things that must not be published. This is bad imho.
I am sorry, but the nervous system is electric. It emits electromagnetic signals. The nervous system is also a perfect receiver for these signals. If you train your body to emit/recognize these signals, you will be able to communicate telepathically. Your "hardware" (bioware?) allows you to do that. Some sharks use that skill to locate fish.
And it's been done[1], successfully, but the signals need to be amplified before the receiving brain can detect them. Parapsychology research was answering a pressing need in the 1950s, and so it was pursued and funded. Now that we have ubiquitous, resilient, encrypted wireless broadband communication on a worldwide scale, and we're pretty sure the enemy forces cannot subdue our soldiers using the power of concentration, the need is much less pressing. Recent advances in brain-activity imaging with fMRI and the impetus to create computer-brain interfaces will lead to any advances in telepathy that can be had, and maybe uncover new questions to study.
Out of all the futures that seemed possible at the beginning of the Cold War, ours is no less strange and awesome than one where telepathy were a mundane experience. Being skeptical about telepathy in the style of the great Sci Fi works of the latter half of the 20th century reflects the understanding that some ideas become reality, and some just peter out.
The body can be trained to increase the strength of the signal and make it directional.
I tried to train myself to increase the strength of my gaze a few years ago, and girls described their feeling during that period as "burning skin". When I looked at them with concentration, they were able to feel it at a distance of up to a hundred meters. I also saw a video of a similar experiment recently, but I cannot find it right now.
So, IMHO, telepathy is physically possible at some short distance, but requires years of training by the participants. Someone needs to train himself to be a strong signal emitter, which could be confirmed by measuring the body's EM radiation in the 1.3-30 Hz frequency interval. Then somebody else needs to learn how to hear that weak EM signal, like the blind do.
If what you're saying were true (even requiring decades of training), it would herald a revolution in our understanding of human physiology. Let's see you reproduce these experiments in a lab with a skeptical researcher.
Where can I get funding to research short-range telepathy and other mumbo-jumbo science? :-) I was very skeptical myself just a few years ago.
I have had that feeling ("burning skin") only two times in my whole life, so, IMHO, it is very rare. However, a woman told me that it is not so rare, and that the staring of some "hungry" men feels similar. IMHO, it is a sort of "I love you", but for silent animals.
> Where I can get funding to research short-range telepathy
DARPA[0].
This kind of research really has been done to death, and much of the groundbreaking research has been declassified by now. For example the CIA mind control research[1].
It is not telepathy, it is something else. But yes, I trained myself by staring at girls from behind and noticing how they reacted, and what I felt when they reacted.
This is not a valid experimental procedure, of course. I saw a video of an experiment that was done properly, but I cannot find it.
It's like hearing. I can hear, but I cannot explain how. My English vocabulary is not enough to describe it in detail (I am from Ukraine; I learned English as an adult).
Before that, I had used various techniques to better control my body and feelings for years, so I was able to notice subtle differences and gradations in my feelings. I also know (poorly) body language (because I am an amateur ballroom dancer), so I can notice subtle changes in the behavior of others.
I feel warmth on my face when I do that, like the warmth after a warmup, and not like the warmth when I heat a part of my body by concentrating on it.
It is much easier to stare at somebody if I have feelings toward her/him (e.g. when I like or dislike her/him).
It works without eye contact: from the target's back, or when the target is sleeping.
Targets always know the direction from which I stared at them.
When the effect is strong, it is easy to notice: it feels like something is happening on the skin ("burning") but without pain of any kind, i.e. the nervous system sends strong signals about something ephemeral, but nothing happens in reality. When the effect is weak, it feels like warmth, so it is easy to confuse it with self-heating from concentration. So, to test the effect properly, you must not tell the targets that you are testing them, otherwise they will concentrate on their bodies and will feel warmth.
IMHO, it is a kind of "I like you / I dislike you" sign language used by animals and then forgotten by humans.
I have had a similar feeling myself, but only two times:
- on a train, from a girl: after about an hour of staring at her (experimenting), I got exactly the same response back. :-)
- from a soldier, when we talked about how to free Crimea of Russian soldiers. :-)
That is all.
I stopped doing that because it was hard to stop looking without staring. Random people started to like me for no reason, remembering me and my name for years, etc. I dislike that.
PS.
I saw a film where I was able to track which girl the operator/director was looking at while filming, because of the girls' reactions, so I am 100% sure that I am not alone. :-)
> Because we should only research orthodox, approved subjects, and not go where the data leads us?
Data is crappy at the moment, but I agree, it's at least interesting in the case of the EmDrive that people are measuring something.
--
Actually yes, if somebody follows crank science my trust factor goes down (to the Mariana Trench). But that's me; if you see no problem with it, just study his paper. He is also a big opponent of dark matter. But hey, I'm just a programmer, what do I know about theoretical physics.
Link? According to the article, "And last year, NASA conducted its own tests in a vacuum to rule out movement of air as the origin of the force." So either the story is wrong or you are, but both can't be right, right?
The article is very sweeping. You're getting only the pro side, not any of the critical view.
People are still heavily debating the experiments. It has not been empirically demonstrated to a standard sufficient to call the thrust real. That's why only a very small community is working on this. "NASA" in this context means EagleWorks is letting a couple people spend a bit of spare time and resources on it as a speculative project.
Mike McCulloch's work is interesting, but it's quite far from any mainstream acceptance or testing, as he himself would admit. It's mostly independent of the EmDrive stuff, but the enthusiast community has latched onto it as a way to escape the unpleasant reality that established physics says the EmDrive is a perpetual motion machine. I'm glad EmDrive is giving his work a boost because I think it should be tested, but there should be much better ways of testing Modified inertia by a Hubble-scale Casimir effect than the EmDrive (he's got a couple blog posts about it if you're curious).
It's arguably possible that we are so unsure of the effect because scientists are "cautiously creeping" so as not to hurt their future reputations.
There was anomalous scaling in the observed effects in one Chinese research team's results, but it has not been repeated, because a full reproduction pushing past their power levels would cost some decent money.
Seems stupid that the (possibly) most groundbreaking advance in propulsion ever made can't get a million or two when we collectively gamble more than that each day at casinos, lotteries, and horse races.
We could quickly settle the question of whether these anomalous results come from bad experiment design with a bigger experiment that would make such flaws more evident. Yet we waste time instead of money.
>Seems stupid that the (possibly) most groundbreaking advance in propulsion ever made can't get a million or two when we collectively gamble more than that each day at casinos, lotteries, and horse races.
Perhaps they need to set up a crowdfunding campaign to get those people betting on their experiment, gambling on a science experiment rather than a game of chance.
Somebody built this complicated device without already having any understanding of how it supposedly works, and you think it might work anyway? Talk about winning the lottery.
What's stupid is making investments which are known to have an expected return less than 1:1. Playing the lottery isn't stupid because we don't understand how it works; playing the lottery is stupid because we do understand how it works, we know what the expected return is, and we know that it's a worse investment than just stuffing cash under a mattress. This is entirely uncontroversial, and lotteries are run by profit-making entities (either private firms or as a "tax on the ignorant" by governments) whose entire viability relies on this fact.
In contrast, things like the EmDrive are high-risk high-return investments. Their expected return is more difficult to estimate than a lottery's, since we'd need to estimate both the probability of it working and the expected return if it did work. However, whilst an idea like the EmDrive may be controversial, the idea of spending a small proportion of investment on ideas like the EmDrive isn't controversial. There may be arguments over how much counts as "small", which ideas should be prioritised, etc. but this just goes back to the uncertainty of estimating the expected return. It's entirely uncontroversial to say that, if it works, the EmDrive would be an incredibly lucrative technology; it's also uncontroversial to say that it's unlikely to work. The tricky part is working out which term dominates the expectation: does the big thing (the potential return) multiplied by the small thing (the probability) result in a big thing or a small thing (the expected return)?
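The contrast can be put in numbers. A rough sketch (all figures below are hypothetical illustrations, not actual odds or budgets): a lottery has a known, reliably negative expectation, while a speculative research bet has an expectation dominated by whichever term (tiny probability or huge payout) you believe more.

```python
# Expected-return sketch: a lottery with known odds vs. a speculative
# research bet. All numbers are hypothetical illustrations.

def expected_return(p_win: float, payout: float, stake: float) -> float:
    """Expected profit per unit stake."""
    return (p_win * payout - stake) / stake

# A typical lottery: odds and payout are known, EV is reliably negative.
lottery = expected_return(p_win=1 / 14_000_000, payout=5_000_000, stake=1)

# A speculative research bet: the payout if it works is enormous, but the
# probability must be *estimated*, and the sign of the expectation flips
# depending on which estimate you trust.
optimist = expected_return(p_win=1e-4, payout=1e12, stake=2e6)
pessimist = expected_return(p_win=1e-9, payout=1e12, stake=2e6)

print(f"lottery EV per unit stake: {lottery:+.2f}")
print(f"research bet (p=1e-4):     {optimist:+.1f}")
print(f"research bet (p=1e-9):     {pessimist:+.2f}")
```

The lottery comes out negative no matter what; the research bet swings from strongly positive to essentially a total loss as the probability estimate moves, which is exactly the "which term dominates" question.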
I meant "high risk" in the sense that the return has a high variance, with all of it being concentrated in a thin sliver of the probability; apologies if I've misused a technical term.
> The problem here is that you could make this same argument about almost anything, a la Pascal's Wager.
I'd call it a consequence of expected return being a widely-applicable calculation, rather than a "problem" per se. Even if we knew the expected returns, we'd still need a decision procedure to perform the allocation.
My point is that it's uncontroversial to avoid putting all funding into the most promising project (e.g. other Physics research doesn't wait until we're "finished" with the LHC), so there's certainly scope for allocating a small budget to more "fringe" research like the EmDrive. I'm not in charge of research budgets, but as a simplified argument we might imagine allocating funding on an exponential scale, based on expected return and risk: the most promising projects compete for a chunk of half the funds, the "second tier" projects for a quarter, and so on. Projects with lower impact are lower tier, projects with lower probability of success are lower tier. We stop once we've rounded-up an allocation to the smallest unit of funding, hence avoiding Pascal's Wager.
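The exponential-tier idea above can be sketched directly. This is only a toy model of the allocation rule I described (the tier names are made up), but it shows how the halving shares plus a smallest funding unit give a natural Pascal's Wager cut-off.

```python
# Sketch of the exponential allocation described above: tier 0 projects
# share half the budget, tier 1 a quarter, and so on, stopping once a
# tier's per-project share falls below the smallest unit of funding.

def allocate(budget: float, tiers: list[list[str]], min_unit: float):
    """Return {project: funding} under halving tier shares."""
    grants = {}
    share = budget / 2
    for projects in tiers:
        if not projects:
            continue
        per_project = share / len(projects)
        if per_project < min_unit:
            break  # Pascal's Wager cut-off: too small to fund at all
        for name in projects:
            grants[name] = per_project
        share /= 2
    return grants

tiers = [["LHC upgrades", "gravitational waves"],   # most promising
         ["exotic materials"],                      # second tier
         ["EmDrive replication"]]                   # fringe, small slice
print(allocate(budget=100.0, tiers=tiers, min_unit=1.0))
```

Lower-probability or lower-impact projects land in lower tiers and get exponentially smaller slices, but they get something, until the slice rounds to nothing.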
Also note that there are only a finite number of options to choose from, because there are only a finite number of submitted research proposals.
Restricting research capacity and dismissing empirical evidence because it doesn't jibe with "established science" is exactly the opposite of what science is all about.
There have always been the recalcitrant "that's how it's always been done around here" types that often push back even when confronted with evidence. Take, for example, Ignaz Semmelweis[1] - he had the evidence that it works but he couldn't explain how washing hands could decrease mortality in hospitals. Few people took him seriously, he suffered a nervous breakdown and died. Afterwards it took Pasteur and Lister to get people to actually accept the theory.
I do understand that it is a recurring theme. This belief-driven approach to science can itself be shown to be a bad idea, given evidence such as yours.
While every chance should be taken to further explore our current assumptions (such as with the LHC), we shouldn't be neglecting low-hanging fruit that challenges our ideas (such as the EMDrive). If as much as 1% of the money being spent on the LHC went to "anomalous science", we'd likely have a conclusive answer on whether the EMDrive works. Science is a tool that disproves, and we've been hacking it into a proving tool for far too long. It's time to go back to basics and figure out more about what we don't know.
> McCulloch’s theory makes two testable predictions.
Think about the cost of either of those experiments: vanishingly small compared to other science being performed today. If science wants to see the EMDrive go away, and is certain that it doesn't work, a comparatively small grant is all it takes.
A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it. -- Max Planck
Science has always had devout, religious elements; don't think otherwise. Look at the pushback the Theory of Evolution received from respected scientists and how much hand-waving dismissal there was of Relativity.
It has been argued that the only way science progresses is when the obstacles to science die of old age.
This seems like a misconception to me. Scientists have always tried to apply their energies toward the most promising lines of research. The supply of research capacity is finite.
> established physics says the EmDrive is a perpetual motion machine.
In what sense is it a perpetual motion machine? As far as I understand it violates the conservation of momentum, not the conservation of energy. How would a theoretical perpetual motion machine based on this effect work?
It only looks like conservation of momentum is violated. From the article:
>The cone allows Unruh radiation of a certain size at the large end but only a smaller wavelength at the other end. So the inertia of photons inside the cavity must change as they bounce back and forth. And to conserve momentum, this must generate a thrust.
The new idea introduced is the quantization of inertia at small accelerations. As far as I understand it, from one end of the cone to the other there isn't a smooth change of inertia. This depends on the idea of Unruh radiation; the reason the article gives for quantization of inertia is that as accelerations get very small, the Unruh radiation wavelength becomes larger than the observable universe, forcing Unruh radiation to take whole-valued wavelengths (quantization). Again, as far as I understand it, the inertia of photons on one end of the cone takes a different quantized value than the photons on the other end. So the thrust isn't a violation of conservation of momentum; the thrust is necessary to not violate conservation of momentum.
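A back-of-envelope sketch of where those scales sit, using the lambda ~ 8c^2/a approximation for the peak Unruh wavelength that McCulloch quotes (the numeric values here are only order-of-magnitude illustrations):

```python
# Back-of-envelope for the quantization argument: the characteristic
# (peak) Unruh wavelength scales as lambda ~ 8*c^2/a. When it exceeds
# the Hubble diameter, the claim is that only whole-valued wavelengths
# fit, quantizing inertia.

C = 2.998e8           # speed of light, m/s
HUBBLE_DIAM = 8.8e26  # approximate observable-universe diameter, m

def unruh_wavelength(a: float) -> float:
    """Characteristic Unruh wavelength (m) for proper acceleration a (m/s^2)."""
    return 8 * C**2 / a

# For everyday accelerations the wavelength is tiny compared to the cosmos.
print(f"a = 9.8 m/s^2 -> lambda = {unruh_wavelength(9.8):.2e} m")

# The acceleration below which the wavelength no longer fits:
a_min = 8 * C**2 / HUBBLE_DIAM
print(f"lambda exceeds the Hubble diameter below a ~ {a_min:.1e} m/s^2")
```

The threshold comes out around 1e-9 m/s^2, which is why the effect is only supposed to matter for extremely small accelerations (and why the article ties it to cosmological scales).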
No, the new idea is a massive photon. There are strong limits on that from astronomical observations, and a stronger one from Special Relativity when cast in a modern isometry-group form. In that form, the Poincare Group is the isometry group of the unremovable (flat-space) background, and the Poincare Group has exactly one free parameter, "c", which corresponds to the speed of a massless particle. In this form, we assume that light is massless, and look for experimental evidence supporting that assumption (there is a fair amount; we have lab studies showing that m_photon cannot be more than about 10^-17 eV/c^2, and there are even more stringent limits from observational cosmology).
A nonzero photon mass makes a mess of particle physics at high energies. The Standard Model falls apart due to loss of gauge invariance, and QED becomes really obviously non-renormalizable near limits we have already tested. So this would be a big thing.
(Note that if you bite the bullet and take a nonzero photon mass (as McCullogh says in his paper at page 3: "Normally, of course, photons are not supposed to have inertial mass in this way, but here this is assumed") you can probably get a non-constant photon speed too, along the lines of neutrino oscillation. But you can always write down non-physical theories that conflict with frequently and precisely tested areas of physics...)
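To see how tight that 10^-17 eV/c^2 bound already is, here is a quick illustrative calculation (standard massive-particle dispersion, my own numbers): for a massive photon, E^2 = (pc)^2 + (mc^2)^2 gives a group velocity below c by roughly (mc^2)^2 / (2E^2) fractionally.

```python
# How constraining is an m_photon bound of ~1e-17 eV/c^2? Compute the
# fractional speed deficit (c - v)/c for a photon of a given frequency.
# Illustrative numbers only.

H_EV = 4.1357e-15  # Planck constant, eV*s

def fractional_speed_deficit(m_ev: float, freq_hz: float) -> float:
    """(c - v)/c for photon mass m_ev (eV/c^2) at frequency freq_hz."""
    energy_ev = H_EV * freq_hz
    return 0.5 * (m_ev / energy_ev) ** 2

# Even for a low-energy 1 GHz microwave photon at the lab upper-bound mass:
deficit = fractional_speed_deficit(m_ev=1e-17, freq_hz=1e9)
print(f"(c - v)/c ~ {deficit:.1e}")
```

The deficit comes out around 10^-24, i.e. utterly unmeasurable even at microwave energies, which is part of why "just assume inertial-mass photons" sits so badly with the existing constraints.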
None of the above departs from the symmetries of Special Relativity, and one subgroup of those (invariance under spatial translation) is what implies the (local) conservation of (linear) momentum per Noether's (first) theorem.
So there's a really big looming question about why the Standard Model (which has the Poincare Group baked into it) works as well as it does everywhere but in an EmDrive or a single space probe, and resorting to special shapes of objects on scales much larger than that of atoms further conflicts with Poincare invariance.
Finally, Unruh radiation is a difference in particle count and particle energy measured by differently accelerated observers. The cosmological horizon, when it formed, produced an acceleration between observers then and observers in the future. Unruh radiation from that is pretty uncontroversial. However the problem is that the acceleration is pretty small, so the temperature will be much lower than that of the CMB. Also, you'd expect anisotropies based on Earth's (and its surrounding Local Group's) peculiar motions relative to the horizon. Why isn't there a dipole anisotropy similar to the one we see in the CMB, and if there is, by how much does it offset the inertial argument in McCulloch's paper?
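A rough magnitude check for that "much lower than the CMB" claim, taking a ~ c*H0 as the acceleration scale associated with the cosmological horizon (my choice of scale; the exact value doesn't matter for the order of magnitude):

```python
# T_Unruh = hbar * a / (2 * pi * c * k_B), evaluated at the acceleration
# scale a ~ c * H0 associated with the cosmological horizon.

import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
K_B = 1.3807e-23   # Boltzmann constant, J/K
H0 = 2.2e-18       # Hubble constant, 1/s (~68 km/s/Mpc)

a_cosmo = C * H0                                    # ~1e-9 m/s^2
t_unruh = HBAR * a_cosmo / (2 * math.pi * C * K_B)  # Unruh temperature
print(f"a ~ {a_cosmo:.1e} m/s^2, T_Unruh ~ {t_unruh:.1e} K (CMB: 2.7 K)")
```

The Unruh temperature at that acceleration is tens of orders of magnitude below the 2.7 K CMB, which is why any such signal (and any anisotropy in it) would be swamped.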
Constant thrust means constant acceleration, so speed goes up indefinitely with a constant supply of energy. But kinetic energy is proportional to the speed squared. So you get more energy than you put in.
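The over-unity argument in numbers. With constant input power P and constant thrust F, kinetic power F*v exceeds P once v > P/F, after which you are extracting more energy than you supply. The ~1 mN/kW figure below is a commonly quoted EmDrive-class claim, used here purely as an illustration.

```python
# Breakeven speed for a reactionless constant-thrust drive: beyond
# v = P/F, kinetic power F*v exceeds the constant input power P.

def breakeven_speed(power_w: float, thrust_n: float) -> float:
    """Speed (m/s) at which kinetic power F*v equals input power P."""
    return power_w / thrust_n

P = 1000.0  # 1 kW in
F = 1e-3    # 1 mN out (claimed thrust-per-power ratio, illustrative)
v_star = breakeven_speed(P, F)
print(f"over-unity beyond v ~ {v_star:.0e} m/s")
```

That breakeven speed (~10^6 m/s) is well below c, so nothing in relativity rescues the device: a genuinely reactionless constant thruster is a free-energy machine.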
Conservation laws aren't fundamental, but are implications of local differentiable physical symmetries (this is the primary result of Noether's (first) theorem).
The local symmetries in question are represented by the Poincare Group, which is the isometry group of Minkowski spacetime, which in turn is the unremovable background of Special Relativity. (The Lorentz Group is a subgroup of the Poincare Group).
This is another way of saying that in a 3+1 flat spacetime, well-designed local probes of fundamental physics will not depend on time translation (i.e., experimenting today vs experimenting tomorrow), on spatial translation (e.g., the same experimental results here vs there), on spatial orientation (i.e., rotation about the 3 spacelike axes; so you get the same result when you turn the system under test 90 degrees to the left), or under Lorentz boosts, which are basically instantaneous changes in constant uniform motion along any of the axes. Additionally, small-scale (small compared to several times the size and mass-energy of the whole solar system) natural phenomena are essentially always "well-designed local probes of fundamental physics".
Conservation of linear momentum arises from invariance under spacelike translation; a violation of that conservation makes it very difficult to maintain spatial translation symmetry, and in particular flies in the face of the many direct tests of physical systems at different places on our planet and in our solar system, for example. So that's a big deal that needs explaining, and in particular the explanation should preserve the known and reproducible invariance under those translations as well as allowing the EmDrive's alleged violation.
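The symmetry-to-conservation link can be seen in a toy simulation (a deliberately minimal sketch, not anything specific to the EmDrive): a two-body system whose potential depends only on the separation, i.e. is invariant under spatial translation, conserves total momentum even though each body's own momentum changes constantly.

```python
# Two bodies on a line coupled by a spring: V = 0.5*k*(x1 - x0)^2
# depends only on the separation (translation invariant), so total
# momentum is conserved to floating-point accuracy.

def simulate(steps: int = 10_000, dt: float = 1e-3) -> float:
    x = [0.0, 1.0]   # positions
    v = [0.5, -0.2]  # velocities
    m = [1.0, 3.0]   # masses
    k = 2.0          # spring constant
    for _ in range(steps):
        f = k * (x[1] - x[0])  # force on body 0; body 1 gets -f
        # symplectic Euler step
        v[0] += dt * f / m[0]
        v[1] -= dt * f / m[1]
        x[0] += dt * v[0]
        x[1] += dt * v[1]
    return m[0] * v[0] + m[1] * v[1]

p_total = simulate()
print(f"total momentum after run: {p_total:.12f}")  # stays at the initial -0.1
```

Break the translation invariance (say, add an external potential pinned to a fixed point in space) and the conservation disappears, which is Noether's theorem in miniature: a claimed momentum violation is a claimed violation of spatial translation symmetry.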
The perpetual motion machine argument is slightly different because it is rooted in a claim about how the EmDrive behaves when in non-constant motion; in the current version of the theory paper "V 9.4", at equations 14-16, there is an implied violation of the Einstein Equivalence Principle. The equation is a bit odd, and you can easily read it to say that there is a power<>thrust relationship that varies with acceleration, and raise the Einstein elevator objection: if you put the EmDrive into an upwards-accelerating box, or leave it on the ground, do you still take this power<>thrust relationship seriously? If so, you get "free power" (although that's a bit subtle). If not, then you have to explain a violation of the Equivalence Principle, which is something that has also been very well tested and has so far applied without fail.
You could think of it with a somewhat concrete example: to hover at some fixed height above the Earth's surface, the EmDrive would have to produce thrust equal to the force you would exert to hold it at that height with your hands; since you are near the surface, that should be about "g" expressed in N/kg (and indeed Eq 16 tells you how much electrical power the EmDrive will require). So far so good. But Eq 16 can be read to say that when you place the EmDrive on the ground, it will produce electrical power proportional to "g".
I think this is pretty clearly just a mistake rather than a serious claim. Unfortunately eq 16 features prominently in the FAQ and all the "marketing" material about the EmDrive. (Even more unfortunately it's hard to see how to fix the equation without making the drive obviously inoperable).
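Putting that hover example in numbers (assumptions: a 1 kg device and the ~1 mN/kW thrust-per-power figure often quoted for EmDrive-class claims, used purely as an illustration):

```python
# The hover thought-experiment in numbers.

G = 9.81                # surface gravity, m/s^2 (i.e. N/kg)
MASS = 1.0              # kg, hypothetical device
THRUST_PER_WATT = 1e-6  # 1 mN per kW = 1e-6 N/W (claimed ratio, illustrative)

hover_thrust = MASS * G                       # thrust needed to cancel weight
hover_power = hover_thrust / THRUST_PER_WATT  # electrical power required
print(f"thrust to hover: {hover_thrust:.2f} N")
print(f"power to hover:  {hover_power / 1e6:.1f} MW")
# Read the other way, the eq. 16 power<>thrust relation would have the same
# device sitting on the ground (still "accelerating" at g in the
# equivalence-principle sense) generate power proportional to g, which is
# the "free power" objection.
```

Nearly 10 MW to hover a single kilogram at the claimed ratio: either the relation is symmetric and a grounded device is a megawatt generator, or the equation only applies sometimes, which is the Equivalence Principle problem.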
Thanks. For the supposed violations - aren't we still then only discussing the device working within known physics? There is no violation if the device were to slowly lose mass due to some unknown process (making it a form of Ion thruster), or if it accelerates some yet unknown weakly interacting massive particle in the opposite direction of the thrust, or if it somehow only conserves invariants for a "larger" system, potentially the entire universe?
Given an exotic enough explanation (negative energies/faster than light particles/spacetime warping/..) it feels like you can cheat your way around almost any invariant? Occam is left weeping in a corner of course.
Sure, if it's not actually reactionless, most objections fall away.
The problem with reaching for exotic particles or the like is that we don't see them in searches which involve tens of orders of magnitude more power, and we certainly don't see them in, for example, electric toasters or radar sets. So what would need explaining is what's peculiar about the EmDrive with respect to (extremely) unusual particle interactions, and there is [a] nothing obvious and [b] nothing at all in the "theory" paper.
Likewise, reaching for non-local physics answers raises similar questions (why is EmDrive doing non-local whatever but my kettle isn't; or alternatively, why is EmDrive coupling with whatever much much much more strongly than my conventional oven is).
I have another pair of things you can add to the list: [a] EmDrive somehow violates causality in a way that other similar arrangements of mass-energy-momentum do not, and [b] EmDrive somehow escapes logic in a way that other similar systems under study do not. These are neither more far-fetched nor more unpalatable than abandoning local physics (causality and logic preserved, but hidden non-local variables proliferate) or abandoning the Standard Model as an accurate low-energy theory of matter (causality, logic, and locality preserved, but now what happens in the molecules and atoms of cars, computer chips, and light bulbs? We can no longer be quite so sure!).
There is A LOT of known physics and almost exactly zero examples of violations of the known invariants at low particle energies. You are re-testing relevant parts of all that in reading this comment.
Paywalled on that link, though, so I can't see the details, which tend to be everything; the last time a vacuum chamber was involved, it wasn't actually evacuated.
Yes, and that report indicates a null result. But they also indicate that the driving frequency was way mistuned for the cavity. They were driving it with a magnetron from a microwave oven, which output a frequency far from optimal. This indicates a low-budget operation, and not one in a lab with lots of microwave gear.
NASA, at least, has a microwave source with the right frequency. But they report "researchers were now working on a new integrated analytical tool to help separate EmDrive thrust pulse waveform contributions from the thermal expansion interference". That indicates this thing is still way too close to the noise threshold.
Back when cold fusion was taken seriously, I went to a Stanford talk where a physicist described their attempts to replicate the experiment. At first, he said, they had the apparatus surrounded with radiation detectors and alarms, in case it produced a dangerous burst of neutrons. After a while, they realized that wasn't going to happen. They discovered that the effect being measured was about twice background radiation. Then they discovered that people moving around the apparatus affected neutron readings by more than the measured amount. (Humans are mostly water, and thus reflect neutrons.) Finally they moved the experiment to a "neutron cube" built from lead bricks, where the background radiation was very low. The measured neutron readings went way down. That's what it's like when a phenomenon is near the noise threshold.
The German team did it for a BBC documentary on "junk science" - they were looking for a null result, so while I take claims that the tech works with a pinch of salt, the same applies here.
My understanding was that the report didn't show a null result:
The device produced positive thrusts in the positive direction and negative thrusts in the negative direction of about 20 micronewtons in a hard vacuum, consistent with the low Q factor.
Besides being tested horizontally in both directions on the torsion pendulum, the cavity was also set upwards as a "null" configuration. However, this vertical test intended to be the experimental control showed an anomalous thrust of hundreds of micronewtons that could be caused by a magnetic interaction with the power feeding lines going to and from liquid metal contacts in the setup.
he just means there are all kinds of piddly effects that will throw it off. If you switch the thing on and you get a huge effect size with high signal/noise, you don't need to worry about stuff like who is standing near the tank, other than to worry that they're being irradiated with fast neutrons.
Measurement error is a fine, and even necessary hypothesis in attempting to explain the observed effect. Multiple, independent replications tend to cast doubt on the ultimate viability of that hypothesis, however. How likely is it that all of these people at all of these labs are making the same measurement error?
And then, on top of that, when the observed effect comports with a theory that also predicts another, well-established observed effect?
It's been said that the most important phrase you will ever hear a scientist utter isn't, "I've found it!", but rather, "Well, that's strange."
This is very much in the, "Well, that's strange" class of things. Either way, we're going to learn something about how the universe works, or about our ability to measure it, or both. This should be celebrated, not dismissed as mere "measurement error."
People noticed an effect every time the Dean Drive was turned on... but it is just a stiction engine and obviously won't work in space. It could be this drive is actually generating magnetic forces or moving air around and thus "creating thrust" by pushing off its surroundings, which won't work in a vacuum.
You'll note that I didn't actually assert the existence of the effect. I observed that a theory which predicts the effect the EmDrive appears to demonstrate, also predicts the fly-by anomaly, and offers a unified explanation for both.
Yes, it requires more confirmation. Much more. I'm not a scientist, but I think that, taken together, it's interesting enough — "Well, that's strange" enough — to warrant further investigation, instead of being all, "Meh. Measurement error."
It's worth an investigation because it can be done cheaply and there might be something there.
We've also investigated ESP and quite a few other weird things.
Unruh radiation is weird; under some interpretations it can be used to reduce the inertial mass of objects. The thing I take from this is that I hope the EM drive could prove it rather than the other way around, because then we might say we've discovered the "Mass Effect" ;)
It's also quite possible that the theory was derived to fit these observations. This theory has yet to predict anything new that has been proven. Until it does it only amounts to a possible way of connecting two unexplained and possibly flawed observations.
> The obvious explanation is that they have an error in the measurement.
Then why have 6 different teams verified it? Not saying the EmDrive is the real deal, but either the error is hard enough to catch that 6 teams missed it, or there's something else going on. Saying all these teams are wrong is, to me, far from "obvious".
Do you have a link to the 6 experiments? I'd like to compare the results. This link has a list of 3 or 4 experiments before December 2014, that is more than 1 year ago. http://forum.nasaspaceflight.com/index.php?topic=36313.msg13... I'd like to compare that with the new results.
With the "current" physics laws, the theoretical maximum of ForcePerPowerInput is 1/c = 0.0033 mN/kW. In that table, the ForcePerPowerInput varies from 300000x to 3x. That's a lot of variation, not an exact value that coincides with a theoretical prediction. It's a pity that the list is not ordered by date.
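As a quick back-of-the-envelope check (my own sketch, not from the thread), the 1/c figure is the photon-rocket bound: a device that converts power P entirely into a beam of light gets thrust F = P/c.

```python
# Sketch of the classical bound quoted above: a perfect photon rocket
# converts power P into thrust F = P / c, so ForcePerPowerInput = 1/c.
c = 299_792_458.0  # speed of light in m/s

thrust_per_watt = 1.0 / c                  # newtons per watt
thrust_mN_per_kW = thrust_per_watt * 1e6   # convert N/W -> mN/kW

print(f"{thrust_mN_per_kW:.4f} mN/kW")  # prints "0.0033 mN/kW"
```

Any claimed thrust far above that per unit of input power would require reaction mass, or new physics.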
If you order the experiments by date and all of them have roughly the same result, they probably are measuring a well-known effect where all the variables are well understood. Like measuring g with a pendulum.
If you order the experiments by date and the value increases a lot, it's perhaps a new phenomenon that is still not well understood, and they are tweaking the materials to get more efficiency. For example, consider measuring the critical temperature of a high-temperature superconductor. If you pick a fixed simple superconductor, you expect to get approximately the same result in any laboratory, but small changes in the fabrication process can increase or decrease the temperature. And any time someone discovers a new superconductor material or method of production, you will get a new record, so the world record will increase and the other laboratories will try to reproduce and improve on it.
If you order the experiments by date and the value decreases a lot, it's possibly a sign that they are fixing some experimental details and reducing the experimental errors, and they get a smaller result because the correct value is 0.
Yeah imagine if someone had made similarly exciting claims about an unexpected and so-far impossible-sounding wrinkle in how gcc works. A bunch of people tested gcc and got all different results. Nobody cites any of the results or discusses any detail.
Everyone would be asking to see the code and the output for themselves before they got excited.
Wikipedia says that the experimental tests disagree on even the sign of the measured force:
"An article published by Shawyer in Acta Astronautica summarises the existing tests on the EmDrive. Of seven tests, four produced a measured force in the intended direction, and three produced thrust in the opposite direction. Furthermore, in one of the tests, thrust could be produced in either direction by varying the spring constants in the measuring apparatus. Shawyer argues that the thrust measured in the opposite direction is the reaction force from the drive, and therefore it is consistent with Newtonian mechanics.[1]"
Would it really be so surprising if similar experiments, all working from the same basic design using similar components under similar conditions, suffered from the same design flaw?
Wasn't there a problem that while there was an anomaly of some sort, different teams still reported significantly different results? That would still mean that n-1 teams were wrong, which could mean that the nth team could have been wrong as well. Or was that some other similar device?
I didn't read through all the papers, but didn't the testing labs measure the weight of the apparatus precisely before and after? If it's any mass being vented off, I assume this would show up, and you can make precise predictions about how much mass would be needed by calculating upper limits for the velocities these particles could be accelerated to.
So your counter argument should be pretty easy to test, and I'd be surprised if this isn't accounted for in these experiments.
In the example of FTL neutrinos there was only one non-repeatable instance; here we have at least three.
Most of the time skepticism is good, but sometimes it can blind you. At the very least this effect needs to be investigated further, because the implications, if it turns out correct, for both practical applications and physical theory could be huge.
Not a good comparison, considering six independent research bodies including NASA have experimentally verified that this does work (citing the OP article)
They have verified that there is an effect that needs explaining. Claiming that 'it works' is just as premature as claiming that it definitely doesn't.
The tl;dr for this is that Woodward proposed (based, AFAICT, on actual science) that rapid internal energy changes cause transient mass fluctuations. His device used a capacitor on a piezoelectric effector so that the effector would only push while the capacitor was charging or discharging, and only pull while the capacitor was _not_ charging or discharging, so producing an asymmetric effect and, therefore, thrust.
There have been a number of attempts to verify this experimentally, all of which have been inconclusive; it turns out that measuring very small forces when your test equipment is vibrating is very, very hard (see the Dean Drive).
My feeble understanding is that the underlying theory is at least plausible; it doesn't seem to violate anything we know about the universe. The last report I've seen of work on this is this (rather good) BoingBoing article from 2014:
One of the things that keeps me intrigued about this is that this is not the first time someone has claimed anomalous thrust from an asymmetrical EM device of some kind.
It could be measurement error combined with a little wishful thinking, or it could be that there's some undiscovered physics there and we're tip-toeing around the edge of the parameter space where it manifests. If that's the case, all these "X effect" systems could be working according to the same principle.
There are other cases in science (e.g. early transistor efforts and superconductors) where tinkerers and engineers hit on effects that were not understood and were ignored for a long time until we had some kind of theory that explained them and much more importantly told us how we might optimize for the effect.
The McCulloch prediction that a dielectric should increase the effect seems easy to test.
There seems to be something that catches the imagination, but I suspect it's closer to that whole class of perpetual motion machines which consist of moveable weights on a wheel, than to a revolution in physics.
There's lots of unbalancing wheels, and none of them work.
I don't think really any scientists in the relevant fields actually think it's an interesting mystery. Lots of people make extraordinary claims about new theories or perpetual motion machines, but those claims frequently get reported on a disproportionately large amount.
This is from the perspective of a former physicist who is grumpy about the endless torrent of "Einstein was wrong!"-type articles. I personally feel like it's risky to allow people to pin their hopes on something that's pretty obviously bunk, but I imagine it does have some benefits. I just don't think it's worth it.
Considering that the 2 rounds of NASA testing are now undergoing peer review it seems that some "...scientists in the relevant fields actually think it's an interesting mystery."
That doesn't state that they think it works or doesn't... but they are investing time to clarify which is which. And that to me qualifies as it being an "interesting mystery".
I am skeptical about the EmDrive (although I would very much like it to be real), and my opinion on the drive or its theoretical underpinnings is not really relevant, given my lack of knowledge about the field. But I don't think that writeup is very good either:
From the IO9 article:
> The experimental setup is so flawed that it’s continuing to produce measurable “thrust” while in null mode when it should do nothing.
From the Wikipedia page on RF resonant cavity thrusters (and corroborated by the citations):
> the 'null test article', was designed without the internal slotting that the Cannae Drive's creator theorised was necessary to produce thrust
> The null test device was not intended to be the experimental control.
The article's author seems to fundamentally misunderstand the purpose of the null test setup. Setting everything else aside, if the null articles did produce thrust, this would disprove the Cannae theory (which requires the slotted configuration), but would say nothing about the efficacy of RF thrusters in general.
Their quotes from various physicists about why the drive is probably nonsense are a lot more compelling.
Pardon my ignorance, but I don't think anyone seriously doubts that the null article produces no thrust. So it's not a question of whether the null article's production of thrust disproves the efficacy of anything. It's a question of whether the experimental setup which measured that thrust is trustworthy.
I'm not saying the experiments were trustworthy or conducted properly. Only that the author of the article misunderstood the experimental setup.
(To be precise, the comment about the null thruster was made by the author in a comment on this article, and by a previous article written by the author, which this one references. It is not in the article itself.)
I realize that, but I'm suggesting that you may have misunderstood the author.
It sounds like you think the author is asserting that the null test article was intended as an experimental control and that its production of thrust is evidence for the null hypothesis.
I read him/her as asserting that the null test article is an experimental apparatus calibration tool, and that the reading of thrust suggests the apparatus is improperly calibrated, so that no results at all can be inferred from the experiment.
You have the right parity but there is a slight semantic difference between that and what I said. Sorry that I wasn't so clear. I'll try again.
There's a difference between "I don't believe X" (which allows for ambivalence) and "I believe not X" (which does not). There's also a difference between "I believe not X" and "I have no doubt of not X".
I was pointing out that there is no dispute or even doubt about the null article's inability to produce thrust (and this would have been a better phrasing). So the question is not "what does the null article's thrust imply about various hypotheses?" It's "what does the apparatus' measurement of thrust from the null article imply about the apparatus?" GP seems to have missed this point.
If you read the papers, the claims are not very well backed by independent experiment. (The claims are 'replicated' meaning lots of people repeat them!) There's little data and what there is is right at the limit of measurement error. Results from different groups conflict wildly.
Tech Review is a PR rag, not a scientific publication.
Ironically, it was Boris Derjaguin's participation in the polywater debacle that caused many to dismiss him when he claimed to have synthesized diamond by chemical vapor deposition, at pressures far below the thermodynamic stability region for diamond.
In fact, he was right and diamond synthesis by CVD is routine today. We probably lost about a decade of progress in diamond CVD because of Derjaguin's having been tarred with polywater, as it were.
I like popular articles about dark energy. About the only thing you learn from them is "Einstein called the cosmological constant his greatest blunder!".
Removing the CC from the normal form of the Einstein Field Equations was what he considered his blunder.
It was there originally, but he realized that he could not have a static universe with a nonzero CC, so he removed it, as Hubble, Lemaitre and Friedmann had not yet demonstrated the universe is non-static. When there was overwhelming evidence of what we now call the Hubble flow, he realized that a small positive cosmological constant produces exactly that, and thus he put it back in.
What you don't usually learn in those articles about dark energy is that in the Friedmann-Lemaître-Robertson-Walker model of the standard cosmology, you have an assortment of matter fields which are characterized by density and pressure.
Matter (in the most general sense of non-gravitational field content) has positive pressure, and some matter can clump (leading to non-uniform densities, and where density is higher, so is pressure; super-dense massive objects have enormous positive internal pressure).
Dark energy is in its simplest form a field with slightly negative pressure, and with constant density (i.e., it does not clump and it does not dilute away with the expansion like the matter fields do). This constant density is the "cosmological constant". Its absolute value is very small compared to the pressure even in slight overdensities of ordinary matter (like in sparse gas and dust clouds), so it's drowned out entirely by the positive pressures in structures like galaxies or stars.
Pressure and density are terms which are in the (Robertson-Walker) metric, and the metric describes the 4-lengths of spacetime intervals. Positive pressure contracts these lengths; negative pressure increases them.
So when fields with nonzero pressure are treated as the principal generators of the metric (i.e., matter and dark energy tell spacetime how to curve), you can call the result the metric expansion (or contraction) of space.
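To make the pressure argument concrete (my own gloss, not from the comment), the second Friedmann (acceleration) equation shows how pressure enters alongside density:

```latex
% Acceleration equation in FLRW cosmology:
% ordinary matter (p >= 0) makes \ddot{a} < 0 (deceleration),
% while any fluid with p < -\rho c^2 / 3 -- such as dark energy,
% with p = -\rho c^2 in the simplest case -- makes \ddot{a} > 0.
\[
  \frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)
\]
```

So a sufficiently negative pressure flips the sign of the right-hand side and accelerates the expansion, which is the sense in which "negative pressure increases" the 4-lengths above.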
> the EmDrive has created a very interesting mystery.
Microwave cavities have been a workhorse technology in many fields for many decades, and everyone has found that existing physics (classical electrodynamics, superconductivity, and a few other things) is entirely sufficient to explain how they work.
There's a very simple explanation for why a small group of people would report an unphysical, novel effect in a well-studied physical system: sloppy experimentation.
> Microwave cavities have been a workhorse technology in many fields for many decades, and everyone has found that existing physics (classical electrodynamics, superconductivity, and a few other things) is entirely sufficient to explain how they work.
Not that I'm disagreeing but you could have said "Newton mechanics has been the workhorse method for centuries and everyone has found that they work" then Einstein came along and dumped the apple cart.
Not saying there is anything to the EmDrive (personally I'm on the side of experimental error) but the correct way to do science imo when you have something you can't explain is to keep at it until you can.
> Not that I'm disagreeing but you could have said "Newton mechanics has been the workhorse method for centuries and everyone has found that they work" then Einstein came along and dumped the apple cart.
You couldn't have said that. Einstein's contributions were solving real problems with Newtonian mechanics, where it was clearly inadequate to explain how things actually worked. The photoelectric effect was known for years before Einstein explained it with the first glimmer of quantum mechanics. The problem of a fixed reference frame for the motion of light was known for a long time before Einstein came up with relativity.
The situations aren't really comparable. Newtonian mechanics had major known flaws that people were trying to reconcile. They weren't tiny effects hiding near the noise.
I totally agree that investigations should continue until an explanation is found, it's just that people seem far too eager to assume that it must be something new, when with what's known so far it's overwhelmingly likely to be experimental error.
> Not that I'm disagreeing but you could have said "Newton mechanics has been the workhorse method for centuries and everyone has found that they work" then Einstein came along and dumped the apple cart.
That's specious. I am talking about a well-studied physical system, not a theoretical framework.
The correct analogy in this case is that someone picked a system that is well-described by Newtonian mechanics—e.g., a pendulum—and then built a small, crappy pendulum, made a crappy measurement on said crappy pendulum, and then claimed the existence of new physics despite the fact that no new physics is required to explain the behavior of much bigger and better pendula that other people already built.
What Einstein did with special relativity was solving well known problems with Newtonian mechanics, which is the exact reason it was a big deal at the time.
The big shock was of course the way he solved it, not that there was a problem to be solved.
Similarly, the big problem with today's theoretical physics is that physicists are conjouring up stuff like dark energy and dark matter to "fit the gap" between theory and observation without any sound basis for their existence. Much like pre-Copernican mathematicians conjoured up complex circular orbits orbiting circular orbits to explain planetary motion.
> physicists are conjouring up stuff like dark energy and dark matter to "fit the gap" between theory and observation without any sound basis for their existence.
Physicists are undertaking a wide range of experimental programs to look for more satisfying explanations of what underlies ΛCDM cosmology (which I guess is the "conjouring" [sic] that you are referring to). Such experiments could also show that ΛCDM cosmology is wrong or needs to be modified. But right now, ΛCDM cosmology does a pretty good job of explaining the data we have so far.
I'm not sure what else you would have physicists do—should they just not talk about the fact that there's a relatively parsimonious framework that explains the large-scale behavior of the universe?
so if somebody thinks anomalous thrust from a microwave cavity is a problem, they should be working on finding a solution. It might be a systematic error, it might be a real effect.
That was the initial response, but the results have since been replicated several times by several organizations, including NASA. "Sloppy experimentation" seems unlikely at this point.
> the results have since been replicated several times by several organizations
The results are random. Sometimes nonzero thrust is observed in a direction opposite to what was expected [0]. Know what that sounds like? A null measurement dominated by statistical and systematic uncertainties.
> including NASA
I'll repeat what I said elsewhere: NASA is so big that that doesn't mean anything. Not everyone affiliated with NASA is a top-notch researcher. The word "NASA" is not automatic proof of good research.
The theory seems to predict reversed thrust in certain conditions; it's at the end of the abstract. I don't know for certain whether they're random or not, but at least this looks interesting.
Being cynical, the sort of people who give enough credence to claims of a non-Newtonian 'space drive' that they rush to test it may not be the sort of people who are also likely to practice a high level of scientific rigor.
>Some have questioned why no companies such as Boeing, Lockheed Martin, or SpaceX have attempted to investigate the device, but regardless of how likely these companies find the results so far, the largest reason is almost surely that the devices are both patented by their inventors.
That shouldn't stop you from building and testing the device though, right? Only that if you wanted to commercialise it you'd have to come to an agreement with the patent holder, or maybe buy the patent.
If it worked, being first to market with an exclusive agreement with the patent holder would be lucrative.
True enough, but can you have hits without misses? Most new stuff will not work, but there is no way to avoid trying without also eliminating progress.
I'm of the opinion that science today is too conservative and gun-shy. Science needs to be more willing to fail, and scientists who pursue cold leads should not have their careers destroyed.
> Since then, something interesting has happened. Various teams around the world have begun to build their own versions of the EmDrive and put them through their paces. And to everyone’s surprise, they’ve begun to reproduce Shawyer’s results. The EmDrive, it seems, really does produce thrust.
That's a misleading statement. I'm passingly familiar with a few of the experiments they're referring to, and none of them both produced significant results and were performed by groups which seemed un-suspect. I'm not aware of any peer reviewed paper on this stuff, and I don't personally know any non-laypeople who believe there is anything actually remarkable happening here.
The sort of problem with relying on peer review is kinda shown in this bit from the Wikipedia article:
Eric W. Davis, a physicist at the Institute for Advanced Studies at Austin, noted "The experiment is quite detailed but no theoretical account for momentum violation is given by Tajmar, which will cause peer reviews and technical journal editors to reject his paper should it be submitted to any of the peer-review physics and aerospace journals."[46]
Basically, merely having a lot of replicated experiments isn't a high enough standard: one has to have a theory of why it works. This somewhat makes sense, but kinda fails miserably for things for which we currently have no mechanism for explanation.
Imagine if empiricists were faced with the nonsense of peer-review a couple hundred years ago before they had any of the knowledge of chemistry or physics to really explain electricity. Hell, imagine how Alessandro Volta would have had trouble publishing his work today when all he had was the empirical evidence of a voltaic pile but no knowledge of the electrochemistry that made it work.
I just want to parrot that what rimunroe is saying is true. The article is presenting empirical demonstration as certain and done. That is far from the case. These experiments are still highly contested and the proponents of "the thrust is real" are still a decided minority.
Fair points. Paul March of Eagleworks has claimed that a peer reviewed paper is under review for publishing currently, but who knows what the rigor of this publisher is. I badly, badly want this to be real, but I've accepted that it likely isn't.
TL;DR - a small, experimental division within NASA called Eagleworks tested the device. They are a very small group with very limited funding tasked with exploring unconventional theories around advanced propulsion. Their results were not published in a peer-reviewed journal, and there is considerable disagreement as to the validity of their experiment. They are continuing to refine their experiments, and the last update provided is that they intend to publish a peer-reviewed paper describing an experiment that successfully breached 100uN of thrust, which was the target needed for JPL and others to attempt to replicate their results.
> 4. A test at 50 W of power during which an interferometer (a modified Michelson device) was used to measure the stretching and compressing of spacetime within the device, which produced initial results that were consistent with an Alcubierre drive fluctuation.
Which is just really cool. The followup sums up my fascination with the whole thing nicely:
> Test #4 was performed, essentially, on a whim by the research team as they were bouncing ideas off each other, and was entirely unexpected. They are extremely hesitant to draw any conclusions based on test #4, although they certainly found it interesting.
That right there is science in progress: a hint of something interesting, a test leading to another test leading to another, each interesting in their own right. It's still not clear if something spectacular is going on, but I think it's undeniably interesting. Careful reexamination can yield all manner of useful insights!
(Off topic, but along the lines of closer examination, see https://news.ycombinator.com/item?id=9020065 for how some careful (and extensive!) experimentation helped illuminate sodium's reaction with water. Note just how much work was needed to tease out the details!)
If that thing is really stretching and compressing spacetime, and a Michelson-Morley interferometer can see that in theory, then something like LIGO would be equipped to measure it.
sorta, kinda...
It is true that some scientists at NASA have done experiments with this where they claimed to have found some result. A lot of other scientists were very sceptical about these results and said they weren't rigorous enough in eliminating possible errors. The same scientists have done other controversial experiments and made claims that other scientists found overhyped.
NASA is a big institution. It's a very different thing to say "NASA said XY" or "someone at NASA claimed to have found XY".
It's true - I think the general issue is that the forces produced are tiny, so it's hard to rule out some other effect coming into play, and for the EM drive to work would mean overturning a decent amount of 'settled' physics understanding, so the evidence needs to be pretty incontrovertible.
I don't think there's anything real here. McCulloch's papers on arXiv seem very confused. Eg., this paragraph:
"In this scheme there is a minimum allowed acceleration which depends on a Hubble scale Θ, so, if Θ has increased in cosmic time, there should be a positive correlation between the anomalous centripetal acceleration seen in equivalent galaxies, and their distance from us, since the more distant ones are seen further back in time when, if the universe has indeed been expanding, Θ was smaller. The mass to light ratio (M/L) does seem to increase as we look further away. The M/L ratio of the Sun is 1 by definition, for nearby stars it is 2, for galaxies’ it is 50, for galaxy pairs it is 100 and for clusters it is 300. As an aside: equation (11) could be used to model inflation, since when Θ was small in the early universe the minimum acceleration is predicted to be larger." (http://arxiv.org/pdf/astro-ph/0612599v1.pdf)
If an effect was stronger in the early universe, you'd expect to see a big correlation between the effect size in a galaxy, and that galaxy's redshift z. It wouldn't make any sense to say that "galaxies" have a ratio of 50, since there are galaxies at every redshift; many are nearby and have redshifts of almost zero, while the Ultra Deep Field galaxies have very large redshifts of up to ~10. If the number is really the same for "galaxies" in general, that means there's no distance dependence, but McCulloch doesn't seem to realize this. He seems to imply that nearby stars have a higher mass/luminosity ratio because of their distance compared to the Sun (?!), but the time-delay effect for anything in the Milky Way is negligible (< 0.0005% of the universe's age). In reality, nearby areas of space will have higher ratios than the Sun just because they contain many objects which, unlike the Sun, don't emit much light (red/brown/white dwarfs, gas and dust, etc.). Likewise, he seems to imply that "galaxy clusters" are farther away than "galaxies", but most galaxies are part of clusters, and we can observe both galaxies and galaxy clusters at both small and large redshifts.
Isn't he saying the 'standard' M/L ratio for galaxies is 50... and that by looking at a lot of galaxies at different distances you see a trend that the more distant ones are above 50?
The hard part for physicists to accept is that the theory requires the speed of light to change.
It would be interesting to see how the theory behind the Unruh radiation works with "The quantum vacuum as the origin of the speed of light" (http://arxiv.org/abs/1302.6165#)
> The hard part for physicists to accept is that the theory requires the speed of light to change.
I'm a complete layman here, so I don't understand how the link you shared says that the speed of light is changing; it seems more like a change in the way we define the speed of light. Is that right?
If the actual speed of light is to change, how is that possible? We've experimentally verified it up one side and down the other, so this seems like a huuuuuge discovery.
So there's c, which is essentially a conversion factor between space and time, and also happens to be the speed at which massless particles move. If photons happen to not be massless, the speed of light might change, though 'c' wouldn't have to.
A large part of theoretical physics is getting to grips with the terminology -- not that I mean to imply I'm in any way an expert. I think I'd go with the wording "it seems more like a change in the way we understand the speed of light". From that, it implies that, yes, its definition would change, but it's something much bigger!
So yes, this seems like a huge discovery. Pretty much all of our understanding of relativity is based on the fact that c is, well, immutable. Changing the way we understand c means that we'll need to completely change the way we grok the last century of theoretical physics.
In case of refraction it is a macroscopic illusion. If you look closely enough, you will see photons traveling at the speed of light between atoms, getting absorbed by atoms and reemitted later. The macroscopic speed of light is just the result of absorption and emission probabilities and the microscopic geometry of the material.
But there is another layer below, though I am less sure about that; it may be (slightly) wrong. In quantum electrodynamics, photons take, figuratively speaking, all possible paths from source to detector, including paths that require going slower or faster than the speed of light. All those possible paths interfere with each other, and the net result is that the photon seems to move at the speed of light with very high probability. But really, take that with a large grain of salt.
> If you look closely enough, you will see photons traveling at the speed of light between atoms, getting absorbed by atoms and reemitted later.
That's a story we tell kids who ask too many awkward questions. If it were true we would see sharp changes in the observed speed of light at wavelengths depending on the absorption spectrum of the material in question.
My understanding is that the real answer involves photons gaining inertia via coupling with particles in the material they are passing through (cf. Higgs) ... but I'm probably mangling that explanation horribly.
> If it were true we would see sharp changes in the observed speed of light at wavelengths depending on the absorption spectrum of the material in question.
Uh, we do. The imaginary portion of the index of refraction near a resonance generally looks roughly like a gaussian, and the real portion does have jumps that look a bit like tanh.
I shouldn't have written »absorbed by atoms and reemitted later«, that is really a different thing, interacting with the atoms is better. The superposition of all possible paths and interactions results in a wave function that effectively travels slower than the speed of light.
Yes, due to the interaction of the photon with the atoms of the material the photon takes longer to move through certain distance of a material than through the same distance in vacuum.
Just to be clear, I am not a physicist, only an interested layman. As I understand it currently, photons don't travel any particular path. They are emitted somewhere at some point in time and detected somewhere else at another point in time; what they did in between is not a meaningful question, at least in the classical sense of asking for a path the photon took. When we want to calculate the probability of observing the photon at a specific place at a specific point in time, we consider all possible paths - including the photon traveling to the edge of the universe and returning from there, including the photon going into the future and coming back - and sum them up. There is a relatively simple formula for every possible path, and those possibilities interfere when we sum them up: some paths amplify each other, some cancel out. In the end you have a probability distribution in space and time telling you where and when it is likely or unlikely to find the photon if you tried to observe it there. What the photon did in between, we don't know; it's a meaningless question... at least I can't tell.
Fun fact: Cherenkov light is emitted when particles move faster than the speed of light in a given medium. It can be seen in nuclear reactors as a blue glow. This is, of course, in line with current theory: it's only possible because the speed of light in a medium is lower than the speed of light in vacuum, which particles cannot reach.
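The threshold is easy to sketch: a charged particle emits Cherenkov light once its speed exceeds the phase velocity of light in the medium, c/n. A minimal check, assuming water with n ≈ 1.33 (reactor pools are water-filled):

```python
c = 299_792_458.0      # speed of light in vacuum, m/s
n_water = 1.33         # refractive index of water (assumed round value)

# Phase velocity of light in the medium: particles moving faster than this
# (but still slower than c) emit Cherenkov radiation.
v_threshold = c / n_water
print(f"threshold = {v_threshold / c:.2f} c")   # ~0.75 c
```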
The sentiment I hear in this thread is like those that dismiss cold fusion. Sure, we can all make statements like "obviously, someone screwed up", but it is another thing entirely to have the patience to simply cite the experiments that disprove the proposed effect. I don't know what to make of cold fusion, but I also know for a fact that neither do physicists, and instead of studying it, they're simply saying that because there is no theory it must not work. Same with this stuff.
One YouTube guy discovered the beaded-chain lifting effect (the "chain fountain"), and then it had to be studied to find out what was going on. Obviously that was an easily reproduced experiment.
So with this thing, we must conclusively find the unmeasured heat or ions or whatever and show a repeatable mechanism for such mistakes. That is my opinion about science; of course, I probably lost most scientists with my first sentence.
Extraordinary claims require extraordinary proof, and the history of physics is littered with intentional fraudsters and sloppy experimentalists. Disproving every perpetual motion machine via experiment would take a lot of effort from scientists who don't want to do it. If you think those experiments should be done, then by all means do them.
Fair enough, but the problem with perpetual motion devices is that no energy can be extracted (or that the time until the device stops is merely very long), not that they fail to do what people claim, which is run more or less perpetually.
We need to find that kind of explanation for this supposed propulsion.
I upvoted you because you're right, but maybe you and I should take a few months off and time some of the better ones! Worthless machines but I've heard of some lasting a great deal of time.
Like many of these types of inquiry, the details matter. For example: cold fusion is real. It's just not terribly useful for powering a city. But you can use it as a neutron source for generating the short half-life radiological treatments for cancer [1].
For this drive, what we have is an experiment that begs a theory. There's something interesting going on, and so far it eludes easy explanations. Quite possibly we'll get an innovation out of it in experimental setup, or best case real, easily verifiable thrust is detected. Chances are there won't be any new physics, but rather a very clever engineering exploit of what was already known (but not properly applied).
Enjoy the failures in science; it means we're actually trimming the dead ends carefully instead of assuming all innovation is low hanging fruit. There's a lot of bunk out there, but there's also tons of neat edge cases to map out!
[1] A friend's husband works in one of those labs. I forget the mechanism at play (I think it was cavitation), but it was room temperature fusion generating neutrons. It'd never be self sustaining for power, but still incredibly useful.
People challenging the established, many-times-checked basics of a field need to bring some data.
Otherwise, by analogy: on Stack Overflow, should we take each new programmer's statements at face value, without seeing their actual code or error messages, unless an experienced programmer has the patience to refute them individually?
It wouldn't fly here if it were an astonishing claim about gcc backed by no specific code or output.
The point is that when someone asks for code and output, the user with the anomaly either has to show some, or people give the question up as unresolvable. We need the data.
People don't just say 'well multiple teams have written code and gotten output that agrees with what I'm saying.' We need the actual information, not vague reports that a friend of a friend thinks there were test cases.
Oh yeah no totally. I mean, it is indeed difficult in those A/B problem situations where you don't even know enough to ask the proper question. Perhaps SO needs an army of question vetters, code experimenters, and educators?
Physicists didn't just dismiss and ignore cold fusion. It's been tested a lot, and it just doesn't work as originally described.
Likewise, the EmDrive is also being tested. So far it's pretty inconclusive and it looks likely to have a mundane explanation. But testing continues, so what exactly are you complaining about?
I guess I wanted to know more about cold fusion, and you have to be really careful because there is just so, so much bullshit out there from scammer link-spam people and UFO crazy people. Then, however, there are just tons of physicists using it as a punchline... so after you weed through all of that, you do get to some of the US Navy stuff, some of the real people doing the work, but even a video series about it from some MIT professors says that you can't just go around talking about it because it is career suicide.
Anyway... I started to hear some of that tone, and I guess I assume that there really are some physicists here on HN, and so I posted what I did.
I think that last part is where you've gone wrong. I'm sure you've seen discussions about computers in non-computer forums, where there's just an astonishing amount of cluelessness on display but the people don't know enough to realize they have it all wrong. It's likely the same here but in reverse. (And to be clear, I don't exclude myself from it at all.) I'm sure there are some actual physicists here, but most of the commenters are informed laymen, with all the good and bad that implies.
I'm curious to what degree Unruh radiation (and the resulting consequences this idea relies on) is 'established physics'.
By extension, I think this is the most interesting article I've seen to date on the EmDrive: it seems to have a basis in a fairly non-controversial result of GR, which in turn is something which nicely explains an otherwise bizarre physical phenomenon. And, to top it off, there are a number of falsifiable predictions which are within our ability to test. I'm interested in whether any of my assumptions are wrong.
> I'm curious to what degree Unruh radiation (and the resulting consequences this idea relies on) is 'established physics'.
Most theorists think Hawking–Unruh radiation has to exist, I guess because the calculation is relatively straightforward. You just need special relativity and some basic field theory.
Hawking–Unruh radiation has never been observed in a gravitational system, and doing so would be very difficult because it requires a truly enormous acceleration in order to produce a horizon with a temperature that is an appreciable fraction of 1 K.
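To put a number on "truly enormous": inverting the standard Unruh temperature formula T = ħa/(2πck_B) for T = 1 K gives an absurd proper acceleration. A quick sketch using CODATA constants:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
k_B = 1.380_649e-23        # Boltzmann constant, J/K
c = 299_792_458.0          # speed of light, m/s

def unruh_acceleration(T_kelvin):
    """Proper acceleration whose Unruh temperature is T: a = 2*pi*c*k_B*T / hbar."""
    return 2 * math.pi * c * k_B * T_kelvin / hbar

a = unruh_acceleration(1.0)
print(f"a ~ {a:.1e} m/s^2 for a 1 K Unruh temperature")   # ~2.5e20 m/s^2
```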
There are certain regimes in fluid mechanics where the mathematics governing the fluid looks like the mathematics governing relativity. Some people have produced phenomena in those systems that look like Hawking–Unruh radiation. Some people further claim that this demonstrates the existence of Hawking–Unruh radiation for gravitational systems, but I find that reasoning to be specious.
Unruh radiation itself is well established theoretically, though it has not been observed experimentally.
However, the theory behind Unruh radiation says one thing that creates an obvious problem for anyone trying to use it to explain something like the EmDrive or the flyby anomaly: Unruh radiation is felt by objects that are undergoing proper acceleration. That is, they have to already be experiencing thrust. And according to the theory, the thrust they are experiencing has to be very, very small, so that the wavelength of the Unruh radiation is of the same order as the size of the observable universe.
In the case of the flyby anomaly, the spacecraft passing by the Earth were in free-fall orbits--i.e., zero thrust. That means zero Unruh radiation.
In the case of the EmDrive, technically the apparatus was feeling "thrust", in the sense that it was sitting on the surface of the Earth and therefore feeling weight. (Weight counts as "thrust" in this connection.) But the weight of the EmDrive apparatus is many orders of magnitude larger than the small accelerations that would be required for the Unruh radiation explanation to work--the wavelength of Unruh radiation associated with a 1 g acceleration is about one light-year, much, much smaller than the radius of the observable universe.
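That one-light-year figure is easy to verify from the characteristic Unruh wavelength, λ ~ c²/a, ignoring O(1) factors that vary by convention:

```python
c = 299_792_458.0            # speed of light, m/s
g = 9.81                     # 1 g of proper acceleration, m/s^2
light_year = 9.4607e15       # metres per light-year

# Characteristic Unruh wavelength for proper acceleration g, up to O(1) factors
wavelength_1g = c**2 / g
print(f"~{wavelength_1g / light_year:.2f} light-years")
```

That is roughly one light-year, versus an observable-universe radius of tens of billions of light-years, so a 1 g environment is nowhere near the regime where these cosmological-horizon effects could matter.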
So, bottom line, whether or not the EmDrive results themselves are valid (I am skeptical, but the whole thing is still being hashed out so it's too early to know for sure), it doesn't look to me like Unruh radiation can account for results of this sort.
(Btw, as gaur pointed out, Unruh radiation actually is a result of quantum field theory in flat spacetime, i.e., not GR, as the article claimed.)
Even if this works, reactionless drive would be my last hypothesis after exhausting all possible things it might be interacting with via some as-yet-unknown means: dark matter, nearby gravitational bodies, etc.
The infinite energy device claim is easy enough to test. Since a working infinite energy device would be perverse, I'd predict that an EmDrive rigged up so as to produce infinite energy would fail to do so. Exactly how it fails to do so might tell us what's going on. Do the energy requirements rise with momentum (as they should in a sane universe), or do you pass a point at which the effect ceases, or does something truly wacky occur like space-time distortion in such a way as to cancel the effect?
My reading of the McCulloch hypothesis is that it's pushing against all matter in the universe at once, or some such thing, but that could be way off. I'm not a physicist.
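One way to make the over-unity test concrete: a drive producing constant thrust F from constant input power P delivers mechanical power F·v, which overtakes the input power at v = P/F. A sketch with assumed but representative numbers (100 µN of thrust from 1 kW in -- both my own guesses, not figures from the article):

```python
P = 1000.0     # electrical input power, W (assumed)
F = 100e-6     # claimed thrust, N (the Eagleworks-scale figure mentioned upthread)

# Mechanical power delivered is F*v; it exceeds the input power once v > P/F.
# Past that speed, a working constant-thrust drive would be a net energy source.
v_breakeven = P / F
print(f"breakeven at {v_breakeven:.1e} m/s (~{v_breakeven / 3e8:.0%} of c)")
```

So in a sane universe something has to give well before ~10^7 m/s: either the thrust drops with speed, or the power demand rises, or the effect was never there.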
This is probably incredibly naive... but couldn't this all be put to bed by someone putting such a drive on a cubesat, lifting it into orbit, and seeing if it works under practical conditions?
Maybe Hawking and Milner should be considering this for Starshot?
1. The power requirements to run one of these would be larger than what would fit in a basic Cubesat form factor. It's not going to be a 1U, perhaps something like a 3U or 6U at best. Then there is the thermal management of the large power supply (getting rid of heat in a vacuum is not simple) which is even more equipment and mass. Those larger satellites are a bit more expensive.
2. A lot of the "Launch a 1U Cubesat for $100k" figures are for the launch itself. That ignores other stuff like engineer wages, legal, etc. and is mostly hyperbole. Launching two for $200k is much more common, as the second one takes a lot less time to put together once the initial R&D is done. Then these "$100k" Cubesats are, capability-wise, fairly useless. Think Sputnik-type satellites. Want a working payload? Prepare to do more R&D. Oh, and Cubesats have about a 50% rate of actually operating in orbit. First one doesn't work? Now you need to find more money to launch another if you weren't lucky.
3. The number of people who are willing to throw millions at something that currently isn't explained by science -- and that is a lot harder to debug in orbit, where you don't have someone sitting next to it -- when it can be done on the ground for an order of magnitude cheaper, is very small. Most people interested in using this are waiting for someone else to pay for the testing first. Hell, most satellite engineers I know still believe it to be either a hoax or a measurement error. So people are riding this out until there is more hard evidence.
Milner is a billionaire. He could easily pay for a full-size satellite lofted up by one of Musk's $5M reusable Falcon 9 boosters in a couple of years... Double that number to build the satellite if you have to and he could still find the change in his pockets...
EDIT: and once I convince Milner to finance this boondoggle can I sign you up to be my lead satellite engineer?
There are so many variables in LEO: the magnetic field, ions, drag, radiation pressure from the sun, radiation pressure from the earth, variable gravitational fields due to the irregular shape of the earth, and I'm not a rocket scientist, so I'm sure there are many more I'm just not thinking of.
Plus, upthread someone cites the Eagleworks guys as preparing to test an EmDrive capable of 'producing' 100 uN (yes, micronewtons) of thrust; LEO's various weirdnesses are more than capable of fucking with that small of a delta-v.
Nitpick: that's the force supposedly produced by the engine, not delta-V. Delta-V, should the drive work, would be limited by the available electrical power.
A Falcon 9 can lift 10,000 kg to LEO (or more). Let's say we could get a simple EmDrive, a basic satellite bus, and a solar panel in at under 100 kg. (We can do better, but let's start there.) That's 1/100 of the load, so (in theory) we could maybe pay 1/100 of the $60M cost of a Falcon 9 launch, or $600,000.
Half a million bucks plus some change, let's say.
Edit: Less if we can get on board a previously-enjoyed rocket!
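The pro-rata arithmetic above, as a sketch (assuming launch cost scales linearly with payload mass, which real rideshare pricing only roughly does):

```python
falcon9_cost = 60e6        # approximate Falcon 9 launch price, USD (assumed)
falcon9_leo_kg = 10_000    # payload to LEO used in the estimate, kg
payload_kg = 100           # EmDrive + bus + solar panel, kg (assumed)

# Naive pro-rata share of the launch cost
cost = falcon9_cost * payload_kg / falcon9_leo_kg
print(f"${cost:,.0f}")     # $600,000
```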
You also have to add in the payload integration fees, testing (very, very expensive), the actual payload development, insurance, and also be in a launch configuration such that you can piggyback behind a payload taking up most of the F9 capacity.
I'd be astonished if the cost of this was less than $2-3M.
It takes about $100k to get a cubesat into orbit, plus costs for building the sat itself (custom fabricating a miniaturized drive plus a suite of sensors).
So not much, but the larger problem is we don't know how this drive works. Miniaturizing it could impact performance, as could other electronics in the satellite... given the unknowns it makes sense to let NASA do some more terrestrial due-diligence before going to space.
> ... (McCulloch) proposes a constant term that modifies the acceleration corresponding to the inertial mass. He says torsion balance experiments can't detect it because torsion balance experiments measure differences in acceleration. But he's wrong because since it's a constant term he "predicts", it should manifest in the Eotvos parameter. Torsion balance experiments have gone well beyond the limit to detect this. But it's irrelevant because he completely misunderstands all the theory he bases this on.
The article leaves out all the numbers and so skips over one little problem. The momentum some people think they might have observed (if it's not experimental error) is compatible with what you'd get by using a microwave antenna as a thruster. Just ordinary radiation pressure, with the puzzle of how the radiation could be escaping the cavity.
But the minimal measurement results I've seen are not compatible with the radiation pressure multiplied by some large factor for the Q of the cavity, which seems to be the claim from some. That really would violate our understanding of conservation of momentum, rather than violating our assumptions about where the momentum goes in this experiment. And that seems to be ruled out experimentally so far.
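For scale, the baseline "microwave antenna as thruster" figure is just F = P/c for radiation escaping in one direction, which is tiny at magnetron power levels:

```python
c = 299_792_458.0      # speed of light, m/s

def photon_thrust(power_watts):
    """Thrust if all radiated power escapes in one direction: F = P/c."""
    return power_watts / c

# A ~1 kW magnetron radiating freely gives only a few micronewtons.
F = photon_thrust(1000.0)
print(f"{F * 1e6:.1f} uN")     # ~3.3 uN
```

Any claimed thrust much above this per watt is the part that would require either a large Q-multiplication factor or new physics.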
> At very small accelerations, the wavelengths become so large they can no longer fit in the observable universe. When this happens, inertia can take only certain whole-wavelength values and so jumps from one value to the next.
So the EmDrive glitches the universe size? This is hilarious.
This was also my first reaction. I don't see how energy could be quantized with the inverse of the diameter of the observable universe.
Could someone explain the current thinking around how energy quanta relate to the size of the universe, or rather why wavelengths larger than the universe are impossible?
If the latter were true, I'd expect that an energy quantum corresponding to 0.99 * universe would be equally impossible as 1.01 * universe, and that only integer multiples of the corresponding frequency would be allowed (i.e. a harmonic series across the universe).
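For what it's worth, the minimum acceleration in the quoted paper can at least be sized up: in McCulloch's model it comes out as roughly 2c²/Θ, with Θ the Hubble scale. A hedged sketch, taking Θ ≈ 8.8e26 m (an assumed round value for the observable-universe diameter):

```python
c = 299_792_458.0    # speed of light, m/s
Theta = 8.8e26       # Hubble scale (observable-universe diameter), m -- assumed value

# McCulloch's claimed minimum allowed acceleration, a_min ~ 2 c^2 / Theta
a_min = 2 * c**2 / Theta
print(f"a_min ~ {a_min:.1e} m/s^2")   # ~2e-10 m/s^2
```

That lands in the same ballpark as the tiny anomalous accelerations (Pioneer, galaxy rotation) the theory is pitched at, which is presumably why it looks attractive to its proponents.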
Photons must behave as if they have mass. From TFA:
McCulloch’s theory could help to change that, although it is hardly a mainstream idea. It makes two challenging assumptions. The first is that photons have inertial mass.
When I was taking college physics, there was a question on the exam about radiation pressure. I missed that day, so had no idea how to solve it. "a 5mW laser is reflected off a mirror (perpendicular) what is the force exerted on the mirror"? Later looking it up in the book there was a page on this and a derivation using electromagnetic theory. In the exam however, I decided to convert 1 second of laser energy to mass, bounce it off the mirror at speed=c, compute the force and change in momentum (over change in time which was 1s). I got the right answer of course.
The logic is simple. If we can convert back and forth between matter and energy, any experimental setup must obey conservation of momentum, and its CG must not move. So a laser inside a closed spaceship would actually be transferring mass (as energy) from one end to the other. The net effect must be the same as if that mass were moved any other way.
I derived a general expression for radiation pressure after the exam and it's identical to the EM one from the book. Photons behave - and must behave - as if they have mass with a velocity of c. By the same reasoning, gravity must bend light rays, though I have not compared this prediction to that of relativity.
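That exam trick is easy to reproduce numerically: for a 5 mW laser reflected at normal incidence, photons carry momentum p = E/c, reflection reverses it, and both routes give F = 2P/c:

```python
c = 299_792_458.0   # speed of light, m/s
P = 5e-3            # laser power, W

# Route 1: textbook radiation-pressure formula for a perfect mirror
F_direct = 2 * P / c

# Route 2: treat one second of light as momentum p = E/c,
# reversed on reflection (delta p = 2E/c over delta t = 1 s)
E = P * 1.0
F_momentum = 2 * (E / c) / 1.0

print(f"{F_direct:.2e} N")                      # ~3.3e-11 N
assert abs(F_direct - F_momentum) < 1e-20       # both routes agree
```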
Interestingly, that would imply we are possibly living in an imperfect simulation, one that hypothetically does not replicate the physics of a real non-simulated universe.
If that is true, why? I can think of four possible reasons:
1) Is it because the simulation was designed in such a way that it leaves its inhabitants clues that reveal the true nature of the universe? This lets them figure out the truth once their society and knowledge become sufficiently advanced.
2) That it's impossible to simulate a real universe? This would imply any sufficiently advanced society will be able to detect that they do in fact live within a simulation.
3) That the creators made a mistake and it's not a fundamental limitation. It's simply an error made by whatever civilisation created this simulation, they messed something up.
4) That it's possible to create a perfect simulation, but for some reason they decided to make one that uses fewer resources, and this introduces errors or the need for hacks to get it working. Now we might be exploring these hacks.
This is fascinating stuff. It is also good to see more and more respectable names looking at this. Good things will come from that regardless of the overall outcome of the emdrive.
So is there any microwave radiation escaping the cone? If more of it escapes from one side than the other, won't the momentum of the escaping photons (radiation) be the reaction of the thrust produced? Why does it need special physics? I am probably missing something here, but what is it?
One of the main issues I see with the EmDrive in general is time invariance. And especially with the theory Mike McCulloch put forward. If photons cause this process for a positive change in momentum, the same process should happen in reverse with a negative change in momentum.
Humans have learned a lot of things. However, we still have much to learn about the universe: some physical phenomena are not naturally observable, and we have no evidence that we have discovered them all.
Edit: These so-called "laws" are laws in our minds, and things like the EmDrive show us that our minds can expand, forming new "laws".
Inertia is quantized? Do we actually live inside a computer? Because that's really weird. Physicists, help explain this in some way other than "the universe uses a lookup table to approximate inertia".
I wonder if this inertial/acceleration effect is what is responsible for "dark matter" in the universe, i.e. galaxy-scale gravitational anomalies that don't agree with existing Newtonian and GR models.
The black-and-white thing is a Crookes radiometer[1], and it rotates because of differential heating of the black and white sides of the plates, which interacts with air molecules to create a tiny air flow from the light side to the dark side of each plate.
The device is under _partial_ vacuum. Under total vacuum no rotation will occur; if the air pressure is too high, drag forces outpace the tiny thrust.