Love this comment left on the post by Peter Shor (of Shor's algorithm--the algorithm that kicked off the quantum computing frenzy). I assume it's him and not an imposter.
"It's not just that scientists don't want to move their butts, although that's undoubtedly part of it. It's also that they can't. In today's university funding system, you need grants (well, maybe you don't truly need them once you have tenure, but they're very nice to have).
So who decides which people get the grants? It's their peers, who are all working on exactly the same things that everybody is working on. And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.
But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress. (Assuming they really did have promising crazy ideas, and weren't abusing the system. Of course, what would actually happen is that the new system would be abused and we wouldn't be any better off than we are now.)
So the only advice I have is that more physicists need to not worry about grants, and go hide in their attics and work on new and crazy theories, the way Andrew Wiles worked on Fermat's Last Theorem."
(new comment right below)
"Let me make an addendum to my previous comment, that I was too modest to put into it. This is roughly how I discovered the quantum factoring algorithm. I didn't tell anybody I was working on it until I had figured it out. And although it didn't take years of solitary toil in my attic (the way that Fermat's Last Theorem did), I thought about it on and off for maybe a year, and worked on it moderately hard for a month or two when I saw that it actually might work.
Sigh, so true. Heartening that Shor has the same thoughts about the grant system that I do.
I don't have quite the same critical perspective as the blogger, but I think there's a certain misguided attitude underlying the phenomena observed by Shor.
Yesterday or the day before I was listening to the radio and someone with a physics background was talking about something (I think quantum entanglement) and started asserting that physics has basically figured out almost everything. This is probably a somewhat unfair paraphrase, but not too unfair.
What irritated me about it was the assumption that, if most of your predictions are correct, your model is almost entirely correct, and just needs to be tweaked a bit. This is certainly true some of the time, but sometimes those little empirical cracks are what bring down a major paradigm and lead to another one, one that has the same predictions in 99% of the cases, but in the other 1% has totally different predictions with very different implications.
This carries over to grant funding, etc., in that the prevailing community often assumes that what they're doing is fine, and all that's left are these little empirical tweaks. That's certainly helpful some of the time, but it seems to dominate too much. Academia needs to leave more room for people to fail at high rates with good ideas, to increase the small percentage of times they succeed wildly.
>> What irritated me about it was the assumption that, if most of your predictions are correct, your model is almost entirely correct, and just needs to be tweaked a bit.
Haha, no. We wish.
Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favour with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.
A theory can explain observations perfectly well and still be wrong, because the frame of reference is wrong. The worst thing is that you can't figure that out until you've figured out what the correct frame of reference is, and looked at your observations in a new light.
>A theory can explain observations even perfectly well and still be wrong- because the frame of reference is wrong. The worse thing is that you can't figure that out until you've figured out what the correct frame of reference is, and looked at your obsrevations in a new light.
Well strictly speaking, it wasn't wrong. It explained the observations perfectly well. What a heliocentric description brought was a simpler description that illuminated the principles behind it, in a way that enabled us to discover the inverse-square law of gravity, link that to Gauss's theorem for gravitation, explain it even from a more fundamental geometric perspective with general relativity, etc.
It was wrong in the sense that epicycles are an entirely imaginary math artefact, and reality works on different principles.
Revolutions happen when a new mental model - or frame of reference, or whatever you want to call it - can generate new kinds of math.
The old model is certainly wrong in the sense that it's not a good picture of how reality actually works.
If you really want to, you can still use epicycles for certain kinds of problem, just as you can use Newtonian physics for basic mechanics.
But this is engineering, not physics. These theories are useless for frontier research. They're absolutely wrong in the sense that their lack of completeness means they cannot be used to generate theory[n+1].
The same way we distinguish the shadows cast in Plato's cave from the objects occluding the light. The more situations in which a theory or model makes accurate predictions, the more correct it is. Epicycles are much, much more wrong than mathematically perfect elliptical orbits.
> Epicycles are much, much more wrong than mathematically perfect elliptical orbits.
Not if you define "wrong" as "inaccurate predictions". You can approximate ellipses with circles and epicycles to any desired degree of accuracy by putting in more epicycles. So you can match the predictions of ellipses to any desired accuracy with epicycles.
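A minimal numerical sketch of that claim (illustrative only; it assumes a Kepler orbit with eccentricity 0.3 and treats each Fourier term as one epicycle):

```python
import numpy as np

# A Kepler orbit traversed uniformly in *time* is not a finite sum of circles,
# so adding epicycles (Fourier terms) keeps shrinking the fitting error.
e, a = 0.3, 1.0                                        # eccentricity, semi-major axis
b = a * np.sqrt(1 - e**2)
M = np.linspace(0, 2 * np.pi, 4096, endpoint=False)    # mean anomaly = uniform time

# Solve Kepler's equation M = E - e*sin(E) by fixed-point iteration.
E = M.copy()
for _ in range(50):
    E = M + e * np.sin(E)

z = a * (np.cos(E) - e) + 1j * b * np.sin(E)           # position, focus at the origin

def epicycle_fit(z, M, n_terms):
    """Truncated Fourier series: each frequency is one circular epicycle."""
    freqs = ([0] + [f for k in range(1, n_terms) for f in (k, -k)])[:n_terms]
    approx = np.zeros_like(z)
    for k in freqs:
        c_k = np.mean(z * np.exp(-1j * k * M))         # epicycle radius and phase
        approx += c_k * np.exp(1j * k * M)
    return approx

for n in (1, 3, 5, 9, 17):
    err = np.max(np.abs(z - epicycle_fit(z, M, n)))
    print(f"{n:2d} epicycles -> max position error {err:.1e}")
```

The error falls steadily as terms are added, which is exactly the Fourier-analysis point: with enough epicycles you can match the elliptical motion to any accuracy you like.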
Also, as I noted, the actual orbits of the planets are not perfect ellipses once GR effects are taken into account. Have you proven mathematically that it is impossible to construct an epicycle model that makes more accurate predictions than perfect ellipses, based on the actual data (which confirms the GR predictions to within current observational accuracy)?
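For scale, the best-known such deviation is Mercury's perihelion: the leading GR correction makes the orbit precess by roughly 43 arcseconds per century rather than close exactly into an ellipse. A quick back-of-the-envelope check (standard leading-order formula; the input numbers are rounded):

```python
import math

# Leading-order GR perihelion advance per orbit: dphi = 6*pi*G*M / (a*(1-e^2)*c^2)
GM_sun = 1.327e20        # m^3/s^2, Sun's gravitational parameter
c = 2.998e8              # m/s
a = 5.79e10              # m, Mercury's semi-major axis
e = 0.2056               # Mercury's orbital eccentricity
period_days = 87.97      # Mercury's orbital period

dphi = 6 * math.pi * GM_sun / (a * (1 - e**2) * c**2)      # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"GR perihelion advance: {arcsec_per_century:.1f} arcsec/century")  # ~43
```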
You're being intentionally obtuse. "Just add more epicycles!" isn't building a better model (for a sensible definition of "better"). It's just overfitting. You are the reason why regularization exists. https://en.wikipedia.org/wiki/Regularization_(mathematics)
As your article explains (but too briefly), epicycles do make wrong predictions: they predict correct locations but incorrect phases.
"It was not until Galileo Galilei observed ... the phases of Venus in September 1610 that the heliocentric model began to receive broad support among astronomers."
My understanding is that these had not been observed before Galileo, or at least not observed by many and not long before Galileo's time. In that case, they weren't so much incorrectly predicted, as not observed.
True, but there are frames of reference that make understanding (and the math) vastly simpler. I can calculate the orbits of Saturn's moons using my location on earth as the origin. It will take me a lot of work, but I can do it.
It's a fun exercise to reframe the laws of physics in terms of your "stationary" frame on Earth and I recommend it to everybody. Consider the speed of light in the Andromeda galaxy... except under these physics we can't speak of the "speed" of light, but the permissible velocities as a function of the location in question.
Running through this exercise with some honesty can give one a greater understanding of why our physics is framed the way it is, and why it is that while all sorts of reference frames are valid, "inertial" reference frames are still important on their own merits.
The Earth is demonstrably not a "stationary" (inertial) frame, in the sense that it's constantly accelerating.
Is there a deeper meaning to "Consider the speed of light in the Andromeda galaxy" that I missed? The speed of light is known to be constant in every reference frame.
Your two paragraphs are connected to each other. The speed of light is known to be constant in every inertial reference frame. But reference frames are not required to be inertial, which is precisely why we call them inertial reference frames; the word "inertial" is not redundant.
You can reformulate all of physics into your Earthly non-inertial reference frame. You can formulate all of physics into a reference frame in which you personally are always stationary! Nothing stops you from doing it, and the physics will work, as much as they ever do (i.e., we know something's wrong with our theories). To the extent that the result is a hideous monstrosity, well, such is my point. Pondering the nature of that hideous monstrosity is something I think worth doing, at least for a bit. Not to the extent of actually writing the equations, though. It brings clarity to why inertial reference frames are so important that we almost consider "inertial reference frame" to be a single atomic word, because non-inertial reference frames are in general not very useful. (In specific they can be.)
The earth is still stationary with respect to itself. The Universe is accelerating around it. In the case of Saturn's moons, the Sun and Earth are not significant factors; if I use the Earth as my origin I have to account for them anyway, but if I were to use Saturn as my origin I could safely ignore them (probably - I could come up with sci-fi reasons that they matter).
Sorry, but I have to correct you there. There is an infinite number of Relativistic Models possible. In the one that is currently used, you are indeed right.
But now I understand why Einstein wrote in his last book, after much thinking, that this perspective is wrong. He called it "unthinkable" for a good reason. The model I'm using also has relativity, but with an absolute frame. It also behaves differently in extreme situations like the surface of supermassive black holes and the near field of a proton. In fact, I have much more relativity, but not everywhere, and it's paradox free :)
General relativity holds that the universe has no “center”, either Earth or Sun. Even more surprising is that, unlike Newtonian physics, general relativity says the universe doesn’t even have a single “clock”, and what you observe in astrophysics depends on where you observe it from and how fast you are travelling when you observe it. The speed of light is constant, and space and time will bend in order to maintain the observation that light is always a constant speed.
The location, and speed with which you are travelling is what general relativity calls a "frame of reference", and none of them are "correct" or "incorrect", they're just predictors for what observations will be possible from that frame.
Then the weirdest part is that one of the consequences is that planetary bodies are large enough for that “speed of light must remain constant” rule to matter in such a way as to generate a warping of spacetime around them, the geometry of this warp perfectly explaining gravity. Or, put another way, we stick to the earth because time runs slightly faster at our heads than at our feet.
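For a sense of scale, here is a rough weak-field estimate of that head-to-feet rate difference (my own back-of-the-envelope numbers, assuming a height of 1.7 m):

```python
# Weak-field gravitational time dilation: fractional rate difference ~ g*h / c^2
g = 9.81                 # m/s^2, surface gravity
c = 2.998e8              # m/s, speed of light
h = 1.7                  # m, rough head-to-feet height

frac = g * h / c**2
lifetime_s = 80 * 365.25 * 24 * 3600
print(f"fractional rate difference ~ {frac:.1e}")                        # ~2e-16
print(f"head ages ~{frac * lifetime_s * 1e6:.2f} microseconds extra over 80 years")
```

Tiny, but real: optical clocks have measured shifts of this kind over height differences of well under a metre.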
Thank you, that's a good explanation - in the sense that I understand now what the previous comment by Koshkin meant in responding to mine: that there is no "wrong" frame of reference.
>> The location, and speed with which you are travelling is what general relativity calls a "frame of reference", and none of them are "correct" or "incorrect", they're just predictors for what observations will be possible from that frame.
OK, I see- "frame of reference" is a technical term, in General Relativity, that refers to your position in space, and determines what you can observe. Instead, I meant "frame of reference" as a more general "point of view" or "frame of mind" - a set of assumptions that give context to any observations and that inform interpretations of them.
Even going by the technical sense of a frame of reference, though, there are frames of reference that will not permit the correct identification of a process that generates a set of observations - or at the very least, they will tend to favour incorrect interpretations of the observations.
I think that is in keeping with what your comment says about a frame of reference in General Relativity allowing a range of physical observations.
Right, so what was groundbreaking about General Relativity is that it challenged the Newtonian axioms (assumptions) that there's a single universal clock, and that all objects within the universe are effectively rigid and exist in something resembling Euclidean geometric space, and all move forward through time at the same speed. Newtonian physics explains many things very well, but couldn't explain other phenomena.
Going from the observation that the speed of light is constant, regardless of how fast the light emitter is travelling relative to you, he made that the unbreakable assumption, and made the shape of spacetime flexible to always satisfy a constant speed of light. This theory was then confirmed when the light of a distant star was observed to bend when travelling through the strong gravitational field of our sun during a total solar eclipse.
Therefore the physics described by General Relativity have greater predictive power.
Quantum physics can also predict everything in general relativity, but doing so is a lot more complicated than using general relativity. However, Quantum Physics can explain things that happen on small scales that General Relativity cannot. Quantum Physics has greater predictive power, but it's more convoluted. Like Epicycles. Einstein didn't like quantum physics and spent a great deal of time trying to debunk it, but, well, he couldn't.
This is all to point out that one should not confuse predictive power with complexity. Ockham's Razor is a rule of thumb that prefers "simpler" explanations for things. But the predictive power of the two competing theories must be equal for that to apply.
Thanks, I didn't know about Einstein and quantum physics. I'll have to read a bit about that, it sounds interesting.
My original comment is grounded in an assumption that predictive power is not enough to identify a theory as correct, and neither is simplicity. There's nothing to stop any number of theories from having the same predictive power and the same kind of complexity. Sometimes, it's just very difficult to choose one above the others.
Did I come across as confusing predictive power with complexity?
EDIT: it's interesting you bring Occam's razor up. It's part of what I'm studying, in the context of identifying relevant information in (machine) learning. There are mathematical results (in the framework of PAC-learning) that say that, basically, the more complex your training data, the more likely you are to overfit to irrelevant details. At that point, you have a model that explains observations perfectly well, but is useless to explain unseen observations (the really unseen ones - not those pretending to be unseen for the purpose of cross-validation).
...iiish. The result is that large hypothesis spaces tend to produce higher error. But, the size of the hypothesis space in statistical machine learning depends on the complexity of the data, as in the number of features. Anyway, I'm fudging it some. I'm still reading up on that stuff.
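As a toy illustration of that overfitting point (a made-up example, not from the PAC-learning literature): fit polynomials of increasing degree to a handful of noisy points and compare the error on the training data with the error on genuinely unseen data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.2, n)     # true signal plus noise
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)                   # the "really unseen" observations

# A richer hypothesis class (higher degree) drives training error toward zero
# while the error on unseen data blows up.
for degree in (1, 3, 9, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```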
> Quantum physics, can also predict everything in general relativity
Unfortunately, the two theories, while both being extremely successful and accurate in their predictions, are incompatible with one another. Quantum Field Theory has successfully combined Quantum Mechanics with Special Relativity, but that is all.
We need to be specific here though: they are compatible at low energies. They only become incompatible at very high energy states, like those shortly after the big bang, and those we can't produce easily in particle accelerators.
Which is to say: they break under conditions very unlike the every-day universe, which is important but also indicative that they are not that broken.
The incompatibility is important though, because if there are any more card tricks we can do with physics to do interesting things, somewhere in that bit of incompatibility is where we must find them.
I think your "I'm not that kind of geek" comment came off as condescending. It also makes you sound like you're not really interested in understanding the other side's argument.
Sounded more to me like just a "that's outside my area of expertise so I can't really contribute".
Also, "frame of reference" has a specific meaning in relativity but it also has a more general meaning regarding the framework within someone understands something. It's pretty clear from the context (imo) that this latter is what was meant in this comment.
Thank you, yes, that's what I meant - I don't know physics (well, very little), so the OP's comment left me confused and I didn't realise there's a technical meaning of "frame of reference". It doesn't help that, in the case of the theory of epicycles and the location of the Earth in space, the technical and colloquial term can mean the same thing.
"I'm not that kind of geek" is a bit of an in-joke so my bad for using it where the context is missing, but I thought it would work even so. The missing context is that a colleague used to tease me for my deplorable lack of a science background, although we did hit it off in terms of our fantasy and science fiction tastes. So, I was not the science kind of geek, although I was the science fiction and fantasy kind of geek.
I don't see that there is an argument - rather that a separate chain of discussion has started. It's fair to then say "well that's all fine but what I was thinking of was X" which is what I am reading. I think that there is a big difference between frames of Einstein (I don't understand these) and frames of reason and perception (I don't understand these either) but I do see that there are two different things!
Like I don't understand either Australia or Argentina.. but I know that they are not the same!
Except you can’t just go hide in your attic. Everyone is more risk averse than ever as the cost of housing, healthcare, and education relentlessly increase faster than wages.
Because they realized they were in this together, unionized, and not just demanded but fought for more rights and a better life balance. Since the baby-boomers, more women entered the workforce and used their "extra" income to buy things. That led to inflation of those assets - especially housing. Now people continue to push down labor rights, so they make even less than before.
I'm with you though, I find this to be extremely frustrating.
And more economically useless work is done than ever before by an enormous margin (advertising, CGI, video games, cosmetics, PR, social media addiction, etc.). Just briefly think about how many people produce nothing of real value (i.e. value that would actually reduce housing, food, and medical costs) and how many are simply trying to redistribute the demand of the general population so they can get theirs. Basically, there are fewer economic "sources" than ever and more "sinks" than ever.
It's almost as though much of that productivity on an individual level is not making it back to those individuals, and going somewhere else, perhaps to others.
Keynes famously predicted 88 years ago [1] that "the economic problem" would be solved 100 years thence (hence in 12 years), and assumed that people would only need to work 15 hours a week. But back then already he pointed out two issues:
* what will people do with their free time? "It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society. To judge from the behaviour and the achievements of the wealthy classes to-day in any quarter of the world, the outlook is very depressing!" He asked whether there might be a "general 'nervous breakdown'" of people "who cannot find it sufficiently amusing [...] to cook and clean and mend, yet are quite unable to find anything more amusing."
* While most of our needs would easily be fulfilled, he noted a second class of needs, jostling for relative status, that might never be: there are two classes, "needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they."
And he got somewhat overly optimistic then maybe:
"The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard.
Of course there will still be many people with intense, unsatisfied purposiveness who will blindly pursue wealth – unless they can find some plausible substitute. But the rest of us will no longer be under any obligation to applaud and encourage them."
Seems to me the massive inequality and concomitant struggle for relative status keeps most everyone working like crazy, even though we have many times more than what people had a century ago.
This is one of the best arguments for UBI. Imagine how much humanity would innovate in science, open source and other public gift economies if everyone didn’t have to worry how they will eat and pay rent next month.
How about reducing the generally accepted working hours to something like 24 hrs/week? 40 is pretty arbitrary so we could arbitrarily set a lower number. That would free up a lot of creativity.
The argument you are making assumes two things that strike me as highly implausible:
(1) That enough people would use their new free time to innovate instead of just watching TV or playing World of Warcraft or something like that.
(2) That enough people would still choose to do the unpleasant but necessary tasks that provide the resources needed for everyone to eat, have housing, etc. Not to mention all the other stuff people seem to want.
Right now a fairly tight circle of people decide for others how to spend vast amounts of time. It's not even very efficient, as many jobs carry bullshit hours requirements. So if, after increasing free time by a large amount, even a very small number of people start choosing different areas to innovate in, it still ends up as a large growth area for innovation, or social interaction, or whatever humans value individually instead of what old-line accounting values.
I don't dispute that our current system is inefficient. But the proposed change under discussion (universal basic income) does not just mean "increasing free time by a large amount" while keeping everything else the same. It means increasing free time by a large amount while removing the need for anyone to do productive work. It's the latter that I see as problematic, not the former.
What would a world where we did the former but not the latter look like? It would be something like everyone having to be, at least in some measure, an entrepreneur--everyone would be their own small business, having to figure out what product or service to sell to others in order to make a living, and having to decide for themselves what use of their time would best serve that goal. That might well, in the long run (i.e., after all the upheaval caused by people who were used to having someone else define their business objectives, now having to do it themselves, has died down), be a big improvement over what we have now. But the key incentive of having to make a living is still there.
I think you may be under appreciating the "basic" part of universal basic income, as well as the human drive to do better than their own baselines.
The "basic" part means that the level of income is generally set at an austere level. Think of living in the minimal existence of a monk. Some people would do fine with that, and would choose it, but the vast majority of people I believe would elect to work for more.
Humans over their history have always reached for more, including the most extremely wealthy, who basically have a capitalism-granted equivalent of UBI, but significant numbers of them still choose to work.
> The "basic" part means that the level of income is generally set at an austere level.
And all history shows that that level increases over time to a point where "minimal existence" is enough luxury to be unsustainable. This is by no means the first time that the option of the state doling out basic necessities to everybody has been considered. The Romans had their bread and circuses. Today it would be food stamps and cable TV and Facebook and Twitter. Same difference.
How many is "plenty"? What percentage of people who have enough money not to do those things actually do them?
My sense is that that percentage is pretty low. Yes, there are people like pg or sama who continue to work and add value even though they don't have to, and I think that's an admirable thing to do. But I think there are many more people who, once they have enough wealth to not have to work, stop working for good and don't produce anything after that.
How many people contribute to open source, wikipedia and debate stuff online?
A lot of it produces ZERO economic returns. But the fallacy is that we need top-down institutions to move things forward. I would argue that we are better off abolishing intellectual property laws as well and allowing everyone to contribute to open source drugs the way they do in other sciences.
Watch these two videos:
Drive: the surprising truth about what motivates us
Clay Shirky: Institutions vs Collaboration
We need more collaboration and less capitalistic competition.
I know firsthand the righteous indignation that anarcho capitalist libertarians have at “violence” being used to redistribute wealth.
But these same libertarians ignore all the coercion used on the other side. They seem to want people to be FORCED to work out of fear of losing food and housing. Some freedom for the masses - the freedom to work or starve.
And of course Property is a coercive institution just like government. It has to be enforced. So Disneyworld charges visitors entry fees and vendors rent and pays people to dress up like Mickeys and it’s top down and Libertarians are ok with that. Next door is a city that’s run democratically and what if they want to charge taxes and redistribute basic income, how is that any worse than Mickeys?
> the fallacy is that we need top-down institutions to move things forward
That might well be true. And it has nothing whatever to do with what I said. In fact, abolishing top-down institutions would, if anything, make it more difficult to have a scheme like universal basic income (the topic we're discussing here) at all.
> of course Property is a coercive institution just like government. It has to be enforced
Is the only thing preventing you from appropriating your neighbors' property the fear of enforcement?
Property rights are agreements. If it is a net gain for all parties to follow an agreement, they will follow it, even in the absence of coercive enforcement.
> Property rights are agreements. If it is a net gain for all parties to follow an agreement, they will follow it, even in the absence of coercive enforcement.
The same can be said about the social contract. Is the only thing preventing you from running red lights the fear of enforcement?
For many people, yes. That's the only thing. And we violate property rights in many ways, like peeing in a forest that may be "owned" by someone. Or by using an idea that may be "owned" by someone.
Property rights become "States" if the organization is large enough.
Property rights are basically monopoly rights to exclude others, by force if necessary, from the use of a resource.
Sometimes this exclusion actively harms wealth creation. Especially if the resource is a public good.
> Is the only thing preventing you from running red lights the fear of enforcement?
This is a very telling question. Of course the answer is yes--if you qualify "running red lights" to mean "running red lights when it is clear that it is not going to cause any harm or violate anyone's property rights". For example, it's very late at night, it's an intersection with clear visibility in all directions, well lighted, and there is obviously no one else in sight. In such a case, yes, the only thing preventing me (and probably any reasonable person) from running the red light is fear of enforcement.
But of course that's because any reasonable person has the common sense to know that running a red light under circumstances where it will clearly violate no one's property rights and cause no one harm is not a crime; it's just a violation of an administrative rule, which in practice is used as a revenue source by localities, not to improve traffic safety.
And of course any reasonable person will not run a red light if it would risk causing harm or violating someone's property rights. But in that case, it is not because of fear of enforcement; it's because reasonable people understand that harming others or violating their property rights is a net loss for everybody, including them, so they have a good, rational reason not to do it and would behave the same even if there were no enforcement.
> we violate property rights in many ways, like peeing in a forest that may be "owned" by someone
If this does no harm, how is it a violation of property rights?
> Or by using an idea that may be "owned" by someone.
Ideas are different because there is no such thing as exclusive "ownership" of ideas. Governments create "ownership" of ideas by making laws, but that doesn't make ideas the same as physical objects. If I take your car, I deprive you of it; we can't both have it. If I take your idea, you still have it; I can't deprive you of it. That is a key difference.
Across the range of people I know, from poor to wealthy, the percentage of people who do not work stays more or less the same in each bracket. Laziness and motivation are pretty evenly distributed.
One slightly cynical strategy I've heard is that your next grant proposal should be for the work you've already completed, giving you time to come up with the idea for the one after :)
Not really workable in practice. Are you going to lie in the project schedule section of the application? Nowadays the grant agencies expect to have Gantt charts and other crap. It also would require delaying publishing a work when it's finished, which then raises the chances of perishing...
What people however do is to smuggle more risky ideas into proposals for less risky ones. But that in the end means they'll be spending time also on low-risk things.
Probably works in mathematics and other theoretical sciences. If the way you use your funding depends on the particulars of what you're doing, then you're probably screwed.
And boredom and desperation. One thing is to have the rare talent and opportunity, another is to actually truly go for it. Most people lack the daring and rebelliousness to think that they can prove that all of humanity in all of history have misunderstood something truly fundamental.
I wonder if some rich people could start a form of charity where they just pay a few incredibly brilliant people a good salary to go do whatever research they want for the rest of their lives. Sort of like patronage. Just bypass the whole grant funding cycle and work on whatever you want, because you're a genius and want to study things other people don't like or understand.
Another issue with relying on charity is that there are often strings attached to the money donated. We really want to allow students, scientists, engineers, etc. to have agency, not be under some wealthy person's patronage.
A first problem is how to recognize these incredibly brilliant people. The current answer to this is the tournament to tenured professorship. If you select those who already succeeded in this system, you are not really doing anything new, so something different should be done, but it is not clear what.
A second difficulty is the "rest of their lives" part. It's quite hard to believe ROI would not be required when rich people are involved in some way or other. Charity is PR, and so the system will optimize for PR.
It's kind of like what happens with the people who invest in startups... the founders who fit the investors' conceptions of what brilliant people look like get funded. It's also a big problem in philanthropy and why many times it's better to have (even slow and inefficient) government depts with set criteria and review processes deciding who gets funding.
Something like this happens, actually. It's often in the clothing of small nonprofits with small teams of people who are brilliant, or who are basically the support department of the 1 or 2 funded geniuses. Question mark on the efficacy.
I think John Bell discovered his revolutionary theorem of quantum mechanics (Bell's theorem) in a similar way.
I can't find it now in a quick search, but I remember reading that he thought every physicist should devote something like 10% of their time thinking about the foundations of physics/quantum mechanics. (What would he do with 100% of his time?)
It's basically another form of premature optimization as it pertains to discovery and research. It stems from the misbegotten idea, permeating the last few decades, that cost-benefit accounting and ROI should drive the entire world (and from choosing the entirely wrong accounting model to apply uniformly on top of it).
As a scientist, raising grant money is by far the worst part of the job. The NSF regularly rejects proposals with three "very good" scores as not competitive enough. It isn't that they think the work is bad. It's just limited resources.
>And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.
>But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress.
The problem is, as I understand it: suppose some people locked themselves in their attic and worked on physics problems; how does society know that they're actually working on physics and not merely twiddling their thumbs?
The whole publication-review-credit-tenure-grant circuit was invented to address exactly that situation. In order to replace it, you need some other way of convincing the funding bodies that their money is actually paying for something.
If this theory about the grant system is right, then we should expect to see philanthropy-funded research pull ahead. Philanthropists will vary in how much they tolerate long-shot ideas, but will generally not tolerate a recipient sitting around doing nothing -- and even if they do, it's their own money.
The only question then is whether there is enough philanthropic research around. Are there, perhaps, 2000 different projects around the world getting something like 5M USD each from philanthropists? Or does the modern zeitgeist that assumes science funding is a thing for governments to do crowd it out?
>> And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.
That's just people being chicken. You know someone got to research why wombat poop comes out in cubes:
Well considering that we know everything pretty much about most of everyday physics, and new physics requires energies that are almost unreachable and billions of dollars in experimental investment and/or years and decades to collect sufficient data, it's not a surprise at all that foundations of physics progress has slowed. I don't think the physics community is to blame here. Just the nature of reality.
> Well considering that we know everything pretty much about most of everyday physics [...]
That's true and also not true. Yes, we have a working theory that explains essentially everything with unprecedented accuracy as long as you don't wander too far outside of everyday length and energy scales. But on the other hand we still do not really understand basic quantum mechanics almost a century after its discovery or at the very least there is no consensus about what the theory actually says. Quantum mechanics is not even self-consistent.
Sure, but there are some interesting and affordable tabletop experiments being done to probe certain features of QM (like Weak Measurement: https://www.hep.ucl.ac.uk/qupot/) and test fundamental assumptions. The work on that front is progressing, but when people think of "Laws of Nature" i.e. GR and the Standard Model, short of expensive facilities like new particle colliders or telescope arrays like LSST and Cosmological surveys, it's unlikely "new physics" is going to be jumping out of the woodwork any time soon.
Which is not to say people aren't trying. Conferences are held probing this exact question (e.g. can we come up with DM detectors that aren't just enormous tanks of cooled liquids?) and trying new strategies. It's not like the community isn't engaging in good faith with some of these proposals; it's also that we haven't had a new hint of where to look from a collider experiment since the discovery of the Higgs, despite everyone's best efforts. Yes, we need some experimental ingenuity to push through this frontier, but I also agree with OP that Physics is just being very stubborn against yielding any further secrets at present.
I'm really interested in noted inconsistencies in physics - since historically, major progress was achieved by the refusal of some to ignore an inconsistency - because they are often the last thing people talk about (as opposed to open questions). I know a couple that occupy my mind, but I am always looking for more such inconsistencies. It's in such a state of mind that I would like to know which problem you refer to in your last sentence, and not in the state of mind of bashing the perceived outsider.
Well, if you can't build your own super-energetic particle accelerator, borrow someone else's! In this case, that would mean telescopes pointed at jetting black holes. Those bad boys are very high energy and should be making some really exotic particles. Granted, the resolution we need isn't really there yet either, but I think that making a system-spanning telescope is a bit easier than a system-spanning accelerator. Still, you're right, the next (theoretical) energy regimes are much higher than what we can manage to build today. We're forced to look at the stars for 'inspiration' for the foreseeable future. This particular golden age of physics has passed.
I agree, although I think it could be argued that a logical outcome of this fact, is that most (although hopefully not all) of the physicists should go work on other things, like maybe the physics of planet formation, where we are getting new data all the time. Or maybe in areas outside of physics altogether, like the social sciences, where again we have orders of magnitude more data than we once did. If the low-hanging fruit is all gone, move on. That's hard news if you've sunk two or three decades into becoming an expert in a particular niche.
This does actually happen already. I'm not sure how widespread it is, but I've noticed it in a couple of Condensed Matter Theory groups. For example Imperial's group[0] do research on Complexity and Networks including the "application of these principles to a variety of stochastic phenomena, ranging from ant colonies to cardiovascular biology, from sandpiles to earthquakes".
I studied under a very smart professor who believed that somehow tabletop particle physics would be a possibility. See http://science.sciencemag.org/content/357/6355/990 . Maybe the HEP experiment consortium is just a global grant conspiracy. Wake up sheeple!!
This isn't true though. They said the same thing over a century ago, yet if you took those people and plopped them into the present they'd be unable to explain such an important everyday technology as GPS, because it requires an understanding of general relativity to work.
We don't know what we don't know. It's very likely that more discoveries will result in obvious changes that are visible in everyday life, like GPS.
My bet is on quantum mechanics. There's a lot there we don't understand or don't know how to make use of quite yet that nevertheless seems likely to have a very obvious effect on everyday life in the centuries to come.
> "they'd be unable to explain ... GPS, because it requires an understanding of general relativity to work.
It requires GR to make it work, but not GR to understand how it works. I.e., GPS would work fine -- in fact, be easier -- in a world without GR. The clocks on board the satellites depend on QM, but the idea of an accurate clock is easy. Same with orbits -- staged rockets came late, but orbital mechanics goes back a long way. Transistors are new, but amplification is old.
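For anyone curious about the size of the effect being discussed, here is a rough estimate (approximate constants, circular-orbit assumption) of the relativistic clock drift GPS has to correct for:

```python
# GPS satellite clocks gain from weaker gravity and lose from orbital speed;
# the net drift of roughly +38 microseconds/day would wreck positioning fast.
GM = 3.986e14            # m^3/s^2, Earth's gravitational parameter
c = 2.998e8              # m/s
R_earth = 6.371e6        # m, receiver on the ground
r_sat = 2.656e7          # m, GPS orbital radius
v_sat = (GM / r_sat) ** 0.5          # circular orbital speed
day = 86400.0

gravitational = GM / c**2 * (1 / R_earth - 1 / r_sat)    # satellite clock runs fast
velocity = -v_sat**2 / (2 * c**2)                        # satellite clock runs slow
print(f"gravitational: {gravitational * day * 1e6:+.1f} us/day")   # ~ +45.7
print(f"velocity:      {velocity * day * 1e6:+.1f} us/day")        # ~ -7.2
print(f"net drift:     {(gravitational + velocity) * day * 1e6:+.1f} us/day")
```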
The only guarantees made about the LHC were that it would prove or disprove the existence of the Higgs Boson.
Believe me, experimental physicists are desperate to find the slightest deviation from the standard model. I spent 2 years on one such 'stab in the dark' rare decay analysis!
The SM's predictions have been tested to a rigor unparalleled in history. It predicts stuff like the masses of the W & Z bosons and the fine structure constant, the measurements of which exceed an accuracy of 1 part per billion in some cases.
I was getting it jumbled up with the magnetic moment of the electron, which is predicted by the SM (well, the QED part anyway) to be slightly different to the 'classical' prediction.
Experimental measurement of this is accurate to one part per billion, and is consistent with the QED prediction.
The electroweak bits of the SM predict the W & Z bosons, along with their masses, which have also been measured to around 1 part per 10,000, and match SM predictions.
EDIT: last but not least, the Higgs Boson was also predicted by the SM, with a ballpark figure for its mass, and other properties (how often it decays into photons, W bosons, quarks etc). So far all measurements of these properties are consistent with SM predictions.
Correct me if I'm wrong, but the standard model makes no prediction for the masses of any of the fundamental particles. To my knowledge, they are all input parameters. It might put certain bounds on them, but in the end they are all not predicted by the standard model itself.
This is seen as one of the big issues with the standard model, that it does not actually explain a lot of the characteristics of the fundamental particles like their couplings and masses.
For the W & Z bosons, their masses are derived from 3 other 'free parameters' of the SM.
The Higgs mass is indeed a free parameter, but the SM wouldn't work if its mass was greater than 200 GeV or so. The Higgs interacts with other particles of mass, and the strength of interaction is proportional to the Higgs mass; it influences certain processes (like W boson scattering), and the rate at which these happen would deviate from experimental observation if Mhiggs was over 200 GeV.
That's why the LHC was such a big deal, it reached the energies required for direct observation of sub-200 GeV Higgs, so it would either find the SM Higgs, or rule it out and invalidate the SM. Unfortunately the former seems to have happened.
Indirect constraints on Higgs mass in SM (a bit technical, slide 6 chart is the key one, strongly influenced specs & mission of the LHC)
https://indico.lal.in2p3.fr
It comes from the magnetic moment of the electron being measured and then run through more than 10k Feynman diagrams. The second part wouldn't work if QED didn't.
I can't tell whether you are agreeing or disagreeing with me. Let's make it easier. Do you agree or disagree with the following statement: The value of the fine-structure constant is not predicted by any physics theory.
Given what? If a theory gives you a well-defined relation between the magnetic moment of the electron and the fine structure constant, you can measure either one and then compute the other one. Which one is "predicted" is just a convention.
Eq. (13) in [1] is a prediction of the electron's magnetic moment given the fine structure constant. Eq. (15) in [1] is a prediction of the fine structure constant given the electron's magnetic moment. For the purposes of that paper (testing QED) it turns out to be more convenient to use Eq. (15).
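Just to make the "predict either one from the other" point concrete, here is a deliberately crude sketch that keeps only the leading Schwinger term a_e ≈ α/2π (nothing like the thousands-of-diagrams calculation behind Eqs. (13) and (15) of [1]); even this first term lands within a few tenths of a percent either way:

```python
import math

# Horizontal prediction at leading order: QED relates the electron's anomalous
# magnetic moment a_e to alpha, so a measurement of either predicts the other.
alpha_measured = 1 / 137.035999          # fine structure constant (measured)
a_e_measured = 1.15965218e-3             # anomalous magnetic moment (measured)

a_e_from_alpha = alpha_measured / (2 * math.pi)     # predict a_e given alpha
alpha_from_a_e = 2 * math.pi * a_e_measured         # predict alpha given a_e

print(f"a_e:     predicted {a_e_from_alpha:.6e}, measured {a_e_measured:.6e}")
print(f"1/alpha: predicted {1 / alpha_from_a_e:.3f}, measured {1 / alpha_measured:.3f}")
```

Which direction you run the relation in is exactly the "convention" point made above.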
> There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly! — Richard Feynman, Richard P. Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. p. 129. ISBN 978-0-691-08388-9.
I have never seen a physicist put that particular number on a wall (these days, the cosmological constant would be a better bet). I am vaguely aware of numerological attempts (some collected in [1]) to "explain" why 137 is "special", none of which has ever led anywhere.
As far as I can tell, the fascination with it got started by the number being close to an integer, and maybe the remark in [1] that "In ancient Hebraic language letters were used for numbers, and Cabbala is the word corresponding to 137" played a role. But we know that it isn't an integer, and that it runs [2] like any coupling constant in QFT, so at best you could marvel about it taking on some particular value at some particular interaction energy, which would mean... what? I dunno. As Feynman also said [3],
You know, the most amazing thing happened to me tonight... I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!
Your latest comment seems to be replying to someone with a fascination with the specific number 1/137, which I do not have, and some related tangents. I'll try to refocus the points of contention between us.
This whole thread started because the top comment said that a physics theory predicted the value of the fine-structure constant. Which is wrong, as the fine-structure constant is one of fundamental constants of the universe and one _whose value is not predicted by any theory_.
At this stage, two claims are in tension.
The first is your claim, that QED predicts the fine-structure constant once you measure the magnetic moment of the electron using several thousand Feynman diagrams.
The second claim is Feynman, himself, literally writing in his book titled _QED_, that we have no idea how to predict the value of this constant.
Could it be that Feynman overlooked the fact that he, himself, predicted the value of the fine-structure constant? He thought that not being able to predict its value was such an unsolved problem as to call it "one of the greatest damn mysteries of physics"? That "all good theoretical physicists put this number up on their wall and worry about it"?
The long and the short of it is that I think you're missing something much deeper. Yes, _once you measure_ something that has a tight coupling with the fine-structure constant, you now know the value of the fine-structure constant. But, before you made that measurement you DO NOT KNOW and further CANNOT PREDICT the value of the fine-structure constant. If you could, you'd be able to claim your own Nobel prize.
> This whole thread started because the top comment said that a physics theory predicted the value of the fine-structure constant.
Yes.
> Which is wrong
No. I even provided you with the reference to the paper in question. Did you even try to read it?
> as the fine-structure constant is one of fundamental constants of the universe and one _whose value is not predicted by any theory_
This is where you go wrong, and where you misunderstand Feynman's point.
The correct statement is that all respectable theories of physics (to date), including the Standard Model, include an irreducible number of values which must be plugged into them "by hand". In other words, once you've written down your theory, there are some parameter values in it about which the theory itself gives you no guidance; you could give them different values, and the theory would still work. It would just be describing a universe with different properties than ours. In order to make it describe our universe, you need to get those values from experiment.
Feynman's point is that we don't have a theory without at least some such parameters (even string theory, which he disliked, has the string tension, and it skirts the need for more by randomly picking a vacuum, which sets the values of low energy theory "constants"). It is not that the fine structure constant has some particular status as "more fundamental" than others.
The choice of constants which you can determine by experiment is constrained by the theory, but generally not locked down completely; you can choose your set of constants, as long as they are independent, i.e. as long as measured constant #1 can not be determined by plugging measured constant #2 into the theory and doing some calculation. If constant #1 can be computed given the theory and constant #2, then they are not independent, and the choice between them is arbitrary; a convention.
The choice between fine structure constant and magnetic moment of the electron is one such arbitrary choice. Given one of them and the Standard Model, you can compute the other. And it turns out that it's actually more convenient to do it this way: measure the magnetic moment, then compute the fine structure constant. There is no reason at all to regard the fine structure constant as more of an "input to the model" than the magnetic moment, as you claimed at the start of this thread.
Needless to say, Feynman knew all this perfectly well. You are just taking away the wrong message from an attempt to popularize the topic. "QED" was a popular book, not a graduate text.
> The correct statement is that all respectable theories of physics (to date), including the Standard Model, include an irreducible number of values which must be plugged into them "by hand". In other words, once you've written down your theory, there are some parameter values in it about which the theory itself gives you no guidance; you could give them different values, and the theory would still work. It would just be describing a universe with different properties than ours. In order to make it describe our universe, you need to get those values from experiment.
100% yes.
> It is not that the fine structure constant has some particular status as "more fundamental" than others.
Agreed, as I wrote above "as the fine-structure constant is one of fundamental constants of the universe", it's in a class of fundamental constants (or one of the irreducible values to be plugged in to use your phrasing).
> If constant #1 can be computed given the theory and constant #2, then they are not independent, and the choice between them is arbitrary; a convention.
Sure. You can choose one input over the other once you have made at least one other measurement.
Are you arguing something like, "there are X irreducible inputs to the Standard Model. For any particular input, a, you might be able to swap it out for a different one, g, so that you still have X irreducible inputs, but now they are a different set. Therefore, because we swapped a for g, a is not a fundamental constant"?
1) > ...physicists ... reserve the use of the term fundamental physical constant solely for dimensionless physical constants that cannot be derived from any other source.
2) > Fundamental physical constants cannot be derived and have to be measured.
And 3) its classification of the fine-structure constant as a fundamental physical constant?
> Are you arguing something like, "there are X irreducible inputs to the Standard Model. For any particular input, a, you might be able to swap it out for a different one, g, so that you still have X irreducible inputs, but now they are a different set.
That's part of what I'm saying. Some trivial examples from the Standard Model are the choice of angles you use to parametrize the CKM and PMNS matrices, the Weinberg angle vs the electroweak gauge couplings, and the scale at which you choose to fix those couplings.
Maybe it will help to call the prediction of values for one such set of parameters from the values of another such set of parameters a "horizontal prediction": you have one theory T, a value A_a of some parameter A, and you predict a value B_b of some other parameter B: B_b = T(A_a). It is "horizontal" because A is no more fundamental than B; you could equally well use T to predict A_a from B_b.
y = T(x) is of course the general form of any prediction of anything at all from theory T.
The reason you saw fit to "correct" walrus1066 is that you implicitly expanded "prediction" to "prediction from a more fundamental theory". That's too long to write, so I'll call it a "vertical prediction": you have a more fundamental theory F with some set of parameters A and a less fundamental theory L with some set of parameters B, and you predict B from A using F: B = F(A). It is "vertical" because F is more fundamental than L.
How do we know that F is more fundamental than L, and not just an equivalent description of the same theory? That's easy: because the set A is smaller than the set B. :)
walrus1066 mentioned a prediction of the fine structure constant, and he was right; that's what's done in [1] (I'm pretty sure he was remembering that paper, but not the exact reference; who does?). It's a horizontal prediction. Like all proper predictions, it only works if the theory works, so it is a perfectly valid test of the theory (the topic of his post).
You saw "prediction" and expanded it to "vertical prediction", but that was never mentioned or intended.
I do not take issue with the full phrasing of it, which you snipped out. The complete sentence is
Other physicists do not recognize this usage, and reserve the use of the term fundamental physical constant solely for dimensionless physical constants that cannot be derived from any other source.
In other words, there is no consensus about whether dimensional quantities can be called "fundamental physical constant". The reason is obvious: once you've settled on a system of units (if you are doing fundamental physics, presumably natural units [2]), you can always turn any dimensional quantity into a dimensionless one combined with a fixed dimensional factor.
I can imagine a parallel to this thread in that context: Somebody posts "the mass of the electron is a fundamental constant of the Standard Model", you reply "no it's not, it's dimensional, so it's not fundamental", and I end up writing a long post explaining that you can factor it into a dimensionless Yukawa coupling and a dimensional Higgs expectation value, so it's really fine to call it fundamental even by your definition (i.e. we do not currently have a more fundamental theory which predicts the mass of the electron, unless you are happy with it being a random value).
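For concreteness, the factorization I mean is the usual one (rough numbers quoted from memory, so treat them as approximate):

    m_e = y_e · v / √2,   with v ≈ 246 GeV   ⇒   y_e = √2 · m_e / v ≈ 3 × 10⁻⁶

The dimensionless Yukawa coupling y_e is just as much a free parameter to be measured as the dimensional mass m_e was; rewriting it this way does not make anything more predictable.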
Regarding this part of your question,
> Fundamental physical constants cannot be derived and have to be measured.
I have no problem with the first part of that sentence (can't be derived; that would require having a more fundamental theory), but the "have to be measured" part is subject to interpretation. If you take it to mean directly measured, it's really too restrictive (just have a look at what really goes into determining the properties of short-lived elementary particles). If you allow for measuring some quantities and then performing a bunch of calculations of the general form of a horizontal prediction (the only kind possible within the confines of a single theory), then fine.
As for "classification of the fine-structure constant as a fundamental physical constant", I have no problem with it (at the current state of knowledge).
While I don't like this ~guy's~ woman's disdainful, superior tone, I do think his complaint has merit. Physics is notoriously bad at having curiosity about strange, unconventional, or truly novel ideas. I just finished reading What is Real? by Adam Becker-- a history of modern physics-- and am astonished at how aggressively physicists resisted and suppressed "unconventional" interpretations of quantum mechanics (like the many worlds interpretation) in favor of the obviously-wrong Copenhagen interpretation. There was a taboo for decades around even discussing quantum foundations, and people's careers were ruined simply for trying to publish papers about it.
Physics got stuck for a short while on the understanding of QM, and then promptly went into sour grapes mode and decided that it was meaningless to ask any deep questions about what QM actually meant. Since then it has been focused on mathematical formalisms and smashing particles instead of deep questions about what it all means.
The stagnation is real, and it's the physics community's own fault.
It is "her" complaint, not "his". This is Sabine Hossenfelder, a well known theorist (mostly for her outreach/popularization work).
And you are misrepresenting this "taboo" you are talking about. Thinking about what quantum mechanics "means" is how we got breakthroughs in quantum computing and theoretical computer science. Similarly, there is plenty of exciting deep theoretical work in particle physics beyond the smashing-particles-together type of experiments.
Oops, my mistake on the gender- Sabine is not a super familiar name. (Still dislike the tone, though; don't care how famous she is).
I also think the taboo has lifted a bit, as it's now possible for mainstream physicists like David Deutsch and Sean Carroll to build their careers on quantum foundations work. But I still think the physics community has a lot of baggage from the second half of the twentieth century to let go of.
> Another comment-not-a-question I constantly have to endure is that I supposedly only complain but don’t have any better advice for what physicists should do.
> First, it’s a stupid criticism that tells you more about the person criticizing than the person being criticized.
To me, these feel closer in tone to a "personal attack" on her critics than to a discussion of their ideas.
There is also discussion of ideas in the article, which I have no problem with. But snippets like that feel like unnecessary salt that doesn't add anything. While her critics are (I agree, mostly) wrong, there is no reason to call them stupid.
I think it's one of the classic logical fallacies: "you can't disagree with me unless you have a better idea." It's a false dichotomy, in the sense that I can think you are wrong without offering further explanation. Unless there's experimental evidence, of course, but most of this is opinion-based, so it's not open to that sort of progress.
Maybe we're short on those precisely because the strange ideas aren't given enough attention. Bell's Theorem was a landmark result that didn't come out of the Copenhagen interpretation, but because John Bell was fascinated by the de Broglie-Bohm interpretation.
The point is, unconventional perspectives give you new ways of looking at a problem that can yield new experiments.
That could be the case. I think the bigger issue is that many unsolved questions today are difficult to explore without expensive, highly-sophisticated equipment and larger teams of people.
>Physics is notoriously bad at having curiosity about strange, unconventional, or truly novel ideas.
Physics of today would be unrecognisable to a scientist at the start of the 20th century. Indeed the physics of 1935 would be unrecognisable to a scientist at the start of the 20th century. Our understanding progressed enormously AND it did so because people put forward radical theories in complete rupture with the established, 300-year old classical mechanics. There was no "traditionalist resistance" to it after it became apparent that classical mechanics failed to explain phenomena that quantum theory did explain.
>am astonished at how aggressively physicists resisted and suppressed "unconventional" interpretations of quantum mechanics (like the many worlds interpretation) in favor of the obviously-wrong Copenhagen interpretation [emphasis mine]
Ahahaha, you clearly have no idea what you're talking about. There is nothing "obviously wrong" about the standard Copenhagen interpretation (unless you have some new insight you would like to share), nor was there any suppression of ideas. Many debates have been waged in the past ~100 years, and many alternative interpretations have been put forward, like Bohmian theory, superdeterminism, or "shut-up-and-calculate".
>decided that it was meaningless to ask any deep questions about what QM actually meant
Physics, indeed all science, studies observable reality. Any "deep" questions about why or about things not measurable, quantifiable, or empirically observable, are by definition outside the scope of science. It is therefore as unreasonable as complaining about why don't geologists study epistemology. The answer is the same: it's outside the scope of their study.
I also don't like that your phrase seems to imply that what physicists study is not "deep" as opposed to your philosophical questions. There are many deep and beautiful ideas in physics.
> Physics, indeed all science, studies observable reality. Any "deep" questions about why or about things not measurable, quantifiable, or empirically observable, are by definition outside the scope of science.
This is wrong. Scientific theories provide predictive power, yes, but they should also provide explanatory power.
Which is to say, they should describe a model of the world because this is how we devise experiments, and experiments are clearly critical for science.
Two theories with equivalent predictive power but unequal explanatory power are not equally good theories.
> There is nothing "obviously wrong" about the standard Copenhagen interpretation
Can you tell me how to calculate whether a system of particles will cause another system of particles to collapse or not?
Can you tell me under what circumstances a system of particles will evolve unitarily or not?
Can you shade the region of the spacetime diagram of EPR where the wavefunction is collapsed? How about in a delayed choice quantum eraser experiment?
If you tell me "you're not allowed to ask those questions" (or "hm, I never thought of that!"), then you're directly illustrating the complaint here about physics!
The common narrative is that Copenhagen, many worlds, and the other interpretations of QM are all equivalent, but they are not. Copenhagen adds an extra physical event-- collapse-- where the wavefunction suddenly evolves non-unitarily. The burden is on Copenhagen to tell me how and when this happens, and in fact to prove that it happens at all. Many worlds, on the other hand, predicts the same phenomenon-- the apparent collapse of a wavefunction to an eigenstate-- without adding unitarity violation, or any other phenomena at all beyond the normal, extremely well-verified mechanics of multi-particle system scattering; it merely treats the environment as a multi-particle quantum system.
Many worlds is the null hypothesis (no, the extra worlds are not extra suppositions, they are predictions of known physical laws), and the burden is on Copenhagen to show that unitarity violation exists, and the burden is extremely high (possibly insurmountable?) for EPR and eraser experiments.
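To spell out the contrast in the barest possible notation (my own sketch of the two pictures, not a quote from anyone):

    Before measurement:  |ψ> = Σ_i α_i |i>
    Collapse picture:    outcome k occurs with probability |α_k|² and the state jumps to |k>; the other α_i are discarded (a non-unitary step).
    Unitary picture:     Σ_i α_i |i> ⊗ |apparatus ready>  →  Σ_i α_i |i> ⊗ |apparatus reads i>; every term is kept, and the apparent collapse is just a statement about which term the observer ends up entangled with.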
When Copenhagen was devised, most believed that there was some "underlying state" in QM, and that measurement told you something about what the underlying state was. Bell's theorem should have sent a shockwave through the community which forced everyone to reevaluate fundamental assumptions and self-correct wrong ideas. But by accident of history, John Bell was too shy to publish in a major journal, and no one even read his result for four years. The implications of Bell's theorem were slow to diffuse through the community, and so the disruptive moment of reckoning that should have happened never came. By the time it was well-accepted, the narrative in physics had become "you're not allowed to ask about what quantum mechanics means, just shut up and calculate", and so the cognitive dissonance was cast aside. Annealing this attitude and this misstep away is taking excruciating decades.
EDIT: I also don't mean to imply that there aren't deep and beautiful things in modern physics. There are! But physics is concertedly avoiding asking deep questions in areas where (I strongly believe) it is most important. The claim that "the meaning of QM is outside the scope of science" is exemplary, I think, of that attitude.
>Can you tell me how to calculate whether a system of particles will cause another system of particles to collapse or not?
You appear to be somewhat mistaken. The Copenhagen interpretation does not postulate any specific explanation or mechanism for wave-function collapse, merely that "upon measurement, the wave-function collapses into an eigenstate of the observable being measured". This, of course, is a physically verified phenomenon. Now, Copenhagen purposefully leaves the precise meaning of "measurement" undefined, since in my view there is no convincing empirical evidence that supports a specific mechanism for this phenomenon. Other interpretations posit mechanisms for this collapse (decoherence (doesn't fully account for it), von Neumann "consciousness" (not empirical), etc.).
My biggest complaint about many-worlds interpretation is how it is in its essence a non-scientific theory, as it makes assertions about an unobservable reality. It postulates other parallel realities that by definition do not communicate. Again, this makes it intrinsically a non-scientific theory.
>Can you tell me under what circumstances a system of particles will evolve unitarily or not?
Everywhere except on measurement, in which the state collapses.
>How about in a delayed choice quantum eraser experiment?
I'm not familiar with this experiment.
>If you tell me "you're not allowed to ask those questions"
By all means, ask as many questions as you want. That's after all the essence of scientific endeavour. But it's not very nice to misrepresent other positions, nor is it to claim everyone else is deluded (without strong evidence on your side at least).
>Bell's theorem should have sent a shockwave through the community which forced everyone to reevaluate fundamental assumptions and self-correct wrong ideas.
Bell's inequalities force no re-evaluation. They simply prove that a local hidden-variable theory is impossible. It certainly raised important ideas for research, but it does not do anything to discredit established QM.
>The claim that "the meaning of QM is outside the scope of science" is exemplary, I think, of that attitude.
If a physical argument can be made about this problem, then it's in the scope of physics. Otherwise, it is not. As simple as that.
> This, of course, is a physically verified phenomenon
No it's not. There are interpretations in which collapse does not exist, so these experiments are not measuring what you think they're measuring unless you already assume the conclusion.
Also, your views on other interpretations of QM are outdated. Arguably one of the more famous results in quantum mechanics, Bell's theorem, would never have existed if not for Bohmian mechanics.
Don't discount the value of explanatory power. The fact that other interpretations provide far more explanatory power than Copenhagen makes them far more valuable. Many important results in quantum foundations would have never happened if everyone were a Copenhagenist.
And it is still explained by the Copenhagen interpretation and there are no problems with it. It's like all those SR paradoxes that sound interesting but don't disprove the theory — just the fact that you don't fully understand it.
The major interpretations of QM are all similar in that they are not testable and don't really affect the results. It's metaphysics.
Also, all the non-MW interpretations say "and then choose a random outcome" without any clue as to how randomness might occur. MWI can propose an actual mechanism for randomness: sampling a space of alternatives, aka the Sleeping Beauty approach. To me this is pretty close to a knockdown argument in favour of MWI.
I am not sure that paper knows it, but it is essentially showing that many worlds is correct-- that the wavefunction never collapses:
> the implications on conditional probabilities hold for other measurements throughout the entire spacetime, present and past. [emphasis theirs]
And the math in the paper which supports that statement works by keeping all the superpositions around and allowing us to project to them at any time. This is the picture of many worlds! Copenhagen says the opposite: When you make a measurement, the unmeasured superpositions go away. The paper confirms, that's nonsense! If you shade any region of the spacetime diagram where the superposition is gone ("collapsed"), you'll be wrong!
The author says: "Note: in this work, we will not make any reference to why this (apparent) collapse occurs. Not only is this a much harder issue, it is simply not relevant to discussion we will present."
Also: "It is important to note that arriving at our conclusions did not require introducing new physics. We only relied on elementary quantum mechanics: not on novel ‘backwards time’ concepts, nor on any particular interpretation: we only used the Born rule ‘as is’. [...] With the remarks and intuition presented here, there really is no mystery whatsoever in any of the discussed experiments."
This experiment is no more mysterious than the rest of QM, but of course if you find the whole theory unsatisfactory you won't be satisfied with this explanation either.
That language is why I think the authors aren't aware that they are advocating many worlds.
Also, this is a fantastic example of the "don't ask questions" attitude I think is so shameful. If they had bothered to take the small step of asking, they'd have come to a clear, evidence-based refutation of Copenhagen! There are regions of spacetime after the measurement, where the wavefunction is not collapsed, which the authors explicitly point to. That's in direct contradiction to the premise of Copenhagen.
The point is that the order of the measurements is irrelevant: QM predicts the same probability distribution of outcomes. So the problem can be solved using either order and the response is the same. I am not sure what premise of Copenhagen is being contradicted.
Edit: solving the problem in the "natural" order would give the same probabilities but is much more difficult to handle. You need to get a probabilistic outcome for the position of each single photon on the screen, which "collapses" the state and determines the wave function of the other photon. At this point, the first photon has been detected somewhere but it can only be labeled as "interference" or "not interference" later after the detection of the second photon. The probability of being labeled as "interference" or "not interference" does depend on the position (because the collapsed wave function depends on what the outcome of the previous measurement was). When everything is said and done, looking at the subset of events labeled "interference" there is an interference pattern and looking at the subset of events labeled "not interference" there is not an interference pattern. There is no mystery.
> I am not sure what premise of Copenhagen is being contradicted.
Before measurement, the state of the particle system is Σ α_i|i>. Copenhagen says, "after you measure the system, all but one of the α_i go to zero". The authors don't do that, and in fact say that you can't. Unless I am misunderstanding something, they keep all the α_i around at all times and project the measurement for each particle separately, regardless of whether it has been (or will be) measured somewhere else. There is nothing philosophically or physically wrong with this, as they point out, but it is different than what Copenhagen says you should do when you measure something.
And if you look at what it means (which they refuse to do), you'll see that after Alice makes her measurement, Bob's quantum state is still explainable in terms of a superposition. When is the measurement (joint or individual) "finished"? Answer, per the authors: Never. The superposition permeates spacetime. That's how we escape the need for a causal connection between Alice and Bob's measurements, and, naturally, that's how many worlds does it.
(And if we take it further and ask, when Bob makes his measurement, "What is the state of Alice's particles?" we'll see that she is in a superposition of being entangled with each of the superpositions of the particle, which remains in superposition before, throughout, and after our ~measurement of~ entanglement with it).
Please see what I added to my previous comment. You could in principle do it in the "right" order and you would get the same result. They routinely solve quantum optics problems using standard QM and the experimental results match the predictions.
What he shows in that paper is that the order of the measurements is irrelevant [1]. So he solves the problem in the reverse order, where it can be done easily by writing down a few quantum states [2].
Note that QM is not about causal connections, it is about correlations. Once a pair is correlated, the correlation may appear when measurements are done. But it's not that observing one outcome here and now "causes" a particular outcome there and later (or earlier). One doesn't need to keep superpositions around in order to think in "correlation" terms (instead of "causal" terms).
[1] Using the projection postulate in the derivation: "Say we have indeed measured on B and got O_B = b_J. The state then collapses onto ..." He concludes: "None of this looks very surprising, but we want to stress that the total probability to find O_A = a_I and O_B = b_J does not depend on the place or time at which the measurements occur."
[2] He also uses the projection postulate here in the usual way: "So the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration. Then, if the idler is detected at D4 for example, the above state ‘collapses’ onto ..."
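For what it's worth, the order-independence he stresses follows from one line (my notation, not the paper's): for measurements on separate subsystems the projectors commute,

    P(O_A = a_I, O_B = b_J) = || (Π_{a_I} ⊗ Π_{b_J}) |ψ> ||²,   with [Π_{a_I} ⊗ 1, 1 ⊗ Π_{b_J}] = 0

so projecting first on A and then on B, or the other way around, gives the same joint probabilities; "collapsing" after the first measurement is just conditioning on its outcome.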
Yep, I'm with you that we don't need to bring causality into it.
And yes, you don't need to ask "when does the wavefunction collapse" to manipulate joint probabilities of measurements that happen at disparate locations and times. In fact, that's my objection: If you do ask, you find that there is no consistent answer! (And I suggest it's because wavefunction collapse is not a thing the universe does).
Re: "interference tagging"-- do you have a link to some material? (I'd love to understand something specific before commenting).
EDIT: Also, I didn't see Appendix B at first-- The authors do understand and even advocate the Everettian view! Though I still don't quite understand/agree with their earlier timidity about finding and interpreting conflicts with Copenhagen.
> If you do ask, you find that there is no consistent answer!
I'm not sure what the problem is. Why do we have to "ask" if the answer doesn't really matter? What answer more consistent than "it doesn't really matter" would you like? Anyway, (standard) QM is a non-relativistic theory; QFT may be more satisfactory from that point of view.
Re: "interference tagging" - what I mean is that first you detect the photons and later check if they "did happen" to go through two slits (interference appears) or one slit (no interference). But the interference pattern is not visible for a single photon and at that point the individual events are still a superposition of both possibilities (so for the events at a certain position part of them will be in the end identified as coming trough one slit and some of them from both). Only after the second measurement is done you know how to group the previoulsy recorded events to see the interference. It's not that the later measurement causes interference to appear. Or at least it doesn't affect at all where the photons were detected, it just lets you know how to group the existing events to make it apparent (selecting only those where, once the full mesurement on the pair has been done, the path taken remains uncertain).
If all the events are taken together there is no interference pattern. But when they are grouped according to where the second photon is detected in two cases there is still no interference but in the other cases complementary interference patterns appear.
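Here is a minimal numerical sketch of that grouping, using a toy two-slit model I made up for illustration (not the actual setup or notation from the paper). The unconditioned screen pattern shows no fringes either way; grouping by the idler outcome in the which-path basis gives two single-slit blobs, while grouping by the idler outcome in the "erased" basis gives two complementary fringe patterns.

    import numpy as np

    # Toy eraser: the signal photon goes through slit A or B, the idler carries
    # the which-path information. Joint state: (|psi_A>|A> + |psi_B>|B>) / sqrt(2)
    x = np.linspace(-10, 10, 2001)                 # screen positions (arbitrary units)
    k, d = 2.0, 1.5                                # toy wavenumber and slit offset
    psi_A = np.exp(-(x - d)**2 / 8) * np.exp(1j * k * x)    # amplitude via slit A
    psi_B = np.exp(-(x + d)**2 / 8) * np.exp(-1j * k * x)   # amplitude via slit B

    # Group 1: idler measured in the which-path basis {|A>, |B>}
    p_and_A = 0.5 * np.abs(psi_A)**2               # joint density, idler found in |A>
    p_and_B = 0.5 * np.abs(psi_B)**2               # joint density, idler found in |B>

    # Group 2: idler measured in the "erased" basis {|+>, |->}
    p_and_plus  = 0.25 * np.abs(psi_A + psi_B)**2  # fringes
    p_and_minus = 0.25 * np.abs(psi_A - psi_B)**2  # complementary anti-fringes

    # The unconditioned screen pattern is identical in both cases:
    print(np.allclose(p_and_A + p_and_B, p_and_plus + p_and_minus))   # True

Grouping the same detection events by the later idler outcome is all that changes; where the signal photons landed is untouched, which is the point being made above.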
> Why do have to "ask" if the answer doesn't really matter?
You tell me; Copenhagen is the one that says collapse exists. It sounds like maybe we are on the same page that collapse isn't necessary to explain quantum mechanical observations? In that case, we are both Everettians :).
To explain quantum mechanical observations you need "collapse" (i.e. the projection postulate: immediately after a measurement the state of the quantum system is the projection onto the corresponding eigenspace of the operator). I don't know what you gain by saying that it's not "real" and that it's just "as if".
Because if we say that some of those alpha_i physically go to zero at any point (e.g., "after"), our predictions are wrong, in agreement with the paper. We have to account for the fact that those alpha_i|i> are still nonzero and "existing", and that our projection onto them is only zero for the time being. A different choice in EPR or a quantum eraser experiment may bring our projection onto those states back out of orthogonality-- or maybe not, if we never make those choices. But if we believe we have the physical freedom to manipulate our experiments, we can't get away with saying those extra "universes" (basis states) physically disappear.
In some cases it is a safe approximation to ignore those extra states for the remainder of our experiment/calculations, but with a small change to the experiment we can make that a bad approximation.
I am afraid that you have not understood the paper.
a) You do the measurement first on the "screen" side (and project the quantum state of the pair of photons according to the measurement; the "extra universes" disappear). You then do the measurements on the "idler" side (and again project the quantum state according to standard QM).
b) You modify the setup slightly to reverse the order of the measurements. You do the measurement first on the "idler" side (and project the quantum state of the pair of photons according to the measurement; the "extra universes" disappear). You then do the measurements on the "screen" side (and again project the quantum state according to standard QM).
QM predicts that the outcomes in the original experiment (a) and the "reversed" experiment (b) are the same. And those predictions are verified empirically.
He published in a tiny journal that was only in print for a handful of years and was read by very few people. Bell himself said that no one contacted him or even mentioned it to him until four years after its publication, and that researcher only came across it by pure happenstance. The whole story is detailed in Becker's book; it's fascinating.
> There is nothing "obviously wrong" about the standard Copenhagen interpretation.
The Copenhagen interpretation showed up when they were faced with a choice between preserving locality or determinacy.
They chose wrong — we know now QM is non-local, and that the underlying justification for the Copenhagen model is an extraneous philosophical proposition.
But much like an extra dependency in a software project, no one wants to remove it now that it’s used everywhere, and there’s a lot of “good enough” stuff using it.
That said, it seems to be one of the major impediments to a unified theory: by dropping the extraneous assumption, we have fewer things to reconcile with GR, and can start looking for GR geometries that have quantized non-local behaviors.
ie, dropping Copenhagen and giving geons another look is probably worth it. (And is basically what loop quantum gravity people are doing, as far as I can tell.)
> we know now QM is non-local, and that the underlying justification for the Copenhagen model is an extraneous philosophical proposition.
What underlying justification are you talking about and what do you understand by "the Copenhagen model" precisely? At least in the Einstein vs Bohr debates the one denying that QM could be a complete theory because of its non-locality was Einstein, I think.
> Einstein's refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.
The underlying assumption for Copenhagen was to try to preserve locality by assuming non-determinism. However, there’s no saving locality — and non-locality is enough to leave determinism — so there’s no reason for the non-deterministic axiom.
I think I also meant “definite” instead of “deterministic”, but it works out the same.
No, but if you have non-locality, then you don’t need the non-definiteness of Copenhagen, and could do with something like a Bohm or MW interpretation.
"The problem with Copenhagen is that it leaves measurement unexplained; how does a measurement select one outcome from many? Everett’s proposal keeps all outcomes alive, but this simply substitutes one problem for another: how does a measurement split apart parallel outcomes that were previously in intimate contact? In neither case is the physical mechanism of measurement accounted for; both employ sleight of hand at the crucial moment."
I don't think that criticism really understands what Everett says-- the evolution of the universal wavefunction is unitary. There is no "splitting". There is only uncertainty about what part of phase space you are in (and any measurement that tightens your certainty in one axis of phase space will broaden it along some other axis).
A quote from a previous post that is potentially relevant to your dislike of the "superior, disdainful tone":
> later someone sent me a glorious photoshopped screenshot (see above) which shows me with a painted-on mustache and informs us that Sabine Hossenfelder is known for “a horrible blog on which she makes fun of other people’s theories.”
> The truly horrible thing about this blog, however, is that I’m not making fun. String theorists are happily studying universes that don’t exist, particle physicists are busy inventing particles that no one ever measures, and theorists mass-produce “solutions” to the black hole information loss problem that no one will ever be able to test. All these people get paid well for their remarkable contributions to human knowledge. If that makes you laugh, it’s the absurdity of the situation, not my blog, that’s funny.
> The modern astrophysical world view, which began with Galileo, and its challenge to the adequacy of the senses to reveal reality, have left us a universe of whose qualities we know no more than the way they affect our measuring instruments, and — in the words of Eddington — "the former have as much resemblance to the latter as a telephone number has to a subscriber." Instead of objective qualities, in other words, we find instruments, and instead of nature or the universe — in the words of Heisenberg — man encounters only himself.
-- Hannah Arendt, "Vita Activa"
Heisenberg and others knew that, I'm sure many current scientists do too, but going by output, the situation is dire, the intertubes are full of people talking about being "unbiased" or "objective" even about human matters, when not even the hard sciences are truly objective! It's like people simply agreed that because they're not religious or whatever, they're now correct. If they are wrong about anything, since they supposedly are on the side of science, they can be sure that will be corrected, so they can consider themselves correct even today. Before you know it, you have superstitious people who think their superstition constitutes the absence, the opposite of superstition.
We went from Heisenberg to a chemistry teacher cooking crack while his assistant gets all bug-eyed about the power of "science, bitch!". Or this "critique" of the game Soma I saw recently, where the Youtuber mentions that "people who are smarter than you and I are saying the universe might be a simulation". What I see is people booby-trapping everything with stupidity.. making things so stupid, and then glorifying that, that injecting even a lick of seriousness into anything would cause massive offense, hurt a lot of egos. To me it's the analog of religious fundamentalists going to church 20 times a day to get hyped and primed with caricatures of those not of their flock, and fantasies of how great it's all going to be when those are gone.
But I'm not sure I can blame physicists for that. I don't know enough about what they do, I see what "everyone" else is doing, and that's horrid by itself. It's not the job of physicists to ask deep questions about what it all means, it's the job of everybody.
> suppressed "unconventional" interpretations of quantum mechanics (like the many worlds interpretation) in favor of the obviously-wrong Copenhagen interpretation.
They’re just interpretations and they all work equally well and produce the same predictions. It makes no sense to call any of them obviously wrong. It’s personal preference.
Is it possible though, that Copenhagen is wrong in a similar manner to epicycles that give the correct numerical answer?
That's the impression I get as a non-physicist, that yes it's kind of an aesthetic preference, but on the other hand aesthetics are important and sometimes are pivotal in finding new insights.
Ugly may not be completely objective, but it isn't completely arbitrary either.
I should correct an important misconception. Epicycles do not produce correct predictions.
Epicycles can produce correct point location of planets, but do not produce correct phase. Galileo observed heliocentrism-compatible phase of Venus with his telescope. That was the crucial experiment, not the simplicity.
I don't think it's important to my point. The point remains that multiple theories can be equivalent in some restricted sense, yet the more aesthetic one might have greater potential.
Copenhagen is wrong to the degree that any theory can be called wrong. It posits specific physical events (collapse) which are not only unsupported by evidence, not only contradicted by evidence (delayed choice quantum eraser experiments), and not only logically self-contradictory (when it is not too vague to make concrete statements about the phenomena it is explaining, it is circular: our theory of electrons cannot include scientists, or the macroscopic world in general, as a fundamental element, when scientists themselves are explained in terms of electrons), but also ontologically inferior to many worlds, which explains all the same phenomena without adding any new suppositions. (And no, the extra worlds are not extra suppositions; they are natural predictions that arise by simply following well-verified quantum laws to their natural conclusion, by treating the environment as a quantum mechanical system of particles.)
MW might be simpler in the way it arises, but as long as it produces the same results it doesn't really matter. And it definitely has enough supporters to ruin the "stubborn physicists" hypothesis.
I suppose this is as appropriate a time as any to ask for advice: I am a multimillionaire from inheritance, and am about to complete my master's in physics. I wish to work on long-term theoretical physics problems that do not seem to be possible under the current publish-or-perish academic system. The plan was to complete a PhD, then leave academia, but lately I have been having severe doubts about continuing on to a PhD, partly due to the cruft that comes with academia. Obviously, future employability for financial reasons is completely irrelevant to me.
It sounds like you are in a good position - you can have total control over what you work on. You could even write your own grants and get other people to research what you want alongside you. Since the traditional PhD path isn't showing that much success, doesn't it make more sense to just research what you want?
All the papers are free online and authors will generally discuss their work with you if you have intelligent questions.
BTW, this is what I do. I freelance about 20% of the time and spend about 50% of it reading physics papers. So far I haven't produced anything new, but I have greatly increased my intuitive understanding.
Completing your PhD would give you credibility; being taken seriously even if you are right can be an issue (and justifiably so: evaluating new ideas takes time, and lots of cranks come up with new ideas). Also, making contacts in academia certainly can't be bad.
... I have spelled out many times very clearly what theoretical physicists should do differently. It’s just that they don’t like my answer. They should stop trying to solve problems that don’t exist. That a theory isn’t pretty is not a problem. Focus on mathematically well-defined problems, that’s what I am saying. And, for heaven’s sake, stop rewarding scientists for working on what is popular with their colleagues.
It seems that some examples might be useful here.
Which specific groups are trying to solve problems that don't exist?
What are some mathematically well-defined problems that aren't getting enough attention?
As for rewarding scientists for working on what's popular, that's a science-wide problem that stems from the way that science is funded and decades of inbreeding. Still, examples of how to break physics out of its funk on this score would also be useful.
Btw, something to think on. Consider this slowdown in physics, and the oft-repeated idea that we're sure to colonize not just the solar system, but eventually even the galaxy...
Dunno, seems pretty out of date. The mentioned price of $15,000 per kg to the moon is already 3 times cheaper, even if you need to get it to Mars.
They also mention a wildly optimistic "I'm not holding my breath" $20k per kg to Mars, which is already 4x higher than a SpaceX BFR launching stuff to Mars.
I'm no physicist but this leaves me scratching my head.
You have some people saying the university funding system is to blame by not accepting crazy ideas but we have all sorts of ideas in physics like:
- String theory. As best as I can tell the only reason string theory exists is because if dimensions=11 the equations for general relativity pop out. Importantly though string theory has made no testable predictions and it's unclear when or even if that will be the case.
- Supersymmetry. Interesting idea but no evidence of this yet.
Other more interesting ideas to me at least (again, as non-physicist):
- Octonion Math underlying the standard model (maybe) [1]
And some interesting experimental work:
- Possible violations of lepton universality from the LHCb detector [2]. This was, last I heard, still well below statistical significance (5-sigma) and could well disappear (as other bumps have eg at 750GeV) but it's interesting nonetheless.
And there are a host of open problems with otherwise successful theories.
My favourite extremes here are the prediction of the magnetic moment of the electron, which agrees with experimental results to ~12 significant digits, and, at the other end, the QFT prediction of the energy density of the vacuum, which is ~120 orders of magnitude off [3].
Anyway, a lot of this exists in the current academic system.
Depending on how strict you are, you can trace string theory all the way back to the 1940s, or at the very least to the late 60s [1]. Supersymmetry is from the early 70s [2]. They are exactly the mainstream kind of thing which Hossenfelder is referring to when she writes about "physics beyond the standard model which the Large Hadron Collider (LHC) was supposed to find". For every lone wanderer exploring a long shot like octonion math, there are thousands writing yet another paper on some variation of the old theme.
> As best as I can tell the only reason string theory exists is because if dimensions=11 the equations for general relativity pop out.
This gets at the essential feature that seems to be driving string theory research (or at least a major one), but I think it's an overstatement as you state it. String theory is popular because it appears to hold out the hope of a theory of quantum gravity. But to my knowledge nobody has shown that the equations of general relativity pop out from an 11-dimensional string theory model; that is still vaporware. There are results which suggest (at least to string theorists) that that should be possible, but nobody has actually done it yet.
It was also a time when the whole societal landscape was different: peasants farmed because it was their way to survive, and the aristocracy lived on the labor of the former without having to work. What has changed now is that almost everybody needs "a job", and the attention and time of a lot of smart people is taken up by that. That's also the reason people professionalize themselves by going into academia. In my case, I would do research and experiment with new ideas no matter what, but joining a PhD program is the way to put bread on the table.
Today the number of people that don't have to work, both in raw numbers and as a percent, has absolutely skyrocketed. I mean if a person is simply interested in e.g. pursuing research and not so much with material niceties then it's extremely easy to live abroad in developing nations, indefinitely, starting with around $200k. Bump that up to a million and you can live practically anywhere so long as you have the discipline to live far below your means, which is perhaps the hardest part of it all.
I kind of like the model of the turn of the 19th century, where you had great geniuses like Tesla who would just get funding to create whatever kind of cool crap they could think of. Are there still any people like that in physics? Legendary scientists/engineers who run their own lab and just make cool stuff?
Tesla is a bad example for almost any given point, but in this case it's relevant to note that he was not a scientist, and his funding model did not work.
> Are there still any people like that in physics? Legendary scientists/engineers who run their own lab and just make cool stuff?
That's pretty much what I've been going for. Got my PhD in Chemical Engineering and an MBA, and now I'm starting my lab - or "startup", here - in pursuit of my research, basically into strong AI.
Bootstrapping is an interesting process. I mean, I have to keep the lights on somehow, though there's an inherent conflict-of-interests between keeping the lights on and doing the core work that the lights are there for.
David E. Shaw. Note that before he returned to his scientific passions he had to make a fortune, because creating new things at the cutting edge is a lot more expensive now than at the end of the 19th century.
This criticism may have some merit on the particle physics side of things, but from the 'gravity' side I see exciting recent progress. In particular, the AMPS firewall and literature that followed, including the introduction of computational complexity into physics, ER=EPR etc. With LIGO and various space telescopes soon coming online, the experimental future looks bright too.
The stagnation began with the end of the USSR. No need for physics anymore; no one was building missiles or going to space.
Then the Internet happened and smart people made money off that.
Looks like the wages for developers are going down. China, Japan, etc., are going to space.
Physics will pick up again.
It is proven that physicists are, in fact, the most ignorant folks of all scientists. Real, proper physical models are always interdisciplinary, unified theories. The most ignored category of all.
I can assure everybody that the model that might get accepted in 100 years is already here. As I'm personally using one of those fringe models, I can assure you that using it in public will get you mostly negative points online and quite interesting conversations offline. Nice side-note: you will be able to filter out non-scientific thinkers quite easily, and I can assure you there are lots of them in the "scene". Interestingly, chemists are much more open to different models; in fact, most of them know that our models are rough approximations at best.
It is funny when you think in models that explain everything, but are quite far from the standard perspective. It becomes hard to explain effects because the details obviously start to diverge the closer you look.
On the other hand, I think every adolescent is capable of thinking the model I'm using. (PS: I'm not the origin of the model I'm using, I seriously would have never been able to come up with such a minimal, absolute logical and elegant solution)
Very similar theme of lack of verifiable theories.
I wonder what the experimental physicists have to say about this topic? I feel like theories are also driven by new observations. However, the observations that the theorists have to go on are very indirect compared to those of 100 years ago. "The mass of this galaxy is out by x percent" doesn't give many clues as to what's wrong.
As an outsider, I wonder, what happened to Nima Arkani-Hamed and his new ideas about the nature of spacetime? That seemed pretty interesting.. even just as a strategy of what should be researched.
Nobody has mentioned John Horgan's book "The End of Science" yet, so I figure it is time to do so. It's even more relevant now than when he wrote it in 1996.
If you are doing theoretical research, all you really need is an office, a computer and a basic salary; it's not like university pay is great.
If the need for funding from existing sources is hindering your research, try to find another way to support it, like by freelancing 20% of the time or Patreon.
Isn't she just complaining that nobody is listening to her? That sounds a lot like a crackpot to me. Instead of complaining, she would do better to work on a useful new theory explaining the masses of neutrinos, or dark matter and dark energy. But of course this is much harder than writing blog posts.
I wish physicists would dare blame themselves for (insufficiently dis)trusting their statistician, mathematician & philosopher colleagues.
The reason for physics research becoming more wide, limited & shallow instead of more narrow, broad & deep seems to stem from the foundations of mathematics.
(If you struggle with comprehending the above, try drawing Venn diagrams of the logical operations involved to gain a geometric understanding of the matter.)
And here for the foundations of probability theory:
But for which we lack a sufficiently advanced & logically consistent mathematical formalism, both due to people mostly ignoring, out of ignorance [because from where else do you get the action of ignoring!], the philosophical solution, and, more importantly, because we lack a sufficient mathematical formalism for it due to, among other things, the issues with probability theory.
And here, a small shimmer of hope in the foundations of statistics:
There exists another, unrelated to the above presentation, avenue of highly interesting research out of Brazil, but their results haven't yet reached a stage of maturity where people throw together easily understandable powerpoint slides, which I'll neglect mentioning here for now, because I'd consider that bad etiquette.
Personally, I feel partial to blaming all of this on this Euclid translation error, albeit I say that in partial jest:
...which people still fall for, even in 2018, as exemplified in quite a few papers on the foundations of geometry published in recent years.
In closing:
Physicists don't stand the furthest to the right in this xkcd comic, and out of frame, even further to the right from the already left out philosopher, there exists a recursive boxing match between numerous fields of science conveniently left out of the graphic to maintain a sense of strict hierarchy & order in a reality that lacks such hierarchy:
On what premise? 'Theories' are human constructs, hence why physicists are so adamant about their Truth and Beauty. It's wrong to say that, when most sciences rely heavily on Occam's principle (an aesthetic argument) for reasons unknown. It's pretty likely that the human brain is guided by both principles to model the world, and that should be reflected in the formulation of our theories.
We’re stuck in a paradigm that doesn’t result in any valid fundamental predictions. The idea that running the expanding universe backward makes everything denser and hotter in the past is just dumb.
The universe is not like a loaf expanding from dense batter to fluffy bread.
Instead, the chaotic vacuum produces, for want of a more accurate concept, particle-antiparticle pairs at random that exert a “pressure” seen as the Casimir Effect and a force that underlies the expansion of space-time.
These pairs are mostly ephemeral, but under certain conditions they can randomly transition to a stable state. This eventually results in matter. (It results in a lot of things, but we’re biased toward the minor component, matter, because we’re made of it.)
The process happens a lot in very empty space, and almost not at all in space that is constrained by the existence of matter already. This is why “dark matter” exists out there and not down here. The Casimir Effect will give this to you; constrain the available space and some wave equations are excluded, resulting in a measurable inward pressure.
Run the expanding universe backwards: space-time contracts and we have exactly what we have right now. Run it forward and space-time expands, again giving us exactly what we have right now. Of course, things are different, but the physics is unchanged. The universe doesn’t get hotter or cooler, there’s no era of total ionization or inflation.
The fundamental ground state of the universe is chaos. Anything can arise out of that chaos, but specific events are constrained by probability: some are so unlikely that we never see them; some are so likely that they are certain and they happen all the time.
Mathematics, so useful a tool in the past, cannot describe this situation. The only way to describe this system is by using the system itself; there are no shortcuts.
Math, philosophy, reason and order are inapplicable because they are only rules-based approximations of a chaotic state.
That's an interesting idea. Do you have any thoughts on the fact that the structure of the cosmic microwave background radiation is exactly what we would expect it to be if the universe had expanded from a tiny state to its current state?
I'm unwilling to assume it "just randomly looks like that" given that a random distribution on a universal scale seems highly unlikely to show the large scale structures notable in the CMB.
I'm in no way endorsing this individual's proposal, but the argument you're using here is not as strong as you'd expect. A recurring trend in the past several decades has been that a contradiction in prediction is often simply massaged back into that prediction, at times quite arbitrarily. Then that previously contradictory observation can be used as 'evidence' for the newly retrofitted prediction. Of course that's at best circular logic and, at worst, something that can start sending us spiraling down the wrong path, faster than light.
So for instance the CMB did not support the big bang; it contradicted it. See: the horizon problem [1]. To reconcile this, inflation theory was invented, which arbitrarily suggests that the universe hit the accelerator hard, then slowed back down. There's no logic, mechanism, rationale or falsifiability. But it makes what we see fit what we predicted we'd see, so it's a pretty widely accepted part of modern physics. And this retrofitting now has a cascade effect that enables some theories to provide support for yet other theories -- such as what you're proposing here, in that the CMB now 'supports' the big bang. It does so only if you add a very big asterisk there.
I'm not a physicist, so maybe I'm misunderstanding the article you linked, but it seems to support my 1st post.
If you'll reread it you'll see that I mentioned nothing about a Big Bang, but rather that the CMB provides evidence that the Universe expanded from a smaller state to its current state, which is larger than its starting state.
From the linked wiki:
"Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist. The theory [Cosmic Inflation evidenced by the Horizon Problem] predicts a spectrum for the anisotropies in the microwave background which is mostly consistent
with observations from WMAP and COBE."
It doesn't. First, to clarify what the CMB is: it's basically just heat residuals that seem to indicate that the universe was much hotter in the past. The problem is that these heat residuals are relatively homogeneously distributed. In terms of thermodynamics this makes perfect sense - the temperature of an entire region will gradually converge, like a pot of boiling water will eventually reach room temperature.
But physically what we observe does not make any sense. Like you probably know, nothing -- including action -- can be perceived to travel faster than the speed of light. The sun is about 8 light minutes away from us. If it somehow just suddenly disappeared, we'd still see it in the sky and continue to revolve around, what would 'now' be nothing, for about 8 minutes. The observation of its disappearance and the effective causality of its disappearance (and its effect on our orbit) would happen at or very near the exact same time.
The problem with the CMB is that areas of space that should not be causally connected, since light itself has not had time to go from one to the other, seem to be causally connected. In other words, with our boiling pot in a kitchen: the eventual equilibrium that the kitchen reaches (if we assume that that entire little region is all of the space in existence) is going to vary quite substantially depending on whether you have, e.g., a 100 cubic meter kitchen or a 200 cubic meter kitchen. We should observe the two sides of space acting like two independent 100 cubic meter kitchens; instead we see them behaving like a single 200 cubic meter kitchen.
This is a major and unresolved problem that threw much of what we knew out the window. It directly contradicts the big bang. To 'resolve' this, we started creating arbitrary special conditions. Cosmic inflation is one of these. There is absolutely no reason to believe that cosmic inflation ever happened - its sole and only reason for existence is to work as a 'fix' to make what we observe fit what we thought we'd observe. This makes it illogical to use derivative things as "evidence". In particular, the nature of our current CMB is in no way meaningful evidence of inflation, because inflation was hypothesized, after the fact, in no small part to fit the CMB to what we thought we'd see! In other words, calling the CMB meaningful evidence of inflation is trying to support a hypothesis by suggesting that the observation the hypothesis tries to explain is meaningful evidence for that hypothesis.
Any not completely idiotic hypothesis will obviously be 'evidenced' by what it tries to explain. But we have a major problem when that 'evidence' becomes all you have to rely on, and that is exactly the case here.
> Mathematics, so useful a tool in the past, cannot describe this situation. The only way to describe this system is by using the system itself; there are no shortcuts.
> Math, philosophy, reason and order are inapplicable because they are only rules-based approximations of a chaotic state.
This is really a superlative cop-out. "My theory is so genius that math, philosophy, reason, and order are insufficient to describe it."
How do you take someone seriously who says, "Mathematics, so useful a tool in the past, cannot describe this situation. The only way to describe this system is by using the system itself; there are no shortcuts. Math, philosophy, reason and order are inapplicable because they are only rules-based approximations of a chaotic state."
That's tantamount to saying that his theory is not based on data, logic, or even sound reasoning, and cannot be proven. Pretty much the definition of a crackpot theory. Of course, what he is actually admitting is that he doesn't understand and can't do the math.
Or how can you take someone seriously who says, "We’re stuck in a paradigm that doesn’t result in any valid fundamental predictions." This is so obviously false that the only possible explanation is that he is ignorant about the foundations of modern physics, what they are, how we got here, etc. The field is freaking loaded with predictions that have been proven.
The tl;dr is that physics is not necessarily explainable, however much you want it to be. So it's very possible that the underlying phenomena of reality are simply out of reach of theories (i.e. incompressible).
> That's tantamount to saying that his theory is not based on data, logic, or even sound reasoning, and cannot be proven. Pretty much the definition of a crackpot theory.
You misunderstood the post, I think. It's not proposing a theory, not even a crackpot one. It's saying that maybe the underlying phenomena are out of reach of theories.
Nonsense. It's just a rehash of the old steady state model. Further, any theory which itself claims to be unprovable is a religious sentiment masquerading as science, and is a waste of mental effort.
Wolfram's article is irrelevant to this discussion. It is a question of epistemology.
The quantum foam is a shared component, an interface, between our universe and another universe that is majority anti-matter. When particles arise from the quantum foam, what we're observing is the half of a virtual particle pair that ended up on this side of the interface, and it is more likely to be the "matter" particle for some reason.
Let's get hypothetical. Suppose the quantum foam interface is, in fact, the event horizon of a black hole as observed from the inside. Since we see a bias towards matter, we could expect that the "outside" universe would see a bias towards what we call anti-matter. Further, since we have black holes in our universe which we can observe, we should see a bias towards matter in the Hawking radiation.
=This post brought to you by a physicist from a not-terribly-accurate sci-fi novella.=
> This is why “dark matter” exists out there and not down here.
Nope, everyone thinks that dark matter exists everywhere around us. That's why we are funding and running all sorts of dark matter detectors that are looking inwards, not outwards.
"It's not just that scientists don't want to move their butts, although that's undoubtedly part of it. It's also that they can't. In today's university funding system, you need grants (well, maybe you don't truly need them once you have tenure, but they're very nice to have).
So who decides which people get the grants? It's their peers, who are all working on exactly the same things that everybody is working on. And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.
But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress. (Assuming they really did have promising crazy ideas, and weren't abusing the system. Of course, what would actually happen is that the new system would be abused and we wouldn't be any better off than we are now.)
So the only advice I have is that more physicists need to not worry about grants, and go hide in their attics and work on new and crazy theories, the way Andrew Wiles worked on Fermat's Last Theorem."
(new comment right below)
"Let me make an addendum to my previous comment, that I was too modest to put into it. This is roughly how I discovered the quantum factoring algorithm. I didn't tell anybody I was working on it until I had figured it out. And although it didn't take years of solitary toil in my attic (the way that Fermat's Last Theorem did), I thought about it on and off for maybe a year, and worked on it moderately hard for a month or two when I saw that it actually might work.
So, people, go hide in your attics!"