Why not string theory? Because enough is enough (backreaction.blogspot.com)
189 points by Santosh83 on June 11, 2016 | 117 comments



Speaking as a professor specializing in string theory, I'd say, "More power to her." I have no idea whether string theory gets too much attention relative to its actual value in modeling the real world, but I think one essential part of finding the right amount of attention for any physical theory is for theorists to make their best judgement about what's worth their time to study.

In fact, it's quite comforting to me to see people deciding to focus in other directions: to my eye, that means the system is working (though perhaps the author would argue it's not working efficiently), and I'd hate to see other worthwhile angles of attack wither away purely due to lack of attention. For myself, I got excited about string theory in grad school and decided to go that route, and I'm still finding it fascinating today. (And it still feels worthwhile to keep studying it, though perhaps I'm not quite as optimistic about its ultimate success as I was 15 years ago.)


From one of the comments by the article author:

"I am far from "silent" on what should be done. I have said here and many times before that what needs to be done is to take measures against social and cognitive biases. If people still work in masses on string theory after that, so fine."


So you feel string theory connects to reality, and is falsifiable and avoids "piling on epicycles"?

Will string theory be more parsimonious than the more concrete models of physics it tries to reproduce?


Of course string theory is falsifiable. String theory reduces to quantum field theory in some limit and to general relativity in some other limit. Unlike previous theories, string theory has so far resisted all efforts to falsify it.

It's a ridiculous misconception floating around that string theory can't make predictions, when in fact string theory is the only theory we have so far that predicts all known observed phenomena.


String theory reduces to quantum field theory in some limit and to general relativity in some other limit.

That's nice and clearly shows why string theory research is a worthwhile endeavor, but you only get a gold star once you make an experimentally verified new prediction. There is a reason why people were excited about the discovery of the Higgs boson.


> string theory is the only theory we have so far that predicts all known observed phenomena.

That would have been impressive if it didn't also predict even more that is neither known nor observed.


"[A]cademia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend."

A really awesome quote there. (Perhaps a bit self-serving for me).


    (Perhaps a bit self-serving for me).
So you think she's biased in pointing out that academia has systemic biases?


SKDH is as much of a physics insider as they come. So, if anything, she has something to lose.


sorry, that parsed poorly.

It's a bit self-serving for me. I operate a science nonprofit outside of traditional academia.


Yeah, I realized later it was possible to interpret that sentence another way. I think my downvoter thought I was making a dumb throwaway snark, but I was actually just confused.


well it's not an expected situation that anyone runs a science nonprofit, so your confusion is understandable.


I've got a theory that string theory is more of a sociological phenomenon than real science. It started as a genuine attempt to model how the nucleus was held together and then continued because it's a good area to do maths and write papers, rather than because it models reality.

One thing I don't get, which may be down to my own stupidity - take maybe the simplest interaction in physics - you have two electrons in space a few cm apart and they accelerate away from each other due to the charges repelling. I'm not sure how that is supposed to happen if everything is strings. Do they ping tiny strings at each other, and how do they know which way to aim? Maybe some string person can enlighten me.


It's a fundamental mistake to assume that these are "real" strings being talked about.

String theory is about the idea that reality is best modeled as, notionally, strings - i.e. the mathematics looks a bit like them if you sort of squint and look at it really hard.

But there's a really good reason strings are compelling here: because as near as we can tell, vibrational frequency is really important to particle physics - the energy of a photon is quantized as E = nhf (energy = integer × Planck's constant × frequency). And this seems to scale all the way down - it works for photons, it works for electron orbitals, and it comes into play dealing with the virtual particles which are exchanged in force interactions.

So however you describe reality, whatever you do, your abstraction pretty assuredly needs to somehow recover the vibrational importance, which implies certain kinds of "geometry". You get "strings" because strings are one of the most fundamental structures which can hold a vibrational mode, and importantly, they kind of neatly explain the quantization as standing waves - closed strings can only set up a standing wave at certain frequencies depending on their length, same with open strings (the wave needs to be zero at the ends of the "string" or it'll lose energy).
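
To make the standing-wave point a bit more concrete, here is a toy sketch of an ordinary classical string fixed at both ends (the length and wave speed are made-up values, and this is just basic wave mechanics, not anything from actual string theory): the allowed frequencies come out discrete, f_n = n*v/(2L), which is the flavor of quantization the analogy leans on.

    # Toy illustration with an ordinary classical string (not string theory):
    # a string of length L fixed at both ends only supports standing waves
    # with wavelengths lambda_n = 2L/n, i.e. frequencies f_n = n*v/(2L).
    L = 1.0      # string length in metres (made-up value)
    v = 340.0    # wave speed along the string in m/s (made-up value)

    def allowed_frequencies(n_max):
        """Return the first n_max standing-wave frequencies f_n = n*v/(2L)."""
        return [n * v / (2 * L) for n in range(1, n_max + 1)]

    print(allowed_frequencies(5))  # [170.0, 340.0, 510.0, 680.0, 850.0] -- discrete, not continuous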

Of course this is all my very ad hoc handwavey explanation of it, but the point is, there's a lot of very basic physics which works quite well in the model. It did not, and does not, just come from nowhere even at this very high level.


>really good reason strings are compelling here: because as near as we can tell, vibrational frequency is really important to particle physics

I can give a sort of counter argument. Fair enough, the relationship between frequency and energy, and also wavelength and momentum, is fundamental to physics. But consider maybe the classic experiment where you demonstrate that: you have a barrier with two slits, fire particles at it, and get an interference pattern on a screen on the other side. You can calculate how close the interference stripes will be using λ = h/p, and this works not only for point particles like electrons but also for more complicated ones like helium atoms or C60 molecules. How do you explain that with strings? There's definitely something going on - like the amplitude state of the universe changing according to the difference in momentum for different particle paths - but I see no way you can explain it using strings.
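
(For what it's worth, the fringe-spacing arithmetic itself is just the small-angle formula Δy ≈ λD/d with λ = h/p. A rough numerical sketch, where the electron speed, slit separation and screen distance are made-up but plausible values:)

    # Rough double-slit fringe-spacing estimate from the de Broglie relation
    # lambda = h / p and the small-angle formula: spacing ~= lambda * D / d.
    # All of the experimental numbers below are illustrative assumptions.
    h = 6.626e-34     # Planck's constant, J*s
    m_e = 9.109e-31   # electron mass, kg
    v = 1.0e6         # assumed electron speed, m/s (non-relativistic)
    d = 1.0e-6        # assumed slit separation, m
    D = 1.0           # assumed slit-to-screen distance, m

    wavelength = h / (m_e * v)           # de Broglie wavelength
    fringe_spacing = wavelength * D / d  # distance between adjacent bright fringes

    print(f"lambda ~ {wavelength:.2e} m, fringe spacing ~ {fringe_spacing:.2e} m")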


>I can give a sort of counter argument.

You can't because you clearly have an incredibly shallow understanding of string theory (I do too). For one thing, you're completely stuck on the metaphor of the 'string' and you're trying to apply it to a very shallow understanding of Quantum Mechanics and complaining that it doesn't make sense. Okay.

I think it's cool to talk about fundamental research but taking it one step further and saying that some theory X isn't correct because it doesn't make sense to ME, a non-professional physicist, is a step too far. Be more humble.


Yup I don't understand it. I'm not convinced others really do either though.


There's a minimum threshold of understanding required to make intelligent commentary on topics like this. Muddled pop-sci and half remembered undergrad material doesn't cut it.


Fair enough, but is my description of two-slit interference patterns wrong, and is there any way you can explain the result with string theory?


You can explain it with any quantum mechanical theory. Talking about strings here is a red herring. That's why people are calling you out.


i think attempts at popularising a lot of science are hindered by language compression

for instance, calling one element hydrogen and another helium from a natural language perspective seems to imply they are completely different things, when in reality they are essentially the same thing differently configured, interacting with the universe in completely different ways

as for strings, my understanding is they are particles considered with consequence

from the wiki(o):

> The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings.

so zero dimensional objects, or particles, modeled as one dimensional objects, or vectors, or lines, or.. erm, strings?

string theory is concerned with interactions, map these interactions through point-likes: . . . ; and you get a string : .-._. ;

like how animation has individual, often disparate, frames but our eyes model the scene as a fluid, consistent, connected string of motion (i)

but these are mathematical models stead something you can tie your shoe with (ii)

from this explanation strings explaining the double slit makes sense because 'particles should act this way, why do they reliably act a different way' is answered by saying 'interactions'

that said, i dislike the probabilistic explanation of quantum mechanics which seems entwined with double slit explanations and as such my hope is future science will show the outcome of the double slit is a consequence of bad assumptions about the experiment and its equipment stead some underlying universal expression of quantum mechanics

https://www.youtube.com/watch?v=qDii69YCh_Q

(o) https://en.wikipedia.org/wiki/String_theory#Strings

(i) https://en.wikipedia.org/wiki/Nude_Descending_a_Staircase,_N... nsfw? art

(ii) https://en.wikipedia.org/wiki/A_rose_by_any_other_name_would...


> It's a fundamental mistake to assume that these are "real" strings being talked about.


>I've got a theory that string theory is more of a sociological phenomena than real science.

Yes. I have a friend with a PhD in physics, and he says that a generation of string theory people basically just shouted everyone else down in a rather obnoxious way. Other critics (e.g. Lee Smolin) have said similar things, albeit slightly more politely.

At some point it stopped being science and became more about careers, reputations, funding, and paper mills. Which, given the way that academia works at the moment, became self-sustaining.

Some interesting math has fallen out of string theory, but it should never have been allowed to hold back other approaches to quantum gravity to the extent that it has.

Considering the amount of time and paper involved, there's a huge wall of maybe-perhaps-if but very little solid physics to show for the effort.

Incidentally, that blog is a real find - some of the clearest explanations of hard concepts I've seen anywhere.


It reminds me a little of the phenomenon of efficient market theory in economics / finance. Academics were drawn to it partly because it enables them to do a lot of fancy mathematics, whereas if you look at inefficiencies like the stuff that went on in the movie The Big Short, the academics don't really have much of an edge over people working in the field. So much academic stuff goes 'assuming efficient markets', blah blah... which is not actually wrong, but maybe a misallocation of resources away from looking at bubbles and crashes, which could have helped avoid the whole 2006 episode.

I agree the blog seems very good.


It gets more complicated, because there are various "strengths" of efficient market hypotheses around. The weaker ones are obviously true, but the stronger ones are more suspect.


Abstruse Goose also has some deep wisdom about string theory. I particularly like this one:

http://abstrusegoose.com/424


Yes, I am glad about this blog too.

Lee Smolin's 2006 book 'The Trouble with Physics' first alerted me to this, since I am not a string theorist, and I like to read about the applied physics going on instead. I would like to read the theoretical math, but my mind can only deal with beginning grad-level math at most. Nonetheless, I like to burn some wood and try at times. I just don't want to waste it on what is seeming to be more and more of a rabbit hole.


Particles themselves are just models of localized field quantizations. It's a bit analogous to how a person is just a quantification of the molecules they're made of. You can gain and lose molecules by the millions, but you're still the same person -- like the crest of a wave.

Anyway, particles repel because that is the nature of the fields they exist in. Electrons don't have to do anything 'to' each other to repel, they just have to interact with the electromagnetic field. The string model simply employs the word 'string' to describe some components of the field.


Quantification or quantization? (Autocorrect?)


Quantization. I don't use autocorrect, so it was probably my brain's auto-typing.


Strings are supposed to act like leptons / quarks / etc., like how quantum mechanics averages out when you're talking about planets. The problem is they are like epicycles in that they can describe any shape. Much like how epicycles are effectively taking a Fourier transform of an orbit and can thus describe any shape, string theory can describe any possible universe.


Why are this comment and the parent comment downvoted? As someone who doesn't know about string theory it's hard to tell.


> you have two electrons in space a few cm apart and the accelerate away from each other due to the charges repelling. I'm not sure how that is supposed to happen if everything is strings.

At the level of quantum theory, you don't have charges repelling. You have charged particles, like electrons, exchanging other particles--photons in the case of the electromagnetic force. So really it's all particles (or all quantum fields, if you look at it another way). And you can make any particle (or quantum field) out of strings, so string theory is just the next level down, so to speak, where all of the particles--the ones that exchange other particles, and the ones that get exchanged--are just different vibration states of strings.


Do photons quantize and transmit the electric force? Are charged particles constantly sending a continuous stream of photons out in all directions?

I'm genuinely curious and haven't had luck finding this with Google - neither sites that confirm this, nor those that deny it.

Edit: /s/magnetic/electric


> Do photons quantize and transmit the magnetic force?

The magnetic force is just an aspect of the electromagnetic force ("interaction" is the word more commonly used in quantum field theory), so yes.

> Are charged particles constantly sending a continuous stream of photons out in all directions?

Virtual photons, yes; they aren't observed directly, only their effects--what we see as the electromagnetic interaction between charged objects--are observed. At least, that's one way of (heuristically) looking at what is happening. This way of looking at it has limitations, but it can be helpful.


> I'm genuinely curious and haven't had luck finding this with Google - neither sites that confirm this, nor those that deny it.

Wikipedia talks about this. https://en.wikipedia.org/wiki/Force_carrier .

> The electromagnetic force can be described by the exchange of virtual photons.


> /s/magnetic/electric

Same answer as I gave before.


The issue with string theory isn't an inability to explain what standard QM can explain; it's the problem that so many formulations of it may not be falsifiable. Making predictions alongside another falsifiable, real theory, without offering falsifiable predictions yourself... that's where string theory finds itself for the most part.

There are still some formulations that haven't been ruled out by observations such as LHC data, but not many, and they're not necessarily the ones that were anyone's darlings.


I've got a theory that string theory is more of a sociological phenomena than real science.

I've got to say, that comes across as awfully dismissive toward a whole lot of very thoughtful people. I'm not entirely sure what you mean by it. Certainly all of the string theorists I've known (it's my profession) have talked as if they believed their work was involved in a "genuine attempt" to model reality.

Maybe a physicist can make a living by just "doing math and writing papers" without any expectation of making meaningful progress toward understanding the world, but the people who really drive the field and get noticed are almost invariably folks who dream of changing the world, of being the next Hawking or Witten or even the next Einstein. If someone like that doesn't have a pretty solid hope that what they're working on is likely to be truly relevant to reality, they work on something else.

I think that others have already more or less answered your electron question. I might answer it this way: one of the great triumphs of 20th century physics was the realization that the details of how physics works on very small scales (or equivalently, very high energies) get very predictably "blurred out" when phenomena are measured at longer scales (or lower energies). The formal process involved is called "renormalization", and the punchline here is that essentially any ultimate theory of nature (whether that's string theory or something else) will reduce to an ordinary quantum field theory (with "renormalizable" fields) once you get a few orders of magnitude below its intrinsic scale.

That means in particular that any "stringy" physics (or any "loopy" physics, or even any simpler "grand unification" physics) will inescapably be completely washed out and undetectable when electrons interact from a few cm apart. All you need to understand is "simple" quantum electrodynamics, in which the two electrons exert a repulsive force on each other by way of their interactions with the surrounding electromagnetic field: interactions that can be described as being mediated by exchange of virtual photons as force-carrying particles.
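
(Just to put a number on how un-stringy that regime is: at a few centimetres the measurable effect is, to excellent approximation, plain Coulomb repulsion. A rough sketch, assuming a 3 cm separation:)

    # Classical Coulomb repulsion between two electrons a few cm apart.
    # The 3 cm separation is an assumed value chosen for illustration.
    k = 8.988e9      # Coulomb constant, N*m^2/C^2
    e = 1.602e-19    # elementary charge, C
    m_e = 9.109e-31  # electron mass, kg
    r = 0.03         # separation, m

    F = k * e**2 / r**2   # repulsive force on each electron, N
    a = F / m_e           # resulting acceleration, m/s^2
    print(f"F ~ {F:.2e} N, a ~ {a:.2e} m/s^2")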

For that calculation, it's not at all important to know that (e.g.) each electron is an extremely tiny string in a specific vibration mode, nor that they (more or less) emit and absorb virtual photons by pinching off into new tiny strings in a different specific vibration mode and then joining back up with them in a uniquely determined way. (This would be a good place for me to note that nobody knows exactly which scenario in string theory would correspond to our world, so nobody actually knows exactly what vibration mode corresponds to an electron. But we know a lot of ways in which things like that might work.) My point here is just that most of the questions you've asked are really quantum field theory questions: the string theory connection is almost entirely unimportant in that regime.


And that, to put it succinctly, is the problem. In the pursuit of a Theory of Everything, you wind up with a Theory of Anything. If there's no particular reason why things are the way they are in this universe, then you're reduced to the simpler observationally-verifiable model anyway since the elegant overarching framework is essentially useless. It becomes a philosophy of the ought rather than a description of the is. A theory that predicts everything predicts nothing.


Say you shuffle a deck of cards, and shuffle really well. Now you could spend forever drawing cards from the deck, analyzing the sequence, and trying to think of theories to predict the next card. But there is no ultimate reason behind why the deck was shuffled into this order and not some other.

Perhaps the shape of natural laws (the four fundamental interactions) and the values of natural constants have been arrived at by a similar process. Perhaps there is a multiverse of different universes with different variations of these natural laws, and we just happen to live in this one.

If this is how our universe came to be, shouldn't we be ready to accept it, instead of searching meaning where there is none?

Then again, this is just an idea. And not really even falsifiable. So it's difficult to say if this is even a scientific idea, in the strict sense.

Or perhaps Einstein was right, perhaps God didn't play dice (I am really misusing this phrase here), and there is an underlying mechanistic model which explains everything.

Who knows.


The deck of cards analogy is sort of getting to the anthropic principle -- the natural laws could have come out any number of ways, but most of the other shuffles wouldn't result in us being here to contemplate them.


The reality string theory tries to describe is more general than the reality we can observe. It's like finding the primes for what is possible. The meaning of the primes on the next level up is moot.


>most of the questions you've asked are really quantum field theory questions: the string theory connection is almost entirely unimportant in that regime

My issue with the two-electrons-repelling scenario is that in normal quantum field theory it's modeled, I think, as the exchange of virtual photons. Given that we don't really understand what a photon is, and that photons have odd spread-out kinds of properties that seem to allow that sort of interaction between distant things, I can accept that that happens even if we don't fully understand how. When you get to "emit and absorb virtual photons by pinching off into new tiny strings" and similar ideas, though, I just can't see it. My hunch is that electrons and photons are in their base nature more abstractly mathematical, and attempts to model them as bits of stuff will fail.


I'll be honest: high energy physics isn't really an area where "hunches" carry a lot of weight, at least if those hunches aren't grounded in a pretty thorough understanding of particle physics and general relativity.

The language of "pinching off into new strings" is of course largely an analogy, one which captures some of the mathematical features of string interactions reasonably well while probably obscuring others. What's really going on underneath that analogy is just as "abstractly mathematical" as you could ask for: a (boundary?) conformal field theory defined on the 1+1-dimensional string world sheet, which allows just a single unique self-consistent mode of interaction. It is complicated, but honestly, so is standard quantum field theory when you really dig into it, and if you're familiar with the underlying math and physics this description is surprisingly elegant.


The author says that he "became more convinced [string theorists] are merely building a mathematical toy universe." The explanation for his conclusion was that they kept revising string theory in order to fit observations. I know little of string theory, so I am probably misreading his meaning, but isn't that how all science works? Create a model, then revise it when you get more data.


True, of course. But at some indeterminate point theories start seeming awfully ad-hoc even if they still retain other properties.

Let's say I propose that no existing solar panel designs respond to artificial light. Easily falsifiable. Say you perform the experiment and find that panels from First Solar do respond to artificial light. I refine my hypothesis to "no existing solar panel designs respond to artificial light except for First Solar". You find that SunEdison panels do, by testing at your lab in SF. I refine my hypothesis to "no existing solar panel designs respond to artificial light when tested outside San Francisco except for First Solar". You repeat the experiment in San Jose, and I refine my hypothesis again.

We're revising the model, keeping it falsifiable, and keeping it consistent with existing evidence. It would not be without merit to claim that this is something of a toy hypothesis, however, and that our time would be better served otherwise.


This sounds like the scientific equivalent of "overfitting" in machine learning. The point of a theory is not to predict the training set well, but to generalize well to new cases. This is what string theory fails at.


NNs are modeled after the human brain. It's not immensely crazy to point out that we could be overfitting ourselves by continuously exposing ourselves to a single theory.


The successful backpropagation algorithm works completely differently from anything that has been observed in the human brain up to now.


You misunderstand. First: it is not the case that new observations about the world forced the changes. Instead, it is mostly observations about the internal properties of string theory which have forced modifications.

After it was proposed as a self-consistent theory of quantum gravity, it was realized it wasn't self-consistent because it had ghosts, which was then fixed by adding 22 dimensions. But this is a problem because that's not how many dimensions the world has, so they solved it by compactification, which folds all the extra dimensions up so small that they don't bother us. But then it turns out that there is no particular reason to fold just so, so that we get the world we live in. And now they are trying to find a way to make it natural that the folding happens just so that we get the world we live in.

Secondly, and most importantly, while it is true that some models get revised whenever there is more data, those are poor models. Good models agree with all the data and don't need modification. (The best models agree with most of the data, but sometimes don't, so that the scientist gets a mystery to solve!) Realistically this stage takes a while to happen, and proponents of string theory argue that they have a model that is going to be great one day, but right now needs a bit of tinkering. What critics say is that string theory has been in the "tinkering" stage for around 50 years now, and maybe we should try other ideas a bit more.

Finally, I don't think Sabine likes being referred to as a "he"; she would rather have that pronoun used to describe the father of her children than herself.


Don't the "best" models make surprising predictions that we go out of our way to design experiments to collect data for - and find that they are, in fact, accurate?

Has string theory made a single prediction anyone went out of their way to test, finding it accurate?

I don't have a problem with shoveling in a bunch of dimensions. If it's shoveled in carefully so as not to predict anything that can be tested, though, I can get that entertainment from watching flat Earth videos.


> Don't the "best" models make surprising predictions that we go out of our way to design experiments to collect data for - and find that they are, in fact, accurate?

Models like e.g. the heliocentric solar system, or even relativity, weren't developed that way - they were developed by noticing flaws or incompatibilities in existing models, and proposing a better explanation for them. (The much-touted "experimental verification" of relativity in an eclipse was a nonsense - the errors were as large as the measurements - but it didn't matter; the theory was elegant enough to be obviously correct).

String theory is mostly still at the fiddling-with-the-epicycles-and-thought-experiments stage. But at least it has a model that contains a) standard QM and b) a graviton. None of the competition has even got that far (and there's little reason to think they ever will in most cases).


Well, nothing describes how "all science works", but ideally you don't revise your model based on new data. Instead, you take all of the data you have, both old and new, start over from scratch and try to find the simplest model that would account for it. What your old favorite model used to be, before you obtained the new data, should not play any role in your evaluation of competing models.

Given how real people and organizations work, psychology, resource limitations, incentives, etc., this is VERY idealistic, but still, imagine that you had found your data in a different order, so you ended up with the same data you have now, but had a different subset previously. With two different subsets in the past, your best past models might have been different from each other, but your best choice now should be the same now that both paths have converged on the same data. So why should your choice in the past have a "vote" in your choice now?

So (again, ideally), don't "revise" your old model; take the data you have now, old and new, pretend you are starting from scratch, and choose the best model. If a theory such as String Theory keeps failing to account for new data and is repeatedly modified to keep it from being disqualified, a reasonable question would be whether, if we started from scratch knowing what we know now, we would come up with this repeatedly patched String Theory version as our first choice.

(And, yes, I know that from a Bayesian perspective you can't literally start "from scratch", but that doesn't mean you have to use your most recent model as your prior).


It's called adding epicycles : http://rationalwiki.org/wiki/Adding_epicycles


I mostly agree, but I'd argue that epicycles actually made useful and testable predictions, although we found a simpler description later on. The article underlines a lack of refutable theories. Thus, I think [1] is a better analogy.

[1] http://rationalwiki.org/wiki/The_Dragon_in_My_Garage


The characteristic of metaphorical epicycles is that they make simple, testable predictions that are then wrong, and which are then fixed with another epicycle.

Another, more subtle, aspect of epicycles - real ones this time - is that they are too powerful and can be used to prove anything. You can predict the motion of the planets with epicycles; it's just that the required series is very long or infinite. And with very long series of cycles, you can "predict" anything: https://youtu.be/QVuU2YCwHjw?t=25s Thus, one of the problems with epicycles, both real and metaphorical, is that they are indeed not refutable. Because epicycles can predict anything, they aren't that useful; they exclude far less than meets the eye at first.
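
(If you want to see the "epicycles are a Fourier series" point in code, here's a minimal sketch: sample any closed curve as complex numbers, take its discrete Fourier transform, and every coefficient is one epicycle. The particular curve below is an arbitrary made-up example; the point is that adding enough terms reproduces essentially any closed curve, which is exactly why epicycles on their own exclude almost nothing.)

    # Epicycles as a discrete Fourier series: each DFT coefficient is a circle
    # of radius |c_n| rotating at integer frequency n; summing the circles
    # retraces the sampled curve.
    import numpy as np

    N = 256
    t = np.linspace(0, 2 * np.pi, N, endpoint=False)
    orbit = np.cos(t) + 0.5j * np.sin(3 * t)   # some arbitrary closed curve

    coeffs = np.fft.fft(orbit) / N             # epicycle amplitudes and phases
    freqs = np.fft.fftfreq(N, d=1.0 / N)       # the integer frequencies n

    def reconstruct(num_epicycles):
        """Rebuild the curve from the num_epicycles largest Fourier terms."""
        biggest = np.argsort(-np.abs(coeffs))[:num_epicycles]
        return sum(coeffs[n] * np.exp(1j * freqs[n] * t) for n in biggest)

    err = np.max(np.abs(orbit - reconstruct(10)))
    print(f"max error with 10 epicycles: {err:.2e}")  # tiny for this simple curve; a messier one just needs more terms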

It isn't hard to see that characteristic showing up in string theory. The theory has for a very long time had problems with excluding possibilities, and each new metaphorical "epicycle" seems to come with more parameters than the last, rather than fewer. Now, this isn't unique to string theory, since all the current theories seem to have that problem, but then, the point is, why does this problematic theory have so much more support and money than the other problematic theories?


>Another more subtle aspect of epicycles, real ones this time, is that they are too powerful and can be used to prove anything.

String theorists talk about how rich and complex the mathematics is, but maybe that just means it has so many possibilities you can always come up with an explanation to overcome newly-discovered difficulties.


Whack realization of the day for me at least.

Epicycles --> Taylor series.


They're more closely related to Fourier series. That's how somebody calculated the actual values that would yield Homer.


Perhaps off-topic, but I really like the classification of that article:

This might be Skepticism but we're not sure


What's wrong with that? Epicycles are a good testable model. They were discarded when a better one was developed.


Well, that's really all that is wrong with it: you are adding epicycles instead of working on a better theory. By analogy: once a language is Turing complete you can do anything in it, so you can keep using brainfuck to make a GUI, but it would probably be easier in Java...


What's wrong is that you're trying to make a round peg fit a square hole.

The problem was not that it was discarded, but what happened between new data and getting discarded and how long it took.

You don't want epicycles because theories should be "simple", and epicycles are added exceptions.


The revision in the face of observations relies on them being new observations. You can make an almost infinite number of models that fit existing knowledge; the way you figure out which one to go with is by having the model make testable (falsifiable) predictions, and then testing those predictions. The string theory revisions, however, seem to fall out of conflicts between the theory and existing observations (like, say, that the dimensionality of our physical space is 3, or that the cosmological constant is positive).

For example, general relativity could make Newtonian mechanics fall out as a low-mass, low-velocity special case, and solved the already-known theoretical problem of the constant speed of light, but there are a lot of conceivable models that could solve that. To be accepted, its predictions needed to be tested (starting with gravitational lensing in 1919, and going through higher-precision tests later).

This is related to the problem of overfitting in machine learning. You can get a system to be very very familiar with your training data, so it can predict the things it has already seen. However, until you have validated it on data it wasn't trained on, you don't know if the model reflects any of the underlying properties of the system it's observing.


Theories serve multiple purposes. A theory can be useful to make calculations about things you already know, or to make predictions about things you don't yet know. String theory has been adapted over time to be able to calculate things we already know, but we already had that without string theory. The predictive part is where string theory really falls down: its track record there is abysmal.

On the other end of the predictive spectrum there's Einstein. Somebody fiddles with the equations and says, "if this is right then there could be black holes!" And then... they find black holes. It seems that every few years there's some confirmation of an odd corner case that shows the predictive power of Einstein's theories.

That string theory can calculate the known might still be useful, if it's a more convenient way to get the results than other theories. Plain old Newtonian physics is still hella useful today, for instance, even for space missions. AFAIK, string theory is not more convenient.

Take this with a grain of salt, because I haven't kept up on what string theory has been doing lately. As a matter of personal triage I stopped following it until such time as I heard that someone predicted something interesting with it, and it was confirmed in observation or experiment. If that happened then I missed it, and would love to hear about it.


The author is a she.


The point is that a theory is only scientific if it's falsifiable. That is, it makes a prediction, you go into the lab and do an experiment, and if the experiment comes out differently from the prediction, you know the theory was wrong.

The author and I got the impression that string "theory" (let's call it "string hypothesis" instead?) has so many degrees of freedom that it can predict basically anything you throw at it, even things we now know aren't physical, so its predictive value is zero.


Think of it this way:

"I have solved all of math, physics, and everything else ! Look ! My theory is simply the assembly programming language. Any problem can be expressed in it and anything that we can predict we can predict using some assembly language".

You wouldn't consider this to actually predict anything, right? Despite the statement being perfectly true. Change any prediction into the assembly program printing it out. Done/done. So the critique of string theory is that it is such a form of assembly language. It is a general principle that can express nearly any algorithm.

So generally one considers that any theory, for it to be a theory, has to predict things unrelated to why it was originally designed, with no changes (or at the very least, very minimal changes). Quantum theory, for instance, was designed to solve a particular problem relating to electrons, but turned out to solve the ultraviolet catastrophe (and dozens of other problems).


It is not obvious that the Universe is computable. https://en.wikipedia.org/wiki/Digital_physics


Think flat world, and the plethora of adjustments that astronomers introduced to it when observations conflicted with theory. At some point you have to throw in the towel and come up with a completely different explanation of whatever you're observing.


Maybe you meant the geocentric model of the cosmos developed by Apollonius, Hipparchus, and Ptolemy, full of epicycles and so on? (As contrasted with the heliocentric “Copernican” model.)

Every serious astronomer in the West (indeed, every educated person) has known that the earth is round for about 2500 years. The idea that medieval Europeans believed in a flat Earth is largely a modern myth. See https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth


Science also prescribes that you use the smallest (simplest) model that fits the data. String theory arguably does not adhere to that.


Yes. But you must generally increase the predictive power of the theory for it to make sense - otherwise you are creating a religion or mythos. After all, that was how the ancients explained the world - this rock is here because Hercules was mad and threw it all the way from the Danube.


This is probably a naive question: can we input a bunch of particle interactions into a deep learning system and train it to predict the probability of future interactions? It would be like a "black box version" of physics. If we can predict, then we can find a more elegant mathematical notation and a more intuitive physical interpretation.

Machine learning can observe and learn patterns that are more complex than humans can grasp. What if the perfect Physics theory is more complex than humans can understand and possibly quite unintuitive in meaning? Then we won't like it and steer away from it, and that would be a bad thing in the end.


Short answer: no.

Long answer, the input you are thinking of (particle interactions) is already what partly defines the theory you are trying to uncover. So you have a serious chicken-egg situation.

You could think about inputting the data that comes out of particle detectors to look for patterns etc. and that sort of thing is being done already. However there is a difference between finding existing patterns and then figuring out the underlying theory which is the mathematical construct that lets you predict things.

Finally, even if you could have a "black box" predict stuff (correctly), the black box-ness is a serious problem for scientists from a meta-science or philosophy of science point of view, and people would be highly unsatisfied until they actually understood what was going on.


> Machine learning can observe and learn patterns that are more complex than humans can grasp.

That is a common misconception. ML cannot do anything beyond our modeling ability because it is designed with it. Deep learning is simply a method to approximate a function with a nonlinear formula. Something that cannot be easily approximated this way may require too much memory and power to be practical.

It is fundamentally similar to how JPEG is not a good fit for storing text: glyphs are hard to approximate with Fourier transforms.

The edge that ML has against humans is not in the learning part, it is in the machine part. Human memory is volatile, while we have grown exceedingly good at making machines retain memory.


That is not a valid argument. You need to provide a reason why the patterns that machine learning grasps are all graspable by humans, or why humans grasp something that machine learning never will. Multilayer neural networks can capture very interesting (from a human perspective) patterns and concepts, but also many others that seem like garbage to us (perhaps because we don't grasp their significance).


Give me a ML system, and I can give you a problem it cannot solve. I am guaranteed success thanks to the No Free Lunch theorem: https://en.wikipedia.org/wiki/No_free_lunch_theorem.

In the case of deep learning, I can point to the task of determining values above 0.5 on an infinite Perlin-noise-derived 2D space fed by Mersenne Twister with seed 0, with an infinite number of octaves. Deep learning does not deal well with infinite spaces to begin with, and the pseudo-random generator cannot be easily encoded with common neural network nonlinear functions.

On the other hand, while we cannot compute an infinite number of octaves, and while extremely distant regions or extremely small details will run into IEEE 754 limitations, we will get a good approximation by writing the program that computes the texture.

And that is just with only two numbers as input and one number as output.


Sure, I choose... "Exhaustive Search in the space of programs". (Maybe with some genetic algorithm heuristics to shave off a couple billion years on each query.)

It's an ML system that can solve any decidable and even some semi-decidable problems. Which is (if the Church-Turing thesis holds) everything that can be understood by humans or otherwise.

You might not be able to wait around long enough to see it give you a result though, but hey at least _it_ got an answer.


If you allow impractical ML systems, you might as well pick a dice. Sure, the answer is inaccurate, but there's a non-zero probability that it is correct!

But, realistically, the ML system you devise cannot learn about features that require knowledge outside of the observable universe.


How does a static dice model computational processes?

What does the universe have to do with the set of computable functions?


Humans are neither immune to No Free Lunch, nor able to predict Perlin noise.


They're definitely not immune to No Free Lunch, but they are able to predict Perlin noise.

(Sure, you could try to analyse humans as if they were spherical objects floating in void, but in practice humans have computers.)

Let me give you an example. The 2011 Nobel prize in chemistry was dedicated to the discovery and analysis of quasicrystals. Those also cannot be modeled by deep learning, as its building blocks, linear separators, cannot finitely appreciate infinitely generated structures (unless it essentially encodes a completely different ML system within its neural network). Yet humans can model them.

I could go on all day about this, as there is an infinity of problems where deep learning is inadequate: proving the four-color theorem, routing, computing multiplications, …

Don't get me wrong: deep learning is outstanding for a set of menial tasks that I love to see being handed off to machines. But it is not the be-all, end-all that is sometimes claimed.


In principle it's not impossible.


> Deep learning is simply a method to approximate a function with a nonlinear formula.

I think that was single-hidden-layer neural networks. Deep learning improves the behaviour of the approximated function for values that were not in the training set, beyond what was possible with a single layer (which was already sufficient to approximate any function).

Deep doesn't improve the ability to approximate the function's results, but it improves the ability to approximate the function's implementation, giving better results on data the NN was not trained on.


I wonder, given a big set of (time1, time2) pairs (with relatively short dt) of the positions of the planets in our solar system as seen in the night sky on Earth, whether a machine learning system would be able to come up with a good way to predict where the planets are going, given their positions.

Surely a machine learning system would need to be able to solve this 18th-century physics problem before we throw modern physics problems at it.
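
As a toy version of what I mean (a sketch only, and a deliberately easy case): feed a learner (position at t, position at t + dt) pairs from a circular orbit and let it find the update rule. For a circular orbit the true rule is a fixed rotation, so even plain least squares recovers it; real ephemerides would obviously need a much richer model.

    # Learn the one-step update rule of a circular "planetary" orbit from
    # (state at t, state at t + dt) pairs using ordinary least squares.
    import numpy as np

    dt, omega = 0.01, 2 * np.pi                 # time step and angular speed (arbitrary units)
    t = np.arange(0, 10, dt)
    xy = np.column_stack([np.cos(omega * t), np.sin(omega * t)])  # positions over time

    X, Y = xy[:-1], xy[1:]                      # pairs: state at t -> state at t + dt
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # learned 2x2 linear update (a rotation)

    # Roll the learned rule forward from the first position and compare.
    state = xy[0]
    for _ in range(len(t) - 1):
        state = state @ W
    print("predicted:", state, "  true:", xy[-1])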


Pretty sure what you will end up with in the best case is a numerical integrator of the solar system as represented by a neural net. I also expect it to have pretty crappy stability, but it's an interesting question what sort of time-step method it would approximate.


Maybe advanced AI of that nature could be the way forward in physics. I was quite good at physics at school, etc., but I have quite a job mentally even figuring out how a gyroscope works, and my brain totally fuzzes over at attempts to combine quantum field theory with general relativity. It may be that the theory of everything is there but just a bit hard for our ape brains to process. It would probably require better AI than we have at the moment, though.

A little closer to what you asked about, Demis Hassabis of Deep Mind has kind of suggested:

>I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.


So, how many bits are you giving the deep learning machine? If you give it enough bits, you aren't predicting anything -- you're just feeding it experimental data which it more or less spits back out.

For an elegant solution, one would need to have deep learning that also optimises on a constrained set of bits.

I feel like in this scenario, humans perform much better. The main strength of computer learning is being able to harness a massive dataset and really good storage capabilities.

Also, you would need to impose a "consistency" constraint on the computer which would be hard to do. Like a computer might say if (mass > 5) do this; else do that. And that is valid computationally. We do that in our split between gr/qm. But in some sense we feel this is wrong physically. The universe shouldn't run on arbitrary if statements.

So I think the answer is just no: the computer algorithms we have today can't handle the problem's constraints.


Enough bits to make it efficient at predicting. When it predicts better than the current theory, it starts getting closer to having enough bits. That's the beauty of ML, you don't need to worry about these details if it gives good accuracy.

My intuition was that there could be different ways to explain the laws of Physics that don't look like the current ones which evolved based on human intuition, math and language ability. A non-anthropocentric Physics if you will.


>That's the beauty of ML, you don't need to worry about these details if it gives good accuracy.

I get very alarmed by this. At work we have several examples of ML systems that have done good things for many years before suddenly and inexplicably blowing up and producing nonsense. Our folk explanation (as we have failed to produce anything resembling a proper one) is that the captured models in some cases appear to replicate reality but are missing some fundamental part of it, which later comes into play and destroys their predictive power. The domain theory changes in a sense; in another sense the driver was there all along but just hadn't featured in this part of the regime.


In ML, they might say you were overfitting. Predicting is all well and good, but predicting too well can indicate the machine hasn't really learned anything. It's just spitting back nearly identical information as the original.

It sounds like that is what this black box is supposed to do. All you need for a perfect black box is every possible data point...


So we have instance-based learning, which basically is about approximating this, and support vector machines, which abstract the idea into representing the space of decisions represented by all the data points via a kernel, which is a mapping function. The problem I see, which deep learning seems to be "winning on", is that the space of instances that you have does not either abstract a meaningful theory of the distribution of these points (therefore predicting outside of this space) or describe things exterior to that space - but that exist, or might exist. A scientific theory is considered valid if it predicts things that have not been seen yet (and you can then test that, hence the post today about "string theory: enough"). I think IBL and SVMs can't represent the unseen. I used to work using something called inductive logic programming (I used a tool called Progol that Stephen Muggleton made, it was good!) and I thought that that did produce such insights, sometimes, but it turned out that often these were actually due to me fiddling. Some play projects I did recently made me think that conv nets were doing the same sort of things, but it is/was harder for me to catch and show this (and as I say, I ended up thinking that I'd often fooled myself with ILP, even though it was very useful).

The old skool difficulty I have with deep nets is that I used to think in terms of structural risk minimisation vs empirical risk minimisation when dealing with overfitting. The idea was that if you had the right size of information store in your learned system, and it was experimentally producing results that showed it was an effective predictor, you could say that it was generalizing properly. Deep nets seem to me to have all the information-storing capability of the domains they address, and I worry....

But I am shocked, shocked, by how well they seem to work.


The idea that the universe runs on a simple program with simple rules hasn't failed us so far. All we have to do is figure out what that simple program is with our capability for abstract symbolic thinking and reasoning; machines are presently very bad at this while humans are less bad at it.


>The idea that the universe runs on a simple program with simple rules hasn't failed us so far.

Uhhhh... yes it has. The universe has myriad levels at which a simple program works. If the universe "really" runs on a simple program with simple rules, it's got so large a parameter space that even knowing the program itself is mostly useless to us.

https://arxiv.org/abs/1303.6738


A simple program running for billions of years on trillions of bits... of course the result isn't going to be "simple." Still, knowing what the code is would hardly be useless, and we are spending a lot of resources trying to figure it out.


Look up automated proofs. This exists. It's largely useless, due to mathematical properties of logic. Every mathematical system has things which are true but can't be discovered from first principles.


> Every mathematical system has things which are true but can't be discovered from first principles.

This is not true; see Gödel's completeness theorem [1], which states "... that a deductive system of first-order predicate calculus is 'complete' in the sense that no additional inference rules are required to prove all the logically valid formulas." It basically means that every "true" statement is provable with a good enough deductive system.

What you are possibly thinking of is Gödel's first incompleteness theorem [2], which is quite a different statement. It states that in "most" formal systems (those complex enough to describe the natural numbers) there is always a statement A such that neither A nor "not A" is provable. It basically means that these statements can't be "logically valid", which means that there exist models of the formal system where A is true and models where "not A" is true.

[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_th... [2] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


Nah, P vs NP is a bigger factor here.

Basically, checking a proof for correctness takes polynomial time in the size of the proof. (Depending on how you formalize your proofs, that might even be linear.)

Coming up with a reasonable sized proof is much harder. But if we had P=NP, then checking and finding a proof would be about equally hard.

See eg http://www.scottaaronson.com/papers/philos.pdf


What would be the advantage of that over doing zillions of particle interactions in an actual particle accelerator?

If history is any guide then theories of physics are extremely simple. All the way from classical mechanics to special relativity, general relativity, quantum mechanics, quantum field theory, the theories are simple. In fact in some sense our theories have been getting simpler.

Take for instance the step from classical mechanics to special relativity. In classical mechanics time and space are not on an equal footing. Time is the independent variable, and the space coordinates of particles are functions of time. In special relativity these are put on an equal footing, and this simplifies the physics. It's not easier, but it's simpler in the sense that there are fewer parts. You used to have energy, plus x-momentum, y-momentum, and z-momentum for the 3 spatial directions. It turned out that energy is just the momentum in the time direction. This simplifies the equations because instead of having one equation for energy and another for x,y,z-momentum, you have one equation for x,y,z,t-momentum. Symmetries between electric and magnetic fields in Maxwell's equations were discovered that unified them into one object, just like with energy and momentum. A similar story exists for quantum mechanics.
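
To make that concrete, here's a tiny numerical check (units where c = 1, and the mass, momentum and boost speed are made-up values): a Lorentz boost mixes energy and momentum into each other, but the single combined quantity E^2 - p^2 = m^2 stays fixed, which is the sense in which they are "one object".

    # Check that a Lorentz boost mixes E and p_x but preserves E^2 - p_x^2 = m^2.
    import math

    m, px = 1.0, 0.75              # rest mass and x-momentum (arbitrary values, c = 1)
    E = math.sqrt(m**2 + px**2)    # relativistic energy in the original frame

    v = 0.6                        # assumed boost speed, as a fraction of c
    gamma = 1.0 / math.sqrt(1 - v**2)
    E_boosted = gamma * (E - v * px)
    px_boosted = gamma * (px - v * E)

    print(E**2 - px**2, E_boosted**2 - px_boosted**2)  # both equal m^2 = 1.0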

What you suggest about modeling physics as a black box has been suggested before. After the first particle accelerators were built people were finding lots and lots of new particles, and there wasn't any good theory for them. Some people thought that we had to give up on a theory, and just model the real world as a black box of X particles in -> Y particles out, and make a big database of such interactions. However, then the standard model came along and all those particles turned out to consist of a much smaller number of quarks that interact in a small number of definite ways.

With string theory however we are in precisely the opposite situation. We don't have a big amount of experimental data that we need to find a theory for. We have a large number of theories and no experimental data to distinguish them.


I don't think so.

I wish the Frame Problem[1] got talked about a little more, because it's one of the fundamental challenges for intelligent beings/systems. I think we'd see fewer grandiose claims about AI if people spent a little more time pondering it. But I'm getting ahead of myself.

You can think of the Frame Problem as the problem babies have when they first enter the world. You've got eyes that can look anywhere, ears that are hearing sounds at every frequency, every touch receptor in your body is feeling something... There is data coming in from every pore and it's all noise.

Where to start? You can randomly twitch a muscle, but there are a lot of muscles. The likelihood that anything coherent will happen is very small. You could just pick sensors at random and try to correlate them with each other, and that lets you pick up some regularity, which would be useful for perception, except that regularity is still totally valueless. You can figure out how to see lines moving across a visual field, but there's no way to know if they are good or bad. Or what you might want to do with them. There is a sense in which it doesn't matter how much computing machinery you have, it's impossible to learn to perceive the world without outside help.

That's the frame problem.

Now, if you are a human, you get bootstrapped into the world through social interaction. You have very dumb insect-like circuits in your nervous system that make black dots with white on either side (i.e. another human eye) look very enticing to you. Before you've had any indication about whether these other eyes are a good thing or a bad thing, your muscles will twitch and you will (clumsily) orient all of your sensors (which are just screeching noise at you) towards whatever the other eyeballs around you are orienting towards. There are a handful of buttons on your body that create pleasure (warmth on your skin) and pain (pressure on your skin). That combined with some places to look gets you started. A human being builds from there, but we continue to get a ton of help throughout our whole life.

Machine learning is the same way. You can train a network to pick a dog out of a bunch of pictures of cats, but you have to tell it the difference between dogs and cats first, so you can feed the network a training set that it can learn from. If you just fed the network the pictures, without categorizing them, all it would see would be noise.

Back to physics... You can feed as much data as you want to a network, "deep" or not, and it will learn nothing. You need to be able to coherently structure it first. In chess that's easy: wins and losses. In physics, not so much. What's a win in physics? Explaining a situation that a bunch of other theories can't explain. So you'd have to structure all of the existing theories in physics in some machine-readable way. And then you'd have to somehow generate the space of all possible theories and all possible experiments....

And that still only gets you basically to where the baby was when they only had some pain and pleasure receptors. I.e. you are still totally screwed by the frame problem. Because you have no idea where to start looking through all of those possible experiments and all of those possible theories.

[1] https://en.wikipedia.org/wiki/Frame_problem


That's been solved by the most basic approach to ML: Supervised learning. A simplistic way to describe it would be that you say to the ML algorithm: "Here's two states, one at time N and one at time N+1. Your task is to predict the state at N+1 given the state at N. We have a bazillion of such state-pairs that you can test and refine your model on. Go nuts."


Supervised learning is what I'm talking about, and it's not a solution to the frame problem. The human handler knows roughly the answers they're looking for, and the machine learns details. The machine isn't discovering a new idea; it's just learning how to perform well inside a very small, well-defined game that a human made.


It actually kind of has been done, can't find a link now...


I was actually thinking about this earlier in the day. Could we derive the law of universal gravitation if all we had was a bunch of data about the positions of objects at times plus a bunch of machine learning algorithms, and were otherwise complete idiots?

But in this case, no, because there's nothing in the particle data that we have that can't already be explained by the standard model, plus any number of other possible models. If only we could find something in the data that we couldn't explain, we might be able to make some progress, but that data doesn't exist.


possibly but the answer might be 42


The author is taking a different approach called phenomenological quantum gravity that has some promise to be more productive in the long term.

http://backreaction.blogspot.com/2013/06/phenomenological-qu...

From what I understand, she and others are trying to understand what empirical differences a quantum gravity theory of any sort would have, and hopefully will come up with one that can be experimentally observed, which in turn would help people figure out what is the correct quantum gravity theory.


> Even with that problem fixed, however, it was quickly noticed that moving the superpartners out of direct reach would still induce flavor changing neutral currents that, among other things, would lead to proton decay and so be in conflict with observation.

Dissonance strikes again!


i vote we change the name. "generalized quantum mechanical phenomena maybe" ... and we can rename quantum gravity to "generalized macroscopic versions of quantum phenomena maybe"

then we can add the "maybe" suffix to all the rest of our theories


Article isn't about TCL.


This ties in with my thinking that there is just too much funding to do research for the sake of research. Let the private sector work on moonshots if they want but more realistically no one should be researching super far out problems. Instead the agile "just in time" approach needs to be used in academia as well as private companies.

A good analogy is no one was trying to build electric cars fifty years ago but now they are. But they only are because progress has been made indirectly in other fields to make it worth it now.


Babbage designed a computer almost 200 years ago. We need people to be working on moonshots. A lot of them will fail. When one finally succeeds it pierces the veil of "just in time" capitalistic motivation. The financial incentive is certainly perverse in this case. Everyone sells their ideas up front in a frenzied competition to exist. We need a lot more research for the sake of research. More varied and more useless. We need to find a way for people like Babbage to exist, at scale, while seeking whatever intrinsic motivations drive them. We likely won't be able to know if they're doing anything useful until they're done. Academia needs the ivory tower.


> A good analogy is no one was trying to build electric cars fifty years ago but now they are.

People were trying to make electric cars 120 years ago and succeeding. [1]

[1] https://en.wikipedia.org/wiki/Electric_car#History


True. On the other hand, nobody was trying to build them fifty years ago.


I think you are throwing the baby out with the bath water here. Science is a cornerstone of the society we live in. To actively explore reality to better understand it is one of humanity's greatest achievements.

Flamebait: Should we stop funding schools because most kids are bad at reading?


That's one perspective; however, collectively, as a society, we see value in funding such research. I think that's a better approach.

> But they only are because progress has been made indirectly in other fields to make it worth it now

Yeah, like the fundamental 'moonshot' research done in the 1700s, 1800s and early 1900s, which now serves as foundation for all electronics.



