> Protein folding is still an unsolved problem, and I’m dubious of the notion machine learning will ever solve it, but hopefully we get some helpful science out of it.
As a working hypothesis, protein folding assumes that a protein folds into the globally lowest energy configuration. And that's a good assumption for a start.
However, nature isn't magic and can't magically solve global optimisation problems. If there's a region in configuration space with a local minimum and high enough energy 'walls', the protein can get trapped there and remain stable.
For reasons of computational complexity, I agree that machine learning will probably never solve the global minimisation problem. But the complicated and messy local optimisation problem that we see in reality might very well be solvable eventually by something like machine learning.
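To make the "trapped in a local minimum" point concrete, here's a toy sketch (nothing to do with actual protein physics, just an illustrative 1D energy landscape I made up): plain gradient descent on a tilted double-well settles into whichever basin it starts in, and never sees the lower global minimum on the other side of the energy wall.

```python
# Toy 1D energy landscape E(x) = (x^2 - 1)^2 + 0.3*x: a shallow local
# minimum near x = +1 and the deeper global minimum near x = -1,
# separated by an energy barrier.
def energy(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    # Plain gradient descent: only ever moves downhill, so it cannot
    # cross the barrier between the two basins.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Start on the right: stuck in the *local* minimum (~ +0.96).
x_local = descend(1.5)
# Start on the left: finds the lower-energy global minimum (~ -1.04).
x_global = descend(-1.5)
assert energy(x_local) > energy(x_global)
```

Kinetics (where you start, what barriers surround you) decides the outcome here, not the global energy ranking.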
Why are you dubious? Where do your objections come from?
Great points about the energy minimisation issue. Funnily enough, this is actually a problem with de-novo protein design at the moment: the designed proteins are _too_ stable compared to natural proteins. Proteins are often not static shapes, they are machines that need to be dynamic - in other words, what you said: they do not live at some deep global optimum.
> I think what you said only depends on the minimum being relatively flat (instead of deep); but it doesn't matter whether it's global or local
No. There is no such thing as a "global minimum" energy conformation, because the conditions vary wildly. Many protein structure changes are brought about by changes in the local chemical potentials and even electric fields. This is not something you can get a good grip on by thinking in terms of "flat minima".
> Why are you dubious? Where do your objections come from?
That the results the machine learning techniques provide are still nondeterministic.
Meaning that they are, in terms of identifying other local minima that satisfy the constraints, as good as a guess.
If the provided solution also came with a systematic method of modification to derive all other solutions that satisfy the constraints, then I would be satisfied.
Without that, you are unable to say with certainty that your local minimum is correct, even if nature fails to adhere to the lowest-energy assumption.
> However, nature isn't magic and can't magically solve global optimisation problems.
I wonder sometimes. Let’s remember, this is an open question after all.
I have a long standing hypothesis that an algorithmic solution to the global optimization problem is what lends action potentials the appearance, or essence?, of what we mean when we speak of “consciousness”.
But I am more inclined toward the abstract aspects of the mathematics behind the problem, and leave advocacy for the current techniques to researchers developing practical solutions with them.
I applaud the people who toiled with X-ray crystallography to build the field to the point that a machine learning technique could be developed.
> That the results the machine learning techniques provide are still nondeterministic.
I think I know what you are trying to say, but 'determinism' or not isn't the problem. You can run machine learning methods completely deterministically: just use a pseudo-random number generator (and be careful about how you seed it, and be wary of the problems with concurrency etc.).
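For instance (a toy sketch of my own, not tied to any particular ML library): a randomized search becomes bit-for-bit reproducible the moment you give it its own explicitly seeded generator.

```python
import random

# A toy "stochastic search": random restarts looking for the minimum of
# f(x) = (x - 3)**2 over [-10, 10]. The method is randomized, yet with a
# fixed seed every run is identical.
def stochastic_search(seed, tries=100):
    rng = random.Random(seed)  # private generator, explicit seed
    best = None
    for _ in range(tries):
        x = rng.uniform(-10, 10)
        if best is None or (x - 3)**2 < (best - 3)**2:
            best = x
    return best

# Same seed -> exactly the same result, every time.
assert stochastic_search(seed=42) == stochastic_search(seed=42)
```

So "nondeterministic" isn't really the complaint; the complaint is the lack of a guarantee about what the answer means once you have it.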
> If the provided solution also came with a method of systemic modification to derive all other solutions that satisfy the constraints, then I would be satisfied.
> Without that you are unable to say with certainty that your local minima is correct even if nature fails to adhere to the lowest energy assumption.
Have a look at how integer linear programming solvers work. They use plenty of heuristics and non-determinism for finding the solution, but at the end they can give you a proof that what they found is optimal.
You are right that you don't get that kind of guarantee with current machine learning approaches. Though you could modify them in that direction. (E.g. if you added machine learning to an integer linear programming solver, you would hook it in as a new heuristic, but you would still want the proof at the end.)
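The core trick behind that "proof at the end" is branch-and-bound. Here's a minimal pure-Python sketch of the idea on a 0/1 knapsack (my own toy code, nowhere near what real ILP solvers like CPLEX or Gurobi look like internally): the search can explore in any order, even a randomized or learned one, but a branch is only pruned when a relaxation bound *proves* it cannot beat the incumbent, so the final answer is certified optimal rather than a guess.

```python
def knapsack_bb(values, weights, capacity):
    # Pre-sort items by value density so the fractional relaxation
    # (our upper bound) is cheap to compute at every node.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0  # incumbent: best feasible value found so far

    def bound(idx, cap, val):
        # Fractional-knapsack relaxation: fill remaining capacity
        # greedily, allowing a fractional last item. No 0/1 solution
        # on this branch can exceed this value.
        for v, w in items[idx:]:
            if w <= cap:
                cap -= w
                val += v
            else:
                return val + v * cap / w
        return val

    def search(idx, cap, val):
        nonlocal best
        if idx == len(items):
            best = max(best, val)
            return
        if bound(idx, cap, val) <= best:
            return  # pruned: *provably* cannot improve the incumbent
        v, w = items[idx]
        if w <= cap:
            search(idx + 1, cap - w, val + v)  # branch: take the item
        search(idx + 1, cap, val)              # branch: skip the item

    search(0, capacity, 0)
    return best

# Classic instance: optimum is 220 (items worth 100 and 120).
assert knapsack_bb([60, 100, 120], [10, 20, 30], 50) == 220
```

Swapping in an ML model to choose which branch to explore first changes how fast you get there, not the certificate you end with.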
> I have a long standing hypothesis that an algorithmic solution to the global optimization problem is what lends action potentials the appearance, or essence?, of what we mean when we speak of “consciousness”.
Sounds like woo. Protein folding in bacteria and yeast works pretty similarly to how it works in humans. In fact, we can transfer genes from us to yeasts to produce many of the same proteins humans produce. But you'd be hard-pressed to argue that yeast are sentient.
This reminds me of how some people claim that soap films are super special because those films can solve optimisation problems. See eg https://highscalability.com/why-my-soap-film-is-better-than-... If you put soap film between a bunch of supports, even if the supports have complicated shapes, the soap film will tend to minimise its overall surface area.
Of course, if you look deeper into it, and do larger scale experiments, you figure out that the soap only assumes a local minimum.
O, definitely woo. I tried to make that explicitly clear by using “hypothesis” and “appearance”.
My hypothesis is less “optimization solutions == consciousness” and more positing that our brains, “action potentials” was meant as cheeky shorthand for the human brain, use an “optimization solution” that we identify as “consciousness”, or as you put it “sentience”.
But to quote South Park, “and I base that on absolutely nothing”. ;P
I feel like there should be a much stronger effort to solve optimization problems with ML enabled guesses. It's arguably the most important problem to be solving to improve ML itself.
Humans, for example, can provide extremely strong guesses by just eyeballing travelling salesman problems, without doing any calculations. If we could use ML to take a problem and guess how to reformulate it with 95% of the search space cut out, we would be in a much stronger place. My gut says this should be theoretically possible, and is probably the mechanism that biological learning systems use under the hood to such great effect that it's OK to just use greedy and less efficient methods for the last mile of optimization, without something like backprop.
Humans can mostly only do these kinds of guesses for travelling salesman problems embedded in 2d Euclidean space. But we have pretty good heuristics for those cases to kickstart a solver, too. Give a human a general graph with arbitrary edge weights, and they'll be dumbfounded.
(I don't think you even have to go all the way to an arbitrary graph; I suspect a decent-sized graph with edge lengths embedded in 3d Euclidean space will already confuse humans. Definitely once you get to 4d.)
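The kind of cheap 2d heuristic I mean is something like nearest-neighbour tour construction - a sketch (my own illustrative code): it roughly approximates what "eyeballing" does, is fast and usually decent, but comes with no optimality guarantee, and on graphs with arbitrary edge weights it can be arbitrarily bad.

```python
import math

def nearest_neighbour_tour(points):
    # Greedy tour: start at point 0, always hop to the closest
    # unvisited point. O(n^2), no optimality guarantee.
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    # Total length of the closed tour (returns to the start).
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Four corners of a unit square: here the greedy tour happens to be
# optimal (length 4.0), but that's luck, not a guarantee.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbour_tour(square)
```

Solvers often use exactly this sort of construction to seed an initial incumbent before the real search starts.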
My point is not that we should mimic humans. My point is that there are probably learnable but inexplicable heuristics for solving gradient descent problems in general, derivable just from their formulations, that a neural net would be good at picking up.
> As a working hypothesis, protein folding assumes that a protein folds into the globally lowest energy configuration. [...] If there's a region in configuration space with a local minimum and high enough energy 'walls', this might be stable enough for the protein to be stable.
The working hypothesis you described was considered fairly obsolete some time ago. The current model is much more "most proteins fold to kinetically accessible states". The assumption of a global lowest-energy state led to a lot of wasted effort and misled computer scientists. But along the way to this understanding we learned an awful lot about the forces that affect folding; see for example "hydrophobic collapse".