Hacker News

This is awesome and a huge advancement, but one thing that worries me with an AI solution is that it doesn't really draw us any closer to the why. Why do proteins fold the way they do? We can predict the resulting structure, which is extremely significant, but we have no clue why. While we gain the ability to predict some structures, we don't gain insight into why things happen the way they do. In some cases, like this one, that might not matter, but in other cases that insight might be far more significant than answering the original problem. Of course we can revisit the problem armed with the additional predictions AI gives us, but this can be hazardous: what if some specific sequence folds in a way that we, and thus the AI, have never seen, and it goes unnoticed? I'm not a biologist, so I can't say whether this is possible, but I know this kind of edge case can come up, and who knows what rabbit holes we'll go down because we only have the AI-implied insight.

Disclaimer: I think these contributions are super useful for science, but they do come with worries, as does every path of discovery.




> Why do proteins fold the way they do?

I think the why is pretty clearly understood (https://en.wikipedia.org/wiki/Protein_folding), in the same way that we understand the mechanisms behind the three-body problem in physics or quantum computing. But that doesn't necessarily imply that there is an efficient way for us to simulate/predict the results of nature playing out those mechanisms.


There are two threads here. The first is that it would not be surprising to learn that describing the way proteins fold is a very hard thing for humans to understand. See, e.g., the four-color theorem (4CT) [1] and its computational proofs.

The second is that explainability in ML is much more tractable than it was 10 years ago. That's not to say it's solved, but now that the predictive problem is solved, I would expect model simplifications and subject-matter-expert research to proceed more quickly toward understanding the how. I did some work with an astrophysics postdoc using beta-VAEs [2] to classify astronomical observations, and simplifying models to achieve human explainability proved to cost less predictive power than you might expect. It might be that the same holds true here.

1 - https://mathworld.wolfram.com/Four-ColorTheorem.html

2 - https://paperswithcode.com/method/beta-vae
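For context on the beta-VAE objective mentioned above, here is a minimal sketch with made-up toy values (the variable names and numbers are illustrative, not from the original work): the loss is a reconstruction term plus a KL-divergence term weighted by beta, and setting beta > 1 is what pushes the latent code toward the disentangled, human-interpretable factors that make explainability easier.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL
    divergence between the approximate posterior N(mu, sigma^2) and a
    standard normal prior. beta > 1 trades reconstruction fidelity for
    more disentangled (and thus more interpretable) latent factors."""
    recon = np.sum((x - x_recon) ** 2)  # squared-error reconstruction term
    # closed-form KL between a diagonal Gaussian and N(0, I)
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
    return recon + beta * kl

# toy example: a 4-dim observation and a 2-dim latent code
x = np.array([0.2, 0.5, 0.1, 0.9])
x_recon = np.array([0.25, 0.45, 0.15, 0.85])
mu = np.array([0.1, -0.2])
log_var = np.array([-0.5, -0.3])

print(beta_vae_loss(x, x_recon, mu, log_var, beta=1.0))  # plain VAE loss
print(beta_vae_loss(x, x_recon, mu, log_var, beta=4.0))  # beta-VAE loss
```

With beta = 1 this reduces to the standard VAE evidence lower bound; the beta-VAE paper's contribution is simply upweighting the KL term.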


> While we get the insight of being able to predict some structures we don't get the insight of why things are happening the way they are.

This isn't something specific to AI; it's science itself. We know the value of c, but not why it is that value. Sure, we can point to something like the Lorentz transformation, but we can't, and probably never will be able to, explain why it has these particular constants. We just know that we can measure them, and this is what they are.

Science isn't in the business of answering why. A successful scientific theory does two things: (a) it makes useful predictions, and (b) its predictions are correct. It'd be wrong to call a neural network a scientific theory, but it certainly does make predictions, and as these results show, its predictions are correct.

Sometime soon, humanity is going to have to come to terms with entering an age (which perhaps has already begun) in which mankind is not the only source of new knowledge. AI-derived knowledge will only increase as the future unfolds, and the analysis of such knowledge will likely become its own branch of study.


> Science isn't in the business of answering why.

I agree as long as science is a business. But why is science a business?

If science is not meant to answer why, does this mean we cannot know why?

Should we just give up on having story-like (narrative) explanations for why and how things work? It seems like we are headed to a world where the computer just tells us what to do and where to go: a world in which we are free from having to think about why we are being told to do whatever it is we're doing. Click (or tap) buttons, get tokens to buy food and pay rent.


These are predictions. Presumably the proteins will be inspected and the model refined and updated; nobody is going to start synthesizing DNA without first checking the output.


It could be that more complex phenomena don't have a simple explanation. It could be that they do. But just because I would like a why doesn't mean that there is one. (Personally, I think there is.)




AI solves the problem, but it doesn't give a whole lot of insight into the formulas describing what's going on. We as humans, by reasoning, found that E = mc^2; an AI would give us E from m but black-box us away from seeing that c, the speed of light, was involved (unless we built that in beforehand). There might be interesting, potentially groundbreaking relationships that AI unintentionally masks, which we could uncover if we only understood the process more holistically. As a different commenter alluded to, in this case we think we understand protein folding well; we just struggle to express it in a compact mathematical way, even though with AI we can simulate the process well for known examples.

The issue with AI is that we don't know whether our current example set includes every case. What if there is a strange sequence of amino acids that causes something "weird" to happen that we haven't seen? AI cannot predict something novel that neither it nor we have seen, which is the issue. The process (if it exists) by which one could solve this problem might also be exportable to other fields if it were formalized with math rather than estimated with AI.



