Yep, I know Paul Adams (I used to work with him at Berkeley Lab) and that's exactly the paper he'd publish. If you read that paper carefully (as we all have, since it's the strongest critique we've seen from the crystallography community so far), they're basically saying the results from AF are excellent, and fit for purpose.
(put another way: if Paul publishes a paper saying your structure predictions have issues, and mostly finds tiny local issues plus some distortion and domain-orientation errors, rather than absolutely incorrect fold predictions, it means your technique works really well, and people are just quibbling about details.)
I also worked with the same people (and share most of the same biases) and that paper is about as close to a ringing endorsement of AlphaFold as you'll get.
I don't know Paul Adams, so it's hard for me to know how to interpret your post. Is there anything else I can read that discusses the accuracy of AlphaFold?
I can't find the link at the moment, but from the perspective of the CASP organizers, AF2 was accurate enough that it's hard to even compare it to the best experimentally determined structures, due to noise in the data and the inadequacy of the metric.
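For concrete context on "the metric": CASP's headline score is GDT_TS, which averages the fraction of residues whose C-alpha atom lands within 1, 2, 4, and 8 Angstroms of the experimental structure. Here's a minimal Python sketch of that idea, assuming the two C-alpha coordinate arrays are already optimally superposed (the real metric also searches over superpositions), with a made-up toy example:

    # Minimal GDT_TS-style score: mean fraction of residues within
    # 1, 2, 4, and 8 Angstrom cutoffs, scaled to 0-100. Assumes the
    # (N, 3) C-alpha coordinate arrays are already superposed; the
    # real CASP metric also optimizes the superposition itself.
    import numpy as np

    def gdt_ts(model, reference):
        dists = np.linalg.norm(model - reference, axis=1)
        return 100.0 * np.mean([(dists <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

    # Toy check: a model whose residues all sit 1.5 A from the
    # reference scores 75 (inside the 2/4/8 A cutoffs, outside 1 A).
    ref = np.zeros((10, 3))
    print(gdt_ts(ref + [1.5, 0.0, 0.0], ref))  # 75.0

The "inadequacy" point is that once predictions are within experimental noise of the reference structure, a score like this saturates near 100 and stops being able to rank them meaningfully.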
A number of crystallographers have also reported that the predictions helped them find errors in their own crystallographically determined structures.
If you're not really familiar enough with the field to understand the papers above, I recommend spending more time learning about the protein structure prediction problem, and how it relates to the experimental determination of structure using crystallography.
Thanks, those look helpful. Whenever I meet someone with a relevant PhD I ask their thoughts on AlphaFold, and I've gotten a wide variety of responses, ranging from ones like yours to people who acknowledge its usefulness but are rather dismissive about its ultimate contribution.
The people who are most likely to deprecate AlphaFold are the ones whose job viability is directly affected by its existence.
Let me be clear: DeepMind only "solved" (and really didn't "solve") a subset of a much larger problem: building a highly accurate model of the process by which real proteins adopt their folded conformations, or how some proteins don't fold without assistance, or how some proteins never adopt a fully rigid conformation, or how some proteins can adopt different shapes in different conditions, or how enzymes achieve their catalytic abilities, or how structural proteins produce such rigid structures, or how to predict whether a specific drug will get FDA approval and then make billions of dollars.
In a sense we got really lucky: CASP has been running for so long, and with so many contributors, that winning at CASP became recognized as "solving protein structure prediction to the limits of our ability to evaluate predictions", and Demis and his associates had such a huge drive to win competitions that they invested tremendous resources and state-of-the-art technology, while sharing enough information that the community could reproduce the results in their own hands. Any problem we want solved, we should gamify, so that DeepMind is motivated to win the game.
this is very astute, not only about deepmind but about science and humanity overall.
what CASP did was narrowly scope a hard problem, provide clear rules and metrics for evaluating participants, and offer a regular forum in which candidates could showcase their skills -- in short, they created a "game" or competition.
in doing so, they advanced the state of knowledge regarding protein structure.
how can we apply this to cancer and deepen our understanding?
specifically, what parts of cancer can we narrowly scope that are still broadly applicable to a complex heterogeneous disease, and evaluate with objective metrics?
[edited to stress the goal of advancing cancer knowledge: not to "gamify" cancer science, but to create structures that invite more ways to increase our understanding of cancer.]