These questions are better addressed by Folding@home, and we're still very much computationally limited in our ability to answer them.
Do you want to answer these questions for the satisfaction of understanding the underlying physical rules that drive folding? Why? It's unclear that knowing those things would actually make a large impact in any industrially/medically useful context. MD burns huge amounts of CPU time to sample these energy landscapes accurately enough to replace actual physical experiments on protein motion. It just doesn't seem like an effective investment of brain or computer time.
(I say this as somebody whose entire career was predicated on using MD to answer these questions; see https://www.nature.com/articles/nchem.1821 for our attempt in that space)
The Folding@home approach is limited to short individual simulation times (a millisecond total, maybe, but in 100 disconnected nanosecond chunks at a time), so it relies on various 'enhanced sampling' techniques to put your thumb on the scale and bias things toward exploring interesting dynamics. It's probably more effective the more you already know about a given protein target. Meta's approach (which seems like AF2, but faster/worse?) has a similar problem, in that it's even less trustworthy when you apply it to a new target you have relatively little concrete information about.
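For concreteness, the usual way those disconnected fragments get stitched back together is a Markov state model: discretize each short trajectory into conformational states, count transitions, row-normalize into a transition matrix, and read off long-timescale behavior (e.g. the stationary distribution) that no single fragment ever sampled. A minimal toy sketch of that idea, with made-up integer state labels standing in for clustered MD conformations:

```python
# Toy Markov-state-model sketch: many short, disconnected trajectory
# fragments are combined into one transition matrix, whose stationary
# distribution reflects dynamics longer than any single fragment.
# State labels and fragments here are illustrative, not real MD data.

def transition_matrix(trajectories, n_states):
    """Count observed state-to-state transitions and row-normalize."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1.0
    T = []
    for i, row in enumerate(counts):
        total = sum(row)
        if total == 0:  # unseen state: give it a self-loop so rows sum to 1
            row = [1.0 if j == i else 0.0 for j in range(n_states)]
            total = 1.0
        T.append([c / total for c in row])
    return T

def stationary(T, iters=500):
    """Power-iterate a uniform distribution toward the stationary one."""
    n = len(T)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
    return p

# Three short "fragments": none visits all three states on its own,
# but together they connect the whole (toy) landscape.
frags = [[0, 0, 1, 0, 1], [1, 2, 1, 2, 2], [2, 2, 1, 0, 0]]
T = transition_matrix(frags, 3)
pi = stationary(T)
```

The catch the comment above points at: the estimate is only as good as the state discretization and the sampling within each fragment, which is exactly where prior knowledge of the target (or an enhanced-sampling bias) ends up doing a lot of quiet work.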