
The problem is that you don't know what the sources of error are. In methods like density functional theory (DFT) or post-Hartree-Fock calculations, the sources of error are well understood. We know where the predictions break down and we know what can be relied upon.

Methods like this are difficult to verify. You don't know where the weaknesses in the model actually are and you don't know what is reliable. This is an interesting idea, but it has limited application, and even if the model can be understood well enough to determine its limitations, it will not replace methods like DFT, due to the cost of the calculations.

DFT is imperfect due to the limitations of the functionals and basis sets, but we know what it does well, and that is a lot. It is reliable when used by someone who understands the sources of error and how to apply the methodology to the target system in the most appropriate way.




Are you familiar with quantum Monte Carlo? This is just variational QMC, a well-established method, with a neural network as the ansatz. Traditionally QMC uses ad hoc ansätze anyway, so this is not so different.
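
To make the idea concrete, here is a minimal sketch of plain variational Monte Carlo (not the neural-network version in the article): a hand-picked trial wavefunction for the 1D harmonic oscillator, Metropolis sampling of |psi|^2, and the average of the local energy as the variational estimate. The ansatz exp(-a*x^2) and the parameter values are illustrative choices only, not anything from the paper.

    # Minimal variational Monte Carlo sketch (illustrative only).
    # Trial wavefunction psi(x) = exp(-a*x^2) for V(x) = x^2/2.
    import numpy as np

    def local_energy(x, a):
        # E_loc = (-1/2 psi'' + V psi) / psi for psi = exp(-a x^2)
        return a - 2.0 * a**2 * x**2 + 0.5 * x**2

    def vmc_energy(a, n_steps=200_000, step=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x, e_sum, n_kept = 0.0, 0.0, 0
        for i in range(n_steps):
            x_new = x + step * rng.uniform(-1, 1)
            # Metropolis acceptance on |psi|^2 = exp(-2 a x^2)
            if rng.uniform() < np.exp(-2.0 * a * (x_new**2 - x**2)):
                x = x_new
            if i > n_steps // 10:      # discard equilibration steps
                e_sum += local_energy(x, a)
                n_kept += 1
        return e_sum / n_kept

    for a in (0.3, 0.5, 0.7):          # exact minimum at a = 0.5, E = 0.5
        print(a, vmc_energy(a))

Roughly speaking, the neural-network approach keeps this sampling-and-averaging structure but replaces the fixed exp(-a*x^2) with a network with many parameters, optimised by gradient descent on the same variational energy.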

Also, I've spent the last seven years doing DFT calculations, and although sometimes one can explain the failures, more often than not it's just opaque. QMC is actually at the core of DFT, because QMC calculations on the uniform electron gas have been used to parametrize LDA in DFT.
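
For context, the local density approximation writes the exchange-correlation energy as

    E_{xc}^{\mathrm{LDA}}[n] = \int n(\mathbf{r})\,\epsilon_{xc}\bigl(n(\mathbf{r})\bigr)\,d\mathbf{r}

where \epsilon_{xc}(n) is the exchange-correlation energy per electron of a uniform electron gas at density n; the correlation part of that curve comes from fits (Perdew-Zunger, VWN) to the Ceperley-Alder QMC data.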


I'm not particularly familiar with QMC, although I do know of it.

I was just adding a comment in support of the view that this is not a particularly revolutionary methodology because, although it may achieve spectacular results for certain systems, the limitations and mistakes of this kind of approach are completely hidden behind the opacity of the NN.

ML has a place, but NNs are notoriously difficult to even grey-box, and a black-box model doesn't do much to actually advance the field. It certainly doesn't allow for a well-rounded assessment of failures.

As for the limitations of DFT, unless you are referring to convergence issues, I think you are completely wrong to claim that the issues with the method are not well known and understood. We know precisely where the methodology has limitations, we know how the functionals have been parametrised, and we know the assumptions and theoretical models upon which they are based, along with their limits. That is enough information to know where confidence can be placed.

I would also dispute that QMC is at the core of DFT just because it is used to parametrise LDA. As I am sure you are aware, LDA is not used for any reliable modelling. GGAs and hybrids (and maybe meta-GGAs if we're feeling charitable...) are what make DFT a useful theory. Prior to that, the results just sucked for the majority of systems!
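
For readers following along, the rough ladder being referred to is (schematically, with signs and prefactors omitted):

    E_{xc}^{\mathrm{LDA}}[n]    = \int n\,\epsilon_{xc}(n)\,d\mathbf{r}
    E_{xc}^{\mathrm{GGA}}[n]    = \int f(n, \nabla n)\,d\mathbf{r}
    E_{xc}^{\mathrm{hybrid}}    = a\,E_{x}^{\mathrm{HF}} + (1-a)\,E_{x}^{\mathrm{GGA}} + E_{c}^{\mathrm{GGA}}

i.e. GGAs add a dependence on the density gradient, meta-GGAs add the kinetic-energy density, and hybrids mix in a fraction of exact (Hartree-Fock) exchange; the precise mixing scheme varies by functional (B3LYP, PBE0, etc.).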




