Hacker News

Is there room for generative AI in science? I am experimenting with this a lot at https://atomictessellator.com. As a computational chemist, I found it difficult just to stay on top of all of the papers being released, so I thought it would be cool to have generative AI attempt to reproduce the experiments using simulation tech.

Here are a few cool insights I have uncovered while working on this:

- Developer tools are popular, but there's an emerging market for AI tools, i.e. tooling/APIs meant to be used primarily by LLMs/AIs. Design the interfaces to be composable at the right level of abstraction, and AIs can design, run, and monitor experiments really well.
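To make the "tooling designed for LLMs" idea concrete, here is a minimal sketch, assuming nothing about Atomic Tessellator's actual API (every name here — Tool, register, estimate_energy, dispatch — is hypothetical): each tool is a small, self-describing, composable operation, and the model drives the system by emitting structured calls against a registry.

```python
# Hypothetical sketch of an "AI-first" tool interface. All names are
# illustrative, not a real API: the point is small composable operations
# with descriptions the LLM can read, invoked via structured calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str            # the LLM reads this to decide when to call
    fn: Callable[..., dict]

REGISTRY: dict[str, Tool] = {}

def register(name: str, description: str):
    """Decorator that adds a function to the tool registry."""
    def deco(fn):
        REGISTRY[name] = Tool(name, description, fn)
        return fn
    return deco

@register("estimate_energy", "Toy energy estimate for a structure id")
def estimate_energy(structure_id: str) -> dict:
    # Stand-in for a real simulation call; returns a toy value.
    return {"structure_id": structure_id,
            "energy_ev": -1.0 * len(structure_id)}

def dispatch(call: dict) -> dict:
    """Execute one LLM-emitted call, e.g.
    {"tool": "estimate_energy", "args": {"structure_id": "slab-001"}}."""
    tool = REGISTRY[call["tool"]]
    return tool.fn(**call["args"])
```

Because every tool declares its name and purpose and returns plain dicts, the model can chain outputs into the next call without any human-oriented UI in between.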

- Tree-of-thought and graph-of-thought are really, really, really important for this. I think this is because they compensate for the lack of looping mechanisms in LLMs and also add the ability for recursive problem decomposition, abstraction laddering, composable malleability, and so on.
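The "compensates for the lack of looping" point can be sketched as a small external search harness: the loop lives outside the model, which is only asked to propose and score candidate thoughts. This is a toy beam-search variant of tree-of-thought, with stand-in functions where LLM calls would go.

```python
# Minimal tree-of-thought sketch: the iteration the comment above
# describes happens in an external harness, not inside the model.
# `propose` and `score` would be LLM calls in practice; toys here.
def tree_of_thought(root, propose, score, beam_width=2, depth=3):
    frontier = [root]
    for _ in range(depth):
        # Expand every surviving state, then keep only the best few.
        candidates = [s for state in frontier for s in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)

# Toy stand-ins: a state is a list of moves, and "a" moves score higher.
propose = lambda state: [state + [m] for m in ("a", "b")]
score = lambda state: state.count("a")

best = tree_of_thought([], propose, score)
# best == ["a", "a", "a"]
```

Swapping `propose`/`score` for real model calls (and the list-of-moves state for, say, a partial experiment plan) gives recursive decomposition without ever needing the LLM itself to loop.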

Does it work? Yes. I have a working E2E pipeline that has already made novel discoveries validated by lab work. I am now focusing on scaling it out to support a broader search space and give the LLMs more freedom to explore.

A shameless plug: I am working on this 4-5 hours a day on top of my day job. If anyone is aware of any grants or investors I could connect with, I would love to work on this full time. I am neurodivergent, so I think running a company is not really my thing, but there must be an alternative way. Advice, anyone?




> Is there room for generative AI in science?

I would love a system like ChatGPT but targeted specifically at exploring existing literature. A system that can recommend papers to read, that you can chat with about your problem, and that can recommend approaches that have worked for others and tell you why. That you can prompt and refine and go into detail with while it helps you figure out what to do next based on previous work. That can link you to actual papers to read.

With ChatGPT, I get some of this, but I have to be veeery careful how I use it. It is generally good at discussing points at a high level and helping you sort out some ideas, but when you get into detail it is very easy to catch it making mistakes, and when you ask it for references to read further, it almost always makes up some or all of them. Forget asking it to give you actual links. Maybe some of the other GPT systems targeted at search are better at this; I don't know, I haven't tried them. But I would love a system that actually does this kind of thing well.
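One common way to get real links instead of invented ones is retrieval-first: look up actual indexed papers for the query, then have the model answer only from what was retrieved. A minimal sketch, with a toy bag-of-words retriever standing in for a real embedding model, and placeholder index entries (the titles and URLs below are illustrative, not real papers):

```python
# Retrieval-first sketch: rank real indexed entries against the query,
# then ground the model's answer in only those entries, so every claim
# can carry an actual link. Bag-of-words cosine stands in for embeddings.
from collections import Counter
import math

def tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, papers: list[dict], k: int = 2) -> list[dict]:
    """Return the k indexed papers most similar to the query."""
    q = tokens(query)
    return sorted(papers, key=lambda p: cosine(q, tokens(p["abstract"])),
                  reverse=True)[:k]

# Placeholder index; a real system would embed full abstracts.
index = [
    {"title": "Example paper A", "url": "https://example.org/a",
     "abstract": "graph neural networks for molecular property prediction"},
    {"title": "Example paper B", "url": "https://example.org/b",
     "abstract": "bureaucratic language in government application forms"},
]

hits = retrieve("molecular graph networks", index)
```

The key property is that the links come from the index, not from the model's output, so they cannot be hallucinated; the model is only asked to summarize and compare what was actually retrieved.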

Not just for scientific research -- I've used ChatGPT to help understand some legal things and some government application procedures (cutting through the "consulate speak" and explaining some steps in my son's visa application in plain language). Of course, I double-checked everything it told me.

Having a system that has "read everything" and can explain it back to you after you ask some questions is just fantastic. I just want it to be more reliable.


I agree; I think there is a wide-open vista of opportunity in the hypothesis-generation space when it comes to science. An instant literature-review-as-a-service would be incredibly valuable too, to say nothing of a "write the code to implement the methods described in this paper" service.


Yes, but most of the current ideas on how this moment will change science are willfully short-sighted.

We are only a few years from next-generation multimodal modeling. The future contains protein embeddings, genome embeddings, medical image embeddings, and chatbot decoders to discuss them with you.

Imagine prompting an image-conditioned decoder with questions like "Q: Why do you think this brain MRI indicates the person will get Parkinson's?" These are things that models can currently do, but we have basically no understanding of what they are looking at.




