Hacker News

Attention Is Everything.

To direct attention properly you need the right context for the ML model you're doing inference with.

This inference manipulation -- prompt and/or context engineering -- reminds me of Socrates (as written by Plato) eliciting from a boy seemingly unknown truths [not consciously realised by the boy] through the careful construction of his questions.

See Anamnesis, https://en.m.wikipedia.org/wiki/Anamnesis_(philosophy). I'm saying it's like the [Socratic] logical process and _not_ suggesting it's philosophically akin to anamnesis.
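The analogy can be made concrete with a toy sketch: rather than asking the model the hard question outright, you assemble a context of small leading questions whose answers are nearly forced, so the final inference falls out. Everything here (the `socratic_context` helper, the step list) is illustrative, and no real model API is called.

```python
def socratic_context(goal: str, steps: list[str]) -> str:
    """Assemble a prompt that walks the model through leading
    questions before posing the actual goal, mirroring the
    geometry lesson in Plato's Meno."""
    lines = ["Answer each question briefly before the final one."]
    for i, question in enumerate(steps, 1):
        lines.append(f"Q{i}: {question}")
    lines.append(f"Final question: {goal}")
    return "\n".join(lines)

# The doubling-the-square problem Socrates poses to the boy:
prompt = socratic_context(
    goal="What is the side of a square with twice the area of a 2x2 square?",
    steps=[
        "What is the area of a square with side 2?",
        "What would the area of a square with double that area be?",
        "Is the diagonal of the original square longer than its side?",
    ],
)
```

The point is only that the "elicitation" lives entirely in how the context is constructed, not in any new knowledge handed to the model.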



