I believe this is the least known and most important use-case for LLMs today: understanding and inferring the meaning in language. We've all seen cases where Google can occasionally surface the right content for an indirect description of what you're looking for. The most famous example I can think of is that searching for "eyebrows kid" returns images of Will Poulter. Google's knowledge engine is getting pretty good at this kind of thing, but LLMs are far better.
Language models are exceedingly good at understanding the meaning of your language without the use of specific keywords. Here's an example from a recent search of mine. Typing
> metals can be flexed or bend, and it will regain some of its prior shape
into Google returns either "ductile" or "shape-memory alloy," both of which are incorrect. Asking
> What is the property of a material where it prefers to stay in its current form? This is often found in metals, where it can be flexed or bent, and it will regain some of its prior shape?
in ChatGPT, however, produces the correct answer:
> The property you are referring to is called elasticity. Elasticity is the ability of a material to return to its original shape after being deformed (stretched, compressed, bent, etc.) when the external forces are removed.
We all know that LLMs can hallucinate, so they are not a reliable source of truth or knowledge. I'm not claiming that LLMs are more accurate than something like Google's knowledge engine. Their value is that they can infer your meaning with reasonable accuracy (much like asking a human would), so that you can productively continue your own research in more depth.