> So basically a consumer with no idea of anything.
Not knowing is sort of the purpose of AI. It's doing the 'intelligent' part for you. If we need to know it's because the AI is currently NOT good enough.
Tech companies seem to be selling the following caveat: if it's not good enough today, don't worry, it will be in XYZ time.
> It still needs guardrails, and some domain knowledge, at least to prevent it from using any destructive commands
That just means the AI isn't adequate. Which is the point I am trying to make. It should 'understand' not to issue destructive commands.
By way of crude analogy, when you're talking to a doctor you're necessarily assuming he has domain knowledge, guardrails, etc.; otherwise he wouldn't be a doctor. With AI that isn't the case, as it doesn't understand. It's fed training data and given prompts to steer it in a particular direction.
I meant "still" as in right now. So yes, I agree it's not adequate right now, but maybe in the future these LLMs will be improved and won't need them.