It seems they didn't address hallucination at all?
Presumably this hallucinates as much as any other AI (if it didn't, they'd have mentioned that).
So how can you delegate tasks to something that might just invent stuff? E.g. you ask it to summarize an email and it tells you things that aren't in the original email.
You really have to try hard to make a model hallucinate when asked to summarize an email. I think they didn't mention it because they can't guarantee 100%, but it's virtually a non-issue for such a task.