Based on their presentations yesterday and the accompanying written materials, they have a pretty capable ~3B-parameter foundation LLM running fully locally on the devices, which is the first line of the system. https://machinelearning.apple.com/research/introducing-apple...
Can’t find a link right now (search terms are pretty crowded atm!) but I saw they just recently shared an LLM they’ve been working on that is designed to answer questions about how a given screen of an app functions (identifying buttons, core functionality, etc.).
I actually liked that they didn’t show any of the AI writing capabilities being used in iMessage but rather in the email client, for more professional contexts. I’m really curious to see if they make it available in iMessage…
Word on the street (someone who was talking to Apple employees at WWDC) is that the Vision Pro doesn’t have enough headroom on the processor for it. It’s driving that sucker really hard just doing its “regular” thing.
Before you write off their claims, I encourage you to read more about the detailed specifics (if you have the technical footing and inclination to do so). While the approaches should certainly be probed and audited, it’s clearly more than performative. https://security.apple.com/blog/private-cloud-compute/