Fair 'nuff... though I was after an "even with an older model iPhone and no net connection, there's the ability to do speech to text (even with some interesting transformations of "two by four"); it can be done locally."
> In order to provide an additional layer of privacy for our users, we proxy all STT requests through Mycroft's servers. This prevents Google's service from profiling Mycroft users or connecting voice recordings to their identities.
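The anonymizing-proxy idea in that quote can be sketched in a few lines. This is a hypothetical illustration (not Mycroft's actual code): the proxy forwards the audio payload to the upstream STT service while stripping anything that could tie the request to an individual user, so the upstream service only ever sees the proxy's identity.

```python
# Hypothetical sketch of an anonymizing STT proxy. Header names and the
# upstream_send callable are illustrative stand-ins, not a real API.

IDENTIFYING_HEADERS = {"authorization", "cookie", "x-forwarded-for", "user-agent"}

def sanitize_headers(headers):
    """Drop headers that could identify a specific user or device;
    only content-related headers are forwarded upstream."""
    return {k: v for k, v in headers.items()
            if k.lower() not in IDENTIFYING_HEADERS}

def proxy_stt_request(audio_bytes, headers, upstream_send):
    """Forward the raw audio with sanitized headers.
    upstream_send stands in for the real HTTP call to the STT API."""
    return upstream_send(audio_bytes, sanitize_headers(headers))
```

From the upstream provider's point of view, every request then appears to originate from the proxy, which is what prevents per-user profiling.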
I didn't know the specifics of it; that has a lot more information and is an interesting read.
One of the bits in there caught my eye...
> We created a language-specific phonetic specification of the "Hey Siri" phrase. In US English, we had two variants, with different first vowels in "Siri"—one as in "serious" and the other as in "Syria." We also tried to cope with a short break between the two words, especially as the phrase is often written with a comma: "Hey, Siri." Each phonetic symbol results in three speech sound classes (beginning, middle and end) each of which has its own output from the acoustic model.
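The "each phonetic symbol results in three speech sound classes" part can be made concrete with a small sketch. This is a hypothetical illustration, not Apple's actual phone inventory: the phone symbols are ARPAbet-style placeholders, `SIL?` marks the optional short break for the "Hey, Siri" comma, and the two variants differ only in the first vowel of "Siri".

```python
# Hypothetical expansion of a "Hey Siri" phonetic specification into
# per-phone speech sound classes (beginning, middle, end), as described
# in Apple's write-up. Phone symbols are illustrative ARPAbet-style
# strings, not Apple's real inventory.

HEY_SIRI_VARIANTS = [
    # "Siri" with the first vowel as in "serious"
    ["HH", "EY", "SIL?", "S", "IH", "R", "IY"],
    # "Siri" with the first vowel as in "Syria"
    ["HH", "EY", "SIL?", "S", "IY", "R", "IY"],
]

def expand_to_classes(phones):
    """Each phone contributes three classes: beginning, middle, end.
    A trailing '?' marks an optional phone (the short "Hey, Siri" break)."""
    classes = []
    for p in phones:
        optional = p.endswith("?")
        base = p.rstrip("?")
        for part in ("beg", "mid", "end"):
            classes.append((base, part, optional))
    return classes

states = expand_to_classes(HEY_SIRI_VARIANTS[0])
print(len(states))  # 7 phones * 3 classes = 21
```

Each of those classes gets its own output from the acoustic model, so even this short phrase yields a few dozen model outputs once both vowel variants are included.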
And the British version getting false positives on the wake word from world politics coverage.
The specifics of the wake-up, and that it's done with an ML model rather than a low-power wake word chip akin to https://www.syntiant.com/post/syntiant-low-power-wake-word-s... , are also interesting - and it's impressive that they were able to get it that low power.
This is not true anymore; the latest iPhone models have offline Siri working to some extent.