“The speech recognition process involves capturing audio of the user’s voice and sending that data to Apple’s servers for processing. The audio you capture constitutes sensitive user data, and you must make every effort to protect it. You must also obtain the user’s permission before sending that data across the network to Apple’s servers. You request authorization using the APIs of the Speech framework.”

https://developer.apple.com/documentation/speech/asking_perm...
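For reference, the authorization step the docs refer to is a single call on SFSpeechRecognizer. A minimal Swift sketch (assuming the app’s Info.plist also carries an NSSpeechRecognitionUsageDescription entry, without which the call terminates the app at runtime):

```swift
import Speech

// Minimal sketch of the permission request described in the docs.
// Info.plist needs NSSpeechRecognitionUsageDescription.
SFSpeechRecognizer.requestAuthorization { status in
    switch status {
    case .authorized:
        print("speech recognition authorized")
    case .denied, .restricted, .notDetermined:
        print("speech recognition not available: \(status.rawValue)")
    @unknown default:
        break
    }
}
```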
Yeah, I suppose I should have formulated that more clearly. The API offers cloud speech recognition for a set of languages, and on-device speech recognition for a subset of these.
> Which languages are processed on device and not sent to Apple’s servers?
It's not a static set, because (1) availability tends to expand over time and (2) when you start using a new language, the on-device model needs to be downloaded first.
So what you need to do is create an SFSpeechRecognizer and then check its supportsOnDeviceRecognition property. If that is true, you can set requiresOnDeviceRecognition on the SFSpeechRecognitionRequest.
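In code that looks roughly like this (a sketch; the function name, file URL, and locale are placeholders, and error handling is minimal):

```swift
import Speech

// Sketch: prefer on-device recognition when the locale supports it.
// SFSpeechRecognizer(locale:) is failable and returns nil for
// unsupported locales; supportsOnDeviceRecognition can also flip to
// true later, once the on-device model for a language has been
// downloaded.
func transcribe(fileAt url: URL, locale: Locale) {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.isAvailable else {
        print("no recognizer available for \(locale.identifier)")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: url)
    if recognizer.supportsOnDeviceRecognition {
        // The audio for this request never leaves the device.
        request.requiresOnDeviceRecognition = true
    }

    // In real code, keep the recognizer and task alive until
    // recognition finishes.
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("recognition failed: \(error)")
        }
    }
}
```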