Your project is amazing and I'm not trying to take away from what you have accomplished.
But when I looked at the code, I didn't see an audio-to-audio service or model. Can you link to an example of that?
I don't mean speech-to-text → LLM → text-to-speech. I mean direct speech-to-speech, where the ML model takes audio as input and outputs audio, as OpenAI now offers.
I am very familiar with the typical multi-model workflow and have implemented it several times.
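To be concrete about the distinction I'm asking about, here's a rough sketch. Every function name in it is a hypothetical placeholder, not your project's API or any real model:

```python
# Purely illustrative sketch of the two architectures.
# All functions below are hypothetical stubs, not real APIs.

from dataclasses import dataclass


@dataclass
class Audio:
    samples: bytes
    sample_rate: int = 16_000


# --- placeholder stubs standing in for real models ---------------------
def speech_to_text(audio: Audio) -> str:        # ASR stage
    return "transcribed user speech"

def llm_complete(prompt: str) -> str:           # text-only LLM stage
    return f"reply to: {prompt}"

def text_to_speech(text: str) -> Audio:         # TTS stage
    return Audio(samples=text.encode())

def audio_native_model(audio: Audio) -> Audio:  # hypothetical audio-in/audio-out model
    return Audio(samples=audio.samples)


# The cascaded workflow I'm already familiar with: three models, text in the middle.
def cascaded_pipeline(user_audio: Audio) -> Audio:
    transcript = speech_to_text(user_audio)
    reply_text = llm_complete(transcript)
    return text_to_speech(reply_text)


# What I'm asking about: a single model that maps audio directly to audio,
# with no intermediate text representation.
def direct_speech_to_speech(user_audio: Audio) -> Audio:
    return audio_native_model(user_audio)
```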