Hacker News

But as long as the EQ effect of the speaker/microphone setup is constant, an AI should have no problem learning how to restore the original.


Not in the real world, where audio bandwidth is limited.


That's one big if.


Seriously, the tone can change with just a couple of millimeters of difference in microphone or speaker placement, not to mention variations in room acoustics from furniture or other object placement.
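The disagreement above comes down to whether the speaker/mic chain is a fixed, invertible linear filter. A minimal sketch of that assumption (the filter `h` and noise level here are made up for illustration): model the chain as a constant FIR filter and undo it by regularized spectral division. When the filter leaves every band intact, recovery works; the thread's objection is that where the response is effectively zero (bandwidth truly lost), no inverse, learned or otherwise, can bring the signal back, and the regularizer just suppresses those bins instead of exploding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical speaker/mic "EQ" modeled as one fixed linear filter.
# (An assumption: real rooms add reverb, nonlinearity, and drift.)
h = np.array([0.5, 0.3, 0.15, 0.05])

# Original signal, and what the microphone records: filtered plus noise.
x = rng.standard_normal(4096)
y = np.convolve(x, h)[: len(x)] + 0.001 * rng.standard_normal(len(x))

# Regularized inverse filter: divide spectra, with a small Tikhonov term
# so bins where |H| is near zero damp toward zero instead of blowing up.
H = np.fft.rfft(h, n=len(x))
lam = 1e-4
x_hat = np.fft.irfft(
    np.fft.rfft(y) * np.conj(H) / (np.abs(H) ** 2 + lam), n=len(x)
)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.4f}")
```

With this particular `h`, the response never gets close to zero, so the reconstruction error stays small; replace `h` with a band-killing filter (e.g. `[0.5, 0.5]`) and the lost band is unrecoverable no matter the inverse.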



