
Wasn't the assimilation of new knowledge from conversations the reason Microsoft's infamous Twitter chatbot was discarded [1]? Despite that ability, it definitely was not sentient.

I raised this point in another thread on HN about LaMDA: all of its answers were "yes" answers, not a single "no". A sentient AI should have its own point of view: reject what it thinks is false and agree with what it thinks is true.

1. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...


