
It might. The ethical dilemma here seems to be:

1) We don't know when/if GPT will reproduce licensed code. GPT in its current form can't seem to guarantee safety, either from "substantial" verbatim snippets or from complex "hallucinations" of random pachinko output.

2) GPT doesn't know when/if it has. GPT in its current form likely cannot know this. (In part because it doesn't really "know" anything; that's too anthropomorphic a word for what is still mostly a casino full of pachinko machines.)

3) Define "substantial portions" in a way that a jury of your peers can understand in a court of law.

4) Can you define "substantial portions" again, only this time in code, as guard rails for something like GPT? "Substantial portions" is a fuzzy human term designed for human lawyers and courts; there's a fascinating challenge in quantifying it. (A naive sketch of what that might look like follows below.)
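
To make point 4 concrete, here's a minimal Python sketch of one naive quantification: flag generated output whose longest verbatim token run against a known source crosses a threshold. The whitespace tokenizer and the 50-token threshold are my own arbitrary assumptions, which is exactly the problem.

    # Naive sketch: "substantial" == a long verbatim run of tokens.
    # Both the tokenizer (whitespace split) and the threshold (50 tokens)
    # are arbitrary assumptions, not anything a court has defined.
    from difflib import SequenceMatcher

    def longest_verbatim_run(generated: str, source: str) -> int:
        """Length, in tokens, of the longest contiguous shared run."""
        gen, src = generated.split(), source.split()
        m = SequenceMatcher(None, gen, src, autojunk=False)
        return m.find_longest_match(0, len(gen), 0, len(src)).size

    def looks_substantial(generated: str, source: str,
                          threshold: int = 50) -> bool:
        return longest_verbatim_run(generated, source) >= threshold

Even this toy version begs every hard question: 50 verbatim tokens of boilerplate getters is nothing, while 10 tokens of a clever bit-twiddling trick might be the heart of the work. And it only catches verbatim runs, not close paraphrases, which is where point 1's "hallucinations" live.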



