This is a measure of how well the supplied text matches what the model itself would have produced.

A low perplexity means the text is close to what the model might have output itself (which can be an indicator that it was produced by a model), whereas a high perplexity suggests it's the kind of semi-random nonsense you'd expect from a student. ;)
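Concretely, perplexity is the exponential of the average negative log-probability the model assigns to each token of the text. A minimal sketch (the log-probability values here are made up for illustration, not from a real model):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Text the model finds highly predictable: each token gets probability 0.9.
predictable = [math.log(0.9)] * 10

# Text the model finds surprising: each token gets probability 0.05.
surprising = [math.log(0.05)] * 10

print(round(perplexity(predictable), 2))  # low, ~1.11
print(round(perplexity(surprising), 2))   # high, 20.0
```

With identical per-token probability p, the perplexity collapses to 1/p, which is why the two runs above land on 1/0.9 and 1/0.05.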



