Domain terminology: "integer" on a computer means fixed precision.

BigInt is, for whatever reason, the most common name used for arbitrary-precision integers.
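
For illustration, a minimal sketch in Python (whose built-in int happens to be arbitrary precision); fixed-width behavior is emulated here with a 64-bit mask, since wrapping modulo 2**64 is what a machine register does on overflow:

    big = 2 ** 64
    print(big + 1)                  # exact: 18446744073709551617, no overflow

    MASK64 = (1 << 64) - 1          # emulate a 64-bit machine integer
    print((2 ** 62 * 4) & MASK64)   # wraps to 0, like fixed-width overflow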




In every programming language I've seen so far, "integer" means infinite precision with a limited range.

Floating point numbers are used for values that have finite precision but typically a much larger range.
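
A quick Python illustration of that trade-off (using the IEEE 754 double behind Python's float, where 2**53 is the point past which consecutive integers stop being exactly representable):

    import sys

    print(float(2 ** 1000))            # ~1.07e301: huge range, no problem
    print(sys.float_info.max)          # ~1.8e308: far beyond any 64-bit int
    print(2.0 ** 53 + 1 == 2.0 ** 53)  # True: finite precision drops the +1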

I guess I'm making a distinction between precision and range. I was pretty sure they couldn't be used interchangeably. Now I'm seeing "precision" being used to describe the size of the integer. I feel like that's a bit of a misnomer.



