In every programming language I've seen so far, an integer type means infinite precision (exact values) within a limited range.
Floating point numbers are used for values that have finite precision but typically a much larger range.
I guess I'm making a distinction between precision and range, and I was pretty sure the two couldn't be used interchangeably. Now I'm seeing "precision" used to describe the size of an integer, which feels like a bit of a misnomer to me.
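To make that distinction concrete, here's a minimal sketch (assuming TypeScript/JavaScript, since that's where BigInt is a built-in): a slot in an Int32Array holds every value in its range exactly but wraps the moment you step outside it, while a plain number (an IEEE 754 double) has an enormous range but only about 15-16 significant decimal digits.

```typescript
// Precision vs. range, sketched with built-in types.

// Fixed-size integer: every value in range is stored exactly, but the range is small.
const int32 = new Int32Array(1);
int32[0] = 2_147_483_647;        // INT32_MAX, stored exactly
int32[0] = int32[0] + 1;         // one past the range: wraps around
console.log(int32[0]);           // -2147483648

// Floating point: huge range, but only finite precision.
const atTheEdge = 9_007_199_254_740_992; // 2^53: beyond this, a double can't represent every integer
console.log(atTheEdge + 1 === atTheEdge); // true -- the +1 is rounded away
console.log(1e308 * 10);                  // Infinity -- but it took ~1.8e308 to get there
```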
BigInt is, for whatever reason, the most common name used for that kind of type.
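For comparison, a quick sketch of what those types buy you (again assuming JavaScript/TypeScript's built-in bigint; Java's BigInteger and friends play the same role elsewhere): the values stay exact, and the fixed range limit effectively disappears.

```typescript
// Arbitrary-size integer: still exact, no fixed range limit.
const n: bigint = 2n ** 200n;     // far past anything a 64-bit integer can hold
console.log(n.toString().length); // 61 -- and all 61 digits are exact
console.log(n + 1n - 1n === n);   // true -- no rounding anywhere
```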