I’ll go further; don’t automatically default to a BIGINT. Put some thought into your tables. Is it a table of users, where each has one row? You almost certainly won’t even need an INT, but you definitely won’t need a BIGINT. Is it a table of customer orders? You might need an INT, and you can monitor and even predict the growth rate. Did you hit 1 billion? Great, you have plenty of time for an online conversion to BIGINT, with a tool like gh-ost.
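For the “monitor and even predict the growth rate” part, here’s a minimal sketch of what I mean, assuming MySQL and an AUTO_INCREMENT primary key (the 2,147,483,647 ceiling is signed INT’s max; adjust if you’re on UNSIGNED):

```sql
-- Rough headroom check: how close each table's AUTO_INCREMENT counter
-- is to the signed INT ceiling. Counters can have gaps, so this is an
-- upper bound on rows, not an exact count.
SELECT table_schema,
       table_name,
       auto_increment,
       ROUND(auto_increment / 2147483647 * 100, 2) AS pct_of_int_max
FROM information_schema.tables
WHERE auto_increment IS NOT NULL
ORDER BY pct_of_int_max DESC
LIMIT 20;
```

Stick that in a dashboard and you’ll see the BIGINT conversion coming years out, not weeks.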
I work for a company that deals with very large numbers of users. We recently had a major project because our users table ran out of INT key space and had to be upgraded to BIGINT. At that scale, that's harder than it sounds.
So your advice that you DEFINITELY won't need a BIGINT can come back to bite you if you're successful enough.
(You're probably thinking there's no way we have over 2 billion users and that's true, but it's also a bad assumption that one user row perfectly corresponds to one registered user. Assumptions like that can and do change.)
I’m not saying it’s not an undertaking if you’ve never done it, but there are plenty of tools for MySQL and Postgres (I assume others as well) to do zero-downtime online schema changes like that. If you’ve backed yourself into a corner by also nearly running out of disk space, then yes, you’ll have a large headache on your hands.
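For context, the DDL the online tool ends up applying is tiny; the hard part (copying rows into a shadow table and cutting over without blocking writes) is what gh-ost or pt-online-schema-change handles for you. A sketch, with an illustrative table name:

```sql
-- The eventual schema change an online migration tool applies for you;
-- "orders" and "id" are placeholders for your actual table and key column.
ALTER TABLE orders
  MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;
```

Run directly, that ALTER would rewrite the table and lock out writes for hours on a big table, which is exactly why the online tools exist.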
Also, protip for anyone using MySQL: take advantage of its UNSIGNED integer types. An UNSIGNED INT tops out at 2^32-1, which is quite a bit; unsigned is also very handy for smaller lookup tables where you need a bit more than the 127 a signed TINYINT gives you (UNSIGNED TINYINT goes to 255).
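A quick sketch of what that looks like in practice (table and column names are just for illustration):

```sql
-- Unsigned keys roughly double the usable positive range at the same
-- storage cost, since you give up the negative half you never use for IDs.
CREATE TABLE order_status (
  id   TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- 0..255 instead of -128..127
  name VARCHAR(32) NOT NULL,
  PRIMARY KEY (id)
);

CREATE TABLE orders (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT, -- up to 4,294,967,295 ids
  status_id TINYINT UNSIGNED NOT NULL,            -- matches the lookup table's key type
  PRIMARY KEY (id)
);
```

One caveat: keep the signedness consistent across foreign keys and application code, or you'll trade one migration headache for another.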
> but it's also a bad assumption that one user row perfectly corresponds to one registered user. Assumptions like that can and do change.
There can be dupes and the like, yes, but if at some point the Customer table morphed into a Customer+CustomerAttribute table, for example, I’d argue you have a data modeling problem.