Hacker News

Eh, I like the nice names. Byte=8, short=16, int=32, long=64 is my preferred scheme when implementing languages. But either is better than C and C++'s scheme.


It would be "nice" if not for C setting a precedent for these names to have unpredictable sizes. That means you have to learn the meaning of every single type in every single language, then remember which language's semantics apply to the code you're reading. (Sure, I can, but why should I have to?)

[ui][0-9]+ (and similar schemes), on the other hand, anybody can understand at a glance.



