
The only argument I can see for using the other types is int itself, when you want the natural-length integer; in every other case it seems that the stdint.h types are better?



Using the natural integer whenever possible keeps the C code abstract. The program becomes less limited as it is ported to more capable machines with bigger integers, instead of continuing to pretend that everything is a 32-bit i386.

You need some low-level justification for using a uint32_t and such, like conforming to some externally imposed data format or a memory-mapped register bank.
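For instance (a hedged sketch; the register address and wire format are made up, not from the thread):

    #include <stdint.h>

    /* Hypothetical on-the-wire header: the protocol, not the host machine,
     * dictates the field widths, so fixed-width types are the natural fit. */
    struct wire_header {
        uint32_t magic;
        uint16_t version;
        uint16_t flags;
        uint32_t payload_len;
    };

    /* Hypothetical memory-mapped 32-bit status register at a made-up address. */
    #define STATUS_REG (*(volatile uint32_t *)0x40001000u)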

The justification for <stdint.h> is that it's better to have one way of defining these types in the language than to have every program, and every library in every program, rolling its own configuration system for detecting types and the typedefs which name them. Let's see: for calling GLib we use guint32, for OpenMAX we use OMX_U32, ...
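A sketch of the kind of hand-rolled detection <stdint.h> replaces (the my_u32 name is hypothetical):

    /* Pre-C99, every project (and every library: guint32, OMX_U32, ...)
     * detected the types itself, roughly like this: */
    #include <limits.h>

    #if UINT_MAX == 0xFFFFFFFFu
    typedef unsigned int  my_u32;
    #elif ULONG_MAX == 0xFFFFFFFFu
    typedef unsigned long my_u32;
    #else
    #error "no 32-bit unsigned type found"
    #endif

    /* With C99, #include <stdint.h> gives everyone one spelling: uint32_t. */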

Funny how these situations persist almost 20 years after stdint.h was standardized (and a few more years after it existed as a draft feature).


Well, except that I don’t think most software authors actually know where it is possible to safely use an int versus one of the fixed-width stdint types. In particular, you now need to make sure that your code works correctly no matter what the actual size of an int is. This involves complicated knowledge like the integer promotion rules and how they interact with differently sized int, long int, etc. So instead of having portable software, you just have software that may fail in unexpected ways on different kinds of machines. I don’t actually know the rationale, but I would think that making it easier to write portable software was one of the goals of introducing stdint.
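For example (a minimal sketch of that interaction, not code from the thread):

    #include <stdint.h>

    uint32_t mul_bad(uint16_t a, uint16_t b)
    {
        /* Where int is 32 bits, a and b promote to *signed* int, so
         * 0xFFFF * 0xFFFF overflows int: undefined behaviour.  Where int is
         * 16 bits, they promote to unsigned int and merely wrap.  Same
         * source, different behaviour, purely because of the width of int. */
        return a * b;
    }

    uint32_t mul_ok(uint16_t a, uint16_t b)
    {
        /* Forcing an unsigned 32-bit operand makes the result well defined
         * regardless of the size of int. */
        return (uint32_t)a * b;
    }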


The promotion rules only get worse when you use a type alias like int32_t, which could be a typedef for short, int, long or conceivably even char.

An expression of type int doesn't undergo any default promotion to anything, period; the widening promotion is only applied to the char and short types.

An int operand may convert to the type of the other operand in an expression, and an int argument or return value will convert to the parameter or return type. That applies to int32_t also.
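A small illustration of those two conversions (take_u64 is a made-up function for the example):

    #include <stdint.h>
    #include <stdio.h>

    static void take_u64(uint64_t x)
    {
        printf("%llu\n", (unsigned long long)x);
    }

    int main(void)
    {
        int i = -1;
        uint64_t u = 1;

        /* Usual arithmetic conversions: in i < u the int operand converts
         * to the other operand's type, uint64_t, so -1 becomes 2^64 - 1 and
         * the comparison is false. */
        printf("%s\n", i < u ? "i < u" : "i >= u");   /* prints "i >= u" */

        /* Argument conversion: i converts to the parameter type uint64_t,
         * so this prints 18446744073709551615. */
        take_u64(i);
        return 0;
    }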

Anyway, you really have to know the rules to be working in C. Someone who uses fixed-width types for everything doesn't know what they are doing and is just taking swipes at imaginary ghosts in the dark out of fear.


> The program becomes less limited as it is ported to more capable machines with bigger integers, instead of continuing to pretend that everything is a 32-bit i386.

I'm guessing you've never ported code that was written for 32-bit ints back to a 16-bit architecture.

Always better to make your type sizes crystal-clear to the reader, IMO, even if it risks using a suboptimal word size on some other platform down the line.
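A hedged sketch of the kind of breakage meant here (not code from the thread):

    #include <stdint.h>

    /* Written and tested where int is 32 bits, this terminates after 100000
     * iterations.  Ported to a machine where int is 16 bits, i overflows at
     * 32767 (undefined behaviour) long before reaching 100000. */
    void fill_bad(uint8_t *buf)
    {
        for (int i = 0; i < 100000; i++)
            buf[i] = 0;
    }

    /* Spelling out the width makes the assumption visible to the reader and
     * keeps the behaviour identical on the 16-bit port. */
    void fill_ok(uint8_t *buf)
    {
        for (int32_t i = 0; i < 100000; i++)
            buf[i] = 0;
    }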


> You need some low-level justification for using a uint32_t and such, like conforming to some externally imposed data format or a memory-mapped register bank.

This is backwards. You should use uint32_t etc. like any higher-level language would, unless you have specific reasons to know you need a machine-sized type. Making your code behave differently in some arbitrary, untested way on machines with different default int sizes isn't going to make it "less limited", it's just going to make it broken.


Can you cite one page in K&R2 where the authors use some 32-bit-specific integer? Or any other decent C book: Harbison and Steele? C Traps and Pitfalls?



