When I was learning C a couple decades ago, pointer syntax never made any sense to me until something suddenly clicked with “put it directly next to the type name and not next to the variable name”. Ever since then I have not been able to figure out why it is commonly taught next to the variable name.
I mean, aside from enabling declarations like:
char *a, *b;
But I have long since found the tradeoff to be worth the syntactic clarity.
The best argument against this, and for leaving the * by the variable name, is this declaration:
char* a, b;
Now a has type char * but b is just char. It's probably not what the author meant, and even if it was intentional, it's easy to misread. Better to write:
char b, *a;
Or, if you meant it this way:
char *a, *b;
“Well, don’t declare multiple variables on the same line,” you respond. Sure, that’s good advice too. But in mixed, undisciplined, or just old code, it’s an easy trap to fall into.
Why doesn't `char* a, b;` apply the char* type to both? (That is, why didn't they design it that way?)
I assume there was some reason originally, but it's made everything a bit more confusing ever since for a lot of people. :/
Edit: Apparently it's so declaration mirrors use. Not a good enough reason IMO. But plenty of languages have warts and bad choices that get carried forward. I'm a Perl dev, so I speak from experience (even if I think it's not nearly as bad as most people make out).
In the olden days of C, pointers were not considered types on their own (you cannot have just a pointer, a pointer must point ‘to’ something, grammatically speaking). The type of the object of that declaration is a char. So it’s not really read as ‘I’m declaring a char-pointer called a’, it’s more along the lines of ‘I’m declaring an unnamed char, which will be accessed when one dereferences a’. Hence the * for dereferencing.
> Ever since then I have not been able to figure out why it is commonly taught next to the variable name.
C does this very cute (read: horrifyingly unintuitive) thing where the type reads how it's used. So "char *a" is written so because "*a" is a "char", i.e. pointer declarations mimic dereferencing, and similarly, function pointer declarations mimic function application, and array declarations mimic indexing into the array.
Some people (me included) find it clearer to write the type, followed by the variable name. So just as you'd write `int a`, you'd write `char* a`.
The fly in this ointment is that C type syntax doesn’t want to work that way. It’s designed to make the definition of the variable look like the use of the variable. A clever idea, but one almost no other language shares, which BTW is why I think you should really use typedefs for any type at all complicated in C.
For example, the type-then-variable style falls down if you need to declare an array
int foo[4]
or a pointer to a function returning a pointer to an int
int *(*a)(void)
(...right?).
So I’m perfectly willing to do it the “C way”, I just find it more readable to do it the other way unless it just won’t work (and then prefer to use typedefs to make it work anyway).
In my own projects, I like to put the * on its own, like so:
int const * const i;
This is nice because you can naturally read the signature from right to left. "i" is a constant pointer to a constant integer. It's a little unconventional, but I think it's a really clear way to convey the types.
I find it clearer to keep the * with the type. Otherwise the reader (not the compiler) can mistake it for a dereference of the variable a or b.
In C, the line `int *a;` declares a variable, `a`, that, when dereferenced using the `*` operator, will yield an int.
In C++, the same line declares a variable, `a`, of type `pointer-to-int`.
C cuddles the asterisk up to the variable name to reflect use. C++ cuddles it up to the type because it's a part of the type. Opinions differ on whether C-style or C++-style is better, and a lot of cargo-cult programmers don't bother adjusting the style of the code snippet they paste out of Stack Exchange, so you see a lot of mixtures.
Is it equivalent? Because that seems to be the only difference between the two editions for the newprint function, and the author says one of them happens to compile.
The difference is the function prototype `void newprint(char *, int);` at the start, which is missing in the second example. With the forward declaration, the compiler knows what arguments newprint takes and errors out if you pass something else. C is compiled top to bottom, so in the older version of the example the compiler has no way of knowing what number of arguments the function takes at the point where it is called. In (not so) old versions of C, the compiler would then implicitly declare a function taking whatever you passed it.
To add some flavor to the other answers, in K&R C the parameter passing was very uniform: everything (save doubles) got promoted to word-size and pushed on the stack. chars (and shorts) became ints. You couldn't pass aggregates. doubles were a problem, but were rare.
Because the parameter passing was uniform, you didn't need to inspect anything at the call site. All functions get called the same, so just push your params and call it. Types were for the callee and were optional. This is what powered printf, surely the highest expression of K&R C.
In modern C-lineage style, we enumerate our formal parameters, and variadic functions are awkward and rare. But LISP and JavaScript embrace them; perhaps C could have gone down a different path.
As far as I know, the promotion to word size (now 32 bit) still happens. Also if you have more than a fixed number of params (defined by the platform ABI), parameters are still pushed on the stack. You can't push 8 or 16 bit values on the stack. The stack pointer is always a multiple of 4.
The interesting thing is that with K&R C a function declaration/prototype is optional. That means you can call a function that the compiler has not even seen. Mismatches in parameter/return types (which are optional and default to int in declarations as well) are normally not a problem, because of the aforementioned promotion. If you have the declaration, then the compiler will at least let you know about wrong number of arguments.
Ugly? I rather miss the traditional style of argument passing. My first real paid job used Microware's OS-9 compiler which only supported the traditional syntax, did absolutely no optimisation that I could discern, and made you use "register" if you wanted a local variable assigned to a register instead of always being spilled to the stack. (In fact looking back at it now I wonder if it was just a rebadged PCC).
As an aside it's not always more verbose because you can group parameters by type, eg:
int foo (c1, i1, c2, i2)
char c1, c2;
register int i1, i2;
{
...
}
That is pre-ANSI C. The parameter types are declared between the end of the argument list and the start of the body, instead of inside the argument list.
ANSI adopted function prototypes from C++ in C89. Originally C had no type checking on function parameters. All arguments were promoted to word width and pushed to the stack. If you gave a function fewer arguments it would just read garbage off the end of the stack. If you gave it excess arguments they were silently ignored.
Sort of. ANSI introduced prototypes, but old-style function declarations and definitions, though they were declared obsolescent, remained valid (i.e., any conforming C compiler is still required to support them).
As of the 2011 standard, that's still the case. I think that C2X will finally remove them.