This isn't crankiness. If you try to generalise the thoughts on asm/C, it boils down to this:
- Abstractions will always be leaky
and
- be prepared for when an abstraction leaks.
As for the ideology part:
think for yourself and try to see beyond the hype
And I have to say I absolutely agree.
For example, I've written maybe a handful of lines of assembly, and maybe as many lines of inline assembly. However, understanding assembly language has saved me many times. Compiler bugs exist. Sometimes you don't have access to debug info and a debugger, just the instruction pointer and a register dump. Sometimes the compiler just can't optimise well enough.
C is just one level of abstraction higher, so that's probably the level that's going to leak through when your high-level-language interpreter fails for some bizarre reason. (recent Ruby segfaults anyone?)
It's not turtles all the way down, but rather a menagerie of different abstractions, and it's useful to know your zoology, as it were.
I've just finished my freshman Java class and was severely disillusioned at how far from 'programming' it felt. I picked up a copy of K&R and have been enjoying myself since. I actually feel bad for my fellow majors who think that Java/OOP is the only way that things can be done.
"I actually feel bad for my fellow majors who think that Java/OOP is the only way that things can be done."
I feel bad for people that think imperative-style programming is the only way things can be done.
C is not the anti-Java. It's Java minus modern features and with more rope to hang yourself with. (Then again, someone actually told me that spending an entire day with Valgrind hunting down a memory leak was fun.)
If you want power, non-Java-ness, and modern features, learn a language like Lisp, Haskell, OCaml, etc. And learn Perl for actually getting things done.
... meanwhile the entire Internet is powered by software written in C, including the very same browser you typed this message in.
I've been hearing how obsolete C is ever since high school (1994), yet successful programs not written in a C-family language are an order of magnitude less common: everything from web servers to browsers, OSes, office suites and imaging software - all C. Meanwhile I've never had to install a JVM on any of my computers; somehow I've never run across a single piece of useful software written in Java (with the exception of evaluating Jython and toying with Eclipse & Netbeans at work).
Perhaps it's running on my phone. If so, that's probably why the damn thing feels so anemic compared to the Objective-C-powered iPhone.
I don't know how C programmers do it, but somehow they just get things done.
Moreover, C remains the only choice for writing the most portable code ever. There isn't a reasonably popular CPU on this planet that doesn't have a C compiler for it, while Java runs only on "java computers".
C is entrenched. Everyone uses it, and because everyone uses it, everyone continues to use it. That doesn't make it good, though, just popular. Read pg's "Beating the Averages" for a more eloquent explanation.
You seem to think that Java is C's competition. It's not. I think we can all agree that Java is one of the worst programming languages ever created. (The VM is nice though.) Nobody is telling you to use Java or C++. They are telling you to use a dynamic language (Perl/Python/Ruby) or a functional language (Haskell/OCaml/Lisp).
"I don't know how C programmers do it, but somehow they just get things done."
Sure, but their programs usually leak memory (ever use Firefox?), and their libraries handle errors by terminating the whole program (see PulseAudio; any error condition immediately calls exit(1)).
"There isn't a reasonably popular CPU on this planet that doesn't have a C compiler for it, while Java runs only on 'java computers'."
I don't optimize my life for solving problems I don't have. Every machine I want my code to run on has Perl, Lisp, Haskell... whatever. If C was the highest-level language I could use, then obviously I'd use it. But it's not, so I don't.
Finally, gcj will compile Java down to machine code; ghc compiles Haskell to native code. So I don't ever see a situation where I will absolutely have to use C for some reason (other than interfacing with C libraries, of course).
jrockway: you misunderstood my intentions. I don't code in C or Java; I was merely pointing out that it is way too early to declare C obsolete, since so much C-derived development is going on.
In case you're familiar with gcj, what's the status of that project? Their page hasn't been updated lately - is it being actively maintained? It looks like the most painless way to integrate some Java code I have with Ruby... and I don't want to bet on a dying horse :)
And as a side note, regarding "Beating the Averages": there are many ways to beat the averages, and beating them ONLY by using a higher-level language is like beating them with Aeron chairs: yes, some languages give you a productivity boost over others, but I disagree with Paul on the magnitude of that effect - in fact, I'd say that the availability of certain libraries beats any boost provided by the language alone.
And, to conclude, here's a little example from real life: a group of programmers I know were working on some nifty AI stuff for automated document processing, and they were doing it in C++, which I disliked. They just discovered a totally different market in the embedded space, which allows them to grow and make money. Thanks to their language of choice (C++), they were able to beat an average Java/Python/Lisp-powered startup that doesn't have access to that segment.
"Sure, but their programs usually leak memory (ever use Firefox?), and their libraries handle errors by terminating the whole program (see PulseAudio; any error condition immediately calls exit(1))."
That's a poorly written program. You can just as easily call System.exit(1) in Java, or, to do it in a more Javaesque way, throw new Error("im too lazy to do this right").
But C doesn't have any mechanism other than exiting or returning a value that indicates "false". Both are non-optimal.
One (exit) makes the code simpler, but makes the consuming application extremely flaky; the other bloats the code with a weird API where functions modify their arguments:
errorcode_t do_some_work(int arg1, int arg2, int *result)
and forces the client code to check every single error and decide right then and there what to do.
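To make the shape of that API concrete, here's a minimal sketch of the error-code/out-parameter style (the error enum and the division example are hypothetical, purely for illustration):

    #include <stdio.h>

    /* Hypothetical error codes, just to show the shape of the API. */
    typedef enum { ERR_OK = 0, ERR_DIV_BY_ZERO } errorcode_t;

    /* The result travels through an out-parameter; the return value
       carries only the status. */
    errorcode_t do_some_work(int arg1, int arg2, int *result)
    {
        if (arg2 == 0)
            return ERR_DIV_BY_ZERO;
        *result = arg1 / arg2;
        return ERR_OK;
    }

    int main(void)
    {
        int value;
        /* Every single call site has to check the status right here. */
        if (do_some_work(10, 0, &value) != ERR_OK) {
            fprintf(stderr, "do_some_work failed\n");
            return 1;
        }
        printf("%d\n", value);
        return 0;
    }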
It's possible to write code that way, but why would you want to? This is a solved problem; why keep using a square wheel when round ones are available?
(Yes, there is longjmp. Still a lot of work to use that square wheel.)
In the end, don't think I care what programming language you use. It doesn't matter. But I do think that C should be dying off now; it is much less useful than it was 30 years ago.
> But C doesn't have any mechanism other than exiting or returning a value that indicates "false".
Of course it does; in fact, you just said it yourself: long jumps. Back in the late '90s I worked on firmware for point-of-sale terminals, PIN pads, etc. It was written in C and its error handling was exception-based. Behind the scenes it was just a handful of setjmp/longjmp wrappers, but it did nevertheless implement the semantics of try/catch/etc.
Additionally, "exiting or returning false" are not the only two options, nor are they even the most commonly used ones - at least in the projects that I was exposed to. Kernel code (e.g. Linux, BSD) routinely uses an int as a return value, and it still somehow manages to be both readable and functional without being "bloated" or using a "weird API".
Yet another thing to consider is that optional exceptions (such as those in C++) come with a non-negligible performance hit, so it is considered an absolute no-no to use them in the "fast path" parts of the code.
"Of course it does, in fact you just said it yourself - long jumps. Back in late 90s I worked with the firmware for the Point of Sale terminals, PIN pads, etc. It was written in C and its error handling was exception-based. Behind the scenes it was just a handful of long/setjmp wrappers, but it did nevertheless implement semantics of try/catch/etc."
David Hanson's book "C Interfaces and Implementations" explains how this is done and provides (very high quality) source code you can use/learn from.
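For a rough idea of how such wrappers look, here's a minimal sketch (the macro names are mine and purely illustrative; a real implementation, like the one in Hanson's book, keeps a stack of jmp_bufs so handlers can nest and carries information about the exception):

    #include <setjmp.h>
    #include <stdio.h>

    /* A bare-bones TRY/CATCH built on setjmp/longjmp. */
    static jmp_buf current_handler;

    #define TRY      if (setjmp(current_handler) == 0)
    #define CATCH    else
    #define THROW(e) longjmp(current_handler, (e))

    static void parse_record(int broken)
    {
        if (broken)
            THROW(1);   /* unwinds straight back to the enclosing TRY */
    }

    int main(void)
    {
        TRY {
            parse_record(1);
            puts("parsed fine");
        } CATCH {
            puts("caught a parse error");
        }
        return 0;
    }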
None of the modern browsers, unless we're counting Epiphany, are written in C, and it uses a C++ rendering engine. Safari is an Objective-C / C++ mix, Firefox, Flock, Camino, Konqueror and IE are C++.
"""I've been hearing how obsolete C is ever since high school"""
C will not be obsolete for a very long time. It is, however, completely silly to compare it with Java: they are completely different languages, created for different purposes.
You've missed the point. Learning C is not about imperative-style programming. It is about seeing one abstraction level down relative to your favorite programming language.
It may not be your favorite flavor, but it's a perfectly valid and reasonable choice.
I didn't say not to use C; there is plenty of good software written in C. Like... everything :)
But, I don't think it's very valuable to know. It doesn't put you in a good position to write correct, maintainable software. There are no tools for that, only for twiddling bits. Therefore, I think C should be relegated to interfacing with UNIX (which you Just Have To Do) and small number-crunching routines. In those cases, there's really nothing to screw up (UNIX interface -- allocate resources, mutate them, deallocate; number crunching, call a function with some arguments). For both of those uses, C has been very successful. Just don't write a big application with it, there are much better tools for that.
I'd actually claim that C is very valuable to know, and while I'm at it, so is assembly. Pretty much all high level languages are written in C (or in a language that is itself written in C). Anyone that doesn't understand enough about how a modern CPU operates and cannot implement a compiler/interpreter for a higher level language in C should not claim to be a hacker.
This has been covered elsewhere, but the job of a university isn't to give you what you want. It's to give you perspective and a different way of thinking. If they teach only Java, you're missing out on a lot and you'll only be prepared for Java. On the other hand, if you're taught Haskell, Scheme, Smalltalk, Forth, C, etc. you'll (hopefully) learn how to pick up any language easily and always have a job, even when the industry-standard/popular language changes.
But anyway, no arguing about this because it's been beaten to death (and I just woke up from a nap, don't anger me! :P)
Most of the article is "why I think learning assembly is important". I don't know assembly, but I do know how computers work (and I even know you have to trigger an interrupt to make a system call). I think he should just get to the point and say "people should learn how the computer works", rather than suggesting that you learn a programming language that makes you aware of some aspects of how a computer works.
It's like a very sharp knife: you can cut yourself as well as whatever it is you're trying to slice. It's really easy to write broken or inflexible code in it.
This is true of all powerful tools. I've seen horribly unmaintainable Lisp and Haskell. I've also seen very clean and maintainable Perl. C is no different than any other programming language, although it does actively discourage good code. You can ignore the discouragement, but why not use a language that encourages maintainability, or at least automatically deallocates memory when you're done using it?
Anyway, I don't think this article shows why the author likes C, or why someone else should learn C. Sure, learn C if you want to learn C. But you can also learn about your computer by just learning about your computer.
Dial your time machine back a few decades, and you'll see assembly programmers telling those C upstarts that they should learn assembly for all the same reasons. A few more, and it's the logic circuit designers griping that you can't trust instruction cards because you can't see the program. A few more, and it's the vacuum tube enthusiasts complaining that digital switches are slower, less reliable, and too small to debug properly.
If you want a visceral relationship with your hardware, get a soldering iron and go to town. That's a respectable hobby, and confers mega geek-points. If you want to write software and be productive at it, learn a real language.
C is a powerful tool, and every programmer should know it. But it's woefully underpowered as a full-time software development tool.
And you know what? All those curmudgeons were right (except for the bit about vacuum tubes: transistor logic was never slower or less reliable than tubes, ever). If you don't know how your tools are put together, there will be areas of software development at all levels which will forever remain voodoo to you.
You can do an awful lot of useful work while relying on voodoo, but eventually the voodoo will catch up with you and you'll end up with an ugly mess where a better trained developer would produce an elegant hack. That's no less true for a web developer using rails than it is for an embedded systems driver developer.
Basically, I've never known a great hacker who doesn't understand CPU architectures and at least a little bit of digital logic design.
As for C specifically, let's just agree to disagree. I'll put the sum total of "great hacks" written in C up against any amount of C++ or Java you can find. It's true that for some problems (web development being a good example) there are better tools (scripting languages, databases) for the domain. But that alone doesn't make C "woefully underpowered".
It's implemented in machine language somewhere. Programmers should know what a register is. It's implemented in electrons, too. And, some of the best programmers I've known were EE majors.
Don't get me wrong, it's wise to know your craft. That's never a bad thing. But it's foolish to use a tool that is, frankly, not as powerful.
Saying a tool is "powerful" doesn't mean "you can do lots of stuff with it." You CAN saw down a tree with a screwdriver. But a chainsaw is a much more powerful tool for that job. Yes, a screwdriver can do lots of things that a chainsaw can't, which only proves that it's a more versatile tool. The power of a tool depends on the context of the problem you're trying to solve.
I'm guessing that screwdrivers are used in the assembly of chainsaws. They still suck for cutting down trees. Same with C vs just about every other language. C is a fairly small step up from assembly. An important step, no doubt, but for most tasks that software developers face, especially on the web, it's not appropriate.
I'm going to go out on a limb and guess that you've never actually worked seriously in C. You probably had to maintain someone else's code at some point, got confused by the linker or debugger semantics, and decided you hated it. [disclosure: I peeked at your blog and resume, and it seems I'm broadly correct.] That's fine: work in C is grounded deeper than web development. You don't have the scaffolding around you to provide a fallback for changes. Sometimes you need to fix bugs from first principles, and that takes a lot more knowledge up front than web work does.
But that's not the same as saying that someone who has that background is as unproductive in C as you are. Seriously: spend some time writing something serious in C, you might like it better. It will certainly seem more powerful than it does to you right now.
You're "broadly correct" in the sense that the Republican Party is "broadly libertarian."
I spent 5 years writing almost 100% C and C++ in college, with detours into Lisp, Java, VB, and Ada. (Actually, I'd grown up on Basic, so VB wasn't a detour, really.) After school, I worked at a VB shop, but built a few side projects in C++. I've had to fix things that were broken, and I don't have a problem with the linker or debugger semantics. I've used templates, and both has-a and is-a object extension. I am certainly not a world-class expert in it, but it's not like I tried it once, got burned, and decided I hated it.
I'm a web developer because I like web development better. For a variety of reasons, not the least of them the challenge of building code that is so portable it will run on 12 different browser/OS combinations, and the opportunity to work in many different languages, I find front-end development much more rewarding.
My aversion to C is based on a simple rule of thumb:
(usefulness of a feature) / (tokens required to implement it)
Lines of code is a pretty good fill-in for "tokens", but I had to change the rule when someone pointed out that a 500 character regex in Perl is hardly an elegant or maintainable program :)
Another way to express this is that the elegance of a solution is inversely related to the number of tokens that are neither essential to the solution nor aid in understanding the intent of the code. For example, compare Javascript's closures with Lisp's or Erlang's syntax. On the other side, compare any of those with the mess of class and object boilerplate in C++ or Java or PHP.
Conceptual cruft is even more pervasive. If encapsulation requires class/object constructs, then there is no way to get past that.
Some programmers will be able to create more elegant solutions than others, of course. But without changing languages, you can't get past the cruft that is built into the language. (Or, in the case of C, the cruft that was not removed from the language.)
Again, "power" does not mean "you can do anything with it." C can be used to tell a computer to do anything that computer can do. But I'd argue that that's a naive view of the power of a programming language. There are things that are trivial to express in other languages, and require several lines to do in C. Just something as simple as comparing that two strings hold the same value requires about half as much conceptual overhead in almost any other language.
Or is there a version of C with lexical closures, first-class functions and strings, and a garbage collector that I'm not familiar with?
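As a small illustration of the string-comparison point (a minimal, self-contained sketch, nothing more):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *a = "hello";
        char b[] = "hello";

        /* This compares the pointers, not the contents -- a classic slip
           that can't happen where == means value equality. */
        if (a == b)
            puts("same object");

        /* Comparing the contents takes a library call plus the convention
           that 0 means "equal". */
        if (strcmp(a, b) == 0)
            puts("same value");

        return 0;
    }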
I'm not sure what you mean by "You don't have the scaffolding around you to provide a fallback for changes."
The claim that C programming takes more knowledge up front than web work does is frankly presumptuous and a bit misguided. I've seen the Javascript that C experts write. It's terrible. In fact, I'd argue that since quality web work requires an understanding of semantic HTML, CSS, Javascript, maybe some Flash and/or Canvas chops, and probably at least one kind of server-side language like PHP or Java, it requires much more knowledge up front than almost anything else out there.
Very well said. This was the point I was trying to convey, but you did it better ;)
Anyway, I think programmers think too much in black and white. There are people that say you never need to know how the computer works. Then the rest say that you always need to think about how the computer works. The reality is somewhere in the middle. There have been a few cases in my programming career where knowing how the hardware actually worked was helpful. But it was never critical, just good to know.
(Another similar area is data structures. Yes, it's good to know how merge sort and quick sort and B+ trees work. That direct knowledge has helped me a number of times, either because I needed my own tree or because thinking about how merge sort worked helped me solve a related problem. But the other 99% of the time, I'm not thinking about sorting or the tree balancing algorithm. I just use libraries. So theoretically, it's possible to write good programs without knowing that stuff.)
Yeah, but think about what you're saying: the dichotomy you're drawing here between "essential" knowledge and "helpful" knowledge is precisely that between the skills of a merely adequate developer and a great hacker.
Sure, an IT wonk can churn out working software. But they'll never aspire to build the world's next great startup. Have some standards, man. You can't be a true hacker if you're cutting corners like this. :)
I think he's basically saying you should know how computers work, and learning C is a pretty good way to do that.
I like C because it's pretty "close to the metal" but much easier and more portable than assembly. I can usually visualize what's going on under the hood if I need to. That's much harder with higher level languages. And on top of that it's a useful language to know.
Knowing assembly would also help you understand how computers work, even more so than C. But it's not as useful these days.
The article reminds me of my granddad complaining that us youngsters don't know how to kill and clean chickens because we can just go to the grocery store and buy them ready to go.
The #1 thing that I hate about C++ is all the fcking side effects that people can bury in their constructors and everywhere else under the fcking sun. I want to start swinging a baseball bat at all the fcking idiots that don't think before using the latest fcking cool C++ whatchamacallit.
When you are maintaining a very, very large project, the best thing about C is that when you read the code it is fairly obvious what is going on, and you don't have to worry about some innocent-looking code causing a bunch of other crap to be executed behind your back.
On the downside of C, namespace collisions are a problem, as is the lack of a way to hide data and functions.
I vote they should add C++-style classes to fix the namespace problem, and private and public keywords to hide data and functions. I don't want all of the object-oriented B.S. like constructors and destructors that makes it difficult to read and debug other people's code.
Don't get me wrong.....I think some features of C++ are very useful, especially operator overloading, which has allowed us to make it easier to code big programs that handle lots of longer integers and weird integer sizes like 56 bits, 112 bits, or 128 bits. Yeah, these days we have uint64_t in C99 compilers, but back in the mid-'90s we had to write code in C++ just because it had operator-overloaded classes, which we also had to write.
Even though I haven't stated it earlier...I do real-time embedded software...so I'm not going to be writing an interrupt handler in Perl or Python anytime soon. C is not going away in the real-time / embedded software / driver world anytime soon. It is still very, very common!
Obviously you should be familiar with C. This entire conversation, however, is misguided by comparing it to Java or to fancy high-level languages. It's apples and oranges.
Of all of those, only C/C++ offers extremely high performance as its key feature. Being primarily a shorthand for assembly, C occupies a permanent place as the systems and performance language of choice and will remain there. If your application demands performance, use it. If not, then don't.
Sure, but first write the application in a higher level language and then, if required, you can re-write the bottleneck in C. Otherwise what you are doing amounts to premature optimisation.
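In practice, "rewrite the bottleneck in C" often means building a small shared library and calling it through the high-level language's FFI. A minimal sketch (the function name and build line are illustrative, not from any particular project):

    /* hot_loop.c -- an illustrative bottleneck moved into C.
     *
     * Build as a shared library, e.g.:
     *     cc -O2 -shared -fPIC -o libhotloop.so hot_loop.c
     * then load it from the host language (for example, ctypes in
     * Python, Fiddle in Ruby, or an FFI module in Perl).
     */
    #include <stddef.h>

    double sum_of_squares(const double *xs, size_t n)
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++)
            total += xs[i] * xs[i];
        return total;
    }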
>> C is a powerful tool, and every programmer should know it. But it's woefully underpowered as a full-time software development tool.
Only an inexperienced fool says such a thing! The bottom line is that it all depends on what you are trying to do. For embedded / real-time / drivers / OS work, both C and C++ are perfect, but for middleware and scripting and many other high-level things they aren't the best language.
Completely agreed. I've been doing Euler problems in Ruby/Java, and my friend does them in C. The difference there is very noticeable, and quite interesting at the same time. Obviously our programs are only compared when we use the same algorithm.
The #1 group that says C should be dying off is f-ing book authors and sellers! Why? Because they can't write a bunch of new books on an older language. It is old...so we can't write books and make money on those older things. New stuff is cool...yeah, new stuff is cool...buy our books...buy our books.
What p points to shouldn't be modified (but p can be changed to point to something else).
q mustn't be changed to point to something else (but what it points to can be changed -- except that in this particular example, q will point to a string constant, which shouldn't be modified).
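The declarations being discussed aren't shown here, but presumably they were something like the following (my reconstruction, for illustration):

    int main(void)
    {
        const char *p = "alpha";  /* the chars p points to are read-only;
                                     p itself may be repointed */
        char *const q = "beta";   /* q itself can't be repointed; *q would be
                                     writable, except that here it points to a
                                     string literal, which must not be modified */

        p = "gamma";        /* fine: repointing p is allowed */
        /* *p = 'A';     would not compile: target of p is const */
        /* q = "delta";  would not compile: q itself is const    */
        (void)q;
        return 0;
    }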
int i[10]; does allocate memory for 10 integers (on the stack)
So i has the type int*. A pointer to a value: an array. A pointer to a value is the same thing as a pointer to the first value in the array. That's why you can use ++i, increment the pointer, and it will point to the second value in the array (++i => *i == i[1])
So &i is, wait for it, an int**, that is, a pointer to an array. So if you have an array of arrays, then you need to dereference twice. E.g. argv: it's a char**, so an array of strings, which is an array of arrays of characters.
Yeah, but that's not the point. It's about which address points to which place. An array always points to its 0th member in C by default - that was the point. Declare int *j = i; and you can do as much pointer shuffling as you want.
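For what it's worth, a tiny sketch of the types in play (strictly speaking, &i has type int (*)[10], a pointer to the whole array, rather than int**, and the array name itself can't be incremented; only a separate pointer like j can):

    #include <stdio.h>

    int main(void)
    {
        int i[10] = {0};

        int *j = i;          /* i "decays" to a pointer to its first element */
        int (*pa)[10] = &i;  /* &i: pointer to the whole array, type int (*)[10] */

        printf("%p %p %p\n", (void *)j, (void *)i, (void *)pa);  /* same address */

        ++j;                 /* legal: j now points at i[1] */
        /* ++i;  would not compile: an array name is not a modifiable lvalue */

        printf("%d\n", *j == i[1]);  /* prints 1 */
        return 0;
    }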