I think there is a fundamental problem that isn't addressed in teaching people how to code. The problem is that every single course/book talks about the tools of the trade but not the art. Most books start with:
- Variables
- Loops
- Functions
... etc.
In doing so, they simply overload the student with syntax memorization and conceptual noise. It bothers me so much that very few books (Kernighan's being a rare exception) talk about WHY. WHY. WHY is a variable needed? WHY is a function needed? WHY do we use OOP? Every single book out there jumps straight into explaining objects, how to create them, constructors, blah blah blah. No one fricking talks about what the point of all this is.
Teaching syntax is like building muscle memory for learning guitar. It is trivial and simply takes time. Syntax anyone can learn, and it is only one part of learning how to code. Concepts are explained on their own without building on one another. The famous Python book (Learn Python the Hard Way) explains loops in their own chapter and provides examples, but it never builds on the idea. There should never be a separate chapter for variables, loops, functions, etc. Chapters should be:
Chapter 1. Setting up the problem (Goals)
Chapter 2. Defining Inputs/Outputs (API)
Chapter 3. Automating something (Variables, loops; see the sketch after this list)
Chapter 4. Abstraction of something (Functions)
Chapter 5. More automation! (Combining it all)
Chapter 6. Splitting code into multiple modules (Growing the project)
Chapter 7. Objects (New type of abstraction, OOP)
Chapter 8. Reusability of Classes (Inheritance)
Chapter 9. Safety/Security (Encapsulation, tie it back to Chapter 2.)
etc...
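To make Chapters 3 and 4 concrete, here is a minimal C sketch (my own illustration, not from any of the books mentioned) of the progression from repeating yourself by hand, to a loop, to a function:

    #include <stdio.h>

    /* Repetition done by hand: tedious and error-prone. */
    void greet_by_hand(void) {
        printf("Hello!\n");
        printf("Hello!\n");
        printf("Hello!\n");
    }

    /* Chapter 3: a loop automates the repetition. */
    void greet_with_loop(int times) {
        for (int i = 0; i < times; i++)
            printf("Hello!\n");
    }

    int main(void) {
        /* Chapter 4: the function is the abstraction - the caller no
           longer cares HOW the greeting is repeated, only that it is. */
        greet_with_loop(3);
        return 0;
    }

The WHY falls out of the progression: the variable exists because the count might change, the loop exists because copy-paste doesn't scale, and the function exists so nobody downstream has to think about either.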
The best coding resources I've found are things like:
NAND to Tetris [1]
Handmade Hero [2]
The Nature of Code [3]
Harvard CS50 [4]
How to Design Programs [5] (thx minikomi)
This is learning how to produce music. Not learning the F chord. Teaching how to code is fundamentally broken and very few books/courses do it well.
Along those lines, I think CS 106A from Stanford may be the best intro I’ve ever seen.
It uses Karel the Robot, which is a kind of pseudo-code robot written in a Java environment. So you’re trying to use code to solve problems, and use code composition techniques to make it happen.
For example, Karel can’t turn right, so you need turnLeft(); three times. But that’s a pain, so create a turnRight() method that tells Karel to turn left three times. And just like that you understand function composition and why you need functions.
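Sketched in C rather than Karel's Java (turn_left() below is a stand-in for Karel's turnLeft() primitive, printing instead of moving a robot), the idea is just:

    #include <stdio.h>

    /* Stand-in for Karel's built-in turnLeft() primitive. */
    void turn_left(void) {
        printf("Karel turns left\n");
    }

    /* Karel has no native right turn, so we compose one out of the
       primitive we do have: three lefts make a right. */
    void turn_right(void) {
        turn_left();
        turn_left();
        turn_left();
    }

    int main(void) {
        turn_right();  /* prints three left turns */
        return 0;
    }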
Similar problems force you to use every low-level aspect of programming, including loops, variables, solving off-by-one errors, etc. And by the time you're done you're actually using classes and kind of accidentally writing Java, without fully understanding that's what's happening.
I consider it a work of art, and wish someone would make an open-source, extensible Karel environment that's really well done and supports other languages. In fact I may just build it.
Sounds like that must have been the inspiration for Swift Playgrounds on the iPad, which is also a great teaching resource, though aimed at kids. It uses the very same turnRight example.
Honestly, it's like you think they want to teach coding instead of sell a book. They couldn't possibly cram a slipshod curriculum into 24 hours if they actually wanted to do a good job teaching people to code.
It's practically right there in the title. I think this will actually evaluate true in every single programming language as well, even those yet to be written:
char title[] = "Teach Yourself C in 24 Hours";
char meaning[] = "Cynical money-grab from people that haven't learned there is no shortcut to learning";
strcmp(title,meaning) == 0;
Edit: I've always been a fan of the Ritchie book.
Edit 2: The most useful courses I ever took, that taught me the mental framework for approaching code or any code-like problem (and really any problem that can be chunked down to discrete bits of work) are the following, roughly in order of importance:
1: Assembly
2: Data Structures
3: Algorithms
I won't even bother telling you which languages (and assembly flavors) were used-- it doesn't matter, and that's the point.
Thanks for the list. I've worked my way through part of NAND to Tetris so far. NAND to Tetris is a little more than just a coding resource: it covers first principles of computing through to programming. The lessons also come in the form of a book, The Elements of Computing Systems (listed at the top of the page), for anybody who doesn't want the Coursera version.
CS50 is great for core principles, but you need to go into that course with some knowledge of, or experience with, a programming language. I found it just kind of dropped you into your first algorithm problem without teaching anything practical about implementation except how to compile a C program. Regardless, it's a great course.
I'll definitely be having a look at the others you and others listed. I like to run through these things to fill out my core understanding or to refresh it. And I still need to finish NAND to Tetris when I free up enough time to commit to it.
NAND to Tetris - yeah, it is a little more than just a programming course, but that's exactly the kind of teaching we need to do. I took CS1371 at Georgia Tech back in 2004 and it was completely and utterly useless. How does a multi-million-dollar college screw up teaching how to code so badly? I wish NAND to Tetris had been around.
CS50 - Just watch week 0[1] and it literally starts with a survey showing 73% of students have never taken a computer programming course or know anything about programming. The course starts with the absolute fundamentals - the decimal system versus binary, the power cord where electrons come into the computer - and then builds slowly toward introducing C.
I wasn't discounting your suggestions -- don't get me wrong. They're absolutely great for somebody wanting to do a deep dive. They're not a fast introduction to writing your first program, though. The logic required to understand high-level programming can be taught without diving that deeply. If somebody needed to write some simple math functions (and theoretically for whatever crazy reason wanted to write the programs in C), they would be able to learn to do so without such a deep dive into first principles. For real understanding, the courses are an incredible resource, especially for free!
This makes me think of the George Carlin joke: "Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?"
Yes, that's another problem with programming books and courses. Very few books cover intermediate/advanced topics, whereas these reference/glossary-type books pop up every week on Amazon.
Along the same lines, I tend to recommend that people interested in becoming a developer check out 'Code: The Hidden Language of Computer Hardware and Software'; it does a great job of explaining how computers work from first principles in an engaging way.
Thanks for the resource. I'm studying programming at a university and I just feel they are teaching everything at a very high level. The video on the nand2tetris website was fascinating. From the coding courses I've done on and off, I thought the chain ended at assembly language, but in reality the rabbit hole goes deeper! The book is available in my library and I'm going to grab it now!
I found the Harvard CS50 really flashy and full of fluff. Didn't enjoy it. I took an alternative to it, Introduction to Computer Science Using Python on edX.
I’m self taught, and I would have killed to have had resources like that in early days. I always got glimpses of what I wanted to understand, but it seemed like a lot of technique and very little “why” much of the time.
If you want a book to help get you started with TAOCP, I've heard Concrete Mathematics is good. It's the math precursor to TAOCP (so if that's what's holding you back, it could be helpful). I haven't finished Concrete Mathematics yet either :P so I can't tell you whether it actually helps with reading TAOCP. I would recommend knowing calculus, linear algebra, and discrete math before Concrete Mathematics. MIT OpenCourseWare should have all those courses, with Mathematics for Computer Science working as the discrete mathematics supplement.
I've never taken linear algebra, so that is what is holding me back right now from continuing Concrete Mathematics. I bought Gilbert Strang's linear algebra book and it seems good so far, but I am busy with schoolwork now, so I'll have to get back to it after I graduate in the spring.
Thanks. I actually have a Ph.D. in mathematics. It's just that I dislike programming. I've done it, I don't like it. When I started using computers, knowing how to program was close to mandatory.
The "in 24 hours" part is of course just book-title clickbait. I read Sams' C++ book, and the 24-hours figure makes lots of assumptions: general familiarity with programming, spending more than ten times that long doing exercises and understanding the material, and so on. And after all this, you will still just be a beginner, since you will have no actual experience. It naturally also leaves out lots of material judged unnecessary to fit the 24-hour time frame (though this may be less of an issue for the C book, since the language is so much smaller).
As for this particular link, all image links in the text seem to be broken, and a pdf of the book can be found elsewhere.
I totally recommend Sams Teach Yourself C++ in 21 Days by Jesse Liberty (I found it here: http://101.lv/learn/C++/). Since it is a rather old book, it may not be the best way to learn the basics of C++ itself (you should probably use a more recent book for that), but it's great for, more importantly, the basics of OOP, i.e. encapsulation, inheritance, and polymorphism/late binding. The animal examples are excellent, and I actually use them even today when I want to explain these concepts to somebody.
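For flavor, here's that kind of animal example sketched in C with function pointers instead of the book's C++ virtual functions (my own rough translation, not the book's code). The point is late binding: the call site doesn't know which speak() will run until run time.

    #include <stdio.h>

    typedef struct {
        const char *name;
        void (*speak)(void);  /* late binding via function pointer */
    } Animal;

    void meow(void) { printf("Meow!\n"); }
    void bark(void) { printf("Woof!\n"); }

    int main(void) {
        Animal animals[] = { { "cat", meow }, { "dog", bark } };
        for (int i = 0; i < 2; i++)
            animals[i].speak();  /* dispatched at run time */
        return 0;
    }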
Wow, this brings back memories. This is the book I first learned C with. Bought it at a video game store, and it came with an install disk for the Borland C compiler taped to the back page.
This was one of the first books I read on C and programming in general! I remember I wasn't happy with it at all, so I gave up a bit past the middle. It must be pretty old.
When I was a kid you had maybe 2-3 books on C at the local bookstore, and they were all terrible. But sometimes they came with a (crippled or obsolete) compiler, so that was a huge plus.
The book I learned from was C in 12 lessons by Greg Perry, which I found is still on the shelves to this day, 25 years later. Terrible book.
The bad part is you don't learn how to do anything other than a little bit of simple input and output and loops, plus you get the worst explanation of pointers you can think of.
You can’t do anything in the base language, but that’s just the way C works, you need a platform specific book to learn to do anything worthwhile. This is why the 80’s assembler books were so useful and inspiring.
For what you end up learning in the book, you could have actually spent the 24 hours (or less) using something like Python, where you'd be way more productive and a hundred times less confused. Actually doing work instead of messing with pointers and fscanf().
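(To be concrete about the confusion: fscanf() wants the address of the variable, not the variable itself - a distinction Python never makes you face. A minimal example:)

    #include <stdio.h>

    int main(void) {
        int age;
        /* fscanf needs a pointer so it can write into our variable;
           forgetting the & is a classic beginner bug that may only
           produce a warning at compile time, then crash at run time. */
        if (fscanf(stdin, "%d", &age) == 1)
            printf("You are %d\n", age);
        return 0;
    }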
Thank you for your response. At this point, I think that I must say that your brain and my brain must operate very differently. To me, messing with pointers is the beauty of all that - and it so easily translates to the assembler-lang-brain.
EDIT: Did you not find the K&R explanation of pointers sufficient?
Back in the 80s, "Program in BASIC" didn't exist (well, maybe). You had "Program BASIC for the Atari 400/800" or "Learn BASIC for the Commodore-64". I even remember books to learn C for specific platforms like the PC or Amiga.
Today, a generic book on <programming language> will have to spend a significant portion of its time explaining platform specifics. Granted, this really means only three platforms these days (Windows, Mac, and Linux), but they are different enough to be a pain to explain in a general-purpose book.
I really want to see a K&R Third Edition, written as perfectly as the original but updated to C11 - just to have the textbook align with the modern ISO specification and its latest additions.
The languages, idioms, and ecosystems are different enough that I don't see that as a problem. They should be treated like two different languages. You wouldn't read a C book and expect to understand the conventions of C#. Neither should you for C++.
The problem is, C-only codebases are pretty rare; even established ones adopt C++ (a notable exception is Darktable, which has a rather advanced C-only object system for UI and plugins). Especially in 'hot' topics like AI there may be a place for pure C, but in practice you'd more often use some low-level subset, like C with templates (very useful when writing image-processing libraries). And no, I don't think it's reasonable to learn modern C++ straight away, skipping the C part: its abstractions are very leaky, and you need to understand the underlying resource management and type system.
If you don't plan to learn C++ later, okay, do use a modern version. But if C++ is a possibility, better to start either with a completely different language with clearer semantics (Pascal was great; maybe Go or Rust are good now), or with the pure C subset, which wouldn't add to later confusion but would still allow more advanced abstractions as needed.
From my experience teaching software engineering to high school students who had already passed their algorithms & programming courses, small differences in practices and/or semantics (svn commit vs. git commit) are what kill learning progress.
Flexible array members, designated initializers, compound literals, anonymous structs and unions, and many of the library features are pretty great. They more than pull their weight.
I am glad the C11 committee put more weight on implementability for C++ compilers than C99 did. Variable-length arrays in particular feel like the C99 committee decided to poke the C++ committee in the eye with a stick. (Given how the C++ world treated C at the time, though, I understand the temptation.) That said, if being compilable as C++ were a hard requirement for C features, the world would be a poorer place.
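A quick sketch of a few of those features together (my own example, not from the standards documents; needs a C11 compiler):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Flexible array member (C99): data[] is sized at allocation time. */
    struct buffer {
        size_t len;
        char data[];
    };

    struct point { int x, y; };

    struct value {
        int is_float;
        union { int i; double f; };  /* anonymous union (C11) */
    };

    int main(void) {
        /* Designated initializers (C99): name the fields you set. */
        struct point p = { .y = 2, .x = 1 };

        /* Compound literal (C99): an unnamed struct value, inline. */
        struct point q = (struct point){ .x = 3, .y = 4 };

        struct value v = { 0 };
        v.i = 42;  /* anonymous union members are accessed directly */

        struct buffer *b = malloc(sizeof *b + 6);
        b->len = 6;
        memcpy(b->data, "hello", 6);

        printf("%d %d %d %d %d %s\n", p.x, p.y, q.x, q.y, v.i, b->data);
        free(b);
        return 0;
    }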
I read K&R when I was 15. It was definitely good enough.
Platform specific books also helped a ton. I was into the Amiga at the time - '90 or '91 or so - and you really needed extra info to do anything interesting.
Same here. I was lucky enough to have a friend visiting the US just when I realized I needed the RKM books, and got three of them (there was no Amazon, eBay, etc.). They were absolutely fantastic!
The reason for me was that I didn't know of its existence. I just thought, I want to learn C, so I'll google "learn c free online book". With no mentors or peers, it's more difficult to find the way.
Of course, after some time I thought of googling "best book on c lang", and then I discovered K&R.
I've been coding JavaScript and PHP for 6 years. I know the advanced topics of both languages. I'm in the office on a Saturday right now asking the question I'm always asking, "Where does this go?" I wish I knew the answer so I can go home -- the really important part of coding can't be learned in a day, a month, 12 weeks at a bootcamp, or after 6 years.
Many here will rush to trot out the usual "but, we should all be using { fashionable language of the day | Rust } instead of C". Maybe. Maybe not. But there is something of a sweet spot with C, and after all these years it persists because of that sweet spot, despite some of the potential downsides.
Anyone capable of thinking that thought (about any programming language) is so divorced from the vast range of pragmatic contexts within which programming happens, they might as well be shipped off to a monastery to live out their days.
I only opened the first chapter, but I certainly won't recommend a book where things such as "There are two types of programming languages: compiled language and interpreted language." are written.
My point wasn't to get pedantic nor to criticize this particular sentence alone. I'm essentially saying that beginners' material could often be as easily comprehensible without the sort of "simplification" that actually says something wrong.
If you would like to be really pedantic, then the statement is true. There are many types of programming languages, and a language could be categorized into a "compiled language" category, an "interpreted language" category, or some other category. The statement did not claim that there are only two types of languages.
Even if you only consider compilers and interpreters (in real life there are many in-betweens: bytecode/VMs, JIT, etc.), a language is not inherently either interpreted or compiled. Maybe its reference/canonical implementation is an interpreter or a compiler, but that's all.
What would stop anyone from writing a C interpreter? Or a Python compiler?
Now consider OCaml, whose canonical distribution comes with the `ocaml` interpreter and the `ocamlc` and `ocamlopt` compilers: how do you classify the language?
I don't think this breaks my point. For example, I don't see any reason a language could be compiled but not interpreted. And that alone breaks the idea that a language is necessarily either compiled or interpreted.
Moreover, the distinction Shutt makes in this blog post between compiled and interpreted is not very clear: most of what he says sounds to me more like a distinction between static and dynamic than between compiled and interpreted, though I concede that I read the blog post rather quickly and without taking the time to fully grasp everything.
Interpreters are more flexible; I'd expect that any compiled language could also be interpreted. I'd expect that at least some languages where the canonical implementation is an interpreter could also have an ahead-of-time compiler implementation, but that some of those languages would have at least sections of the program that need to be just-in-time compiled (or even recompiled) at runtime. That would pose extra difficulty in making a compiled implementation of the language.
> This is an introduction, details will be glossed over.
Maybe, but I really don't like this. Sometimes it is necessary to simplify things (at least at first) to teach them, but it is very rarely necessary to say something actually wrong. For example, I think it is totally acceptable to ignore the existence of VMs and JITs in this chapter, because explaining what they are requires knowledge that the targeted readers don't yet have.
But I don't see what good it does to say that the two types of languages are compiled languages and interpreted languages. It may even be counterproductive, because the same paragraph that explains what a compiled language is, as opposed to an interpreted one, ends with this sentence: "You can think of the C language as a compiled language because most C language vendors make only C compilers to support programs written in C." That sentence is now confusing for the inexperienced reader, while it is the only acceptable one in the paragraph.
It would not have been more complicated to explain that languages can be implemented with either a compiler or an interpreter, then explain the difference between the two (this is already done), and then finish with that same sentence :).
I think it's a useful simplification to say something like "C can be interpreted, but it's almost always compiled in practice, so we usually say that it's a compiled language". Even with that, you've got to have an explanation of "compilers" vs "interpreters", with the attendant decisions involving the balance of simplification vs pedantry that it's so easy to get mired in (this thread being an example).
A document claiming "Learn [topic] in [short amount of time]" seems likely to err on the side of simplification (the alternative being perceived by the author as over-explanation). In that position, my solution might've been to avoid the compilation vs interpretation discussion, describe a compiler as a program for converting source code into a computer-friendly format, add a footnote to something in the appendix with a deeper explanation of other methods of preparing source code to run as a program, and move on.
I learned C from the Borland book that came with Turbo C. A masterpiece of clarity and conciseness. I found K&R too low-level; I don't really want to know how they implemented strcpy().
I would prefer that someone learn it in a few weeks using K&R, followed by a study of the comp.lang.c FAQ for the more useful, and some of the more obscure, parts of the language.
References:
[1] http://nand2tetris.org/
[2] https://handmadehero.org/
[3] http://natureofcode.com/
[4] https://cs50.harvard.edu/
[5] http://www.htdp.org/