Beej's Guide to C Programming (2007) (beej.us)
427 points by da02 on Sept 8, 2017 | 79 comments



Also good as a supplement:

* A Tutorial on Pointers and Arrays in C: http://home.netcom.com/%7Etjensen/ptr/pointers.htm

* Right-left Rule: http://ieng9.ucsd.edu/~cs30x/rt_lt.rule.html (via HN)

* More free C stuff: https://github.com/EbookFoundation/free-programming-books/bl...

(I had to go search through lots of stuff to finally find resources that explain things in a way that I can understand. Once it is explained to me in a certain way, C code suddenly feels a lot less cryptic and I feel a lot less hopeless/useless.)


Very good.

A pet peeve is that it consistently says "sizeof()", like this:

You can use the sizeof() operator to determine how many bytes of memory a certain type uses. (I know, sizeof() looks more like a function than an operator, but there we are.)

BUT: sizeof does not in fact always need the parentheses!

Syntactically, the parentheses are part of the operand, and they're only needed when the operand is a type name, i.e. when it looks like a cast to that type.

So:

    int x;
    printf("%zu == %zu\n", sizeof x, sizeof (int));
It's very annoying to see the guide kind of glance off the way things really are, into confusion.

Very glad to see it doesn't cast the return value of malloc(), too. :)


Simply referring to it as the 'sizeof operator' would be preferable. Otherwise, ignoring the ability to drop the parentheses seems fine.

That way the author presents a simple, useful conceptual model and hints that it's actually a little more complicated than they described. Seems like a good trade-off, as there's lots of stuff to learn, and sizeof's hidden weirdness is unlikely to bite any students. When writing code, there are basically no drawbacks to just following the simpler rule of always using parentheses, and it makes expressions a little easier to follow [1].

[1]: https://lkml.org/lkml/2012/7/11/103


About casting the result of malloc(): http://c-faq.com/malloc/mallocnocast.html
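
For reference, the no-cast idiom that FAQ entry argues for looks something like this (a minimal sketch):

    #include <stdlib.h>

    int main(void)
    {
        /* The usual idiom: no cast, and sizeof applied to the object rather
         * than the type. In C89, a cast could hide a missing <stdlib.h>,
         * where malloc would be implicitly declared as returning int. */
        int *p = malloc(10 * sizeof *p);

        free(p);
        return 0;
    }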


Yeah, that's a good resource. I also recommend https://stackoverflow.com/questions/605845/do-i-cast-the-res....


I like the style. It's somewhat reminiscent of _why's Poignant Guide to Ruby. (I wouldn't be surprised if the inspiration were actually in the reverse direction.)


Beej has been around for longer than _why.


It would have been nice if they could have collaborated. "Freelance professors" that could write code and illustrate it w/ words and graphics.


As someone who currently works with C# - is there much of a point to learning C? I've always found it interesting, but never had a reason to jump into it.


While C remains the single most important programming language of all time, its share of popularity is shrinking, giving way to C++, C#, Java, and Swift (from Objective-C).

Nevertheless, I don't think anyone should call themselves a (true) programmer unless they know C.

C is easy to learn, because it's a small language. If you are not new to programming, pretty much all there is to know is in the second edition of the K&R book.


> C is easy to learn, because it's a small language.

I'm no great C or C++ programmer, but I really like Bjarne Stroustrup's "Learning Standard C++ as a New Language" (C/C++ Users Journal, pp. 43-54, May 1999):

http://www.stroustrup.com/new_learning.pdf

Even if it is somewhat dated now, I think it makes a great case for C++ (or D, maybe Rust) over C - and it doesn't even touch on Unicode, just memory management, error checking, and buffered I/O.

I agree everyone should know some C - and some assembler (if for nothing other than looking at compiler output) - but I'm not sure I agree that C is "simple". And learning from K&R might not be the best approach today - rather, read the sources of something like the OpenBSD utilities, redis, spiped...


I kinda disagree. A small language is one thing, but to "learn a language" (i.e. to be productive in it) you need to know the ecosystem, idioms, libraries, etc. A lot of people say design patterns are a language smell--deficiencies in the language. For example, the Gang of Four's book was for C++ at the time it was written. True or not, my point is that all of this is part of being productive in that language and part of "learning" it. K&R is nice and short, but I've heard it argued that it doesn't teach the best (modern) C practices.

C was probably the first language I learned, but I wouldn't say I "know" it. I still have no idea which calls are safe or preferred or how to structure a larger program.

It depends what you want to get out of it. If you want to learn for its historical value, to learn more about the language that a majority of other languages adopted ideas from, K&R is probably good enough. If you want to grok a bit of random C code, that might be good enough, too, but you'll probably need a bit more. If you want to contribute to the Linux Kernel or similar project, or even put C on your resume, that's definitely not enough.


> "Nevertheless, I don't think anyone should call themselves a (true) programmer unless they know C."

You could make an even stronger claim about knowing an assembly language. The point being that understanding how a computing device works is helpful in getting the best out of it. If that wasn't your point, then C is no more special than any other popular programming language.

Also, whilst I only know the basics of C, it's pretty clear things have moved on considerably since the days of K&R. It doesn't cover any of the enhancements included in C99 and C11, nor does it cover the tooling and coding conventions (the ones that help avoid problems) that have become commonplace since 1989.

For a more up-to-date reference, I've heard good things about 21st Century C:

http://shop.oreilly.com/product/0636920033677.do


> If that wasn't your point

It wasn't; it is perfectly possible and normal to use C as one would any other programming language, i.e. without knowing much about the computer architecture. Even pointers, often seen as something low-level, aren't really: the rules of their behavior and use are very simple, and they are easy to describe at a high level.
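
For what it's worth, a minimal sketch of those rules (variable names are just for illustration):

    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        int *p = &x;            /* a pointer holds the address of an object */

        printf("%d\n", *p);     /* dereferencing reads the pointed-to object */
        *p = 7;                 /* ...or writes it; x is now 7 */

        int a[3] = {1, 2, 3};
        int *q = a;             /* an array name decays to a pointer to a[0] */
        printf("%d\n", *(q + 2));  /* pointer arithmetic moves in units of
                                      the pointed-to type: this is a[2] */
        return 0;
    }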

C is special in that, as I said, it has been, and remains, a very important programming language in its ubiquity and sheer power.


> "C is special in that, as I said, it has been, and remains, a very important programming language in its ubiquity and sheer power."

In that case, you could say the same about Java, Python, C#, etc... If being a "true" programmer is just about getting the job done, any of those languages will suffice. C may be a useful tool for certain performance critical code, but that's a very small fraction of what's required in most day-to-day programming activities. I would say the following list describes the key qualities of good code (in the order described, from most to least important, for most cases):

Correctness, clarity, conciseness, performance.

As for pointers, you can describe them at a high-level but to understand why they're a low-cost abstraction it's helpful to understand how computers work. That's the type of knowledge that C can help expose you to.


Any modern introduction to C should make heavy use of Valgrind at the very least. In addition to catching the nastiest class of bugs C programs are heir to, it can be used to teach how memory works at a low level.
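
As a sketch of what that looks like in practice (the file name and flags here are only illustrative), a classic heap overrun plus leak that memcheck will typically flag:

    /* oops.c -- compile with: gcc -g -O0 oops.c -o oops
     * then run under Valgrind: valgrind --leak-check=full ./oops
     * memcheck should report an invalid write just past the block and a
     * "definitely lost" allocation. */
    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(10 * sizeof *a);
        a[10] = 1;      /* heap overrun: one element past the allocation */
        return 0;       /* 'a' is never freed: memory leak */
    }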


can you recommend any books that introduce the concept well?


I've been using C# on a daily basis for ~2 years at this point. I've had that book on my wish list for awhile, but wasn't sure if I would make use of it - I'll likely order it soon.


K&R is a great book and should be read by everyone both for culture and as a model of technical writing.

After working through the problems, two books that would be great followups are Hanson's C Interfaces and Implementations and Bryant and O'Hallaron's Computer Systems: A Programmer's Perspective.


First, what is programming? Programming is basically writing out instructions with names, verbs, if/elses, and loops. The latter three are not much different across languages, but names! The programs we write -- their style, details, and effectiveness -- depend on our understanding of the entities behind those names. Take the name Michael: if we assume he is intelligent and equipped with common sense, we may choose to instruct him in a high-level or declarative way. But if we assume he is dumb, we may have to instruct him in a more detailed, imperative way -- and often that is the easier and more effective way.

The state of programming: all entities are in fact very dumb, but programming languages often disguise this and make you think they are smarter than they really are. If you don't care about efficiency, or even about how things actually get done -- as many framework and language designers wished -- then the more deceptively smart the entities appear, the better: the programmer can program with a good conscience without worrying about how it is actually being done. But as soon as programmers start to care how it actually gets done, or desire some control -- out of the inflated self-respect that they are smarter than the language and framework -- they need a detailed understanding of their entities.

With most higher-level languages (with C#: the language, the libraries, and the framework), the objects are very complex and practically impossible to fully understand (we are only human), so programmers often work with an assumed understanding (in reality, a religion). It is still programming, but programming on trust and faith. With C (the language, not some library or framework), the entities are simple -- a 1-byte or 8-byte integer, a 4-byte or 8-byte float, and even the names representing functions are simple things. It is possible to fully understand your entities when programming in C. They are dumb as minions, but you are in 100% control. Only the experience of programming with entities you truly understand and fully control can reward you with a certain zen of programming -- the demigod-like joy of producing and enjoying fruits whose creation you controlled nearly 100% of.

So, it depends on your objectives in programming. You don't need C or even C# if all you care about is a job or an end objective (in fact, you would very much like to skip the programming part if possible). But if you care about programming as an activity of love, then the experience with C -- especially with little dependence on third-party libraries -- is uniquely rewarding.


I'm assuming you wrote this comment as a C programmer (or have used C in a professional setting?)

If so - how did you get into that? I'm really looking to _build_ something rather than just going through a book or tutorial (although they have a place and time).

I suppose my objectives are learning as much as I can, and (ideally) having a day job working with that technology. All of this is more or less driven by interest and enjoyment of programming.


It will be a frustrating experience to learn C with a career mindset. The career mindset is about results, and C is often not a shortcut to superficial (visible) results. (It is a shortcut to real programming skill, but real skill without a paper career record doesn't mean much for your career.)

But if you can afford a hobby and have some time at your disposal, then I would suggest approaching C as if you were picking up a leisure book. You learn more and enjoy it more that way. You only need a day or two with a book or tutorial; then simply start solving simple problems. If your math is OK, I would suggest Project Euler. Programming is mostly common sense, and you learn through practice. Without the career mindset, you are very likely to enjoy that practice.

Now assume you acquired your career some other way (Java, for example) and have gained a certain freedom in your job; with the unique experience from C, you can try to make decisions based on your own understanding and common sense. I choose C in my work only because I have that freedom (but that freedom was not gained via C).


Thanks for your interesting perspective :)

I always think of people who use C professionally as people who have been doing it for a _very_ long time and know a lot about it. In that situation it would be exceptionally hard to find any employment working with C.

I've always been hesitant about many Project Euler problems since some of the math is off-putting to me. That's why I'm considering building something (like a side project or something of use).

Using C as a learning tool to better understand programming would be a great idea and I'll likely venture into that- hopefully something comes of it though! :)


I work on a team of 25-35 year old devs doing firmware/performance work for a BigCo - there are definitely plenty of people hiring in the space, but in general you only use C if you have custom hardware, and you only have custom hardware if you have a huge amount of capital, so it's tougher to find that work outside of established players.


That's awesome - if you don't mind me asking, how did you get into that area of work/expertise?


A large number of low-level embedded programmers are hardware designers who learned C because they needed it, and they often use weird programming styles reminiscent of HDLs and PLC programming languages.

Other embedded programmers are just that: programmers who found a programming job that happens to be in this sector and learned the peculiarities on the job.

Also bear in mind that there are different levels of "embedded", ranging from writing code for some obscure MCU directly in a hex editor (because it's so obscure that nobody bothered to write an assembler for it) to writing applications in Java for something that is essentially an Android tablet bolted onto some larger system (be it a car, an airplane, or some industrial machinery).

Edit: also, having custom hardware is not that capital-intensive unless you plan to mass-manufacture it. And almost any piece of custom hardware has some MCU programmed in C and/or a device driver, which is also written in C.

Over the last 10 years I've participated in the development of about 5 different pieces of custom hardware for various niche applications. This includes industrial sensors, IoT-ish stuff, and a somewhat peculiar reliability- and security-enhanced PC platform.


>> I've always been hesitant about many Project Euler problems since some of the math is off-putting to me. That's why I'm considering building something (like a side project or something of use).

Math has the virtue of simplicity -- well defined, with little complication -- but it is abstract (or useless).

If you like building things, I would suggest a text editor or a GUI layout engine. For the latter, think about an intuitive declarative way of describing layout (like HTML/CSS or TeX or Tk) then implement it on native win32 API or GTK. Focus on the simplicity and in-control aspect; do not get distracted by feature completeness.
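
To make that a bit more concrete, here is a purely illustrative sketch of the kind of data structure such a layout engine might start from, long before any Win32/GTK drawing enters the picture (all names are made up):

    #include <stdio.h>

    /* A tiny declarative layout tree: a node is either a leaf with a fixed
     * width, or a horizontal box that lays its children out side by side. */
    enum kind { LEAF, HBOX };

    struct node {
        enum kind kind;
        int width;              /* fixed for LEAF, computed for HBOX */
        struct node *child[4];  /* up to 4 children, NULL-terminated */
    };

    static int layout(struct node *n)
    {
        if (n->kind == LEAF)
            return n->width;

        int total = 0;
        for (int i = 0; i < 4 && n->child[i]; i++)
            total += layout(n->child[i]);
        n->width = total;
        return total;
    }

    int main(void)
    {
        struct node a   = { LEAF, 100, {0} };
        struct node b   = { LEAF, 50,  {0} };
        struct node box = { HBOX, 0, { &a, &b, NULL } };

        printf("box width: %d\n", layout(&box));   /* 150 */
        return 0;
    }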


> If you like building things, I would suggest a text editor or a GUI layout engine. For the latter, think about an intuitive declarative way of describing layout (like HTML/CSS or TeX or Tk) then implement it on native win32 API or GTK. Focus on the simplicity and in-control aspect; do not get distracted by feature completeness.

Both of these sound like great learning projects - but are they too far in the "deep end" for a beginner?

Just curious :)


I wish there was a more modern version of Programming Windows by Charles Petzold that I could recommend for its great, down to Earth introduction to Win32, but unfortunately the fifth edition from 1998 was the last of its kind.

If you're OK with the dated context it's still applicable to modern Win32. You can find it used.


> I'm really looking to _build_ something rather than just going through a book or tutorial (although they have a place and time).

It might be worthwhile to start 'small' by taking one of your C# projects and converting some small section into a native/unmanaged C or C++ DLL that gets used by the C# side. There's a lot of boilerplate involved setting things up on both sides of the divide, but you can stay in a mostly familiar environment yet start learning about some of the 'magic' C# handles for you that is exposed by the unmanaged code, and extend your experimental forays into the C side as you get more comfortable.
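
As a sketch of what the native side of that setup might look like (the file and function names here are hypothetical, and the C# declaration in the comment is only indicative):

    /* mathlib.c -- a hypothetical native helper to call from C#.
     * Build with MinGW:  gcc -shared -o mathlib.dll mathlib.c
     * On the C# side this would be declared roughly as:
     *   [DllImport("mathlib.dll", CallingConvention = CallingConvention.Cdecl)]
     *   static extern int add_ints(int a, int b);
     */
    #ifdef _WIN32
    #define EXPORT __declspec(dllexport)
    #else
    #define EXPORT
    #endif

    EXPORT int add_ints(int a, int b)
    {
        return a + b;
    }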


Keep in mind that C and C#, in practice, aren't especially related. C is a low level, close to the machine, minimal systems language, while the latter is a modern, advanced, managed language.


What kind of apps have you used since learning C#? Was it difficult finding work with C#?


Do you mean what sort of things have I built thus far using C#?

Most have been console based applications with a GUI front end to manage settings. I've worked a little with WCF and a little more with WebAPI/REST and also Windows Services.

C# more often than not makes things fairly simple and fun, but repetitive at times.


Thanks.


C is still pretty special in that it's generally used to write the compilers for all the other languages. Even today, if you were to create a brand new language, you'd likely write the compiler in C (or maybe C++).

Also it still remains the most practical language for fundamental system components like an OS kernel, drivers, etc.

Most people don't work on compilers or kernels, so I'd say most people don't actually need to learn C. But, unless C is ever unseated from its position, we'll always need some people out there that know it. :)

To unseat C, we'd need a language with similar low-level expressive power, minimal overhead, and a compiler written in that new language that can compile itself.


> Even today, if you were to create a brand new language, you'd likely write the compiler in C (or maybe C++).

You might write the bootstrapping compiler in C for portability reasons, but you'd probably write the real compiler in the new language itself. Not to mention that if you don't mind the extra dependency on POSIX platforms, you don't even need to write the bootstrapping one in C. For example, the Rust compiler frontend was originally written in OCaml, and is now written in Rust with a fairly complicated bootstrapping process. The backend is still C++ because it uses LLVM, but it hardly has to be.


If I remember correctly Go's first few compiler implementations were written in C, but later were written in Go (bootstrapped by C).


They actually wrote a tool that turned C code into (unidiomatic) Go code, and then later gradually improved the codebase they got from that.


This is wonderful. I see that this guide is sort of "old news" to many of the experienced programmers hanging out here. But as someone who isn't that experienced and about to begin a course on Data Structures + Algorithms using C, this is amazing. Thank you.


These two sites were invaluable for me when I was in your position: http://iso-9899.info/wiki/Main_Page and http://c-faq.com/


> I see that this guide is sort of "old news" to many of the experienced programmers hanging out here.

https://xkcd.com/1053/

Carpe diem.


Well that's cool - digging around a bit, I found out that he runs a hacker group right here in Bend: http://bend.hackersguild.us/#page-main


BeeJ's guides don't get as much visibility as they deserve.


Are there any good tutorials for cross-platform development with C++ on Windows?

I just want to avoid MSVC compilers and use clang or gcc.

Also, C++ module/package/library management is very very confusing. You have to configure multiple files in multiple folders and multiple configuration entries.

I tried it almost a dozen times and just left it out of frustration.

Compared to development with python, it's just mind numbing.


> Are there any good tutorials for cross-platform development with C++ on Windows?

Install QtCreator with MinGW. Here you can just compile your code and have it work.

> You have to configure multiple files in multiple folders and multiple configuration entries.

not on sane systems where you just do `apt-get install libsdl1.2-dev` or `pacman -S gtk` or `brew install qt boost`


I hear QtCreator mentioned on Hacker News quite frequently. A couple of questions, if you don't mind.

1) How well does it work if you are trying to stick to C and not go the C++ route?

2) They are GPL licensed, so not very permissive, or quite expensive for a commercial license. Is there a significant difference in feature set between the open source & commercial versions for someone that just wants to take it for a spin?


> How well does it work if you are trying to stick to C and not go the C++ route?

The IDE works fine, but you're missing out on a lot :p. You can use raw makefiles or, better, CMake with it; the integration with CMake's server mode in the latest version is top notch.

> They are GPL licensed, so not very permissive, or quite expensive for a commercial license. Is there a significant difference in feature set between the open source & commercial versions for someone that just wants to take it for a spin?

First, there are two things: Qt and QtCreator. Qt is a C++ library (and also has its own DSL, QML which is a godsend to make modern UIs) and QtCreator is an IDE. While it has a lot of Qt-specific features (eg UI designer, some Qt-specific autocompletions, Qt examples on the front page), it can just be used as a general-purpose IDE for C, C++, Python, and Nim. As such, its license does not matter for you as a developer, unless you want to ship a product with the IDE (for instance Sailfish OS and Ubuntu used a customized QtCreator as the IDE for their respective SDK).

Then, Qt is, as you saw, under multiple licenses: commercial, GPL, and LGPL.

* all the libraries are under GPL.

* most modules are additionally under LGPL.

* two tools are only under the commercial license: the QML compiler (but it has been superseded by another approach that was open-sourced) and pre-made images for some embedded boards (Boot2Qt, basically useful for embedded software that runs fullscreen just on top of the Linux kernel with no GUI environment, generally on the raw framebuffers or drivers provided by the board vendor).

The things that are under GPL but not LGPL are here: http://doc.qt.io/qt-5/qtmodules.html#gpl-licensed-addons (three modules: virtual keyboard, charts and data visualization). Everything else on the page (i.e. more than enough for "taking it for a spin") can be used from proprietary apps (you have to add a readme at some point that says that you use Qt, like any other open source libs). The simplest is to just use the Qt shared libs, but you can also link them statically:

https://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynami...


>QML which is a godsend to make modern UIs

What advantages does QML provide over using "just the Qt library from C++"? Faster development? I've used an earlier version of Qt (3 or 4) a bit, but never tried QML.


If you are familiar with modern web development, think of Vue.js or React's reactive bindings but as part of the language syntax. See https://qmlbook.github.io/en/ch04/index.html#qml-syntax

e.g. when in "C++ Qt" you'd do:

    connect(m_button, SIGNAL(clicked()), m_object, SLOT(on_clicked()));
or more recently

    connect(m_button, &QPushButton::clicked, m_object, &MyObj::on_clicked);
or even

    connect(m_button, &QPushButton::clicked, this, [=] { /* do stuff */ });
QML would be:

    Button {
      onClicked: /* do stuff */
    }
Even better is stuff like

    connect(m_slider, &QSlider::valueChanged,
            m_object, [=] (int val) { m_item->setWidth(3 * val); });
    m_item->setWidth(3 * m_slider->value());
which becomes

    Slider {
      id: slider
    }

    Item {
      width: 3 * slider.value
    }

It has a slightly stronger type system than JS; not as strong as TypeScript though. The idea is that when you really need strong typing you do it in C++ and expose your objects to QML (which is a one-liner: every object that is part of the Qt meta-type system can be used from the QML side and have its signals and slots called there).

Of course, the big drawback is the performance loss that you get from going from optimizable C++ to Qt's modified V8 engine. However, if you had a graphics-heavy C++ Qt app that relied on QGraphicsScene / QGraphicsView it could be faster in QML since there it's all OpenGL / D3D12 (and hopefully Vulkan) + a modern scene graph, instead of CPU rendering. So like always, benchmark benchmark benchmark :)


Yes, the QML code does seem a lot simpler/shorter. Thanks.


About your second question: as far as I know, you can use the LGPL Qt in commercial projects, freely, as long as you don't statically link the libs.

https://stackoverflow.com/questions/11994053/can-i-use-qt-lg...


You can install MSYS2 http://www.msys2.org/ and use GCC 7. Your compiled code will be a 100% native Windows executable and will run on other Windows machines.


You can now use Clang as an alternative compiler in Visual Studio. Another option is to install Bash on Windows, install gcc/clang there, and use that.


The first listed program, which adds up all the integers from 1 to the scanned number, wouldn't compile under GCC for me; the error was "'i' undeclared (first use in this function)". I had to declare "i" as an "int" to make it work. GCC accepted the declaration either at the top alongside num and result, or within the for loop.
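
For reference, roughly what that program boils down to with i declared; putting the declaration at the top works even in C89, while a declaration inside the for loop needs -std=c99 or later, which older GCC releases didn't enable by default:

    #include <stdio.h>

    int main(void)
    {
        int num, result = 0;
        int i;                      /* declared up top: fine even in C89 */

        scanf("%d", &num);

        for (i = 1; i <= num; i++)  /* "for (int i = 1; ...)" would need
                                       -std=c99 or later */
            result += i;

        printf("%d\n", result);
        return 0;
    }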


I remember using Beej's guide to learn how to do TCP/IP stuff in C around ten years ago. Very well written and easy to follow.

http://beej.us/guide/bgnet/


Beej was a couple years ahead of me at Chico - he was an awesome guy then and it looks like he's been keeping at it ever since.


Beej wrote some of the best programming content online, and I hope he can update some of it (C11, POSIX IPC, epoll/poll, etc.).


Yup. This guy is great. Downloaded this a while ago for offline reading while I'm on the go.


Beej's Guide to Network programming (http://beej.us/guide/bgnet/) is also pretty good. It was effectively used as the textbook in my networking class. (There was also an actual textbook, but almost nobody bothered to read it.)


It's definitely worth reading, but be aware that it's quite old-fashioned in that it focuses entirely on blocking I/O and select(). Also this page fails to explain why you would call shutdown(): http://beej.us/guide/bgnet/output/html/multipage/shutdownman...
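
For what it's worth, the classic reason is a half-close: tell the peer you're done sending, so it sees EOF, while you keep reading its response. A minimal sketch, assuming sock is an already-connected TCP socket:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string.h>

    /* Send a request, half-close the write side, then drain the reply. */
    static void request_and_drain(int sock)
    {
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        char buf[4096];
        ssize_t n;

        send(sock, req, strlen(req), 0);

        /* Half-close: the peer's recv() now returns 0 (EOF), but our side
         * of the connection can still receive data. */
        shutdown(sock, SHUT_WR);

        while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
            ;   /* consume the reply */

        close(sock);
    }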


Do you perhaps have some more modern resources or examples we can consult? I would appreciate it.


Thanks for pointing this out.


What has replaced select()?


Although there is still a case for using select(): if you're trying to avoid porting headaches and aren't dealing with a large number of descriptors, select() still works fine.

The alternatives are mostly necessary when you're dealing with large numbers of connections (thousands), which most people aren't.

Of course, there's also the slight problem of select()'s interface, with its file descriptor packing macros and the unusual requirement to figure out which file descriptor is numerically the largest, which might push people to the alternatives. That is also a valid reason IMHO.
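
For anyone who hasn't run into them, the quirks in question look roughly like this (a sketch, assuming sock1 and sock2 are already-open descriptors):

    #include <sys/select.h>

    /* Block until one of two sockets has data to read. */
    static int wait_for_either(int sock1, int sock2)
    {
        fd_set readfds;
        int maxfd = (sock1 > sock2) ? sock1 : sock2;

        FD_ZERO(&readfds);          /* the "packing macros": build a bitset */
        FD_SET(sock1, &readfds);
        FD_SET(sock2, &readfds);

        /* The first argument must be the highest fd number plus one. */
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
            return -1;

        return FD_ISSET(sock1, &readfds) ? sock1 : sock2;
    }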


epoll on Linux. Other APIs on other OSes.

More reading: http://www.kegel.com/c10k.html
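
For comparison, a minimal epoll sketch (Linux-only; error handling mostly omitted):

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Block until 'sock' is readable, using epoll. */
    static int wait_readable_epoll(int sock)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
        struct epoll_event out;
        int epfd = epoll_create1(0);
        int n;

        if (epfd < 0)
            return -1;

        epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

        /* Unlike select(), there's no fd_set rebuilding and no maxfd+1
         * bookkeeping; a timeout of -1 means block until something is ready. */
        n = epoll_wait(epfd, &out, 1, -1);

        close(epfd);
        return n;       /* 1 if sock is readable, -1 on error */
    }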


kqueue on the BSDs and Darwin.


Beej's guides are the best way to get started quickly (i.e. in the time frame of a university course). For a more thorough introduction, one should probably read "Advanced Programming in the UNIX Environment" and "UNIX Network Programming", both by W. Richard Stevens.

Jens Gustedt has also written a rather nice ebook on how to program in modern C: https://gustedt.wordpress.com/2016/11/25/modern-c-is-now-fea...


This has been around for 15+ years. I wouldn't be surprised if it is recommended to every newcomer on forums and chat channels.


I will also share praise for Beej. My network programming course was in C, and most of my programming experience had been in Java, Python, and JavaScript. This guide was probably the reason I passed.


I've seen this suggested in many Reddit and other website threads, and in college classes too.

Used it to implement ICMP echo heh.


Second this guide. It's pretty short, but it seems to have everything I've needed, at least.


Was there anything in the textbook that was explained better than in the Beej Guide?


I would wager the textbook probably covered history & development, various topologies, congestion control algorithms, queueing theory, etc. There is more to networking than programming sockets.

But for a guide on programming sockets, which is all it sought to do, yeah it's excellent.


BeeJ's guides to other things: http://beej.us/guide/


He even wrote a League of Legends guide.

Beej's Guide to Lane Control and the Early Game http://forums.na.leagueoflegends.com/board/showthread.php?t=...


This is amazing. It's a shirt and a quick understanding guide.


That's an unusual combo.



That's for network programming.



