"Nevertheless, this is still a few orders of magnitude better than C++ templating."
Uh... no. On a personal level, I have always found that almost any type of preprocessor magic, especially re-#including with a different set of parameters in effect, is a recipe for disaster.
Any use of the pre-processor seems to have a serious impact on the build-system (tracking parallel variants), the IDE (color, intellisense or auto-completion), the debugger, and apparently any tool that operates at the meta-level.
On the other hand, IDEs and debuggers often also have problems with C++ templates as well.
Am I crazy to think that the result has generally been better with templates?
I think we would be better off without any of this. We should:
- discard the preprocessor,
- expand and tighten control over templating, and
- migrate toward more conventional module import methods, such as http://clang.llvm.org/docs/Modules.html (instead of #include, unless you can fix IWYU).
I've never seen preprocessor use cause a problem with build tools. Using the preprocessor for certain types of code generation used to interact poorly with code completion and code browsing, but that's much less of a problem these days. With things like clang, the IDE/editor can just compile the code, and get the browsing/code completion data from the result. Modern versions of Visual Studio do something similar internally with the VC++ compiler.
(This is actually a bigger step forward than it might seem. You can now make good use of a few useful techniques that were previously worth avoiding because they didn't work with code completion or browsing.)
On the other hand, if you're creating a macro that consists of more than one statement, I've always thought you're much better off with templates. Even if the debugging experience isn't as good as it should be (e.g., the debugger doesn't know what template type T actually is, so you can't watch "(T *)p" and so on), you're pretty much guaranteed to be able to at least step through the code and look at the variables.
In extreme cases, I combine templates and macros to achieve the desired conciseness of code. Templates are powerful, but often a lot of verbosity/boilerplate is still needed, which is where macros come in. That is usually in the basic form described in this article (one can do much more advanced stuff with macros, e.g. google X macros).
Before becoming suitably annoyed at your preprocessor magic, which I will examine shortly, I would like to point out one example I had in the back of my mind when I wrote my comment: just google lttng/tracepoint.h. It just seems to me like a terrible hack.
Of course, if you want to criticize templates and meta-programming (as an alternative to pre-processor), there are many jaw-dropping examples in Boost, V8, etc. I can remember several late-night sessions wrangling with (Boost) serialization code, and the cryptic multi-level warnings that ensue. Those were bad times.
I always figured that the answer would be a language/tool simple enough that a serious programmer could, without heroic effort, manage to customize to his needs over a weekend.
Sadly, it appears that the language lawyers have gone insane, and the trend appears to be entirely in the opposite direction (with the recent developments in C++0x/C++11/C++14).
For background: I was a big fan of Visual Studio (when working in games and finance), but my most recent posting precludes developing in Windows and then porting to Linux for deployment. Looks like most people (at this posting) are doing printf-programming, relying on logs to see what happened (in debug builds). It's hard (within our current heterogeneous environment) to set up gdbserver + Eclipse or some other gdb client on a Windows front-end (which we need for administrivia).
The point is I miss being able to just set breakpoints, launch the program and inspect program state on the fly, as a means of learning the behaviour.
I have cared enough about all this, and used to eagerly await the day that MS would open-source the C++ AST/Intellisense engine, opening the door for programmers to perform complex refactorings with relative ease. Maybe I should just buy the Whole Tomato software, or some other tool. These days, I am following Clang developments with interest and hope.
FYI, I think the ultimate annoyance is some Linux fanboy who doesn't "believe" in C++ or IDEs, but who is too young to remember when GUIs were not even an option, that text mode was all you had. I am surprised at the resurgence in TUIs (and the longevity of some CLIs). Sure, C++ has its warts, but RAII is better than gotos (here, I am conflating the kernel developer's aversion to C++ with the likelihood that he/she uses vi, or some other console-based editor).
Sorry for the confused monologue. I'll check out the code later.
Inspecting program behaviour via the debugger has always been an uphill battle in C/C++. Sometimes you're working on something where the bug only surfaces in release builds (because the debug build isn't optimised enough and the program runs too slowly to trigger it). Good luck debugging when half the function arguments read "optimised out" and program execution jumps around randomly because the compiler reordered your code for efficiency.
One thing I've always dreamed of is having the compiler emit source for the assembly it generated. So you can attach the debugger and see the code as it actually is, rather than as it was when you wrote it. I love how in Java my code gets optimised by the JIT, but once I start debugging all the optimisations come off (but only once I need to see the source) and what I see is what is running.
I was simply not aware of it, as I never really looked deeply at the features of anything after C99 (I only vaguely remember the thread features, which look appealing). That's interesting; I might end up mentioning it in the conclusion later.
But they don't give C any power C++ doesn't have. In terms of expressive power, you have things like std::vector, and in terms of performance, everyone except MSVC supports VLAs in C++ anyway, and MSVC doesn't even support them in C.
>How do I play to its ABI compatibility strengths? Because I have to keep wrapping all of it in extern "C" declarations in a large enough system.
That is playing to its ABI compatibility strengths. In how many other languages is binary compatibility so simple?
Regardless, no one said anything about ABI compatibility being one of C++'s strengths. The point was about using C++'s template system for, of all things, exactly what it was designed to do.
> , you have things like std::vector, and in terms of performance, everyone except MSVC supports VLAs in C++ anyway, and MSVC doesn't even support them in C.
They are not the same. VLAs in C are on the stack; vectors are allocated from dynamic memory (the heap).
> Regardless, no one said anything about ABI compatibility being one of C++'s strengths.
Right, but that is why in practice C++ is not a strict superset of C. When having to interface with other parts of the system (loading .so or .dll files) or integrating with scripting languages, the interface still has to be wrapped in a C interface.
I didn't say that they were. I said that they were equally expressive, because the claim was that C++ is more powerful than C.
>VLAs in C are on the stack; vectors are allocated from dynamic memory (heap)
That is not necessarily true. It's an implementation detail, and as far as it's relevant, so is the fact that everyone supports VLAs in C++.
>Right, but that is why in practice C++ is not a strict superset of C
Nobody said that C++ was a strict superset of C. The claim was that C++ is more powerful than C, and the fact that you have to specifically request ABI compatibility is not evidence to the contrary. Rather, the fact that you can do this at all (and that you can't use C++ linkage in C) is evidence in support of it.
And its weakness is that those ways in which you can, you're practically required to use to accomplish anything interesting. You can't even perform formatted output without throwing the entire type system out the window.
You could, in principle, perform formatted output without throwing the entire type system out the window. How? Rather than pass the arguments themselves, pass them wrapped up in structures that indicate their type, e.g., struct type_wrapper {int type; void *ptr; int i; char c;}; Of course, you'd have to write your own custom version of printf, possibly deal with the mess which is va_copy, but it's doable.
You're absolutely right of course; just about any facility's shortcomings can be overcome by redesigning that facility, but that's how we get Greenspun's Tenth Rule. :)
Unfortunately, redesigning for safety/correctness in C almost always means a loss of place-of-use conciseness. For example, you'd have to manually wrap each argument to your hypothetical printf replacement in a "struct type_wrapper." Plus, there's no way to enforce that your format string has the same number of placeholders as you have arguments.
Speaking of which, you can actually do that in C++. It would look something like
#include <cstddef>

template <std::size_t N>
static inline constexpr
std::size_t num_args (const char (&fmt) [N]) {
    // Count '%' placeholders (needs C++14 relaxed constexpr; "%%" escapes
    // would need extra handling)
    std::size_t count = 0;
    for (std::size_t i = 0; i + 1 < N; ++i)
        if (fmt [i] == '%') ++count;
    return count;
}

template <std::size_t N, typename... Args>
void printf2_impl (const char* fmt, Args&&... args) {
    static_assert (N == sizeof... (Args), "Wrong number of arguments to printf2");
    // Interleave args into fmt and print
}

#define printf2(fmt, ...) (printf2_impl <num_args (fmt)> (fmt, __VA_ARGS__))
You could even just typecheck the format string against your argument types and forward to printf. The biggest problem with this is what happens when you give printf2 an empty __VA_ARGS__, but such is life when you're using the C preprocessor.
Another solution, which seems to be missing, is to use function pointers: process_image could receive a pointer to the given function instead of n. My experience with gcc shows that functions passed by pointer are often inlined, which is not really a surprise, since they are constant arguments of the function.
Here is an example [0], although the wrapper functions seem to be redundant in this case.
How is this for "performance portability?" I use this solution in C when the function is very expensive, so a little extra indirection really won't make any difference - but it can potentially improve reusability a lot. Is inlining a call to a function via a pointer a very basic optimization that any self-respecting compiler should be able to do, or is it a very advanced optimization that I can't count on working across platforms? Given the possibility of dynamic libraries I don't see how it could be inlined in all cases, so at least some kind of analysis must be done before trying it.
Of course, my statement is far from being a complete and definitive answer; it's a mere suggestion of another method to add to OP's list.
If the functions are defined in different binaries which are dynamically linked, the chances of inlining approach zero [0], although this method may work across binaries, in contrast to all the techniques from the article. To inline the argument-function, the compiler must have the definitions of both functions. To enable this in a library/library-consumer scenario, the higher-level function [1] can be placed in a header file and marked as static, which guarantees that both are in the same module.
Inlining function pointers doesn't seem to be an advanced optimization technique (it is not that different from constant propagation). GCC 4.5 does this for same-module functions even at -O1 [2].
Personally, I prefer to emphasize modularity in my code and move to other solutions only when something is identified as a bottleneck (which seems to be a nice rule of thumb for all optimizations).
[0] I would love to be proven wrong, by some kind of JIT-ing dynamic linker.
[1] i.e. a function accepting other functions as arguments.
Inlining a call to a function via a pointer, if the function is static and implemented in the same .c file or an included header, is a basic optimization.
I'm less sure about the compiler figuring out that the user wants (in that case) process_image to be inlined into each of the wrapper functions. However, if you don't mind a bit of nonportability, it's easy to force it to do so: mark the function to be inlined __forceinline on MSVC, __attribute__((always_inline)) on GCC/Clang/ICC/etc.
It's mentioned quickly - look for "func_lut" in the post. These functions cannot be inlined if the LUT must be declared dynamically, and it's typically a problem when they have different parameters.
These solutions are basically what is presented at the beginning of the article. They suffer from maintenance and readability issues (most editors will disable syntax highlighting in the macro scope, you have to mess with the '\', ...).
In this particular case it's not exactly a problem, but when you reach a 100+ line macro, it becomes one.
It's a bit like maintaining pixel shaders in strings, it pretty quickly becomes a huge PITA.
Now if you read the post carefully, you will notice that I'm not encouraging the last multiple-files method; it is just presented because it's a common pattern and has its use cases.
I think what is described in "Mixing full functions mechanism and macros" would be more appropriate for the example you picked. Again, you have to assume that the function is going to grow, and it needs to be editable.
Yes, that's what I presented. Sometimes, though, you need a function because you have a function pointer table for each feature (foo, bar, baz); that's why I wrapped them to show the mechanism of inlining.
I prefer the single file solution too. I suppose it comes down to how much having escape markers on every line of the function implementation bothers you, and if your favorite code editor does full syntax highlighting in such a case.
Yes, but that is now a worker function that handles every if(n == x) case explicitly in the body, not something you need the preprocessor for anymore. Nothing against it, but it is a different design.
I see. I was thinking in the context of your second pastebin link and the evil_template.c snippet in the article, where the functions are actually generic (and in which case you can't avoid the function being declared as a macro unless you go the multiple-includes route).
I did something similar in a personal project[0], but to avoid reimplementing things like lists and resizable buffers and such over and over again but maintain a sort of "type safety.*" It's a step down from C++ templates but it works.
It is uglier than C++ templates, no doubt about it[1]. However, it does have two advantages over STL std::unordered_map (not applicable to a custom C++-template implementation):
- It's guaranteed to have low code size overhead for each supported key type, and none for additional specializations with the same type of key but different types of value.
- It is possible to explicitly instantiate said code in one .c file, rather than the standard C++ approach of forcing the compiler to redo code generation for every .cpp file and hoping the linker merges duplicates. This is mainly an advantage for compile time. C++11 has extern template, but you can't use it with the STL.
(It's also easier to do 'raw' things with the table, although this is just an implementation choice with upsides and downsides rather than a flaw in the STL. And independently of the duplicate specialization, C just compiles faster than C++, and I am fanatical about compile time.)
[1] At least, C++ templates implemented sanely. Certain STL implementations (GNU libstdc++) appear to have a goal of being as difficult to read as possible.
I am looking for a book or web page which gives a good treatment of programming patterns in C (perhaps focused on OOP in C). I mostly design for embedded systems.
Any recommendations?
An excellent book, and one which stands right next to K&R in my opinion, as a standard reference book on the subject of the C language. It still has much relevance today - many C projects I've seen being written by the new generation of developers could benefit from having this book become standard reference material everywhere. I can't recommend it enough.
This is an excellent book...but I would not recommend it to beginners. Read "Head First C" first, then maybe this book. Would also recommend "21st Century C."
I've actually used a subset of that book on an embedded system. It ended up being scrapped, partly due to project management issues and partly because we were writing twice as much code for very little benefit, so I can't say I recommend it for embedded projects. The need for malloc was a pretty big negative as well.
That said, I would recommend it as a good learning experience.
Looking the PDF over again you don't need malloc, that was a quirk of our implementation. We had separated public/private structs, meaning private variables were inaccessible by non-class methods (and thus also unable to be instantiated statically by other code modules.)