
> Microkernels don't need memory management.

Of course they do. It takes memory to hold metadata about a process. It takes memory to track the resources of other services. It takes memory to pass data between them.

Just because that memory is reserved at boot doesn't mean it suddenly has no lifecycle of any kind.

> Furthermore, you don't want exceptions in kernel code.

Nobody said anything about C++ throw/catch exceptions.

> Simply put, there is no reason to choose C++ for a microkernel, and many, many reasons not to.

If you want to avoid C++ that's great, but to argue for C over it is insanity rooted in nostalgia.




> Just because that memory is reserved at boot doesn't mean it suddenly has no lifecycle of any kind.

Yes it does. The "lifecycle" is: allocate at boot, machine halts.

All of the memory you describe for other purposes is allocated at user level and booked to processes. This is how you make a kernel immune to DoS.

> Nobody said anything about C++ throw/catch exceptions.

That's the only meaningful difference in error handling between C and C++. Since you mentioned error handling as a reason to choose C++, what else could you possibly mean?

> If you want to avoid C++ that's great, but to argue for C over it is insanity rooted in nostalgia.

Sure, you keep believing that. It's clear you're not familiar with microkernel design. The advantages C++ has for application-level programming are useless in this domain.


> Since you mentioned error handling as a reason to choose C++, what else could you possibly mean?

I believe he means RAII. It makes it almost impossible to forget to release resources or roll back a transaction.
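
For instance, a scope guard whose destructor rolls back unless the transaction was committed. A minimal sketch (the Transaction type here is a hypothetical stand-in, not any particular library's API):

    #include <cstdio>

    struct Transaction {                  // hypothetical transaction handle
        void commit()   { std::puts("commit"); }
        void rollback() { std::puts("rollback"); }
    };

    struct TxGuard {                      // RAII: dtor rolls back unless committed
        Transaction& tx;
        bool committed = false;
        explicit TxGuard(Transaction& t) : tx(t) {}
        ~TxGuard() { if (!committed) tx.rollback(); }
        void commit() { tx.commit(); committed = true; }
    };

    bool work(Transaction& t) {
        TxGuard guard(t);
        if (/* some failure */ false)
            return false;                 // rollback runs automatically here
        guard.commit();
        return true;
    }

Every early return (and any exception) takes the rollback path without the author having to remember it.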


> It makes it almost impossible to forget to release resources or roll back a transaction.

No. For that you need a better type system. Linear types show great promise for this.


Linear types don't "show" promise, they solve the issue, and this has been known since linear logic was popularized by Wadler[1] and Baker[2] in the early 1990s. The problem is that programming with linear logic is very inconvenient for a lot of things, and very inefficient for when you actually want to share data.

[1] http://homepages.inf.ed.ac.uk/wadler/papers/linearuse/linear...

[2] http://home.pipeline.com/~hbaker1/LinearLisp.html


I understand RAII as a resource management solution. What use does RAII have in error handling? It makes things convenient, but it does not make error handling go away.


It's easier to get this right:

    {
      Resource foo(path);
      if … {
        return -ENOMEM;
      }
      return 0;
    }
Than to get this right:

    {
      Resource* foo = acquire(path);
      if … {
        release(foo);
        return -ENOMEM;
      }
      release(foo);
      return 0;
    }
Even if you do the goto-style:

    {
      Resource* foo = acquire(path);
      int rc = 0;
      if … {
        rc = -ENOMEM;
        goto out;
      }
    out:
      release(foo);
      return rc;
    }


No, but exceptions aren't the only way to handle errors in C++.

There are also library types that enforce checking for errors, something currently impossible in C.

Also, thanks to its stronger type system, it is possible to do type-driven programming, preventing many errors from happening at all, which is also not possible in plain C.
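
A minimal sketch of both points (assuming C++17 for [[nodiscard]]; all names made up): the compiler flags any caller that drops the status, and the enum class cannot be silently mixed up with a plain int.

    #include <cstdio>

    enum class [[nodiscard]] Status { ok, no_memory, not_found };

    Status lookup(int key) {
        return key == 42 ? Status::ok : Status::not_found;
    }

    int main() {
        lookup(42);                    // warning: discarded Status
        if (lookup(7) != Status::ok)   // must compare against Status, not 0
            std::puts("lookup failed");
    }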

Finally, everyone is moving to C++; C for OS development is stuck on UNIX and on embedded devs who wouldn't use anything else even at gunpoint.


> There are also library types that enforce checking for errors, something currently impossible in C.

This is a weak form of checking for a kernel. L4 Hazelnut was written in C++ for this reason, but they didn't use it much, mirroring Shapiro's experience with EROS. And when they had to revise the kernel design to plug security holes and they wanted to formally verify its properties, they switched to C because C++ was too complex and left too much behaviour unspecified, and thus we got verified seL4 written in C.


C++ has shrunk a lot in mindshare since its peak in the mid-90s. And Rust is the trendy thing now in the same space.


Only if we are speaking about enterprise CRUD apps that used to be done in MFC, OWL, VCL, Motif++.

OpenCL lost to CUDA because it did not natively support C++ until it was too late.

NVidia has designed Volta specifically to run CUDA C++ code.

There is no C left on game consoles SDKs or major middleware engines.

All major C compilers are written in C++.

Microsoft has replaced their C runtime library by one written in C++, exposing the entry points as extern "C".

Beyond the Linux kernel, all native parts on Android are written in C++.

The majority of deep learning APIs being used from Python, R and friends are written in C++.

Darwin uses a C++ subset on IO Kit, and Metal shaders are C++14.

AUTOSAR has updated their guidelines to use C++14 instead of C.

All major modern research OSes are being done in C++, like Genode.

Arduino Wiring and ARM mbed are written in C++.

As for Rust, while I do like it a lot, it still cannot compete with C++ in many key areas, like amount of supported hardware, GUI frameworks and available tooling.


> AUTOSAR has updated their guidelines to use C++14 instead of C.

Really? Interesting thing. You mean for "standard/legacy" autosar, or for the new "dynamic" variant?

When I was back in automotive, the autosar design(s) were probably the ones software people complained about the most.


BMW were the ones pushing it.

I am no expert there, learned it from their presentation at FOSDEM last year.

https://archive.fosdem.org/2017/schedule/event/succes_failur...

Page 12 on the second slideset.


IOKit dates to c. 2000 so it’s hardly a modern example and even people at Apple bitch about the fact that they went with C++.


Most likely because they dropped Objective-C driver framework from NeXTSTEP.

They are surely a vocal minority, otherwise Metal shaders wouldn't be C++14.


> I believe he means RAII. It makes it almost impossible to forget to release resources or roll back a transaction.

This kind of pattern doesn't exist in a microkernel. I agree it might be useful in a monolithic kernel, but that's not the context here.


> All of the memory you describe for other purposes is allocated at user level and booked to processes.

No, they aren't. A microkernel is responsible for basic thread management and IPC, both of which are highly dynamic in nature.

You seem to be confusing the system that decides when to make a scheduling decision (userspace process - although still part of the microkernel project, so still included in all this anyway), with the system that actually executes that decision (the microkernel itself). And in the case of systems like QNX the kernel will even do its own decisions independent of the scheduler service, such as switching the active thread on MsgSend.

But whether or not it's in ring0 or ring3 is independent of whether or not it's part of a microkernel. A microkernel delegates responsibility to ring3 processes, but those processes are part of the microkernel system - they are in fact a very critical aspect of any microkernel project, as without them you end up building a bootloader with aspirations of something bigger than a kernel.


> A microkernel delegates responsibility to ring3 processes, but those processes are part of the microkernel system

I disagree. Certainly you won't get a usable system without some core services, but the fact that you can replace these services with your own as long as you satisfy the protocol means there's a strong isolation boundary separating them from the kernel. Certainly they are essential components of the OS, just not the kernel.

As for the alleged dynamism of thread management and IPC, I don't see how it's relevant. There exist asynchronous/non-blocking IPC microkernel designs like VSTa and Minix in which the kernel allocates and manages storage for asynchronous message sends, but it's long since proven that such designs are hopelessly insecure. At the very least, it's trivial to DoS such a system.

Only with bounded message sends and send/receive buffers provided by the processes themselves can you avoid this. If the idea with Fuchsia is to reimagine consumer operating systems, avoiding the same old mistakes seems like a good idea.

As for scheduling, that's typically part of the kernel logic, not a user space process. Yes, message sends can donate time slices/migrate threads, but there are priority inversion problems if you don't do this right, as L4 found out and avoided in the seL4 redesign. I honestly don't know why Google just didn't use or acquire seL4 for Fuchsia.


>The advantages C++ has for application-level programming are useless in this domain.

ESR was recently making some generalized observations in this direction: http://esr.ibiblio.org/?p=7804


how about we argue the impossibility of most people ever being able to understand what's going on in C++ code (even their own code) and the cataclysmic consequences of using an over convoluted language? I mean there is a reason why the original pioneers of C don't use C++. (i mean other than the fact that dmr is dead)


On the other hand, large C code bases are a special kind of hell, lack of namespaces and user-defined types makes them difficult to understand, modify and test.


> On the other hand, large C code bases are a special kind of hell, lack of namespaces

Can you please name a project that you have worked on where you have run into problems because everything was in a single namespace? What was the problem, how did you run into it, and how did you resolve it?

There are a lot of advantages to namespaces. I used to believe that single-namespace languages would cause problems for large software, but working with Emacs (huge single namespace with all the libraries loaded into memory at once, so much worse than C, where you only link a subset of libraries), this problem has not surfaced. I mean literally the only difference is that ".", or whatever the language-enforced namespace accessor is, goes from being special syntactically, to being a convention. When you start to think about namespaces as trees, this makes more sense. Namespaces just push naming conflicts to the parent node. There is no magic that is going to solve conflicts or structure things well or give things clear names. All that is up to the programmer.
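
Concretely (identifiers made up), the difference is syntax versus convention:

    namespace net { namespace tcp { int connect_socket(); } }  // C++: net::tcp::connect_socket()

    int net_tcp_connect_socket(void);  // C/Elisp style: the same tree, encoded in the name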


But we're discussing a microkernel, not a large C code base, yes?


We're discussing an operating system with a microkernel at its heart and many things built around it.


I understand my code - the language doesn't dictate the understandability of the code that is written. Any language can be used to write indecipherable bad code. You are blaming the wrong thing. C++ seems to be very widely used to write some amazing things, despite your apparent hatred of it?


Would you really say that this sort of complexity is just down to writing indecipherable bad code?

https://isocpp.org/blog/2012/11/universal-references-in-c11-...

In my view C++ is a very complex language that only few people can write safely and productively.

When you say "I understand my code" I have to believe you. The problem is that understanding other people's C++ code takes ages, even if they don't abuse the language. Trusting their code is another story entirely.

C++ is a very flexible language in that it puts few restrictions on redefining the meaning of any particular syntactic expression.

That's great, but it also means that there is a lot of non-local information that you have to be aware of in order to understand what any particular piece of code actually does.

I'm not surprised that C++ is both loved and hated and perhaps even more often simply accepted as the only practical choice.

There aren't many widely used languages around that allow us to optimize our code almost without limit and at the same time provide powerful abstraction facilities.

At the same time, there aren't many widely used languages around that make reading other people's code as difficult as C++ (even well written code) and come with a comparably long tail of accumulated historical baggage.


Yes, universal references take a while to understand. I read Scott Meyers's book, and the chapter dedicated to them took some getting used to, and note taking.

The language is dealing with some tricky concepts. To hide them or try to gloss over them would lead to writing virtual machines and bloated memory usage etc. in the style of C# / Java.

How else would you deal with moving values around, and with the fact that an rvalue becomes an lvalue once it is named inside a function?
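
To illustrate the rvalue-becomes-lvalue point, a minimal sketch (names made up):

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> sink;

    template <typename T>
    void store(T&& s) {                      // universal (forwarding) reference
        // Inside the function, s has a name and is therefore an lvalue;
        // std::forward restores the caller's value category.
        sink.push_back(std::forward<T>(s));  // moves for rvalues, copies for lvalues
    }

    int main() {
        std::string name = "kernel";
        store(name);                    // copies: name is an lvalue
        store(std::string("tmp"));      // moves: the argument is an rvalue
    }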


Haskell, Common Lisp, Ada, Scala, OCaml, F# come to mind.

Even Java and C# are slowly getting down that path.

Languages get those features because they make sense and solve real production problems.


Most (I hesitate to say all) programmers understand their own code. The problem is usually that nobody else understands that code you wrote.

> Any language can be used to write indecipherable bad code. You are blaming the wrong thing.

Some languages allow stupid things. Some even encourage it. So, no, languages can and should be blamed.


I have to maintain other people's code, people who have left the company and not commented it. It is horrible to do, but it is possible. It's even better if they wrote it in a logical way.


> I mean there is a reason why the original pioneers of C don't use C++. (i mean other than the fact that dmr is dead)

Bjarne created C++ exactly because he didn't want to repeat the experience he had when he lost his Simula productivity to BCPL.

Of course the C designers thought otherwise of their own baby.


So goto spaghetti is understandable? And dropping those isn't an argument since proper C++ usage also implies agreeing on a proper subset of the language to use. Modern C++ with sane restrictions is way more easy to understand. Especially w.r.t. resource ownership and lifetimes (as pointed out).


I'm not going to argue that one language is better than another, but I do honestly get sick of all this "goto" bashing that often rears its head. Like all programming constructs, goto can be ugly when it is misused. But there are times when I've simplified code and made it far more readable by stripping out multiple lines of structured code and replacing them with a single goto.

So if you're going to argue in favour of C++ with the caveat of good developer practices, then you must grant the same caveat to C (i.e. you cannot play the "goto spaghetti" card), otherwise you're just intentionally skewing your comparison to win a pointless internet argument.


No, I would never argue for C++. The reason being mostly its toolsets (constantly changing, unstable and often incoherent). I just don't think readability is an argument - and I am as sick of (pointless) arguments against C++'s readability as you are about goto arguments :) Edit: Just to be clear - there are actual arguments against C's readability. For example when figuring out where and when data gets deleted - but as others have pointed out, dynamic memory management is a whole different beast in kernel wonderland.


>So goto spaghetti is understandable?

There's no goto spaghetti in C -- it's only used for local error handling, not for jumping around, at least since the 70s...


You should look at some codebases I occasionally find on enterprise projects.


Enterprise projects written in C?

All 10 of them?


I wonder where you are counting those 10 from.

Enterprises also write native code, it is not everything Java, .NET and SAP stuff.


Sure, but most of it is in Java, .NET and such.

The rest of it could hide any number of dragons (and be written in any kind of legacy, nightmarish, and/or proprietary tools and languages), so it's not much of a proof of widespread bad C "goto" abuse.

Let's make a better criterion: how many of the top 200 C projects in GitHub suffer from "spaghetti goto" abuse? How many of all the C projects in GitHub?


Enterprise software is much more than just desktop CRUD applications.

For example, iOS applications, portable code between Android and iOS, distribution tracking, factory automation, life science devices, big data and graphics are just a few examples of areas where C and C++ get used a lot.

Sometimes it says C++ on the tin, but when one opens it, it is actually the flavour I call "C with C++ compiler".

Github is not representative of enterprise code quality.


Your argument about enterprise code cannot be verified since we can't have access to it. Also, the sample of enterprise code you have access to is probably limited and thus most likely biased. Doesn't seem like a very good general argument, but maybe it is a good one for your own individual situation, if we are to believe your word.


You should say the same to coldtea, the person asserting that there are only 10 enterprise projects written in the C language and that there's no goto spaghetti in C language programs.


> If you want to avoid C++ that's great, but to argue for C over it is insanity rooted in nostalgia.

Did you know that code in C++ can run outside of main()?

I used to be a C++ believer, and advocated for C++ over our company's use of Java.

One day, they decided they wanted to "optimize" the build, by compiling and linking objects in alphabetical order. The compile and link worked great, the program crashed when it ran. I was brought in to figure it out.

It turned out to be the C++ "static order initialization fiasco":

https://yosefk.com/c++fqa/ctors.html#fqa-10.12

If you've ever seen it, C++ crashes before main(). Why? Because ctors run before main(), and some run before other statics they depend on have been constructed.

Changing the linking order of the binary objects fixed it. Remember nothing else failed. No compiler or linker errors/warnings at the time, no nothing. But one was a valid C++ program and one was not.
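
For anyone who hasn't hit it, a minimal sketch of the fiasco (file names hypothetical):

    // a.cpp
    #include <string>
    std::string greeting = "hello";     // dynamically initialized before main()

    // b.cpp
    #include <string>
    extern std::string greeting;
    std::string loud = greeting + "!";  // undefined behavior if b.cpp's
                                        // initializers happen to run first:
                                        // greeting's ctor may not have run yet

Which translation unit's initializers run first depends on link order, which is exactly what that "optimized" build changed.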

You might think that is inflammatory, but I considered that behavior insane, because main() hadn't even run yet, and the program cored, leaving me to figure out what went wrong.

>> Furthermore, you don't want exceptions in kernel code.

>Nobody said anything about C++ throw/catch exceptions.

I'd like to add that if you're finding yourself restricting primary language features (e.g. templates, static ctors, operator overloading, etc.) because the implementation of those features is bad, maybe using that language is the wrong choice for the project you're working on.

After I read the C++ FAQ lite [1] and the C++ FQA [2], I realized the determinism that C provides is kind of a beautiful thing. And yes. For a kernel, I'd argue C over C++ for that reason.

[1] C++ FAQ Lite: http://www.dietmar-kuehl.de/mirror/c++-faq/

[2] C++ Frequently Questioned Answers: https://yosefk.com/c++fqa/


Well, if your main argument against C++ is undefined order of static initialization and that it caught you by surprise, then I'd counter that by saying that you do not know the language very well. This is very well known behaviour.

I think that there are stronger arguments against C++: the continued presence of the complete C preprocessor restricting the effectiveness of automatic refactoring, the sometimes extremely cumbersome template syntax, SFINAE as a feature, no modules (yet!)...

Still, C++ hits a sweet spot between allowing nasty hardware-related programming hacks and useful abstractions in the program design.


> ...then I'd counter that by saying that you do not know the language very well. This is very well known behaviour.

So parsing your sentence. I'm right, and you're blaming me for not knowing a language as expertly as you. I can live with that.

Edited to add:

I admit it's a little snarky perhaps, but the C++ standard is 1300 pages long. In 2018, it took my browser a minute to open it.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n379...

I really do not have time to read a document like that to figure out whether or not that behavior is spelled out in the standard. So yes, I'll let you be the expert on this.


Sorry if the statement offended you. It came from the experience that I so far haven't encountered anyone who seriously uses C++ and does not know about the undefined order of static initialization. Also, I haven't yet had a situation where this was a big deal.

There are worse pitfalls than unstable order with static initializers specifically. If you dynamically load shared libraries at runtime on Linux, you risk static initializers being run multiple times for the same library. This is platform-specific behavior that is AFAIK present on other UNIX systems as well, and I'm certain that you won't find it in the standard.


> Sorry if the statement offended you. It came from the experience that I so far haven't encountered anyone who seriously uses C++ and does not know about the undefined order of static initialization.

Water under the bridge.

While I did say I was brought in to fix it, what I didn't say was that the group's management thought that Java coders could code in C++. D'oh.


It's worth noting that most language standards are of similar length.


Even in C, the program doesn't start in main(); glibc actually runs a function called _start() before main(), something like this:

    void _start() {
        /* bla bla */
        exit(main(argc, argv));
    }


The first thing I thought of was http://lbrandy.com/blog/2010/03/never-trust-a-programmer-who...

The better I get at C++, the less of it I actually use.


I've not seen that. But I think that graph matched my experience with C++. I just stopped right before "We Need Rules."


Well, let me tell you that C suffers from the same issue of running code outside main().

It is funny how many issues people blame on C++ that are usually inherited from C semantics compatibility, or existing C extensions at the time C++ started to get adopted.


No no no no no. This is a C++ problem. As much as you want to blame this particular problem on C, C handles this the right way.

Let's try to write the equivalent of this error in C:

  int myfunc() {
          return 5;
  }
  
  static int x;
  static int y = 4;
  static int z = myfunc();

  int main()
  {}
Compiling that in gcc gives me:

  main.c:8:16: error: initializer element is not constant
   static int z = myfunc();
                  ^~~~~~
And it makes sense, C just wants to do a memcpy() to initialize the static variables. In C++, the only way the class is initialized is if the ctor is run. And that means running the ctor before main().

Edited to add:

You're correct that 5.1.2 does not specify memcpy() as a form of initialization. But see my reply below about C11 and static storage classes.


Now try that with other C compilers as well, or without using static.

Also add common C extensions into the soup like constructor function attributes.

Finally ISO/IEC 9899:2011, section 5.1.2.

"All objects with static storage duration shall be initialized (set to their initial values) before program startup. The manner and timing of such initialization are otherwise unspecified. Program termination returns control to the execution environment."

Don't mix what your compiler does, with what the standard requires.

Doing so only leads to unportable programs and misunderstandings.


The C11 standard is clear here on syntax for a static initializer.

Read section 6.7.9, constraint 4:

  All the expressions in an initializer for an object that
  has static or thread storage duration shall be constant 
  expressions or string literals.
It's syntax, not initialization.

And that makes sense. However, the memory is initialized before runtime: it could be via memcpy(), or it could be loaded as part of the executable and then mapped dynamically at runtime. That's what 5.1.2 is saying.

What 6.7.9 constraint 4 is saying is that initializers for static variables can only be constant expressions.


Yes, but global variables are not necessarily static.


I think you're missing the point entirely here.

C++ has to run code before main() if a static object's class has a ctor. There's no other way to initialize that static object.

C prevents this by requiring static storage to be initialized with constants.


Fair enough.


All variables declared at file scope have static storage duration in C.


I was talking about using static keyword.


Yes, but what's important here is the storage duration, which the static keyword doesn't affect at file scope (it just affects the symbol scope).


Use of static in the context of that C snippet is deprecated in C++. One is supposed to use an unnamed namespace instead.

    namespace {
        int myfunc() {
            return 5;
        }

        int x;
        int y = 4;
        int z = myfunc();
    }

    int main()
    {}
In my opinion, his point is still valid.


Just FYI, I believe that deprecated usage was later undeprecated. See https://stackoverflow.com/questions/4726570/deprecation-of-t...


The same issue always occurs wherever you have static constructors that are not constant.

That's why it gives you a compiler warning, and why your IDE marks it in yellow.

The same issue occurs in any language with static constructors, including Java.


If you think C code doesn't run before main() you're very naive. Just try this:

    #include <stdio.h>

    static volatile int* x;
    static int y = 42;

    void __attribute__((constructor)) foo() {
        printf("[foo] x = %p\n", (void*)x);
    }

    int main() {
        x = &y;
        printf("[main] x = %p\n", (void*)x);
        return 0;
    }

And before you complain about that being a compiler extension yes, it is, but it's also not rare, either, and you're probably using C libraries that do this.


>And before you complain about that being a compiler extension yes, it is, but it's also not rare, either, and you're probably using C libraries that do this.

e.g. All Linux kernel modules use this for initialising static structs for interfacing with the kernel.


It's a compiler "hack" for shared libraries, because there is no other way to run initialization for ELF objects. [1] The C standard doesn't allow it, and gcc forces you to be explicit about it.

[1]: https://www.geeksforgeeks.org/__attribute__constructor-__att...


How is this an actual problem in the real world?


If one isn’t careful, you end up with interdependency between static initializers. Since the order of static initialization is undefined, you get fun bugs like your program crashing because a minor change caused the objects to link in a different order.

For example, the dreaded singleton using a static variable inside a function:

    Singleton& get_singleton() {
        static Singleton instance;
        return instance;
    }
Having a couple of those referenced in static initializers is a recipe for disaster. It’s a bad practice, but people unfortunately do it all the time. Those that do this are equally unequipped to realize why their program suddenly started crashing after an innocuous change.


People who use singletons deserve no better...


This made me think of segmentation faults caused by stack overflow due to allocating an array with too many elements on the stack, which is also "fun" to debug until you learn about that class of problems.

That applies to both C and C++ though.


And that's one of the reasons why the VLAs introduced in C99 became optional in C11.


C++ gives you a lot of rope to hang yourself but style guides help constrain the language to deal with issues like the one you described: https://google.github.io/styleguide/cppguide.html#Static_and...


That's great for Google and greenfield projects, but if you have people who insist that enums should be static classes, god help you.


The Singleton pattern can be used to fix the order of static constructors. I think that this is the only reasonable use for the singleton pattern (which is just a global variable in disguise).

In my opinion, it's better to not rely on static constructors for anything non-trivial (singleton or not). They can be such a pain in the ass to debug.


Here’s a few of my personal favorite insane nostalgics:

* Donald Knuth: http://tex.loria.fr/litte/knuth-interview

* Linus Torvalds: http://harmful.cat-v.org/software/c++/linus

* Martin Sústrik: http://250bpm.com/blog:4


Dude, that's a Knuth article from 1993.

I don't really like C++ and I haven't been forced to use it (with C and C++ you are basically forced to use them, few people use them for greenfield projects willingly, same for JavaScript; this of course doesn't apply for people who are already C/C++/JavaScript programmers), but from everything I've seen about modern C++ they are moving to a more consistent programming style.

Criticizing C++ in 2018 with arguments from back in 1993 feels dishonest.


Knuth's arguments still hold though:

"The problem that I have with them today is that... C++ is too complicated. At the moment, it's impossible for me to write portable code that I believe would work on lots of different systems, unless I avoid all exotic features. Whenever the C++ language designers had two competing ideas as to how they should solve some problem, they said "OK, we'll do them both". So the language is too baroque for my taste. But each user of C++ has a favorite subset, and that's fine."

In fact, they do so even better than they did back then. E.g. I eagerly anticipated C++11, but virtually every codebase that's older than three or four years and not a hobby project is now a mixture of modules that use C++11 features like unique_ptr and modules that don't. Debugging code without smart pointer semantics sucked, but debugging code that has both smart pointer semantics and raw pointers sucks even harder.

There's a huge chasm between how a language is standardized and how it's used in real life, in non-trivial projects that have to be maintained for years, or even decades.


I am currently on a team maintaining a giant codebase and migrating to C++11 (and beyond) for a new compiler. We do not have issues with the deprecation of auto_ptr, the use of raw pointers, or debugging general COM problems. The code base is 20 years old and we do not complain about debugging it.

Debugging pointers seems a poor reason to criticize an entire language!

C++ may be complicated but the English language is also complicated; just because people tend to use a smaller vocabulary than others doesn't make the language irrelevant or worthless.

Looking at how English has been used to create a raft of rich and diverse poetry, plays, prose and literature in general, the same should apply to C++, because its unique use in a variety of circumstances surely is its beauty.


> Looking at how English has been used to create a raft of rich and diverse poetry, plays, prose and literature in general, the same should apply to C++, because its unique use in a variety of circumstances surely is its beauty.

I don't think this is a valid argument, though. Natural languages have to be rich. Programming languages should be terse and concise because we have to keep most of them in our heads at one time and our brain capacity is limited. You don't need to know all of English/French/Romanian but you kind of need to know all of C++/Python/Javascript to do your job well when developing C++/Python/Javascript.

I think the C++ designers lately kind of agree with me but the backward compatibility requirements are really stringent and they can't just deprecate a lot of the older features.


I think it's more that programming languages have to be precise and unforgiving. Natural language is the opposite.


That was obviously (I hope?) just one example. C++ has a huge set of overlapping features, some of which have been introduced as a better alternative of older features. Their interaction is extremely complex. It's great that your team manages to steer a large, old codebase without trouble, but most of the ones I've seen can't, and this complexity is part of why they can't.


Looking at contrived legal texts, which are a better comparison with code than poetry, I don't agree. I don't even agree that there is one single English language.

Legalese uses a ton of Latin idioms, arcane rights and philosophies. This is comparable to the cruft of the C and C++ standards. For a microkernel of some thousand LOC you shouldn't need a multi-paradigm language.

seL4 did it in Haskell, which is a step in the right direction. Then it was ported to a provably safe subset of C.


A large chunk of his argument doesn't hold at all. This:

"At the moment, it's impossible for me to write portable code that I believe would work on lots of different systems, unless I avoid all exotic features."

Is just not remotely true anymore. Modern toolchains entirely obsoleted that. Modern C++ compilers are nothing like what Knuth used in 1993.

If anything it's easier to write portable C++ than it is portable C due to C++'s STL increasingly covering much of the POSIX space these days.
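
For instance, this uses only the C++11/17 standard library where POSIX headers used to be required:

    #include <chrono>
    #include <cstdio>
    #include <filesystem>
    #include <thread>

    int main() {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));   // was usleep()
        for (const auto& e : std::filesystem::directory_iterator(".")) // was opendir()/readdir()
            std::printf("%s\n", e.path().string().c_str());
    }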


“Criticizing C++ in 2018 with arguments from back in 1993 feels dishonest.”

That statement itself seems intellectually dishonest. What has changed that invalidates his arguments? After all, C++17 is still backwards compatible with the C++ of 1993.

Pardon me for finding this humorous, but stating that I can't use a Donald Knuth quote in a computer science topic because it's old is like saying I can't quote Sun Tzu when talking about modern events because the Art of War is an old book.

https://en.m.wikipedia.org/wiki/The_Art_of_Computer_Programm...


Donald Knuth is an amazing person, but I'm not sure he's necessarily the same authority in a discussion about industrial programming languages as he is in a discussion about computer science.

So to change your analogy, it would be like quoting Sun Tzu about the disadvantages of modern main battle tanks, using the Art of War. Sure, the principles in the Art of War are solid, but are we sure that they really apply to a discussion about Leopard 2 vs M1 Abrams?

That said, I'm not a fan of C++ either. I think its problems are intractable because they'd have to break backwards compatibility to clean the language and I'm not sure they can do that, unless they want to "Perl 6"-it (aka kill the language).


Fair enough, but I still wouldn't disregard Sun Tzu or Donald Knuth as making arguments comprised of "insanity rooted in nostalgia." That was my primary point.

In any event, Knuth specifically made statements dismissive of C++ 25 years ago that I believe are still valid today. I must have missed reading Sun Tzu's missive on mechanized warfare from the 6th century BC. ;)

Indeed, we can both agree on the backwards compatibility problem. I'm waiting on a C++ build as I type this. Also, I really like the new language features like std::unique_ptr, std::function and lambdas.

I'd still rather do my true low-level programming in C bound with a garbage-collected higher-level language for less hardware-focused or performance-critical work instead of bolting those features on to C by committee over the span of decades. For example, C shared libraries with Lua bindings or LuaJIT FFI are bliss in my humble opinion.


That same Linus is using Qt nowadays...


He is, but insists that all of the business logic remains in "sane" C files.

https://news.ycombinator.com/item?id=16489944


Qt isn't quite the same as vanilla C++, however.


And the core of his Qt program is still written in C.


For someone that was so religiously against C++, he should have kept using Gtk+.


He wasn't religiously against C++, just pragmatically.


Martin's take is really good; it's a really well-thought-through blog post.


I don't think it is a good blog post. He first criticises exception handling as undefined behavior, which it is certainly not, and then criticises exception handling in general because it decouples error raising from error handling. That is the whole point of exception handling: exceptions should be used for non-local errors. Most of the "errors" handled in Martin's projects ZeroMQ and Nanomsg (which are both great libraries btw!) should not be handled as exceptions, as they are not unexpected values but rather states that have to be handled. Here, he uses the wrong tool for the job and criticises the tool.

He then criticises exceptions thrown in constructors and favors a init-function style. I never had any problem with this because I follow the rule that there shouldn't be much code in the constructor. The one and only task of a constructor is to establish the object's invariant. If that is not possible, then the object is not usable and the caller needs to react and shall not use the object.

In the second series, he first compares apples (intrusive containers) and oranges (non-intrusive containers), and then argues that the language forces him to design his software that way. Basically he argues that encapsulation makes it impossible in his case to write efficient code, and that you have to sacrifice it for performance.

However, with C++, you can extract the property of being an object in an intrusive list into a re-usable component, e.g. a mix-in, and then use your intrusive list with all other types. I can't do this in C in a type-safe manner, or I have to modify the structs to contain pointers, but why should they have anything to do with a container at all?
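
Something like this rough sketch (names made up):

    template <typename T>
    struct ListNode {          // mix-in: the reusable "linkable" property
        T* next = nullptr;
    };

    template <typename T>      // T must derive from ListNode<T>
    struct IntrusiveList {
        T* head = nullptr;
        void push_front(T* item) { item->next = head; head = item; }
    };

    struct Task : ListNode<Task> {  // the link lives inside the object (intrusive),
        int id = 0;                 // but the mix-in works for any type, type-safely
    };

    IntrusiveList<Task> ready_queue;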

Besides that, I think that Martin is a great programmer who did an amazing job with ZeroMQ. But I have the impression that he is wrong in this case.


No it's not, they confuse "undefined behavior" with "hard to analyse behavior" for starters. Exceptions are not UB, but the control flow is totally not obvious.

If I were to start a project today, I'd rely heavily on optional and result types and use exceptions only for serious errors, when it makes sense to unwind and start from a clean slate.


I wish I could give you gold


What are you talking about... Why would you write a kernel in C++ instead of C? You want fine grained control over what the machine is doing. Imagine dealing with all that bullshit C++ comes with when trying to write a kernel. And then you’re trying to figure out if this C++ compiler you need for this architecture supports dark matter backwards recursive template types but it only supports up to C++ 76 and you’re just like fuck my life


Never has man so eloquently expressed the frustration of millions (OK maybe thousands).

The trick to using C++ is to use less of it. C with classes and namespaces. Oh, and smart pointers.


Scalable C (https://hintjens.gitbooks.io/scalable-c/content/preface.html) looks like a reasonable way to write software in C from my C++ programmer perspective. However, you could also use C++ to express the intent more directly.



"Orthodox C++" looks more like writing C-code and keeping the C programming style. Most of the points are really questionable.

- "In general case code should be readable to anyone who is familiar with C language". Why should it? C++ is a different language than C. I speak german, but I cannot read e.g. French unless I learned it even though they are related.

- "Don't use exceptions". Why? There is no performance penalty in the non-except case, and the exception case should be rare because it is exceptional. I can see arguments for real-time systems, and for embedded systems were code size matters. The alternative is C-style return codes and output parameters. Exceptions are better in that case because you cannot just go on after an error condition, and functions with output parameters are harder to reason about because they loose referential transparancy. Of course, in modern C++ one could use optional or expected types.

- "Don't use RTTI". I never needed RTTI in my professional life.

- "Don't use C++ runtime wrapper for C runtime includes". C++ wrappers have some benefits over the C headers. They put everything in namespace std, so you don't need to use stupid prefixes to prevent name clases, and they define overloads for some of the C functions, e.g. std::abs(int) and std::abs(long) instead of abs(int) and labs(long).

- "Don't use stream, use printf style functions instead". If this means to use a type-safe printf variant I could agree to some point, although custom operator(<<|>>) for custom types are sometimes nice. If it means to use C printf etc I would strongly object.

- "Don't use anything from STL that allocates memory, unless you don't care about memory management". You can use allocators to use e.g. pre-allocated storage. The STL also contains more than containers, why would you not use e.g. the algorithms and implement them yourself?



