
> Just because that memory is reserved at boot doesn't mean it suddenly has no lifecycle of any kind.

Yes it does. The "lifecycle" is: allocate at boot, machine halts.

All of the memory you describe for other purposes is allocated at user level and booked to processes. This is how you make a kernel immune to DoS.

> Nobody said anything about C++ throw/catch exceptions.

That's the only meaningful difference in error handling between C and C++. Since you mentioned error handling as a reason to choose C++, what else could you possibly mean?

> If you want to avoid C++ that's great, but to argue for C over it is insanity rooted in nostalgia.

Sure, you keep believing that. It's clear you're not familiar with microkernel design. The advantages C++ has for application-level programming are useless in this domain.




> Since you mentioned error handling as a reason to choose C++, what else could you possibly mean?

I believe he means RAII. It makes it almost impossible to forget to release resources or roll back a transaction.


> It makes it almost impossible to forget to release resources or roll back a transaction.

No. For that you need a better type system. Linear types show great promise for this.
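
Roughly (a sketch with invented names): C++ move-only types plus RAII get you something affine-ish, but the compiler still happily accepts code that touches a moved-from handle; a linear type system would reject that use statically.

    #include <utility>

    // Sketch: a move-only RAII handle. Copying is forbidden, moving
    // transfers ownership -- close to affine typing, but not linear:
    // the moved-from object is still in scope and still usable.
    class Handle {
    public:
        explicit Handle(int fd) : fd_(fd) {}
        ~Handle() { /* close(fd_) if fd_ >= 0 */ }
        Handle(const Handle&) = delete;
        Handle& operator=(const Handle&) = delete;
        Handle(Handle&& other) noexcept : fd_(other.fd_) { other.fd_ = -1; }
        int fd() const { return fd_; }
    private:
        int fd_;
    };

    void consume(Handle) { /* takes ownership and drops it */ }

    int main() {
        Handle h(42);
        consume(std::move(h));
        return h.fd() == -1 ? 0 : 1;  // compiles; a linear type system
                                      // would reject this use of h
    }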


Linear types don't "show" promise, they solve the issue, and this has been known since linear logic was popularized by Wadler[1] and Baker[2] in the early 1990s. The problem is that programming with linear logic is very inconvenient for a lot of things, and very inefficient when you actually want to share data.

[1] http://homepages.inf.ed.ac.uk/wadler/papers/linearuse/linear...

[2] http://home.pipeline.com/~hbaker1/LinearLisp.html


I understand RAII as a resource-management solution. What use does RAII have in error handling? It makes things convenient, but it does not make error handling go away.


It's easier to get this right:

    {
      Resource foo(path);
      if … {
        return -ENOMEM;
      }
      return 0;
    }
Than to get this right:

    {
      Resource* foo = acquire(path);
      if … {
        release(foo);
        return -ENOMEM;
      }
      release(foo);
      return 0;
    }
Even if you do the goto-style:

    {
      Resource* foo = acquire(path);
      int rc = 0;
      if … {
        rc = -ENOMEM;
        goto out;
      }
    out:
      release(foo);
      return rc;
    }


No, but exceptions aren't the only way to handle errors in C++.

There are also library types that enforce checking for errors, something currently impossible in C.
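
For example (just a sketch with made-up names, not any particular kernel's or library's API), a [[nodiscard]] status type gets the compiler to flag every call site that silently drops an error, which a bare int return in C cannot do:

    #include <cerrno>

    // Hypothetical status type. [[nodiscard]] makes the compiler warn
    // whenever a caller ignores the returned value.
    class [[nodiscard]] Status {
    public:
        static Status ok() { return Status(0); }
        static Status error(int code) { return Status(code); }
        bool is_ok() const { return code_ == 0; }
        int code() const { return code_; }
    private:
        explicit Status(int code) : code_(code) {}
        int code_;
    };

    // Stand-in for some kernel-ish operation.
    Status map_page(const void* vaddr) {
        return vaddr ? Status::ok() : Status::error(EINVAL);
    }

    int caller() {
        map_page(nullptr);              // typically a compiler warning: result discarded
        Status s = map_page(nullptr);   // forced to hold the result...
        if (!s.is_ok()) return -s.code();
        return 0;
    }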

Also, thanks to its stronger type system, it is possible to do type-driven programming, preventing many errors from happening at all, which is also not possible in plain C.
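
And a sketch of the type-driven point (again, invented names): distinct wrapper types over what C would pass around as bare integers turn argument mix-ups into compile errors:

    #include <cstdint>

    // Hypothetical strong typedefs: physical and virtual addresses are
    // different types, so they cannot be swapped by accident, unlike
    // two uint64_t parameters in C.
    enum class PhysAddr : std::uint64_t {};
    enum class VirtAddr : std::uint64_t {};

    void map_page(VirtAddr /*va*/, PhysAddr /*pa*/) { /* ... */ }

    int main() {
        VirtAddr va{0x1000};
        PhysAddr pa{0x2000};
        map_page(va, pa);    // fine
        // map_page(pa, va); // does not compile: arguments swapped
        return 0;
    }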

Finally, everyone is moving to C++; C for OS development is stuck with UNIX and with embedded devs who wouldn't use anything else even at gunpoint.


> There are also library types that enforce checking for errors, something currently impossible in C.

This is a weak form of checking for a kernel. L4 Hazelnut was written in C++ for this reason, but they didn't use it much, mirroring Shapiro's experience with EROS. And when they had to revise the kernel design to plug security holes, and wanted to formally verify its properties, they switched to C because C++ was too complex and left too much behaviour unspecified; thus we got the verified seL4, written in C.


C++ has lost a lot of mindshare since its peak in the mid-90s. And Rust is now the trendy thing in the same space.


Only if we are speaking about enterprise CRUD apps that used to be done in MFC, OWL, VCL, Motif++.

OpenCL lost to CUDA because it did not natively support C++ until it was too late.

NVidia has designed Volta specifically to run CUDA C++ code.

There is no C left in game console SDKs or major middleware engines.

All major C compilers are written in C++.

Microsoft has replaced their C runtime library with one written in C++, exposing the entry points as extern "C".

Beyond the Linux kernel, all native parts on Android are written in C++.

The majority of deep learning APIs being used from Python, R and friends are written in C++.

Darwin uses a C++ subset in IOKit, and Metal shaders are C++14.

AUTOSAR has updated their guidelines to use C++14 instead of C.

All major modern research OSes are being done in C++, like Genode.

Arduino Wiring and ARM mbed are written in C++.

As for Rust, while I do like it a lot, it still cannot compete with C++ in many key areas, like the amount of supported hardware, GUI frameworks and available tooling.


> AUTOSAR has updated their guidelines to use C++14 instead of C.

Really? Interesting. Do you mean for "standard/legacy" AUTOSAR, or for the new "dynamic" variant?

When I was back in automotive, the AUTOSAR design(s) were probably the ones software people complained about the most.


BMW were the ones pushing it.

I am no expert there; I learned it from their presentation at FOSDEM last year.

https://archive.fosdem.org/2017/schedule/event/succes_failur...

Page 12 of the second slide set.


IOKit dates to c. 2000, so it's hardly a modern example, and even people at Apple bitch about the fact that they went with C++.


Most likely because they dropped the Objective-C driver framework from NeXTSTEP.

They are surely a vocal minority; otherwise Metal shaders wouldn't be C++14.


> I believe he means RAII. It makes it almost impossible to forget to release resources or roll back a transaction.

This kind of pattern doesn't exist in a microkernel. I agree it might be useful in a monolithic kernel, but that's not the context here.


> All of the memory you describe for other purposes is allocated at user level and booked to processes.

No, they aren't. A microkernel is responsible for basic thread management and IPC, both of which are highly dynamic in nature.

You seem to be confusing the system that decides when to make a scheduling decision (a userspace process - although still part of the microkernel project, so still included in all this anyway) with the system that actually executes that decision (the microkernel itself). And in the case of systems like QNX, the kernel will even make its own decisions independent of the scheduler service, such as switching the active thread on MsgSend.

But whether or not it's in ring0 or ring3 is independent of whether or not it's part of a microkernel. A microkernel delegates responsibility to ring3 processes, but those processes are part of the microkernel system - they are in fact a very critical aspect of any microkernel project, as without them you end up building a bootloader with aspirations of something bigger than a kernel.


> A microkernel delegates responsibility to ring3 processes, but those processes are part of the microkernel system

I disagree. Certainly you won't get a usable system without some core services, but the fact that you can replace these services with your own as long as you satisfy the protocol means there's a strong isolation boundary separating them from the kernel. Certainly they are essential components of the OS, just not the kernel.

As for the alleged dynamism of thread management and IPC, I don't see how it's relevant. There exist asynchronous/non-blocking IPC microkernel designs like VSTa and Minix in which the kernel allocates and manages storage for asynchronous message sends, but it has long since been proven that such designs are hopelessly insecure. At the very least, it's trivial to DoS such a system.

Only with bounded message sends, where the send/receive buffers are provided by the processes themselves, can you avoid this. If the idea with Fuchsia is to reimagine consumer operating systems, avoiding the same old mistakes seems like a good idea.
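
To make "bounded sends with caller-provided buffers" concrete, the interface shape is roughly the following (hypothetical names, not Zircon's or seL4's actual API); the kernel only copies between memory the two processes already own, so a message in flight never forces a kernel-side allocation:

    #include <cstddef>
    #include <cstdint>

    // Hypothetical bounded synchronous IPC. The caller owns both
    // buffers; the kernel copies at most `capacity` bytes each way and
    // never allocates storage for in-flight messages, so a flood of
    // sends cannot exhaust kernel memory.
    struct IpcBuffer {
        void*       data;
        std::size_t capacity;   // hard upper bound enforced by the kernel
    };

    // Trivial mock so the sketch compiles standalone; a real kernel
    // would implement this as a system call that blocks until it
    // rendezvouses with the receiver and returns 0 or a negative error.
    int ipc_call(std::uint32_t /*endpoint*/, IpcBuffer /*send*/, IpcBuffer /*recv*/) {
        return 0;
    }

    int query_console_server() {
        char request[64] = "resolve:console";
        char reply[256]  = {};
        return ipc_call(1 /* hypothetical endpoint id */,
                        IpcBuffer{request, sizeof request},
                        IpcBuffer{reply, sizeof reply});
    }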

As for scheduling, that's typically part of the kernel logic, not a user space process. Yes, message sends can donate time slices/migrate threads, but there are priority inversion problems if you don't do this right, as L4 found out and avoided in the seL4 redesign. I honestly don't know why Google didn't just use or acquire seL4 for Fuchsia.


>The advantages C++ has for application-level programming are useless in this domain.

ESR was recently making some generalized observations in this direction: http://esr.ibiblio.org/?p=7804


How about we argue the impossibility of most people ever being able to understand what's going on in C++ code (even their own code), and the cataclysmic consequences of using an over-convoluted language? I mean, there is a reason why the original pioneers of C don't use C++ (other than the fact that dmr is dead).


On the other hand, large C code bases are a special kind of hell; the lack of namespaces and user-defined types makes them difficult to understand, modify and test.


> On the other hand, large C code bases are a special kind of hell, lack of namespaces

Can you please name a project that you have worked on where you have run into problems because everything was in a single namespace? What was the problem, how did you run into it, and how did you resolve it?

There are a lot of advantages to namespaces. I used to believe that single-namespace languages would cause problems for large software, but working with Emacs (huge single namespace with all the libraries loaded into memory at once, so much worse than C, where you only link a subset of libraries), this problem has not surfaced. I mean literally the only difference is that ".", or whatever the language-enforced namespace accessor is, goes from being special syntactically, to being a convention. When you start to think about namespaces as trees, this makes more sense. Namespaces just push naming conflicts to the parent node. There is no magic that is going to solve conflicts or structure things well or give things clear names. All that is up to the programmer.
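
A throwaway illustration of that last point (invented names): the only thing the language adds is that the accessor is checked syntax instead of a convention; the naming decisions are exactly the same.

    #include <cstdio>

    // C convention: the "namespace" lives in the identifier.
    int net_socket_open(const char* addr) { std::printf("net %s\n", addr); return 3; }

    // C++: the same tree, with '::' enforced by the language.
    namespace fs { namespace file {
        int open(const char* path) { std::printf("fs %s\n", path); return 4; }
    } }

    int main() {
        net_socket_open("10.0.0.1:80");  // conflicts resolved by picking the prefix "net_socket"
        fs::file::open("/tmp/x");        // conflicts resolved by picking the parents "fs::file"
        return 0;
    }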


But we're discussing a microkernel, not a large C code base, yes?


We're discussing an operating system with a microkernel at its heart and many things built around it.


I understand my code - the language doesn't dictate the understandability of the code that is written. Any language can be used to write indecipherable bad code. You are blaming the wrong thing. C++ seems to be very widely used to write some amazing things, despite your apparent hatred of it?


Would you really say that this sort of complexity is just down to writing indecipherable bad code?

https://isocpp.org/blog/2012/11/universal-references-in-c11-...

In my view, C++ is a very complex language that only a few people can write safely and productively.

When you say "I understand my code" I have to believe you. The problem is that understanding other people's C++ code takes ages, even if they don't abuse the language. Trusting their code is another story entirely.

C++ is a very flexible language in that it puts few restrictions on redefining the meaning of any particular syntactic expression.

That's great, but it also means that there is a lot of non-local information that you have to be aware of in order to understand what any particular piece of code actually does.

I'm not surprised that C++ is both loved and hated and perhaps even more often simply accepted as the only practical choice.

There aren't many widely used languages around that allow us to optimize our code almost without limit and at the same time provide powerful abstraction facilities.

At the same time, there aren't many widely used languages around that make reading other people's code as difficult as C++ (even well written code) and come with a comparably long tail of accumulated historical baggage.


Yes, universal references take a while to understand. I read Scott Meyers' book, and the chapter dedicated to them took some getting used to, and note-taking.

The language is dealing with some tricky concepts. To hide them or try to gloss over them would lead to writing virtual machines and bloated memory usage etc. in the style of C# / Java.

How else would you deal with moving variables around, and with the fact that an rvalue becomes an lvalue once it has a name inside a function?
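
For reference, this is the situation being described (sketch, made-up names): the parameter bound through a forwarding reference is itself an lvalue inside the function body, so std::forward is what recovers the caller's value category when passing it on.

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> sink;

    void store(const std::string& s) { sink.push_back(s); }            // copies
    void store(std::string&& s)      { sink.push_back(std::move(s)); } // moves

    // T&& here is a universal/forwarding reference: it binds to both
    // lvalues and rvalues. Inside the body, `s` has a name, so it is an
    // lvalue; std::forward<T> restores the caller's value category.
    template <typename T>
    void log_and_store(T&& s) {
        // ... logging elided ...
        store(std::forward<T>(s));
    }

    int main() {
        std::string name = "fuchsia";
        log_and_store(name);                   // forwarded as lvalue -> copy
        log_and_store(std::string("zircon"));  // forwarded as rvalue -> move
        return 0;
    }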


Haskell, Common Lisp, Ada, Scala, OCaml, F# come to mind.

Even Java and C# are slowly getting down that path.

Languages get those features because they make sense and solve real production problems.


Most (I hesitate to say all) programmers understand their own code. The problem is usually that nobody else understands the code you wrote.

> Any language can be used to write indecipherable bad code. You are blaming the wrong thing.

Some languages allow stupid things. Some even encourage it. So, no, languages can and should be blamed.


I have to maintain other people's code, written by people who have left the company and didn't comment it. It is horrible to do, but it is possible. It's even better if they wrote it in a logical way.


> I mean there is a reason why the original pioneers of C don't use C++. (i mean other than the fact that dmr is dead)

Bjarne created C++ precisely because he didn't want to repeat the experience he had when he lost his Simula productivity to BCPL.

Of course the C designers thought otherwise of their own baby.


So goto spaghetti is understandable? And dropping those isn't an argument, since proper C++ usage also implies agreeing on a proper subset of the language to use. Modern C++ with sane restrictions is far easier to understand, especially w.r.t. resource ownership and lifetimes (as pointed out).


I'm not going to argue that one language is better than another, but I do honestly get sick of all this "goto" bashing that often rears its head. Like all programming constructs, goto can be ugly when it is misused. But there are times when I've simplified code and made it far more readable by stripping out multiple lines of structured code and replacing them with a single goto.

So if you're going to argue in favour of C++ with the caveat of good developer practices, then you must also grant the same caveat to C (i.e. you cannot play the "goto spaghetti" card); otherwise you're just intentionally skewing your comparison to win a pointless internet argument.


No, I would never argue for C++, mostly because of its toolsets (constantly changing, unstable and often incoherent). I just don't think readability is an argument - and I am as sick of (pointless) arguments against C++'s readability as you are of goto arguments :) Edit: Just to be clear - there are actual arguments against C's readability, for example when figuring out where and when data gets deleted - but as others have pointed out, dynamic memory management is a whole different beast in kernel wonderland.


>So goto spaghetti is understandable?

There's no goto spaghetti in C -- it's only used for local error handling, not for jumping around, at least since the 70s...


You should look at some codebases I occasionally find on enterprise projects.


Enterprise projects written in C?

All 10 of them?


I wonder where you are counting those 10 from.

Enterprises also write native code; it isn't all Java, .NET and SAP stuff.


Sure, but most of it is in Java, .NET and such.

The rest of it could hide any number of dragons (and be written in any kind of legacy, nightmarish, and/or proprietary tools and languages), so it's not much of a proof of widespread C "goto" abuse.

Let's make a better criterion: how many of the top 200 C projects in GitHub suffer from "spaghetti goto" abuse? How many of all the C projects in GitHub?


Enterprise software is much more than just desktop CRUD applications.

For example, iOS applications, portable code shared between Android and iOS, distribution tracking, factory automation, life science devices, big data, and graphics are just a small sample of the areas where C and C++ get used a lot.

Sometimes it says C++ on the tin, but when one opens it, it is actually the flavour I call "C with C++ compiler".

GitHub is not representative of enterprise code quality.


Your argument about enterprise code cannot be verified, since we don't have access to it. Also, the sample of enterprise code you have access to is probably limited and thus most likely biased. That doesn't seem like a very good general argument, but maybe it is a good one for your own individual situation, if we are to believe your word.


You should say the same to coldtea, the person asserting that there are only 10 enterprise projects written in the C language and that there's no goto spaghetti in C language programs.



