Some were meant for C [pdf] (2017) (humprog.org)
149 points by craigkerstiens on June 21, 2023 | 147 comments



I think the main weakness of this paper is that it doesn't really address why C is used in lieu of other "better-C" non-managed languages, such as C++, D, Nim, Rust, or Zig--it's almost entirely focused on the idea that the competitors to C are inherently managed languages, framing the question essentially as a "why C instead of Java?" which is a frustrating approach when you're asking "why not <language which is not Java in the same way C is not Java>?"


C is better because it doesn't try to program for you, it isn't clever or convenient, and it doesn't have any ideas about how you structure or solve a particular problem. It gives you basic functionality and then gets out of your hair and lets you get on with programming without making a grand statement about computer science. For people who actually like to program and don't want the language to do it for them, that is pretty attractive.

It also turns out that if you remove all the fancy abstraction and write everything out, the code becomes very maintainable and easy to manage. The fact that it's about as portable as a program can be helps. I find that most people don't want to write C, but almost everyone is pretty happy interacting with C code written by others, because it's simple, fast, portable, easy to integrate with and bind to other languages, and most non-C programmers can read it and have a rough idea of what is going on.


I do think you’ve identified a significant rift between those who prefer C and those who do not. I think you’re being downvoted for the value judgement you’re injecting, though.

> gets out of your hair and lets you get on with programming

This is the divergence, I think. For me, the way in which C “gets out of my hair” means that more burden is placed on me, the developer. Which means I have to work more slowly, cautiously, and remember things that a good type system could have been checking for me. And the lack of features means I spend time working around that lack rather than getting real work done.

I suspect these preferences are irreconcilable.


> I do think you’ve identified a significant rift between those that prefer C and those who do not.

I agree. I think the article is very right in identifying that C developers look at programming differently. I think that is a good thing. Even as a die-hard C programmer, I celebrate that people use other languages, and especially that people are developing new languages like Rust, Odin, Zig and so on. I don't think it wrong for someone to reject C if it's not right for them. Do what works for you. I do think that, given C's massive success, people should perhaps be a bit more interested in figuring out why, instead of dismissing it so fast.


Your comments on this thread up to this point sound like those of a religious fanatic.

I greatly enjoyed reading.

Goes to show why it’s called “The C Religion”.


I think most designers of languages want abstractions, convenience, and cleverness. So for people who don't want that, we have to cling to C, since there aren't other options ;-)


> I think most designers of languages want abstractions, convenience, and cleverness.

You get that in C as well in the form of libraries and frameworks. Just today there was a post on HN about how to do object-oriented programming in C. Some time ago people shared C frameworks to enforce memory safety. There are generic data structure libraries for C as well.
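
For example, the classic pattern such posts describe looks something like the sketch below (hypothetical Shape/Rect names, not from any particular post): an "interface" is a struct of function pointers, and "inheritance" is embedding that struct as the first member.

    #include <stdio.h>

    typedef struct Shape Shape;
    struct Shape {
        double (*area)(const Shape *self);   /* the "virtual method" */
    };

    typedef struct {
        Shape base;                          /* "inherits": must be the first member */
        double w, h;
    } Rect;

    static double rect_area(const Shape *self) {
        const Rect *r = (const Rect *)self;  /* valid cast: base is the first member */
        return r->w * r->h;
    }

    int main(void) {
        Rect r = { { rect_area }, 3.0, 4.0 };
        Shape *s = &r.base;                  /* used only through the interface */
        printf("%f\n", s->area(s));          /* prints 12.000000 */
        return 0;
    }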

I think your jab about how C programmers don't want abstractions, convenience, or cleverness is a weak strawman. That's obviously not it. What I think C programmers value is expressiveness and speed, avoiding bloat and overall inefficiency, and C detractors don't want to address that because it forces them to face the real-world tradeoffs which expose the shortcomings of their personal choices. There's a pervasive fear of missing out on the latest and greatest, and an irrational need to jump onto bandwagons, and C is the extreme opposite of that. And yet it still outperforms anything that came after.


I learned C in the 1980s. At that time, for practical purposes, it seemed it was the same as Pascal, only with less verbose syntax - { } instead of begin/end.

But then I discovered Usenet and the - I don't know exactly how to describe it - the cult of undefined behavior, and the people who care only about standards and not reality.

A basic, straightforward language like what people say C is, is certainly useful. I don't know Rust, and I don't have any plans to learn it.

As I think we all are aware, C originated before computers were standardized on 8-bit bytes, ASCII, Unicode, and 32/64-bit words. So, I mean, no matter how much people say it's appealing because it's simple and reflects the nature of the hardware, it really doesn't.

Is it not true that the Linux kernel is compiled with GNU extensions and not standard C?


> But then I discovered Usenet and the - I don't know exactly how to describe it - the cult of undefined behavior, and the people who care only about standards and not reality.

I don't understand what you tried to say. The standard specifies how the language actually works, and therefore what developers can reliably do with it. How exactly do you see this as a break from reality?


If you stopped at that sentence, then the rest of my comment might clarify.

It's hard for me to elaborate further without any idea of your background. How long have you been programming in C? Have you encountered TAOCP and MIX/MMIX? Do you write strictly standards compliant C? Did you use to read or post on comp.lang.c?

For reference, here is a list of things that are/were off topic:

https://benpfaff.org/writings/clc/off-topic.html

Dennis Ritchie posted on clc in 1999 and was told he was off topic. It has passed into history as a "joke".

"DECtapes are highly platform specific, and are not covered by ANSI C, which is the subject of this newsgroup (comp.lang.c). Try a DEC-related newsgroup.

If you want us to comment on your source code, please post it in the body of your email.

What was your C question?"

https://groups.google.com/g/alt.folklore.computers/c/wbzzoyS...

Face saving aside, that was normal treatment for newbies other than Dennis Ritchie, so if it was a joke, who/what exactly was being made fun of?


> background. How long have you been programming in C?

Since the mid 90s.

> Have you encountered TAOCP and MIX/MMIX?

I fail to see the relevance.

> Do you write strictly standards compliant C?

There is no such thing as non-strictly standards compliant C. It's either C, as specified by one of the international standards, or K&R C for those who can't target a standard, or things that are not C.

> Did you use to read or post on comp.lang.c?

Yes.

What was your point?


The relevance of MIX/MMIX is that to me, MIX is a lot like C in its aspirations and failures as time passed it by. Knuth described similar motivations in terms of portability, not wanting to choose a favorite architecture, and so on. But it ended up being an incredibly baroque, opaque, and obscure way to show algorithms, even though he meant it to be more concrete and relevant than pseudocode.

Then MMIX seems to me to sort of concede that sample code should be a little more like real life code, while still being far more difficult to follow than need be.

Some people just love complications for their own sake, like a medieval monk illuminating a manuscript. That's not wrong, but it's not right for everyone.

The issue I have is with people falsifying the reasons that C can be attractive. The IOCCC is the essence of what C is actually about. It is just not anything like an elegant "high level assembler" that reflects machine architecture. Maybe there is an alternative these days that fits the bill better, or maybe not.

>There is no such thing as non-strictly standards compliant C. It's either C, as specified by one of the international standards, or K&R C for those who can't target a standard, or things that are not C.

Well, here we go. First of all, the tone of this statement takes me back to clc. I could almost believe it's a direct quote.

Secondly, I haven't written a great deal of C code in the last 40 years, all told, but I was writing C code before C89 (and you, I infer, were not). It certainly wasn't K&R C, and it couldn't be standard C if that wasn't invented yet, right?

Everything meaningful "written in C", I think, falls into your category of "things that are not C". Like the Linux kernel.

That's just annoying to some people like it was when "kilobytes" were renamed "kibibytes" or "dwarf planets" were suddenly not "planets".

But shouldn't it bring with it a tiny bit of doubt that there might be something wrong with the ideals of the orthodox? There are many belief systems that have proved impossible for humans to follow, and just because many people are hypocrites doesn't mean that we should define ideals people strive for such that everyone becomes a hypocrite.

If people say undefined behavior allows your program to blow up the world, that is not debatable in isolation, as all definitions are arbitrary. But it is inconsistent with ever using or encouraging anyone else to ever use the language.

>What was your point?

I was pretty cautious. I think I read the FAQ as one was instructed to. I don't know if I ever posted any questions at all. I read it, and the arguments with newbies who over time increasingly seemed like trolls, as entertainment. Same as with the Ayn Rand worshipping Objectivists, the anarchists, or the Holocaust deniers in other groups.

I used to think, no matter how certain people are in their rectitude, can they not see it's leading to endless flamewars where people just wind them up for fun? What do they think the group is for? Do they make the connection that ultimately it is for whatever actually exists in that space? Do they really want what exists there?


Writing C is like doing an extreme sport: respect, but I don't want to try it anytime soon. Thanks to all the developers of interpreters, because they wrote them in C, so I don't need to write C.


You have the wrong idea and are doing yourself a disservice by not learning C. It is not as difficult as people make it out to be.


> more burden is placed on me, the developer. Which means I have to work more slowly, cautiously, and remember things that a good type system could have been checking for me.

Very interesting. You cite these as drawbacks, and I consider them all to be advantages.

Which just proves the point!

> I suspect these preferences are irreconcilable.

I think you're right. Fortunately, they are only preferences and so aren't incredibly meaningful. I prefer C, but most of my professional work isn't in C. And I don't always choose C for any given project.

Every language has its own advantages and disadvantages, so I select the language for a project keeping in mind which one meshes best with the project at hand.


> You cite these as drawbacks, and I consider them all to be advantages.

Why would they be advantages? Working more slowly and cautiously might be advantageous in some cases, but being forced to do so because otherwise your program might die a horrible death seems hard to justify as advantageous. Unless you're treating it as a sort of extreme sport, like rock climbing without a rope.


> being forced to do so because otherwise your program might die a horrible death seems hard to justify as advantageous.

The advantage is that it forces me to do the task properly, and prevents me from getting lazy and having important skills and habits atrophy.

Also, it doesn't feel like an extreme sport or anything. It's just programming, like in any other language. Not any more or less stressful.


It may not be stressful for you, but it ends up being stressful for anyone who has to rely on the resulting programs. When they have a security issue due to a buffer overflow, or a crash due to a null pointer error, etc.

The "just trust me bro" school of programming is not suitable for delivering serious systems.


If you can't write code without buffer overflows, how are you going to write code that is accurate on any of a number of other dimensions? You need to do things the right number of times; if you don't, you get the wrong answer. I agree that security vulns are bad, but correctness is also a good thing, and getting things right is not a "nice to have" skill.


> it ends up being stressful for anyone who has to rely on the resulting programs.

Only if you're writing bad programs. The idea that it's impossible to write good, solid programs in any given language is ridiculous.

> The "just trust me bro" school of programming is not suitable for delivering serious systems.

No one is advocating that. Your choice of language doesn't change the need for processes that ensure quality.


> The idea that it's impossible to write good, solid programs in any given language is ridiculous.

The data disagrees. There's plenty of evidence that the severity of bugs, and the number and severity of security bugs, are far higher in languages like C or C++ than in memory-safe languages.

> Only if you're writing bad programs.

This is precisely the "just trust me bro" mentality I was referring to.

"I write good programs, I swear!"

But you're human, and you really don't. You make mistakes. Empirically, you make roughly as many mistakes per line as programmers in other languages, but the consequences of those mistakes are worse.

You can argue with me all you want, but you're essentially arguing with reality. C has survived for legacy reasons, but almost everyone is desperate for a better, more robust alternative.

Just so you know where I'm coming from - I learned C in the early 80s, not long after learning Fortran. I later started using C++ because it seemed to offer some benefits. But now - more than forty years later, when there are so many better alternatives for any given problem except one which requires legacy integration - it's astonishing to me that people are still staunchly defending C.

It reminds me of what Max Planck said about physics:

> “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it.”


Because working slowly and rewriting code typically leads to better code. By taking it slowly you grow more familiar with what you want, it gives you more time to think, and it's generally more enjoyable.

This culture of “fast coding” is a major reason software ends up so flawed. People are so enamored with quantity they completely forget quality along the way. And that means everyone suffers.


You didn’t actually address the point I raised about why C is hard to justify for that reason.

Besides, a strong type system also forces you to think about the design of your code, and demonstrably leads to more reliable results, on an industry-wide basis.


> a strong type system also forces you to think about the design of your code

True. And so does using a language like C.


Except the forcing function in C occurs because if you make a mistake, you can have serious problems at runtime. With a strong type system, if you make a mistake, your program won't compile. It's a big difference, and it's hard to justify the C approach for reliable systems development in a modern context. No one in this thread even seems to be trying.

If you want to use it for hobby projects because you enjoy it, knock yourself out. But advocating its use for anything important hasn't been viable for decades except for legacy reasons.


> It's a big difference

It is! And there is absolutely a role for strongly typed languages.

> But advocating its use for anything important hasn't been viable for decades except for legacy reasons.

We entirely disagree here.

The big benefit of more stringent languages is that they require less effort on the part of the programmer to ensure safety and correctness. This is of great value -- perhaps even indispensable -- in certain contexts. For instance, if the programming team is large, or if the company wants to hire less experienced developers, or if the company wants to keep up a very high velocity.

But those are economic arguments. It's different than asserting that a given language is simply too dangerous to use under any circumstance. I think that assertion is entirely unsupportable.


This is so true, I've pointedly never learnt to touch type since I don't want to be able to write faster than I can think.


How is the C type system slowing you down? I thought the complaints around C productivity were around memory management and low-level APIs (both of which can be mitigated using libraries: you can “abstract away” these problems and stay in C).


Like Walter Bright, I would love to have some sort of “span” or “slice” primitive. I would love language level tagged unions rather than writing them out each time. Pattern matching isn’t exactly a type system feature, but relies on it to work, and would make those more useful. The lack of destructors means a lot of manual handling of errors. The lack of generics makes it difficult to write generic data structures without losing type safety. There’s no real const.

There are ways to do all of these things, but they’re more manual, more error prone, and get in the way of doing the actual work you’re trying to do.
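
For illustration, the hand-rolled substitute for a slice is just a pointer/length pair; a minimal sketch with hypothetical names, and with the obvious caveat that nothing checks the length for you:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        const int *ptr;
        size_t     len;
    } IntSlice;

    static long sum(IntSlice s) {
        long total = 0;
        for (size_t i = 0; i < s.len; i++)   /* nothing verifies s.len is honest */
            total += s.ptr[i];
        return total;
    }

    int main(void) {
        int data[] = { 1, 2, 3, 4 };
        IntSlice s = { data, sizeof data / sizeof data[0] };
        printf("%ld\n", sum(s));             /* prints 10 */
        return 0;
    }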


That describes the basic feature set of C++ very well.

People complain about C++ because it can do much more. Yeah. But there is no need to use template metaprogramming, multiple inheritance, and other special features. Actually, I think programmers shouldn't use them in production code.

Recently some projects switched their compiler to g++ and use only the basic subset of C++. Makes sense? Maybe Herb Sutter's "C++2" will evolve; its main features are safety and simplicity.

I’m fine with both C and C++. They do a good job. AddressSanitizer is a game changer! But I want more.


Yeah, kinda:

> I would love to have some sort of “span” or “slice” primitive.

This was recently added!

> I would love language level tagged unions rather than writing them out each time.

There is std::variant, but it’s not as nice as a language-provided solution, both to use and in the generated code.

> Pattern matching isn’t exactly a type system feature, but relies on it to work, and would make those more useful.

This is not in C++, and my understanding is that the proposals are nowhere near close.

> The lack of destructors means a lot of manual handling of errors.

The huge C++ feature. Yes, a clear win. I think a version of RAII closer to Rust than to C++ would make more sense for C, though.

> The lack of generics makes it difficult to write generic data structures without losing type safety.

Yep, absolutely.

> There’s no real const.

There isn’t in C++ either.


C++20 std::span is an example of how little WG21 actually cares about safety, correctness or other principles of software engineering.

The slice type you actually want has nice safety properties. The slice type proposed to WG21 about five years ago as std::span compromises a little bit, knowing they'd object to good engineering here, and it offers an unsafe way to access the slice as well as safe access. The committee voted to strip out safe access and specifically rejected efforts to ensure there was a sane way to use this type.

I assume Jarrad Waterloo's proposal to fix std::span in C++ 26 will get traction because it's now embarrassing for the project but there's no reason we should pretend this was some mere oversight. C++ is less safe than it should be on purpose.


I get similarly frustrated at the implementation of operator-> and operator* in std::optional. In what world would I want to provide accessors with no more safety than a raw pointer?

Edit: I was looking it up just to make sure I remembered it correctly, and there’s been a recent change that makes it worse. The operator-> and operator* on std::optional are now marked noexcept, so implementations are forbidden from making the accessors safe.

https://cplusplus.github.io/LWG/issue2762


That's what I've heard, but since I'm not that in the weeds with that whole process, I didn't want to comment on it at that level.

It's struck me as being similar to operator* on Optional. I get why they did it. It makes sense. It's also unfortunate. I'm glad I'm not on that committee.


The need to express polymorphism manually in C, the need to remember to check errors returned by functions (instead of the monadic style of computation), the need to define closures' environment structures (those that are passed via something like `void *arg`), etc.

All these things are pretty tedious. And all these things are what other languages are doing under the hood. The difference is that C gives you a choice of _how_ to do (more) things; other languages do not.
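
To make the closure point concrete, here is a minimal sketch (hypothetical names) of the `void *arg` pattern described above: the "captured environment" is a struct the caller threads through by hand.

    #include <stdio.h>

    typedef void (*Callback)(int value, void *env);

    static void for_each(const int *xs, int n, Callback cb, void *env) {
        for (int i = 0; i < n; i++)
            cb(xs[i], env);             /* the library threads env through untyped */
    }

    struct SumEnv { long total; };      /* the hand-written "environment" */

    static void add_to_sum(int value, void *env) {
        struct SumEnv *e = env;         /* the caller must get this cast right */
        e->total += value;
    }

    int main(void) {
        int xs[] = { 1, 2, 3 };
        struct SumEnv e = { 0 };
        for_each(xs, 3, add_to_sum, &e);
        printf("%ld\n", e.total);       /* prints 6 */
        return 0;
    }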


> The need to express polymorphism manually in C

Or, don't do polymorphism in C. It's not a mandatory technique. I love C, but I'd argue that if you're doing advanced OOP, you should use a more appropriate language.


Sum types. In C they can be constructed out of unions and structs with lots of checks (and possibility for error) but it's painful.

There's really no excuse for C not having them, other than C being designed before algebraic types became mainstream. They make certain programming errors impossible, particularly with null values and misusing a union with the wrong type. It is possible to compile them as a zero-cost abstraction, and essentially all recent languages aimed at the same domain as C (Rust, D, Go, Zig, Hare, etc.) include them.
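
A minimal sketch (hypothetical names) of the union-plus-tag encoding described above; note that nothing stops code from reading the wrong member, which is exactly the class of error a language-level sum type rules out:

    #include <stdio.h>

    typedef struct {
        enum { VAL_INT, VAL_STR } tag;
        union {
            int         i;
            const char *s;
        } as;
    } Value;

    static void print_value(const Value *v) {
        switch (v->tag) {               /* every use site must check the tag */
        case VAL_INT: printf("%d\n", v->as.i); break;
        case VAL_STR: printf("%s\n", v->as.s); break;
        }
    }

    int main(void) {
        Value a = { .tag = VAL_INT, .as.i = 42 };
        Value b = { .tag = VAL_STR, .as.s = "hello" };
        print_value(&a);                /* prints 42 */
        print_value(&b);                /* prints hello */
        return 0;
    }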


Go does NOT have sum types. I'm not sure Zig technically does, but it's close enough.


Inability to specify non-nullable pointers, or truly immutable data, or ownership, or thread-safety of objects on the type system level means I have to remember these unwritten invariants in every case, and/or code defensively.

Libraries can't specify these things in a compiler-enforced way either, so they remain "RTFM" problems for programmers to deal with manually.


> the C type system

The what now?

C has a type system in the same sort of way that the United States has socialism.


A strong type system, weakly checked.

I mean, the type system it's only weakly checking still isn't as strong or powerful as I'd like, but it is there. Unlike your CPU which doesn't care, C agrees that in principle an integer, a pointer and a floating point value are in fact different kinds of thing.

I think Option<!> is excellent, C thinks I should just write a comment in the program instead, these are big differences but they aren't so fundamental as the difference between US corporate welfare and socialism.


> C agrees that in principle an integer, a pointer and a floating point value are in fact different kinds of thing.

Sure, but that's kiddie-pool, 1970s-level stuff compared to modern type systems.

No business in their right minds is saying, in 2023, let's develop a new system in C. If they are, it's because they have people stuck decades in the past who are arguing for it.

(Edit: or, of course, they need to integrate with a C system, the number of which "is too damn high".)


I think it's work either way. You either spend brain-time on memorizing the intricacies of a lot of prebuilt tools and how/when one might apply them (modern C++), or you dispense with the many tools and spend your brain-time on memorizing relatively few, but with more effort and detail required in their application (C). I prefer the latter, but as you say, it's probably linked to personality to some extent, and quite "baked in".


> You either spend brain-time on memorizing the intricacies of a lot of prebuilt tools

I suppose another piece of evidence about how this is a personality thing is that I hate having to learn specialty tools. They're limiting in that they typically only exist on certain platforms, and they're often specific to certain languages, so knowing those tools often doesn't help me generally. (Libraries and the equivalent count as tools here, too.)

So I make it a point to not grow a dependency on them even when I use them. I very much prefer to focus on skills that I can leverage widely.


Sounds a sensible policy to me!


>I have to work more slowly, cautiously, and remember things

I think that should be a good thing.


It depends on what I’m being slow and cautious about. Imagine a programming language that is identical to C, but every prime-numbered line is ignored. In such a language, I would be cautious to add or remove lines, slowly and cautiously remembering which line numbers to avoid. It wouldn’t provide any benefit, as I’d be paying attention to line numbers instead of paying attention to null checks, or integer overflow.

In the same way, C imposes an extra burden compared to other languages. If I declare a std::vector in C++, or a Vec in Rust, it will be deallocated when the scope is exited. But in C, every malloc() requires a call to free(). It requires extra effort for no benefit.
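
For concreteness, the usual C discipline for pairing every malloc() with a free() on every exit path is the goto-cleanup pattern; a minimal sketch (hypothetical function, using POSIX strdup):

    #include <stdlib.h>
    #include <string.h>

    /* Concatenates copies of a and b; returns NULL on any failure. */
    static char *join(const char *a, const char *b) {
        char *copy_a = NULL, *copy_b = NULL, *joined = NULL;

        copy_a = strdup(a);
        if (!copy_a) goto done;
        copy_b = strdup(b);
        if (!copy_b) goto done;

        joined = malloc(strlen(copy_a) + strlen(copy_b) + 1);
        if (joined) {
            strcpy(joined, copy_a);
            strcat(joined, copy_b);
        }

    done:                               /* single cleanup path; free(NULL) is a no-op */
        free(copy_a);
        free(copy_b);
        return joined;
    }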


> It requires extra effort for no benefit.

There is a benefit, although it may not be useful for your use case. In that situation, I agree with you -- use a different language.

I find it interesting that so many comments here criticizing C (and a couple of other languages) seem to be taking the stance that "if it isn't good for every kind of task, it isn't good for any kind of task".

Every language both rocks and sucks, depending on what you're doing with it. There is no One Ring to Rule Them All. That's why we have so many different languages -- we should be choosing the one that fits the job at hand.

This is why I think holy wars over languages are always ridiculous. Any given language may be a terrible choice for a particular job. That's not a fault of the language, it's just a sign that language is the wrong tool. Use a different tool.


This is a far better way of putting what I tried to say, thank you.


There is absolutely a benefit, but it doesn't sound like one to you: it explicitly requires you to know that you're actually doing something with memory, which rapidly deteriorates as a concept for developers once they no longer have to be cognizant of the fact that they are indeed actually using it.

I prefer a world where people actually think about the resources they're using.


But does that benefit actually exist? The approach of languages like C++ or Rust isn't to hide memory in the way a GC'd environment like Java or Python does (although C++ does have historical baggage with its copy constructors)--this feels to me like it's the same problem I mentioned in my first comment, that you're focused on contrasting C with managed languages and ignoring the space of non-managed languages. And, anecdotally, from what I've seen perusing various Rust crates, my sense is that too many Rust programmers actually spend too much time worrying about their memory usage, designing extensive and cumbersome zero-copy interfaces when, no really, you should just copy that 5-byte string to your own storage.

I don't see how having to remember to always add in calls to free, with the cost of using more memory if you forget to do so, is really supposed to help you minimize your memory usage. That's the main difference we're talking about here...


Remembering to use free() is a lot easier than remembering the inner workings of each and every destructor of each and every object you use.

Will destroying this window free the memory for this font instance? Will destroying this DOCX processing object close the file? Who knows?

But when the rule is just to clean up after yourself, it's a lot easier to remember.


In C, declaration != heap allocation. And this is a very important detail because implementing something like "deallocation when the scope is exited" adds complexity to the language; more things and more rules you now have to remember.
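
A small illustration of that distinction:

    #include <stdlib.h>

    void example(void) {
        int on_stack[16];               /* declaration: automatic storage, released
                                           when the scope exits, no free() needed */
        int *on_heap = malloc(16 * sizeof *on_heap);   /* heap allocation: lives on... */
        if (on_heap)
            free(on_heap);              /* ...until explicitly freed */
    }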


Humans build tools to help us achieve things. No good comes from making me manually remember things a machine could easily check for me.

I doubt you crafted the HTTP call to post this comment by hand, even if you could. I could. I don't want to. That's why tools exist.


> C is better because it doesn't try to program for you, it isn't clever or convenient, and it doesn't have any ideas about how you structure or solve a particular problem

Why is that better though? I personally appreciate it when cleverness and convenience make my life easier, my programs less buggy and less likely to end up in a CVE, or point me toward better structure.

> It also turns out that if you remove all the fancy abstraction and write everything out, the code becomes very maintainable and easy to manage

History shows that this isn't true though, not in all cases.


> Why is that better though?

Some people think so. Hence some were meant for C.

> History shows that this isn't true though, not in all cases

Not in all cases? I had no idea it was possible to write bad code in C, I must immediately switch to language where all code is always perfect no matter what the programmer does.

History does show that most of the largest, longest-running, most depended-on open source projects like Linux, Apache, Git, MySQL, CPython, OpenSSL, GCC, and curl are written in C, so it's hard to argue that the language isn't maintainable.


The oxidizing of the Linux kernel has already begun, many of the core tools have been rewritten in Rust, large segments of Firefox are written in Rust, and Chromium is getting oxidized [1].

I don't think this is a sound argument because of survivorship bias.

[1] https://security.googleblog.com/2023/01/supporting-use-of-ru...


>> History does show that most of the largest, longest-running, most depended-on open source projects like Linux, Apache, Git, MySQL, CPython, OpenSSL, GCC, and curl are written in C, so it's hard to argue that the language isn't maintainable.

> I don't think this is a sound argument because of survivorship bias.

The argument is that those projects prove large C programs/systems are maintainable. 'Survivorship bias' doesn't enter into it; they are large and have been well maintained.

Edit: I remember a time when "correlation is not causation" was the endlessly-repeated phrase. The more things change, the more they stay the same.


> The argument is that those projects prove large C programs/systems are maintainable. 'Survivorship bias' doesn't enter into it; they are large and have been well maintained.

It shows that large C programs / systems can be maintainable. My claim is that the projects became maintainable out of necessity, that is, despite C, not because of C.

Had C been opinionated about certain things and enforced maintainability out of the box to some degree, then and only then could you claim that C is maintainable, because that would speak to the general case.

On the other hand, C is not opinionated about things, so you must enforce maintainability and high standards to get around all the footguns. This is what makes it unwieldy and why Rust is so loved.

This plays greatly into the survivorship bias. You claim that C is maintainable because of all the large C projects that were forced to become maintainable. Had they not been maintainable they would not be as large or survive the test of time, and thus you wouldn't be able to cite them as evidence to your claim.

In contrast, the oxidification of a lot of software and the rise of "better C"s that have picked up steam show that there are languages that are more maintainable and arguably better approaches for most software, as they avoid all the footguns.

NASA has strict requirements to make it maintainable, e.g. no heap alloc, so when you force yourself to not use language features, it appears that those features make things worse for you.

Edit:

Anyone care to inform me how my response falls into "low standards" for HN, so much so that I got flagged and cannot comment?


Funny that you would bring up survivorship bias. I tend to think the opposite: if C is so bad, why do only C programs survive? Its issues are more visible because it is so successful, and so much of the world uses C.

One thing I have considered is the importance of the "fun factor" of a language. C is seen as a "fun" language to program in by many C programmers, whereas Rust is seen more as the "correct" and "responsible" language to use by Rust users. Are Rust developers going to find enough joy in maintaining large open source projects for decades? Time will tell.


> If C is so bad, why do only C programs survive?

The short answer is because you're ignoring all the programs that survive that aren't C. The vast majority of software I use on a day-to-day basis is in fact not written in C, but in C++--and as a software developer, that includes things like my compiler (gcc/g++), debugger (gdb), build system (cmake, ninja). I would guess that the oldest libraries still in common use are written in Fortran, not C--I've definitely seen Fortran code with comments that predate C in the first place.


I didn’t make a claim about C being “so bad”, because it really isn’t “so bad”.

My only claim is with respect to maintenance; namely that C in and of itself is not a language that enforces maintenance and good practices out of the box.

The chain of causality is the opposite. Good projects survive because of good practices, not because of C, and they’d have survived in any other language; just like all other software that has survived.

That’s the only claim that I am making. That, and that compiler enforced soundness makes software more enjoyable to build.


>Why is that better though?

Doesn't have to be better, preferable is enough. Plus, it is more flexible...


I agree it is more flexible and might be more preferable to some, but it is also more dangerous, and I believe it is harder to maintain as well.

Since the comment I replied to specifically said "better", that's what I directed my reply at.


> C is better because it doesn't try to program for you, it isn't clever or convenient, and it doesn't have any ideas about how you structure or solve a particular problem.

Oh, C very much does have ideas. One of the big problems I have with C is that it has a surprisingly limited feature set that makes it difficult or impossible to express certain programming styles. Want to describe a function that returns multiple values? Well, you can't really do that (especially if you want this to mean "use two registers"). How about using unwind to express exceptional control flow? Can't do that either. Or if you want a function with multiple entry points? Or coroutines? C's fundamental inability to express certain kinds of control flow [1] greatly limits your ability to structure your code in ways that might make the code clearer.

> It also turns out that if you remove all the fancy abstraction and write everything out, the code becomes very maintainable and easy to manage.

I think experience has shown that this is not the case. The closest kernel of truth you might find is that if writing code requires a certain amount of tedium, that tends to limit the scale you can grow a codebase to, and size is one of the largest drivers of complexity. While it's definitely the case that abstraction can decrease maintainability, it is a mistake to think that it necessarily does so. Indeed, abstraction is often necessary to make code more maintainable. For example, when I design APIs, I think very hard about how to design it in such a way that it's impossible to misuse the API. So I make my APIs distinguish between UTF-8 strings and Latin-1 strings, so you can't be confused as to what's in them, or even distinguish between a row number and a column number.

The other thing to bring up is that we know quite well that humans are pretty bad at doing tedious, repetitive tasks with high accuracy. And computers are pretty good at that! What this suggests is that languages should be designed so that things that are tedious and repetitive--such as propagating errors up a stack, or cleaning up resources on failure--can be done by the compiler and not the programmer. As we've seen with things like gotofail, C is a language which lacks features that enable work to be pushed onto the compiler and not the programmer, and code written in C suffers as a result of that.

[1] And there's even more exotic kinds that I could get into--this is merely the list of features available in languages you've probably heard of, maybe even used!


What's the difference between a function with multiple entry points and several very similar functions? Sharing of instructions for a more compact image? That could be a compiler optimization.

A workaround would be to put most of the body into a macro which is expanded into multiple functions that have alternative entry means.

C returns multiple values via structs; e.g. see the standard library functions div and ldiv.

Coroutines can be hacked in C. I've hacked delimited continuations in C; they work by copying sections of the stack out to a heap-allocated object, which is restored to the current stack top when the continuation is resumed.

Hacking coroutines in C was a popular sport in the 1980's and 90's.

Exception handling can be implemented in C. I wrote a library for that a quarter century ago; it is used in Wireshark. More recently I created a Lisp language which has exception handling; it's rooted in the C one.
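
Such libraries are typically built on setjmp/longjmp; a minimal sketch of the core mechanism (real libraries add handler nesting and resource cleanup on unwind):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf handler;

    static void might_fail(int fail) {
        if (fail)
            longjmp(handler, 1);        /* "throw": unwind back to the setjmp site */
        printf("succeeded\n");
    }

    int main(void) {
        if (setjmp(handler) == 0) {     /* "try": returns 0 on the initial call */
            might_fail(1);
        } else {                        /* "catch": reached only via longjmp */
            printf("caught an error\n");
        }
        return 0;
    }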


>Or if you want a function with multiple entry points?

`entry` was a reserved word in K&R, but not implemented.


> It also turns out that if you remove all the fancy abstraction and write everything out, the code becomes very maintainable and easy to manage.

I was with you on everything until this part. I just don't see how reducing soundness via a weaker type system, and needing to explicitly manage memory vs e.g. Rust, unique_ptr, shared_ptr, and so on makes things easier to maintain. I also don't see how the absence of generics helps here either.


This.

Properly made abstractions are actually more maintainable and easier to manage. For example, take anybody learning programming and writing to a file: if you write your code using Ruby, it will be a lot safer and more maintainable than the C counterpart, even though the latter is more powerful.
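
For contrast, a sketch of what the C counterpart involves: where Ruby's one-line File.write handles opening, writing, and cleanup, every step in C can fail and must be checked by hand (hypothetical function name):

    #include <stdio.h>

    int write_greeting(const char *path) {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;                  /* open failed */
        if (fputs("hello\n", f) == EOF) {
            fclose(f);
            return -1;                  /* write failed */
        }
        if (fclose(f) == EOF)
            return -1;                  /* flush/close failed */
        return 0;
    }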


Abstractions hide details, and you'll likely need to interact with something the abstraction hides eventually. So then you end up doing one of two things:

1. Learning every detail of the abstraction and the thing it's abstracting, fighting it until you persevere and get it to do what you want

2. Write something from scratch that does exactly what you need

Going the #2 route from the start isn't such a bad idea, and has a lot of advantages IMO.

The hard thing about all this is that abstractions are beautiful at first. It's not until you're far into a project, or doing something unusual that it all falls apart.


C has plenty of abstractions. Are you saying that the set of abstractions provided by C is optimal or nearly so?


No, sorry, I don't know C well.

I was just advocating against abstractions in general. Using primitive types and designing around the data – not higher-level language abstractions – is a nice approach that doesn't get enough attention. Instead conferences are filled with talks about how some new abstraction is the silver bullet for all your problems.

I just wanted to challenge the "more and higher abstractions are good" narrative, since using them has eaten up a lot of my time and I hate to see other people repeat the mistake. Obviously every abstraction is different and unique in its value.


> I was just advocating against abstractions in general

No, you are advocating against a particular kind of abstraction, and dismissively pretending that most abstractions are bad. Otherwise, you wouldn't be using high-level programming languages and would just stick to assembly or punch cards instead. You shouldn't even be interfacing with the operating system; write your own OS every time you need to do something with the computer. Also, since you don't like abstractions, please also avoid using higher maths or any kind of formalized system that builds upon layers and layers of abstractions. Define your own axiomatic system from first principles every time you need to reason about anything. While you are at it, consider moving out of civilization, and avoid any kind of recent technology such as computers, as they are built on the idea of "more and higher abstractions".

I'm being absurd, but I hope you get the point.


I think you have a misunderstanding of what an abstraction is. An abstraction is a simplification, something that hides the details to provide a simple interface to access.

Data Types merely pack data and provide soundness in how you process it. You can pack the data as you want, show everything and be fine.

OOP Languages pack objects and use them to build abstractions, that's it. A strong type system doesn't mean you need to hide implementation details and build babushkas of abstractions. It simply means that the type system reduces the number of programs you can write by making sure your code adheres to some soundness checks.


Uh oh I wrote C code and just multiplied meters by yards and now the program exploded help


I think it's high time to bury the myth of the "bondage and discipline language".

Study after study has shown that, comparing development of the same application in C vs. a language with a very strong type system such as Ada, the application in the language with the very strong type system costs less to develop and has far fewer errors.

The fact that C "gets out of your hair", i.e., doesn't provide checking for certain kinds of correctness, is an anti-feature.


i haven't come across a single language that would limit me in how i structure my code. that is, i didn't feel limited. whether it was python, ruby, smalltalk or even lisp. heck, once i translated a program from pike to common lisp. the structure of both programs was identical despite everyone claiming that lisp is so much different. it can be, but it doesn't force you to be.

> most non-C programmers can read it and have a rough idea of what is going on

that's because most programmers can read most other languages and have an idea what's going on. that's not at all a special feature of C.


> i haven't come across a single language that would limit me in how i structure my code

Rust is one of the languages that limits how the code and data can be structured. Often it's either the Rust way, or it's going to be awkward or not even compile (e.g. it doesn't like doubly-linked lists, structs containing references to their own data, mutable getters, and a few other common patterns). This is often very off-putting to people coming from C who want to do things their way, and end up fighting with the compiler over every line of code.


i haven't had an opportunity yet to work with rust. it is something i want to try. i actually like the idea of being constrained and forced to find different ways to solve a problem because i think i can learn something from that. i guess that haskell should be equally challenging. i actually expected that challenge from lisp too, but i was rather disappointed to discover that this wasn't the case and lisp isn't really that much different, neither was smalltalk. it's a nice language, but message-passing doesn't feel any different from method/function calling, so it's not really a different paradigm that affects the structure of your code.


In Lisp, you can just use what you know and not challenge yourself; you're not constrained or forced. Any such challenge has to be self-imposed.

You can learn enough to be able to work in it, and then just use it like Fortran.


> C is better because it doesn't try to program for you, it isn't clever or convenient, and it doesn't have any ideas about how you structure or solve a particular problem.

This is much of why I prefer to use C (well, I actually prefer to use C++-as-a-better-C, but since I'll omit almost all of what C++ brings, it's still C for all intents and purposes).

Other reasons I prefer C include producing small executables (and, more importantly, being able to effectively control executable size), that it is the most portable language (in the sense that there are C compilers for just about every computer platform in existence), and it is both readable and concise.


> It also turns out that if you remove all the fancy abstraction and write everything out, the code becomes very maintainable and easy to manage.

This is key to the success of C (also the main reason stated by Linus Torvalds for his choice of the language). You can write really complicated code which may take time to unravel/comprehend but nothing is really done behind your back nor hidden (please let's not start with UB :-).


Avoiding the UB footguns does steer you pretty hard if you try to write secure & portable C. For example keep away from signed integers and dynamic memory & pointer arithmetic.
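
For example, signed overflow is undefined behavior in C, so a careful addition has to reject the overflow before it happens; a minimal sketch of the standard idiom:

    #include <limits.h>
    #include <stdbool.h>

    /* Returns false instead of invoking UB when a + b would overflow. */
    bool safe_add(int a, int b, int *out) {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return false;               /* computing a + b here would be undefined */
        *out = a + b;
        return true;
    }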


The historical answer was fairly simple:

1. C was slightly more understandable than existing options at the time

2. The compiled objects could be manually verified/debugged with relative ease

3. There was no mystery/interpretation between the runtime and the hardware, but this isn't technically true anymore for many platforms.

4. The simplified environment demands simplified structural design. Try anything fancy, and C can be very unforgiving to the uninitiated (easier to write bad code). Do a one liner in high level languages instead =)

5. Due to the well-defined simpler syntax of standard C89, the GNU groups were able to bring opensource/free compilers to many new platforms. ARM popularity, Linux, android, and most IT infrastructure owe their existence to these compilers... even if it was just bootstrapping the new environment.

6. All languages have use-case specific tradeoffs. I have worked with over 54 different languages, and have observed the following: if your code still builds and runs on a 5-year-old OS, then it will likely not need refactoring for another 5 years.

As Intel begins to deprecate x86 legacy features, there will be a lot of drivers that will need to be rewritten.

Have a wonderful day =)


Also, at the time, compilers were expensive proprietary things. Ease of compiler implementation was a big deal and enabled a low barrier to entry. The allowance for variation in semantics, like data type sizes, lowered it too.


C was not more understandable than existing options at the time.

You know that cdecl program to explain C declarations? There never was a pdecl for Pascal, or mdecl for Modula or what have you.

C is terse and cryptic; and that's what appealed to some of the programmers.

You could get the machine to do what you want and---bonus!---programmers not versed in C cannot read your code!

This would have been a selling point to assembly language programmers: switching to C would be like a lateral move to a different priesthood.


The Lindy Effect for programming languages.


Very clearly put!


I like Rust, but I wouldn't categorize it as a "better-C". It is a whole different world, and some things work well in Rust but not in C, and vice versa. Many things are flat out uncomfortable to express in Rust, and the number of times you need to bail out and just Rc<RefCell<T>> or Arc<Mutex<T>> random things is a bit high.

Interacting with anything but basic C is also horrible, requiring significant engineering efforts to hide the glue in a binding crate when possible, and sometimes it just doesn't work out. Many C libraries are heavy on linked lists of linked lists with their own lifetimes, and that's Rust's kryptonite.

Zig is definitely a "better-C", but also young and not nearly as widely adopted.


> the number of times you need to bail out and just Rc<RefCell<Box<T>>> or Arc<Mutex<Box<T>>> random things is a bit high.

People keep saying this, but in all the Rust I've written, I've never needed to reach for this. Indeed, from what I've seen in the community, this kind of nesting is often a code smell that maybe you should think about the architecture of your system a bit more.

> Many C libraries are heavy on linked lists of linked lists with their own lifetimes, and that's Rust's kryptonite.

One article that was on here earlier today discussed what's the usual Rust salve to this problem, handles: https://news.ycombinator.com/item?id=36419739. That is to say, instead of saying that everything is a pointer, make an arena (an array) of allocations and use indexes into that array. This is the solution in petgraph, for example.
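
The handle idea is language-neutral; sketched in C for illustration (hypothetical names), a linked list whose links are indices into one backing array rather than pointers:

    #include <stdio.h>

    #define NIL (-1)

    struct Node { int value; int next; };   /* next is an index, not a pointer */

    int main(void) {
        struct Node arena[3] = {
            { 10, 1 },                      /* arena[0] links to arena[1] */
            { 20, 2 },                      /* arena[1] links to arena[2] */
            { 30, NIL },                    /* end of the list */
        };
        for (int i = 0; i != NIL; i = arena[i].next)
            printf("%d\n", arena[i].value); /* prints 10, 20, 30 */
        return 0;
    }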


The issue is not just architecture, it is also what you need to interact with.

In my experience, if you're making practical applications that interact with the outside world where this is not the case, it's because you're using libraries made of the broken dreams of their authors, doing their best to hide all the pain from you.

I tend to end up writing that code for system integration, so I don't get to see a world of pure type system bliss.

> That is to say, instead of saying that everything is a pointer, make an arena (an array) of allocations and use indexes into that array

An index into an arena is the very definition of a pointer, but I get the idea.

Unfortunately, you cannot redefine how to deal with types defined and allocated by C, nor how C should interpret and dereference types allocated by Rust that must integrate into C-esque designs such as being a node in a C-defined linked list.

Teaching existing C to be anything else is not an option, but Rust is not - and is not meant to be - good at acting like C. The glue is not pretty to either C or Rust devs.

Zig is a middle ground that gives up some things from Rust in order to let C paradigms still work, while still having improvements.


(You’d never want the box in either of those situations, as the data is already boxed by the Arc or Rc)


Yeah, got a bit carried away, and was stuck in C-FFI land where things get weird - Box::leak, ManuallyDrop, PhantomPinned wrappers, all that.


> I like Rust, but I wouldn't categorize it as a "better-C".

100% agree. C and Rust are very different languages.


I would like to attempt an explanation. It boils down to learning complexity.

> why C is used in lieu of other "better-C" non-managed languages, such as C++, D, Nim, Rust, or Zig

I am a system admin. I do not earn my income writing code and therefore spend at most a few hours a week programming. I've spent about 1000 hours writing C code in my life. About 200 hours of Golang. Years of Posix Shell. Years of Perl 5. I've had a little exposure to Java and C++ and Haskell. I have read a few examples of Rust.

C++ has a higher complexity than C. C++'s syntax is more powerful, it has an additional paradigm (the C++ template system), and if you mix in Qt you have one more paradigm (the Qt preprocessor). It has a very wide-ranging standard lib (which data structure do you use for lists? vector, deque, ...), augmented by Qt and augmented by Boost. C++ is huge. There is no way I will learn that in my professional life; I simply do not have enough hours of training left. C++ is not a valid successor to C, because its complexity hinders acquiring the language. Rust suffers the same complexity as C++. I will not have enough hours on my learning schedule to acquire a proficient level of Rust.

Golang is nice for me personally. The book "The Go Programming Language" is only double the size of "The C Programming Language", which makes them comparable in complexity. I get stuff in Golang done faster than in C since I find debugging easier.

I have used neither Nim, nor D, nor Zig. All I can tell you is that C is sexy because the language is small and therefore one can acquire it in a lifetime -- without being a full-time programmer.


C is “small”, but the amount of footguns and unnecessary complexity is incredibly high. Where’s a hash set when you need one? Or an automatically growing list? Pretty much any simple task is difficult in C.

I would recommend beginner programmers start with something high level and widely used like Python way before C, even though Python probably has the longer manual. You can learn the basics relatively quickly and be much more productive than in C.


2017, so Zig was very much a baby. Rust was still young.


(2017)

This always gets 0 comments or hundreds of comments. Some previous ones with more than a handful of comments:

* https://news.ycombinator.com/item?id=34640233

* https://news.ycombinator.com/item?id=26300199

* https://news.ycombinator.com/item?id=19736214

* https://news.ycombinator.com/item?id=15179188


Some of those comments are pure hilarity: "C got popular because it turned out to be ideally suited for DOS. C could deal with near/far pointers. No other language could". Yes, C was very good at having two (or even three, "huge" being fully-normalized far pointers) completely differently sized kinds of pointers, without any proprietary extensions, absolutely, and it caused no problems whatsoever because the C memory model indeed presupposes a segmented address space.


Escaped the dupe detector because four months ago there was an extra slash in the url.


I appreciate this kind of discussion. It's important to note C programmers these days effectively avoid the inherent risks of unsafety by deploying solid memory strategies.

For example (self-plug incoming) we're hosting a workshop this summer [0] teaching people to replace a rat's nest of mallocs and frees with an arena-based memory allocator.

These sorts of techniques eliminate the common memory bugs. We don't do C programming the way we did back in the 80's and 90's.

[0] https://handmadecities.com/boston
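
To give the flavor (a generic sketch with a hypothetical API, not the workshop's code): an arena hands out pieces of one big block, and a single call releases every allocation at once.

    #include <stdlib.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base;
        size_t   used, cap;
    } Arena;

    int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = a->base ? cap : 0;
        return a->base ? 0 : -1;
    }

    void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;     /* keep allocations 16-byte aligned */
        if (n > a->cap - a->used)
            return NULL;                /* out of space; a real arena might grow */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* Releases everything allocated from the arena in one call. */
    void arena_destroy(Arena *a) {
        free(a->base);
        a->base = NULL;
        a->used = a->cap = 0;
    }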


Arenas were pervasively used in the 80s, 90s, 2000s, ... and are still widely used

Using malloc and free in the naive way was the exception, not the rule

They can be the cause of memory safety problems in some cases, as well as a partial solution in others


This is at odds with my understanding of how C programming was (typically) done. We might also be defining arena usage differently - that's one way I can reconcile our mismatched outlooks.


My assumption is that @chubot meant that malloc'ing each time you need some memory has been bad practice ever since. Mostly, pre-allocating stuff for whatever you need is the way to go, but that is not far away from writing general-purpose arenas. At least not conceptually. So arenas are nothing new (I guess... I hadn't been around back then).


Indeed, arenas are not a new invention, but to quote a knowledgeable friend who's been around longer than us:

> you can find, for example, spolsky writing about [arenas] in 2003

Game engine, embedded, and OS people certainly knew about them. But this is the crucial point:

> it's possible for many people to know/use them, and also for most people not to know/use them

I grew up on Linux forum boards -- with lots of greybeards writing C -- and I was never once exposed to arenas (or bump / linear allocators.)

In school we wrote programs in C, and professors never challenged the forests of mallocs, despite most bugs stemming from them.


I'd define an arena as the pattern where the arena itself owns N objects. So you free the arena to free all objects.

My first job was at EA working on console games (PS2, GameCube, Xbox, no OS or virtual memory on any of them), and while at the time I was too junior to touch the memory allocators themselves, we were definitely not malloc-ing and freeing all the time.

It was more like you load data for the level in one stage, which creates a ton of data structures in many arrays. Then you enter a loop to draw every frame quickly, and you avoid any allocation in that loop. There were many global variables.

---

Wikipedia calls it a region, zone, arena, area, or memory context, and that seems about right:

https://en.wikipedia.org/wiki/Region-based_memory_management

It describes history from 1967 (before C was invented!) and has some good examples from the 1990's: Apache web server ("pools") and Postgres database ("memory contexts").

I also just looked at these codebases:

https://github.com/mit-pdos/xv6-public (based on code from the 70's)

https://github.com/id-Software/DOOM (1997)

I looked at allocproc() in xv6, and it gives you an object from a fixed global array. This is similar to a lot of C code in the 80's and 90's -- it was essentially "kernel code" in that it didn't have an OS underneath it. Embedded systems didn't run on full-fledged OSes.

DOOM tends to use a lot of what I would call "pools" -- dynamically allocated arrays of objects of a fixed size, and that's basically what I remember from EA.
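
A fixed-size pool in that style might look something like this (a sketch with a hypothetical Thing type, not DOOM's or EA's actual code):

    #include <stddef.h>

    #define MAX_THINGS 256

    typedef struct Thing {
        struct Thing *next_free;  /* intrusive free-list link */
        int payload;
    } Thing;

    static Thing  pool[MAX_THINGS];   /* one fixed array owns all Things */
    static Thing *free_list;

    static void pool_init(void) {
        for (size_t i = 0; i + 1 < MAX_THINGS; i++)
            pool[i].next_free = &pool[i + 1];
        pool[MAX_THINGS - 1].next_free = NULL;
        free_list = &pool[0];
    }

    static Thing *pool_alloc(void) {
        Thing *t = free_list;
        if (t) free_list = t->next_free;
        return t;  /* NULL when the pool is exhausted */
    }

    static void pool_free(Thing *t) {
        t->next_free = free_list;
        free_list = t;
    }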

Though in g_game.c, there is definitely an arena of size 0x20000 called "demobuffer". It's used with a bump allocator.

---

So I'd say

- malloc / free of individual objects was NEVER what C code looked like (aside from toy code in college)

- arena allocators were used, but global fixed-size arrays and dynamic pools were maybe more common.

- arenas are more or less a wash for memory safety. they help you in some ways, but hurt you in others.

The reason C programmers don't malloc/free all the time is for speed, not memory safety. Arenas are still unsafe.

When you free an arena, you have no guarantee there's nothing that points to it anymore.
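
Concretely, something like this compiles without complaint and is a use-after-free (reusing the hypothetical arena helpers sketched above):

    Arena a = arena_new(4096);
    int *x = arena_alloc(&a, sizeof *x);
    *x = 42;
    arena_free(&a);    /* every pointer into the arena now dangles */
    *x = 7;            /* use-after-free: silently corrupts whatever reuses the memory */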

Also, something that shouldn't be underestimated is that arena allocators break tools like ASAN, which hook the malloc()/free() interface. This was underscored to me by writing a garbage collector -- the custom allocator "broke" ASAN, and that was actually a problem:

https://www.oilshell.org/blog/2023/01/garbage-collector.html
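
(If you control the allocator, ASAN does expose manual poisoning hooks in <sanitizer/asan_interface.h> that a custom arena can call when built with -fsanitize=address. A sketch, again using the hypothetical arena above:)

    #include <sanitizer/asan_interface.h>

    /* Poison the whole block up front; unpoison each allocation as it
       is handed out, so ASAN can flag touches of unallocated space. */
    static Arena arena_new_asan(size_t cap) {
        Arena a = arena_new(cap);
        if (a.base) __asan_poison_memory_region(a.base, a.cap);
        return a;
    }

    static void *arena_alloc_asan(Arena *a, size_t n) {
        void *p = arena_alloc(a, n);
        if (p) __asan_unpoison_memory_region(p, n);
        return p;
    }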

If you want memory safety in your C code, you should be using dynamically instrumented allocators (ASAN, Valgrind) and good test coverage. Depending on the app, arenas don't necessarily help, and they can hurt.

An arena is a simple idea -- the problem is more whether that usage pattern actually matches your application, and apps evolve over time.


I agree an arena is a simple idea, though what I really mean is an arena-based memory allocator (the better phrase is arena-based memory strategies, plural).

The speaker who's giving the workshop wrote a popular article "Untangling Lifetimes: The Arena Allocator" [0] detailing what we mean. Search for "Composition With More Complex Allocators" and see whether that clarifies my stance better.

Thanks for the feedback and references, those are always useful.

[0] https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...


Or better yet, don't use dynamic allocation at all! I'm very happy when I'm on a firmware/embedded project with no dynamic allocation; it makes C halfway enjoyable… though extremely boring.


Wow! I'm really interested in the Boston event. I'm a little afraid I may not be experienced enough though. How master are we talking with these masterclasses?


If you are comfortable programming in any systems language and not a complete beginner, this conference is a legit opportunity for growth (and mentorship). There is no elitism or judgment here -- imo you shouldn't miss out!


I'd be curious if there were any studies done on how much effect this kind of approach has on the number or severity of bugs or security issues in a code base.


What is "solid memory"? Is that what I call "contiguous memory"? (Allocating a big block and doing all your memory stuff in that one block, using metadata to know how much of the block is "free"?)


Nice plug! Just signed up


Very interested to hear more, will be registering virtually!


The crux of the argument of this article seems to be that C allows working directly with raw memory, which is necessary for some low-level tasks, such as communication with memory-mapped devices. This is not possible in a purely managed language, due to the additional level of abstraction.

However, the author themselves states that working close to the metal does not imply that undefined behavior is unavoidable:

    In fact, the very “unsafety” of C is based on an unfortunate conflation of the language itself with how it is implemented. Working from first principles, it is not hard to imagine a safe C. As Krishnamurthi and Felleisen [1999] elaborated, safety is about catching errors immediately and cleanly rather than gradually and corruptingly
I would say that what they are describing is essentially what Rust has achieved, or at least what it is meant to do.


While I don't fully disagree with you, I think the author would disagree. This line of argument is also repeated in recent C++ standards papers around "safety"; that is, the argument explicitly says "memory safety != safety, and safety is more important, and therefore focus on memory safety is bad for safety."

I have two responses to this that aren't "back on script."

> it is not hard to imagine

(from your quote)

Sure, we can imagine this language, but it does not exist. Yes in theory maybe if we all did what the author said, C could be that language, but it is simply not today. When people say "C" they mean the C that actually exists, not an alternative implementation that would technically conform to the standard. Pushes have been made in this direction a number of times, from a number of different people, yet the market has spoken.

Second,

> Consider unchecked array accesses. Nowhere does C define that array accesses are unchecked. It just happens that implementations don’t check them. This is an implementation norm, not a fact of the language.

This is just factually incorrect. Let's examine both C89: https://www.open-std.org/JTC1/sc22/wg14/www/docs/n1256.pdf and C99: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

> J.2 Undefined behavior

> An array subscript is out of range, even if an object is apparently accessible with the given subscript (as in the lvalue expression a[1][7] given the declaration int a[4][5]) (6.5.6).

It is literally defined as undefined behavior. While sure, that does mean a particular compiler could implement and document semantics for this, it's by definition not portable, because it is stated as such by the standard.


C is a high-level language that happens to map really well to hardware. Much of UB is UB simply because it is difficult to effectively detect when something goes into a state of UB. Writing to a memory address can be done in one instruction, but figuring out if that address is legal to write to is a slow and complicated process. The C standard lets an implementation ignore that problem, but it also does not stop an implementation from making sure the address is valid before writing to it. Almost no one tries to make such an implementation, because the users have decided they would rather have speed, but anyone can write an ISO-compliant implementation without any UB if that's what they want.
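
To illustrate, here is the kind of check such an implementation could emit before every indexed store -- a hypothetical sketch of the idea only (real checking implementations carry the bounds alongside the pointer, e.g. as "fat pointers"):

    #include <stdio.h>
    #include <stdlib.h>

    static void checked_store(int *base, size_t len, size_t idx, int val) {
        if (idx >= len) {
            fprintf(stderr, "out-of-bounds write: index %zu, length %zu\n", idx, len);
            abort();  /* fail immediately and cleanly instead of corrupting memory */
        }
        base[idx] = val;
    }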


It maps well to hardware from the 1970s and 1980s. It has very little to do with modern hardware, and for that matter modern hardware does some somersaults to present an interface that C "expects".


Modern hardware is designed to map to C rather than the other way around. Turns out people want to buy hardware that can run existing software.


What are the specific differences?


The easiest read on this topic is David Chisnall's 'C Is Not a Low-Level Language' article [1]. It is a good read in general and not all that long. It addresses vectors, out-of-order execution, pipelining, and plenty of other topics along the way.

1 - https://dl.acm.org/doi/10.1145/3212477.3212479


Communication with memory-mapped devices actually works pretty well in other languages. And surprisingly badly in C: when trying to directly use memory-mapped space as the backing store for C objects, compilers have freedoms about the memory accesses, which doesn't mix well with finicky MMIO devices. (There are also platform differences which make this pretty nonportable.) Reads and writes to memory-mapped devices can be represented as, e.g., function calls in other languages (and commonly in C too).
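
For instance, a common C idiom is to funnel every device access through a volatile read/write helper -- a minimal sketch, with a made-up register address:

    #include <stdint.h>

    #define UART0_DR ((uintptr_t)0x101f1000u)  /* hypothetical register address */

    /* volatile forces exactly one real load or store per call, so the
       compiler can't cache, merge, or elide the access. */
    static inline void mmio_write32(uintptr_t addr, uint32_t val) {
        *(volatile uint32_t *)addr = val;
    }

    static inline uint32_t mmio_read32(uintptr_t addr) {
        return *(volatile uint32_t *)addr;
    }

    /* usage: mmio_write32(UART0_DR, 'A'); */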


Any good docs on memory and data structures for C?


The article that introduced me to "Priest with Balloons", by Tiny Ruins.

Based on a true story: https://en.wikipedia.org/wiki/Adelir_Ant%C3%B4nio_de_Carli


I moved away from C to Java, then I returned to C. I tried Python several times, but always returned to C.

C is small, with very few hidden 'gotchas', and it works. No wonder I always come back to C. You could call it the WYSIWYG programming language.


What are the 'gotchas' that Java has but C does not?


Java is a much larger and more complex language than C. Consequently, it is much more fragile. I was one of the ones who thought Java was the best thing since sliced bread back in the 90s ("Write Once, Run Anywhere"), but I found it very difficult to work with at times because you had to jump through hoops to get around those complexities.

And that's not including the bastardisation of Java that occurred when Microsoft tried to bring it down by changing many of the classes to something incompatible. I once got into an online discussion of something or other and couldn't work out why the other guy's code and results differed from mine. I then discovered that while I was using Sun's (official) Java, he was using Microsoft's (different) Java.

Don't get me wrong. Java is a good (overall) language but it's just not the simple and fast language that C is. There is the old joke that C makes it easier to shoot yourself in the foot. That is true, but it is also a very versatile and clean language.


This "communicativity" is a feature only enabled by fixing a certain execution model (roughly "everything is addressable bytes in memory"). Thus any languages that would support "communicativity" would necessarily need to adopt this model explicitly - which would deprive them of some abstractness, and, I think, even probably make them C-like. In other words, there is simply nothing other to do than "creating new worlds" (or creating a C).


I have a trivial, superficial gripe with modern languages. I hate this syntax:

    let variable_name : type = value;
I find it extremely ugly.

I know that people here have disdain for discussions about syntax [1], and I know that once compiled, the specific syntax doesn't matter that much.

But I think that language aesthetics do matter.

People say that languages are "just tools", and the output quality is all that matters. But, personally, I would want the tool that I'm going to be spending a significant amount of time working with to be beautiful and feel good.

And that's one point where I consider C to be superior: aesthetics. If I start a hobby project, I may start in C, because programming in it makes me feel good, even if it's unsafe.

I'm not a language expert, so I don't know if there are good technical reasons for this modern syntax to have replaced C-style initialization, but I think there must be some way to avoid using this modern syntax without losing whatever feature is made possible by it.

So, my point is: We shouldn't discard aesthetic considerations so lightly. Languages are not "just [ugly] tools", they can be a medium of artistic expression to some people.

[1] https://wiki.c2.com/?WadlersLaw


I agree that syntax matters, but I like that syntax so much better. Here are some reasons:

- The variable name comes before the type. When the type comes first, a long type name makes it difficult to scan for the variable. This isn't too bad for local variable declarations in a language like C, but my day job is C++ right now, and the fact that the return type for functions comes first is terrible. It's inevitably enormous, so it makes it bloody impossible to quickly see what the damn function is called.

- The keyword `let` is first, so with syntax highlighting you can easily visually see the structure of the code, and which lines declare variables vs. call functions.

Are there reasons to like the C style

    type variable = value;
besides familiarity, and it being slightly shorter?


There are good technical reasons. That said, I don't expect them to be persuasive, but the main reasons are:

1. It doesn’t require as much context to parse, making tooling easier.

2. It plays nicer with type inference or deduction.

These reasons generally trump aesthetics for new language developers, and so I wouldn’t expect to be seeing the C style in newer languages. Also, not everyone agrees aesthetically; I am the polar opposite of you.


I don't think that more modern variable declaration syntax is used because it allows for other things to be expressed syntactically (although maybe it does, like for type inference), but I think a lot of us just think it _does_ look better. I don't mind the C and friends

    type variable_name = value;
style, but I do like the type coming afterwards. I don't feel like there's a tradeoff; I think it's just better across the board, and I assume a lot of people are in that boat as well, given its popularity.


This is not a modern syntax, it predates C, and this whole debate goes a-a-all the way back to ALGOL and PL/I, which were late-60s languages. And it doesn't really matter if the type is before (ALGOL) or after (PL/I, Pascal) the variable name; what matters is whether it is actually before/after (ALGOL, PL/I, Pascal) the variable name, or around it (C): compare "X ^ARRAY[10] OF ^INT" with "int (x)[10]".


I'm with you until type inference comes into play. For example, I find the proliferation of "auto" in C++ code far uglier than Go's "foo := expression" syntax where most of the time you don't even need to specify a type. Since most modern languages include type inference, I can see why they are moving away from C-like syntax, even though I still prefer it in most cases.


I generally think that C is the right level of abstraction and provides the right model for thinking about programming. However, I don't actually use C because I can work at the same level in other programming languages.

Right now, C++ is my language of choice, but I'm far from happy with it, for obvious reasons. I think my ideal language would be as close to C as possible while adding some quality of life modifications. Off the top of my head, those would include: namespaces, modules, generics, removal of the -> operator, and removal of forward declarations (which would imply order-independent declarations).

Of course, it's easy to imagine such a language, but hard to implement it and be successful in practice. Most attempts probably never get out of the hobby project phase. Those that achieve some success will feel pressure to continue to add features to differentiate themselves from the competition. Maybe I am in too small a niche and there simply isn't a market for what I would like to use.


C23 technically has namespaces for the attribute syntax. Hopefully it will be extended into the main syntax in the future. It's also easy to implement basic generics using either _Generic() or m4 macros.
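
For instance, a minimal _Generic sketch (C11) dispatching an abs-like macro on the argument's static type:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* _Generic selects a function based on the static type of x. */
    #define my_abs(x) _Generic((x), \
        int:    abs,                \
        double: fabs,               \
        float:  fabsf               \
    )(x)

    int main(void) {
        printf("%d %f\n", my_abs(-3), my_abs(-3.5));  /* prints: 3 3.500000 */
        return 0;
    }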


You may enjoy Hare.


I had not heard of Hare. It does look promising. Thanks for the recommendation.


No generics... Zig seems like a better fit.


I had a bad first impression when the Zig compiler refused to compile a file with Windows line endings. For all I know it might be an interesting language, but to me that decision indicates a level of pedantry and hostility to non-Linux environments that doesn't look too promising.


Ah, I didn’t know you were a fellow Windows user. You will find Hare more offensive in this respect: the official compiler does not and will never support Windows. In theory someone could write one, but I doubt it will happen.


That's unfortunate. I'm not a defender of Windows or Microsoft; I think they're terrible. But if you want to make video games or any other software primarily for desktop computers, Windows needs to be a first class citizen.


Man, I write all my programs in pretty much C. I'll use a class or two, but that's as far as it goes. I call myself a C++ programmer, but really I don't touch 95% of it with a barge-pole. It just seems like a load of unnecessary, complicated crap that will surely go wrong and confuse me.


I've always thought of assembly language as the arithmetic of programming. You can do anything programming-wise with assembly (ignoring micro-instructions). I don't know what the C equivalent in math would be but can't see it going away for the same reason eliminating the math equivalent wouldn't make sense.


I'm working on some bare metal ARM32 firmware, and I can't find a language that comes close to C. I can get it to generate pretty much any assembly I can think of. Other languages make it a pain to write memory unsafe code.


C and C++ are great. Definitely worth learning, sometimes worth using.

Probably best avoided for the first few years of serious engineering as a beginner.


Sentimental.



