
Yeah, I started noticing huge flaws in Apple's Music app, which I told them about and mostly work around, but... are they because Apple software is written in C? C++, Objective-C, same thing. Like, can C code ever really be airtight?



I'd say lack of QA. Apple Music (especially on macOS) is EXTREMELY buggy, unresponsive, slow, and feels like a mess to use. Same for iMessage.

Other apps are written using the same stack and have almost no bugs. I wouldn't blame the language here, but the teams working on them (or, more likely, their managers trying to hit unrealistic deadlines).


No, I would not blame the teams or their managers. You can't just blame a manager you've never met simply because he's a manager; we're talking about the manager of Apple Music, and they could very well be capable and well-minded, likely personally capable of coding. So let me give you another example in the same vein as C, where everybody uses a technology that is terrible, questions it only at the outset, and then just accepts it: keyboard layouts.

QWERTY is obsolete. It wasn't designed at random (it would have aged better if it had been); it was designed to slow down typing so typewriters wouldn't jam. And secondly, so that salesmen could type "TYPEWRITER" with just the top row, so the poor woman he was selling to didn't realize how punishing typewriters were. That's how you end up with millions of people hunting and pecking, or getting stuck for months trying to learn touch typing for real with exercises like "sad fad dad." It takes weeks before you can type "the". The network effects of keyboard layouts are just next-level. Peter Thiel talks about this in "Zero to One" as an example of a technology that is objectively inferior but still widely used because it's so hard to switch, illustrating the power of network effects.

I for one did switch, and it was hard because I could type neither QWERTY nor Dvorak for a month. But after that Dvorak came easily. You don't need an app to learn to type; you just learn to type by typing, slowly at first, then very soon, very fast.

So with regard to C, I would say it is not objectively inferior like QWERTY became; it's actually pretty well designed, and it does produce fast code. I use it myself sometimes; it's not a bad language for simple algorithm prototypes of under 60 lines. But it's based to a huge degree on characters: the difference between code that works and code that fails can come down to a single character. C is about characters, pretty much any character, and there's no margin for error. Whereas with Lisp, you have parentheses for everything, you have an interpreter but you can also compile it, and I am actually able to trust Lisp in a way that is out of the question with C. There are just so incredibly many gotchas and pitfalls, buffer overflows, it's endless; you have to really know what you're doing if you want to do stunts with pointers, memory, and void types.
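
To make the "no margin for error" point concrete, here's a made-up snippet (hypothetical, nothing to do with Apple's actual code). It compiles without complaint, yet overruns a buffer by a single byte:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *name = "Apple Music";  /* 11 characters plus the terminating '\0' = 12 bytes */
        char buf[11];                      /* one byte too small; the compiler accepts it anyway */

        strcpy(buf, name);                 /* undefined behavior: 12 bytes written into an 11-byte buffer */
        printf("%s\n", buf);               /* may appear to work, crash, or silently corrupt memory */
        return 0;
    }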

I guess the bottom line is, if you want your code to be perfect and you write it in C, you can't delegate to the language, you yourself have to code that code perfectly in the human capacity of perfection.


> it was designed to slow down typing so typewriters wouldn't jam

I'm not quite sure that this is what actually happened: https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433...


> you can't delegate to the language, you yourself have to code that code perfectly in the human capacity of perfection.

To clarify what I mean by this: it's not realistic to expect large C codebases to be perfect. Bug-free, with no exploits. Perfect. Same thing.


You're being downvoted to oblivion, even though your general point (rephrased, C is an unforgiving language and safer languages are a Good Thing) is pretty mainstream. Here are my guesses why:

1. You start off by saying you can't just blame the team or their manager if you're dissatisfied with a product, then, instead of explaining why the people who made a piece of software aren't responsible for its faults, you go off on a long non-sequitur about QWERTY.

2. Your rant on QWERTY just isn't true. You namedrop Peter Thiel and his book, so if he's your source then he's wrong too. QWERTY is not terrible, not obsolete, it was not designed to slow down typists, and there's no record of salesmen typing "typewriter quote" with just the top row. It's true that it was designed to switch common letters between the left and right hands, but that actually speeds up typing. It also does not take weeks for someone to type "the"; and if you mean learning touch-typing, I don't know of any study that claims that alternative keyboard layouts are faster to learn.

The various alternative keyboard layouts (Dvorak, Colemak, Workman) definitely have their advantages and can be considered better than QWERTY, sure; people have estimated that they can be up to ~30% faster, but realistically, people report increasing their typing speeds by 5-10%, or at least the ones who had previously tried to maximize their typing speed do... If learning a new layout is the first time they've put effort into that skill, they'll obviously improve more. It's probably also true that these layouts are more efficient in the sense that they require moving the fingers less, reducing the risk of RSI (though you'd really want an ergonomic keyboard if that's a concern).

QWERTY is still used because it's not terrible; it's good enough. You can type faster than you can think with it, and for most people that's all they want. There's nothing wrong with any of the alternative layouts, and I agree that they're better in some respects, but they're not orders of magnitude better, as claimed.

3. Your opinions about C are asinine.

"not objectively inferior like QWERTY" - So, is C good or not? We're talking about memory safety, C provides literally none. Is this not objectively inferior? Now, I would argue that it's not, it's an engineering trade-off that one can make, trading safety for an abstract machine that's similar to the underlying metal, manual control over memory, etc. But you're not making that point, you're just saying that it's actually good before going on to explain that it's hard to use safely, leaving your readers confused as to what you're trying to argue.

"not a bad language for simple algorithm prototypes of under 60 lines" - It's difficult to use C in this way because the standard library is rather bare. If my algorithm needs any sort of non-trivial data-structure I'll have to write it myself, which would make it over 60 lines, or find and use an external library. If I don't have all that work already completed from previous projects, or know that you'll eventually need it in C for some reason, I generally won't reach for C... I'll use a scripting language, or perhaps even C++. Additionally, the places C is commonly used for its strengths (and where it has begun being challenged by a maturing Rust) are the systems programming and embedded spaces, so claiming C is only good for 60-line prototypes is just weird.

"C is about characters" - Um, most computer languages are "about characters". There are some visual languages, but I don't think you're comparing C to Scratch here... You can misplace a parentheses with Lisp or make any number of errors that are syntactically correct yet semantically wrong and you'll have errors too, just like in C. Now, most lisps give you a garbage collector and are more strongly typed than C, for instance, features which prevent entire categories of bugs, making those lisps safer.

4. You kinda lost the point there. You started by saying that the people who wrote Apple Music "could very well be capable and well-minded, likely personally capable of coding", i.e., they're good at what they do. Fine, let's assume that. Then, your bottom line is that in C "you have to really know what you're doing" and "you yourself have to code that code perfectly in the human capacity of perfection". What's missing here is a line explaining that humans aren't perfect, that even very capable programmers make mistakes all the time, and that having the compiler catch errors would actually be very nice. Then it would flow from your initial points that these are actually fine engineers, but they were hamstrung by C.

And the tangent on QWERTY just did not help at all.


> So, is C good or not? We're talking about memory safety, C provides literally none. Is this not objectively inferior? Now, I would argue that it's not, it's an engineering trade-off that one can make, trading safety for an abstract machine that's similar to the underlying metal, manual control over memory, etc.

One might make the argument that Oberon, with its SYSTEM module, provides the same memory-control abilities but few of the disabilities of C.

> so claiming C is only good for 60-line prototypes is just weird.

That seems like a misrepresentation of the claim above?

> Um, most computer languages are "about characters". There are some visual languages, but I don't think you're comparing C to Scratch here... You can misplace a parenthesis in Lisp or make any number of errors that are syntactically correct yet semantically wrong and you'll have errors too, just like in C.

Well, not really. Lisp is actually about trees of objects. The evaluator doesn't even understand sequences of characters. That you can enter it as a sequence of characters is purely incidental; there have even been structured syntactic tree editors (sadly they faded away because they were proprietary and expensive at the time).


> [...] Oberon [...]

Sure, and that would be a good argument; there are several interesting languages out there that do various things better than C. I'm not intimately familiar with the Wirth languages, but I thought Oberon provided garbage collection?

> [...] misrepresentation [...]

Fine, they never claimed it was only good for that, but I still find it weird to claim that "it's fine, it's great for X" where X is a thing that the language is not particularly good at, while ignoring Y, the thing it's well known for.

> [...] trees of objects [...]

I just don't think that "about characters" or "about trees of objects" is an interesting way to differentiate between programming languages, and I think that this discussion is actually conflating two different properties. The first is how the source code is represented and edited. It's almost always a plain text file. Some languages have variants on the plain text file: SQL stored procedures are stored in the RDBMS, Smalltalk stores source code in a live environment image. There are other approaches, such as visual editing as in Scratch, or Projectional Editing (https://martinfowler.com/bliki/ProjectionalEditing.html) as in... um... Cedalion? I don't actually know any well-known ones.

The other property is how the language internally represents its own code. Sure, Lisp has the neat property that its code is data that it can manipulate, but other languages represent their code as (abstract) syntax trees, too. Basically every compiler or interpreter for a 3rd-generation language or above, i.e., anything higher-level than assembly language, parses source code the same way: tokenization, then parsing into an abstract syntax tree using either hand-written recursive descent or a parser generator (Bison, Yacc, ANTLR, parser combinators, etc.). So your point that the Lisp evaluator doesn't even understand sequences of characters is true for any compiler, they all operate on the AST.
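
(If it helps, here's a tiny hand-built sketch in C, purely illustrative, of what "operating on the AST" means: the tree a parser would produce for 1 + 2 * 3, with an evaluator that walks the tree and never sees a single source character.)

    #include <stdio.h>

    /* One node of the syntax tree: either a number leaf or an operator. */
    typedef struct Node {
        char op;                    /* '+' or '*' for interior nodes, 0 for leaves */
        int value;                  /* used only by leaves */
        struct Node *left, *right;
    } Node;

    static int eval(const Node *n) {
        if (n->op == 0) return n->value;            /* leaf: a number */
        int l = eval(n->left), r = eval(n->right);
        return n->op == '+' ? l + r : l * r;        /* interior: apply the operator */
    }

    int main(void) {
        /* The tree a parser would build for "1 + 2 * 3": '+' at the root,
           '*' lower down because it binds tighter. */
        Node one   = {0, 1, NULL, NULL};
        Node two   = {0, 2, NULL, NULL};
        Node three = {0, 3, NULL, NULL};
        Node mul   = {'*', 0, &two, &three};
        Node plus  = {'+', 0, &one, &mul};
        printf("%d\n", eval(&plus));                /* prints 7 */
        return 0;
    }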

I think that there's a point to be made somewhere in here that one language's syntax can be more error-prone than another's, but that wasn't the argument being made... Not that I understood, anyway.


> So your point that the Lisp evaluator doesn't even understand sequences of characters is true for any compiler, they all operate on the AST.

Lisp does not really operate on an AST. It operates on nested token lists, without a syntax representation. For example, (postfix 1 2 +) can be legal in Lisp, because Lisp does not (!) parse that code according to a syntax before handing it to the evaluator.

Lisp code consists of nested lists of data. Textual Lisp code uses a data format for these nested lists, which can be read and printed. A lot of Lisp code, though, is generated without ever being read or printed, via macros.


If (postfix 1 2 +) is ready to be handed to the evaluator, it's because it has been parsed. That means it must be a parsed representation. "Parse tree" doesn't apply because parse trees record token-level details; ( and ) are tokens, yet don't appear to the evaluator. "Abstract syntax tree" is better, though it doesn't meet some people's expectations if they have worked on compilers that had rich AST nodes with lots of semantic properties.

The constituents of the list are not "tokens" in Common Lisp. ANSI CL makes it clear that the characters of "postfix", in the default readtable, are token constituents; they get gathered into a token until the space appears. That token is then converted into a symbol name, which is interned to produce a symbol. That symbol is no longer a "token".


You're arguing semantics, I think. I would simply say that Lisp's AST is S-expressions (those nested token lists), and that the parser is Lisp's read function. Then your example is just something that's allowed by Lisp's syntax, while something like ')postfix 1 2 +(' would be something that's not allowed by the syntax.

What you say about Lisp code being generated without being read or printed is of course true, and while Lisp takes that idea and runs with it, it's not exactly unique to Lisp either; Rust's macros can do the same thing, without S-expressions. In other languages you usually generate source code instead; for example, Java has a lot of source code generators (e.g., JAXB's XJC, which used to come with the JDK).
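
(Even C has a crude, purely compile-time version of this in the preprocessor. A toy "X macro" sketch, nothing like as powerful as Lisp or Rust macros, but the same generate-the-code-instead-of-writing-it-out idea:)

    #include <stdio.h>

    /* One list of names... */
    #define COLORS \
        X(RED)     \
        X(GREEN)   \
        X(BLUE)

    /* ...expanded once to generate the enum... */
    enum color {
    #define X(name) COLOR_##name,
        COLORS
    #undef X
    };

    /* ...and again to generate a matching name table. */
    static const char *color_name[] = {
    #define X(name) #name,
        COLORS
    #undef X
    };

    int main(void) {
        printf("%s\n", color_name[COLOR_GREEN]);    /* prints GREEN */
        return 0;
    }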


The parser for S-expressions is READ. S-expressions are just a data syntax and know nothing about the Lisp programming language's syntax. Lisp syntax is defined on top of S-expressions. The Lisp evaluator sees no parentheses and no text; it would not care if the input text contained )postfix 1 2 +( . The reader can actually be programmed to accept that as input. The actual Lisp forms then need to be syntax-checked by a compiler or interpreter.

There are lots of languages with code generators. Lisp does this as a core part of the language and can do it at runtime as part of the evaluation machinery.


Apple Music isn't a great example because, depending on which OS and version you're running, it's essentially a hosted web application.

Or, given how new it is, it's likely written mostly in Swift when it does present a native app.


Bugs in Apple's Music apps have essentially nothing to do with their being written in C++ and Objective-C (and these days a significant portion is JavaScript and Swift).



