> - Why is this function returning "void"? This makes no sense, that's not a function.
Personally the distinction between subroutines and functions always struck me as overly math-centric thinking, because it's not an interesting difference as soon as non-local state exists, which is almost always the case.
> - =, +=
I don't quite get why this remains contentious; even in math these are somewhat overloaded ("=" in equations vs. "<=>"/"=" for equality of equations, for example); code simply is not math. Why would the symbols mean exactly the same thing in a different context?
> - Why {} braces?
Big braces have always been used to group stuff together (e.g. case distinctions, summaries, etc.), so it kinda makes sense to use that for groups of lines, i.e. code blocks.
> - Why do we need a ';' at the end?
C/UNIX was originally written using actual teletypes, so not only does ";" make it easier to parse, but it also gives you a very explicit marker that you definitely reached the end of the statement. That matters when every line you're looking at when interactively editing a file uses up some paper.
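To make the parsing point concrete, a small sketch: the statement ends at the ';', not at the newline, so line breaks are free-form and the reader (human or compiler) always knows where a statement stops.

    #include <stdio.h>

    int main(void) {
        int total = 1 + 2   // no ';' yet, so the statement continues...
                  + 3;      // ...and ends only here, at the ';'
        printf("%d\n", total); // prints 6
        return 0;
    }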
It's related to the same problem with = as assignment; note the GP's context: math and engineering majors. They have been taught, assuming they paid attention, that = is not an assignment but a relational statement. If they've never programmed before, having = be a thing that changes the system state is still new to them. This is a common issue I've seen in tutoring over the years, where students' mental model is closer to Excel than to <standard programming languages>. That is, if you do this:
a = 10; b = 20;
c = 3*a+c;
printf("%d\n", c); // prints 50
They aren't surprised, but then they do this:
a = 20; // this demonstrates part of their confusion: they half-get that it is an assignment
printf("%d\n", c); // prints 50 ?!?!?
They expected 80. So they haven't fully internalized that `=` means assignment and is not establishing a relationship between the thing on the left and the formula on the right. One of the key things that helps is teaching them to "read" a program, and to read `a = 10` as "a gets 10" or "10 is assigned to a".
> Why would the symbols mean exactly the same thing in a different context?
When you have one mental model that you've been taught for 12+ years, and you're 18-20 years old and don't have a ton of experience, this is confusing. That it is not or was not confusing to you is because you either had a learning experience that specifically targeted this confusion, made the connection more readily (maybe you were closer to the top of your class than average), or are looking back on your past and neglecting how far you've come since you were that age and had that level of understanding.
A good example, but unfortunately harder to follow than it should be because you accidentally wrote "3 * a + c" when you meant "3 * a + b". Unless you are making some very subtle point about uninitialized values?
To make this a slightly more useful comment, I'll mention that I confirmed that even with -Wall neither clang-6.0 nor gcc-9 gives a warning about the second assignment to "a" being unused. This would often be a nice thing.
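For reference, here is the example as a complete program, with `b` substituted for the stray `c` and the "gets" reading from above in the comments:

    #include <stdio.h>

    int main(void) {
        int a = 10, b = 20;  // "a gets 10", "b gets 20"
        int c = 3*a + b;     // "c gets 50" -- computed once, right now

        a = 20;              // "a gets 20" -- c is NOT recomputed
        printf("%d\n", c);   // still prints 50, not 80
        return 0;
    }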
I understand the confusion with = for assignment, but isn't the solution usually to use symbols like := or <-? So shouldn't += and friends be on the easy side of things to teach?
It's hard to use := or <- as notational conventions when teaching C-syntax languages to novices. If you do, you're teaching them two things: The logical representation and the actual representation. They already have a lot in their heads so this only creates more confusion, "Why do I get a syntax error when I write 'a := 10'?" "Because that's just for the notes, not the code...".
The problem with += (IME) comes from the incomplete and incorrect model that students have: += is one of the situations that can force the realization that = in C-syntax languages is assignment, not a relational expression. But if they haven't grokked that yet, it's jarring.
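A sketch of that jarring moment: read as an equation, the expanded form below has no solution, but read as assignment both lines are routine.

    #include <stdio.h>

    int main(void) {
        int a = 10;
        a = a + 5;  // no solution as an equation; as assignment: "a gets a + 5"
        a += 5;     // shorthand for the line above
        printf("%d\n", a); // prints 20
        return 0;
    }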
> Personally the distinction between subroutines and functions always struck me as overly math-centric thinking
Not really: many programming languages generally considered math-centric are perfectly fine with functions returning (or taking) a unit type of some sort, category theory (and even its predecessor, abstract algebra) has a lot of use for initial and terminal objects, etc.
It’s more like mathematicians used to be vaguely uncomfortable with non-referentially transparent things when computation as a mathematical object was new. They realized the error of their ways pretty quickly, all in all, but a realization such as this can take decades to make its way through layers of educational materials.
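Concretely, in that view a function "returning nothing" is just a map into a one-element set (the unit type; the terminal object):

    f : A -> 1,   f(a) = * for all a in A

There is exactly one such map for any A, which is why the math-centric view has no problem with it.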
> I don't quite get why [ = for assignment] remains contentious
Because it’s a huge problem when teaching, even if it isn’t for competent practitioners. Maybe it wouldn’t be if we put CS (and graphs, and games, and lots of other things utterly lacking in prerequisites) in elementary-school mathematics where it belongs, but if wishes were horses I’d have a racing franchise by now.
Yes, "math-centric" in the sense of high-school math (=> "functions are R->R").
I mentioned the = thing because I only ever see it brought up by what seems like a very particular developer tribe. I used to tutor first-semester CS students in C and Java - for many it was their first time programming - and I don't think I've ever seen someone confused about it. Lots of other things, though; programming is hard.
> Personally the distinction between subroutines and functions always struck me as overly math-centric thinking, because it's not an interesting difference as soon as non-local state exists, which is almost always the case.
It's about command/query separation, which is still a best practice to this day.
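A minimal sketch of that separation in C, using a hypothetical counter type: queries answer a question and change nothing; commands change state and answer nothing.

    #include <stdio.h>

    struct counter { int value; }; // hypothetical type, just for illustration

    // Query: returns a value, mutates nothing -- the "function" side.
    int counter_value(const struct counter *c) {
        return c->value;
    }

    // Command: mutates state, returns nothing -- the "subroutine" side.
    void counter_increment(struct counter *c) {
        c->value += 1;
    }

    int main(void) {
        struct counter c = { 0 };
        counter_increment(&c);             // command
        printf("%d\n", counter_value(&c)); // query: prints 1
        return 0;
    }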
> Personally the distinction between subroutines and functions always struck me as overly math-centric thinking
Well, the word is taken from math, and it seems pretty rude to take a word while preserving nothing of its original meaning. A function is a mapping between sets; it has nothing to do with abstraction or reusability or any of the dozen other keywords said when introducing the concept in programming courses. It's just an association between sets satisfying certain (relaxable) constraints.
From this perspective, a function that returns nothing is a hilarity: it collapses all elements of its domain to a single point. A function that takes nothing and returns nothing is a deviancy: where is the mapping, if nothing is taken and nothing is returned?
Keep in mind that the usage of 'function' to denote callables* is a new thing that happened probably around the mid-to-late 1980s. Computer Science always called self-contained pieces of code '(sub)procedures' or '(sub)routines' or '(sub)programs'. Fortran, the oldest programming language, honors the difference to this day. Pick any conference paper or book written before 1980 and I guarantee you that 'function' will denote a literal mathematical function, used to describe some property of an algorithm or declare a certain constraint on a computational object, and that callable code blocks are always called one of the terms above or their variations.
> because it's not an interesting difference as soon as non-local state exists, which is almost always the case.
The thing is, there is no such thing as 'state' with functions in the first place. The word denotes something in a world that knows nothing about change or time; taking the word and redefining it to include those concepts is meaningless. Imagine if I took numbers, added physical units to them, then insisted that this had been the mathematical definition of numbers all along. That's what happened with functions.
Also, how exactly is the difference between pure subroutines and mutating subroutines irrelevant when non-local state is involved? It's the exact opposite: the difference is an irrelevant and opaque implementation detail if the mutated state is callee-local and never leaks to the caller; otherwise it's an outsized concern that can mean the difference between a good weekend and a sleepless 24 hours, and all things in between. It's exactly when it's non-local that mutable state is revealed as the horseman of death it is (a short sketch follows below).
*: See? Even the verbs don't make sense. How exactly do you 'call' an association? It's a mapping; you either 'apply' it, 'refer' to it, or 'look up' an element in a table listing (a sample of) its input-output pairings. Similarly, you don't 'return' from a function; you never 'went' anywhere to return from. You had an element and used it to get another element with the help of a mapping between the two sets containing them. This 'happened' instantaneously, timelessly; there is no instruction pointer in math breathing down your neck telling you to return here and go there.
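To make the non-local point concrete, a sketch with a hypothetical running total: the pure version can be reordered or repeated freely, while the mutating one makes even C's unspecified argument evaluation order observable.

    #include <stdio.h>

    static int total = 0; // non-local, mutable state (hypothetical example)

    // Pure: the result depends only on the arguments.
    static int add(int a, int b) { return a + b; }

    // Mutating: the result also depends on every earlier call.
    static int add_to_total(int x) { total += x; return total; }

    int main(void) {
        printf("%d %d\n", add(2, 3), add(2, 3)); // always "5 5"

        // C leaves argument evaluation order unspecified, so this
        // may print "2 4" or "4 2" depending on the compiler:
        printf("%d %d\n", add_to_total(2), add_to_total(2));
        return 0;
    }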