> Sure, it's an ambiguous language sometimes, but it sounds like your complaint comes down to standardising it more and using longer variable names.
Have unstandardization and shorter variable names ever been heralded as icons of understandability? I think that answers how I feel about that.
> When you write a function, you still need to give the arguments names. When you write an algorithm and want to share it with someone, you still need to give the variables names. That's all the Greek symbols are - and mathematical notation is an abstract, human-evaluable programming language.
I get that. When I write a loop over a space of integers I sometimes use "i" or "j". When I iterate over a 2D array I usually use "x" and "y" ("z" for 3D). If I'm writing networking/file code I usually have an N-byte array called "buff" or "buffer".
These are all common threads. But when I'm writing "buffer", "i", "j", "x", "y", or "z" they mean the same thing at every location in every piece of software to every software developer.
But, and here's the big but, I get a tingling feeling every time I use one of these. I always ask myself, "Is this the meaning I'm attempting to convey?" If there's a slight variation I'll rename the variable to a more verbose version to make it easier to follow.
> Common ideas are abstracted into functions, like sine, cosine - min, max, even weird things like Bessel functions, Legendre polynomials, these are all written as function(arguments) in maths.
I can't name an idea in computer science that isn't a function (in at least one major programming language). We can go so far as to say that the basic operators, right down to define, are functions in LISP. I can, on the other hand, name many ideas that aren't functions in "Math"-lang, and I'm sure you can as well.
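To illustrate the same point in Python (my own sketch, since Python makes it easy to show): even the built-in operators are exposed as ordinary functions through the operator module.

```python
# Even "syntax" like +, *, and indexing is available as a
# plain function, so it composes like any other function.
import operator

print(operator.add(2, 3))              # 5, same as 2 + 3
print(operator.mul(4, 5))              # 20, same as 4 * 5
print(operator.getitem([1, 2, 3], 0))  # 1, same as [1, 2, 3][0]
```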
> Python's list and set comprehensions etc actually come from a real mathematical notation for defining sets.
The difference is that Python sticks, for the most part, to the ASCII character set and presents one simple notation for everyone.
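To make the correspondence concrete, here's a small sketch (the set S is my own example): the set-builder expression {x² | x ∈ S, x is even} transcribes almost symbol-for-symbol into a comprehension.

```python
# Set-builder notation: { x^2 | x in S, x is even }
S = {1, 2, 3, 4, 5, 6}
squares_of_evens = {x**2 for x in S if x % 2 == 0}
print(squares_of_evens)  # {4, 16, 36} (sets print in arbitrary order)
```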
> All those capital sigmas and capital pi's are reduction operators - sums and products.
Why do Pi and Sigma provide a better interface to these operations than the apply and reduce notation of LISP?
> There isn't an assignment operator - you're expected to give each variable a unique name if you have a multi-stage process, so you only ever make statements of equality rather than assignment.
Am I just supposed to guess the starting state of a variable? How do I define variables based on others? What type is my new variable (collection, graph, tree, value)? Why not just have a simple variable name for every non-common variable (Pi for Pi or 3.14... is fine because it is a constant everywhere)?
Also, if there is a multi-stage process, you should have multiple functions, each defining a subset of the solution, each with an appropriate name, appropriate "test-cases" (proofs) for that subset, and documentation and type annotations to provide an understanding as to how it works. None of them should require domain-specific ideas to understand after you've read the "tests", docs, types, and names.
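As a sketch of what I mean (the statistics example and all the names here are my own, purely illustrative): a multi-stage calculation split into small named functions, each typed, documented, and carrying a doctest as its "test-case".

```python
def mean(xs: list[float]) -> float:
    """Arithmetic mean of a non-empty list.

    >>> mean([1.0, 2.0, 3.0])
    2.0
    """
    return sum(xs) / len(xs)

def deviations(xs: list[float]) -> list[float]:
    """Each element's distance from the mean.

    >>> deviations([1.0, 2.0, 3.0])
    [-1.0, 0.0, 1.0]
    """
    m = mean(xs)
    return [x - m for x in xs]

def variance(xs: list[float]) -> float:
    """Population variance, built from the named stages above.

    >>> variance([1.0, 2.0, 3.0])
    0.6666666666666666
    """
    return mean([d**2 for d in deviations(xs)])
```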
> Why do Pi and Sigma provide a better interface to these operations than the apply and reduce notation of LISP?
I can answer that. You've hit the nail on the head here: pi and sigma are in fact much better when you're doing math.
Unlike most programming languages, mathematical notation is not limited to using characters in one font and size on a series of lines, one after the other. Math notation is 2D with whatever fonts you please. So we take advantage of that. When you see a big symbol like a sigma or pi, you know that it's a higher-order operator like 'reduce' in Lisp. There's a whole family of symbols that you can put there, not just sigma and pi, but integrals, tensor products, coproducts, unions, intersections, exterior products, etc.
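If it helps, here's a minimal Python sketch of that correspondence (using functools.reduce as the stand-in for Lisp's reduce): the same reduce shape covers sigma, pi, and big-union alike.

```python
from functools import reduce
import operator

xs = [1, 2, 3, 4]

# Capital sigma: fold the index set with +, starting from 0.
sigma = reduce(operator.add, xs, 0)   # 10

# Capital pi: fold with *, starting from 1.
pi = reduce(operator.mul, xs, 1)      # 24

# The same shape covers big-union over a family of sets.
sets = [{1, 2}, {2, 3}, {3, 4}]
union = reduce(operator.or_, sets, set())  # {1, 2, 3, 4}

print(sigma, pi, union)
```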
Or in short, that's just how mathematicians write certain types of higher-order math, by using a big font. They're all vaguely similar to "reduce" in the sense that they apply the inner expression to each element of a set and produce a single output. However, they don't translate to "reduce" in any concrete way. You can't rewrite summations in math as "reduce" in Lisp, because while they produce a single output, there's not one particular way in which that happens. For example, summations and integrals make existential claims, while unions and intersections do not.
I think that's the biggest difference between programming and math here: much of math is nonconstructive and many of these operators are not total, so it doesn't make sense to think of them as functions, and therefore it doesn't make sense to write them as functions, because functions are total.
To rephrase that: in math, sum() is not a function, so we don't write it using function notation. This to me is clearer: we avoid getting confused by using the same notation for two different things (functions and non-functions).
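A quick Python sketch of what "not total" means here (the series are my own examples): finite partial sums are always defined, but "the sum of the series" simply has no value for a divergent series, so no total sum() over series can exist.

```python
def partial_sum(term, n: int) -> float:
    """The finite partial sum term(1) + ... + term(n): always defined."""
    return sum(term(k) for k in range(1, n + 1))

# Convergent: partial sums of 1/k^2 settle toward pi^2/6 ~ 1.64493.
print(partial_sum(lambda k: 1 / k**2, 1_000_000))

# Divergent: partial sums of 1/k grow without bound, so the
# "infinite sum" has no value to return.
for n in (100, 10_000, 1_000_000):
    print(partial_sum(lambda k: 1 / k, n))  # 5.18..., 9.78..., 14.39...
```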