Hacker News

This is much less confusing than (2), and leads to the observation that the parentheses are redundant, so now we can write "x + y + z"

This is a non-problem if you're using Lisp. (+ x y z) accepts an arbitrary number of arguments, and the operator precedence problem does not exist since there is no operator precedence.




If this is so obviously preferable, why have mathematicians so obdurately not adopted this style?

Mathematical notation is not an archaic practice followed out of respect for tradition, nor a doctrine developed from first principles; it is something that has evolved (and continues to do so) because it has been useful.


We do have f(x, y) in mathematics notation, as well as notations like { a, b, c } and ( 1, 2 ). The inventor of vector and matrix notation finally had the brilliant idea of dropping the silly commas: [ i j k ]. Think of how ugly matrices would look with commas.

Now in mathematics, there is no equivalent of a 200,000 line piece of software. The number of identifiers a mathematician works with in any given work is small, even if the work is large (many pages of derivation).

Mathematics notation is ambiguous and inconsistent. One letter names are preferred so that xy can denote the product of x * y. So much so that mathematicians reach for other scripts like Greek rather than make a multi-letter name. Then, inconsistently, we have things like sin(x).

Consider that xy(z + w) is likely interpreted as the product of x, y and z + w. It has the same syntax as ln(z + w), the natural logarithm of z + w, not the product of l and n.

Professionally printed mathematics resorts to the use of special fonts to visually resolve these things. This problem is so serious that everyone who is anyone in mathematics goes to the trouble of professional typesetting, even in minor papers.

In software, the program-defined functions form vast vocabularies. They cannot all be assigned to infix operators without creating mayhem. Many functions have 3 arguments. Seven argument functions are not uncommon. Those also cannot be infix.

In spite of what the author may say, real Python code is chock full of "import foo" and "foo.bar.baz(this.or.that(), other.thing())".


> goes to the trouble of professional typesetting

Calling people’s use of LaTeX to type their homework “professional typesetting” seems like a stretch. Professional typesetting would be something like: send your hand-written manuscript to a full-time typesetter, and wait for them to do the work.

A better description would be “goes to the trouble of using math typesetting software designed by experts”. But is this really so strange? People use even more sophisticated software than that for making image collages of cats with mustaches, for modeling platonic solids, for adding their favorite song to a frivolous home movie, ....


precise typesetting then


Well mathematics notation largely follows speech. People say “one plus two” — largely because speech doesn’t have closing parentheses, so we need to speak in a way that makes it clear when we’re done talking — so that’s how we write it. But for a computer, prefix notation is great because it’s unambiguous and clear even without knowledge of PEMDAS. Similar to how Americans write MM/DD/YYYY because that’s how we say dates, but we can still acknowledge that YYYY-MM-DD is the best format for computers.


I wonder how much is the reverse: we now tend to say mathematical expressions as they are written, but before this was standardised, you would just explain the steps.

Probably not "one plus two" -- I think + is essentially a variant of & which is a ligature for "et", and I guess most languages put "and" between the things being combined. But I'd be surprised if (x/y)^2 was said "x over y all squared" by many people before this notation. But the notation is clearly more designed for thinking on paper than for explaining down a phone line.


1. Mathematicians have different priorities than programmers, and they use different tools. Working with an equation on a whiteboard, it's easier to write "a+b+c" and then cancel terms as needed. When writing a formula on my computer, cancelling terms is something I almost never do, so it would be silly to use a notation that's been optimized for that.

When I am doing algebra on my computer, I hope I have a tool like Graphing Calculator (not "Grapher"!) that lets me simply drag a term from here to there, and automatically figures out what needs to happen to keep the equation balanced.

2. They have, except they use Σ for the prefix version. When it's more than a couple terms, and there's a pattern to it, Σ (prefix notation) is far more convenient than + (infix notation).

If programming languages look like they do because they're taking the useful notations from mathematics, why doesn't your favorite programming language have a Σ function? Who's being stubborn here?


Most programming languages do have some variant of `sum(sequence)`. Python certainly does. Or, like, loops, which do the same thing.
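For instance, in Python the built-in and the explicit loop come to the same thing:

```python
values = [1, 2, 3, 4]

# Built-in prefix-style sum over a sequence
print(sum(values))  # 10

# The equivalent explicit loop
total = 0
for v in values:
    total += v
print(total)  # 10
```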

But they're optimized for different things. Using the same tool for infinite length sequences and fixed length sequences doesn't make a whole lot of sense. We often have different types for them (tuple/record vs. list/array) too.


Having done addition in both infix and prefix varieties on my computer, over the past few decades, I don't understand why prefix notation is considered 'optimized' for indefinite (not 'infinite') sequences and infix notation is considered optimized for definite length sequences.

What exactly "doesn't make a whole lot of sense" about (+ a b)? (It doesn't look the same as you wrote it in school? Neither does "3+4j", or "math.sqrt".)

Being able to use the same tool for different types of data is precisely what makes high-level programming so powerful. As Alan Perlis said, "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." Having only one way to add numbers (in languages that do that) is a great feature.

Python's insistence on these pre-selected groupings of functionality has always made it frustrating for me to work with. The two ways to add numbers look and work nothing alike. Does "TOOWTDI" not apply to math?

(Yes, I'm also frustrated and confused that Python has built-in complex numbers, and a standard 'math' module with trigonometric functions, but math.cos(3+4j) is a TypeError. What possible benefit is there in having a full set of complex-compatible functionality, but hiding it in a separate 'cmath' module, with all the same function names?)
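For reference, the complex-capable versions do exist, under the same names, in the cmath module:

```python
import math
import cmath

# math.cos only accepts real numbers; a complex argument raises TypeError
try:
    math.cos(3 + 4j)
except TypeError as e:
    print("math.cos:", e)

# The complex-capable version has the same name, but lives in cmath
print("cmath.cos:", cmath.cos(3 + 4j))
```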


The Zen never says TOOWTDI, it says TO(APOO)OWTDI. (That's "there's one, and preferably only one, obvious way to do it.")

`reduce(op.add, [x, y])` works. Python could remove its infix ops and use only prefix ones. But prefix ones aren't obvious. And as Guido says, readability matters.
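Spelled out (assuming `op` here abbreviates the standard `operator` module):

```python
from functools import reduce
import operator as op

# Prefix-style addition, like Lisp's (+ 1 2 3)
print(reduce(op.add, [1, 2, 3]))  # 6

# The "obvious" infix spelling of the same thing
print(1 + 2 + 3)  # 6
```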


Σ is sum in Python and many other languages, and it's generalized across all binary operators via reduce.


This really isn't a good argument; it just looks like one on the surface. In fact, while computer science is a branch of math, it's pretty clear that most applied software development has substantial differences from theoretical math.

Further, there is no reason to suppose that any group of humans acts optimally in this area when they clearly act, on the whole, so illogically and suboptimally in every other area of human endeavor.

It asks respondents to answer the much more complex question of why different groups may prefer a particular notation, instead of taking the much simpler and more direct route of explaining why the poster prefers a particular notation.

In short, it's not a missile, it's chaff.


Mathematics uses infix notation literally everywhere. Just because you prefer some notation doesn't mean people who have used infix notation since they were 5 years old will like it. Also, I avoid lisp mostly because S-expressions are almost unreadable to me.


> Mathematics uses infix notation literally everywhere.

Mathematics uses prefix, infix, suffix, circumfix, and some more complex notations; basically any pattern a computer language uses is probably inspired by math, even if math doesn't use the notation for the same operation or operator symbol.


Lisp is pretty ugly :)

    (eql (* x (+ y z)) (+ (* x y) (* x z)))
I wonder if its famed sense of enlightenment is partly just overcoming the mental hurdle of its syntax.

Edit: typo fix, had x/z mixed up


You don't have to do it like that. You could do:

    (= (* x (+ y z))
       (+ (* x z) (* y z)))
Or:

    (= (* x (+ y z))
       (+ (* x z)
          (* y z)))
Or:

    (= (* x
          (+ y z))
       (+ (* x z)
          (* y z)))
Or:

    (= (+ (* x z)
          (* y z))
       (* x (+ y z)))
Or:

    (= (+ (* x z)
          (* y z))
       (* (+ y z) x))
Or:

    (= (+ (* x z)
          (* y z))
       (* (+ y z)
          x))
I think the last one makes the relationships quite clear.

Writing legible equations is an art form, as is writing sexps.

Edit: If you think lisp is ugly, compare with regexps. For bonus points, explore writing regexps with lisp, e.g. with Emacs's `rx` macro. I think you won't find a more easily maintainable way to write them.


For comparison infix notation with the same precedence for all operators:

    (x * (y + z)) eql ((x * z) + (y * z))
Or:

    (y + z * x) eql (x * z + (y * z))
With different precedence:

    x * (y + z) eql x * z + y * z
But now you have to remember precedences and mentally regroup the expression.


I assume you just copy-pasted from parent, but both of you have written `(* x z)` `(* y z)` rather than `(* x y)` `(* x z)`.


Sorry, is it

xz + yz = x(y + z)

when using operators?

That seems wrong.


I don't know any Lisp, but I would decode

    (= (* x (+ y z))
       (+ (* x z) (* y z)))
As

  x*(y + z) = x*z + y*z
e.g. (+ y z) means you add y and z with highest priority.


Which is incorrect, they aren't equal.


That is probably not a statement of equality, but a boolean test for it.


Well, technically it solves to x = z when y <> 0 and any x, z when y = 0 though I'm not sure if that's the original intention.


Some would argue that the fact that you have all of those alternative forms is part of the problem. Lisp is one of the (if not the) most individualistic programming languages around. Lisp makes it easy for a programmer to create their very own impenetrable, arcane, domain specific languages. This causes large organizations to avoid it like the plague.

Large teams don't want artists, they want replaceable parts.


The tragedy is that the "large organization" then goes on and writes multiple bad DSLs to solve problem X, which includes several code transpilers and a varying amount of custom syntax. In the end, they do the same thing as the Lisp folks -> they write a DSL, but because the language XYZ they chose is less capable than Lisp, the solution is hacky and difficult to understand (and can't be, in contrast to Lisp, easily extended).

This is true for a lot of "frameworks", and especially true for modern frontend web frameworks (which have a compile/build step).


You have alternative line breaking in infix also. All of it sucks compared to keeping it on one line, though:

  x * (y + z) = x * z + y * z


  x * (y + z) =
  x * z + y * z


  x * (y + z)
  = x * z + y * z


   x *
   (y + 
    z)
  =
   x *
   z
      +
   y *
   z
etc.


Are you talking about macros or formatting?

If it's about formatting, given that this is what this thread is about, you can use a code formatter for Lisp languages exactly as for any other language; in fact, formatters are even easier to write for Lisp because of the consistency of S-expressions.

If you are talking about macros, then you're in the same boat as other languages that have macros: C, Rust, etc. And remember that the first rule of macros is not to use macros, except when you absolutely have to, and in those cases they are the most elegant solution. If you have devs inventing DSLs for everything, then Lisp isn't your problem.


Some would argue that Lego and Technic are too complicated, and we should all play with Duplo.

Thankfully, Michelangelo and Leonardo were not in large teams.

Which do you aspire to be: a cog or an artist?


I think most people would prefer to be an artist, but art rarely pays the bills, hence the cliche "starving artist." My post was not meant to rip on artists, though that's how it comes off now that I read it again. The truth is that society wants more cogs but needs more artists.


I'd prefer it if the product of my engineering effort were maintainable and clear. I succeed if someone with no context can understand the system I built without help.


I think lisp is beautiful in a mathematical sense, but not in the ‘producing practical code’ sense.


tbh, if you're writing a lot of equations like this in a program, the Lisp version should probably be something like

    (eqn x * (y + z) = x * z + y * z)
where you write and test an `eqn` macro that rewrites everything in the correct syntax.


using rm-hull/infix in Clojure, the infix expression

    x * (y + z) = x * z + y * z

reduces to

    ($= x * (y + z) = x * z + y * z)

Specifically, it's more complex only in that the entire expression is wrapped in ($= )

In order to give x, y, and z values and run it, simply wrap it in a let:

    (let [x 1 y 2 z 1]
      ($= x * (y + z) = x * z + y * z))
alternatively wrap it in a function

    (defn testme [x y z]
      ($= x * (y + z) = x * z + y * z))


Your comment is begging the question. The reason that Lisp “+” can accept a list of arbitrary length, rather than a pair, is that the underlying addition operator is associative.


Not true. Division isn't associative and you can do e.g. (/ 12 6 3)


This is true - Lisp is making / left-associative.

But this observation does not change my point: that the parent comment is saying "Lisp already has the ability to do + on lists", but the reason "+ on lists" makes sense is because Lisp is using the underlying associativity of mathematical +. And the latter associativity property, for abstract mathematical "+", is what the blog post is describing/exploring.


The computing + is not associative for inexact types like floating-point. That's why it's important for the Lisp + to be consistently left-associative; the result could vary if that were left to the implementation to do however it wants.
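The same non-associativity is easy to see in Python, whose floats are IEEE doubles:

```python
# Grouping changes the result of IEEE floating-point addition
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```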

In addition/relation to floating-point, another way in which addition is not associative in computing is if there are type conversions. (+ 1 1 0.1) is not the same as (+ 1 (+ 1 0.1)). The former will do an integer addition to produce 2, and then that is coerced to 2.0 which is added to 0.1. The latter adds 1.0 to 0.1, and then adds that to 1.0: two floating-point additions.

In languages that don't have bignums (which could include some Lisp dialects) whether overflows occur can depend on the order of operations, even when all operands are integers.

The reason we can have an n-ary + is that three or more arguments can be reduced through a binary +. The concept of + is defined as a binary operation.

Lisps have variadic functions that are blatantly non-associative, like, oh, list. (list 1 2 3) isn't (list 2 3 1).


Try writing out typical mathematical formula derivations using only s-expressions. I tried for a period and abandoned the pursuit. It's just not comparable to established mathematical notation.


Two solutions. Threading macros wherein instead of nested parens like (x (y (z))) one writes

    ((->> z)
          y
          x)
In clojure there is an interesting package https://github.com/rplevy/swiss-arrows which allows one to perform successive operations with explicit placement of the result of prior evaluation by placing a <> in the form

    ((->> (z <>))
          (y <> 7)
          (x <>))
In practice it seems like there is often less need to do so, as many similar functions of the same variety have the same argument ordering, and other options like as-> exist too.

There is also the idea of processing math expressions infix as expected when desired.

https://github.com/rm-hull/infix

    ($= 3 + 5 * 8)
    ; => 43


The Lisp community has literally tried exactly this, on and off, for the past half century -- and they always come back to s-expressions. Every new Lisp programmer says "I know, I'll make a macro to let me write infix math!", and then abandons it 2 months later. It's not like Lisp programmers aren't aware of how schoolchildren write (+ 2 2).

I've written tons of code in both language families. In infix/prefix (i.e., Algol-family) languages, I frequently wish for a nice consistent prefix syntax. In prefix-only (i.e., Lisp-family) languages, I can't say I've ever wished for infix notation.

I don't understand what the perceived issue is with prefix notation, except for unfamiliarity -- and that passes soon enough.


So you trade away complexity in one problem (precedence) and gain it in another (variadic arguments and, depending on the language, multiple variadic function implementations)


You don't have to make your functions variadic, it is just handy when working with numbers.


This is a non-problem in every language with variadic functions. "add(x, y, z)" looks good to me.
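A sketch of such a variadic add in Python (the name and the left-fold behavior are just illustrative, mirroring Lisp's +):

```python
def add(*args):
    """Variadic addition, folded left like Lisp's (+ ...)."""
    total = 0  # like (+) with no arguments, add() returns 0
    for x in args:
        total += x
    return total

print(add(1, 2, 3))  # 6
print(add())         # 0
```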


You have to be careful though. In Julia, there is "fma(a,b,c)" which is not necessarily the same as "muladd(a,b,c)".

What are the guarantees on add(a,b,c) versus a+b+c?

If I do add(1.0e30, 1.0, -1.0e30) is that the same (aka IEEE fp addition is noncommutative)


> If I do add(1.0e30, 1.0, -1.0e30) is that the same (aka IEEE fp addition is noncommutative)

You might be thinking of associativity. Both addition and multiplication are commutative up to NaN.


Yup. In addition to ignoring the usual Lisp convention for associative operators, I think the article muddies the waters here by using words like "add" and "mul" in all of the non-infix examples, making them unnecessarily verbose. After using prefix notation for years, it seems very natural for pattern recognition and formula manipulation. Never having to think about precedence and associativity is really nice.



