Hacker News

BTW, tell me which of these makes more sense:

(+ 3 4 (- 1 (/ 3 4)) (* (- 9 2) 8 3))

+(3 4 -(1 /(3 4)) *(-(9 2) 8 3))

why oh why would you want a function to be in the same grouping (set of parens) as arguments to another function, instead of grouped with its own arguments?

the less you nest, the less it matters. but it is not a matter of taste which way makes more sense and scales better.




I realize this is an aside, but when I started thinking about learning lisp, I really wished the latter notation existed. After thinking about it, I realized why: it's in essence the Excel syntax, something that a lot of computer-literate people are familiar with.


Most people find infix easier to read. I really think the idea that lisp's advantage comes from having operators in the same group as parameters is ridiculous. To me lisp's power is in the orthogonality of statements, functions (and anonymous functions), data and lists.

You could make a language that had very lisp-like syntax but didn't support things like lists and lambda functions, and it would equally clearly define a tree. That language would be just about completely useless.


infix and whether to call functions like foo(x y) or (foo x y) are separate issues.

if infix is C's advantage, that is pretty silly. because first of all there is only a limited number of infix operators, and when you make your own constructs they are prefix, but different from s-expressions in the kinda silly way i illustrated.

anyway, you can put infix into a lisp. people don't usually want to because it kinda sucks. unless you're working in certain domains.

infix means:

- memorizing order of operations and remembering it whenever reading code (does == or && have higher precedence? they didn't drill that one into us in middle school, so it doesn't feel quite so obvious as arithmetic order of operations)

- losing characters for use in identifiers

- commas in argument lists

- paren for changing order of operations

- functions that take a predefined number of arguments that has to be 1 or 2

- infix only works with a limited number of built in things, it doesn't scale nicely. i suppose you could change this, e.g. make # a special character, and then you can define infix foo then write arg1#foo#arg2. but like, that's ridiculous. no one wants to do that with functions in general. they only want to do it with math because they hate math and don't want to have to understand anything about it, they just want to use it mechanically like they memorized in school.

- infix is an approach with less generality
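to make the precedence bullet concrete, here is a minimal Python sketch (Python's `and` standing in for &&; the variable names are just illustrative):

```python
# In Python, == binds tighter than `and`, so these two lines
# differ -- exactly the kind of thing people misremember.
a, b, c = 1, 1, 0
print(a == b and c)    # parsed as (a == b) and c  ->  0
print(a == (b and c))  # explicit parens change it  ->  False
```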


People prefer infix with arithmetic operators. Why? Probably because that's what they know. If your example used named functions it would look like this:

(plus 3 4 (minus 1 (divide 3 4)) (times (minus 9 2) 8 3))

plus(3, 4, minus(1, divide(3, 4)), times(minus(9, 2), 8, 3))

Anyway none of this has anything to do with the assertion that people only use explicit state in procedural languages because the languages don't support nesting well. I prefer functional programming to procedural but that doesn't ring true for me at all. As a matter of fact I prefer functional languages that don't have lisp's bracketing syntax.


Where on earth did you get those function names? Perhaps you meant

  (sum 3
       4
       (difference 1 (division 3 4))
       (product (difference 9 2) 8 3))
Moving the function name outside the parens makes as much sense as moving a verb outside a sentence.

P.S. The way to pronounce the < function is "ascending".


You are the first person I've ever heard to pronounce < as "ascending". Really, gt/lt functions are my biggest tripover point when doing prefix-everything expressions, because they work in the opposite way of how we were taught as children to interpret them. a<b is true if b is greater than a, and we can evaluate it visually by looking at the expanded side of the operator versus the pinched side of the operator (the former being greater than the latter iff true). In (< a b), the expanded side is pointing at exactly the element which it does not represent, meaning that we can't use those nice optimized mental pathways which are devoted to visually evaluating the expression.

It's logically consistent with how the rest of the system works, but it sucks because having to unlearn anything sucks. Personally, I just imagine it being rewritten as infix.


I'm a big believer in lifelong unlearning.

The "ascending" tip is from experience. It's really easier than moving operators around in your head.

The human mind is good at overloading operators. Especially since the infix < never appears after an open paren, it takes little time to teach yourself to read it as "ascending". It even looks small on the left and big on the right, visualizing an ascending list. Once you learn it that way, shortcuts like (<= 1 n 10) to see if a number is between one and ten come naturally.
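As an aside, the variadic (<= 1 n 10) idiom has a rough infix analogue in Python's chained comparisons; a small sketch (the function name is just this example's choice):

```python
# Python's chained comparison is the infix cousin of Lisp's
# variadic (<= 1 n 10): it checks the values ascend.
def between_one_and_ten(n):
    return 1 <= n <= 10   # reads like "1, n, 10, ascending"

print(between_one_and_ten(5))   # True
print(between_one_and_ten(42))  # False
```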


True that infix < doesn't appear right after parens, but in the minds of most people, there isn't really a notion of "infix" as an entity unto itself. The alligator just eats the bigger fish.

Really, the problem is that a small amount of whitespace can change the meaning of the code.

(< 1 2) => #t

(<1 2) => #f

...for a convenience function <1 that semantically means "is less than 1". Maybe you wouldn't define such a function, but having to think about it at all or having to mentally redefine < somewhat validates the idea that the syntax here is a stumbling point.


I do remember stumbling on < early on, but now I don't get it wrong any more often than I do with infix.


you could just rename the operators to lt and gt, so there is no visual cue either way.

you could also change the argument order, but that's probably not a good idea :)


regarding state, see the other example above. then post saying you meant infix math, not state.

edit: and state is not only used to avoid nesting. that is just common. i do it myself in ruby. too much chaining stuff is confusing in ruby, even with OO shortcuts (which are how people actually avoid using the crappy function call syntax too much, even more than via infix math). so you save to a variable and split it up.


> infix only works with a limited number of built in things

Some infix languages have a mechanism for defining new infix operators.


read before you post


Imagine for a minute that I did read before I posted, and apply my comment in that new light. :)


I said you can make a mechanism to define new infix operators, but it's not very good. Then you quoted so as to imply I didn't know that, and said that actually some languages have it.


Here's a fuller response. You said:

     infix only works with a limited number of built in 
     things, it doesn't scale nicely. i suppose you could
     change this, e.g. make # a special character, and then
     you can define infix foo then write arg1#foo#arg2. but
     like, that's ridiculous. no one wants to do that with 
     functions in general.
First, there are languages which have infix functions as a first class concept in the language (e.g., J, and I would suppose APL and K), so it isn't that infix doesn't scale. Secondly, even in languages which have the call() convention, user-definable operators can coexist (see logix, for example, which is an infix macro system built on top of python).

Additionally, you don't have to have an arg1#foo#arg2, as long as you're willing to use spaces to separate things, the way that lisps, forths, and so on do.
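For what it's worth, the arg1#foo#arg2 idea can be faked even in a call()-convention language; here is a well-known Python recipe, sketched under the assumption that `|` is an acceptable delimiter (the `Infix` class name is our own):

```python
# A tiny wrapper that lets you write  x |op| y  for any
# two-argument function, via the | operator hooks.
class Infix:
    def __init__(self, fn):
        self.fn = fn
    def __ror__(self, left):    # handles  x | op
        return Infix(lambda right: self.fn(left, right))
    def __or__(self, right):    # handles  (x | op) | y
        return self.fn(right)

mod = Infix(lambda a, b: a % b)
print(10 |mod| 3)   # 1
```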


so you really advocate tokens in the order:

x foo y

over

foo x y

? well, right or wrong, you are definitely deviating from the mainstream. that is, C and Java coders will agree with me that prefix, with unlimited arguments, is better. the only difference they'll make in the general case of function calls is to add commas in the argument lists and move the open-paren one token to the right.

i think not using infix function calls has nothing to do with why people are put off by s-expressions.


"so you really advocate tokens in the order:"

No. Nowhere am I advocating that. Rather, I'm pointing out that it's not an obviously ridiculous idea -- people have implemented it, and some people like it.

"i think not using infix function calls has nothing to do with why people are put off by s-expressions."

I disagree; I think it is one reason.


but how can it be a reason people dislike lisp when C and Java programmers put their function calls in prefix order, too? the vast majority of programmers do function calls that way, and prefer it.

and infix function calls in general is obviously a bit ridiculous because it doesn't scale. why would you want all functions to take 2 arguments? or were you going to

(a b foo x y z)

for a function of 5 arguments? and memorize which side to put the extra one on, and what order they go in, etc?


Curi, I'm not saying it's a good reason to dislike lisp. If I thought it was a good reason, I wouldn't like lisp, and yet lispy languages are my favorites (most of my lisp code has been CL, but Arc is interesting at 0.0).

There are a number of possible solutions to your question about infix functions. In J, insofar as I understand it, all functions take either one or two arguments, but the arguments can be arrays, so you often get the effect of more than two arguments.

For example (it's been a while) the form * / 2 3 4 would have the function *, the function /, and an array or list of three integers. J is parsed right to left, so the array is collected first, and then there's a modifier function '/' which takes two arguments: a function on the left and an array on the right, and applies the function to the elements of the array (like map in lisps), outputting the new array. I don't know the details anymore of which symbols are what primitives, but J is interesting, in my opinion.
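In Python terms, that * / 2 3 4 form behaves like a reduce (fold) across the list, 2 * 3 * 4 -- a sketch, assuming J's insert-adverb semantics:

```python
# Folding * across [2, 3, 4], the effect J's  * /  produces.
from functools import reduce
import operator

print(reduce(operator.mul, [2, 3, 4]))  # 24
```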

Alternatively, for your example, you might do

    (a b) foo (x y z) ; or
    (a b, foo, x y z) ; or
    a b Foo x y z     ; if functions have their own
                      ; naming rules like in Erlang, or
    a b foo! x y z    ; where the bang means call, or
something else. Surely you can come up with 10 or 12 yourself. Some of these scale in some ways, and not in others; you could have a rule that you can only have one call per line, to remove ambiguity. That sounds rather restrictive, but so do Python's indentation rules when you first hear of them, and that works out okay, in my experience. I don't think any of them lend themselves to nice general-purpose macros, but I might be wrong.


I know I'm not as smart as you guys, but I very strongly prefer 3 + 4 + (1 - 3/4) + ((9-2) * 8 * 3)

If you code Lisp long enough, does your first expression become as natural to read as my expression is for me?


Prefix arithmetic is easier to read:

  (+ 3
     4
     (- 1
        (/ 3 4))
     (* (- 9 2)
        8
        3))
You can see straight away that the whole thing is one big addition; that the third summand is a subtraction, etc.

Also, a lot of math is prefix: f(x,y) ; d/dx (...) etc..
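To make the "one big addition" reading concrete, here is a toy evaluator sketch in Python, using nested tuples in place of s-expressions (the variadic - and / semantics follow Scheme's left folds; `ev` and `OPS` are just this sketch's names):

```python
# Evaluate prefix arithmetic written as nested tuples.
from functools import reduce

OPS = {
    '+': lambda args: sum(args),
    '-': lambda args: args[0] - sum(args[1:]) if len(args) > 1 else -args[0],
    '*': lambda args: reduce(lambda a, b: a * b, args),
    '/': lambda args: reduce(lambda a, b: a / b, args),
}

def ev(form):
    if not isinstance(form, tuple):
        return form                      # a bare number
    op, *args = form                     # head is the operator
    return OPS[op]([ev(a) for a in args])

# (+ 3 4 (- 1 (/ 3 4)) (* (- 9 2) 8 3))
print(ev(('+', 3, 4, ('-', 1, ('/', 3, 4)), ('*', ('-', 9, 2), 8, 3))))
```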


I never understood why this was an issue. People are really used to writing prefix function calls in all languages: def foo(bar, baz):

And I've never really had a hard time reading arithmetic, so the debate really leaves me scratching my head.


I note that infix was so natural that you resorted to parens.

Infix looks reasonable until one has more than 2 operators. Then people start making mistakes. To combat those mistakes, they start parenthesizing. The number of mistakes goes down but there's still confusion. (Some folks know more precedence levels than others and many folks think that they know precedence levels that they don't know.)


Another way to put that is that Lisp is so natural it resorts to forcing parens everywhere even when I don't want them! I'd never deny that confusion about order of operations between &, |, ==, <<, %, and so on, is a major source of bugs. (But I'd blame poor coding style for that, in any language, in the first place.)

However, in every field besides this niche of computer science, including almost all of math, finance, science, and engineering, infix is used. This means that it is at least good enough and I suspect it has advantages.

Many math operations really are just fundamentally unary or binary. Generalizing - or / or x^y or mod to lists is just silly as far as I can see, and adds confusion. I don't need to see parens around the outermost operation. For the two most common associative operations, + and *, order of operations is quite good enough and it has the advantage that everyone since grade 6 has been working with it.

(I'll give you that there are very many cases where list notation is great, but I don't think it's common that they help very much in science, business, or engineering.)


> However, in every field besides this niche of computer science, including almost all of math, finance, science, and engineering, infix is used.

The reason is that the "reader" in all of those domains is another human and humans do error correction almost without thinking.

Also, each of those domains has a very small number of operators - programming languages have lots of operators.

Feel free to demonstrate that you know the precedence/associativity rules for your favorite programming language by typing them without looking them up. (I know two people who can do that for C; the vast majority can't.)


I'm only trying to defend a very narrow point, that when coding mathematical expressions, infix isn't bad.

Code is read by human beings too (perhaps just the person who wrote it) and more find infix arithmetic more natural looking.

I see the appeal in the idea that arithmetic is really just a very special case of a programming structure and should be treated as such, but on the other hand, I and many others can instantly see what a + bc/d - d(e+f)*g means and would like equation-heavy pieces of my programs to somewhat resemble equations everywhere else in life.

Do you have the quadratic formula memorized in list notation? How about the sum of an arithmetic or geometric series, or a formula for an inverse-square-law force?


> I'm only trying to defend a very narrow point, that when coding mathematical expressions, infix isn't bad.

Programming isn't math.

> Code is read by human beings too (perhaps just the person who wrote it) and more find infix arithmetic more natural looking.

And that's how infix causes bugs. The human reader error corrects and the compiler doesn't.

My goal is correct programs. What's yours?

Lisp notation eliminates a whole class of bugs.

Bugs are expensive - what are you getting for the ability to have more of them?

> I and many others can instantly see what a + bc/d - d(e+f)*g means

Really? It has at least two meanings. Which one is correct?

Yes, I do memorize formulas in a form that doesn't allow for precedence/associativity errors. Why should I prefer a form that does allow for such errors?


Programming isn't math, but mathematical expressions are often found in programs, more or less so depending on the domain. My preference is to have readable mathematical expressions in programs that resemble the forms in which I see or use them elsewhere.

In the general case of complex logical and bitwise expressions, order of operations can cause a tremendous number of bugs. I like to use parentheses to make these cases absolutely unambiguous anyway. But I can't remember introducing a serious bug because I messed up the order of operations between +, -, and ×. Anyway, if it's such a problem, there's nothing to say you can't put parentheses around every operation in infix notation, at least in any language that I know of!

Maybe my preference relates to having a fairly visual memory for formulas and such. If I want to find a root of a quadratic equation, I do (-b + sqrt(b * b - 4 * a * c))/(2 * a), and that is easy, and anyone with high school math sees that in someone's code they know what it is. (I probably have to check for a zero denominator and also look at the discriminant unless I'm directly using complex numbers, and there's also the conjugate root, but that doesn't change much.)

If I have to write (/ (+ (- b) (sqrt (- (* b b) (* 4 a c)))) (* 2 a)) then I can do it, but it takes a lot of thought and it doesn't go along with the way I think about the quadratic formula. Granted, this is because I learned it the way I did, but I also know I'm not the only one.

A footnote to that is that to my mathematical sensibilities, in list notation, using the same symbol for negation and subtraction is hideous!
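As a sketch, the infix version above translates to Python almost verbatim, with the zero-denominator and negative-discriminant guards the author mentions (the function name and the usage numbers are just this example's):

```python
# One real root of a*x^2 + b*x + c, infix style.
import math

def quadratic_root(a, b, c):
    if a == 0:
        raise ValueError("not a quadratic")
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real root")
    return (-b + math.sqrt(disc)) / (2 * a)

print(quadratic_root(1, -3, 2))  # x^2 - 3x + 2  ->  2.0
```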


> Anyway, if it's such a problem, there's nothing to say you can't put parentheses around every operation in infix notation, at least in any language that I know of!

Most of us have to read code written by other people. Those other people don't have exactly the same precedence defense habits that we do.

No, we don't have to end up in the nasty middle ground where the parenthesization is inconsistent and buggy, but we do. Since the "infix is good" theories and arguments predict otherwise, how much weight should we give them?



Forgive my total ignorance; do the kids today not use HP calculators? When I was in college (+/- 1990), we all had HP calculators, and so thinking in terms of (+ 1 2 3 4) felt pretty natural.


Nope. TI graphing calculators are basically standard. TI-83 is standard in high school and a TI-89 is preferred by those who know how much symbolic manipulation and calculus can improve their grades. They both do infix order of operations.

I believe what is popular right now with new calculator buyers are the versions of the TI-83 and TI-89 with USB connectors and faster processors, I think they're called the TI-84 and TI-89 Titanium.


not only that, 2 of the 3 sets of parens were not needed; the order of operations already would have gotten it right


Not necessarily. Infix precedence/associativity is so "natural" that different languages have different precedences or associativity for the same operator.

If one works in multiple languages, the only safe thing is to ignore precedence and associativity and parenthesize everything.


cool. which of the basic arithmetic operators have different precedence in which languages?

what i am used to is * and / first, then + and -.


APL, for one.

And then there's associativity. It doesn't much matter for * and +, but it matters a lot for / and -, and if you think that * and / have the same precedence, it matters for those too.

And, you're continuing to ignore the fact that there are far more infix operators. Even if infix worked for + - * /, that doesn't tell us that it works when there's ., ->, <, &, ?, %, $, #, @, ~, ^, |, \, =, and so on (and digraphs).

No language that uses infix has resisted the temptation to extend it past the point where it causes more problems than it solves.
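A quick Python illustration of the associativity point (Python's ** is right-associative, - is left-associative):

```python
# Same operators, different grouping, different answers.
print(2 ** 3 ** 2)    # 2 ** (3 ** 2)  ->  512
print((2 ** 3) ** 2)  # 64
print(1 - 2 - 3)      # (1 - 2) - 3    ->  -4
print(1 - (2 - 3))    # 2
```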


I think you may have mixed people up, I've been arguing against infix, so I'm not continuing to ignore that ;p

I believe I posted a comment pointing out that lots of people aren't completely sure, offhand, if && or == has higher precedence.

And even in this branch of the thread, when I said he could have omitted some of the parentheses, I didn't mean to say infix is powerful because it can use fewer parentheses. it gets rid of them with a dirty trick that doesn't scale. What I was pointing out is that people don't actually know the infix precedence rules by heart (like they try to say they do. they say it's so natural...). so they end up adding parentheses frequently just because they aren't sure.


It becomes just as natural, and it's easier to debug. You can drop down lines and autoindent to see the structure of the mathematical formula. I don't know of any editors that will do that sort of thing for infix.


not if you used infix notation for + - * / as a child e.g. in school and learned Lisp after age 16 -- at least that is my experience.



