
infix and whether to call functions like foo(x y) or (foo x y) are separate issues.

if infix is C's advantage, that is pretty silly, because first of all there is only a limited number of infix operators, and when you make your own constructs they are prefix anyway, just different from s-expressions in the kinda silly way i illustrated.

anyway, you can put infix into a lisp. people don't usually want to because it kinda sucks. unless you're working in certain domains.

infix means:

- memorizing order of operations and remembering it whenever reading code (does == or && have higher precedence? they didn't drill that one into us in middle school, so it doesn't feel quite so obvious as arithmetic order of operations; see the sketch after this list)

- losing characters for use in identifiers

- commas in argument lists

- paren for changing order of operations

- functions that take a fixed number of arguments, which has to be 1 or 2

- infix only works with a limited number of built-in things; it doesn't scale nicely. i suppose you could change this, e.g. make # a special character, and then you can define infix foo and write arg1#foo#arg2. but, like, that's ridiculous. no one wants to do that with functions in general. they only want to do it with math because they hate math and don't want to have to understand anything about it; they just want to use it mechanically, the way they memorized it in school.

- infix is an approach with less generality
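
on the precedence point from the first item, a small illustration: in C, a == b && c == d groups as (a == b) && (c == d) because == binds tighter than &&, which is just something you have to remember. the prefix form spells the grouping out. a tiny scheme sketch (a, b, c, d are placeholder values i made up):

  (define a 1) (define b 1) (define c 2) (define d 3)
  (and (= a b) (= c d))   ; => #f -- the grouping is written explicitly, no precedence table to recall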




People prefer infix with arithmetic operators. Why? Probably because that's what they know. If your example used named functions it would look like this:

(plus 3 4 (minus 1 (divide 3 4)) (times (minus 9 2) 8 3))

plus(3, 4, minus(1, divide(3, 4)), times(minus(9, 2), 8, 3))
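
For what it's worth, the first form runs as-is in Scheme if you alias the built-in operators; the names plus/minus/divide/times are just the ones from the example above:

    (define plus +)
    (define minus -)
    (define divide /)
    (define times *)
    (plus 3 4 (minus 1 (divide 3 4)) (times (minus 9 2) 8 3))  ; => 701/4 (i.e. 175.25)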

Anyway none of this has anything to do with the assertion that people only use explicit state in procedural languages because the languages don't support nesting well. I prefer functional programming to procedural but that doesn't ring true for me at all. As a matter of fact I prefer functional languages that don't have lisp's bracketing syntax.


Where on earth did you get those function names? Perhaps you meant

  (sum 3
       4
       (difference 1 (division 3 4))
       (product (difference 9 2) 8 3))
Moving the function name outside the parens makes as much sense as moving a verb outside a sentence.

P.S. The way to pronounce the < function is "ascending".


You are the first person I've ever heard pronounce < as "ascending". Really, gt/lt functions are my biggest tripover point when doing prefix-everything expressions, because they work in the opposite way from how we were taught as children to interpret them. a<b is true if b is greater than a, and we can evaluate it visually by looking at the expanded side of the operator versus the pinched side of the operator (the former being greater than the latter iff true). In (< a b), the expanded side is pointing at exactly the element which it does not represent, meaning that we can't use those nice optimized mental pathways which are devoted to visually evaluating the expression.

It's logically consistent with how the rest of the system works, but it sucks because having to unlearn anything sucks. Personally, I just imagine it being rewritten as infix.


I'm a big believer in lifelong unlearning.

The "ascending" tip is from experience. It's really easier than moving operators around in your head.

The human mind is good at overloading operators. Especially since the infix < never appears after an open paren, it takes little time to teach yourself to read it as "ascending". It even looks small on the left and big on the right, visualizing an ascending list. Once you learn it that way, shortcuts like (<= 1 n 10) to see if a number is between one and ten come naturally.
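
A few concrete cases, in Scheme, of reading it that way; the chained (<= 1 n 10) works because the comparison operators take any number of arguments and test that they are in order:

  (< 1 2 3)    ; => #t  -- strictly ascending
  (< 1 3 2)    ; => #f  -- 3 is not less than 2
  (<= 1 5 10)  ; => #t  -- 5 is between 1 and 10 inclusive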


True that infix < doesn't appear right after parens, but in the minds of most people, there isn't really a notion of "infix" as an entity unto itself. The alligator just eats the bigger fish.

Really, the problem is that a small amount of whitespace can change the meaning of the code.

(< 1 2) => #t

(<1 2) => #f

...for a convenience function <1 that semantically means "is less than 1". Maybe you wouldn't define such a function, but having to think about it at all or having to mentally redefine < somewhat validates the idea that the syntax here is a stumbling point.
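
To make that concrete, here is a minimal Scheme sketch; <1 is just the hypothetical convenience function described above:

  (define (<1 x) (< x 1))  ; "is less than 1"

  (< 1 2)  ; => #t  -- the ordinary two-argument comparison
  (<1 2)   ; => #f  -- the one-argument helper, a different call entirely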


I do remember stumbling on < early on, but now I don't get it wrong any more often than I do with infix.


you could just rename the operators to lt and gt, so there is no visual cue either way.

you could also change the argument order, but that's probably not a good idea :)
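
a sketch of the renaming in scheme, if anyone wanted to try it (lt/gt are just the names i suggested):

  (define lt <)  ; no arrowhead to misread either way
  (define gt >)

  (lt 1 2)  ; => #t
  (gt 1 2)  ; => #f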


regarding state, see the other example above. then post saying you meant infix math, not state.

edit: and state is not only used to avoid nesting. that is just common. i do it myself in ruby. too much chaining stuff is confusing in ruby, even with OO shortcuts (which are how people actually avoid using the crappy function call syntax too much, even more than via infix math). so you save to a variable and split it up.


> infix only works with a limited number of built in things

Some infix languages have a mechanism for defining new infix operators.


read before you post


Imagine for a minute that I did read before I posted, and apply my comment in that new light. :)


I said you can make a mechanism to define new infix operators, but it's not very good. Then you quoted so as to imply I didn't know that, and said that actually some languages have it.


Here's a fuller response. You said:

     infix only works with a limited number of built in 
     things, it doesn't scale nicely. i suppose you could
     change this, e.g. make # a special character, and then
     you can define infix foo then write arg1#foo#arg2. but
     like, that's ridiculous. no one wants to do that with 
     functions in general.
First, there are languages which have infix functions as a first-class concept in the language (e.g., J, and I would suppose APL and K), so it isn't that infix doesn't scale. Secondly, even in languages which have the call() convention, user-definable operators can coexist (see logix, for example, which is an infix macro system built on top of Python).

Additionally, you don't have to have an arg1#foo#arg2, as long as you're willing to use spaces to separate things, the way that lisps, forths, and so on do.


so you really advocate tokens in the order:

x foo y

over

foo x y

? well, right or wrong, you are definitely deviating from the mainstream. that is, C and java coders will agree with me that prefix, and unlimited arguments, is better. the only changes they'll make in the general case of function calls are to add commas in the argument lists, and move the open-paren to the right one token.

i think not using infix function calls has nothing to do with why people are put off by s-expressions.


"so you really advocate tokens in the order:"

No. Nowhere am I advocating that. Rather, I'm pointing out that it's not an obviously ridiculous idea -- people have implemented it, and some people like it.

"i think not using infix function calls has nothing to do with why people are put off by s-expressions."

I disagree; I think it is one reason.


but how can it be a reason people dislike lisp when C and Java programmers put their function calls in prefix order, too? the vast majority of programmers do function calls that way, and prefer it.

and infix function calls in general are obviously a bit ridiculous because they don't scale. why would you want all functions to take 2 arguments? or were you going to write

(a b foo x y z)

for a function of 5 arguments? and memorize which side to put the extra one on, and what order they go in, etc?


Curi, I'm not saying it's a good reason to dislike lisp. If I thought it was a good reason, I wouldn't like lisp, and yet lispy languages are my favorites (most of my lisp code has been CL, but Arc is interesting at 0.0).

There are a number of possible solutions to your question about infix functions. In J, insofar as I understand it, all functions take either one or two arguments, but the arguments can be arrays, so you often get the effect of more than two arguments.

For example (it's been a while) the form * / 2 3 4 has the function *, the modifier /, and an array or list of three integers. J is parsed right to left, so the array is collected first, and then the modifier '/' takes the function on its left and inserts it between the elements of the array (like a fold or reduce in lisps), producing 24 here. I don't know the details anymore of which symbols are what primitives, but J is interesting, in my opinion.
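
If that reading of / is right, a rough Scheme analogue of the effect (not of J's parsing) would be:

    (apply * (list 2 3 4))  ; => 24, the multiplication spread across the whole list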

Alternatively, for your example, you might do

    (a b) foo (x y z) ; or
    (a b, foo, x y z) ; or
    a b Foo x y z     ; if functions have their own
     naming rules like in Erlang, or
    a b foo! x y z;   # where the bang means call, or
something else. Surely you can come up with 10 or 12 yourself. Some of these scale in some ways, and not in others; you could have a rule that you can only have one call per line, to remove ambiguity. That sounds rather restrictive, but so do Python's indentation rules when you first hear of them, and that works out okay, in my experience. I don't think any of them lend themselves to nice general-purpose macros, but I might be wrong.



