This is very cool, and a direction that I hope more toolchains will go.
One problem, though, is that reading trees in text format is not very easy on the eyes. In fact, this might be one of the biggest reasons that LISP is (to people like me) not very readable. Trees in graphical form are orders of magnitude easier to parse visually.
I'd love to see software that makes visualizing tree structures easier. At one point I had a grand plan of porting graphviz to JavaScript, with SVG as an output format so it could be easily styled with CSS.
I think that approaches like this will be a lot more usable (and deliver a lot more "wow" factor) if you can view the trees graphically.
Such a tool has been written, and I've been trying to convince the author to open source it; if that doesn't work out I might have a go at it myself. We can already look at the LLVM-level call graph, since LLVM includes functionality to output Graphviz dot code.
One thing that this kind of low-level "interface" allows is very rapid development of coding tools. How long until we see context-sensitive autocompletion pop up? These language features would even allow more complex hints. Language analyzers can find bugs and flag them right as you write code.
Of course, this is nothing new and already done in a few languages. But these features will make it easier (and hence faster) for the IDE to have some awesome coding features.
BBN Lisp had context-sensitive error correction at least four decades ago. It was called the DWIM (Do What I Mean) facility. It operated (in 1971) on unbound atoms and undefined function errors. If you misspelled a function name, typed an 8 or 9 instead of a left or right parenthesis, or typed an extra parenthesis, and then ran the incorrect code, the DWIM facility was called to help you. If you ran DWIM in "Trusting" mode, changes would be made for you automatically. If you ran it in "Cautious" mode, you would be prompted to confirm proposed changes; if you didn't respond to the suggested changes within a certain time limit, the changes were made for you. BBN Lisp also had an "UNDO" function that undid the side effects of any specific event, so there was no danger of DWIM permanently munging your code. If you misspelled a command in the REPL (e.g. PRETTYPRNT instead of PRETTYPRINT), DWIM would recognize that it was a misspelling and execute the right command.
BBN Lisp Manual (see section 17, which starts on page 335):
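For flavor, here's a minimal sketch of the "Trusting"/"Cautious" idea in Python. This is purely illustrative: the real DWIM facility was far more sophisticated than a fuzzy string match, and the command table below is made up.

```python
# Illustrative sketch only -- NOT the actual BBN DWIM algorithm.
# Corrects a misspelled command against a table of known names,
# mimicking the PRETTYPRNT -> PRETTYPRINT example above.
from difflib import get_close_matches

KNOWN_COMMANDS = {"PRETTYPRINT", "DEFINEQ", "EDITF", "LOAD"}

def dwim(command, trusting=True):
    """Return the corrected command name, or None if no close match."""
    if command in KNOWN_COMMANDS:
        return command
    matches = get_close_matches(command, KNOWN_COMMANDS, n=1, cutoff=0.7)
    if not matches:
        return None
    fixed = matches[0]
    if trusting:
        # "Trusting" mode: apply the correction automatically.
        return fixed
    # "Cautious" mode: prompt the user before applying.
    answer = input(f"{command} -> {fixed}? (y/n) ")
    return fixed if answer.lower().startswith("y") else None

dwim("PRETTYPRNT")  # -> "PRETTYPRINT"
```

Here difflib's similarity cutoff stands in for whatever heuristics DWIM actually used on typos and stray parentheses.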
Yes, certainly, our current REPL is somewhat inflexible, but the new version of the REPL in the REPL.jl package (fully functional, but not something we want to drop in at the last minute - probably next release cycle), as well as the IJulia notebook, already does some of this, e.g. completion of the fields of a composite type.
Mostly vim, emacs and Sublime (there are plugins for all of those). The folks at Forio have also started an IDE for Julia, but last I looked they didn't have much more than an editor (and their website hasn't been updated in a couple of months, so I'm not sure what the status is).
A footnote mentions that @which is not as helpful for functions defined in the REPL, because it doesn't give a file and line number in that case. This is not the case, however, in the new IJulia notebook interface (a REPL based on the IPython notebook); there, each multiline input cell is named "In[nnn]" (like Mathematica) and is treated like a file, so you can type "@which foo(3)" and it will give "foo(x) at In[22]:1" if you defined foo on line 1 of input In[22].
Note also that, in the latest versions of the REPL (and IJulia), you can simply type "? foo(3)" instead of "@which foo(3)".
We're currently working on documentation. I assume you're talking about the reflection section? There's issue https://github.com/JuliaLang/julia/issues/2886 open about that and it will probably be addressed by the end of the week (since it's tagged for 0.2).
This actually wasn't the kind of reflection I had in mind when I put that stub header in there, but it certainly does belong. I was thinking more along the lines of traditional "what fields does this data type have?" reflection. But yes, large parts of this would be good to have in the manual.
I'm not implying that their expression data structure is in any way deficient, but given that Julia is so lispy in the first place, why was the design decision made to abandon (the more readable/familiar?) S-expressions? Can one of the devs give us some insight?
Edit: Can someone also comment on how Julia compares to Dylan in the semantic sense?
In short, they are looking to be the go-to language for scientific and high performance computing. Programmers in that domain will have some expectations, and S-expressions are not it. They also consciously mimicked Matlab's syntax.
To expand upon this: I love Lisp, but good array syntax is tremendously helpful when you're doing scientific programming (at least of a certain stripe). You want to be able to do things like A[1, 2:6] to get elements two through six of the first row of A. The great joy of Julia (for me, at least) is that it makes arrays easy to deal with while also bringing all sorts of wonderful Lisp-y features.
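For comparison, a rough Python analog of that slice (0-based indexing, and Python's upper bound is exclusive, so Julia's A[1, 2:6] becomes A[0][1:6]):

```python
# Rough analog of Julia's A[1, 2:6] in plain Python lists.
# Python is 0-based and its upper slice bound is exclusive,
# so Julia's 2:6 (elements two through six) becomes 1:6 here.
A = [list(range(r * 10, r * 10 + 10)) for r in range(3)]  # a 3x10 "matrix"

row_slice = A[0][1:6]  # elements two through six of the first row
```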
I understand, but those are hardly two conflicting things, are they?
Julia does seem to use some sort of Scheme in its implementation; wouldn't it make sense to allow S-expressions in the language (or is it allowed)? Or maybe the designers/implementors felt that it wasn't necessary?
In a strict sense they are, I think: A[1, 2:6] could be written as an S-expression (something like (getindex A 1 (range 2 6)), maybe), but it is not in itself an S-expression. (I think: I'm relying on Wikipedia for the precise definition of "S-expression".)
Your broader point stands, though: I can't see any reason that one couldn't have S-expressions + syntactic sugar (not that I know anything at all about designing programming languages).
My guess is that it's a sociological thing? As scott_s pointed out, people expect something Matlab-like. From http://julialang.org/ : "Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. "
The parser and AST lowering are written in Scheme. I can't really comment on all the design decisions involved, since I joined the project at a later stage, but personally I'm rather happy with it, and I think especially people new to programming have an easier time with a syntax like this than with S-expressions.
While it's certainly possible to write numerical code in S-expressions, it's often not very pleasant. Compare Julia's "2x^3n - 2y^2 + 1" with the S-expression for the same:
(+ (- (* 2 (^ x (* 3 n))) (* 2 (^ y 2))) 1)
(I think I got that right.) Which would you rather write – or read? Mathematical notation and code tend to lean rather heavily on syntax to lessen the mental burden on the programmer. It's far less egregious but I also prefer `f(x)` to `(f x)`.
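To check that the two notations really denote the same tree, here's a tiny S-expression evaluator sketch in Python (nested tuples standing in for Lisp lists), comparing the S-expression above against the infix original for some sample values:

```python
# A minimal S-expression evaluator (illustrative sketch), used to verify
# that the S-expression above really does equal 2x^3n - 2y^2 + 1.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "^": operator.pow}

def evaluate(expr, env):
    """Evaluate a nested-tuple S-expression against a variable environment."""
    if isinstance(expr, str):        # a variable reference
        return env[expr]
    if not isinstance(expr, tuple):  # a literal number
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a, env) for a in args))

# (+ (- (* 2 (^ x (* 3 n))) (* 2 (^ y 2))) 1)
sexp = ("+", ("-", ("*", 2, ("^", "x", ("*", 3, "n"))),
              ("*", 2, ("^", "y", 2))), 1)
env = {"x": 2, "y": 3, "n": 1}
assert evaluate(sexp, env) == 2 * env["x"] ** (3 * env["n"]) - 2 * env["y"] ** 2 + 1
```

(So yes, the S-expression above is right.)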
In general, we all felt that basic Matlab syntax is pretty good for numerical programming and linear algebra and would actually be fine if you got rid of some of the weirder choices (like indexing with parens and required trailing semicolons).
I do agree that infix syntax for math and array manipulation is more sane. But this concerns input parsing (and can be taken care of in Common Lisp too - see the infix reader macro); it wasn't the point of my question.
Did you find that manipulating expressions, i.e. writing useful macros, with the current expression data structure in Julia was in some way more efficient than using S-expressions? Axiom and Maxima have a system where the user predominantly interacts in infix, but for things which require more expressive power, one can switch to Lisp (and use the underlying S-expression). I'm thinking about something like writing a macro for generating code for tensor operations. For example, something like:
@einstein((i j k), C[i, j], A[i, k] * B[k, j])
should be able to generate gemm (forgive my Julia illiteracy).
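Something in that spirit can be sketched in Python, interpreting the index specification at run time. (A real Julia macro would instead transform the Expr tree into explicit loops at compile time; the function name and calling convention here are invented purely for illustration.)

```python
# Illustrative sketch of what a hypothetical @einstein macro might expand
# to: sum a product over every assignment of the listed indices,
# accumulating into the output at the position named by out_ix.
from itertools import product

def einstein(indices, n, out_ix, out, term):
    """Interpret Einstein summation: for each assignment of `indices`
    in 0..n-1, add term(env) into out at the out_ix position."""
    for combo in product(range(n), repeat=len(indices)):
        env = dict(zip(indices, combo))
        out[env[out_ix[0]]][env[out_ix[1]]] += term(env)

n = 2
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0] * n for _ in range(n)]

# C[i, j] += A[i, k] * B[k, j]  -- i.e. a (naive) gemm
einstein("ijk", n, "ij", C, lambda e: A[e["i"]][e["k"]] * B[e["k"]][e["j"]])
```

The interesting part of the macro version is that the loop nest is generated once at expansion time, so the run-time cost of interpreting the index spec disappears.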
Oh, having syntax is absolutely an annoyance when writing macros. There's a reason that Lisp is where real macros came from. But it's not so bad – syntax quoting with interpolation helps a lot.
Wouldn't it make sense to switch to S-expressions for the underlying expression structure then (or some sort of automorphism between the two) ? You'll have the misfortune of calling yourself a Lisp, but then you'll have more power :)
Regarding Dylan, there are many similarities, but I'm not a Dylan expert so take it with a grain of salt (or some fact checking).
Both languages are fully dynamic and support multiple dispatch. But whereas Dylan opted to have both single and multiple dispatch, as well as dynamic and static dispatch, Julia only has one kind of dispatch: full dynamic multiple dispatch.
I'm not sure how Dylan passes variables and whether it separates reference types and value types, but Julia doesn't, instead opting to distinguish immutable versus mutable types. Immutables have many of the benefits of value types for the compiler, plus a few more. Mutable and immutable values have identical semantics other than the fact that you can't change an immutable value, which makes the programming model much simpler (all values have reference semantics), while giving the performance of value types for immutables, since the compiler is free to pass them by value when it sees fit.
I believe Dylan allows you to inherit from concrete types by structural inheritance – i.e. tacking fields onto the fields of the parent type. In Julia all concrete types are final – you cannot inherit from them. I haven't found that tacking fields onto a structure is all that useful; sharing behavior is far more useful than sharing structure, and delegation is generally a better way to share structure.
Abstract types in Julia are essentially what OO researchers call traits: abstract types that you can program to. I don't think that Dylan has traits. These are crucial to much generic programming (the other kind of generic programming is writing generic code for parametric types; although these are actually the same thing in Julia).
Dylan supports multiple inheritance, while Julia does not. We might add multiple inheritance in the future – or not: so far it's not terrible to live without it, although there are a few places where it might be nice. In the meantime, duck typing will suffice, and once the standard library is complete and stable it will be much easier to assess if we really need multiple inheritance or not; if the standard library doesn't need it, I think the rest of the world can live without it.
The biggest philosophical differences seem to stem from:
1. Dylan was created in a time when OOP was new and hot and had to be included; Julia doesn't really care much about OOP – generic and functional programming are much more important.
2. Dylan tends to give you lots of options (e.g. single or multiple dispatch, static or dynamic), whereas Julia tends to give you a single option, but it's the most powerful one (only dynamic multiple dispatch). This keeps the language simpler and easier to learn and use, but no less powerful.
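For concreteness, here's a toy sketch in Python of what "dynamic multiple dispatch" means: the method is chosen from the runtime types of all arguments, not just the first. (This lookup table matches exact types only and ignores subtyping, which real multiple dispatch must handle; the names here are invented.)

```python
# Toy dynamic multiple dispatch: a method table keyed on the runtime
# types of ALL arguments, unlike single dispatch (which looks only at
# the first argument's type).
methods = {}

def defmethod(name, types, fn):
    """Register an implementation of `name` for an argument-type tuple."""
    methods[(name,) + types] = fn

def call(name, *args):
    """Dispatch on the runtime types of every argument."""
    fn = methods.get((name,) + tuple(type(a) for a in args))
    if fn is None:
        raise TypeError(f"no method {name} for "
                        f"{[type(a).__name__ for a in args]}")
    return fn(*args)

defmethod("combine", (int, int), lambda a, b: a + b)
defmethod("combine", (str, int), lambda a, b: a * b)

call("combine", 2, 3)     # picks the (int, int) method
call("combine", "ab", 3)  # picks the (str, int) method
```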
Dylan doesn't have both single and multiple dispatch. Any method call is going to be using multiple dispatch.
Dispatch is dynamic, except where the compiler has determined (perhaps with hints from the programmer in the form of sealing) that it can be optimized into a static dispatch. One mode of the compiler extends this to help further eliminate runtime dynamic dispatch but that isn't enabled currently (as we aren't sure of the state of that code).
Dylan's OO isn't traditional OO at all in that it isn't like Java, Smalltalk, C++, Python, etc. Instead, it is CLOS-style OO. I wouldn't say that it was new and hot and had to be included, as the OO in Dylan is a direct descendant of what was already in Common Lisp and going back from there to Flavors from Symbolics on the Lisp Machine and so on. (In fact, some of the same people were responsible...)
As for #2, I don't really see that at all. Dylan has a very intelligent and hard working compiler that tries to let you have the freedom that you want, but still be able to make things run efficiently. After all, it was targeted originally at hardware from 15-20 years ago.
I'm not a Julia expert, so I won't try to address the other points.
It appears that this post has been interpreted as questioning the design choice of infix vs. prefix. I do think that infix is far easier to deal with, and that the array slicing syntax is a joy; what I meant was more about how picking an infix-y data structure (like Julia's) for storing expressions would seem to limit one's ability to manipulate them?
I thought the only reason Lisp's macros were as powerful as they are was that one didn't spend all one's time parsing stuff.