The Egison Programming Language (egison.org)
79 points by afshinmeh on July 13, 2018 | 67 comments



> Egison is a programming language that realizes non-linear pattern-matching against non-free data types.

It's a what? Can someone ELI5 this for a lowly Python and JS dev - why would I want this?

----

I watched a talk about Scala's future [0] recently, in which the presenter compared the taglines of a few different languages (Go, Rust, Erlang, Scala), and how they relate to business pains or non-academic problems.

His conclusion about Scala's tagline - "Scala combines object-oriented and functional programming in one concise, high-level language" is that "no engineering manager in the history of software development ever thought to themselves 'hmmm, if only I had a language that fused OO and FP, I'd be able to solve my business problems'. That hasn't happened; that will never happen. This is an academic novelty that has zero relevance to any of us as professional software developers [...] this is not a business pain, this is an academic interest."

I felt a little bit like that when I read Egison's tagline: why should I care about this language?

[0] "The Last Hope for Scala's Infinity War" https://youtu.be/v8IQ-X2HkGE?t=15m8s


> It's a what? Can someone ELI5 this for a lowly Python and JS dev - why would I want this?

Nah, but I can ELI a person with some programming experience.

Pattern matching is super useful for writing code that spends a lot of time building and deconstructing syntactic data structures.

Non-linear pattern matching is useful when deconstructing large-ish expressions that might contain multiple occurrences of the same sub-term.
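
For a sense of what "non-linear" buys you: most languages only allow linear patterns, where each variable appears at most once. A rough Haskell sketch (my own illustration, not Egison code):

    -- Haskell rejects a repeated pattern variable outright:
    --   startsWithPair (x:x:_) = True   -- syntax error: x bound twice
    -- so the "same sub-term occurs twice" constraint has to move into a guard:
    startsWithPair :: Eq a => [a] -> Bool
    startsWithPair (x:y:_) | x == y = True
    startsWithPair _                = False

Egison lets the pattern itself state that a part must equal an earlier binding - that's what value patterns like ,(+ p 2) in the examples further down this thread are doing.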

Non-free datatypes let you encode equivalences on your data - types where different representations denote the same value; e.g., "lists that are equal modulo permutation" (multisets).
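
And for non-free types, a matcher can't just peel off the head of a list: "cons" on a list-modulo-permutation (a multiset) can split off any element, so a single pattern can match in several ways. A minimal Haskell sketch of that enumeration (the names are mine):

    -- all ways to deconstruct a multiset as (one element, the rest)
    picks :: [a] -> [(a, [a])]
    picks []     = []
    picks (x:xs) = (x, xs) : [(y, x:ys) | (y, ys) <- picks xs]

    -- picks [1,2,3] == [(1,[2,3]), (2,[1,3]), (3,[1,2])]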

Supporting type-checked non-linear matching over nonfree datatypes allows you to encode a lot of constraints in the type system and syntax that would otherwise be encoded with lots of nested if-statements and only checked by tests, not by static type checking.

This combination of features could be very useful when implementing a lot of mathematical code that operates over algebraic structures. Which is why the language's homepage mentions the implementation of (features for) computer algebra systems as a sort of killer app.

> this is not a business pain, this is an academic interest

Not all innovation is motivated by the business pains of today's engineering managers. And often the solutions to those pains are not recognizable in advance; e.g., thousands of engineering managers in 1950 could have benefited from CAD software, but I bet very few were sitting around saying "I could really use a good piece of CAD software right now".

Scala is another good example of that, despite the complaints about its marketing message. No one asked for FP+OO, but Scala took off because the combination of FP and OO solved a lot of the pain caused by Java.

Substantially improving the ease of implementing new types of computer algebra systems could easily create a lot of business opportunities e.g. in providing improved analysis/synthesis tools for all sorts of CAD tools.

That said, I don't think this language hits the target. See the Hodge operator example...


Thanks for making the effort to explain, I think I have a better handle on what it means now.


Scala's mistake is ostensibly that it is an academic language trying to pitch itself as a language for building business applications.

Egison appears to be a self-consciously academic language, which strikes me as perfectly legitimate.

In particular, the goal seems to be to provide a more intuitive way of writing algorithms using extensible pattern matching. Now, I have worked on commercial software that has highly algorithmic parts (graph traversals, etc.) written in Erlang (a language that supports a powerful, but not extensible, form of pattern matching). I found the pattern matching facilities to be quite helpful in producing readable, concise, correct code.

The use case is narrow, but strikes me as real nonetheless.


> "Scala's mistake is ostensibly that it is an academic language trying to pitch itself as a language for building business applications."

It's a bit subtler than that. Scala is an industry programming language perfectly suited to business applications, but with a big academic and research input (remember that Odersky, its creator, was a big contributor to Java, an industry language) and with an initial emphasis on research. The mentioned talk is criticizing Scala's marketing blurb because it was written almost entirely from a research angle, and doesn't accurately and convincingly portray Scala's benefits for business. See the revised blurb at the end of the talk [1]. It's a question of being clever about marketing.

----

[1] revised blurb: "Scala is an open source programming language that helps you write correct code and modify code safely, while seamlessly reusing existing JVM libraries."


As a Scala dev, one issue I see with the language is that it doesn't keep you in check. Easy things are easy, and there is a lot of syntax sugar and little conveniences here and there that make the language great, but the advanced features are just as accessible to the developer. You need self-restraint when doing Scala; there is a real risk of losing perspective and getting distracted with fancy features and libraries.


Which syntax sugar do you think is verbose? I don't think Egison has a lot of syntax sugar.

Egison features a customizable pattern-matching facility. Pattern-matching methods for each data type and pattern can be customized by users. It's possible that what you took for syntax sugar is not actually syntax sugar.


It requires taste or code reviews, essentially.


I have the same feeling. It is a very powerful and interesting language from my anecdotal experience, but I also feel it gives you too many options.


I think you're right - this is an academic language. That doesn't mean it's devoid of value, it's just not meant to solve industrial problems. Instead, it's supposed to experiment with a new paradigm so that other (more pragmatic) languages can someday incorporate some of its ideas.


Scala has always nurtured that ridiculous notion that an animal that's 50% cat and 50% dog would appeal to both cat-lovers and dog-lovers, which doesn't actually make any sense.

This language is clearly an academic endeavor.

Disclaimer: I don't know what non-linear pattern matching on non-free data means either, but the fact that it's not explained to me is proof enough they aren't meaning to cater to non-academic users.


Express Intuition Directly with Essentially New Syntax

Egison makes programming dramatically simple!

    (define $twin-primes
      (match-all primes (list integer)
        [<join _ <cons $p <cons ,(+ p 2) _>>>
         [p (+ p 2)]]))
Uh, ok?


It helps if you remember that primes is a list of primes, $p binds a pattern variable, and <...> is pattern-matching.

So what it does is match an element (named p) such that the next element (see the inner cons) equals p+2, which makes a twin prime pair, since p is a prime. Then it generates an element consisting of a pair [p, p+2]. The <join _ ...> part splits the list into an arbitrary prefix and the rest, so the pattern can match starting anywhere in the list.

You definitely remember that Lisp lists are singly-linked lists, and (cons head tail) prepends a new head element to the list tail. This is why it is matched, pretty similarly to how it's done in Haskell.


Oh, I thought it was doing some kind of prime search. But you have to give it a list of primes to start with? How about one of:

  print [(x,x+2) for x in primes if (x+2 in primes)]

  print [(x,y) for x, y in zip(primes, primes[1:]) if x+2 == y]
The simpler one doesn't even require the primes list to be sorted, so I thought the second one would be more equivalent. It's still a lot easier to read and write than the Egison one.


Or in my fave language:

     my @primes = (1..∞).grep: *.is-prime;
     my @twin_primes = map { ($_, $_+2) }, @primes.grep: (*+2).is-prime;


Gonna go out on a limb and guess... Perl 6?


`my` gives away perl

infinite symbol gives away 6

So, yeah. Correct me if I'm wrong.


Yup, both right. All the clues were there >:3 I could have used Inf instead of the unicode but that's not really whimsical enough for Hacker News.

A slightly less listy pattern style is to declare types. Like I'd prefer this sort of declarative style:

     subset Prime of Int where *.is-prime;
     subset TwinPrime of Prime where (* + (2|-2)).is-prime;
     my @twin_primes = grep TwinPrime, ^Inf;
Just feels a lot more clear what my intent is and I get some cheap types for later, to use in function parameters that require twin primes.


Huh, this is actually making me want to give Perl 6 another try.

How is performance on a task like this compared to, say, Python?


Your first snippet has quadratic performance. The second is closer to the idea.

Pattern-matching would be even more obvious in a piece of Haskell:

    pair :: [x] -> Maybe (x, x)
    pair p:q:rest = if q = p + 2 then Just (p, q) else Nothing
    pair _ = Nothing

    findTwins primes = [x | x <- map pair primes]

I suppose that Egison's power lies in matching more complex structures, as they mention non-linear matching, matching with multiple results, and matching with lexical scoping.


Your Haskell code doesn't quite parse. But the following works:

    import Data.List
    primes = Data.List.nubBy (\a b -> gcd a b > 1) [2..]
    findTwins primes = [(p,q) | (p:q:_) <- tails primes, q == p+2]
    main = mapM_ print . take 10 . findTwins $ primes
Finding primes arbitrarily far apart can be done in Haskell as

    findBigTwins gap primes = [(p,q) | (p:qs) <- tails primes, let q = p+gap, q `elem` takeWhile (<= q) qs]


Thank you for your comment!

The power of Egison's pattern matching becomes more obvious when we consider a more general pattern such as (p, p+100), not just (p, p+2).

We can enumerate the first 5 prime pairs of the form (p, p+100) with a small modification of the pattern used for twin primes.

  (take 5
       (match-all primes (list integer)
         [<join _ <cons $p <join _ <cons ,(+ p 100) _>>>>
          [p (+ p 100)]]))
  ;=>{[3 103] [7 107] [13 113] [31 131] [37 137]}


I still don't see how this should require nested cons-es and 3 different types of enclosing braces.

Not only that, but you have redundancy between the ,(+ p n) part of the pattern and the emitted result of [p (+ p n)].

Point being: I applaud the attempt here, but it's not something I'll ever want to use.


I guess I had just lost the context. It's pretty cool as a pattern-matching demo.


> first snippet has quadratic performance

Not if primes is a set.
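
The same point as a Haskell sketch (assuming the primes are in a Data.Set; membership there is O(log n), and O(1) on average with Python's hash sets, so the scan is no longer quadratic):

    import qualified Data.Set as Set

    twins :: Set.Set Int -> [(Int, Int)]
    twins ps = [(p, p + 2) | p <- Set.toList ps, Set.member (p + 2) ps]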


Yep, that's where I stopped. What I think is going on... Some people think predominantly in abstraction and don't really care about syntax. As Paul Graham once said, LISP doesn't really have syntax - you write parse trees directly. That may be fine for people who have achieved that level/style of programming, but it's a tiny minority.

Why should I write (+ p 2) when most people think (p+2)? This difference gets worse with more complex expressions. For the masses (and I mean a lot of very capable people) this is just garbage. For some it's wonderful. I can't say it's wrong, just not my thing. Perhaps it's the Blub paradox - I've never had the LISP epiphany so I can't say. I have had the Python epiphany, and it's quite the opposite of this.


If you have lots of math expressions to write, you can use an infix reader in Common Lisp. It has a programmable reader. Such a thing - reading infix expressions - was already a standard feature of the MIT Lisp Machine OS in the 80s.

In Lisp you actually don't write parse trees. Lisp uses a serialized hierarchical data format: S-expressions are not Lisp syntax trees, but a data format.


I relish comments like these from Lisp fans that actually KNOW Lisp.

Someday we'll resurrect it all...

Reader macros blew my mind when I first read about them. Being able to change the very laws of the universe from within my simple little programs... Insane!


> This difference gets worse with more complex expressions.

Actually, no - it's the other way around: more complex expressions really benefit from the uniformity of syntax and from using macros to simplify the code.

Not all macros are worthwhile, and some are detrimental to readability (and not writing those is one of the first Lisp lessons), but there are also macros (syntax extensions), which are elegant and powerful.

The common example is the `->` or `thread-first` macro, which chains a list of function calls, in the following manner:

    (-> (list 3) (prepend 2) (prepend 1) print)  ;; -> '(1 2 3)
    
into:

    (print (prepend (prepend (list 3) 2) 1))
With longer expressions, or longer chains of function applications, the difference becomes even more visible, with the `->` version having far fewer parens (possibly fewer than in Python for an equivalent expression) and less nesting, which makes it easier to read and modify.

But it doesn't end there - this is just one macro, out of many powerful syntax extensions, which make for much clearer code. There are macros for lazy evaluation, both sequences (like generators) or expressions (more like `lazy` in OCaml); for partial evaluation of functions, for changing name resolution policy, for declaring classes, for looping, for pattern-matching, for error handling, for FFI, for logic statements, for transactional memory, for async control flow, and so on and on.

Some of these are trivial to implement, but some are hard or just complicated, with many cases covered; so you're not necessarily expected to write them yourself. If a macro's semantics are well-defined - as in most popular examples - importing a package and reading docs is enough to use them to a good effect. You don't really have to use any of them, but they all exist to make certain patterns in code simpler, more readable, and more convenient to work with. Other languages often lack many of them.

Anyway, what I want to say is that the more complex the problem, or the larger the codebase, the more readable Lisp becomes, at some point surpassing most other languages. This is what makes Lisp fans so good at "recursion and condescension", and is also something you can't see in `(+ p 2)`-style examples.
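
For comparison outside Lisp, Haskell's & from Data.Function gives a similar left-to-right pipeline shape, though as an ordinary operator rather than a macro (a loose analogy of my own):

    import Data.Function ((&))

    -- the (-> (list 3) (prepend 2) (prepend 1) print) pipeline, as a & chain
    main :: IO ()
    main = [3] & (2 :) & (1 :) & print   -- prints [1,2,3]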


> Why should I write (+ p 2) when most people think (p+2)?

just an observation. most people don’t think this naturally. most people learned (p+2). maybe there’s an argument for whatever is more natural, but the point is that people forget that what they view as natural was once unnatural and had to be learned.

as a small anecdote, i did a calculator competition in high school. we used those hp rpn calculators. it was unnatural at first but then i flew once i gained the skill in using rpn.


I like Lisps but I still don't like this. This is like the Perl of Lisps. To my eye, this is not a parse tree, it's syntax vomit on top of a parse tree.


> LISP doesn't really have syntax - you write parse trees directly.

Lisp has plenty of syntax, it's just extensible and largely defined in operators. Commonly used macros like DEFUN, WITH-OPEN-FILE, and LOOP all add their own syntax to the language; that's what macros are for. When you put them together, typical Lisp code ends up looking like

    (defun count-words (filespec)
      (with-open-file (f filespec)
        (loop with counts = (make-hash-table :test 'equal)
              for line = (read-line f nil)
              while line do
                (loop for word in (cl-ppcre:split "\\s+" line) do
                  (incf (gethash word counts 0)))
              finally (return counts))))
Which I don't think is that alien. It translates pretty cleanly into Python:

    def count_words(filespec):
        with open(filespec) as f:
            counts = collections.defaultdict(int)
            for line in f:
                for word in line.split():
                    counts[word] += 1
            return counts
Do you find the former significantly harder to understand? I don't see how it's any more like "writing a parse tree directly."

> Why should I write (+ p 2) when most people think (p+2)? This difference gets worse with more complex expressions.

As mentioned in other comments, there are infix packages available for when you're writing math-heavy code (or if you just need infix operators), but I also think that a lot of people don't write very much code with complicated inline arithmetic in it, and overstate how much the awkward math syntax would affect them (there are of course plenty of projects that are math-heavy and where it does make a large difference). When you're defining variables/functions/classes/etc or calling functions or looping or whatever, there's not much difference between Lisp and most other languages aside from whether the bracket is { or ( and which side of the keyword it goes on; in my experience, that sort of code tends to make up the large bulk of most codebases in most languages.

edit: Another place where Lisp's default syntax is significantly different from a lot of languages is something like object.f(x).g(y).h(z), which in Lisp looks like (h (g (f object x) y) z). This can be remedied with common macros like -> that let you write (-> object (f x) (g y) (h z)).


Both your snippets are exceedingly long, verbose and painful to understand. In C#, which is normally quite a verbose language, you can write just this:

    IDictionary<string, int> CountWords(string f) => File.ReadAllText(f).Split().ToLookup(w => w).ToDictionary(kv => kv.Key, kv => kv.Count());
Or a bit more concise in F#:

    let wordsCount f = File.ReadAllText(f).Split() |> Array.groupBy id |> Array.map (fun x -> fst x, (snd x).Length) |> dict


Sure, there's other ways to write it, the point of the post was Lisp's syntax, not the algorithm I used to demonstrate some features of it. You could write examples like yours in Lisp or Python, too. As well, your examples both slurp the whole file into memory, which mine avoided.

edit: And your C# version is only 24 non-whitespace characters shorter than the Python version, and 20 of those are because you called your variables 'f' instead of 'filespec' and 'w' instead of 'word'. A 4 non-whitespace character difference makes it exceedingly long and verbose? Or are you just advocating 1-letter variable names and avoiding newlines?


The C# version is calling only 4 methods, and in C# you obviously have to declare the types. It's the difference between declarative and imperative style that I wanted to highlight: obviously Python is more compact than C#, but written in that way it becomes more verbose and more difficult to read and write. Write something like that in Lisp and let's see how it compares.


Isn't "kv => kv.Key" a lambda function that is called?


TXR Lisp:

  (defun count-words (filespec)
    [(opip file-get-string
           (tok #/[^\s]+/)
           (group-reduce (hash) identity
                         (do inc @1) @1 0))
     filespec])
We build a function which gets the file as a string, then tokenizes non-space-character chunks out of it, which are then group-reduced to a histogram hash. We pass the filespec to this function.

I posted this yesterday but deleted it, because grandparent's point wasn't about code golfing but just comparing Lisp and Python syntax.

There is a group-by function in TXR Lisp, but group-reduce is more efficient, because by using it we avoid building the group lists and counting. It's something I invented. Basically it performs multiple left folds in parallel, using the entries in a hash as multiple accumulators. Items from the sequence are hashed to their respective accumulator entry and injected through it. 0 is the initial value for the accumulator when it doesn't exist, functioning exactly like the initial value in a regular fold. (do inc @1) expands to a function which just increments its left argument (the accumulator) and returns it. We cannot use succ because it takes exactly one argument.

group-reduce has an added flexibility in that it doesn't construct a new hash, but takes an existing one as an argument. This adds the (hash) verbosity to the code, since we have to construct the hash ourselves. But with that we could run multiple successive group-reduce jobs that go into the same hash table. Also, the function is spared from having to provide a way to pass through hash arguments for different kinds of hash tables with different options.

The identity argument is needed because group-reduce takes a function that projects the items to keys; in this case the items themselves are the keys so we use identity.
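
A minimal sketch of the group-reduce idea in Haskell (my own rendering of the description above, not TXR's API), with a Map as the bank of accumulators:

    import qualified Data.Map.Strict as Map

    -- fold each item into the accumulator stored under its key,
    -- seeding a fresh accumulator from z when the key is new
    groupReduce :: Ord k => (a -> k) -> (b -> a -> b) -> b -> [a] -> Map.Map k b
    groupReduce key step z = foldl upd Map.empty
      where upd m x = Map.insertWith (\_ acc -> step acc x) (key x) (step z x) m

    -- word counting: keys are the words themselves, accumulators are counts
    countWords :: [String] -> Map.Map String Int
    countWords = groupReduce id (\n _ -> n + 1) 0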

Here is an interactive gist of how to solve the problem succinctly using group-by and then counting lengths:

  This is the TXR Lisp interactive listener of TXR 198.
  Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
  1> [group-by identity '(1 2 2 2 3 3 3 3 3 4 4)]
  #H(() (1 (1)) (2 (2 2 2)) (3 (3 3 3 3 3)) (4 (4 4)))
  2> [hash-update *1 len]
  #H(() (1 1) (2 3) (3 5) (4 2))
This leads to the following solution:

  (defun count-words (filespec)
    [(opip file-get-string
           (tok #/[^\s]+/)
           (group-by identity)
           (hash-update @1 len))
     filespec])
Easy to follow, and brief, but wastefully conses up lists just for the sake of obtaining their lengths.

> verbose and painful to understand.

Doesn't seem honest. I don't know Python, yet I can understand what that is doing, and might even be able to spot a logic error, if it had one (though not some issue of syntax).


Would you feel weird writing add(a, 2)? or (add 2 3)?


I might go for sum(2,3)


In lisp, nearly everything is written in the same form. I.e. everything either is a function call or looks like one. Once you know that, you pretty much know lisp syntax.

People freak out about the parens. But I think: what's the big deal? Functions are called with the paren in front of the function and the args space-separated. So f(x, y) in C-like syntax becomes (f x y) - whoop dee do, not so hard.

And since the syntax is extremely consistent, things that are math "operators" in other langs are just functions in lisp.

For "(+ 1 2 3)" + is actually the name of the function, so we are invoking + function and 1 2 3 are the arguments. Returns 6.

Once you know that, and that function args are evaluated before being passed into the function, you no longer have to worry about operator precedence and parentheses to control order of operations. Once you get used to it, it's actually a lot simpler.

(* 2 (+ 5 5)) => 20


> In lisp, nearly everything is written in the same form. I.e. everything either is a function call or looks like one. Once you know that, you pretty much know lisp syntax.

Actually not. Lisp has several types of forms. A function call is one. There are at least two others: macro forms and special forms.

function form with +

  (+ 1 2)
macro form with DEFUN. The syntax for DEFUN is:

  defun function-name lambda-list [[declaration* | documentation]] form*
The function name is either a symbol or a list of the form (setf name). The lambda-list has complex syntax with optional, aux, rest and keyword arguments; declarations have a complex declaration syntax, etc...

The third type of form would be built-in syntax. LET is an example. The syntax for LET is:

  let ({var | (var [init-form])}*) declaration* form* => result*
This means that

  (let ((a 1)
        b)
    (setf b a))
is a valid program.

This, however, is not a valid Lisp program, and Lisp will complain about it:

    * (let ((a 1 2)
           b)
      (setf b a))
  ; in: LET ((A 1 2) B)
  ;     (A 1 2)
  ;
  ; caught ERROR:
  ;   The LET binding spec (A 1 2) is malformed.

Reason: Lisp expects that bindings are either symbols, lists with a single symbol or pairs with a symbol and a value form. (a 1 2) thus is not valid.

Is Lisp syntax easy? Once you look deeper, it actually isn't. Only the core language, with just function calls, is relatively easy.

* Lisp programs are written on top of a data structure called s-expressions. S-expressions are easy; Lisp is not.

* Lisp usually has the operator as the first element of a list -> uniform appearance.

* Lisp has some built-in syntax. Usually as little as possible.

* Lisp lets the developer add new syntax via macros -> zillions of macros and the user needs to learn a few patterns to understand Lisp code.


Note I said "or looks like one".

If you squint a bit, macros and built-ins are similar in structure to functions (they're all s-expressions).

I agree with much of what you said, but I maintain that Lisp syntax is still relatively trivial compared to most languages.

I also don't think exposing newcomers to macros in a language that is homoiconic is terribly useful until they're already comfortable with the basic s-expression syntax.


> If you squint a bit, macros and built-ins are similar in structure to functions

Because they have a parenthesis in front and back and the operator as first element. Macros then implement complex code structures between those:

  (foo a b)
and then

  (let ((a 10)
        b)
    (declare (type (integer 0 100) b)
             (type number a))
    (declare (optimize (speed 3)))
    (declare (special a b)
             (dynamic-extent a))
    (prog ()
      start
       (setf b (+ a 10))
       (go end)
      end)
    (the integer (+ a a b)))
Now remove the parentheses. Looks like a program in a typical language.

Macro forms have lots of internal structure. For an extreme example check the syntax definition of the LOOP macro: two pages of EBNF syntax declaration. That there is an opening parenthesis and a closing one does not suddenly remove the syntax:

   loop for i from 1 below 10 by 2
          and
        for j from 2 upto 50 by 3
        when foo(i, j)
          collect i into is and j into js and i * j into ps
        when bar(i) > baz(j)
          return list(is, js, ps)
or the Lisp version:

  (loop for i from 1 below 10 by 2
          and
        for j from 2 upto 50 by 3
        when (foo i j)
          collect i into is and j into js and (* i j) into ps
        when (> (bar i) (baz j))
          return (list is js ps))
Does it LOOK like it has less syntax? Not really.

> lisp syntax is still relatively trivial compared to most languages

Not really. Check out the concept of a code walker in Lisp. That's a tool which understands Lisp syntax and can walk over Lisp code and do transformations.

Here is one:

https://gitlab.common-lisp.net/cl-walker/cl-walker/tree/mast...

Mildly complex... complexity comes in, because Lisp has built-in syntax transformations via macros.


Fair enough, I see your point.

Though stylistically my personal preference is for the macro to remain as simple as possible with limited syntax introduced.


First of all, the Blub paradox is wrong. It assumes that languages can be ranked on a one-dimensional axis labeled "power". That assumption is wrong.

To see why it's wrong, think about Lisp and Haskell. Users of both languages are sure that they're at the top of the power curve, they're sure that they're looking down when they look at the other language, and they're sure why they're looking down. "How can you get anything done in [Haskell|Lisp]? It doesn't even have [macros|a decent type system]!" But if both are sure they're looking down, then languages can't be well-ordered by "power".

Next, that syntax. I suspect that that kind of syntax "clicks" with some people, and not with others. (Almost everyone could learn it, but that's not the same thing.) And I think you're right that "the masses" - the vast majority of people - are people to whom it doesn't "click". This may be the real flaw of Lisp (and Haskell) - the syntax is just wrong for the large majority of programmers.

Note well: This is my pet theory. I have no data. All predictions guaranteed wrong or your money back.


I'm not sure there are many people who know both a Lisp and Haskell and still insist on even comparing them. The conclusion I see most often, when someone tries, is basically that "whatever, both are still centuries ahead of Java".

With a decent macro system you can implement a type-checker (CL, Clojure and Racket do), as sophisticated as you want. But you can also write a type-level interpreter (of Lisp, if you want) which would evaluate programs during compilation (I can't find the post anymore, someone was describing their job interview gone... weird...). The difference between the two, more than with other such comparisons, comes down to aesthetics and cultures.

IOW, you can't infer the lack of ordering based on two items having an ex aequo position.



IMO the problem with Haskell isn't syntax, it's jargon. Not only is the jargon dense, heady, and ubiquitous, but (in my limited understanding) it also doesn't necessarily correspond cleanly to math concepts of the same name.

I guess you could say that jargon is just another form of syntax.


Well, I thought about saying that with Haskell, it was semantics at least as much as syntax. By that, I meant what you meant, but I also meant more: I wonder if functional programming itself is a poor match to the way that most programmers think, and not just because they are untrained on FP.

But I didn't say that, because I thought it was a bit of a digression to my point, which was already a digression on phkahler's point, and the digressing has to end somewhere...


It's all good. I've rather enjoyed reading the responses. IMHO this type of thing is what really makes languages appealing or not to different people.


I’ve been casually following Egison for a while now. Its customizable pattern matching is really cool. Unfortunately I do think the lisp syntax does turn some folks off; whether this set of folks overlaps with those who would be interested in Egison otherwise, is another question.

I’ve also noticed that it “pivoted” to focus more on math; is that intentional?

Anyway, good to see it on HN again. Anything that pushes more pattern matching research is a plus for me =). Mainstream languages have barely started to adopt first-order pattern matching!


Thank you for your comment!

I started to implement a computer algebra system as a killer application of Egison's customizable pattern matching.

It allows users to customize the pattern-matching methods for mathematical expressions at a more primitive level than other computer algebra systems.


Looks like a Lisp dialect; why is it a separate language?


A language is a dialect with an army and a-- err a parser and a compiler :)


To me it looks like Clojure and php had a child together :grin:


This looks like an intriguing alternative to Julia. Last time I tried Julia, efficient multidimensional code generation was incomplete due to required work in the type system. I wonder how Egison's performance matches its expressiveness.


When did you last try Julia? Things are looking pretty great these days.


I love how it matches so much more. It would take me some time to wrap my head around the new possibilities.

I doubt I have time to learn this or use it but it would be cool to see a list of examples of things that are much more concise and elegant in this language so that it expands my thinking. Is there a list of examples with comparisons to traditional languages?


Interesting about the math integration with tensors and differential forms. Gonna have to read the linked paper.


Thank you for your interest!

Here is a link to a new paper that discusses the integration of tensor index notation, including support for differential forms.

https://arxiv.org/abs/1804.03140


This seems very interesting, I have only just skimmed it so far. Am I wrong in thinking this might relate to (representations for) Geometric Algebra?


The calculus of differential forms is a part of geometric algebra, so I think there is a relation. Egison's strong point is that it can handle tensor-valued p-forms, due to its ability to handle both tensor index notation and differential forms at the same time.


Cheers.


Your syntax is bad and you should feel bad.

Mandatory XKCD: https://xkcd.com/297/


Is there an academic language which focuses solely on performance and optimization, by building the program and data structures around the hot loop and optimal cache usage? Something Mike Acton would create if he wrote compilers?


A language called "Spiral" went by yesterday that I think does a little bit of that. https://news.ycombinator.com/item?id=17519138



