In CS literature, this programming model is called "stream computing."
I work on System S, the research name of IBM's InfoSphere Streams: a distributed, real-time (edit: soft real-time; so, high throughput, low latency, but not hard real-time with guaranteed deadlines) streaming system with an associated language. Another project in this area is Storm. See this comment thread for more on that: http://news.ycombinator.com/item?id=3193115
But Anic seems to be more related to the kind of streaming languages that came from the digital signal processing and embedded worlds. See, for example, the StreamIt project: http://groups.csail.mit.edu/cag/streamit/
In particular, a StreamIt tutorial: http://groups.csail.mit.edu/cag/streamit/papers/streamit-coo...
Note that Streams, Storm, Anic, and StreamIt all share the same underlying programming model, but Streams and Storm target a different area than Anic and StreamIt. Streams and Storm target the emerging area of "big data", where you need to distribute your computation across a cluster. Anic and StreamIt are lower level: applications such as video decoding are streaming in nature, but one typically implements them on a single chip, and often even in hardware.
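For readers who haven't seen the model, here is a rough sketch (in Python, not in any of these systems' actual APIs) of the common idea: operators connected by streams, each consuming items as they arrive and emitting results downstream.

    # A toy pipeline of stream operators. Real systems (Streams, Storm,
    # StreamIt) run such operators concurrently across threads, cluster
    # nodes, or hardware; plain generators here just model the data flow.

    def source():
        """Emit a stream of raw readings."""
        for reading in [3, 1, 4, 1, 5, 9, 2, 6]:
            yield reading

    def smooth(stream, window=3):
        """Operator: sliding-window average over the input stream."""
        buf = []
        for x in stream:
            buf.append(x)
            if len(buf) > window:
                buf.pop(0)
            yield sum(buf) / len(buf)

    def threshold(stream, limit=4):
        """Operator: pass along only values above a limit."""
        for x in stream:
            if x > limit:
                yield x

    # Wire the operators together; values flow through as they arrive.
    for value in threshold(smooth(source())):
        print(value)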
Standard disclaimer: my views are not official IBM views.
I've been greatly interested in ANI-like dataflow programming languages for a long time now (though maybe not with ANI's crazy syntax). I would encourage you to give it a try! Don't let the existence of other languages hold you back (and in any case, ANI can be considered dead).
Take it and improve on it. I'd also experimented with puzzling out a syntax for a language like this, and it's very exciting to see something that actually does it. Read through the tutorial. The syntax looks awful but is actually very coherent once you get into it. As an aside, getting there first doesn't matter; doing it best is what matters in the long run. Apple, Google, Facebook, etc.
Yeah, I'm having to fight really hard to continue giving the language a shot after seeing that syntax. Let's see, from their Dining Philosophers example:
[]{
Already, the syntax is starting to look a little strange, but OK, I'll give it a shot.
id = [int\];
Hm. Backslash seems to be used for something other than escaping. Not a good sign.
=;
Huh? The equals operator is being used alone, with nothing on either side? That's really stretching the bounds of convention.
-> --> <-> <-
We have various types of arrows that are used as syntax. In some cases, they're used as binary operators, in some cases as unary. Doubling up of dashes appears to change the meaning. This is really not looking good.
And then there's the example you mention. Besides how it looks, their example of solving a useful concurrency problem is "a bug-free, efficiently multithreaded real-time clock + infix calculator hybrid application"? Why would you even want that?
There may be something interesting here, but between their quite bizarre syntax and poorly chosen examples, they aren't doing a very good job of hooking me.
> Hm. Backslash seems to be used for something other than escaping. Not a good sign.
From the tutorial[1]:
> In ANI, \ means "latch". Basically, a latch is a place where you can "hold on to" an object of the specified type. A latch is like a box that you can put things into, take things out of, and peek at what's inside.
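That "box" description maps fairly directly onto a thread-safe container. A minimal sketch of that reading in Python -- my own model of the put/take/peek behavior, not ANI's actual semantics:

    import threading

    class Latch:
        """A box you can put things into, take things out of, and peek at."""

        def __init__(self):
            self._cond = threading.Condition()
            self._value = None
            self._full = False

        def put(self, value):
            """Put something into the box (waits if it's already full)."""
            with self._cond:
                while self._full:
                    self._cond.wait()
                self._value, self._full = value, True
                self._cond.notify_all()

        def take(self):
            """Take the thing out of the box, leaving it empty."""
            with self._cond:
                while not self._full:
                    self._cond.wait()
                value, self._value, self._full = self._value, None, False
                self._cond.notify_all()
                return value

        def peek(self):
            """Look at what's inside without removing it."""
            with self._cond:
                while not self._full:
                    self._cond.wait()
                return self._value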
If it weren't for the long-running convention of using backslashes as escape characters, postfix backslash would make some kind of sense, at least visually.
Forward slashes are for prefixes, backward slashes are for postfixes. Nice parallel there :)
Aside from breaking the escaping convention, slashes in code give the page an unfortunate swirly quality. It seems like any line other than a parallel or perpendicular one can make the lines of the page look like they're at a different angle.
I'll never understand why people keep going out of their way to make new weird syntaxes. Is it really necessary? Does it really give something over well established C-like syntaxes?
At the very least, people should approach new languages from an ease of typing angle. I look at that and all I can think is "That would be a bitch to type out. Doing it all day? No thank you."
You could ask the same question about C-like syntaxes: was it really necessary to invent anything new after somebody had hit upon S-expressions?
Anyway, to answer in the affirmative: certain paradigms profit from certain syntax. E.g. Haskell would be much clunkier to write in a C-like syntax than with the ML-derived one it has. And Lisp's macros would be harder to pull off with a different syntax. In my opinion, Python also profits from its syntactic differences with C.
Somebody more knowledgeable could talk about SQL syntax.
> At the very least, people should approach new languages from an ease of typing angle. I look at that and all I can think is "That would be a bitch to type out. Doing it all day? No thank you."
Old languages would also benefit from that approach. We've seen some alternative syntaxes for JavaScript, but not really for something like C.
One part of me says "Don't reinvent the wheel", but the other part of me suspects that forcing a new language to follow the syntactical conventions of another language that was designed with different considerations in mind is a bad idea.
The best example of a language that I think suffers from its association with the syntax of another is C++. The features that it provides are all more or less fine, but cramming them into a syntax that was made for a much simpler language really wasn't a good idea.
(I understand why it was done from a historical perspective, and understand the few benefits it affords, but I think those are particular to that example)
On the other hand again, there really isn't a lack of syntaxes these days that a new language developer can look to for inspiration. Surely one of them should work fine most of the time.
This specific one is just bad, though. I mean, "\" is really difficult to type on almost all non-US keyboards. It's not even really easy on US keyboards (the pinky has to go a long way), and look how often it appears in there. Even standard C syntax is a pain on some non-US keyboards; e.g. on a German keyboard, the [, ], {, } are rather difficult to type. That's why I (being German) switched to an English layout around two years ago. Now I have trouble typing umlauts, and thus writing good German, but at least my coding speed improved by probably 50% and my hand hurts far less.
Now I'm interested to know which languages exist that don't rely heavily on backslashes. I guess I can't think of a language that doesn't use backslashes for string escapes. Some other uses off the top of my head (a couple of them are illustrated in the snippet after the list):
* C -- line-continuations (important for macros).
* Perl -- regex back-references.
* Haskell -- anonymous functions.
* Python -- line continuation.
* Tcl -- line continuation.
* Icon -- non-null test (unary), generator limitation (binary).
* Prolog -- \+ for not provable.
* J -- the prefix adverb and the grade-down verb (\:).
* TeX -- Yes.
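To make a couple of those roles concrete, here is how several of them coexist within Python alone:

    # Backslash as escape-sequence initiator inside a string literal:
    greeting = "line one\nline two"   # \n is a newline, not two characters

    # Backslash as line continuation at the end of a physical line:
    total = 1 + 2 + \
            3 + 4

    # Raw strings suppress escaping -- useful when backslashes are data,
    # e.g. in regular expressions or Windows paths:
    pattern = r"\bword\b"

    print(greeting, total, pattern)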
During my Perl experience, I learned that any special character next to $ is usually some magic variable that does something. So I would bet that $\ also does something.
edit: oh yeah.
$\ - The output record separator for the print operator. If defined, this value is printed after the last of print's arguments. Default is undef.
> Now I'm interested to know which languages exist that don't rely heavily on backslashes.
I posit that line continuations don't count, at least not in the "rely heavily" category.
Then, I am not sure how popular J and Icon are, so let's leave those out. Is the backslash really used that often in Prolog? I wouldn't say "heavily". So that leaves us with:
* Perl
* Haskell
* TeX
I think that's a more realistic "rely heavily" category.
What you use depends on where you use it. Inside of a match, \1 refers to what was matched by the first parens. After the match $1 refers to what was matched. In a substitution, \1 is special cased to mean $1, but that is frowned on.
Thus: /\b(\w+)\W+\1\b/ means "match repeated word".
And: /\b(\w+)\W+$1\b/ means "match word preceded by the word matched on your last match."
Moving on: s/Hello (\w+)/Goodbye $1/ means "Replace Hello followed by a word with Goodbye followed by the same word."
And finally: s/Hello (\w+)/Goodbye \1/ is special cased to mean the same thing, but is frowned upon.
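For comparison, Python's re module draws the same pattern-versus-replacement distinction (a quick sketch, not Perl):

    import re

    # Inside a pattern, \1 refers back to what the first group matched:
    m = re.search(r"\b(\w+)\W+\1\b", "catch the the repeated word")
    print(m.group(0))                                  # -> "the the"

    # In a replacement string, \1 inserts what group 1 matched:
    print(re.sub(r"Hello (\w+)", r"Goodbye \1", "Hello Dave"))
                                                       # -> "Goodbye Dave"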
There are various keyboard layouts that are superficially English but which allow various diacritics via dead keys, such that, for example, you can type AltGr+" followed by u to get ü. (I just did exactly that.) I know of us altgr-intl on Linux (that is, xkb), but there are similar layouts on OS X and Windows.
I solved that problem by binding capslock to Alt Gr and using a custom US layout with Alt Gr plus e-[;' bound to €ßüöä. This basically allows me to quickly switch to the German layout when necessary without losing much speed.
I guess if LCtrl and their LMod3 were swapped (ie so that capslock is used as Ctrl for shortcut keys), I could see this being an interesting layout. It has all the common keys either on the home row or easily reached from the home row. I can't ever see myself switching from Colemak though, I like how the fingers roll across adjacent keys too much.
(Plus since I don't type much German, I don't need quick access to üöäß, though an English-centric variant of Neo would be.. interesting, maybe using those keys for the most common programming symbols?)
It's the other way around, language designers go out of their way to conform to C-like syntax in order to cater to programmers of C-like languages.
C's syntax is fairly low-level, which makes it fairly easy to map to assembly, but it's not very well suited to representing high-level constructs (see C++).
// remove more clutter
#define O printf
#define R return
#define Z static
#define P(x,y) {if(x)R(y);}
#define U(x) P(!(x),0)
#define SW switch
#define CS(n,x) case n:x;break;
#define CD default
There is obviously a lot of effort put into it, the idea behind it is cool, and he shared it publicly. That's very good.
But an unfortunate part of sharing is also receiving criticism.
Sometimes criticism is stronger when the initial expectation is higher. I think most people clicking that link got excited about all the features they read about first, then got disappointed when they saw that piece of code.
So if his job is not to babysit other techies, i.e. he doesn't care about how others see this project, then he wouldn't mind the criticism either, right?
On the other hand there's something to be said for terseness. Ever read a math or CS paper that's full of equations and other graphical shorthand? To the uninitiated, it's gibberish. Once you learn the conventions, it's a way to communicate complex formal ideas with very concise notation.
I'm not saying this language does a good job at that, but to dismiss something interesting just because of syntax prejudice is shortsighted.
Pretty much every CS paper I've read that was full of equations could have easily done without them. There certainly are sub-fields of CS where complex equations in papers are justified, but far too often it is laziness, and it hides assumptions or imprecise descriptions that make implementing the described methods harder.
Mostly, when I come across complex equations in CS papers, I tend to skip over them and only go back to look at them if there are parts of the paper I can't make sense of without them - it is very rare to find that they are necessary at all.
The cases where I find I need them are usually a sign of trouble - it tends to mean there'll be a lot of guesswork to figure out parameters and parts of the algorithms that are not spelled out in the paper at all. But usually the same ideas will be expressed in English, code or pseudo-code in much simpler ways.
My research for my MSc involved a bunch of papers on error correction in OCR, including a ton of image processing and statistical analysis, and not one of the 50+ papers I reviewed actually depended on the equations present in them for understanding the ideas. But I quickly learned to appreciate the ones that were light on equations, for the seemingly substantially higher odds that the algorithm descriptions would be pretty much complete and precise.
> There certainly are sub-fields of CS where complex equations in papers are justified, but far too often it is laziness, and it hides assumptions or imprecise descriptions that make implementing the described methods harder.
But without such obfuscation, how would CS PhDs retain their competitive advantages?
I mean, if anyone could just read your paper and actually implement the algorithms that you talk about there without having access to your code base and the real details that you didn't publish, then they might scoop you on the next (quite obvious) iterative improvement to your algorithm without having to do two years of preliminary work. And then you'd only get one paper out of it, whereas by obfuscating the hell out of the thing, you can milk it for five or six.
> Mostly, when I come across complex equations in CS papers, I tend to skip over them and only go back to look at them if there are parts of the paper I can't make sense of without them - it is very rare to find that they are necessary at all.
Obviously any equation can be expressed in words, but those who are familiar with the notation are able to read the equations and understand the ideas in a paper in a fraction of the time. This is important for those who read papers regularly.
Maths is not about conventions; it is a dynamic writing style. All operators get replaced by adjacency after a few lines, as writing them out is boring. Computer verification of proofs is very hard, partly for notation reasons.
Apparently, you didn't read the line above the code saying that this is a compacted version. If you removed the whitespace from C code, you would get something that is even worse.
I guess he was trying to demonstrate what you can achieve with a small number of characters. It may not be a good idea from a marketing perspective, but come on... "Saw that, closed the page" is simply narrow-minded.
It's not a good idea from any perspective. In my experience, the only people who cram as much logic into as few characters as they possibly can are beginner to intermediate level programmers who are in that awful phase where they think they know a lot more than they do.
It's just a little dense, to demonstrate the succinctness of the language. The two samples above are normally dense. I really don't see the problem. It's not clever to make a judgement about a language (whose syntax you don't know) just because you don't like some sample of it.
I think some of the language's concepts are pretty neat and unusual.
See, I think a language is a tool, and some judgements about a tool are more rational -- so I appreciate all the features listed about it; it is very unique. Other judgements are more subjective -- "oh, I don't like to use curly braces" or "stupid Emacs keybindings are breaking my pinky finger" etc. -- and those are also very important decisions when picking a tool.
I just highlighted that the choice to present the most obfuscated and hard-to-read piece of code (and by the number of upvotes, it seems most agree) on the front page of the language is not helping appeal to that second (subjective) part; it drives people away before they even get to click on the tutorial (which explains what is what).
But after skimming over the tutorial my impression is that the concepts are so simple there's even less excuse for that horrible syntax.
Uncomfortable syntax can be excusable if you have a clear rationale for it, but I can't for the life of me see why this syntax would need to be this awkward.
Agreed, but the tutorial is actually quite good and the syntax (mostly) makes sense to me now.
What I don't understand: a stream that unlatches a "variable" (a thing bound to a latch) executes once each time the variable is set, but it's not clear why the same code bound to a constant doesn't loop infinitely.
Of course, every new language should look exactly familiar to us users of existing languages.
God forbid we should need to learn any new syntax or *shudder* concepts! The old ways are the best ways.
If the example code doesn't look INSTANTLY, IMMEDIATELY awesome and amazing, it isn't worth a second glance. Even the first glance was a horrible waste of time!
I admit I was snarky and maybe jumped to conclusions ;) But it sounded like OP took a look at the syntax, had a quick visceral reaction to it and closed the tab.
The syntax put me off at first, and still bothers me a bit (something to do with too much symbolic sugar and existing familiarity with escape chars)
That said...
I feel like this streaming syntax might be better suited as a meta-programming framework of sorts... build the more intricate objects/modules in a language such as C/C++ or Java, and use these stream/latch metaphors to orchestrate those modules.
Yes, you can already do this with *sh (largely the point of Unix pipes), or directly in C/C++/Java (e.g. Storm), but ANI seems to provide a really rich interface for this kind of orchestration. It has many of the right primitives.
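A minimal single-process sketch of that orchestration idea, with plain queues standing in for the streams (none of this is ANI's or Storm's actual API): each "module" is an opaque function, and the orchestration layer only wires channels between them.

    import queue
    import threading

    def module(func, inbox, outbox):
        """Run an arbitrary function as a stream-processing stage."""
        def run():
            while True:
                item = inbox.get()
                if item is None:          # sentinel: shut the stage down
                    outbox.put(None)
                    break
                outbox.put(func(item))
        threading.Thread(target=run, daemon=True).start()

    # Pipeline: strip -> upper -> bracket, Unix-pipe style but in-process.
    a, b, c, d = (queue.Queue() for _ in range(4))
    module(str.strip, a, b)
    module(str.upper, b, c)
    module(lambda s: "[%s]" % s, c, d)

    for line in ("  hello ", " world", None):
        a.put(line)
    while True:
        out = d.get()
        if out is None:
            break
        print(out)                        # -> [HELLO], then [WORLD]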
It would help if fellow HNers read through the content a bit before upvoting something quickly. Here is a project which was last updated more than 2 years ago (no changes in source/tutorial/wiki in 2 years). There's no working implementation to support the claim. Any sane programmer would highly doubt a (faster_than_C && safer_than_java) claim. Why are we as a community becoming more and more obsessed with sensational link-bait?
Agreed: http://code.google.com/p/anic/source/list - there have been no updates since 2010. I was honestly surprised to see this on HN and actually had to double-check that I wasn't seeing an old HN submission for some reason.
I was briefly involved in this project; I wrote some code for instruction selection, was active on the mailing list, and had a few lengthy discussions about dataflow programming with Adrian/Ulitmus. Last I heard, in early 2011, he was still working on it, but in private, and he had changed focus somewhat to something even more ambitious. I voiced my concern over raising the bar before the first, simpler version was released, and over feature creep, but I guess his mind was made up. I haven't heard anything since, despite trying to reach him a couple of times :-(
So, from this, I would say that ANI can safely be assumed dead unless a working compiler is surprise-released.
Same here, I had some discussion with the creator but it looked rather doomed from the start. He was worrying about parser optimization, logo design, and interactive shells, when there wasn't even any (hand-)compiled program or proof of concept of the semantics...
At first I thought this was a joke due to some of the copy, but after reading through the tutorial a bit, it seems pretty interesting. Less hyperbole would be nice, though. From the FAQ, ANI (the language; anic is the reference compiler) is "faster than C" because its code is automatically parallel. Not exactly what I think of when someone says "faster than C".
Nevertheless, for a modern take on a dataflow language, ANI is intriguing. If nothing else, the paradigm is probably different enough from imperative/OOP/functional that it is worthwhile to learn even if ANI doesn't take off.
> Try to imagine, if you will, the amount of time and effort it would take you to write a bug-free, efficiently multithreaded real-time clock + infix calculator hybrid application in a language like C.
How much are you betting? A "multithreaded real-time clock + infix calculator hybrid application" sounds useless and stupid in any language. No one who can code would be stupid enough to write such a thing, especially in C, and especially as a freely available library.
Why, oh WHY did they make extensive use of the backslash character? Backslash is almost universally used as an ESCAPE SEQUENCE INITIATOR. Any other use is just going to be confusing. Especially when you end up making constructs like "(\a/\b)", or having to context shift because of string escape sequences like "\n".
> Q: Why are backslashes (\) used as language operators? Isn't that confusing, given they're used in other languages as escape characters?
> A: This is a valid point, but backslashes were chosen for a purely pragmatic reason; on virtually all keyboards, backslashes are very easy to type (requiring only a single keystroke). This is a handy property for backslashes to have because in ANI, you'll be typing them a lot!
> Incomers from other languages might be thrown off a tiny bit, but a programmer that's spent some time with ANI will quickly come to realize that there is actually never any good reason to end a line of ANI code with a syntactical backslash! If one insists on doing so anyway, they are writing ill-formatted code that would be confusing regardless of how backslashes are interpreted by the language. Thus, the backslash conflict is there in theory but irrelevant in practice.
> The usage of \ in the language syntax is a thought-out practical compromise, though the issue may be reconsidered in the future depending on programmer feedback.
> on virtually all keyboards, backslashes are very easy to type
That's such a typical US-centric attitude. On (most?) European Apple keyboards, "\" is alt+shift+7 or some similar Vulcan death grip combo. Not exactly "very easy to type".
Yeah, right. I guess the 100 million or so French-speaking people don't amount to much. On French keyboards it's Ctrl+Alt+"<" (which is at the bottom left of the keyboard) or, when available, "Alt Gr"+"\".
Going out of your way to produce such a small and dubious improvement is always a bad idea, imho.
If you really think C syntax is a pain point, the sanest thing to do is to go with textual keywords like in Ruby or Lua.
Keywords are reasonably easy to type (I'm much faster typing plain text without special characters), simple to auto-complete, and they read very well. For example, they could have used something like "to" instead of "->", e.g.:
"Hello World" to std.out
That's just an idea I had on the spot; not sure how it would play out. But I think this stream programming would actually lend itself well to some sort of literate syntax.
This is indeed what I meant. Typing "end" is, for me at least, way faster than typing "}". The reason is that "end" does not require modifier keys, and that all the letters are relatively accessible in the middle of the keyboard. (If you don't speedtype, this might not make as much of a difference.)
Thanks for the elaboration. I might even agree about the typing, even though e.g. `end' requires one keypress more than } for me. But I find that punctuation stands out more. From a practical point of view, I prefer reading Haskell's
\arguments -> body
to Scheme's
(lambda (arguments) (body))
because it stands out more. What gives me as the reader an even better hint without getting in the way, is indentation. That's why I prefer that to e.g. curly braces for reading. (Writing, especially in non-programming editors, like a webform, is easier with explicit markers like `end' or } though.)
You do know that "\" doesn't even exist on Japanese keyboards, right? It was replaced by "¥" a long time ago.
If you were serious about making things easy, you'd look at all the common keyboard layouts (not just American ones) before choosing a "thought-out practical compromise" that is of dubious practicality.
They should really try that on a German keyboard. On mine, backslash is not even listed on the keys as an icon. (Same for Swedish, Norwegian, or Dutch, I think).
> anic is the reference implementation compiler for the experimental, high-performance, implicitly parallel, deadlock-free general-purpose dataflow programming language ANI.
> In short, ANI seeks to break out of the shackles of imperative programming -- a stale paradigm which for four decades has produced hundreds of clones of the same fundamental feature set, none of which offer intuitive hands-off concurrency, and differing only in what lengths they go to to sugar-coat the embarrassing truth that they're all just increasingly high-level assemblers at heart;
Haha. The entire point of programming is to tell the hardware what to do. Any programming language that is not 'high-level assembly' has severe leaky-abstraction problems. The reason C and C++ still enjoy so much success despite their limited syntax is that they stay true to the hardware and don't force another layer of abstraction on you.
Please, the next person who designs a language (which is often enough a US person, I guess): CHECK KEYBOARD LAYOUTS IN EUROPE FIRST.
Really. It's a pain to type []{} and \ on a German keyboard, especially if it's supposed to be like every second character! Stop it. Please. Especially when there are proven languages that can cope without all this sh*t (yes, I am looking at Python here).
Edit:
Although this might be an interesting language, there is no way I am going through this syntax just to "check it out". A lost opportunity to gain a new community member.
The front page claims there's a Pascal dining philosophers implementation on the Wikipedia page. Except there isn't: there's a link to the article about Rosetta Code. After following that link, and then clicking around some more, I still haven't found this Pascal version to compare...
Anyway, the reason I ask is that I wanted to know what this line does:
That line has no counterpart outside of ANI. At first I thought it was some kind of guard or synchronisation point, but according to the Yacc grammar, it just separates “instructors” from “outstructors”[1]. Like the rest of the syntax, it’s not terribly clear at first glance.
Right, because the primary concern we have today is high-performance uniprocessor computing. Or maybe not. Does it seem to anyone else that almost all of the people creating new languages lately are generally tackling the wrong problems?
This same repo was posted here in November of 2010. I started following it, but the owners quickly got overwhelmed with other projects, or work, and it all lost steam. I deleted my google group membership yesterday. Funny this pops up now.
If you're going to complain about the syntax, complain that everything is left-to-right (except math inside expressions) except initializing variables which, for some reason, is right-to-left.