How Flow-Based Programming Could Save The Sanity Of Web Developers (fastcolabs.com)
148 points by robbyking on Aug 23, 2013 | 150 comments



>> "What we need is not more programmers. What we need is to enable non-programmers to participate in the creation process, not just the ideation process," says Kenneth Kan, CTO of a company called Pixbi and a recent convert to flow-based programming.

I disagree with this. It might work fine for small programs, but it will quickly fall apart for anything of significant size. National Instruments' LabVIEW [0] offers a graphical programming language called "G". G makes it easier for non-programmers (usually other types of engineers) to create the software they need. However, as their programs grow, they can easily get out of hand and become a tangled mess if the authors don't understand good program design. I am all for getting more non-programmers programming, but this method does not remove the need to understand programming fundamentals.

In my experience with LabVIEW, it was really great for making things like concurrency easy, and for quickly building GUIs to control hardware. I like to think that graphical programming makes some "hard" things easy and some "easy" things hard (it's a bit labor-intensive to enter complex mathematical equations, though there are blocks that let you drop C code into the diagram).

Like anything else, the methodology has its advantages and disadvantages.

[0]: http://en.wikipedia.org/wiki/LabVIEW


I've been using LabVIEW and I agree with you. LabVIEW has great hardware support which makes it really easy to get up and running collecting data.

But like you said, I've looked over some previous LV projects that have multiple copy and pasted sections, terrible design, etc.

As a software engineer, LV holds you back in my opinion. I don't doubt for a second that some really amazing things can be created with LV but ultimately it is extremely tiring/annoying to reinvent the wheel to accomplish things that would be extremely simple in almost any other programming language. This seems to come into play when you move past the simple act of collecting data and doing primitive processing.

Edit: The other thing I forgot to touch on was the presentation of the "code". With wires running all over the place and ambiguous function blocks, you quickly find yourself wishing you could cozy up with a text editor instead. The main reason I don't think dataflow programming will "take over" all programming is that ultimately I think it's an inferior presentation for professionals.


As someone who has fallen into the unfortunate position of being the local LabVIEW expert, there's a little trick you might want to know. Pretty much all of the awesome hardware support is contained in the DAQmx library. The C API for this library can be reached through nicaiu.dll (or the corresponding dynamic library format for your OS). By linking in here, you get all the wonderful hardware support while being able to write code in your favorite language with your favorite editor and commit it to your favorite source control.

I've been slowly translating all of our LabVIEW code out of G, which has both made my life vastly simpler and allowed other coders to work on it without learning G.


Good tip! I was aware there were some options along this line, but unfortunately using LV (more specifically, "G") was a project requirement.

You briefly touch on another point that is worth repeating. Source control with LV is terrible.


Source control is fine in NoFlo's JSON graph format or DSL.
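For anyone wondering what you'd actually be diffing: a NoFlo graph serializes to JSON roughly along these lines (a sketch from memory, not the authoritative format, and the component names here are made up):

```json
{
  "properties": { "name": "Count lines containing a word" },
  "processes": {
    "Read":    { "component": "filesystem/ReadFile" },
    "Filter":  { "component": "strings/Grep" },
    "Count":   { "component": "packets/Counter" },
    "Display": { "component": "core/Output" }
  },
  "connections": [
    { "src": { "process": "Read",   "port": "out" },   "tgt": { "process": "Filter",  "port": "in" } },
    { "src": { "process": "Filter", "port": "out" },   "tgt": { "process": "Count",   "port": "in" } },
    { "src": { "process": "Count",  "port": "count" }, "tgt": { "process": "Display", "port": "in" } }
  ]
}
```

Rewiring one connection touches one line, so ordinary line-based diffs stay readable as long as the file is formatted consistently.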


So, are you saying when I diff I am looking at textual representations of the graphs?

Which is fine, that is what I'd expect. It's how I'd implement it. But isn't that a huge clue about the fundamental weakness of graphical representation - as soon as you try to do anything nontrivial it's back to text. And back to text for a very good reason, not because the right git plugin hasn't been written yet.


One could write a real 2D graph diffing system, it is not inconceivable, at least. However, you won't get this for free by diffing a textual representation of the graph.
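Even without layout information, a useful first cut falls out of treating each version as sets of nodes and edges (a toy sketch in Python; the genuinely hard part a real tool would add is matching renamed or moved nodes between versions):

```python
def graph_diff(old, new):
    """Diff two graphs given as (nodes, edges), where nodes is a set of
    node names and edges is a set of (source, target) pairs."""
    old_nodes, old_edges = old
    new_nodes, new_edges = new
    return {
        "added_nodes":   new_nodes - old_nodes,
        "removed_nodes": old_nodes - new_nodes,
        "added_edges":   new_edges - old_edges,
        "removed_edges": old_edges - new_edges,
    }

# A rewires its output from B to C
old = ({"A", "B"}, {("A", "B")})
new = ({"A", "C"}, {("A", "C")})
print(graph_diff(old, new))
```

This ignores ports and attribute changes, but it already answers "what got rewired" without ever touching the textual serialization order.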

One simple hack is to take each version, set transparency to 50%, and then overlay them.


In a similar vein to LabVIEW, I've used MathWorks' Simulink and its Stateflow FSM environment quite a bit over the years. I've found that the most effective way to use it is to have your functions written in standard textual code, with the control flow (reactive/parallel/etc.) connecting the functions using the graphical wiring environment. That looks similar to what NoFlo is proposing, which is a promising sign IMHO :)


It's nice to see other people bothering to understand the benefits of this rather than snarking away.

I once fell into the unfortunate position of having to program proprietary building control systems made by a company called Crestron. It's terrible, overpriced shit, but at the core is a neat little Flow-Based Programming tool built on some proprietary C-derived language. What amazed me about it was how quickly we could teach people with little to no programming experience how to reuse existing functions for new projects just by altering the flow between components and editing parameters in functions. It was crazy.

Naturally, every now and then custom code had to be written, but then you would just create a new function, and add it to the pile of other functions that could be wired together.

I'm very interested in NoFlo, and can't wait for this tool to come out.


One advantage I've found of this approach is that the graphical overview of the wiring makes spotting architectural 'code smells' easier, which makes it easier to refactor as you go instead of realizing the poor structure farther down the line (i.e. too late), when the thing is released and you start running into issues caused by the architecture.


My experience also. I've been working in various test and manufacturing automation roles at HP and I've seen some complex stuff done in LabVIEW that's very hard to read and practically impossible to hand off to a different engineer.

The typical LabVIEW user here is an experienced EE with limited SW engineering skills so I can see how a different "paradigm" might be appealing. As we all know, though, maintenance cost and sometimes performance also counts.

Also, I'm not sure why representing a program as data structures flowing between reusable components is described as something different or new. I think the major contribution of FBP, if such a thing really exists, is the GUI used to create these programs. Surely the fundamental building blocks have been programmed in a more traditional way.


"I am all for getting more non-programmers programming..."

May I ask why? I completely disagree with that statement and all the initiatives pushing that agenda.


Imagine a world in which most of the important ideas are in printed form, and only a priestly elite can read or write. Wouldn't you want to increase the base of literate citizens?

Today, most of the important ideas -- their genesis and dissemination -- are in computer form. I would argue that making everyone a programmer is not the goal, but widespread computer literacy, a familiarity and comfort with computer use, database searches, research skills, is the modern parallel to encouraging print literacy in years past.


We already live in that world, and it's been that way for centuries.

If all the automobile designers and mechanical engineers were to vanish today, do you think lay people would be able to hop into the driver's seat and take over the job? The same is true for nearly every industry, and always has been.

The only reason we've come this far as a civilization is because we record what we learn and pass the knowledge on to future generations of people who are interested in that knowledge.

Not everyone has the same interests, and barring an apocalypse, no one needs to know it all at once. Programming is no different.

Sure, things like reading, computer literacy, math, biology, basic problem solving, etc. are important base skills for everyone to know. Once those foundations are in place, delving further is a matter of preference, and not everyone should be pushed to learn computer science any more than they should be pushed to learn quantum mechanics.


To be fair, an awful lot more people know how to read and write now than even 50 years ago, and I think the internet is bringing that number even higher.

I'm astounded at how much better the spelling and grammar of the commentariat has gotten since the early days of ubiquitous internet. Most people seem to be reading something every day (even if it's about Beyonce), and that means their ability to read about what they want to know more about has gotten better. The effect of this newly capable public on entrenched power structures can't help but ultimately be positive.

I understand how the same case for computing could be made.

>Not everyone has the same interests, and barring an apocalypse, no one needs to know it all at once. Programming is no different.

knowing everything at once != knowing how to program

We're not that smart, people. The same case could be made for not teaching people how to cook, or drive a car. What makes something a "base skill"?


> We already live in that world, and it's been that way for centuries.

I assume you mean a world where some people know more than others. Fair enough, but in a democracy, that chasm cannot grow too large before a citizen's right to choose becomes meaningless.

> ... not everyone should be pushed to learn computer science any more than they should be pushed to learn quantum mechanics.

Not the topic. We've accepted that the ability to read and write are basic to a functioning modern life. It's now true that the ability to compute has the same status. That doesn't mean everyone needs to know how to write a computer program, but print literacy never meant that a person should be able to write a novel.


>I assume you mean a world where some people know more than others.

No, not exactly. I'm referring to the tendency for most people to be experts in one or two fields and mostly ignorant in others that are not related to their core competency.

>We've accepted that the ability to read and write are basic to a functioning modern life. It's now true that the ability to compute has the same status. That doesn't mean everyone needs to know how to write a computer program, but print literacy never meant that a person should be able to write a novel.

That's exactly my point. People should know how to use computers effectively, but that doesn't translate to "should know how to program" just like knowing how to read and write does not translate to "being a novelist", nor does learning math translate to "being a calculus professor."


> That's exactly my point. People should know how to use computers effectively, but that doesn't translate to "should know how to program" just like knowing how to read and write does not translate to "being a novelist"

No, people should know how to use computers which does translate to "know how to program" but not "be a professional software developer", just like knowing how to effectively function in a world with written language does translate into "knowing how to read and write" but not "be a novelist".


The vast majority of the computer-using population, on a daily basis, has zero need for programming in order to use a computer. The same cannot be said for reading and writing.

They do, however, need to know how to use a computer. This is an argument for computer literacy, not computer programming literacy.


If you can't make the computer do work for you, you're not really using it effectively. If you can only use programs others have written, you aren't really computer literate, you're just a monkey pushing a button. Computer programming literacy is the new reading. The vast majority of people who use computers now do need programming, they just don't realize it because they don't know they could be automating what they currently do manually.


Oh, and everyone who uses a computer should learn some basic shell scripting, it will make them vastly more productive regardless of their field of expertise.


The base level of computer literacy should go deeper than "how to use Word", to some level of algorithmic literacy.


Why? Most people can't even use a word processor or spreadsheet right now, let alone comprehend algorithms and programming.

Besides, we all drive cars and we all use refrigerators and air conditioners daily. Not everyone knows how those work or is able to fix them. If they did, we'd be a world of "jacks of all trades and experts in none." That's not what got us to this point as a technologically advanced civilization.

It's far more important for everyone to know how to drive properly and safely than it is for them to know how to perform maintenance on their cars. In fact, having everyone do their own car maintenance would probably be just as dangerous as putting programming into the hands of people who don't know how to avoid phishing attacks and computer viruses.


> Most people can't even use a word processor or spreadsheet right now, let alone comprehend algorithms and programming.

If you can comprehend written procedures for a task, you can comprehend programming (you might not be able to understand algorithmic analysis, but that's a different issue.)

People (especially people that are otherwise knowledge workers) that "can't" do that largely can't because they've been taught that computers are deep magic accessible only to an elite priesthood.

I don't think we are at the point where truly universal programming literacy is necessary or practical in the very short term, but just as with writing before it became something universally expected, I think we are at least at the point where we should start to see it as part of the basic repertoire of knowledge workers, even those whose primary job isn't producing computer code.


You might notice that the historical solution to this problem was "teach people to read and write (program)" not "express all ideas in pictographs so that the illiterate masses can understand them".

We improved classical literacy by turning people into readers and writers. We improve computer literacy by teaching people to become programmers, not by trying to dumb down programming into something the plebes can understand. I'm all for turning more non-programmers into programmers. The issue isn't that so much as the fantasy that, if it weren't for those dastardly geeks who want to be lazy and charge a ton of money for it, everybody could program with no real work, which is just pure bull puck.


The appropriate analogue here is not literacy, it's carpentry. That everyone makes use of furniture does not imply that everyone would benefit from taking the time to learn to build furniture themselves. It would be a good thing if furniture were made more durable and less expensive (although perhaps not such a good thing for furniture makers), but the solution to that is improving the tools and methods used by professional carpenters.


But just as you wouldn't want to read a novel written by someone who is merely literate in English, you wouldn't want your bank to use software written by someone who didn't have professional-level skills in programming.


> ... you wouldn't want your bank to use software written by someone who didn't have professional-level skills in programming.

That's the first step toward creating a programming priesthood, a very bad idea. Software is as software does. Good software can be identified without requiring its creator to have some credential attesting to his ability to craft "professional-level" code, any more than a writer needs a certificate attesting to his ability to write a readable novel.

For proof of this thesis, one need only look at Microsoft -- until recently the top of the programming profession, highest salaries, copious certificates and assertions of competence, and the worst software.

http://www.vexite.com/2005/ending-microsofts-cowboy-spaghett...


Sure - let me answer that with my post for the "Everyone does not need to learn to code" thread. When I say getting more non-programmers programming, I mean it from more of a practical standpoint.

My previous post:

"From a practical standpoint, I really wish everyone was taught enough programming to know how to automate basic manipulation of text data. I see way too many people editing long lists of data by hand, when there are much better ways to go about it (even Excel works great for this kind of stuff).

Just knowing the setup below can let you do some really powerful automation of manual tasks and can save you a lot of time.

Example template in Python:

  import csv
  with open('mydata.csv', 'r') as csvfile:
      reader = csv.reader(csvfile, delimiter=',', quotechar='"')
      for row in reader:
          # Data manipulation here
"

Link: https://news.ycombinator.com/item?id=6237430

I'd also recommend looking at Jach's post (the child post of my post in the link) - I thought he made some really good points/critiques about 'tool-based' vs 'concept-based' teaching.


I agree with you to a point. Yes, I think computer literacy is very important, and far too many people think of computers/tablets/phones as magic boxes that show pretty pictures when you click/touch things.

Your Excel example, to me, falls into that category. Learning to use a word processor or spreadsheet application is something that is increasingly necessary in today's world, and I am often surprised by how few people (including, shockingly, many software developers) know how to effectively use them. I just don't feel like taking it all the way to "everyone must learn to program" is a good idea, any more than "everyone must learn to perform open-heart surgery" is.

Based on your comments, I think we actually agree.


Because it will free them up to focus on what they're good at, and free up 'real' programmers to work on harder problems more appropriate to their background. At work I see many people struggling with 'simple' programming problems in all kinds of strange ways (often involving abusing Excel) that could so easily be solved in a dozen lines of Python.

Occasionally they end up coming to me to ask for help. Now they have to spend their time teaching me the intricacies of their domain, and then I have to spend my time coding up a solution, which might be wrong because of some subtle detail they felt was too obvious to mention. And now we're both stuck in a situation where every time something changes instead of just fixing it, they have to come to me and first explain exactly what they need to change and then wait for me to have time to fix it.

At the end of the day, this isn't a productive use of anybody's time.


I agree about making some easy things hard in my experience with Pure Data.

NoFlo's UI design makes it frictionless to drop in to edit any component's JS, or build a new component.


"In my experience with LabVIEW, it was really great for making things like concurrency easy"

Absolutely. Ever since my stint with G, I've longed for a general-purpose solution for handling flow control in concurrent code. If NoFlo avoids some of G's pitfalls, it could make an excellent addition to my programming toolbox.


If LabVIEW was perfect we might not have to build all this. Execution matters :-)

I've written about this in http://bergie.iki.fi/blog/noflo-kickstarter-launch/


I have two questions. First, what exactly is Flow-Based Programming? Emphasis on exactly. I know what dataflow is and I know who J. Paul Morrison is and I've even browsed through his book. I still can't figure it out, and the book (at least the parts I looked at) had too many diagrams and not enough code to clarify this most basic of questions. Every discussion on it seems hopelessly hand-wavey. What I want is the diffs between FBP and other computational models. Just the diffs, please! Or is it just dataflow? In that case, which version of dataflow? There have been many.

Second, we know from the history of these things that it's easy to sell managers and journalists on boxes-and-lines visual programming. The pictures look a lot easier to understand than reams of source code do, but that's always because the examples are trivial—invariably some variant of a box with 2 in it, another box with two in it, arrows leading from those to a box with + in it, and then an arrow leading to a box with 4 in it. These always turn out to be a siren song because they don't scale to application complexity. How exactly is FBP different? Emphasis on exactly.


The crude/simple explanation, I think, is to imagine your program as an Excel spreadsheet: Some cells on the spreadsheet contain your client-side data (i.e. your "model"). Other cells are used by a javascript library to decide what's in the DOM (i.e. your "view").

The "flow-based programming" model involves creating all the formulas that shuffle the data from the model to the view.

At least that's what I can gather from what I've read (and I agree it's always been a big fail in the past, though there may be some merit to this idea eventually in limited roles.)
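To make that analogy concrete, here's a toy version in a few lines of Python (a sketch only; a real system would track dependencies and recompute in topological order rather than looping over every formula):

```python
class Sheet:
    """Minimal spreadsheet-style dataflow: formula cells recompute
    automatically whenever any cell changes."""
    def __init__(self):
        self.values = {}
        self.formulas = {}  # cell name -> zero-argument function

    def set(self, cell, value):
        self.values[cell] = value
        self._recompute()

    def formula(self, cell, fn):
        self.formulas[cell] = fn
        self._recompute()

    def _recompute(self):
        # naive: re-run every formula on any change
        for cell, fn in self.formulas.items():
            self.values[cell] = fn()

# a "model" cell feeds a "view" cell, like data binding to the DOM
s = Sheet()
s.set("model.count", 2)
s.formula("view.label", lambda: "%d items" % s.values["model.count"])
s.set("model.count", 5)
print(s.values["view.label"])  # -> "5 items"
```

The "flow-based" part is that you never imperatively update the view; you only declare the formula once and push new data into the model.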


>>I have two questions. First, what exactly is Flow-Based Programming? Emphasis on exactly.

Let the confusion flow through you!


"The paradigm was so disruptive that it was suppressed by computer scientists for decades."

It didn't really get better from there.


I kept reading and reading through poetic descriptions of how valuable the method is, hoping to get to at least a short description of what the method actually is. Excitedly I clicked on "the company published FBP as a technical disclosure bulletin", only to get to the Wikipedia entry for "IBM Technical Disclosure Bulletin". Reluctantly I must conclude that the article is pure and content-less bullshit instead.


Yeah, I must have imagined the 1-2? computer science professors at MIT who had active data flow research programs when I showed up in 1979. The general opinion on campus was "this looks very interesting" but no one had yet figured out how to make it really work and be comprehensible.


And still haven't. The most interesting attempts, like Ed Ashcroft's and Bill Wadge's Lucid, seem to have withered. It's not clear to me whether that was because they didn't offer enough that was different, or it was too hard to make the implementations efficient, or both. And yet it all does still "look very interesting".


Well, we all know how reluctant scientists are to promote groundbreaking ideas, since all prestige in science comes from doing what's been done before.

No wait, scratch that. That's just in BizzaroWorld.


For people looking for something more substantial, you might enjoy this c2 page about FBP vs. Actors:

http://c2.com/cgi/wiki?ActorsAndFlowBasedProgrammingDiscussi...

The basic idea with FBP is that you have multiple processes that communicate over channels (kind of like actors, but the communication is unidirectional across those channels).
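You can get the flavor of that in a few lines of Python by modeling each process as a generator and each channel as the unidirectional link between them (a sketch to illustrate the idea, not how NoFlo or any particular FBP runtime is implemented):

```python
def read_lines(text):
    # source process: emits one packet per line
    for line in text.splitlines():
        yield line

def grep(packets, needle):
    # filter process: passes matching packets downstream
    for p in packets:
        if needle in p:
            yield p

def count(packets):
    # sink process: consumes the stream and reports a total
    return sum(1 for _ in packets)

# wire the network: read_lines OUT -> IN grep OUT -> IN count
text = "flow\nbased\nprogramming\nflows nicely\n"
print(count(grep(read_lines(text), "flow")))  # -> 2
```

Each "process" only knows about its own input and output channels, which is what makes the components reusable: the wiring lives entirely outside them.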

I'm not a fan of GUIs like this generally, but the idea of FBP is interesting in the same way that the actor model is interesting.


Aren't there like eleventy-bajillion enterprise tools that include exactly this kind of visual, dataflow-oriented method of constructing large systems by wiring together individual components?

It's not exactly an "arcane" model that's been lost in the mists of time since its use in "1970s banking software".

And it's very much not something that's been "suppressed by computer scientists for decades".


There are multiple communities of 'creative coders' who already work with this paradigm using tools like vvvv[0], MaxMSP[1], PureData[2] & others.

Closer(ish) to home we have QuartzComposer[3] available for OSX.

I like all of these tools but can't help feeling I'd be more effective if I could just get at the underlying code half the time (in cases where that's not an option).

[0] http://vvvv.org/ [1] http://cycling74.com/ [2] http://puredata.info/ [3] http://developer.apple.com/graphicsimaging/quartz/quartzcomp...


> I like all of these tools but can't help feeling I'd be more effective if I could just get at the underlying code half the time (in cases where that's not an option).

I've seen novices do cool stuff with Max without thinking that. Perhaps this feeling comes more from familiarity with textual coding than from something fundamental about textual coding.

On the other hand, we keep saying "a picture is worth a thousand words", but here we are writing blog posts in gobs! That suggests that text is what we train for, and is what we might be most effective with in the future too.


> I'd be more effective if I could just get at the underlying code

For something similar to MaxMSP/PureData but with code written in a text editor, you may wish to take a look at ChucK [1]. Instead of graphical edges or links between UGens, ChucK provides the '=>' operator.

E.g.

  adc => Chorus c => LPF lpf => Echo e => Delay d => dac;

[1] http://chuck.cs.princeton.edu


I took a class in ChucK once. It is moderately terrible and half-finished but I managed to write a patch once while blackout drunk, so I'd say it's pretty intuitive.


I have the same frustration with PD and QC.

NoFlo's UI design will make it easy to dive in and edit any module's source, as well as make a new module when code is easier than wiring. We are coders designing this tool for our own work.


Seconding Max/MSP, you can write C code in it thanks to "gen".


How does flow-based programming compare with functional reactive programming, like Elm [1] and Bacon.js [2]? Could those be made visual with a graph editor like this? Bacon.js has some flow diagrams [3] to describe some of its advanced concepts.

1. http://elm-lang.org 2. https://github.com/baconjs/bacon.js 3. https://github.com/baconjs/bacon.js/wiki/Diagrams


Also see Flapjax:

http://www.flapjax-lang.org/docs/

https://github.com/brownplt/flapjax/

The programming behind the JavaScript library is really solid, and was most actively developed from 2006 through 2009.

It never caught on, possibly because the mental model required for successfully hacking with "event streams" and "behaviors" is decidedly a functional one, and those concepts are quite abstract to begin with (more so than objects and prototypes). Also, studying the implementation is not for the faint of heart. On top of that, I don't think any big names got behind it and there wasn't exactly a marketing campaign.

Ahead of its time? Definitely. And it's probably still worth looking into as an alternative when trying to choose among the various reactive libraries that have popped up in the last year or so.


FRP is very similar in nature to FBP (even the acronyms are similar). The main difference is in packaging.

The original FBP systems were still 70s-style near-metal code within the component design, but the architecture described a protocol just sufficient for statically and asynchronously connecting the components together, and a runtime method that is amenable to the simplistic approach of having each component run round-robin until all return a "finished" signal. It's very much an "industrial engineering" perspective.

FRP, on the other hand, comes from the traditions of functional programming, so the evaluation-order computation is given a more academic treatment and the languages are given more explicit syntax, whereas FBP doesn't aim to describe itself much more deeply than the flowcharts. Both FRP and FBP have purity and immutability as core concepts.

So as I see it: different starting point, different packaging, same conclusions.


Elm could do this, and it would also benefit from a graph editor that knew about the types.


"Let's get non-musicians making music! We'll just give them a nice MIDI keyboard instead of that hard-to-use violin"

Sure, this will skip over some tricky bits of getting started with programming, just as a MIDI keyboard can make it easier to make violin sounds without having to learn all that pesky fingering.

However, as with playing the violin, those bits aren't really the hard part to learn; the easiest to use keyboard in the world won't make you a composer and flow based programming is not going to make you a programmer.


MIDI tools enabled more people to make new kinds of music.

Not everybody will be a programmer, but hopefully a slightly larger percentage of the population will be able to discover the power of algorithmic thinking, and hopefully they will work on some interesting problems.


The title is sensationalist. The "coding method" is simply data flow programming. Many languages provide data flow facilities nowadays. The reason graphical UIs for data flow languages have not become popular is that large programs become unwieldy to visualize and edit, whereas everyone knows how to use text editors.


It makes me wonder at what point JavaScript will become unwieldy enough that using such a system becomes a competitive trade-off. Of course, this is to the detriment of the actual security and extensibility of so-called web apps. It's a great opportunity to co-opt developers onto a platform with only industry-accepted functionality.


Here's the deal (and I'm sure that this is the same for most programmers): release something that uses this paradigm and lets me get something done, and I'll use it. It's getting something done that matters. That's the same motivation for me using Go (easy concurrency), Perl (easy text mangling), Lua (easy embedded stuff), and on and on.

If Flow-Based Programming is the next big thing then great. Let's get at it.

But please don't sell me bullshit like: "The paradigm was so disruptive that it was suppressed by computer scientists for decades."


I haven't read the rest of the article, but I got to that phrase and my disposition instantly turned from "huh, this is interesting" to "snake oil", whether deserved or not.

Reminds me of the recent "One weird trick!" article [0] and lines like, "the simple solution dietitians don't want you to know!"

Content aside, that kind of thing immediately sets off alarms for me.

[0] http://www.slate.com/articles/business/moneybox/2013/07/how_...


For me it's the fact that I've seen this whole spiel a lot of times already, in EVERY DETAIL. Visual programming will save us all. Nonprogrammers should be able to program. Suppressed, blah blah blah. For something so revolutionary, it sure is some awfully-well-trodden ground. And I don't just mean that this is an old paradigm, I mean this spiel is well-trodden ground.

In a nutshell, it doesn't work. Using Fred Brooks's venerable terminology, the essential complexity of programming is too difficult for nonprogrammers. Even if you entirely get rid of the accidental complexity... and visual layout does not do that, it's full of incidental complexity involved in the layout itself... you still can't get non-programmers into a programming situation, and trying to get them into a concurrent programming situation is just sheer insanity.

And we know it doesn't work because it has been tried. Over and over. Ad nauseam, as in, yeah, I'm sick of hearing these ideas as The Answer as if nobody has ever heard of them before. Tell me what's different about your solution and all the other people who have tried this.


I agree with you about the visual-layout-makes-programming-easy spiel, but it does seem like there might be a little more to flow-based programming than that. At least a few smart people seem to take it seriously (e.g. on LtU), and dataflow as a computational paradigm has a lot of research behind it and hasn't seen a whole lot of implementation. It's not out of the question that someone could apply it in a new way that would make some classes of application easier to build. I'm open to that and would find it interesting. But after having looked repeatedly for an intelligible technical explanation of FBP, I'm starting to think that the idea is either a trivial repackaging of a well-known model (like Unix pipes, or CSP with directed graphs) or is better shelved with the snake oil. The fact that it's typically accompanied by rather sensational language is not a point in its favour.

Edit: it's too bad that the programming model is getting lumped in here with the visual programming schtick. Those are two different things. (Of course the former would get no PR without the latter.)
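
The "trivial repackaging of Unix pipes" reading is easy to make concrete. Here's a minimal sketch using Python generators; it's purely illustrative and not related to NoFlo or classical FBP:

```python
# A dataflow graph as a chain of generators: each "component" consumes
# an upstream stream and yields a transformed one, like a Unix pipe.
def source(lines):
    for line in lines:
        yield line

def uppercase(stream):
    for item in stream:
        yield item.upper()

def numbered(stream):
    for i, item in enumerate(stream, 1):
        yield f"{i}: {item}"

# Wiring the graph is just function composition.
pipeline = numbered(uppercase(source(["foo", "bar"])))
print(list(pipeline))  # ['1: FOO', '2: BAR']
```

The point being: once you strip away the canvas, "route data between components" is something every mainstream language already expresses tersely.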


To be clear, I'm fine with their attempt to make any product they intend to make. I'm very skeptical about it due to the fact that it's been tried a lot, but hey, their money, their choice; maybe they'll make it stick, maybe they'll make a niche market work (LabVIEW, etc.), who knows. It's the rhetoric I'm objecting to. Which bodes poorly, IMHO; if they really believe this gibberish then it's more likely they'll make some very bad business decisions, such as resisting finding a good niche in favor of trying to take over too much stuff. There's a lot of prior art to learn from here, and if they've learned as little of it as that marketing spiel implies, their odds of success are very, very low.


I consider this an instance of stream programming.


Do you think of stream programming as a kind of dataflow programming? If not, what's the difference?

I suppose "dataflow" is an overloaded term, and the fact that its various meanings have quite a bit in common doesn't help.


Definitely, yes. In section 2 of this paper, I tried to explain how our language is related to what are typically called "synchronous dataflow" languages, or just SDF languages: http://www.scott-a-s.com/files/pact2012.pdf

Slide 11 of this talk makes the same point, but with fewer words: http://www.scott-a-s.com/files/debs2013_tutorial_slides.pdf


Data flow when used as a converse of control flow is quite well defined. Are you thinking in terms of how data flows across the system or in terms of control? Stream programming usually uses data flow at the top level and control flow in the atomic components connected together (we can have compound components that involve more data flow). Once you add control lines to your system, however, you are basically re-encoding explicit control flow as data flow.


I might have been unclear; what I meant was that the different programming languages or models that people call "dataflow programming" (spreadsheets, Lucid, Oz, FBP, etc.) have a lot of differences between them; and on top of those there is dataflow analysis in compilers, dataflow hardware architectures, and so on.


Data-flow is a broad term that doesn't mean one specific thing, but it is usually used correctly in what it means; e.g. data-flow analysis contrasts nicely with control-flow analysis. I don't find it confusing.

I believe FBP primarily deals with data flow, not control flow, but the "flow" in FBP seems to be left intentionally ambiguous.


This reminds me of my experiences learning programming and web development. For a while I used Drupal for everything, because it allowed me to 'program without programming'. The views module (now in core drupal, or slated to be, I think), is essentially a visual query builder.

What I discovered after a while, was that there was a tiny sweet spot where it was nice to do complicated things without writing code, but that it had huge drawbacks. Queries became ridiculous, for one.

And very soon, the stuff I'd build using the Views module was 'complicated' enough conceptually that a normal SQL query would not only be much faster, but easier and quicker to write. The fact that this query could be version-controlled and didn't live in a database was nice too...


It's interesting that business processes are often described as input, process, output. Whether a human or a machine carries out the process is another discussion. What's not unique here is that designing the process is itself just another process. The downside is that not knowing how to effectively scale a process is the harder problem, one that may require different, more scalable tools later.

Move fast and break things. Replace what breaks with stronger pieces until it doesn't.


Agreed. It's easier to teach someone python than it is to teach a programmer visual programming.


> Reminds me of the recent "One weird trick!" article [0] and lines like, "the simple solution dietitians don't want you to know!"

Computer scientists hate him!


Flow based programming has a funny way of being suppressed... this has been tried many, many times. Some examples off the top of my head: Scratch, Windows Workflow Foundation, Arc Model Builder, etc. All of the marketing for NoFlo seems to be targeted at non-programmers who think "I could code anything if I could just understand the syntax!" Their main video basically says "Don't you wish you could finally stop paying those sneaky developers who act like everything takes longer than it really does? Now you can!"


> Their main video basically says "Don't you wish you could finally stop paying those sneaky developers who act like everything takes longer than it really does? Now you can!"

That one point really gave me pause as well. As I posted the other day:

-----

"If your bottom-line is dependent on code, you know the angst of asking a programmer, “How long will {{ insert feature }} take?” You know programmers can jargon-speak their way to your consent and when everything comes crashing down, you are to blame. NoFlo frees your businesses from a labyrinth of text files so you can get-a-grip on what-the-hell-is-going-on!"

If you tell people that NoFlo is the solution to prevent programmers from "jargon-speaking" their way to your doom then... Sorry, that's just a lie with a topping of polarization.

The way to prevent programmers messing up your business is to not hire shit programmers and manage good programmers properly. Not hiring the programmers with the fanciest visualization.

https://news.ycombinator.com/item?id=6198748


Flow based programming is actually heavily used in music technology, where it actually is a good paradigm for the domain. Max/MSP and its open source cousin Pure Data are interesting, and have dedicated if small communities of not-exactly-programmers using them for real, creative expression.

But most of the proposed use cases are clearly bullshit, and using these languages/programs makes it clear why: organizing code in a single dimension is already hard... organizing it properly and sensibly in two dimensions is much more so, and there are a number of other issues with ordering and tracking race conditions. For anyone who wants to really evaluate this technology, I would give these products a try.

Really though, functional programming and OOP both have sufficient support for data flows that it isn't really an issue for programmers.


Ha ha ha, if only :).

But on a serious note: with all the brains we have, we should have created better tools to automate things by now; progress is too slow. I think we can do way more.


> we should've created some tools to automate things, progress is too slow

Most developer software is abstractions and tools to automate things. Tasks that were hard in 1995 are trivial today; most web developers have probably never had to think about the vast majority of things that web developers in 1995 concerned themselves with, because our libraries and frameworks and tools take care of them.

The complexity of the trade is ever-increasing, though, and so we need new tools and new libraries and new frameworks to deal with that increasing complexity. This is the biggest place where visual programming falls apart - it's fine at dealing with programming as it existed when it was released, but programming is not a static discipline, and evolves wildly and rapidly. Non-sentient tools just simply can't keep up.


True. But it was not all linear progress. In 1995 I had Turbo Pascal, for example, which had an IDE with a decent editor and a fantastic debugger and profiler. And don't even get me started on its context-sensitive help. What I have today is way crappier or non-existent.

Light Table is a tool that one guy started as a futuristic tool, yet it is what we had before.

The complexity of our work is ever increasing and the fun definitely never stops; I just wish the tools were better so we could enjoy the languages more. Tools augment our ability to do work, so excellent tools would let you do more, more easily.


Plenty of tools do what you describe, and far better than Turbo Pascal did in the 90s. Try Visual Studio or IntelliJ, for example.

When you can't perform static analysis (which is very difficult to do effectively in dynamically-typed languages), many of those tools you are talking about are just not possible.


Programming languages evolve slowly because they're more like mathematical notation than technology, so said someone smart whose identity I can't recall. Lots of smart people have been thinking for more than a half century about the following problem: How to elegantly tell a computer what to do? With all that brainpower invested in this problem, the low-hanging fruit is gone. It's understandable that progress is slow now. I doubt there are great programming paradigms waiting in the wings, waiting on technological advances like faster CPUs, more memory, or better graphics.

After staggering expenditures of effort by some of the world's sharpest minds over more than half a century, I doubt there are any more major breakthroughs awaiting us in the field of programming languages. (Unless AI starts to play a major role, in which case I couldn't predict how programming would change.)


Just what exactly do you think programming _is_?


Scratch is not flow-based programming. It's semi-visual programming for kids; it only eliminates syntax errors, but still produces code-like output.


Here's one example of reimplementing Jekyll (static site generation) using NoFlo: http://bergie.iki.fi/blog/noflo-jekyll/


Thank you for providing an example.

I look at that and I see a couple of things. First, this is procedural programming straight out of my old, old COBOL books. Batch processing - read some files, process the data, output it. That graph is pretty, but no different from the structured analysis and design that was forced on me in the mid 80s.

That stuff was a nightmare to work with. No real search, endless 2D layout issues, no way to reuse, a very little bit of information conveyed in a lot of space, no way to debug, and so on. And, if your program is anything but batch/data flow oriented, good luck.

I see maybe 100-300 lines of JSON in the image. Or Lua. Whatever. A modest amount of paste code to tie already-written components together in in/out chains. Now, I get that your diagram is more easily 'grokked' than JSON, and that is a benefit. But what if I want to search for things? Use awesome tools like sed to batch-change stuff? Perform static analysis? All that graphical stuff just falls apart. Oh, perhaps there is some DOM, and I access that, but then I'm right back into the incredible power and expressiveness of text.

More importantly, I'd like to see the flow diagrams of all those nodes you are connecting together. I am prepared for egg on my face here, but are those written in NoFlo, or text? I'm guessing the latter.

So, how is this different from LabVIEW? I completely understand the value of being able to quickly plug together pre-existing components when you have a tiny interface (you just have in/out nodes, basically, in that diagram). A small solution for a small problem, and I don't mean that dismissively.

Despite the article's claim, and as many have already said, this stuff has been around for decades. There are definitely places for it. But having lived through it, I tell you SA/SD died a very deserved death.

Sorry, this isn't directed at you, you were just the person kind enough to provide an asked for example.

The other major thing that sticks out for me is that that graph is just a high-level block diagram of the program. Well, "just" not meant to be dismissive.

The original article talks about the horrors of programming where changing one thing means changing 10 other components. They are not describing programming, they are describing hacking spaghetti code with no design. If that is the way you program, or the way the code base you are working on is coded, then I can see why these flow diagrams seem so revolutionary - a design is being done, and software is broken up into discrete components. That's pretty fundamental SW engineering.

But the magic is not in the 2D visual display, but in the idea of small components that do one thing and that are not intimately tied to other components. So hey, if the tool helps you learn or use that method, good, I guess. How you reasonably scale it up to anything large is beyond me (I've done entire avionics systems in data flow diagrams, and it ain't pretty at that scale, believe me).


> this is procedural programming straight out of my old, old COBOL books. Batch processing - read some files, process the data, output it.

Modularizing systems into separate, standardized units has a tendency to look like this (or like shell scripting). Is this a huge problem?


Not at all. The point is that the visual code doesn't differ much from the textual code in terms of architecture and complexity.


I think there is some broad confusion over "flow-based programming" implying "non-programmers using a graph editor to connect things". So I'm going to ignore that completely in favor of talking about flow-based programming for programmers.

Here's an example I have experience with:

The GRL research language [1] is a dataflow language for behavior-based robotics. It's a real programming language that batch compiles down to C code that runs in O(1) time and space (it's not Turing complete). It escapes from the "spaghetti code" problem you mention by providing an abstraction of a procedure as a compile-time rewrite rule that transforms the dataflow graph (for Lisp hackers, every procedure is basically a macro that can be passed around by value at compile-time).

(FWIW, graphical editors can actually be a useful and pleasant way to construct throwaway flow graphs, but aren't particularly useful for developing new abstractions (i.e. dataflow procedures or data types). They run into the spaghetti code problem you mention, in which the only way to reuse a chunk of functionality is copy-paste.)

The system has been used for things like programming actual robots, Half Life bots [2], and motor control for interactive game-like thingies [3]. What it's really great at is managing stateful information over time. Things like moving averages, low pass filters, and other types of state machines (flip-flops, "just true" constructs, etc) all become first-class values in your programming language.
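
A rough analogue of that "stateful filters as first-class values" idea can be sketched in ordinary Python (this is illustrative only, and looks nothing like actual GRL syntax):

```python
# Hypothetical sketch: a low-pass filter as a first-class stateful value,
# computing y[n] = alpha*x[n] + (1 - alpha)*y[n-1] over a signal.
def low_pass(alpha):
    state = {"y": None}
    def step(x):
        state["y"] = x if state["y"] is None else alpha * x + (1 - alpha) * state["y"]
        return state["y"]
    return step

smooth = low_pass(0.5)   # the filter itself is a value we can pass around
print([smooth(x) for x in [0.0, 10.0, 10.0, 10.0]])  # [0.0, 5.0, 7.5, 8.75]
```

The filter's state lives inside the value, so wiring several of them into a graph is just passing functions around.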

So how could this be used in a more flexible environment that isn't ideological about dataflow programming? While dataflow graphs may be limited in their expressiveness, we can use them to encapsulate many types of state machines, particularly those that change over time or in response to stimulus. Under this model, every self-contained flow graph becomes a thunk whose input values are determined by evaluating arbitrary expressions in the host language (and can be dynamically instantiated). For implementations of this technique, see [3]. Finally, [4] is an implementation of the stateful time-based dataflow constructs I wrote for a game in C#, which power things like motor behaviors, input processing, collision avoidance, animations, and interpolations. As a bonus, you can add events on top of these for a simple implementation of functional reactive programming.

So do you want to build a web app in a dataflow language? Maybe not. Do you want flow-based constructs in your programming language? Depending on what you're building, the answer could range from "Not really" to "I can't live without them."

[1] http://www.cs.northwestern.edu/~ian/grl-paper.pdf

[2] http://www.cs.northwestern.edu/~gdunham/flexbot/manual/grl_f...

[3] http://www.aaai.org/ocs/index.php/AIIDE/AIIDE11/paper/view/4...

[4] https://bitbucket.org/leifaffles/chad/src/d6a13c2315cce41dba...


I also work with a distributed embedded system that uses dataflow. The Von Neumann architecture was a revolution, of course. But, in a way, it forgot about the natural concurrency of the electronic circuit and how powerful and simple that can be. Dataflow programming can bring some of that back. I remember reading Alan Kay saying something about how programmers should go back and work with a plugboard for a while. Maybe that is what he meant. It's possible to 'program' something with just wires and simple discrete components, and that can liberate your thinking.


Yes, I think this is the main insight of the "revolutionary" part of FBP. It's not there to make everything look like a circuit board, but to be the template for the parts of software that are well-suited for the paradigm - which, if factored out properly, could be a lot of them, more than we use today.

The tricky part isn't in whether the abstraction works, but how we go about introducing it. And the downfall of most visual languages is in burdening themselves with too much power, which I don't think FBP is guilty of. It's evolved from production systems that were still coded in textual forms, but architected - on paper - with the diagrams.


Yup, that's because the NoFlo way isn't going to be the paradigm-shifting development; it's just graphical sugar on the same old thing.

If it'll ever become big then it'll be more like what Javelin does by adding FRP to Clojure's persistent data structures [1]

[1] http://www.infoq.com/presentations/ClojureScript-Javelin


Completely agree - I see links to frameworks and tools in the article but none to projects and products made with them.


Ah, the secret cabal of computer scientists is at it again.


I read up to that quote and decided it was bullshit.


NoFlo is basically a clone of Windows Workflow Foundation but implemented in a worse language

WWF: http://bit.ly/16YNGFK


Meh, that looks like a UML class diagram to me (at first glance). You'd just need a "generate code from diagram" feature in this case. :?


>release something that uses this paradigm and let's me get something done and I'll use it

>This allows programmers to write individual functions that execute one step at a time, but route data between these functions asynchronously and concurrently

These concepts are not new. Concurrent programming is all about data pipelining; this is basically describing a gimped version of Erlang... with pictures. The obvious practical advantages of a textual language make it laughable.

edit for clarity.
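
The pipelining being described can be sketched with nothing but stdlib threads and queues; this is an illustration of the message-passing style (not Erlang itself, and not NoFlo):

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """A worker that applies fn to each message until it sees the sentinel."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down and propagate downstream
            outbox.put(None)
            return
        outbox.put(fn(msg))

# Wire two concurrent stages together: double, then increment.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for x in [1, 2, 3]:
    q1.put(x)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)  # [3, 5, 7]
```

Each stage runs concurrently and communicates only via its queues, which is the whole "route data between functions asynchronously" pitch, expressed in a dozen lines of text.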


I thought this quote from the article was interesting:

Kuhn’s book on scientific revolutions includes a famous quote from physicist Max Planck about what really causes paradigm shifts:

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."


Except, of course, scientists change their minds all the time based on the evidence.

Look at all the work in QM in the 20s and 30s. These guys had competing hypotheses, and some won out based on the evidence. The people who held the opposing view? They quickly accepted the error of their ways.

This is known as the Bozo the Clown argument. You know, somebody says "My perpetual motion machine will work! They laughed at Einstein, you know." And the reply is, "Yes, and they also laughed at Bozo the Clown."

Almost always, if everyone is against your idea, having tried it, you are probably wrong. There is probably no industry on the planet more open to change. We've gone from procedural to OO, functional programming is having a huge resurgence, static typed to dynamic typed, waterfall to extreme/agile/whatever, and so on.

And, despite the claims of the article, data flow programming has been alive and well the whole time. It hasn't been used much because of all the things that we have raised each time the topic is brought up, with no real response other than claims that we just don't get it, that people disagreeing is a sign that you are doing something right, and so on. I don't find it convincing at all. It's time to retire that Kuhn/Planck quote.


It's time to retire that Kuhn/Planck quote

Oh, I don't think so. As its presence in Kuhn's book suggests, the kind of 'scientific truth' Planck was talking about is not just any hypothesis, but the much larger thing that we now (post-Kuhn) call a paradigm. For a counterexample, you'd need to show a paradigm shift occurring within the individual careers of the most established scientists in a community. QM isn't an example of that, for two reasons: it was the work of a new generation, and the debates you refer to were taking place within the new paradigm [1].

Your example is ironic, since it was Planck who put the Q in QM. If the history of QM refuted the quote as easily as all that, he'd never have said it in the first place.

(I do agree with you that most claims of a new paradigm turn out to be false.)

[1] That's not quite correct; they also took place before the new paradigm had crystallized—a time when (as Kuhn describes it) old models have broken down and lots of big things are up for grabs. Under such circumstances, minds change more fluidly. But such transitional circumstances are relatively rare and Planck was talking about the normal ones.


We've gone from procedural to OO, functional programming is having a huge resurgence, static typed to dynamic typed, waterfall to extreme/agile/whatever, and so on.

One of those you didn't get right? Guess which one?


"'What we need is not more programmers. What we need is to enable non-programmers to participate in the creation process, not just the ideation process,' says Kenneth Kan, CTO of a company called Pixbi and a recent convert to flow-based programming."

Could not disagree more. I'm so tired of these initiatives to "get everyone coding!" Everyone can't be a programmer, just like everyone can't be a doctor and everyone can't be a mechanic. There are certain skills, mindsets, and desires that drive people to do what they do.


I think what most people don't realize is that thedailywtf is cathartic to the modern programmer. We really need less modesty and admit "Programs can be fucked up, and some people shouldn't be allowed to write them." (Of course, I don't care if a random person is fiddling with code in their spare time. It is when they are writing code that will be used by multiple people for years that ability starts to matter. Especially in large code bases.)

Anyway, back to the article...

Consider that a user wants to transform an input (inp) with operation X. Does it matter to them if X(inp) is a program they would invoke normally, or a function within an existing language? The former is probably more user-friendly, but both are conceptually similar. If the tool is handed to a user they can probably beat either solution together without much effort (assuming the environment to run said code isn't too complex).

But what if operation X is a unique problem that doesn't exist in a common library? The moment operation X needs to be written is where people need to become programmers (or have programmers do the work for them). If your business is large, pay professionals to do it, and let them get requirements and feedback from domain experts.

I'm asserting that management is capable of finding competent programmers. Which can be difficult. But hiring based on a mix of portfolio and credentials will probably yield better results than trying to train a domain expert to write code.

Getting back to the general point, there isn't a general solution for writing maintainable and scalable code. Being able to recognize good and bad code is a hard won skill. Being able to write and design it is even harder.

This article asserts that FBP is not only a silver bullet for designing large code bases, but also one so simple that a non-programmer can use it.

This isn't to say FBP is worthless. I think it could be used as a limited DSL in certain cases, for example, permitting an advanced user to describe a data filter with conditional criteria across multiple fields.
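
A limited DSL like that might be as small as a list of (field, operator, value) criteria; here's a hypothetical sketch (the DSL shape is made up, not anything from NoFlo):

```python
# Hypothetical sketch of a tiny data-filter DSL: a filter is just a list
# of (field, operator, value) criteria, AND-ed together.
OPS = {
    "==": lambda a, b: a == b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
}

def matches(record, criteria):
    """Return True when the record satisfies every criterion."""
    return all(OPS[op](record[field], value) for field, op, value in criteria)

rows = [
    {"name": "a", "age": 30, "active": True},
    {"name": "b", "age": 17, "active": True},
    {"name": "c", "age": 45, "active": False},
]
criteria = [("age", ">", 18), ("active", "==", True)]
print([r["name"] for r in rows if matches(r, criteria)])  # ['a']
```

Something this constrained is plausibly usable by an advanced non-programmer, precisely because it can't express arbitrary control flow.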


Everyone can't be a doctor, but everyone should have a basic grasp of how their body works in order to live well.

Same with programming, if everybody had some base level of algorithmic literacy, they could at least have some idea if a mind-numbing repetitive task could be done with a simple script.
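
As a concrete example of that base-level literacy: recognizing that a batch rename is a few lines of script, not an afternoon of clicking. A sketch (the file names are hypothetical):

```python
# Sketch: plan a batch rename, the kind of mind-numbing manual task
# a few lines of script can automate. File names here are hypothetical.
def plan_renames(names, prefix="report-"):
    return {n: prefix + n
            for n in names
            if n.endswith(".txt") and not n.startswith(prefix)}

print(plan_renames(["a.txt", "b.txt", "report-c.txt", "img.png"]))
# {'a.txt': 'report-a.txt', 'b.txt': 'report-b.txt'}
```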

IFTTT does a great job of making it easy to glue web services together. NoFlo could be a powerful step up from that.


Absolutely agree with you. There are basic skills at the root of every industry that we should all be exposed to and taught in school. I'd argue that "algorithmic literacy" falls into the "math" category, and those who are interested in math will often pursue computer science. I just don't see why there should be huge initiatives to teach "everyone" programming, like the quoted portion of my post claimed.

Regarding IFTTT, I have to say it's a wonderful service and does an amazing job dumbing down powerful APIs and interconnecting them. However, I would love to see a study of how many non-technical people utilize it, and/or how many recipes come from people who are otherwise uninterested in or ignorant of programming and computer technologies. I'm willing to bet it's very low.

It's easy to understand an IFTTT recipe in plain English, but it's a whole other ballgame trying to come up with them yourself.


I agree that it is tiresome to see some of these initiatives, but you may consider the analogue that many people now can write, even though not everyone writes literature.


The important distinction is that knowing how to read and write is a basic skill everyone needs in order to be an effective human being.

Similarly, everyone needs to know math and problem solving skills (the major basis of programming) to be effective human beings.

In other words, I fully support the push for STEM, but force-feeding programming into the agenda is both unnecessary (as those interested in math, problem solving and computers will tend to go into programming fields) and potentially counterproductive (having many unskilled "programmers" trying to touch important software could be catastrophic in many cases.)

It's kind of like the groups of script-kiddies who consider themselves "hackers" but really aren't doing anything but utilizing tools real hackers have created. This is how NoFlo (from the article) strikes me.


> The important distinction is that knowing how to read and write is a basic skill everyone needs in order to be an effective human being.

I don't think that's true. It's something people may need (or, at least, greatly benefit from) in order to be effective members of modern society, but all the people who weren't part of the literate minority when literacy was rare (or before writing existed) were not "ineffective human beings".


> Today’s web programmers grapple with problems that people in the early days never had to deal with. They’re building complex UIs, juggling a bunch of APIs, and running multiple processes at the same time. All of these tasks require mastering the flow of data between application components in real-time, something which even the most advanced developers struggle with.

No. The problem is not moving data around. The biggest problem in web development is the drive from the industry to shoehorn applications onto platforms that weren't meant for them.


Based solely on my previous experience with a "flow-based" programming system, which endeavored to make programming financial decision logic a matter of connecting boxes, it's the 80/20 problem. It makes 80% of the work dirt simple and able to be completed.

The remaining 20% still needs to be implemented in code, and takes most of the time. It got so bad that they actually ended up implementing "code" objects, just so code could be edited and incorporated directly into the tool instead of around it.


I'm a JS hacker working on the NoFlo UI, and one of our primary design goals is to make it easy to code new modules on the fly (in the same environment, with tests) when that makes more sense.


People waiting for "the breakthrough" that suddenly makes programming easy and available to all are kidding themselves. As others have said there's inherent complexity to programming and solving problems that can't be wished away with paradigm changing strategerized facilitations utilized by flow-optimized programmerators or whatever the secret code knowledge suppressed by scientists was prescribing.

On the broader point - and the comments by folks saying we should have made things better by now: we have. I started out learning to program with zero help, no tools, few manuals on an Apple II in some flavor of BASIC or machine code by trial and error.

Decades later, having not coded a lick in more than 15 years, I picked up Ruby on Rails quickly and easily (enough to get an MVP out) and, with the help of Stack Overflow and Google, have spent the last three years getting better and better at it. I can do dramatically more today than I could have imagined in my days alone with an Apple II and a poor English translation of a poor Chinese translation of a pirated Apple manual.

It's all better. The worst programmers have super powers available to them (they don't always use them admittedly) compared to prior days.

Things are getting better but they're earning it, not because "THEY" don't want you to know "SECRET KNOWLEDGE".


> People waiting for "the breakthrough" that suddenly makes programming easy and available to all are kidding themselves. As others have said there's inherent complexity to programming and solving problems that can't be wished away with paradigm changing strategerized facilitations utilized by flow-optimized programmerators or whatever the secret code knowledge suppressed by scientists was prescribing.

I totally agree with you and would go so far as to say there's probably a kind of upper bound to the rate at which a human can translate an original idea in their head into an unambiguous logical construct of the kind which can be executed by a computer.

However, I would also say that the narrowest bottleneck in this process is the rate at which someone can comprehend the current state and structure of a given program, and I would say that anything that tries to improve on the status quo of text files in a directory is pretty cool.


I miss call trees. I used to have tools to generate call tree diagrams for procedural languages, back in the 80s. With virtual/dynamic methods, IDEs don't bother to try to make such a feature anymore.


Sounds cool, do you think you could find an example? I googled for 'procedural call tree' but wasn't really certain what I was looking for.


sorry, it was over 20 years ago, and I've forgotten the name of the tool :-(

It worked something like making a ctags/etags file (a list similar to the data an IDE uses for "find definition" jumps), and showing an indented list (removing any cycles in the call patterns) of the fan out, as if it were a tree (rather than a graph).


ya, my eyes gloss over when I read 'rediscoveries' like this ...

I think the finer point is more about workflow than programming.

That is, state-based workflow is fundamentally harder for developers to grok; also, maintaining state is a kind of fallacy when scaling out and/or up.

Flow-based workflows, while they probably can't easily cover the breadth of exotic corner cases that FSMs can, are much easier to understand, and state is a byproduct of the system; not to mention that a full record of mutation occurs, with a copy of the data at every step (similar to MVCC, multiversion concurrency control).

This is similar to how XProc (http://www.w3.org/TR/xproc/) works, which has a serious implementation in http://xmlcalabash.com/ and is slowly gaining adoption in XML circles.

It's nice to see yet another XML technology, which of course is based on older established concepts in computing, being emulated/replicated in the JS universe... shame about the visual programming antipattern/lunacy.


I think programmers are always thinking about the flow of programs. Jumping through blocks of statements, following the chains of function calls and analysing their definitions when needed. The _flow_ is there, but it's not visual. We are always imagining what would the compiler or interpreter do with our code.

On the other hand, we must be able to quickly navigate in the source tree. That's why text editors like Vim, that understand the contents of the programs (buffers, text objects, definitions), are so powerful. The same happens with Xcode, for example. The Interface Builder is a graphical tool that defines the code components as visually manageable objects.

Maybe the proposed paradigm could extend these ideas of representing code in flows, as graphs of objects. I'm really curious to try it, mainly because it seems to fit really well with handheld devices.


Okay, yeah, so I agree with most of the criticisms here. Flow-based languages didn't catch on because no-one came up with a really compelling general-purpose implementation that scales up to large, complex projects. It's not because it was actively suppressed by the programmer Illuminati to keep our salaries high.

I think the main reason FBP hasn't gone mainstream is because most implementations are built around graphical programming environments (e.g. LabVIEW, Pure Data, Scratch) in order to cater to non-programmers. But programmers love their text files, and for plenty of good reasons. Code represented as text can be edited by any general-purpose text editor, works well with version control systems, and can (usually) be easily split into separate files.

It's probably instructive, then, to look at a domain in which textual flow-based programming languages have become mainstream. I'm speaking, of course, about Hardware Description Languages, which are used by digital circuit designers to simulate and prototype complex circuits, like CPUs. The two dominant HDLs are VHDL and Verilog. Both of these languages use a flow-based model, in which logical blocks called entities are connected to each other through input and output ports. The reason these languages work is because the domain is a good fit for FBP and because they are designed for the domain experts (circuit designers) to use, and not for some non-technical manager to be able to understand.

Taking a similar approach to web programming may work. Logical blocks connected through ports. Except instead of ones and zeroes going through these ports, you have I/O events and data. I haven't looked at NoFlo, but it might be interesting.

Note that I'm not saying that VHDL and Verilog are good languages, even for their intended domain. They have plenty of warts, and a FBP language for software that basically copies one or the other would be terrible (especially VHDL; oh god, please nobody make VHDL for web programming). But the entity-port model is essentially sound, and I would be quite interested in a general-purpose language that adapts that model in a well thought-out way.
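As a rough sketch of what that entity-port model might look like in JavaScript (all of the names here are invented for illustration; this is not NoFlo's actual API): components expose named input/output ports, and a connection simply forwards packets from one component's output port to another's input port.

```javascript
// A toy entity-port model: components with named in/out ports,
// wired together by forwarding packets along connections.
class Component {
  constructor(process) {
    this.process = process;     // (inPort, packet, send) => void
    this.connections = {};      // outPort -> [{ target, inPort }]
  }
  connect(outPort, target, inPort) {
    if (!this.connections[outPort]) this.connections[outPort] = [];
    this.connections[outPort].push({ target, inPort });
  }
  receive(inPort, packet) {
    this.process(inPort, packet, (outPort, out) => {
      for (const { target, inPort } of this.connections[outPort] || []) {
        target.receive(inPort, out);
      }
    });
  }
}

// Example graph: double a number, then collect the result in a sink.
const results = [];
const doubler = new Component((port, n, send) => send("out", n * 2));
const sink = new Component((port, n) => results.push(n));
doubler.connect("out", sink, "in");
doubler.receive("in", 21);  // results is now [42]
```

The interesting property, as with HDL entities, is that the doubler knows nothing about the sink; all coupling lives in the wiring.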


You know what costs us sanity? It's not code. It's people.

Scope creep. Last-minute specifications. Pointy-hairs who don't understand technical debt, and need everything done yesterday. Design by committee. Too many cooks in too many kitchens. Legacy processes, legacy data, legacy user habits. Unrealistic expectations, unrealistic timelines, and unrealistic budgets. Marginally competent coworkers. Office politics.

Compared to those, callback hell and classitis are a dream come true. And while all the problems above are reducible with the right processes, you'll never shrink them all to zero.

"Algorithms may have mathematical underpinnings, but computer programming is a behavioral science." - @brixen


So, I should use BizTalk Server[1][2] for my website. I'm not saying it's a bad idea, but I get the feeling it will be a hard sell.

1) http://www.microsoft.com/en-us/biztalk/default.aspx

2) http://msdn.microsoft.com/en-us/library/aa577881.aspx


The advantage of this method of programming is that complex parallel processing tasks can be reduced to a directed acyclic graph from information sources, through filters, and into information sinks. You can then take this graph and either interpret it by creating threads for each of the nodes in the graph and shuffling information between them using FIFO queues, or try to compile the graph to a better performing, but "flattened" native program.

The main problems with this approach are how to properly represent timing, latency, and node state, since you can send one piece of information through the graph at a time, or you can try to improve efficiency by pipelining the graph - using the graph like a shift register, and making sure every node has work to do at all times.

Of course it's not a cure-all solution and doesn't map well to every problem domain, but it is generally easier to reason about for complex parallel processing tasks, and can be deployed over multiple machines with minimal "glue" code.
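A minimal sketch of that source → filters → sink shape (names invented for illustration, and a linear special case of the full DAG): interpret the graph by draining a FIFO queue through each stage, with each stage's output queue feeding the next.

```javascript
// Run a linear source -> filters -> sink pipeline by shuttling
// packets through FIFO queues between the stages.
function runPipeline(source, stages, sink) {
  let queue = [...source];          // the source fills the first queue
  for (const stage of stages) {
    const next = [];
    while (queue.length) next.push(stage(queue.shift()));
    queue = next;                   // this stage's output feeds the next
  }
  queue.forEach(sink);
}

// Example: numbers -> double -> stringify -> collect
const out = [];
runPipeline([1, 2, 3], [n => n * 2, n => `#${n}`], x => out.push(x));
// out is now ["#2", "#4", "#6"]
```

A real interpreter would run each stage concurrently (one thread or worker per node) and block on the queues; the pipelining question above is then about keeping every stage's queue non-empty.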


So, hardware description languages, which are essentially flow-based programming languages, solve this using clocks and sensitivity lists. An FBPL for web programming would probably use events instead of edge/level triggers and timers instead of clocks.


These guys should talk to the creators of VNOS 1.1 and ask how that went.

Visual programming does have its place. My mom was able to make a website using Weebly without any help. However, as things get complex, I personally can already visualize the interconnects in my mind. In my opinion, we need easier-to-use libraries, not visual tools. But I genuinely want to be corrected.


I doubt flow-based programming is going to let the non-programmer program, but I do think there is some potential in the age of the tablet to get stuff done visually with flow-based development and touch-friendly tools, provided you can easily switch back and forth to source. E.g. something sort of like Windows Workflow on speed.


"His company is busy resurrecting flow-based programming with a framework called NoFlo, an implementation of FBP for NodeJS"

Something interesting to note: the name of the framework is NoFlo, which sounds like "no flow", and yet it is a framework for flow-based programming. I wonder why they came up with that name.


Now NoFlo works on both Node.js and browsers... But when I started it, I targeted only the former. So, Node.js Flow -> NoFlo


It comes from "Node Flow."

Or reverse psychology?


There are two points I don't get with such a system.

1) The logic/functions needed in each "black box" still need to be written. How is that handled? Is that where the line of "programmer vs non-programmer" is crossed?

2) Programming is, more or less, the skill of breaking down a problem into the parts needed for its solution and providing the solution. The value of a programmer isn't the ability to type in code, so much as the ability to logically break down the problem and solve it. How would such a programming paradigm actually solve that problem?


The NoFlo team is made up of coders designing this for our own work first. We are free to put as much code in a black box as we like. We'll be running tests with ourselves and our early community to establish some best practices, but there is no rule that keeps you from writing most of your logic in one block. Our theory is that the graph-based view will encourage encapsulation in different ways that make sense for the different domains that adopt the tool.

One of the big design differences (vs. Quartz Composer, LabVIEW, Pure Data) is that it will be easy to dive in, edit component code, and make new components when needed.


I looked at all the pictures in the article and couldn't imagine how to draw something as fundamental as an if/else-statement. Think about that for a moment.

http://www.maxon.net/uploads/pics/xpresso_17.jpg shows an alternative that I imagine could be used to lazily evaluate only certain connected boxes, but still... just look at that mess.
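For what it's worth, in textual FBP-style code an if/else usually becomes a routing node: a predicate decides which of two output ports a packet is forwarded to. A sketch (names invented for illustration):

```javascript
// An if/else as an FBP-style router: the predicate picks which
// downstream handler (output port) each packet is forwarded to.
function router(predicate, onTrue, onFalse) {
  return packet => (predicate(packet) ? onTrue : onFalse)(packet);
}

const evens = [], odds = [];
const route = router(n => n % 2 === 0, n => evens.push(n), n => odds.push(n));
[1, 2, 3, 4].forEach(route);
// evens is [2, 4], odds is [1, 3]
```

Drawing that as two wires out of one box is easy; what gets messy visually is nesting several of them, which is exactly the complaint above.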

Please don't remind me how if-statements in LabVIEW are represented. It gives me nightmares.


At this point, I see the use of Flow-based programming to be analogous to something like Titanium (the JS framework that lets you build iOS apps).

It's interesting in concept, and will most assuredly open the doors to allow a larger swath of people to participate, but for anything of consequence you've still got to buckle down and write some code. The reason is almost entirely performance optimization, which is something you just can't get from code generated by a GUI.


I tried a flow-based tool for the Mac in the late '80s. It really didn't make things easier, and it had terrible support for things that text languages take for granted, like foreign library import. Computer scientists have not "hidden away" the idea. It simply hasn't provided close to the value that text-based languages have in real life. The new tool is as much snake oil as any tool before it that uses the same language in its advertising.


The fundamental problem with visual programming methods as opposed to linguistic methods is that abstraction doesn't work as well.

I agree that we'll get glue programmers, who just stick modules together. We already do: they are those programmers who use APIs. i.e. all of us to some extent. So far, it's still programming.


FWIW, a visual graph-style IDE is used to write scripts for the 3D design programs Rhino 3D (the Grasshopper plugin [0]) and Revit (Dynamo). It's pretty popular among architects and the like.

[0] http://en.wikipedia.org/wiki/Grasshopper_3d


I was using Xpresso in Cinema 4D years ago, which looks remarkably similar. http://www.maxon.net/products/cinema-4d-prime/customizing.ht...

This is nothing new and neither has it been "suppressed".


I agree.

A lot of software use that kind of visualization. It's nothing new.


This reminded me in some way of FASTech CELLworks (sic): software used in manufacturing automation to define process flows. It had some interesting features such as message passing via a mailbox abstraction, concurrent execution, and a memory-based data store.


This read like a long winded video sales pitch on the unbeatable game plan for seducing women.


> What we need is not more programmers. What we need is to enable non-programmers to participate in the creation process, not just the ideation process

Why don't we need more programmers participating in the ideation process, not just the creation process?


This is the kind of idea that looks fantastic but isn't that useful (at least, so far). The equivalent of "Minority Report" interfaces. The article is nothing more than a collection of cliches and hype...


FYI, the Clojure Pedestal Client-side framework has a similar update model to this.


We're building something very similar, aiming to have demos and beta by new year.

(Bad) signup page here if interested: http://www.lexasapp.com/


> The paradigm was so disruptive that it was suppressed by computer scientists for decades.

Did someone not get their "suppress the disruptive programming technique" kickback cheque in the mail this month?


When I saw this I immediately thought of [Drakon](https://en.wikipedia.org/wiki/DRAKON).


Scratch, Lego Mindstorms, DirectShow audio graph editor, any number of lab and process control systems.


Has anybody heard of TIBCO? They've been doing this (flow-based programming) for years (>10).


It's a real tragedy people who don't know history can't learn from it.


being a good programmer is not about knowing how to handle the minute details; it is about knowing how to handle complexity.


"The paradigm was so disruptive that it was suppressed by computer scientists for decades." Stopped reading after that.



