Goal oriented programming is alive and well in SQL.
Visual programming comes around every ten years, and fails each time, because it can't deal with large complexity like text can. There is a reason people write books, not comics, especially for advanced topics!
Negotiating without a common ontology is needlessly hard, and doesn't come up in real use cases. It turns out, when you need to talk to another computer, you need to do that for very specific reasons. (Also, security and authorization!)
Constraint systems are alive and well in SolidWorks, Inventor, and other such systems.
It turns out, just because ideas are interesting, doesn't make them actually better at solving real problems in general! It's not the "FORTRAN versus assembly" problem. It's the "can I actually get there" problem.
I'm all for a topic review, including of classic history. But I also think "different for different's sake" is a good way of eating money and alienating users.
I think this response is exactly what he was trying to preempt at the end of the talk, when he noted that these ideas in particular aren't his point: his point was that all of these ideas hatched in a short timeframe long ago, and they're all quite interesting and different (regardless of workability), so... why did they hatch then?
The unspoken assumption is that fewer new, interesting ideas came out of the following decades, and the thesis he asserts is that this is because we have settled into smugly believing we know the whole territory of computing, rather than keeping the beginner's mind that people in the 60s were essentially forced to have.
I think there are fair positions of opposition to his true thesis, but ignoring his main argument to point out that we don't use these ideas because we're, well, so experienced and savvy (or do use them in some spots, again go us) kind of seems like making his argument for him.
Visual programming is really a superset of textual programming. It's basically GUIs vs. command lines. These days most people use a GUI such as Windows, Mac OS X, or X Windows rather than DOS or a pure Linux command line. The issue is that current visual languages generally try to force everything into a node-based language. A visual language needs to be able to describe different data structures in different ways: some maybe as text, some as tables, others as networks.
There is a reason people write books with illustrations, not pure text books, especially for advanced topics.
The only thing that visual programming solves is the syntax element. At the end of the day that is actually one of the easiest parts of programming. Understanding your data structures and relationships can't be solved any more easily through GUI interfaces.
GUIs work well for interactive stuff, but not for scripting things.
Visual programming vs. textual programming is like video vs. transcript. You can read the transcript of a video a lot faster than watching the video. You cannot say that video is superior just because it simplifies orthography and grammar issues.
There's a reason why people draw in photoshop instead of typing the values of pixels in an array.
I think the biggest problem with textual programming isn't the text per se, but that text is used as the underlying format. I'd rather have projections of an abstract syntax tree that I could manipulate as text with different types of syntax or as various visual objects when that would be more useful.
Frankly, I never understood this objection; what's the problem with using text as the stored format? You can still parse it to an AST, render it in a visual way and then serialize the AST back to text.
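As a concrete sketch of that round trip using Python's standard library: parse the stored text into an AST, manipulate the tree structurally (here, a rename that a visual editor might perform), and serialize it back to text.

```python
import ast

source = "x=1+2\nprint( x )"

# Parse the stored text into an abstract syntax tree.
tree = ast.parse(source)

# Manipulate the AST structurally: rename the variable "x" to "total".
for node in ast.walk(tree):
    if isinstance(node, ast.Name) and node.id == "x":
        node.id = "total"

# Serialize the AST back to (normalized) text.
print(ast.unparse(tree))  # prints: total = 1 + 2  /  print(total)
```

Note that `ast.unparse` (Python 3.9+) also normalizes the formatting on the way out, which is one answer to the "large diffs from code generation" complaint below.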
I think the main issue is that code syntax is too flexible, so code generation tools create large diffs when updating manually written code. A simple solution to that is having a formatting tool like "go fmt" to adjust the latter before committing.
At the end of the day, text is just an intermediate format that we understand. It's all stored in a binary representation anyway.
Yes, I agree. Text is too flexible (allowing you to do clever things you couldn't do in a drop down menu). A drop down menu won't allow you to make the same mistakes that a text option will, but is way less flexible.
Flow and rhythm are a thing in programming, which is why structured editing continues to feel "stilted" to many people. Heck, many people don't even like static typing for similar reasons: without heavy doses of inference, it forces you to write your code in a certain order or suffer a bunch of distracting spurious type errors.
I think this is the wrong way to think about visual vs. textual programming.
Textual programming excels at plugging into your mind: you can associate words with ideas and imagine what the program will do very efficiently.
Visual programming done poorly is just textual programming with lines and boxes; the visual cues sometimes help but often just get in the way of our mental simulation of the program. However, if done with much more direct manipulation of the executing program, it can bring much of the simulation we are doing in our head into the computer, boosting our ability to think, reason about, and solve problems.
Most of our examples of visual programming are done poorly (e.g. diagram-based languages), but there are also some good examples that show off its real potential (Forms/3, AgentSheets, Pygmalion, ARK).
Yeah, direct manipulation of the program in real time.
The issue with programming is often that programs tend not to work exactly how we think they work, so we have to reverse engineer/debug them to find out how they actually work and then change that into how we want them to work, especially as programs grow larger. I think visualizing the program and its state can help with this, especially if the visualization can be directly manipulated.
We spend most of our time thinking about what the code does, textual representations are very efficient at that. All of the ceremony is definitely annoying, but we expend a small part of our mental effort on it. Choose even PHP, and eventually you begin imagining how the computer will execute your abstractions and stop paying attention to the cruft.
No, it's because programming involves more than just piping a few commands together. Otherwise everything would be written in bash with a few pipes (or a visual equivalent).
> Visual programming ... can't deal with large complexity like text can
I disagree. I wrote a good bit of Max/MSP a while ago, and while most Max code out there is a horrible horrible mess, I put that down to the target users being musicians and artists who were never trained in software engineering principles like encapsulation and modularity [1]. In my own code, I structured things as I would in a textual language and I found that code could be much cleaner and easier to maintain than textual code (and experimentation was easier too).
I've tried a few other visual languages [2] and they do all have their issues, but I don't think these issues are inherent problems with visual languages, but rather problems of the implementations that I've tried. That is, they can potentially be solved eventually. I think they largely don't get solved because there isn't much demand from the target users (possibly because they simply don't know any better) [3].
I've seen textual code in mainstream languages that manages large complexity in just as bad or worse ways. Simply being textual or visual doesn't necessarily make one better or worse.
> There is a reason people write books, not comics, especially for advanced topics!
Visual languages still allow text where appropriate (or could do). For example, writing a mathematical expression is much simpler in text (and Max/MSP allows you to do just that).
[1] and some issues with Max itself that could be solved. For example: it has very limited data structures and no real ability to allow user defined ones that I know of.
[2] Some that I've tried are targeted at programmers and they too fell flat for various reasons... I certainly don't claim it's easy to make visual programming work. Many seemed to stick too closely to what already exists or otherwise didn't provide good enough data structure and encapsulation/modularity support. Lack of git integration and unit testing is also a big turnoff for me.
[3] Visual languages are very successful in certain niches: musicians (Max, Pd, synthmaker), artists (lots of different shader/material systems), game developers (blueprints in Unreal for example), robotics/lab control (Labview, flowstone)
> Visual programming comes around every ten years, and fails each time
True. I think, though, that visual programming is alive in niche markets -- control systems, audio processing, industrial control. It might never have become mainstream, but it is not dead there. Telling those experts that they now have to write code is not going to go well; they'll never be ready to open Emacs and start coding.
> Constraint systems are alive and well in SolidWorks, Inventor, and other such systems.
Also, GUI layouts in some cases are done using constraints.
> But I also think "different for different's sake" is a good way of eating money and alienating users.
The goal is not to do just different things, but do different things in order to better solve problems.
> It turns out, just because ideas are interesting, doesn't make them actually better at solving real problems in general!
But if an idea is actually better, and you never go seek out new things, you'd miss out. That was one of the points in the talk.
For example, there is mention of actor programming. I use that to program large distributed systems in Erlang. And it is not just a gimmick: the abstractions provided by the framework allow a more concise representation of ideas. For concurrent and fault-tolerant systems, it feels like going from assembly to C. You can do everything in C++ of course, with threads and locks and a library for serialization and networking, but it feels like using assembly. The advantage also comes in other ways -- the ability to have less ops pain because systems are more fault tolerant, the ability to hotpatch a live system, and so on.
Someone who is used to segfault crashing whole backends and spending weeks chasing hard to find use after free bugs or concurrency bugs, might not know there is another way. They accept that "this is programming", the manager accepts that this is how systems are written and delivered and this level of cost of maintenance and bug chasing is acceptable and normal. Nobody in the chain will go and seek out and try new tools to solve the problem better.
> But I also think "different for different's sake" is a good way of eating money and alienating users.
That's true too. For every "this new technology helped us double so-and-so metric" there are 10 stories of "we rewrote our stable backend in the latest fad language someone read about on HN and now, 2 years later, everything is crashing and nothing works".
I take issue with the "they'll never be ready to open ___ and start coding." Sure they can. It just might take them a lot of training. But so what? You know how long it takes someone to become competent with a guitar? Piano? A wrench? Hammer? ... It isn't exactly overnight.
I also question whether actor-based programming provides a more concise representation of ideas. It may, for some cases. However, I suspect anything of real-world use has moved beyond what anyone would consider concise. Indeed, after fumbling through something I was certain should be doable in fewer lines of code, yet again, I am coming to believe that the pursuit of concise solutions is the biggest fallacy in programming.
I confess I am ok, however, with people preaching to a choir to drum up support. I don't have any actual data, but I do feel that this keeps the topics in people's minds, which does increase the likelihood of an idea finally having its time.
> I take issue with the "they'll never be ready to open ___ and start coding." Sure they can.
They can, but they would be upset if they had to (at least the ones I've seen). Making them develop in code what they did visually would be like telling someone who is using Python to start writing assembly. Sure, they could become competent, but when they are delivering products quicker with this other, better tool, they wouldn't want to do it.
> I am also question that actor based programming is somehow something that provides a more concise representation of ideas.
Not just actor programming, but the constructs available in the Erlang/OTP library. It includes abstractions built on top of actors -- server instances, finite state machines, event handlers, logging, etc. But going back to actors: in general I find that actors map best to real-world concurrency scenarios. A connection is an isolated actor; it processes data sequentially internally, but there are multiple such instances running concurrently. A customer as an actor works too: it has state, it issues purchase orders, it fails, it waits for responses, and so on -- and there are multiple such instances running concurrently. When you process a data file in a pipeline, each step is an actor, like a factory worker doing some work and passing it along to the next one on the conveyor belt. At least from what I've seen, it represents real-world problems better than, say, an epoll event loop or a lock.
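A minimal sketch of the actor idea in Python, with a thread and a queue standing in for an Erlang process and its mailbox (the `Actor` class and the message names here are made up for illustration, not any real framework's API):

```python
import queue
import threading

class Actor:
    """A minimal actor: private state, a mailbox, sequential processing."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.orders = []  # private state; only this actor's thread touches it
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, msg):
        # Message passing is the only way to interact with the actor.
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

    def _run(self):
        # Messages are handled one at a time, so no locks are needed
        # around the actor's own state.
        while True:
            msg = self.mailbox.get()
            if msg is None:
                return
            self.orders.append(msg)

# Each customer is an isolated actor; many such instances can run concurrently.
customer = Actor()
customer.send("purchase_order_1")
customer.send("purchase_order_2")
customer.stop()
print(customer.orders)  # prints: ['purchase_order_1', 'purchase_order_2']
```

Erlang processes are far cheaper than OS threads and add supervision and fault isolation on top, but the shape of the model -- isolated state plus a sequentially drained mailbox -- is the same.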
> I am coming to believe that the pursuit of concise solutions is the biggest fallacy in programming.
Concise doesn't mean a pure contest of line count; that usually results in obfuscated code. Conciseness includes clarity and operating at the right level of abstraction. If an order is being processed, it means seeing clear steps -- load_price_list, submit_payment, check_inventory, and so on -- perhaps instead of lock_thread_1 or handle_promise_callback_5. Some frameworks have better abstractions in place than others. Maybe Matlab has better abstractions for multiplying matrices and solving signal processing problems; other languages are better at graphics, and so on.
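A sketch of what "operating at the right level of abstraction" might look like. The helper names come from the comment above; their bodies are stand-in stubs, since the point is only that the top-level flow reads as domain steps:

```python
# Hypothetical helpers: each hides the messy mechanics (I/O, retries, locking).
def load_price_list(order):
    return {item: 10 for item in order}  # stub: flat price of 10 per item

def check_inventory(order):
    return True  # stub: everything is in stock

def submit_payment(prices):
    return sum(prices.values())  # stub: "charge" the order total

def process_order(order):
    # The top level reads as clear domain steps,
    # not as lock_thread_1 / handle_promise_callback_5 plumbing.
    prices = load_price_list(order)
    if not check_inventory(order):
        raise RuntimeError("out of stock")
    return submit_payment(prices)

print(process_order(["widget", "gadget"]))  # prints: 20
```

The stub bodies are trivial on purpose; conciseness here comes from the caller never having to see what's inside them.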
"Different for different's sake" is also a good way to discover new perspectives and ways of doing things. Not everything has to be about making money.
I think you subtly missed a point: it's not so much about visual programming, it's about interactive programming on a 2D display. I remember there used to be a bunch of programming languages where code went into an environment/repository and could be interacted with (like Smalltalk).
Visual programming was just one attempt at creating interactive programming on a 2d display...
Today in the web world, everyone is kind of chuffed we have this thing called "hot code reloading", which is slowly getting back to that idea of coding in one space on your screen and instantly seeing the result in another space on your screen.
I am personally most interested in how to arrange a software-to-software pidgin for negotiating communication, beside the obvious, like the dialtone handshake.
The talk is presented as though the year is 1973 and Bret Victor is reviewing some of the great stuff that happened in the preceding decade, along with four major trends that he can imagine will define computing 40-some-odd years later.
They revolve around constraint/goal solving instead of writing procedures, direct manipulation of data instead of coding it, using spatial representation and reasoning rather than huge piles of text, and real concurrency rather than mock concurrency built atop sequential Von Neumann machines, all with great examples (these are what the references largely fill out).
The talk concludes with Bret Victor slightly dropping the gag and making the point that these particular things are not the real point: the real point is that believing we know what we're doing, individually or as a profession, is deadly to our creativity, and that the great ideas discussed all happened then because computers were able to be useful at that time, but no one thought that they'd figured it all out yet.
He urges us to realise and become comfortable with the fact that we don't know what we're doing, so that we can be free to do anything.