Dynamicland (dynamicland.org)
297 points by andyjohnson0 on April 7, 2021 | 112 comments



If you're looking for more concrete details on what it's actually like to make things at Dynamicland, I recommend this writeup by Omar Rizwan (it's from 2018, so it doesn't reflect recent progress, but it's still helpful for getting a feel for the place):

https://omar.website/posts/notes-from-dynamicland-geokit/

I also wrote a short explanation of some little experiments I did there when I visited a couple years ago:

https://www.geoffreylitt.com/projects/dynamicland.html


I wish there were more out there about this. It seems kind of neat despite the limitations, but I haven't seen much new come from it since it was introduced years ago.


For those wondering about an update: they recently posted a "Narrative Description of Activities" document (something to do with 501(c) status) (https://twitter.com/worrydream/status/1367894681342799875). A choice quote, relevant to comments in this thread:

> The Dynamicland researchers are not developing a product. The computers that the researchers build are models: not for distribution, but for studying and evaluating. The goal of the research is not to provide hardware or software to users, but to discover a new set of ideas which will enable all people to create computational environments for themselves, with no dependence on Dynamicland or anyone else, and to understand how these ideas can be taught and learned. The end result of the research is fundamentally intended to be a form of education, empowering communities to be self-sufficient and teach these ideas to each other.


Yeah, but can we go there?


Last I checked, they were in downtown Oakland and you could just knock on the door.


I don't get why there's no way to edit apps live through the operating system. Before I knew how programs worked, I didn't understand why I couldn't recode anything as a user - change menus in Windows 3.1, change what they do, change the logic of forms, etc. Today I know how this works, and I'm even more convinced it would be good for everyone: exposing the actual logic behind everything we see in apps (starting with text, forms, links, buttons, etc.) would only be to everyone's benefit - it would expose bugs, help people learn UIs in depth, suggest better functionality, and let swarms of users contribute to computing.

Same with gathering user feedback - the fact that we have such ridiculously unusable basic UI elements, on mobile especially (people tend NOT to find basic UI elements of apps for months, sometimes years - how is that even possible?), is one consequence of the fact that even if, say, 1000 users try to do something and fail, the authors of the app never learn about it. We get Clubhouse to listen to one more type of radio, but we never get "userhouse" for an instant stream of people's complaints about an app (and a special physical button ON THE smartphone itself to launch that "instant feedback to the app author" mode, so it's part of the base experience of being a smartphone user)...

sigh.... one can dream.


> I don't get why there's no way to edit apps live through the operating system. Before I knew how programs worked, I didn't understand why I couldn't recode anything as a user - change menus in Windows 3.1, change what they do, change the logic of forms, etc. Today I know how this works, and I'm even more convinced it would be good for everyone: exposing the actual logic behind everything we see in apps (starting with text, forms, links, buttons, etc.) would only be to everyone's benefit - it would expose bugs, help people learn UIs in depth, suggest better functionality, and let swarms of users contribute to computing.

This has been done before, more than once, including at PARC (which is listed as an inspiration in TFA).


> I don't get why there's no way to edit apps live through the operating system.

I learned how to program on an Apple IIe, which had a key sequence to drop to a BASIC prompt and let you manipulate the running program (if it was written in BASIC, which many were). My first programming was hacking my high scores over my brother's.

Then when the web came about in the 90s, there was View Source. No one minified, much less compiled or compressed, CSS and JavaScript (when they first came out). Web "apps" were rudimentary, but extremely hackable. You could copy and paste locally and hack to your heart's content.

That accessibility is what made programming incredibly fun and alluring to me. I'm happy that there are many systems carrying on these ideals today, but I wish there were far more.


I just had to help my wife move her taskbar from the left-hand side back to the bottom, where she likes it. She has no idea how it got moved, and I've got no idea how long she just put up with it in the wrong place before it annoyed her enough to get help. Regular users might want to automate things, but they absolutely should not be able to accidentally go inside systems and break them.


That's actually quite an easy mistake to make if the taskbar isn't locked: just dragging it to one of the edges of the screen moves it.


Most things are web applications and you can edit those - sometimes people even do it with browser extensions.

The reason you can't do it the rest of the time is that developers would have to support it if they allowed it, and then they'd never be able to change anything ever again. Apps also aren't set up to allow this because of Conway's law, or maybe I have it backwards there.


Smalltalk systems have always been able to do this.


I think image computing is generally a bad thing for correctness, though, because it makes the program fragile. It's good that a program has to write itself to disk in some format simpler than a memory dump, because that format is probably easier to repair if it gets corrupted.


There's the usual scaling problem with graphical representations.

The now-discontinued Blender game engine was an example of building a complex system by connecting up boxes with arrows. You could easily get to a few square meters of program, and then finding anything was tough.

Dynamicland PR: "Programs are flexible, and compose readily." So what do you compose? A functional block with ins and outs? What's the equivalent of a subroutine? That's where graphical systems usually fall down.


> So what do you compose? A functional block with ins and outs? What's the equivalent of a subroutine? That's where graphical systems usually fall down.

Most visual programming languages have nodes with ins and outs that contain other nodes, just like you’re suggesting. And yes, I’d consider them the equivalent of subroutines.
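
As a rough sketch of what "nodes containing nodes" buys you (names made up, Python standing in for whatever the host app actually uses): a group node behaves like a subroutine, and can even be addressed by path:

    # nodes with ins/outs that can contain other nodes; a group acts as a subroutine
    class Node:
        def __init__(self, name, fn=None):
            self.name, self.fn, self.children = name, fn, {}

        def add(self, child):
            self.children[child.name] = child
            return child

        def find(self, path):  # unix-style lookup, e.g. "blur/v_pass"
            node = self
            for part in path.split("/"):
                node = node.children[part]
            return node

    root = Node("root")
    blur = root.add(Node("blur"))                 # the "group" node
    blur.add(Node("h_pass", fn=lambda img: img))  # primitives inside it
    blur.add(Node("v_pass", fn=lambda img: img))
    print(root.find("blur/v_pass").name)          # v_pass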

Programmers seem to talk about visual programming languages as if they’ve failed, but outside of software engineering they appear to me to be more popular than text-based languages. Some examples are VFX, compositing, audio/MIDI, prototyping, and automation.

My own working hypothesis is that plain-text-based version control is why most software engineering uses text files. The big differentiator between software engineering and other fields is that there's very little collaboration on visual-programming-language projects; they're almost always made by one person.


I think version control is a decent reason, but it's more the full ecosystem. As an example, how can I grep a set of visual functions for some specific pattern - and what even is a meaningful pattern to look for?

That lack of an ecosystem makes most devs nervous, and we tend to be (rightfully) distrustful of all-in-one ecosystems, since your program (probably your livelihood) is then tied to the continued support of that ecosystem.

It's one reason this movement is interesting to me: if we can establish a common communication medium, specifically a non-proprietary one, then tool builders can come along and start building out that ecosystem - likely as inefficient hacks initially, but ones that can be refined as the platform gets more focus.
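
Concretely, once there's a common serialization, the "grep" question gets a mundane answer, because searching becomes a structural query. A hypothetical sketch (the file format and node fields are made up):

    import json

    doc = json.loads("""{"nodes": [
      {"id": "n1", "type": "Multiply", "params": {"k": 2.0}},
      {"id": "n2", "type": "Clamp", "params": {"min": 0, "max": 1}}]}""")

    # the visual-programming analogue of grepping for a pattern
    hits = [n["id"] for n in doc["nodes"]
            if n["type"] == "Clamp" and n["params"].get("max") == 1]
    print(hits)  # ['n2']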


I don't think version control has anything to do with it.

If there were a highly compelling and productive general-purpose visual language, storing the data as text, and even providing tools to aid in things like merge conflicts, would be minor details. But when you actually dig into things like (the excellent) Prograph and its descendants, you find out that shapes and lines are actually less intuitive than text for nontrivial code bases. Contrary to what one might expect, a picture is not worth a thousand words.

Perhaps a new visual paradigm will be introduced that changes things, but there's really nothing today that competes with even your least favorite popular text based language.


Why would you say the examples I gave above don't compete with "your least favorite popular text based language"? My whole point is that not only do they compete, they're actually winning in most use cases outside of software engineering. Once you grant that premise (which I'd love to hear someone debate), the question just becomes: why do plain-text programming languages win for some use cases and not others?

The discussion of "general purpose" programming languages doesn't really make sense with respect to visual programming languages because visual programming languages necessarily exist within a GUI, and GUIs themselves don't scale to general purpose. So what we actually have is specialized visual programming languages that double as (or exist within) specialized GUI applications.

That system appears to be working quite well at creating much more efficient tools for non-programmers to accomplish programmatic tasks. This kind of poor mapping of textual programming languages onto certain problems is why the Stripe globe is state-of-the-art text-based rendering (https://stripe.com/blog/globe) while people are doing algorithmic architecture in Houdini (https://mvsm.com/project/imac-pro). It's not even comparable; text-based languages are getting absolutely decimated outside of software engineering by better tools that leverage visual programming languages.


Your link to "algorithmic architecture" was such a letdown. These guys weren't doing architecture, they were just making a flashy Apple ad! Can you expand on what you mean when you call the Stripe globe and the Apple ad "state of the art"? I assume you don't mean that these are some of the most technically proficient graphics you've seen, because that would just be silly.

And comparing something developed by a design studio with god-knows-how-much rendering time to something designed in-house that runs live in your browser seems a little strange.


You might like this link a bit better: https://www.rhino3d.com/6/new/grasshopper/

For context, "Grasshopper" has been a nodes-and-boxes plugin for Rhino 3D—a 3D modeling app—for many years. By "new" they mean that it has been promoted from plugin to just part of Rhino itself. But if you look for people doing things with Grasshopper, you'll find a ton of examples.


Yes, I would consider the examples "some of the most technically proficient graphics", but you can find practically infinite examples of impressive procedural work done with visual-programming-language-based tools.

I'd love to find more impressive examples using text-based programming languages though, those are harder to find.

I agree that comparing a video to something that runs live obviously isn't fair, but that's not really the axis I'm concerned about here. Plain-text code seems to struggle at the artistic level versus visual tools, not the performance level. The counterargument I'm looking for would be impressive visuals created purely in code, either pre-rendered (e.g., as a video) or running live; I don't really care which, I just want to see visually impressive work.


I have another theory about why it doesn't scale. Visual programming is very spatial. This may seem helpful at first, but it actually gets in the way of the development process.

Developing isn't just writing (drawing) new code. It's an iterative process of juggling many pieces of functionality so they work well together. When you have to refactor some logic, blocks suddenly move to different places, and their relative positions change. Whenever related blocks change relative orientation, your internal model for navigating them becomes obsolete. The human brain isn't used to this, and it creates discomfort and frustration.

Traditional programming is done with text files, in which functions have a 1D organization. Because 1D has fewer spatial relations (just above and below), code reorganization is cognitively easier. It's easier to find a good place for a piece of code, and it's easier to remember where it lives.

I think a truly usable visual programming tool should not allow users to place blocks at arbitrary positions, but should automatically (and predictably) arrange pieces on screen. Structural text editors (for example [1]) are a step in the right direction, while connected blocks seem to me like a dead end.
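
For what it's worth, the "predictably arrange" part is cheap to get for dataflow graphs: assign each node a layer from a topological sort, so a refactor can never scramble relative positions. A minimal sketch (node names made up):

    from graphlib import TopologicalSorter

    # node -> the nodes it depends on
    deps = {"render": {"shade"}, "shade": {"geo", "light"}, "geo": set(), "light": set()}

    layer = {}
    for node in TopologicalSorter(deps).static_order():
        layer[node] = 1 + max((layer[d] for d in deps[node]), default=0)
    print(layer)  # e.g. {'geo': 1, 'light': 1, 'shade': 2, 'render': 3}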

Another problem is structuring the code - 2D space is simply too crowded to represent layers of abstraction in a flattened way. The hyperbolic plane would help here [2]. Not every bit of the whole design can be shown at once (that would result in information overload), but the editor could show what is in focus, plus one or two levels of context.

As to the success of existing visual languages, I'd argue that they're all targeted at non-technical people, and they all run within sandbox environments developed with traditional text code. Why aren't their creators dogfooding? My own answer is that existing visual languages can't support the necessary amount of complexity.

[1] https://git.sr.ht/~technomancy/anarres [2] https://www.youtube.com/watch?v=8bhq08BQLDs


> Visual programming is very spatial.

I think this is worth stating, but it really means that expressions take up much more space.

This can be confronted from two directions.

First, don't use nodes and noodles for expressions, because it doesn't help clarity and hurts density. Use nodes at a higher level instead of searching for a silver bullet.

Second, of course one flat workspace does not scale. Node-based tools already have ways of grouping nodes, as well as making groups that can be instanced instead of duplicated.

> connected blocks seem to me like a dead end.

Connected blocks of any sort mirror how programs work in the first place. Automatic organization is typically pretty helpful though.

> Another problem is structuring the code - the 2D space is simply too crowded to represent layers of abstractions in flattened way. The hyperbolic plane would help here [2].

I'm not sure what this means, but I'm guessing it is also solved by grouping nodes and making components by groups that can be instanced.

> As to success of existing visual languages, I'd argue that they are all targeted at non-technical people and they all run within sandbox environment developed with traditional text code. Why aren't their creators dog-fooding? My own answer to this is that existing visual languages cannot support the necessary amount of complexity.

This is definitely not true. There are many extremely technical people using visual languages, because they're a huge speedup in productivity - more interactive and less error-prone by a huge degree. Shader writers use nodes instead of text all the time. Lots of tools have transitioned from being text-based to being graphs. The difference is that they're domain-specific, not general-purpose, programming languages. It's the general-purpose nut that hasn't been cracked.


There are version control systems that aren't text-oriented. There's Autodesk Vault, for drawings and 3D designs. There's Alienbrain, which is for games, movies, and even automotive design, where assets can include a game level, a movie clip, a 3D model, a motion-capture session, and a fender. "Previews are available for several hundred file formats". There's also the new NVIDIA Omniverse, which is fancier still.


Visually programming shaders and composites works extremely well, though it is important to realize what they don't do.

Anything that needs to be sequential is not done well with nodes, nor are branching and looping. Also, programs like Nuke, Houdini, etc. are basically limited to one data type (images) or a few data types (images, geometry, 1D channels). Shaders are limited to their primitive data types as well.

The advantages are the ability to see output at every stage, work in real time (including seeing errors in real time), etc.

You also get to see descriptions, limitations and special interfaces for each parameter. Not only that, but parameters can be switched between constants, expressions and arrays (channels) easily for debugging. Overall there is a lot more information going on.

I don't think source control has anything to do with it. Anyone can save more versions of a file and many are text. A lot of times version control is used with text fragments that make up reusable groups of nodes.


> Programmers seem to talk about visual programming languages as if they’ve failed, but outside of software engineering they appear to me to be more popular than text-based languages.

There's a large category of conventional wisdom on HN that's categorically false yet constantly repeated—the idea that "visual programming languages failed and are now dead" is one of those. It's... irritating :-)


Unless something has changed, that's some PR hyperbole.

The Dynamicland I experienced was this: a piece of paper represents a function written in a text-based programming language. You could choose to have inputs and outputs based on proximity, but you didn't really "code" using the blocks - you "code" using a text-based programming language. The sheet of paper shows the output of its text-based code. If you want to edit that code, you grab a keyboard, point it at that sheet, and then you can toggle between showing the text-based code and showing its output projected on the sheet.

I didn't see many examples that composed all that well. I saw a few; for example, there was a plant->bunny->wolf simulation. Plants spawn randomly, bunnies eat plants, wolves eat bunnies. That was written in a text-based programming language, represented by one sheet of paper.

Another sheet of paper had text-based code that would graph a value over time. If you pointed its input at the sim, it would show a graph of the plant population, or the bunny population, or the wolf population. I don't remember how you chose which output to graph; it might have been based on which edge of the sim paper you connected to.

Another sheet of paper had text-based code that would output a value based on its orientation and draw that value on the sheet - in other words, it implemented a knob. Lines were coming out of the simulation sheet, so if you placed the knob such that a line from the simulation sheet touched the knob sheet, the knob would adjust some parameter of the simulation, like plant spawn rate or wolf running speed.

But you can quickly see that if you wanted to adjust 6 parameters (plant birth rate, bunny spawn rate, bunny reproduction rate, bunny speed, wolf spawn rate, wolf speed) and you wanted to graph the populations of all 3 types, you'd quickly run out of space.

Further, writing anything more complex than the simple programs you could write in 30 minutes seemed problematic. In other words, as a learning environment or a toy it was super interesting, but at the time they were pitching this not as an educational thing but as an experiment in "computable surfaces", and it was hard to see it as more than something you bring kids to on a field trip, where one or two experiences is the end of it.

They said that this particular implementation (paper, projectors) was also not the end goal. You can imagine doing the entire thing with AR glasses (so no projectors needed).
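
If it helps to see the shape of the wiring, here's a toy model of that proximity binding (hypothetical Python, not Realtalk - the real system is a claim-based Lua environment):

    import math

    # each sheet of paper has a position on the table; a knob's value comes
    # from its physical rotation, and it drives whichever page is nearest
    pages = {
        "sim":   {"pos": (0.00, 0.00), "params": {"plant_spawn_rate": 1.0}},
        "graph": {"pos": (0.60, 0.10)},
        "knob":  {"pos": (0.05, 0.12), "angle": math.pi / 3},
    }

    def knob_value(angle, lo, hi):          # map rotation onto a parameter range
        return lo + (angle / (2 * math.pi)) * (hi - lo)

    def nearest(name, candidates):
        px, py = pages[name]["pos"]
        return min(candidates, key=lambda c: math.dist((px, py), pages[c]["pos"]))

    target = nearest("knob", ["sim", "graph"])   # re-evaluated every frame
    if target == "sim":
        pages["sim"]["params"]["plant_spawn_rate"] = knob_value(pages["knob"]["angle"], 0.0, 5.0)
    print(target, pages["sim"]["params"])        # sim {'plant_spawn_rate': 0.833...}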


Thanks for writing this up. I'm always interested in new approaches to visual programming that move away from text -- your comment tells me that Dynamicland is still firmly anchored in text-based territory.


There are some interesting examples of large visual programs at https://blueprintsfromhell.tumblr.com/

I think the goal of Dynamicland is to build computational paradigms other than procedure-oriented programming. Individual cards can call Lua subroutines, but the emergent behavior between cards is not the same type of composition as connecting subroutines.

It reminds me of the gameplay in "Baba is You": the "program" is the set of rules currently in play, and the emergent behavior is the set of moves allowed by those rules.
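
That "rules currently in play" framing is easy to sketch: behavior comes from whichever rule cards are on the table, not from a fixed call graph. A toy model (numbers made up, Python standing in for the Lua cards):

    # behavior emerges from whichever rules are physically present
    world = {"plants": 20, "bunnies": 5, "wolves": 2}

    def plants_grow(w):
        w["plants"] += 2

    def bunnies_eat(w):
        eaten = min(w["plants"], w["bunnies"])
        w["plants"] -= eaten
        w["bunnies"] += eaten // 3

    def wolves_eat(w):
        eaten = min(w["bunnies"], w["wolves"])
        w["bunnies"] -= eaten
        w["wolves"] += eaten // 4

    rules_in_play = [plants_grow, bunnies_eat]   # add wolves_eat by "placing the card"

    for tick in range(3):
        for rule in rules_in_play:               # every rule present fires every tick
            rule(world)
        print(tick, world)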


> There are some interesting examples of large visual programs at https://blueprintsfromhell.tumblr.com/

Yes, same problem, and same solution, as the Blender game engine. ROS, the Robot Operating System (which is really a message-passing library that runs on Linux), is something like that too, with blocks connected by one-way connections.

> Emergent behavior between cards is not the same type of composition as connecting subroutines.

Good point. You can start to see where this model works. Things where there's a lot going on in parallel, and loosely coupled modules need some interconnection.

People keep re-inventing this idea, but so far, it tends to get really messy as it scales up. There's a second level of concepts needed here, something comparable to the invention of modules or classes or traits in programming.


There are plenty of node interfaces that work well, but it will never work to have everything sprawled out in one space. In Houdini and TouchDesigner you can just press 'g' to group nodes automatically. The group is given an automatic name by default, and you can reference the nodes inside with unix-style paths -> /group/subgroup/nodename

Blender's interface problems are about their execution of them, not the fundamental ideas.

Ever heard of a kitschy dive bar having the bathroom door labels on the other door, with an arrow pointing to the actual door? |Men ->| _ _ |<- Women|

That's a basic description of Blender's interface. It seems like the fever dream of someone who has never actually had to use it or explain it to anyone.


You could say similar things about digital spreadsheets. They may not be a game changer for software developers, but they have an order of magnitude more users than all general purpose programming languages combined.


I went to a meeting at Dynamicland a few years ago, and got to spend a few days playing with it and chatting with people.

The positives are many: I think it's awesome to conceive of computers outside the actual computer boxes, especially in an educational setting. I think the notion of "collaboration" is way more engaging when it's kids standing shoulder-to-shoulder and actually pushing pieces of paper around and arguing about things out loud, rather than "collaborating" on a Google Doc, sitting at different computers. I also think that this can be enlarged: all sorts of working meetings could be improved by people standing shoulder-to-shoulder at a table and pushing around tangible objects, with simple programming about how they'll interact.

My negatives are what I took away from this, and may be incorrect or out-of-date, so feel free to jump in if I'm wrong, but I felt that Bret Victor was very much a purist regarding his vision, and had no desire to help spawn clones or variants of Dynamicland anywhere else. In many ways this may be laudable, but it felt like he was protecting his baby from going out into the wild, which has meant that the spread of ideas and possibilities has been greatly curtailed. It seems like it will only ever be destined to be a tiny playground for Bret and the few friends working with him.


> I also think that this can be enlarged: all sorts of working meetings could be improved by people standing shoulder-to-shoulder at a table and pushing around tangible objects, with simple programming about how they'll interact.

I've worked at a company (~30 people) that used a physical kanban blackboard, with slips of paper and magnets.

The slips started out hand-written. Then someone set up a printer. Then someone realized there was still too much information being held in Redmine. Then someone connected the printer to Redmine. Then we decided to keep the long descriptions in Redmine, but priorities and assignments on the kanban board. Then someone decided we needed to keep ticket priorities and progress in Redmine as well, because computers are actually better when you need to sort and filter a mass of tickets. Then someone noticed it was difficult to locate either the physical representation of a ticket or its copy in Redmine to keep the two in sync. Before I left, we were throwing around ideas like printing QR codes on the tickets, or using CV/OCR. The printer would also get jammed, the paper tickets got lost, we never had enough magnets, and I hate chalk.

We had a very unusual (in my experience) policy of no remote / no WFH. I didn't mind, but I wonder how much more of an obstacle all this would have been if remote work were more common. It would certainly make zero sense in the pandemic world, but I didn't stick around long enough to find out.


I see how that might be construed as a negative, but IMO it's too early to tell whether it's a genuine setback for the project.

"Protecting his baby" might be a very wise decision at this point, if only because of how the reception generally goes; pigeonholing the project into something like "an AR coding environment" or "visual programming with projectors" is a very real risk that could damage the project's aims - even "clones" such as https://paperprograms.org/ make it abundantly clear that they are not attempting to be an "opensource Dynamicland".

It's only been 3 years since it was founded. While that might generally be considered an eternity in tech time, I feel it's barely enough to get one's feet wet given the scale of the project, which seems to aim to be decades long. Besides, the roadmap on their website mentions 2022 as the year they go public - so I, personally, am stoked for what that will bring.


If the goal is to prevent misconceptions about the project’s goals from becoming mainstream, then partially withholding access and information about the project seems like a poor way to achieve that goal.

People will just make assumptions about what the project is based on the photos/videos they see, but won’t absorb the deeper meaning because they won’t get to actually use it.


If you read their material, you'll see that they expect the project to "meet the world" in 2022 and are looking to achieve widespread adoption by 2040. As others have mentioned, this might seem like forever in tech-startup terms, but they aren't aiming to just make an "AR projector setup"; they have much more ambitious goals, so it seems reasonable to work on longer timelines too, including keeping the "baby" in the crib until it's ready to walk instead of crawl.


Might be wise, but might not be. Look at Xanadu: with the web we got a massive 'half-step' without any Xanadu hands on the wheel.


@SamBam, the concern about Bret not asking for help or working more in the open struck me too. It's a large endeavor to try to popularize something like what he's doing. I assume a sense of showmanship and worry about people misrepresenting it are a big part of the dynamics. Perhaps sponsorship exclusivity too, but that's more speculative.


Bret had a whole lot to do with the UX paradigms of the original iPhone that most of the world is still using today. It seems so intuitive now, but compared to feature phones or early smartphones, the world he helped pioneer was a cohesive and malleable mental/physical model. He knows the consequences of missing the mark. If he truly wants to help human thought level up, as he's extolled for years, doing so with deliberate care and field data is of great importance. That said, we need physical, social computing in a big way. Zoom's limitations have been made clear to all, and we're primed for a new paradigm.


I think he only worked at Apple after the iPhone was released, although certainly a lot more was added afterward. He didn't invent multitouch or anything.


I gave Bret the same feedback. My point was that if he could package up a smaller version of it (say, with a pico projector and a simple webcam), then you could have a ton of creative efforts happening in parallel. There are a ton of use cases for the home and for schools, and I'm sad that the potential there is not being fully realized.


It's got a technological problem much like that of augmented reality, pico projectors, and such.

A movie projector projects onto an otherwise dark screen, but this thing projects onto a working surface: your black level is whatever counts as good light for seeing by.

The projectors have to be pretty bright, outside light has to be controlled, and the sensor array has to be pretty robust.

A good installation would be pretty expensive, but there must be a $1000 version that's possible.
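
For rough (entirely assumed) numbers: lux is lumens per square meter, so with 300 lux of ambient office light on the table,

    contrast = (E_ambient + E_projected) / E_ambient
    10:1 white-to-"black" at 300 lux ambient  ->  E_projected = 2700 lux
    over a 2 m^2 table                        ->  2700 x 2 = ~5400 lm

and ~5400 lumens is more than a typical home projector puts out, which is why controlling the ambient light matters so much.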


Games like Kerbal Space Program and Factorio exhibit a lot of the concepts from the original research agenda[^1] because they are "Dynamic Environments-To-Think-In".

[^1]: http://worrydream.com/cdg/ResearchAgenda-v0.19-poster.pdf


The shadows and latency are rough currently, but I totally agree that single, flat, rectangular input devices shouldn't be the future. Tactility is amazing, and it's a large part of why people prefer their old "dumb" interfaces.


I'm trying to think of the simplest way to solve this problem:

1. Start with an analog synth from the 70s that's just a bunch of big, bulky metal modules connected by patch cords.

2. Remove the guts of each module and shrink them down to lego size

3. Print an English word name on each lego-sized module

4. Connect them up with patch cords.

Now I have a physical artifact that represents a DSP graph, like the visual diagram interface of a program such as Pure Data.

Question: what is the easiest/cheapest way to continually pipe the topology data of the physical artifact into my laptop? It wouldn't be too hard to just have each module shouting a superaudio signal at my laptop and instantiating the corresponding object in the software when it's detected. But that doesn't cover the interconnections.

I could take video input of the artifact but that would be crappy UX with the user constantly having to "show" the artifact to the camera from a non-ambiguous angle.

I feel like there's some simple solution lurking out there with something like tinfoil and a 9v battery...


I love this question. I've been thinking about similar problems and assuming that the answer really is video, but it would be very interesting to be wrong about that.

Also, you might enjoy Eurorack modular synthesis. I find that it constantly makes me reconsider which things are virtual and which are analogue as I reconfigure my synthesizer.


Thanks, I'll have a look. I know most (all?) of the modern Buchla machines are analog/digital cyborgs at this point.


If we ignore the issue of powering them, a CV cable is enough for a half-duplex serial connection. You don't need to transfer a lot of data to establish the topology. I think (hand-waving!) every module could re-broadcast every broadcast it hears that doesn't already include itself, with its own ID appended. Those would make their way to the edges, where they can be picked up, and would describe all the possible paths through the system.

Some microcontroller in each brick with a bunch of UARTs would be ideal. Then somewhere you need a USB-to-whatever link.

(Maybe somebody really clever could put the right set of passive parts in each brick so that every topology of devices would be distinguishable by some sort of analog probing from the periphery. I'm not that clever.)
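
That scheme is easy to sanity-check in software before soldering anything. A toy simulation (module names and topology made up):

    # hypothetical patch-cord topology: module -> directly connected modules
    links = {
        "osc":  ["filt"],
        "filt": ["osc", "vca"],
        "vca":  ["filt", "out"],
        "out":  ["vca"],
    }

    paths = [[m] for m in links]          # every module announces itself
    heard_at_edge = set()
    for _ in range(len(links)):           # enough rounds to cross the graph
        nxt = []
        for path in paths:
            for nb in links[path[-1]]:
                if nb not in path:        # re-broadcast with own ID appended
                    nxt.append(path + [nb])
        heard_at_edge.update(tuple(p) for p in nxt if p[-1] == "out")
        paths = nxt

    for p in sorted(heard_at_edge):       # paths reaching the edge describe the graph
        print(" -> ".join(p))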


> If we ignore the issue of powering them, a CV cable is enough for a half-duplex serial connection. You don't need to transfer a lot of data to establish the topology. I think (hand-waving!) every module could re-broadcast every broadcast it hears that doesn't already include itself, with its own ID appended. Those would make their way to the edges, where they can be picked up, and would describe all the possible paths through the system.

Not too hard. 1-Wire, a very low-end LAN from Dallas Semiconductor, would be good for this. The parts are cheap, low-power, and powered over the connection cable.

(1-Wire requires 3 wires. You could use stereo phone jacks.)
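
On Linux, the enumeration half of this is nearly free: on a Raspberry Pi with the w1-gpio overlay, for instance, the kernel exposes every slave it finds under sysfs. A sketch (device IDs will vary; note this only lists the IDs on one bus, it doesn't recover topology by itself):

    from pathlib import Path

    # each entry is family-code + unique serial, e.g. 28-000005e2fdc3
    for dev in sorted(Path("/sys/bus/w1/devices").iterdir()):
        if not dev.name.startswith("w1_bus_master"):
            print(dev.name)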

There are probably musician applications for this sort of thing. Some people like cables.

A similar form of fakery is seen in DJ systems, where you have vinyl records that contain not music but time code.[1] The DJ can do DJ turntable stuff as if playing analogue records; they're just sending time-code info to the control unit, which has the audio in memory, and the output is the appropriate audio for that time code.

[1] https://en.wikipedia.org/wiki/Vinyl_emulation


> Not too hard.

Perhaps you're imagining less restrictive requirements for it than I am.

> 1-Wire, a very low end LAN from Dallas Semiconductor, would be good for this.

I'm aware of 1-Wire. It's master-slave. Can you outline exactly how you plan to use it in an unknown and reconfigurable topology, and how many separate master and slave interfaces you expect to have in each brick?


Dynamicland feels to me like The Mother of All Demos: ahead of its time, a groundbreaking practical application of interactivity research.

Dozens of companies and projects will spring from this. Just give it time.

We'll be back to using real objects instead of keyboards at some point in the near future. Bret's right that a lot of the difficulty in actually using computers to produce things is in translating human interactions into a medium the computer can understand, and translating the data a computer outputs into a medium humans understand. We aren't two-dimensional creatures. The state of the art cannot remain pictures and text forever.

I'm not sold that everything will be done this way - some things will always need the precision of automated, textual input (see tool-assisted speedruns in video gaming if you think human beings can match pre-programmed precision inputs).


They actually cite Engelbart at the top of the page as guiding the project's spirit.


I sent the following message to Dynamicland a few years ago and never got a satisfactory response. Does anyone know if they're doing anything about accessibility for blind people or people with other disabilities?

Hi,

First of all, Dynamicland's goal of "agency, not apps" resonates with me. I'm in favor of things that make programming more accessible to normal people.

However, I worry that Dynamicland will be a step backward for people with disabilities, particularly blind people and people with impaired mobility. For us (I'm legally blind myself), I believe the virtual worlds of today's computers aren't imprisoning but liberating. Consider that a blind person can't see that "fully functional" scrap of paper, and a person who can't use their hands can't write on that paper or manipulate the other physical objects in Dynamicland.

Do you have a plan to solve this problem while holding to your goal of "No screens, no devices"? It seems to me that there's no way to reconcile these conflicting requirements without making an exception for people who can't work directly with the paper and other objects.

Or have you decided that it's better to undo the equalizing effect of computers for people with disabilities, for the good of everyone else? I would obviously be disappointed if that's the case. But I understand that everything's a trade-off, and perhaps it's not reasonable to confine the majority to an inhumane way of working for the benefit of a few. So I don't mean this as an accusation; I really want to know.

Thanks, Matt


I think your question is important, and I wish you had received a reply. I think it's common for people's needs to be in conflict; everything is a trade-off, as you acknowledged.

But I think you baked into your question a false dichotomy. "Dynamicland" does not necessarily have to _undo_ the "equalizing effect of computers", nor does its absence mean that the majority is confined to an _inhumane_ (wow...) way of working.


> an _inhumane_ (wow...) way of working.

That was Bret Victor's characterization of being confined to a screen, not mine.


I think one could fairly easily expand the project to incorporate virtual participants who explore the physical workspace via computer, since the stuff only comes to life when there's a computer that can interpret everything. In the end there would still have to be someone who materializes the virtual objects made by remote participants.


This is an important point, but it does seem like computers could be providing more information to blind (or deaf) users - there are still other senses we don't use in UX (touch, heat, smell) and it seems like the real life concept could still be applicable somehow.


To contextualize this, check out Seymour Papert's work on LOGO, especially teams of kids working together to solve problems. Some of this is also related to Alan Kay's work. It's all inspiring, and shows that there may be many ways to compute.


Is the technology of detecting and using marked everyday items (sheets of paper, whatever) as input devices and parts of programs patented? I couldn't find anything on that. It feels like something that would benefit from being patent-free.


It was used in Killer Game Programming in Java by Andrew Davison, published in 2005. That book used a webcam and a color-coded bracelet you made yourself as an input device for one of the code examples. Microsoft had the Surface tabletop touchscreen (2.0, circa 2011) that tracked real-world objects [1].

1. https://youtu.be/57k2iJbotV4
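
Off-the-shelf versions of the tracking itself are easy to come by these days, e.g. OpenCV's ArUco fiducials, which are similar in spirit to the dot patterns Dynamicland prints on page corners. A sketch (assuming OpenCV >= 4.7 and a webcam at index 0):

    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    ok, frame = cv2.VideoCapture(0).read()
    if ok:
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is not None:
            for marker_id, quad in zip(ids.flatten(), corners):
                print(marker_id, quad.reshape(4, 2))  # marker identity + its corners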


Great to see this on the front page. I've been lucky enough to visit Dynamicland/volunteer there and it absolutely lives up to the hype.


I think this is a better paradigm than Augmented Reality glasses.

The medium is limited but interesting.

Perhaps this would be more portable if there were a projector underneath a table with a transparent glass top, and a camera on top to view inputs.

Potential Applications

1. Mapping (interactive maps, interactive routing, real-time weather, etc.)
2. Collaborative art
3. Engineering CAD and CAM modeling and design


Tilt5 might interest you as, IMO, the best paradigm for AR glasses: head-mounted projectors with retroreflective surfaces. https://www.tiltfive.com

Portable and multiplayer - and it comes with massive improvements in FOV and contrast due to its design.


The mythical AR glasses of the future should be able to do everything your proposed projection table setup can. I see both setups as just two different manifestations of AR as a concept.


Haha. Mythical indeed. But more importantly, they are impersonal. In a collaborative setting, they are simply not conducive enough.


In an ideal AR setting, everyone with a display will see everything everyone else sees. So the exact concept of Dynamicland would be completely reproducible in AR.

And you get the advantage of having optional private workspaces on top of that.


Are you thinking of VR? AR merges the virtual environment with the physical one. One fun example: I played a Finnish board game, sort of like Pictionary, with a Finnish friend. All the cards were in Finnish, which I don't know, but I could use my phone's AR translation app to "read" the cards in effectively real time.

AR glasses could hypothetically share a networked virtual environment, so I don't see why they couldn't be collaborative.


No, not VR. Interacting socially is better without a huge glass that covers half your face, and most importantly, covers your eyes.


Regular glasses also "cover your eyes". You are aware that AR uses transparent lenses, right? They augment the world around you; they don't take away from it.


Proper AR glasses would be completely see-through and not very intrusive.


This is the most "object-oriented" programming language I've seen.


What in the fresh hell! This is so awesome. I've always dreamt of something like this with some form of handwritten programming language, a group of people sketching a program on a single, giant canvas.

I guess this isn't a new project, but I'm glad it's been reposted.

I do worry that this format has some pretty sharp limits, just due to the spatiality and trying to cram functionality into a limited area of a room, amongst other people. Some form of code storage might need to be designed, so one could stack those little code papers on top of one another. Who knows.

Very thought-provoking project.


People who love Bret Victor's work and thought should know that he is also part of this effort.


Although I much admire Bret Victor, this Dynamicland project seems like a really, REALLY deep rabbit hole. I hope he finds it enjoyable and rewarding, but I do hope he rejoins the world and starts giving illuminating talks again!


It feels like this has been reposted regularly for years, but the demo videos and slightly threadbare use cases haven't changed at all. Re-imagining how we interact with computers sounds great, but it doesn't feel like this project is actually going to achieve its goals beyond letting a small number of lucky users play around with stuff.

I'm not sure if Darklang is going to succeed, but it seems like a far more fruitful direction from which to approach the problem. It's a very hard problem, and they're attempting to make programming a little bit more visual while not removing any power from the user, instead of making it entirely visual and almost completely dis-empowering the user. Importantly, it's actually accessible to people all over the world, and they can use it to achieve real-world goals.


> ...but it doesn't feel like this project is actually going to achieve its goals beyond letting a small number of lucky users play around with stuff.

Will they end up building some product that will be adopted by droves of customers and offers a completely new paradigm for interaction? Probably not.

But I'd argue that's hardly where "esoteric" research like this ends up going, and in my book that's OK. Bret Victor, who is behind Dynamicland, never shipped the full drawing app from his interactive visualizations talk as far as I know. Neither did anyone ever get to buy or download the editor he shows in "Inventing on Principle" as a new IDE that offers incredibly great feedback while programming.

Nevertheless, his talks are among the most inspiring things I and I think many others have ever seen in the area of HCI, and at least in my case are responsible for a large part of my renewed interest in the field.

Will anything out of Dynamicland capture people's imagination and enthusiasm like that as well? Maybe. Maybe not. But the point is to explore, and I applaud those who do.


The influence of his talks shows up everywhere and is explicitly called out as an inspiration: Elm's time-traveling debugger (and therefore Redux), hot-reloading work, Observable citing him as influential, and on and on.


Exactly my point — there's space for those who put together amazing but infeasible-as-product demos to communicate lofty ideas, those who take those and repackage them for "mass consumption", and of course the rare genius who does both, and anything in between.


I feel like putting together a janky tech demo is not especially impressive though. Anybody can think of screens projected onto paper. It's not particularly interesting that the tech demo is possible. Actually making it usable is the really hard part and the reason it remains a tech demo.

That's why Apple is rich. They do the hard part.


I personally hope that the output ends up being systems that work for individuals with their own Dynamiclands at home, not just strong group collaboration. I know there are already people who have setups done in coordination with the Dynamicland team.

I've visited Dynamicland too, and the vision of this more tangible, visible computation is bigger than the incarnation/progress last shown.

I'm a fan of Darklang's goals too. But I do think Dynamicland is targeting more approachable, teachable, and communal computing. Darklang is going to be more for professionals making backends, and it doesn't target any user-facing audio/visuals whatsoever. It could in the end, of course, so I hope it does well as an effort too.


> Darklang is going to be more for professionals making backends, and it doesn't target any user-facing audio/visuals whatsoever. It could in the end, of course, so I hope it does well as an effort too.

That is the plan, might be a while away though.


I hope Darklang succeeds too, but I personally believe that visual programming is at least partly the way to go. This is because I firmly believe that the "most correct" view of a regular program is a directed graph of computations, and directed graphs don't compress very well into text.

I'm building something visual to prove (or disprove) this concept, but I'm also thinking that power users would want a textual language for faster input, as GUIs are better for output while TUIs are better for input.
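
To make the compression point concrete, take a diamond-shaped computation: in a graph, a shared subexpression is one node with two consumers; in text it must either be duplicated or given a name. A tiny sketch:

    # "s" is one shared node referenced twice; text would write (a+b)*(a+b)
    ops = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}
    dag = {"s": ("add", "a", "b"), "p": ("mul", "s", "s")}
    env = {"a": 3, "b": 4}

    def evaluate(node):
        if node not in env:
            op, x, y = dag[node]
            env[node] = ops[op](evaluate(x), evaluate(y))  # shared node runs once
        return env[node]

    print(evaluate("p"))  # 49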


I agree that visual programming can be great - my argument would be that starting from a model we know works and making incremental steps toward something more visual is more likely to succeed than attempting to jump all the way to something fully visual.


I'm a fan of what Observable is doing: combining bubbles of classic code with intermediate audio/visual output as you build up to full interactive infographics/demos.

Some types of code (hopefully less rather than more) are just not very flow-based and don't have a concrete visual representation.


In general I agree that incrementalism is the best way to improve a model. I'm just not certain how to do that with such a drastic change as going from textual to visual - but Darklang seems to be doing a great job at it, and I do hope they succeed in their mission.


Thanks! We're very much of the opinion that we got into this mess we're in (gestures broadly) because everything has been incremental to date. That's how you get layers upon layers.


Yes, 100% - my founding hypothesis is that because language progress so far has all been improvements on opcodes, the idioms of programming are still based on the human catering to the computer rather than the other way around. It gets subtler over time, but we still have to, for example, emulate execution in our heads while coding, which is really, really hard to learn.


Text is much more searchable. Any binary format ultimately bottoms out in text; we reserve binary for non-input, non-human-readable formats.

Graphs are great for showing relationships and high-level information at a glance. Humans can recognize visual shapes (this applies to text as well; many editors now show text minimaps).

But past a certain scale, the best we have for relationships is probably hypertext links and other URI references.


Dark actually started with a directed-graph concept. It worked pretty nicely, but users (developers) hated it. We tried to get them to write a simple FizzBuzz, and it was very challenging, so ultimately we abandoned it.


Hey Paul, I'm a big fan of what you guys have been doing. Yeah, I did see that you went the visual route early on and it didn't work out, but I think there's quite a large landscape yet to explore. I'm also aiming to work with people who aren't currently developers (i.e. the usual no-code market) - I think developers have already gotten over the hardest hurdles of learning textual programming, and thus aren't interested in the velocity-for-simplicity tradeoff that visual programming usually involves. This is all hypothetical for now, though, as I'm still building the platform.


How is darklang related to dynamicland? I only found https://darklang.com/


Right, it honestly feels like the parent commenter either misunderstands Dynamicland's aims, or is willfully drawing comparisons to make it seem like the projects are tackling the same issues, out of which Dark comes out as the more successful one (at least measured by "adoption").

I don't see what the "almost completely dis-empowering the user" is about either; if anything, Dynamicland seems the more empowering of the two, whereas Dark's endgame is more efficient development of a fundamentally conventional type of software.

It feels rather dishonest; the comparisons are sweeping and vague, with very little substance as to an actual criticism.


"2020: Realtalk-2020. We're bringing together everything we've learned to create the next iteration of our Realtalk computing system. Realtalk-2020 will form the foundation for the next decade of research and applications."

"2022: Dynamicland meets the world, in the form of new kinds of libraries, museums, classrooms, science labs, arts venues, and businesses. We will empower these communities to build what they need for themselves, to design their own futures."


The issue here is research funding.


> instead of making it entirely visual and almost completely dis-empowering the user

Why do you think that making something visual completely disempowers the user?


Building on the way Dynamicland makes assembling programs a physical thing, I wonder why this couldn't fairly easily be extended to something like scratch[i]/snap![ii]/blocks[iii]?

[i] https://scratch.mit.edu/developers

[ii] https://appinventor.mit.edu/explore/understanding-blocks.htm...

[iii] https://snap.berkeley.edu/


I saw something like this in a presentation at the MIT Media Lab a few years ago (2016?). It's impressive, but it was a little glitchy, and it felt like overkill for the tasks it was being used on.

However, I'd guess that new products and ideas would proliferate if we somehow developed incredibly small and powerful projectors - something that could be attached to a table, or even to your smartphone, and produce the same experience. I'd certainly want that :)


At a quick glance, this project reminds me of the teamLab Borderless museum in Japan: interactive shapes, colors, and lines on real materials.


Bret Victor has inspired me and my team for many years. His blog is a must-read for PMs and curriculum designers. It's a pity that Dynamicland is not open-source, or at least is not scalable to millions/billions of people. IMHO, building a scalable product and business around an innovative idea is crucial to helping a huge number of real people.


Reading this reminded me that somewhere out there I've seen another computer system you operate by moving things around in real life.

But after a minute I realized I was just thinking of the Bistromathic Drive in Hitchhiker's Guide, which is a spaceship you pilot by ordering things at an Italian restaurant.


Reminds me a lot of some of the things you can see at Ars Electronica. If you're in Austria, near Linz, very worth a visit: https://ars.electronica.art/center/en/


The actual interactions seem rather limited: rotations, translations, associations. I'm not sure how much fidelity that actually gives you. How many problems can you really solve with that?


If only I could find people to sit beside like in that video.


Has anyone used Scratch-style blocks as real physical blocks?

They fit together like puzzle pieces - kind of a neat visual that could translate to a physical medium.


I'd love something like this for my kids...

Maybe have a board with Lego slots for blocks that represent flow, and maybe a little bit of interfacing with a computer to assign graphics to an object, say a sprite; then the blocks tell it how to move. You could maybe have blocks that represent data, which you can name, but the data stored is just JSON, so there's no real schema, for simplicity...

You could create interactive stories from a board of Legos, or games, like having a squirrel escape a yard without running into the dog, etc...


Man, that looks cool! I'd love to use paper and a projector instead of looking at a screen all day.


I imagine playing collaborative tower defense on this surface would be pretty cool


Is it open-source yet? I can't seem to find any source code anywhere.


As soon as I saw it I was like "this looks like a Bret Victor project."

Bingo.


It seems that this whole thing is mostly about colorful stickers.


This looks like the most vaporware thing I have seen since Duke Nukem Forever.



