I think that's exactly the kind of workflow we should be able to create a graphical interface for.
I sometimes wonder if our ability to imagine what visual interfaces can do is limited by our experience with current GUIs. Right now, as others have pointed out, we're limited by the fact that GUI interfaces aren't really composable the way CLI commands are.
But what if we had something that is composable -- maybe something like Smalltalk on steroids -- where every program is a living object that can describe in detail what it does? Then, you'd be able to ask the program what inputs it requires, what its abilities are, and what outputs it can provide.
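To make that concrete, here's a tiny sketch of the idea (all names here are invented for illustration, not any real Smalltalk API): each "program" carries a description of what it consumes and produces, so a visual shell could query two programs and decide whether they snap together.

```python
# Hypothetical sketch: "living objects" that describe their own
# interfaces, so a visual environment could wire them together safely.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Program:
    name: str
    inputs: list[str]      # kinds of data this program consumes
    outputs: list[str]     # kinds of data this program produces
    run: Callable

    def describe(self) -> str:
        """What a GUI would ask before offering this program as a node."""
        return f"{self.name}: {self.inputs} -> {self.outputs}"

def composable(a: Program, b: Program) -> bool:
    """b can follow a if everything b needs is among a's outputs."""
    return set(b.inputs) <= set(a.outputs)

# Two toy programs a visual environment might snap together.
lines = Program("lines", ["text"], ["line-list"], lambda t: t.splitlines())
count = Program("count", ["line-list"], ["int"], len)

print(lines.describe())                  # lines: ['text'] -> ['line-list']
print(composable(lines, count))          # True: their interfaces match
print(count.run(lines.run("a\nb\nc")))   # 3
```

Nothing deep, but it shows how interface descriptions (rather than untyped byte streams, as in a Unix pipe) would let a GUI suggest valid combinations before you run anything.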
With something like that, it would be possible to visually put together interesting combinations of programs that we might not have created otherwise. Sort of like what Bret Victor describes in 'Inventing on Principle'[1], where certain solutions to problems become much more apparent when you can manipulate things and try out new combinations quickly. I'm certain things like this exist (and have existed) in various forms, but I don't think we've explored the concept as fully as we can.
On the other hand, though, I agree with a quote from Eben Moglen earlier in this thread where he talked about 'point and grunt' interfaces. We've been iterating on the same paradigm for a long time. Touchscreens are better in some ways, and worse in others. We gain more physical interactivity with our devices, but we lose a lot of precision because we're now just smacking meat sticks against a pane of glass.
But you know, it's easy for me to sit here and complain about this on the internet. Actually doing something about it is much harder. Maybe it's time for me to fire up Smalltalk and give it a try. :)
[1] https://vimeo.com/36579366