
I don’t see Engelbart’s work - and all the other similar pioneering work that routinely gets brought up anytime HCI folks get together to wax poetic about what could be - as a reminder of how much more progress lies ahead of us.

In fact, most of the things shown in those demos can be done today - perhaps in narrower ways, but fundamentally we have collaborative document editing, video chat, complex live drawing tools, multi-stream video editing, handwriting recognition, etc etc etc. All operating at scales that could only be dreamt of in those early days.

What it brings to my mind, though, is that software doesn’t exist in a vacuum. Writing software requires many hours of human effort, and maintaining it even more so. Sustained, focused, organized human effort requires funding of some sort [0]. Software exists to solve problems, and in our globalized capitalist economies, that means the value of software does not lie in reaching some paragon of pure academic composability/extensibility, but in solving concrete problems for people while meeting some arbitrary cost/tradeoff constraints.

This is why those “tools for thought” demos seem to always rehash the same ideas and get stuck circling the same drain that Engelbart & Kay & friends charted 50 years ago; in the meantime, some industrial company you’ve never heard of is paying a few consultants big bucks to come up with “boring” Excel spreadsheets that are just as much “tools for thought” as anything else humans use.

Now, am I satisfied with this state of affairs, and would I love to see what models for writing and maintaining software could exist in a non-capitalist culture? Absolutely not, and absolutely.

But that, rather than some fundamental/conceptual “progress” still to be made, seems to me to be the root cause of why we’re still chasing the Engelbart mirage over half a century later.

[0] And speaking of funding, it is interesting to look at what funding environments those open-ended “tools for thought” projects tend to come from: more often than not academia, or, in the case of Ink & Switch, an independently wealthy PI. Places directly connected to money-making ventures, like Xerox PARC or MSR, are short-lived and few and far between.




> In fact, most of the things shown in those demos can be done today

That's not a coincidence, as modern OSs were inspired by those demos. But isn't it a sign of a lack of progress that the pinnacle of modern technology is being able to do the same things shown 50 years ago _slightly better_?

Why haven't there been equally revolutionary ideas in HCI since then? We have better screens on smaller computers, and we've perfected tapping on glass and haptic feedback, but what we can do with all this technology is awfully limited.

Add to that the invasion of advertising into every facet of computing, which perverts companies' incentives away from developing technology that benefits humanity and toward exploiting it, and in many ways we've regressed.

XR seems to be the next step forward (itself not a novel idea either), but so far it seems that it will be ruled by the current tech giants, which is far from enticing.


> isn't it a sign of a lack of progress that the pinnacle of modern technology is being able to do the same things shown 50 years ago _slightly better_?

This was implied, but thank you for capturing the spirit succinctly.

I've got an old Mac IIci, picked up at a thrift store for $12 with an old version of Photoshop and an ethernet card, that I keep around to remind myself of how little we've progressed. For a machine manufactured in 1989 with performance measured in MHz, it may be noticeably slower, but that's not the point. It's that we're doing the same tired twentieth-century stuff we were doing 33 years ago, just slightly faster.

XR does seem like a path forward, but the so-called giants you mention are selling subsidized prototypes you can't even take outside without a stern warning that you may brick the device. It's been close to a decade since the DK1 came out. You'd think they'd be past the point where legs are a new feature.


A lot of what's kept us "stuck in the 20th century" lies in just how many human details need to be accommodated to fully computerize the workflow.

Through the early 1990s, hardly anyone was doing electronic file transfer regularly in their personal workflows. While there were many examples of phoning in remotely to be updated or to do certain kinds of work, it was a per-industry thing. The larger changes finally came to pass only as email and office networking gained widespread adoption. So... you didn't need computers everywhere in everyday life. They were a nice addition if you were writing frequently or wanted a spreadsheet, but the net outcome was that you could run a smaller office with less secretarial staff - and not a lot more.

In the '90s and '00s, the scope expanded to cover more graphics and messaging workflows. But it was still largely a 1:1 replacement of existing workflows in industry, with an import/export step that went to paper. And when you have the "go to paper" bottleneck, you lose a lot of efficiencies. Paper remained a favored technology.

It really wasn't until we had smartphones and cloud infrastructure that we could rely on "everything goes through the computer" and thus start to realize Engelbart's ideas with more clarity. And that's also where the "social media" era really got going. So it's like we've barely started, in fact.

All the prior eras of computing amounted to a kind of statement of "it'll be cool when". The future was being sold in glimpses, but predominantly, the role of the computer was the one it had always had: to enhance bureaucratic functions. And the past decade has done a lot to challenge the paradigm of further enhancement towards bureaucratic legibility. In the way that urbanists joke about "just one more lane, bro" as the way to fix traffic, we can say "just one more spreadsheet, bro" has been the way we've attempted to satisfy more and more societal needs.

But there is a post-Engelbart context appearing now: instead of coding up discrete data models, we've started strapping machine learning to everything. It works marvelously and the cost of training is a fraction of the cost of custom development. And that changes the framing of what UI has to be, and thus how computers engage with learners, from a knobs-and-buttons paradigm to "whatever signals or symbols you can get a dataset for."
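To make that contrast concrete, here's a rough sketch (using scikit-learn, with a made-up ticket-routing scenario; the function names, labels, and example strings are all hypothetical, not anyone's actual system). The first function is the "discrete data model" style, where the UI can only expose the knobs someone thought to code up; the second maps free-form text - the kind of signal you can just collect a dataset for - onto the same routing decision:

    # Hypothetical example. "Knobs and buttons": the UI mirrors a hand-coded
    # schema of discrete fields someone had to design up front.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def classify_ticket_by_form(product: str, severity: int) -> str:
        if product == "billing" and severity >= 3:
            return "finance-escalation"
        return "general-queue"

    # "Whatever you can get a dataset for": free-form text in, learned routing out.
    examples = [
        ("I was charged twice this month", "finance-escalation"),
        ("My invoice total looks wrong", "finance-escalation"),
        ("The app crashes when I open settings", "general-queue"),
        ("How do I change my avatar?", "general-queue"),
    ]
    texts, labels = zip(*examples)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(classify_ticket_by_form("billing", 4))           # fixed rules over fixed fields
    print(model.predict(["You billed my card twice"])[0])   # routed by the model, no form fields

The toy classifier isn't the point; the point is that the second version never forced anyone to define the "product" and "severity" knobs up front.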


Well, many here seem to worship an OS that is stuck in 1970s text interfaces, using 2022 hardware to run 1970s-style text CLI applications.

Only Apple, Google and Microsoft seem to care about pushing consumer OS experiences forward, and unfortunately they always take two steps forward and one step back, every couple of years.


> I've got an old Mac IIci, picked up at a thrift store for $12 with an old version of Photoshop and an ethernet card, that I keep around to remind myself of how little we've progressed. For a machine manufactured in 1989 with performance measured in MHz, it may be noticeably slower, but that's not the point. It's that we're doing the same tired twentieth-century stuff we were doing 33 years ago, just slightly faster.

On a computer the size and weight of a paper notebook, with all-day battery life and a display whose color quality/resolution/refresh rate were utterly unimaginable 33 years ago, I can have dozens of layers that are tens of thousands of pixels on a side, use advanced AI to generate textures like grass/clouds/etc. or segment arbitrary objects from the background, recomposite all that in real time, etc. etc. etc...

If you haven't used a computer since 33 years ago I highly recommend doing so.

If your argument is that we're still making 2D pictures, well, we've been doing that for a few thousand (if not tens of thousands of) years. If you want to make weird experimental 3D/4D/nD VR/AR/xR art stuff, there's lots of great tooling for that too (but it won't run on your 1989 Mac...)


> On a computer the size and weight of a paper notebook, with all-day battery life and a display whose color quality/resolution/refresh rate were utterly unimaginable 33 years ago

Those were completely imaginable 30 years ago to anybody with a physics background. It was just a matter of time before transistors shrank down to the nm range (of course with the enormous amounts of engineering work that made it possible, but there was no physical reason it couldn't be done).


It’s not about compute, though I’d suggest reevaluating the load involved above and where that might be accomplished before comparing apples.



