Hacker News
Alan Kay's talk at UCLA – Feb 2024 [video] (youtube.com)
322 points by sgoyal 10 months ago | hide | past | favorite | 140 comments



The idea of computing as the shared stage on which to reflect our own intelligence is what sticks out to me as the best way to frame what interacting with a computer means. It's not new, but Alan did a great job of motivating and framing it here. Thanks for posting this great reminder that what we use as computers today are still only poor imitations of what could truly be done if we could transport our minds to be more direct players on that stage. It's interesting to reflect the other way as well: what if we are the actors reflecting a computer to itself? An AGI has to imagine and reflect in a space created of our ideas. To be native, the AI needs better tools: the "mouse" of its body controlling the closed loop of its "graphics". How do we create such a space that is more directly shared, dynamically trading between actor and audience in an improvisational exchange? This is the human-computer symbiosis I seek.


A first-class programming language (not an LLM) to talk to the computer's OS along with a rich library is the most important missing component IMO.

Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

The result is users mostly clicking on signs to choose among predetermined tasks, like monkeys in a lab.


That's because computers are dumb servants. And that's a good thing, because computer solutions should be task specific. The human has agency and sometimes imagination, and the computer's job is to solve a problem as transparently as possible with as little cognitive load as possible.

Human language is optimised for human relationships, not for task-specific problem solving. It's full of subtext, context, and implication.

As soon as you try to use natural language for general open-ended problem solving you get lack of clarity and unintended consequences. At best you'll have to keep repeating the request until you get what you want, at worst you'll get a disaster you didn't consider.


Yes, the machine should be a humanity-amplifier, not a humanity-replacement.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” Frank Herbert, Dune


Herbert is describing a humanity amplifying phenomenon.


Herbert saw machine-thinking / AI as taking away human-ness, not adding to it. Or at least his Bene Gesserit did. Many quotes throughout the series about the corrosive effects of letting humans defer their complicated choices to machines, etc.


> And that's a good thing, because computer solutions should be task specific

Why? I enjoyed your comment but I don't follow why this is obvious?

Why should computer solutions be task specific rather than general?


So that when you ask what 1+1 is you get 2 and not a general solution


Why not both?

If I were making an API call, I'd want to get 2, but as an end user, I'd be delighted by "It's 2, and here's how I arrived at it ...".


"computer solutions should be task specific". Sums it up perfectly


There are things about human language that are fundamentally at odds with effective communication with a computer (or engineering in general).

One example is ambiguity. Every human language has faculties for ambiguity, which is crucial in many human interactions and relationships. The ability to make implicit requests or suggestions while maintaining plausible deniability is valuable, even in situations that are non-adversarial.

In contrast, when communicating with an OS or engineering a mechanical device, ambiguity is a negative and it's crucial to use language that is deterministic.

There is a very real degree to which people are only capable of thinking clearly about complex systems to the degree they are comfortable with tools such as mathematics and programming that can be used to unambiguously describe them.


> In contrast, when communicating with an OS or engineering a mechanical device, ambiguity is a negative and it's crucial to use language that is deterministic.

Very much a historical artifact. Modern systems (LLMs, for example) can in principle handle ambiguous inputs.


Is ambiguity ever desirable when communicating with them?


I would argue no - the entire concept of prompt engineering exists for this reason.


I agree. I think one way to achieve that is to first acknowledge the important distinction between modes of programming and levels of abstraction. Even in programming, we’re still restricted to a given domain. A web dev does not program to the network stack, and thus needs not know the internals of NICs and TCP frames. The web dev only interacts with the lower levels via high-level parameters (data) passed through the low-level APIs, which are already done and settled, not directly with code. The same goes for each layer and domain.

Now, expecting users to write actual code is not realistic or sustainable. What’s more reasonable is to imagine a scenario where users need only give the parameters for functions which are already done and settled. So, composing super-high-level functions in an environment that has an extensive library of utilities at its disposal seems the way to go. Users already do parameterization every day, such as when they press buttons and fill forms. What’s missing is simply an abstraction which gives them the power to compose those functions.

Regarding the paradigmatic framing for an end-user language, I think stack-based programming offers a superior model, because the context window is directly visible and easy to track and reason about. More than that, stack languages offer a good avenue for learning, since it’s the easiest model to teach by analogy; even physical analogies can be made. It boils down to just pushing and popping in a coherent order. The order of operations is, in fact, the program. Hard to beat this simplicity and universality.
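The stack model described above can be made concrete with a toy interpreter. This is only a sketch with invented names: literals are parameters the user pushes, and words invoke prebuilt, "already done and settled" library functions, so the order of operations is the program.

```python
# A minimal sketch (hypothetical names) of an end-user composition
# environment modeled as a stack language: users push parameters and
# apply high-level library functions in order.

def run(program, library):
    """Interpret a program: literals are pushed, known words invoke functions."""
    stack = []
    for word in program:
        if word in library:
            fn, arity = library[word]
            args = [stack.pop() for _ in range(arity)][::-1]  # pop back into call order
            stack.append(fn(*args))
        else:
            stack.append(word)  # anything unknown is a literal parameter
    return stack

# A tiny "settled" library the user composes but never implements.
library = {
    "upper":  (str.upper, 1),
    "concat": (lambda a, b: a + b, 2),
}

# "hello " "world" concat upper  ->  one result left on the stack
result = run(["hello ", "world", "concat", "upper"], library)
print(result)  # ['HELLO WORLD']
```

The user never writes function bodies, only the order in which settled functions consume pushed parameters, which is the point being argued above.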


To my mind, at least part of the issue here is that human languages are fundamentally lossy by nature: everything has too many meanings and requires inference. This is why communicating with code gets so much easier; it has to be exactly right or the computer fails. And because it is consistent, we can debug and fix that communication, and once that is done it will work with reasonable consistency.


That describes a limitation of computers (and current interfaces to them) though.

Requiring humans to describe stuff "unambiguously" is the easy cop-out to that.

Getting computers to handle the ambiguity and resolve it as well as humans do is what would be really amplifying. LLMs are a good step toward that, compared to a regular programming language/interface.


It's also a limitation of human-human communication, and why nations have ambassadors (who presumably have a shared context from which to start when dealing with a foreign nation).


I don't think diplomacy and ambassadors are there to handle ambiguity in communication.

They are there to handle conflicting interests and goals.

To that end, ambiguity in communication is something they use on purpose, not something they're there to solve.


> Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

Most humans would have nothing to say to an OS. I am a software engineer and most of the time I function many levels of abstraction away from the OS. Most of my work doesn't even run on the same OS I work in, and when it runs, it talks to an OS that's not even running on an actual computer, but a construct that looks like a computer, but is entirely made up by a hypervisor.


>A first-class programming language (not an LLM) to talk to the computer's OS along with a rich library is the most important missing component IMO.

That "programming language" might just be the ability to have generative interfaces (LLM based).

>Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

LLMs literally fix this.


>The idea of computing as the shared stage to reflect our own intelligence

We tried that, and it worked briefly. But the end result is the modern web/app landscape: commercialization, tits and cats, hating, techno-feudal and government control, partisanship bs, spam, narcissism - and rare sprinkles of intelligence here and there.


That’s true, but I think the core of human communication has always been half of what you’ve listed: low intelligence, hating, spam and narcissism. It’s just more obvious and amplified online, as you can see everyone engaging with it all at the same time. In pre-internet times, you’d need to be physically present and see at most a bar-full of people doing that.

I’m still very hopeful we will use the tech to help us with some non-communication-related things. Maybe something that’ll even off-ramp people outside the internet world.


>In pre-internet times, you’d need to be physically present

Because of that, the very real possibility of getting a punch in the face if you went over the line also helped curb those behaviors somewhat.


A sentiment often expressed, but I find it too close to “An armed society is a polite society” for my comfort.



We tried that, and it worked briefly.

Where and when?


In the early days of the internet up to the early years of the web.


do you have any specific examples?


I feel like the LLM interface will enable that. I wonder what Alan Kay makes of the current LLM revolution (he does talk a bit about it in the question section @ around 1:35)


I am a big fan of just-in-time, on-the-fly UI, and LLMs seem to make that possible now with some of the fast token outputs. There are a couple of experiments [1] using images for now, and I expect this to become more useful before too long.

[1] https://twitter.com/sincethestudy/status/1761099508853944383...


I'm not Alan, but I'm pretty sure he isn't too happy about it.


I heard of Alan Kay via Steve Jobs's intro of the iPhone [1], but otherwise know little about him - can anyone recommend other Alan Kay talks/essays/books?

[1] https://www.youtube.com/watch?v=VQKMoT-6XSg&t=10m2s


The Computer Revolution hasn't happened yet, OOPSLA 1997 keynote

https://www.youtube.com/watch?v=oKg1hTOQXoY

Others have already mentioned The Early History of Smalltalk, highly recommended. You'll probably want to read it a couple of times, revisit from time to time.

The big idea is messaging, or rather "ma"

http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...

"The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be."

"I think I recall also pointing out that it is vitally important not just to have a complete metasystem, but to have fences that help guard the crossing of metaboundaries."

" I would say that a system that allowed other metathings to be done in the ordinary course of programming (like changing what inheritance means, or what is an instance) is a bad design. (I believe that systems should allow these things, but the design should be such that there are clear fences that have to be crossed when serious extensions are made.)"

"I would suggest that more progress could be made if the smart and talented Squeak list would think more about what the next step in metaprogramming should be -- how can we get great power, parsimony, AND security of meaning?"


From the sibling link, I'd highlight:

Alan Kay: A powerful idea about teaching ideas at TED (2007) https://tinlizzie.org/IA/index.php/Alan_Kay:_A_powerful_idea...

Alan Kay: Normal Considered Harmful (2009) https://tinlizzie.org/IA/index.php/Alan_Kay:_Normal_Consider...

Back to the Future of Software Development (2003) https://tinlizzie.org/IA/index.php/Back_to_the_Future_of_Sof...

All of the STEPS reports - I especially like appendix E in:

https://tinlizzie.org/VPRIPapers/tr2007008_steps.pdf

> Appendix E: Extended Example: A Tiny TCP/IP Done as a Parser (by Ian Piumarta)

> Our first task is to describe the format of network packets. Perfectly good descriptions already exist in the various IETF Requests For Comments (RFCs) in the form of "ASCII-art diagrams". This form was probably chosen because the structure of a packet is immediately obvious just from glancing at the pictogram.

> If we teach our programming language to recognize pictograms as definitions of accessors for bit fields within structures, our program is the clearest of its own meaning. The following expression creates an IS grammar that describes ASCII art diagrams.

> (...) We can now define accessors for the fields of an IP packet header simply by drawing its structure.
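The quoted idea (the diagram as the program) can be roughly approximated even without the IS grammar machinery. Below is a hypothetical Python sketch, with invented helper names, that infers bit-field widths from a single RFC-style diagram row, exploiting the convention that each bit column in the ASCII art is two characters wide:

```python
# Hypothetical re-creation of the STEPS idea quoted above: derive bit-field
# accessors for a packet header directly from its RFC-style ASCII-art row,
# so the picture itself supplies the field layout.

def fields_from_diagram(row):
    """Map a '|version| ihl |...|' style row to (name, bit_offset, bit_width)."""
    fields, offset = [], 0
    for cell in row.strip().strip("|").split("|"):
        width = (len(cell) + 1) // 2          # two characters per bit column
        fields.append((cell.strip(), offset, width))
        offset += width
    return fields

def read_field(data, bit_offset, bit_width):
    """Extract a big-endian bit field from a bytes object."""
    value = int.from_bytes(data, "big")
    total = len(data) * 8
    return (value >> (total - bit_offset - bit_width)) & ((1 << bit_width) - 1)

# First 16 bits of an IPv4 header: version (4), header length (4), TOS (8).
row = "|version| ihl   |      tos      |"
layout = {name: (off, w) for name, off, w in fields_from_diagram(row)}

packet = bytes([0x45, 0x00])                   # version 4, ihl 5, tos 0
print(read_field(packet, *layout["version"]))  # 4
print(read_field(packet, *layout["ihl"]))      # 5
```

This is far cruder than Piumarta's parser (no multi-row diagrams, no generated accessor methods), but it shows why the diagram-as-specification idea works: the widths are already in the drawing.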


Thanks a lot for this (and to other posters). 20 minutes into Normal Considered Harmful and can tell I'm going to binge watch/listen to Kay over the next few weeks. Extremely appreciative.


There are more than 142 hours of Alan Kay talks I'm aware of [1]. If you are going to binge-watch some of these, I'd be happy to help filter out the most relevant parts for you. Not all talks are about computing, for example.

We can then publish the topic lists of each talk for others, so they can save time. We can also do this for the many papers [2], book reading lists [3], and lecture notes [4].

[1] https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay

[2] https://tinlizzie.org/IA/index.php/Papers_from_Viewpoints_Re...

[3] http://www.squeakland.org/resources/books/readingList.jsp

[4] https://internetat50.com/references/Kay_How.pdf

By no means is this a complete list; you can contact me at morphle at ziggo dot nl for a chat on how to compile a complete one.


Can't specifically recommend a talk but here's a menu of Kay talks.

https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay


I find it mind-boggling that he has been doing this for 50+ years.


IKR? 40 years ago, as a very young man, I went to a talk* by Kay, who I thought was 'old' at the time (haha), and that was already 15 years after he started at Xerox as a 30-year-old! Now it's 2024 and I feel old... It was a pleasure to see his talk 40 years ago and awesome to again watch a talk today from an 83-year-old Kay.

* after the talk, we even shared a cab to the airport and he graciously entertained all of my questions.


Highly recommend his talk on the power of simplicity. This was one of those lightbulb moments for me when I was a younger programmer:

https://www.youtube.com/watch?v=NdSD07U5uBs


My favorite:

Is it really "Complex"? Or did we just make it "Complicated"? https://www.youtube.com/watch?v=ubaX1Smg6pY

If you're interested in kids + computers + education, this 1955 Technology in Education House Committee Meeting is a surprisingly great watch, and has Kay alongside Seymour Papert: https://www.youtube.com/watch?v=hwsQn1Rs-4A


1995, not 1955.


Ha! Thanks for the correction. Comment's no longer editable.


Dealers of Lightning: Xerox PARC

This follows Alan Kay (as well as dozens of others) through their groundbreaking research at Xerox's research lab in Palo Alto, primarily during the 70s.

You will learn how these visionaries and personalities were largely at war among themselves, while HQ (2,000 miles away) largely ignored their marvelous outputs... until it was too late.

----

I just checked this morning, and was shocked to see that XRX's total market cap is "only" $2B, when they could have been Apple Computer [today ~$2,600B].

An interesting tidbit that many don't know about the Xerox/Apple relationship: Steve Jobs was allowed into the facility, on two separate tours, because he offered Xerox preferred stock in the then-upcoming Apple IPO, which they then held for only a few years.


Xerox made an ROI of 20,000% on the laser printer alone [1]. Alan did a better version of this lecture for Y Combinator startups [2], but it doesn't focus on the return on investment.

Xerox has shrunk a bit since then.

[1] https://youtu.be/NdSD07U5uBs?t=1828 [2] https://youtu.be/1e8VZlPBx_0?t=975


From my few hours of research on Xerox PARC, the laser printer did make a huge return (as you note, correctly); it's just "typical Xerox" that the engineers who invented laser printing (i.e. using a laser to scan) also had to sit on their invention for years because the executive/sales teams didn't believe in an already-functional technology. The first decade of Xerox "non-ink printers" used visible light; the laser literally had to be forced into the Xerox equipment by R&D!


Alan thinks [0] "The Dream Machine" book [1] is the only accurate and good book on Xerox PARC.

[0] https://news.ycombinator.com/item?id=22379275

[1] "The Dream Machine" by M. Mitchell Waldrop


Wow, thanks for both links! From Alan's HN comment, it does still appear that Dealers of Lightning is a worthwhile read ("nowhere near the bottom", per A. Kay), just confusing (chronologically) at times [I agree].

I've added The Dream Machine to my reading list (but will wait a year, as I just finished Dealers of Lightning in early 2024).


What PARC did (at least the UI that came out of it) could be repeated with a new UI paradigm: the virtual headset is criminally underutilized.

How much is computer interface restricted by the little window (even a 4k TV) we get as the view into the computer? The promise to me of a VR headset UI is having an arbitrary amount of real estate to display information (and not just 2D!)

And here's the thing about a good VR UI: it wouldn't just be about what's visible! Your brain can track the location of things subconsciously, so the log tailing window, the metrics window, that upload/download/file copy status window, can all be in some area you turn your head to glance at to get occasional information on.

Because the PARC UI's windowing system is designed to help out with the "limited viewport": it already recognized that people will do more applications/tasks than there is visible screen space, so you need an overlaid windowing system.

But right now, facebook is the sociopaths in control of VR headsets, so ... we'll be waiting for a while.

The talk is so refreshing, so indicative of research in the 1960s. My takeaway, in retrospect, is that all researchers had to do back then was the possible. What they didn't have to think about was the consequences of technology.

That is not the world we live in now. It is apparent that we are a civilization facing the Fermi Paradox/Great Filter. He alluded to it in the human evolution and human organizations part of the intro, but it is fundamentally structured around a post-WWII viewpoint, not the modern view where literally every person's consumption is a step towards collective destruction.


I'm excited to see if Apple's entry into this market (with AVP) will encourage more OEMs to explore these immersive computing environments [as the iPhone did with smartphones].

>the modern view where literally every person's consumption is a step towards collective destruction.

This is just one POV. From a fiat capitalist POV, consumption is essential to prevent collective destruction... although I agree more with your POV.

Thanks for the great comment.


He actively answers questions on Quora and those answers come in a format (short length - though his are unusually well thought out - and narrow focus) that is easy to browse.


I think Quora is "a poor imitation of old static text media" that's difficult to browse and not very accessible, since it takes so much clicking and waiting to open up "more" buttons and follow replies and threads, and it's impossible to easily print, or just scroll, skim, and search through. So I collected Alan Kay's answers, some discussion we had, plus some more discussion with David Rosenthal (who developed NeWS with James Gosling), here:

Alan Kay on “Should web browsers have stuck to being document viewers?” and a discussion of Smalltalk, NeWS and HyperCard:

https://donhopkins.medium.com/alan-kay-on-should-web-browser...

>Alan Kay answered: “Actually quite the opposite, if “document” means an imitation of old static text media (and later including pictures, and audio and video recordings).”

Also here's a collection of HyperTIES discussions from Hacker News (including some discussion with Ben Shneiderman about why hypertext links are blue):

https://donhopkins.medium.com/hyperties-discussions-from-hac...


Agreed about the Quora interface. I have a Jupyter notebook that scrapes all of Alan's answers and comments into a json database - he's answered a lot of questions!


He's been known to answer questions and leave comments right here on Hacker News: https://news.ycombinator.com/user?id=alankay


He's been here on HN as well in the past.


His answers on Quora represent a pretty extensive body of work. That's not true of his comments here (which are very appreciated, of course).

Quora's a better place, with a better UI, for that almost-blogging sort of thing.


The funniest to me was that by the time he was 5 he had read about 150 books: "I hit first grade, and I already knew the teachers were lying to me"


he elaborates on how concepts and ideas and schooling are misleading by nature quite a bit in the talk, though if he says that sentence i haven't gotten to it yet. (i've seen him say it elsewhere)

https://youtu.be/dZQ7x0-MZcI?t=8m33s 'our minds are like theaters. (...) we treat our beliefs as reality. that is the worst thing about human beings. and these theaters are tiny; we can only think of a few things at once, it's hard for us to take in larger things. (...) we grow up in whatever culture we were born into. (...) our conclusions tend to be societal—that's a disaster!'

https://youtu.be/dZQ7x0-MZcI?t=57m9s 'because our brains want to believe, rather than think, these can turn into something like a religion. the reason we don't want to come up with blind belief: there's always more, and what it is, is something that we can't imagine. so when we give a name to something, we (...) hurt our ability to think more about it, because it already has a set of properties, it already is the thing the word denotes.' (beautifully illustrated in the video)

https://youtu.be/dZQ7x0-MZcI?t=60m 'school is the best thing ever invented to keep you from thinking about something important for more than a few minutes'

perhaps unsurprisingly, when i explained this on here two days ago, it got downvoted to -2, because hn's comment section is kind of the intellectual antithesis of alan kay: https://news.ycombinator.com/item?id=39586470

i keep being optimistic but you people make me so sad


> i keep being optimistic but you people make me so sad

This is so very true.

I am not a hippy, but I can't immediately think of a better phrase than the immense toxic "negative energy" of HN comments.

And yet, when I say this to people outside of HN, they tend to react with astonishment. "But it's my favourite place online! I love it! I learn so much!" etc.

I don't think the site even realises, but its negative responses to this (deeply unwise and ill-considered) "Ask HN" question changed the direction of what was the most innovative company in the Linux space:

https://news.ycombinator.com/item?id=14002821


It is hard to imagine a discovery so profound as banging 2 rocks together. Why would you do it?

Where are today's rocks?


i have some in my head, do you want some?


I read more books before I was 12 than I read between 12 and 20... I certainly shared his dismay for what I was being taught.

(and I have consumed less (books) each decade. :/)


Alan Kay won the Turing Award in 2003 "for pioneering many of the ideas at the root of contemporary object-oriented programming languages, leading the team that developed Smalltalk, and for fundamental contributions to personal computing." His Dynabook [1], developed during the 70s, is the predecessor of modern tablets and laptops.



This is an outstanding paper; one of my favorites in all of computing. Read it and marvel at all that Kay has been part of.


Too much (self) marketing for my taste; I much prefer this one: https://dl.acm.org/doi/10.1145/3386335 (Ingalls, 2020, The evolution of Smalltalk: from Smalltalk-72 through Squeak)


This is probably my favorite because it was mostly unscripted:

Alan Kay : July 2007 : A Conversation with CMU Faculty & Students

https://www.youtube.com/watch?v=PFc379hu--8


Wow, this guy Vannevar Bush was the definition of being early:

> Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.


Actually I still don't know of a good way to make a path along several links and share that path with someone else later.


We've been doing it. Write a paper (or a comment) with footnotes that link in order to the resources that should be included in the trail. At your peril, feel free to add as little context as possible (down to none—so it's just the links).


Symlink to a shared directory


Thank you for the story.

My takeaway is the concept that GUIs mirror our minds as individuals the way good writing and theater do. I then ponder what could be a fair or useful representation of a collective mind, in a way an individual mind can process and work with.

Maybe that will be the metaverse (Snow Crash).

Thanks again.


It's sad to see that despite Alan Kay's lamentations, not a single comment mentions Doug Engelbart or the fireflies to be found.

Especially on a forum for a startup accelerator, it seems like that should have been the most intriguing part of the talk.


i'm not quite sure what he meant by the fireflies, but i did comment a bit on work (largely led by kay) on, in a significant sense, continuing engelbart's efforts, and on interesting new areas to explore https://news.ycombinator.com/item?id=39618408 and on interesting new things in formal methods in particular: https://news.ycombinator.com/item?id=39622073


I wish we had a popular operating system for end users like ourselves that was “live” all the way down.

It’s unfortunate we’ve been stuck with Windows, Mac, and Linux only


I have the same dream. A part of me wishes Richard Stallman set out on making a Lisp OS instead of making a Unix clone, but this was the mid-1980s and thus I understand the technical limitations and the social environment of the time. The 1990s could’ve been a better time; workstations and commodity PCs were powerful enough to run an entire Lisp or Smalltalk operating system, and there would’ve been substantial interest in such a system. Imagine had we ended up with a free, open source Lisp or Smalltalk OS running on the Pentium and PowerPC machines of the era as an alternative to Linux and the BSDs. I think this would’ve been an easier foundation to develop a FOSS desktop instead of the X11/KDE/GNOME/Wayland situation we have today.

But the dream isn’t dead. If only I had more free time…


The dream isn't dead, a number of people are working on it. Several live systems are demonstrated in talks on Youtube.

There are several working systems and even a real OS with native device drivers on modern bare hardware.

You can run legacy code under its qemu sandbox, but that is added only to broaden its appeal to customers as this part is not 'live'.


Sadly the work from the STEPS project seems to have disappeared, especially the Frank software he used in some talks. That looks like it would have been very interesting to play with.


I have most of the Frank/STEPS code still running. I posted a lot about it before on HN.

Yes, it's a lot of fun to play with, and I invite people to join in. 20,000 lines of code for almost all of personal computing.


Thanks, do you have any of it publicly available or would you prefer an email?


Not all is publicly available and most needs recompiling and explanation. I prefer email.


The email address in your profile is bouncing:

  [email redacted]
    host mx02.mail.icloud.com [17.42.251.62]
    SMTP error from remote mail server after pipelined end of data:
    554 5.7.1 [CS01] Message rejected due to local policy. Please visit https://support.apple.com/en-us/HT204137


you can try morphle73 at gmail dot com

This Apple mail server seems to reject your emails because it thinks they're spam.


> Sadly the work from the STEPS project seems to have disappeared

Unfortunately that phrase -- steps project -- is effectively un-Googleable.

Do you have a link at all please?



Apply to YC for fun and see if they'll fund you. It's a long shot due to the non-commercial nature. But who knows? If you do, include me in it. I'm not the best OS dev, but I do know Pharo. I'm up for making a YC application if you are! ;-)

My email is in my profile if you'd want to entertain this idea.


> A part of me wishes Richard Stallman set out on making a Lisp OS instead of making a Unix clone

Oh so very much so, yes!


I worked at Interval Research (yet another Palo Alto lab) in the mid/late 90s on a large project (which, had it completed and worked out, might have led to a proper IoT without the crap) in which we used Smalltalk all the way down to the custom hardware. A few hundred lines of C and assembler were tucked away in a corner, but interrupts and processes were all handled in Smalltalk.

It was fun. It could have been great.


Although impolite to ask, do you feel that it not working out may have been influenced by the implementation language?


People are different; that is why flame wars like Emacs vs. Vi still exist. It is incredible that we tend to assume there should be just one technical response to problems.


One could repurpose Pharo for this, I think. I'm not entirely sure how the environment would be an "at the OS level thing" but it should be doable to have a basic OS that basically boots a Pharo environment and then that's your OS.


Live in what sense?

To me Emacs fits the bill, or at least a subset thereof.


With a live programming model: https://en.wikipedia.org/wiki/Live_coding


Pharo (a Smalltalk), is listed in that article as an example of a live programming environment.

Alan Kay is one of the designers of Smalltalk.


Yeah I like Pharo


It would be cool to be able to just click on anything and adjust its code to however you like it. I guess that Smalltalk and its descendants allow this. But so does Emacs. It's not an operating system, but it covers a lot of the use-cases.


You would need live routing of messages. And the ability to reroute, filter, inject messages dynamically into code.

Why must everything be done as a function call? You can’t change anything without recompiling the code.

Today everything is implemented as a function call. Need to send a message? Call a function named “snd_msg” or something.
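As a rough illustration of the contrast being drawn here, this hypothetical sketch (all names invented) routes messages through a router whose routes and filters can be changed while the program runs, instead of through hard-wired function calls:

```python
# A sketch of "live" message routing: objects exchange messages through a
# router, and routes and filters are rebindable at runtime, without
# recompiling any caller. All names here are invented for illustration.

class Router:
    def __init__(self):
        self.routes = {}      # selector -> handler, rebindable at runtime
        self.filters = []     # each filter may drop or rewrite a message

    def on(self, selector, handler):
        self.routes[selector] = handler

    def send(self, selector, payload):
        msg = (selector, payload)
        for f in self.filters:
            msg = f(msg)
            if msg is None:   # a filter dropped the message in flight
                return None
        selector, payload = msg
        return self.routes[selector](payload)

bus = Router()
bus.on("greet", lambda name: f"hello, {name}")
print(bus.send("greet", "world"))        # hello, world

# "Live" change: reroute the same selector without touching any caller.
bus.on("greet", lambda name: f"hi, {name}!")
print(bus.send("greet", "world"))        # hi, world!

# Inject a filter into the running system.
bus.filters.append(lambda msg: None if msg[1] == "spam" else msg)
print(bus.send("greet", "spam"))         # None (dropped in flight)
```

With a direct function call, the caller is bound to one callee at compile time; here the binding is data, so it can be rerouted, filtered, or tapped while the system runs.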


> Why must everything be done as a function call? You can’t change anything without recompiling the code.

Emacs Lisp supports dynamic binding, so you can dynamically rebind function definitions at runtime.
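For readers without an Emacs at hand, here is a loose Python analogue of that kind of runtime rebinding (it does not capture Emacs Lisp's dynamic-binding semantics exactly): a caller that looks a function up by name at call time picks up a redefinition immediately, with no recompilation of the caller. The `snd_msg` name is just borrowed from the comment above as a hypothetical example.

```python
# In Emacs Lisp, re-evaluating (defun foo ...) rebinds `foo` everywhere at
# once. Python module-level names behave loosely similarly, because a
# caller resolves the name at call time, not at definition time.

def snd_msg(payload):
    return f"v1: {payload}"

def caller():
    return snd_msg("ping")   # looks `snd_msg` up by name at call time

print(caller())              # v1: ping

# "Live" redefinition: rebind the name; existing callers see it at once.
def snd_msg(payload):        # deliberate redefinition
    return f"v2: {payload}"

print(caller())              # v2: ping
```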


It's curious how the original 1981 paper [1] on TECO Emacs, written before I was born, describes many qualities I recognize in the ELisp Emacs I grew up on.

Emacs "happens" to be open source ;-), but the paper stresses how the system & language were designed to allow users to mold it at run time without the barrier of recompiling from source, and how empowering users led to better features than "careful design" could have achieved. Selected points I found notable:

- awareness that "An EMACS system actually implements two different languages, the editing language and the programming language". - Editing Language is tweakable (again at run time) by re-binding keys to macros / existing commands / custom commands. This gives agency to users with less programming skill! - Language separation being necessary so that tweaking Editing Language can't break Programming code. - Key bindings are shallow "keyboard sugar" over the concepts of Programming Language. Commands are (almost) regular functions. User can invoke any command by name, bypassing the sugar. - Buffer-local & mode-local bindings. - Commands (and generally as much of the system as possible) implemented in Programming Language which was chosen to be interpreted not compiled, so that users can redefine at run-time and experiment. - "The only way to implement an extensible system using an unsuitable language, is to write an interpreter for a suitable language and then use that one" :-D - "variable [and function] names are retained at run time; they are not lost in compilation"

- Commands are extensively parametrized by variables. User can achieve quite a lot by simply setting [global] variables.
- Dynamic, not lexical, scoping deliberately chosen to make code less encapsulated and more reusable with tweaks. See the paper for why lexical scoping is worse for this.
- Unique concept of file/buffer-local variables! Again, if the programmer went to the trouble of parametrizing code, maximize the payoff.
- Commands and especially compiled code extensively parameterized by calling "Hooks" at interesting points. "These hooks can be thought of as compensating for the fact that some parts of the system are written in assembler language and cannot simply be redefined by the user." <<-- I found this especially thoughtful

- Social dynamics like a "library system" for loading extensions written by others. Well, later Emacs actually lagged behind for decades in ease of obtaining 3rd-party libraries: "too cathedral, not enough bazaar..." Much better now with MELPA, still not as smooth as, say, VSCode extensions. But I feel there's a trade-off: VSCode extensions are more "opaque".

[1] https://www.gnu.org/software/emacs/emacs-paper.html
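The hook mechanism the paper describes can be sketched in a few lines. Purely for illustration (Python rather than Elisp, and the names are made up): a fixed core calls out to user-registered functions at interesting points, so behavior changes without recompiling anything.

```python
# Illustrative only: the "hook" pattern from the Emacs paper.
# Core code (imagine it compiled and untouchable) calls hooks at
# interesting points; users customize by appending functions at runtime.
find_file_hooks = []   # users append functions here while the system runs

def find_file(path):
    contents = f"<contents of {path}>"   # stand-in for real file I/O
    for hook in find_file_hooks:
        contents = hook(path, contents)  # each hook may transform the buffer
    return contents

# A user "molds" the editor live by adding a hook -- no restart needed:
find_file_hooks.append(
    lambda path, c: c.upper() if path.endswith(".txt") else c)
```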


yeah, that's one of the main things people like about emacs


Wonderful as ever.

Kay demonstrates an emulator of Sketchpad. Anyone know if it's been shared anywhere?


Also very curious about this. I’ve been slowly building one myself and would happily give up if another good one already exists. Tried searching the web and can’t find it.


There was a CDClabs github version and the Frank/STEPS code. Contact me, it's buried in my 20 TB archive.


CDG Labs. But this is not it: <https://github.com/cdglabs/sketchpad14>



The demo itself is a video of this: https://youtu.be/Vt8jyPqsmxE?t=750 which is a simulation done in the Frank system in the STEPS project.


Ivan Sutherland's Sketchpad paper is here: https://dspace.mit.edu/handle/1721.1/14979


I have watched a huge number of his talks, and unfortunately he did not say anything new in this one...


The audio gating on this makes it incredibly difficult to watch


There are sound reasons why no substantial system in common use is coded in Smalltalk. Kay could have spent the decades since his time at PARC figuring out why, and remedying them.

One of the reasons is that O-O is just one of several important ways discovered to organize software. Any big enough problem will have places for each. Specialization is for insects.


The reason is called Java: all the key Smalltalk vendors pivoted to Java.

Smalltalk was the ".NET" of OS/2, and Visual Age for Smalltalk code browser still lives on Eclipse.

Then there are those Objective-C and Ruby developers still around, heavily influenced by Smalltalk.


I recently got back into Objective-C (first did in 2002) doing some (for hire) framework work on an app that was built in it originally. And it's very refreshing! I've done a lot of C in between then and now, and I almost have to keep reminding myself that it's C under the hood, really.


Is there a Smalltalk that runs on the JVM?


Not that I know of, but Vanessa Freudenberg created SqueakJS, a JavaScript system that can run (pretty much) any Squeak-based image in your browser. See https://squeak.js.org/run/



you may or may not be aware that when he headed vpri, they did some substantial research into some of the other important ways to organize software, including things like array languages, david p. reed's work on spatially replicated computation, and cooperating communities of specialized solvers. in this talk, he also mentioned gelernter's tuple-space architecture, though you may have missed it. he definitely isn't arguing that oo should be the universal way to build everything, much less smalltalk; he's lamenting that no better paradigm than their research prototype has emerged since then

however, i do agree that there are some advances made since then that he doesn't fully appreciate, things like the importance of free-software licensing, roy fielding's work on architectural styles, recent advances in formal methods and functional programming, and the web's principle of least power


Bret Victor's DynamicLand seems to be a direct descendant of many of these ideas. RealTalk's reactive DB combines Linda tuplespace ideas with LISP 71 pattern matching and reactive semantics. Each RealTalk object is self-contained and can't be 'messed with' externally. It's all introspective and reconfigurable, etc.
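For anyone unfamiliar with the Linda side of that: a tuple space lets independent processes coordinate by publishing tuples and matching on patterns, never calling each other directly. A toy sketch (from the Linda literature, not RealTalk's actual API):

```python
# Toy Linda-style tuple space: out() publishes, inp() destructively
# reads the first tuple matching a pattern (None acts as a wildcard).
space = []

def out(tup):
    space.append(tup)                       # publish a tuple

def match(tup, pattern):
    return len(tup) == len(pattern) and all(
        p is None or p == t for p, t in zip(pattern, tup))

def inp(pattern):
    for tup in space:
        if match(tup, pattern):
            space.remove(tup)               # destructive read, like Linda's in()
            return tup
    return None                             # no match (real Linda would block)

out(("page", 12, "highlighted"))
out(("page", 7, "plain"))
assert inp(("page", None, "highlighted")) == ("page", 12, "highlighted")
```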


Do you know anywhere where one can look into dynamicland more deeply? I've been interested in playing around with it for a while (hopefully I can get my hands on a projector lol) but have never found any details. Omar Rizwan's website had a cool post on geokit but that was all I managed to find.


Omar has a new project in the same vein as DynamicLand called Folk - https://news.ycombinator.com/item?id=39241472


I found out about this a few months ago through Cristobal's blog: https://cristobal.space/. Somehow didn't notice how the post mentions Omar's involvement at the top lol. Thanks anyway tho.


you'll probably have to talk to the dynamicland folks; i'm not sure what their current strategy is for getting it out into the world, but it doesn't seem to be the obvious 'upload the software to gitlab and hope for the best' approach


Bret plans on publishing everything this Spring - https://twitter.com/worrydream/status/1753116042254340526


that sounds very appealing, but it doesn't say he's publishing everything

> Dynamicland's new research website will be up in the spring.

> current status:

> 1156 pages

> 11,120 images

> 693 videos

> 56 pdfs

(https://nitter.privacydev.net/worrydream/status/175311604225...)

noticeably missing from this list is source code, unless that's on the 1156 pages

i'm sure i'll devour them eagerly, though


yes, agreed. you may or may not be aware that bret was a principal investigator at yc harc, along with vi hart, dan ingalls, john maloney, yoshiki ohshima (who posted this video), and alex warth, at least three of whom were at vpri. yc harc was sorta kinda headed by alan kay https://www.ycombinator.com/blog/harc


Yep, was definitely aware. It just seems like of all the projects at HARC, this is the only one that branched out from Alan's earlier ideas. The set of papers on https://worrydream.com/refs/ gives a good idea of some of the inspirations behind DL.


btw, is there a lisp 71, or do you mean larry tesler's lisp70?



thanks!


>you may or may not be aware that when he headed vpri, they did some substantial research into some of the other important ways to organize software, including things like array languages, david p. reed's work on spatially replicated computation, and cooperating communities of specialized solvers.

I'm very interested in knowing what array languages they were researching. The only thing I can find is Nile[1] but from the examples it doesn't look like an array language to me.

[1] https://github.com/damelang/nile


nile was the thing i was thinking of, yes


And static type systems, as seen in the MLs, Haskell, and lately C++.


Pretty sure he would appreciate them for the guarantees they come with, but criticize them for being dead programs rather than live systems. One of his examples of a live system is the Internet, which started once and has never since been taken offline to be changed.


he might not appreciate us pretending we know what he thinks ;)


yes, although recent advances in functional programming and formal methods go a lot further than that


This is really interesting; it’s really cool to think about the “statics” and “dynamics” of programming, and while I have a basic understanding of functional programming (both the dynamic world of Scheme and the static world of languages like Standard ML and Haskell), I’m unfamiliar with these recent advances in functional programming and formal analysis. I’m wondering if you could share some links or references to some of this material?


i'm not the best person to ask, and i don't really know where to start

tla+ is getting uptake in industry, idris is sort of making dependent types practical, acl2 has more and more stuff in it, pvs is still around and still improving, adam chlipala keeps blogging cool stuff, so does hillel wayne, sel4 is an entire formally-proven-secure microkernel, you can try compcert on godbolt's compiler explorer, ləɐn has formalized significant mathematical definitions that working mathematicians use actively while metamath has an extremely convincing approach to proof and an ever-growing body of proofs of basic math, smt solvers like z3 are able to solve bigger and bigger problems and therefore able to tackle bigger subproblems of verifying software (and are easily apt installable and callable from python or from cprover's cbmc), cryptocurrency smart contracts have an incentive to be correct in a way that no previous software did (and people are applying at least idris to at least ethereum), ...
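a toy taste of the bounded-checking idea (what cbmc does symbolically, done here by brute force in plain python; the function names are made up, and z3 itself is pip-installable as z3-solver if you want the real thing):

```python
# toy "bounded verification": exhaustively check that two implementations
# agree over a small finite domain -- the brute-force cousin of what smt
# solvers like z3 and bounded model checkers like cbmc do symbolically.
def average(a, b):
    return (a + b) // 2          # naive midpoint; overflows in fixed-width ints

def average_safe(a, b):
    return a + (b - a) // 2      # classic overflow-avoiding form

# check equivalence for all "8-bit" nonnegative inputs with a <= b
for a in range(256):
    for b in range(a, 256):
        assert average(a, b) == average_safe(a, b)
```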

a thing i saw recently that was really impressive to me was parsley, by the main author of pvs as well as some other people: http://spw20.langsec.org/papers/parsley-langsec2020.pdf


"Rudolph Technologies Helps Semiconductor Customers Reach Market Faster with Smalltalk-Based ControlWORKS"

https://www.cincom.com/pdf/CS050418-1.pdf

Someone suggested it's a million lines of Smalltalk, which I have no way to confirm or contradict ;-)


Cincom Systems has been a pioneer of extremely productive programming tools since forever. My first real programming internship was on an IBM 4381 running their Mantis 4GL tool (in 1984-ish).

I like to compare it to Ruby on Rails. Mantis is Ruby on Rails for the 3278 (and Dataflex 2 would be Ruby on Rails for the VT-100).


> There are sound reasons why no substantial system in common use is coded in Smalltalk.

Points in the direction of...

https://newspeaklanguage.org/

https://bracha.org/Site/Newspeak.html

https://en.wikipedia.org/wiki/Newspeak_(programming_language...



