This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
When you were envisioning today's computers in the 70s you seemed to have been focused mostly on the educational benefits, but it turns out that these devices are even better for entertainment, to the point where they are dangerously addictive and steal time away from education. Do you have any thoughts on interfaces that guide the brain away from its worst impulses and towards more productive uses?
We were mostly thinking of "human advancement" or as Engelbart's group termed it "Human Augmentation" -- this includes education along with lots of other things. I remember noting that if Moore's Law were to go a decade beyond 1995 (Moore's original extrapolation) that things like television and other "legal drugs" would be possible. We already had a very good sense of this before those TV-like things were possible from noting how attractive early video games -- like SpaceWar -- were. This is a part of an industrial civilization being able to produce surpluses (the "industrial" part) with the "civilization" part being how well children can be helped to learn not to give in to the cravings of genetics in a world of over-plenty. This is a huge problem in a culture like the US in which making money is rather separated from worrying about how the money is made.
Then what do you think about the concept of "gamification?" Do you think high densities of reward and variable schedules of reward can be exploited to productively focus human attention and intelligence on problems? Music itself could be thought of as an analogy here. Since music is sound structured in a way that makes it palatable (i.e. it has a high density of reward) much human attention has been focused on the physics of sound and the biomechanics of people using objects to produce sound. Games (especially ones like Minecraft) seem to suggest that there are frameworks where energy and attention can be focused on abstracted rule systems in much the same way.
I certainly don't think of music along these lines. Or even theater. I like developed arts of all kinds, and these require learning on the part of the beholder, not just bones tossed at puppies.
I've been playing traditional music for decades, even qualifying to compete at a high level at one point. There is a high density of reward inherent in music, combined with variable schedules of reward. There is competition and a challenge to explore the edges of the envelope of one's aesthetic and sensory awareness along with the limits of one's physical coordination.
Many of the same things can happen in sandbox style games. I think there is a tremendous potential for learning in such abstracted environments. What about something like Minecraft, but with abstracted molecules instead of blocks? Problems, like the ones around portraying how molecules inside a cell are constantly jostling against water molecules, could be solved in such environments using design. Many people who play well balanced games at a high level often seem to be learning something about strategy and tactics in particular rule systems. I suspect that there is something educationally valuable in a carefully chosen and implemented rule system.
Also, perhaps it's so much easier to exploit such mechanisms to merely addict people that this overwhelms any value to be gained.
I just tried, albeit slightly unsuccessfully, to describe the philosophy of the Montessori system to someone. Your answer, learning on the part of the beholder, sums it up beautifully. Thank you for that.
The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake; something that just happens to trigger all the right reward systems (the ones based on our love of patterns and structure, and exploiting the same biological systems we use for language) but isn't necessarily nutritious itself.
However, all evidence points to him being wrong about this, making the mistake of starting with language as the centrepiece and explaining everything around it. Human music likely predates human speech by hundreds of thousands of years, and is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.
> The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake;...isn't necessarily nutritious itself.
Note that I didn't mean that in a negative way. Also, if you want to consume macro-nutrients, cheesecake is a pretty effective way to get simple carbs and dairy fat.
> is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.
I think there is something akin to this that can be found in games, and that there is something particularly positive that can be found in well constructed games.
Yes, sorry: I could have been more clear that what I described was Steve Pinker's judgement, not yours.
And I tried to stay neutral towards games on purpose - I have taught game design myself ;). Having said that, a lot of real-world attempts at gamification are pretty banal carrot/stick schemes.
I think games are more like instruments than they are like music. The game itself isn't as interesting as the gameplay you can perform inside it. Speedrunning in particular has a lot in common with musical performance.
I guess in the use of technology one faces a process rather similar to natural selection: the better the user's ability to restrict his use to what he has to do, the more likely the survival, i.e. the user will not procrastinate and get distracted. The use of computers for entertainment is unstoppable; it's nearly impossible to keep kids from finding and playing those games, chatting with friends on WhatsApp, and being otherwise exploited by companies that make money from that sort of exploitation, even though it comes at the cost of their psychological health and future success. People spend every single second of the day connected and distracted, and this seems irreversible. I wonder if you have any practical thoughts on how this can be remedied.
My friend Neil Postman (our best media critic for many years) advocated teaching children to be "Guerilla Warriors" in the war of thousands of entities trying to seize their brains for food. Most children -- and most parents, most people -- do not even realize the extent to which this is not just aggressive, but regressive ...
Neil's idea was that all of us should become aware of the environments we live in and how our brain/minds are genetically disposed to accommodate to them without our being very aware of the process, and, most importantly, winding up almost completely unaware of what we've accommodated to by winding up at a "new normal".
The start of a better way is similar to the entry point of science "The world is not as it seems". Here, it's "As a human being I'm a collection of traits and behaviors, many of which are atavistic and even detrimental to my progress". Getting aware of how useful cravings for salt, fat, sugar, caffeine, etc., turn into a problem when these are abundant and consumer companies can load foods with them....
And, Neil points out -- in books like "Amusing Ourselves To Death" and "The Disappearance of Childhood" -- we have cravings for "news" and "novelty" and "surprise" and even "blinking", etc. which consumer companies have loaded communications channels with ...
Many of these ideas trace back to McLuhan, Innis, Ong, etc.
Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.
> Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.
Most children meet entertainment technology as early as before their first birthday, though. Many pre-teens that I see around possess smartphones and/or tablets. Most early teenagers possess multiple devices. None of these will be able to judge what is beneficial to their future and well-being and opt for it rather than what is immediately fun and pleasing, just as most of them would live on chocolate bars and crisps if left to do so. The burden falls on the parents, a burden they don't take up.
I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties. And while my country can prohibit certain products (food, etc.) from import and production within its own borders (e.g. genetically modified, or chemically engineered to be consumed greedily), this can't be done with websites, because (a) it's technically impossible and (b) it 'contradicts freedom of speech'. I'll ask the reader to philosophise over (b), because neither the founding fathers of the US, nor the pioneers of the French Revolution, nor most of the libertarian, freedom-bringing revolutionaries had a Facebook to tag their friends' faces on.
(edit: I don't want to get into a debate over freedom of speech, and I don't support any form of censoring it, though I don't want freedom of speech at the cost of the exploitation of generations and generations by some companies that use it as a shelter for themselves.)
> Kay: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.
> Gkya: I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties.
As an infovore this worries me. If we cannot control ourselves and come up with better solutions for self-control, then the authoritarian-minded are likely to do it for us.
The Net is addictive and all those people pretending it ain't so are kidding themselves.
It's easy to imagine anti-Net campaigners in the same way as we see anti-globalization activists today.
I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!
I believe that social change (for example, phubbing becoming widely regarded as taboo) isn't fast enough to keep up with the Net's evolution. By the time a moral stance against phubbing is established, mobile phones probably won't exist. For this I think we need a technological solution which is as adaptive as an immune system, but also one which people can opt in to. Otherwise people will eventually demand that governments do things like turn off the Net at certain times during the day or ban email after 6pm and so on.
The introduction to technology (well, essentially I'm talking about the internet) comes so early in a kid's life that we can't just say "we should control ourselves". You can't put your kid in a room full of crisps, sweets, alcohol, drugs, and pornography and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them. This is essentially what we do with the internet.
> I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!
You're an adult, and I am too. We can realize: this is stealing my life. But a kid can't. And stolen days don't return. This is why I'm commenting: we'd rather raise better individuals than let them do wtf they want and hope they'll fix themselves later.
> You can't put your kid in a room full of crisps, sweets, alcohol, drugs, and pornography and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them
I know this is bandied about a lot, but is this actually proven? With the exception of drugs, all of those you mention have been within easy reach for me (actually, as a Dutchman, even soft drugs were just one step away if I'd wanted them). Yet I don't consider myself addicted to any of those.
I'm not a native speaker of English, so I wonder: does "kid" not mean a person who is not yet adolescent? I'm referring to 0-14 year olds when I say kid. If we agree on that, and you still say "it's not proven, we can try", well, then I can't do much more than hope that you either don't have children or that no child's responsibility otherwise falls on you.
Reading his post, I believe he meant that the above-mentioned things were within his reach as a child (I don't believe he meant now, as an adult).
" I can't do much than hoping you either don't have children or no child's responsibility is on you otherwise."
That's a strong statement to make. Implying he's unable to raise children because he'd like to see evidence that the internet actually has a negative influence on children.
I interpreted his message as wanting evidence not only about the internet, but also about the other stuff I mentioned, and their effects on kids. I'm sorry if that wasn't the case.
No, I did not mean I wanted evidence of their effect on kids. I want evidence that "putting your kid in a room full of $bad_stuff" always leads to addiction, since that strikes me as nothing more than scare stories.
Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.
> Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.
I concur. But the internet exposure of kids is mostly not governed by parents. The kids are either alone with the connected device in their rooms, away from their parents, or out of the home with a mobile device. The best the parents can do is educate the kids, but the public lacks the knowledge to do so effectively. Parents should be given the training to be able to educate their children, and furthermore schools should educate minors on the use of tech.
"putting your kid in a room full of $bad_stuff" will mostly lead to addiction if the parent is not there to teach the kid: this is harmful to you; not you think?
Mostly agreed, yes. But I would rephrase it as "introducing kids to $bad_stuff without guidance is a bad idea": I don't think that permanent supervision should be required. Once the novelty wears off, and the parent is confident that the kid can behave themselves even in the presence of $bad_stuff, even "putting your kid in a room full of $bad_stuff" can be fine.
And I don't mean that in the sense of "the kids are fine with their heroin syringes", but in the sense of "I can leave the cookie jar on the counter and it will still be there when I come back".
I think there exist records of hospital mix-ups with babies, with pretty profound differences in how the children turned out depending on what environment they wound up in, but this may be mostly anecdotal. There was one such case in Japan, but it illustrated wealth differences as opposed to what we're looking for here.
Provocative but not evidence. I did look up some twin studies but I can't find one with a clear vice/virtue environment study. Gwern is good at ferreting out this kind of information if you ask him.
Just yesterday, before this thread even started (I work as a part-time cleaner), I was polishing a window. Through it I saw some children in a sitting room, one of whom was literally standing centimeters away from a giant flat screen television. Glued to it.
I thought: "Fuck, they don't have a chance". Their attention spans will be torn to pieces like balls of wool by tiny kittens. Now multiply that effect with the Net + VR and you have an extraordinary psychological effect best compared to a drug.
I didn't have a television in my childhood. I read countless books, and without them, I wouldn't be sitting here, I wouldn't have done any of the things I could reasonably consider inventive or innovative. They might not be world changing things, but they were mine and my life was better for doing so.
I was speaking to a friend who has children a few months ago. He was in the process of uploading photos of his family to Facebook. I asked him whether he considered what he was doing to be a moral act, since he is for practical purposes feeding his children's biometrics into a system that they personally have not, and could not, opt in to. He was poleaxed by the thought. He was about to say something along the lines of 'well everybody's doing this' but I could visibly see the thought struck him that "wow, that's actually a really bad line of reasoning I was about to make". Instead he agreed with me, uncomfortably, but he got it.
I don't know how you get millions of people to have that kind of realization. I do think parental responsibility has a huge role though. My parents got rid of the television in the 80s. It was the right thing to do.
The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff". We have to be extremely careful that our arguments have more substance than that. That requires a lot of introspection, to be honest.
See, my grandparents worried that the new technology that my parents grew up with would somehow make them dumber (growing up with radio, parents getting television); my parents' generation worried that the technology we grew up with would be bad for us (too much computer, too much gaming, too much Internet). The upcoming generation of parents will grow up wondering whether VR and AR is going to ruin their kids' chances.
Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.
There's a term for this worry - it's called 'Juvenoia'.
Now, I'm not saying that this is a discussion that shouldn't be had - it certainly should. I just think we all need to be mindful about where our concerns might be coming from.
I never said myself that tech per se will make kids dumber. What I say is, there should be measures governing their exposure, just like there are for other things.
Just like an alcohol drinker and an alcohol addict are different, an internet user and an internet addict are different too. Just because some or most are not addicts, we can't dismiss the addiction altogether.
It's just that it seems a bit unfair to decry (or place undue burdens upon) the vast majority of responsible alcohol drinkers because we've found a few people who have an unhealthy relationship with it.
Recognizing potential dangers is a far cry from saying that there's a risk of "losing the century" because of easy access to technology and entertainment, and it strikes me as rather belittling to the younger generation.
Millennials and their children are still humans, after all, and are just as intelligent, motivated, and adaptable as every generation before them.
Who are responsible alcohol drinkers? In my country the minimum age for consumption of alcoholic products is 18. What would you think of a 10 or 15 year old kid who's a responsible drinker?
What I'm arguing is against an analogue of this in tech. There is a certain period during which the exposure of a minor to technological devices should be governed by parents.
What do you think of adolescents who get recorded nude in chatrooms? Some of them commit suicide. What do you think of children who are victims of online bullying? What do you think of paedophiles tricking kids online? Isn't a parent responsible for protecting a minor from such abuses?
My general argument in this thread is that we should raise our children as well as we can. Protect them from dangers that they cannot be conscious of. We certainly can't place burdens on adults, but we can try to raise adults who are not inept addicts with social deficiencies. And because most of the world's population is tech-illiterate, it falls on governments to provide education and assistance to parents, just as they do with health and education.
Most of the counter-arguments here have been strawmen, because while I'm mostly targeting children, I've been countered with arguments about adults.
> The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff".
> I just think we all need to be mindful about where our concerns might be coming from.
Basically we're on the same page.
Here is a proposition. I'll steelman the Conservative view and you tell me what you think. I promise not to claim vidya causes violence or that D&D is a leading cause of Satanism.
My proposition is that television media has meaningfully worsened our society by making it dumber. This is an artifact of the medium itself, rather than an issue with any specific content on it. To explain what I mean by dumber I must elaborate.
The television is a unidirectional medium. It contains consensus on various intellectual issues of the day and gives a description of the world I'd call received opinion. There exists no meaningful difference between the advertising that tranches people into buying products and the non-advertising that tranches people into buying ideas. Most ideas that are bought are not presented as items to be sold, they are pictured as 'givens', obvious. Most lying is done by omission. Even were all information presented truthfully, we have a faux sense of sophistication about our awareness which is a problem. When you buy prepackaged meals at a store you are not in the makings of becoming a chef, and in that way you are not chewing over the ideas presented to you, you do no mental cognition. Your state is best described as, and feels like, a hypnotic trance.
One of the problems with this is that television creates a false sense of normalcy that has no objective basis. It asks the questions and provides the answers. All debate is rhetorical debate.
It's the cognitive equivalent of 'traffic shaping' that Quality of Service mechanisms do on routers. In a way that is a much bigger lie. This concept is very similar to Moldbug's Cathedral concept. The people who work for the Cathedral don't realize they represent a very narrow range of thought on the spectrum. Their opinions cannot plausibly be of their own manufacture because one arbitrary idea is held in common with another arbitrary idea and they all hold them.
The key to understanding that this is very real and not at all abstract is that millions of people have synchronized opinions on a range of issues without any discernible cause other than the television (or radio). Why do populations of teenagers become anorexic after the introduction of television where they did not suffer before it? Synchronized opinion is always suspicious. It defies probability theory to think that my grandmother and millions of others all suddenly came to the conclusion, for example, that gay marriage was a positive idea. Why do millions of conservatives think buying gold is a good idea? It is not that there is something wrong with gay marriage or buying gold. It's that there is no genuine thinking going on about any of this. There are many ways to hedge against inflation that don't involve buying gold. Why is gay marriage the morality tale of the age, and not, say, elder abuse in nursing care facilities?
Why do some things become 'issues' and not a myriad of others? How directed this is is up for debate, but what is not is that the selectivity and constraints of the medium have narrowed our perception of the world, and that has led to the thing that made us dumber: it stunted our native creativity and curiosity.
> Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.
There does exist a series of schools in Silicon Valley. The software engineers at Google and Facebook and other firms send their children to them, and the schools strictly contain no computing-related devices. Instead it's schooling of the old-fashioned sort, from the early 20th century.
It is possible that this is juvenoia, as you suggested. But at least take into account that those parents may understand something else about electronic media and its effects on brains. After all, many of them seriously study human attention for a living.
The other thing I want to ask you: have you ever visited, in your country, the equivalent of what we call council estates in Europe? These are places which house the poorer class of people in our society. I've been to many of these gray, lifeless places, and they all have many characteristics in common. Television is a major part of their lives and their shelves are bare of books. It is ubiquitous. In the past the working classes were much more socially and intellectually mobile. They read. They did things. Little evidence remains of that today, but it was so.
It is possible that television is like a slow poison that affects some classes more than others. You can't just say that people you know are unaffected and therefore it does not matter, because you may be part of an advantaged group that is more immunized than most, e.g. by having challenging or interesting work to do. It's worth considering that all the problems I mentioned still exist in society without television, but you might say the 'dose' determines whether it's medicine or poison. There is certainly a sense among many people that television has progressively gotten worse, and watching old news broadcasts and documentaries it is hard not to see what they mean. I appreciate this isn't objective measurement, but comparing like with like, say James Burke's Connections with Neil deGrasse Tyson's Cosmos, the difference is obvious, and the Cosmos reboot would be considered very good relative to its current competition.
Evidence for my claims could be a reduction in the number of inventions (excluding paper patents) per capita, reduced library visitations with respect to population changes, increasing numbers of younger people unable to read, evidence of decreased adventurousness or increased passiveness in the population, some metric for diminished curiosity/creativity over time. If those were mainly found wanting then I'll concede my error.
I'd be much more concerned about curiosity/creativity, than reduction in IQ or school test scores because creativity is really the key to much of what is good about human endeavor.
I'd also like to point out that you might not be able to spot the 'brain damage' so easily, since it's hard to come up with objective measures without a good control group. If it happened to most people then it's a new normal but that doesn't mean it had no effect.
Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).
One thing I will offer is that in my household growing up, television was positive because it was an experience that we shared as a family. We would watch TV shows together, talk about them together, laugh at them together, etc. In that sense, television brought outside viewpoints into our household and spurred conversation. I think that is one of the key factors that may differentiate between TV having good effects and TV having bad effects on different people.
In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.
Thanks for reminding me of how important that was for me :)
By the way, on the limitation of television being a passive medium.... This reminds me of something I read back when I was a kid that was very profound for me. I can't recall exactly now, but I think it was in a Sierra On-Line catalogue where Roberta Williams said something about wanting her children to play adventure games rather than watch television as with adventure games, they had to be actively engaged rather than passive. This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games :)
> Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).
Thank you. I hope to meet or communicate with Mr Burke at some point soon, I know Dan Carlin had a podcast with him a little while back if you're interested in his new take on the world. Connections remains the high water mark for documentary making and it is worth reading the books. If you want to watch a documentary in a similar style I suggest The Ascent of Man.
> In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.
I believe you, I am mainly thinking of the average 5 hours per day the average American (or European) spends in front of the television. The dose makes the poison!
> This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games
Yes, it is clear that videogaming can provide for a shared community and culture, most obviously the MMORPGS. This is not something television achieves, or if it does, it is rare, like fans of Mythbusters or Connections. In the present we are concerned with developing the foundations of the Net, like commerce or the law. But ultimately I think a Net culture will be the most valued feature we ascribe to the Net.
> ...In programming there is a wide-spread 1st order theory that one shouldn't build one's own tools, languages, and especially operating systems. This is true—an incredible amount of time and energy has gone down these ratholes. On the 2nd hand, if you can build your own tools, languages and operating systems, then you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people's not quite right tools) can be incredible.
I love this quote because it justifies a DIY attitude of experimentation and reverse engineering, etc., that generally I think we could use more of.
However, more often than not, I find the sentiment paralyzing. There's so much that one could probably learn to build themselves, but as things become more and more complex, one has to be able to make a rational tradeoff between spending the time and energy in the rathole, or not. I can't spend all day rebuilding everything I can simply because I can.
My question is: how does one decide when to DIY, and when to use what's already been built?
This is a tough question. (And always has been in a sense, because every era has had projects where the tool building has sunk the project into a black hole.)
It really helped at Parc to work with real geniuses like Chuck Thacker and Dan Ingalls (and quite a few more). There is a very thin boundary between making the 2nd order work vs getting wiped out by the effort.
Another perspective on this is to think about "not getting caught by dependencies" -- what if there were really good independent module systems -- perhaps aided by hardware -- that allowed both worlds to work together (so one doesn't get buried under "useful patches", etc.)
One of my favorite things to watch at Parc was how well Dan Ingalls was able to bootstrap a new system out of an old one by really using what objects are good for, and especially where the new system was even much better at facilitating the next bootstrap.
I'm not a big Unix fan -- it was too late on the scene for the level of ideas that it had -- but if you take the cultural history it came from, there were several things they tried to do that were admirable -- including really having a tiny kernel and using Unix processes for all systems building (this was a very useful version of "OOP" -- you just couldn't have small objects because of the way processes were implemented). It was quite sad to see how this pretty nice mix and match approach gradually decayed into huge loads and dependencies. Part of this was that the rather good idea of parsing non-command messages in each process -- we used this in the first Smalltalk at Parc -- became much too ad hoc because there was not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http -- just think of what this could have been if anyone had been noticing ...)
What's a good non-UNIX open-source operating system that's useful for day-to-day work, or at least academically significant enough that it's worth diving in to?
Usable day-to-day, I'd say you're down to Haiku, MorphOS, Genode, MINIX 3, and/or A2 Bluebottle. Haiku is a BeOS clone. MorphOS is one of the last Amiga-style OSes and looks pretty awesome. Genode OS is a security-oriented, microkernel architecture that uses UNIX for bootstrapping but doesn't inherently need it. MINIX 3 similarly bootstraps on NetBSD but adds microkernels, user-mode drivers, and self-healing functions. A2 Bluebottle is the most fully featured version of the Oberon OS, written in a safe, GC'd language. It runs fast.
The usability of these and third party software available vary considerably. One recommendation I have across the board is to back up your data with a boot disc onto external media. Do that often. Reason being, any project with few developers + few users + bare metal is going to have issues to resolve that long-term projects will have already knocked out.
MINIX isn't bootstrapping on NetBSD; the entire goal of the system is to be a microkernel-based Unix. It uses the NetBSD userland because you don't need to rewrite an entire Unix userland for no reason just to change kernels.
Mental slip on my part. Thanks for the correction. I stand by the example at least for the parts under NetBSD, like the drivers and the reincarnation server. Their style is more like the non-UNIX, microkernel systems of the past. Well, there is some precedent in the HeliOS operating system, but that was still a detour from traditional UNIX.
The difference is that PharoNOS has a Linux running behind it, while the idea of SqueakNOS is to build a complete operating system via Squeak. This way you can quickly hack it. There is a great page about these initiatives here: http://wiki.squeak.org/squeak/5727
I was going to mention QNX Demo Disk in my UNIX alternatives comment. I think I edited it out for a weak fit to the post. It was an amazing demo, though, showing what a clean-slate, alternative, RTOS architecture could do for a desktop experience. The lack of lag in many user-facing operations was by itself a significant experience. Showed that all the freezes and huge slow-downs that were "to be expected" on normal OS's weren't necessary at all. Just bad design.
It's neat that it was the thing that inspired one of your Squeak projects. Is SqueakNOS usable day-to-day in any console desktop or server appliance context? Key stuff reliable yet?
We implemented SqueakOS while some friends implemented SqueakNOS. I don't think they are being used anywhere, but for educational purposes it is amazing that drivers and a TCP/IP stack could be implemented (and debugged!) in plain Smalltalk. There is some more information here: http://lists.squeakfoundation.org/pipermail/squeaknos/2009-M...
That depends on your measure of worth, I'd say. Many operating systems had little academic significance at the time when it would have been most academically or commercially fruitful to invest time in them. Microkernel and dependency-specific operating systems would be interesting. Or hardware-based capability operating systems.
Could someone give hints/pointers that help me understand the following? "parsing non-command messages in each process ... not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http"
Does that mean the messages should have been part of a coherent protocol or spec? That there should have been some thought behind how messages compose into new messages?
Smalltalk was an early attempt at non-command-messages to objects with the realization that you get a "programming language" if you take some care with the conventions used for composing the messages.
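To make that concrete, here is a toy sketch (in Python rather than Smalltalk, with made-up names like Msg, send, and Account; it is not an example from the thread) of the difference between each receiver parsing ad hoc command strings and messages being structured values that compose like sub-expressions of a tiny language:

    from dataclasses import dataclass
    from typing import Any, Tuple

    @dataclass
    class Msg:
        selector: str        # what the receiver is asked to do
        args: Tuple = ()     # arguments, which may themselves be messages

    def send(receiver: Any, msg: Msg) -> Any:
        # Evaluate nested messages first, so messages compose like expressions.
        evaluated = [send(receiver, a) if isinstance(a, Msg) else a for a in msg.args]
        return getattr(receiver, msg.selector)(*evaluated)

    class Account:
        def __init__(self, balance):
            self.balance = balance
        def double(self, x):
            return 2 * x
        def deposit(self, amount):
            self.balance += amount
            return self.balance

    acct = Account(10)
    # "deposit: (double: 5)" -- the argument is itself a message
    print(send(acct, Msg("deposit", (Msg("double", (5,)),))))  # 20

The point isn't this particular encoding; it's that once messages can nest and share conventions, the set of messages starts behaving like a language rather than a pile of special-case string formats, which is roughly the contrast being drawn with how HTTP turned out.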
I tend to do both in parallel and the first one done wins.
That is, if I have a problem that requires a library or program, and I don't know of one, I semi-simultaneously try to find a library/program that exists out there (scanning forums, googling around, reading stack overflow, searching github, going to language repositories for the languages I care about, etc) and also in parallel try to formulate in my mind what the ideal solution would look like for my particular problem.
As time goes by, I get closer to finding a good enough library/program and closer to being able to picture what a solution would look like if I wrote it.
At some point I either find what I need (it's good enough or it's perfect) or I get to the point where I understand enough about the solution I'm envisioning that I write it up myself.
Yes. If it takes me longer to figure out how to use your library or framework than to just implement the functionality myself, there is no point in using the library.
Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.
Other points of consideration: My coworkers might not already know some library, but they definitely won't know my library. My coworker's code is just about as "3rd party" as any library - as is code I wrote as little as 6 months ago. Also my job owns that code, so rolling my own means I need to write another clone every time I switch employers - assuming there are no patents or overly litigious lawyers to worry about.
But you're of course correct that there is, eventually, a point where it no longer makes sense to use the library.
> Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.
The problem is I got so tired of fixing bugs in coworker / former coworker code that I eventually replaced their stuff with off the shelf libraries, just so the bugs would go away. And in practice, they did go away. And it caught several usage bugs because the library had better sanity checks. And to this day, those former coworkers would use the same justifications, in total earnestness.
I've never said "gee, I wish we used some custom bespoke implementation for this". I'll wish a good implementation had been made commonly available as a reusable library, perhaps. But bespoke just means fewer eyes and fewer bugfixes.
If there happens to be a well-tested third party library that does what you want, doesn't increase your attack surface more than necessary, is supported by the community, is easy to get up and running with, and has a compatible license with what you are using it in, then by all means go for it.
For me and my work, I tend to find that something from the above list is lacking enough that it makes more sense to write it in-house. Not always, and not as a rule, but it works out that way quite a bit.
I would also argue that if coworkers couldn't write a library without a prohibitive number of bugs, then they won't be able to write application or glue code either. So maybe your issue wasn't in-house vs third party libraries, but the quality control and/or developer aptitude around you.
You're not wrong. The fundamental issue wasn't in-house vs third party libraries.
The developers around me tend to be inept at time estimation. They completely lack that aptitude. To be fair, so do I. I slap a 5x multiplier onto my worst case estimates for feature work... and I'm proud to end up with a good average estimate, because I'm still doing better than many of my coworkers at that point. Thank goodness we're employed for our programming skills, not our time estimation ones, or we'd all be unemployable.
They think "this will only take a day". If I'm lucky, they're wrong, and they'll spend a week on it. If I'm unlucky, they're right, and they'll spend a day on it - unlucky because that comes with at least a week's worth of technical debt, bugs, and other QC issues to fix at some point. In a high time pressure environment - too many things to do, too little time to do it all in even when you're optimistic - and it's understandable that the latter is frequently chosen. It may even be the right choice in the short term. But this only reinforces poor time estimation skills.
The end result? They vastly underestimate the cost of supporting the extra code they'll write. They make the "right" choice based on their understanding of the tradeoffs, and roll their own library instead of using a 3rd party solution. But as we've just established, their understanding was vastly off base. Something must give as a result, no matter how good a programmer they are otherwise: schedule, or quality. Or both.
Isn't the answer contained in the quote? Do a cost/benefit analysis of the "amount of time and energy" that would go "down these ratholes" versus the "the time not wasted in trying to fix other people’s not quite right tools."
The Lean Startup advocates proportional investment in solutions. When the problem comes up (again, after you've decided to take this approach), determine what percentage of your week or month it took. Invest that amount in fixing it, right now. My interpretation would be: spend that time trying to solve part of it. (For example, if a recurring problem ate two hours of a forty-hour week, that's 5%, so spend about two hours on a partial fix.) Every time the problem comes up, keep investing in that thing; that way, if you've made the wrong call you only waste a small portion of your time, but you are also taking steps to mitigate it if it becomes more of an issue in the future.
Having gone down several myself, I can say it's hard. You lose time. You have to accept you've lost time and learn how not to do it in the future.
My advice is to collaborate with people who are much, much smarter than you and have the expectation that things actually get done because they know they could do it. You learn what productivity looks like first, at the most difficult and complex level you're capable of.
That sets the bar.
Everything has to be equal to or beneath that, unless your experience tells you you'll be able to do something even greater (possibly) with the right help or inspiration.
You gain experience by going down similar rat holes, until you feel that you can adequately compare the situation you are in now to an experience in the past.
For many particular examples, there have already been enough rathole spelunkers to provide useful data. Maybe start looking in the places where there isn't already useful data?
Agreed. It's often much, much harder to articulate why an idea is bad or a rat hole. You just move on.
I've come up with an explanation by analogy. You can demonstrate quite easily in mathematics how you can create a system of notation or a function that quickly becomes impossible to compute: a number that is too large, or an algorithm that would take an infinite amount of time and resources to solve...
It seems to be in nature that bad ideas are easy. Good ideas are harder, because they tend to be refinements of what already exists and what is already good.
So pursue good ideas. Pursue the thing that you have thought about and decided has the best balance between values and highest chance to succeed. Sometimes it's just a strong gut feeling. Go for it, but set limits, because you don't want to fall prey to a gut feeling originating from strong intuition but an equally strong lack of fundamental understanding.
I think you have to weigh your qualms against the difficulty of implementation. They're both spectra, one from 'completely unusable' to 'perfect in its sublime beauty', the other from 'there's a complete solution for this' to 'I need to learn VHDL for this'.
There are some factors that help shift these spectra.
Configurability helps. If I can change a config to get the behavior I want, that is incredible, thank you.
Open source helps. Getting to see how they did it reduces reverse engineering work immensely if I ever have to dig in.
Modularity helps. If I can just plop in my module instead of doing brain surgery on other modules, that makes it a lot easier.
Good components help. Say I need a webscraper and know Python. Imagine there was only Selenium and not even urllib, just some low-level TCP/IP library. I'd get a choice between heavy-but-easy or slim-but-high-maintenance. But there's the sexy requests library, and there is the beautiful beautifulsoup4. I tell requests what to get, tell bs4 what I want from it, and I'm done.
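For instance (a rough sketch; the URL and the choice of grabbing link tags are placeholder assumptions, not from the comment above):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com")        # tell requests what to get
    soup = BeautifulSoup(resp.text, "html.parser")    # hand the HTML to bs4

    # tell bs4 what I want from it: every link's text and target
    for a in soup.find_all("a"):
        print(a.get_text(strip=True), a.get("href"))

Two small libraries, each doing one job well, and the glue code stays tiny.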
Another great example for this is emacs. python-mode + elpy (almost complete solution), hide-show mode, electric-pair mode, and if anything still bugs me, it is fixable. If it were OOP, I'd inherit a lot of powerful functions, but I can always override anything that is wrong.
Expertise helps. If I have written a kernel module, that's another avenue to solving problems I have.
Expertise is a special case here worth more attention. It's the main thing that changes for any single programmer, and can skew this equation immensely. Expertise grows when you struggle with new things. Preferably just outside what you know and are comfortable with.
Considering that, DIY whenever you can afford to DIY (eg. pay the upfront cost of acquiring expertise), DIY whenever it is just outside what you can do, or DIY when it makes a lot of sense (eg. squarely in your domain of expertise, and there's a benefit to be had).
In concrete examples, that means don't DIY when you're on a tight deadline, don't attempt to write your own kernel after learning about variables, don't write your own parser generator when say, YACC, solves your problem just fine.
Specifically with regards to languages and OS's, I wonder how much that cost/benefit equation shifts as things have become so much more complex, and as we continue to pile on abstraction layer after abstraction layer.
I think the problem is not complexity but size. Most of the source for the Linux kernel is in the drivers, for instance. As for languages, most of the weight is in the libraries.
1. If you were to design a new programming paradigm today using what we have learnt about OOP what would it be?
2. With VR and AR (Hololens) becoming a reality (heh) how do you see user interfaces changing to work better with these systems? What new things need to be invented or rethought?
3. I also worked at Xerox for a number of years although not at PARC. I was always frustrated by their attitude to new ideas and lack of interest in new technologies until everyone else was doing it. Obviously businesses change over time and it has been a long time since Xerox were a technology leader. If you could pick your best and worst memories from Xerox what would they be?
Cheers for your time and all your amazing work over the years :)
HN is an excellent venue, but is necessarily text oriented, which is an OK tradeoff I think.
My next project after Stack Overflow, Discourse, is a 100% open source, flexible, multimedia-friendly discussion system. It's GPL V2 on the code side, but we also tried to codify Creative Commons as the default license in every install, so discussion replies belong to the greater community: https://discourse.org
(Surprisingly, the default content licenses for most discussion software tend to be rather restrictive.)
Could you afterwards build a discussion platform for finding (partial) agreement on various political and other topics? That seems like it would have a huge impact and is really missing. I thought about starting something like that but never got to it.
Still, there seems to be only a sandbox install. Why can't we have a Discourse just like Stack Overflow, but with technical discussions allowed instead of being attacked by both the mods and the rules?
Come to think of it, AltSpaceVR on the HTC Vive looks a lot like Croquet.
I think Google Glass should've been held back until VR/Augmented Reality gets established. Many Croquet style roving "viewports" projected from Google Glass feeds in an abstracted 3D model of a real world location would be a great way to do reporting on events.
1. After Engelbart's group disbanded it seemed like he ended up in the wilderness for a long time, and focused his attention on management. I'll project onto him and would guess that he felt more constrained by his social or economic context than he was by technology, that he envisioned possibilities that were unattainable for reasons that weren't technical. I'm curious if you do or have felt the same way, and if you have any intuitions about how to approach those problems.
3. I've found the Situated Learning perspective interesting (https://en.wikipedia.org/wiki/Situated_learning). At least I think about it when I feel grumpy about all the young kids and Node.js. I genuinely like that they are excited about what they are doing, but it seems like they are on a mission to rediscover EVERYTHING, one technology and one long discussion at a time. But they are a community of learning, and maybe everyone (or every community) does have to do that if they are to apply creativity and take ownership of the next step. Is there a better way?
It used to be the case that people were admonished to "not re-invent the wheel". We now live in an age that spends a lot of time "reinventing the flat tire!"
The flat tires come from the reinventors often not being in the same league as the original inventors. This is a symptom of a "pop culture" where identity and participation are much more important than progress...
This is incredibly hard hitting and I'm glad I read it, but I'm also afraid it would "trigger" quite a few people today.
What steps can a person take to get out of pop culture and try to get into the same league as the inventors? Incredibly stupid question to have to ask but I feel really lost sometimes.
I think it is first a recognition problem -- in the US we are now embedded in a pop culture that has progressed far enough to seriously hurt places that hold "developed cultures". This pervasiveness makes it hard to see anything else, and certainly makes it difficult for those who care what others think to put much value on anything but pop culture norms.
The second, is to realize that the biggest problems are imbalance. Developed arts have always needed pop arts for raw "id" and blind pushes of rebellion. This is a good ingredient -- like salt -- but you can't make a cake just from salt.
I got a lot of insight about this from reading McLuhan for very different reasons -- those of media and how they form an environment -- and from delving into Anthropology in the 60s (before it got really politicized). Nowadays, books by "Behavioral Economists" like Kahneman, Thaler, Ariely, etc. can be very helpful, because they are studying what people actually do in their environments.
Another way to look at it is that finding ways to get "authentically educated" will turn local into global, tribal into species, dogma into multiple perspectives, and improvisation into crafting, etc. Each of the starting places stays useful, but they are no longer dominant.
What steps would a group of people (civilization?) need to take in order to make progress here? When choices are abundant, the masses have been enabled, and yet knowledge is still at a premium?
All cultures have a lot of knowledge -- the bigger influences are contextual and epistemological (i.e. "points of view" and "stance", and "what is valued", etc.)
Self-awareness of what we are ("from Mars") is the essential step, and it's what real education needs to be about.
1. What do you think about the hardware we are using as the foundation of computing today? I remember you mentioning how cool the architecture of the Burroughs B5000 [1] was, being designed to run higher-level programming languages on the metal. What should hardware vendors do to make hardware that is friendlier to higher-level programming? Would that help us be less dependent on VMs while still enjoying silicon-level performance?
2. What software technologies do you feel we're missing?
If you start with "desirable process" you can eventually work your way back to the power plug in the wall. If you start with something already plugged in, you might miss a lot of truly desirable processes.
Part of working your way back to reality can often require new hardware to be made or -- in the case of the days of microcode -- to shape the hardware.
There are lots of things vendors could do. For example: Intel could make its first level caches large enough to make real HLL emulators (and they could look at what else would help). Right now a plug-in or available FPGA could be of great use in many areas. From another direction, one could think of much better ways to organize memory architectures, especially for multi-core chips where they are quite starved.
And so on. We've gone very far down the road of "not very good" matchups, and of vendors getting programmers to make their CPUs useful rather than the exact opposite approach. This is too large a subject for today's AMA.
> Intel could make its first level caches large enough to make real HLL emulators
If you make the L1 cache larger, it will become slower and will be renamed to "L2 cache". There are physical reasons why the L1 cache is not larger, even though programs written in non-highlevel languages would profit from larger caches (maybe even moreso than HLL programs).
> Right now a plug-in or available FPGA could be of great use in many areas.
FPGAs are very, very HLL-unfriendly, despite lots of effort from industry and academia.
Thanks for the attention, Alan! I love the reverse-engineering-driven-by-desire approach :D
We need to find ways to free ourselves from the cage of "vendors getting programmers to make their CPUs useful rather than the exact opposite approach" <- meditate on this we all should
Have you looked into the various Haskell/OCaml to hardware translators people have been coming up with the past few years?
It seems like it's been growing, and several FPGAs are near that PnP status. In particular, the notion of developing a compile-time-proven RTS using continuation passing would be sweet.
Even with newer hardware it seems we're still stuck with either dynamic mutable languages or static functional ones. Any thoughts on how we could design systems incorporating the best of both using modern hardware capacities? Like... say, a reconfigurable hierarchical element system where each node is an object/actor? Going out on a bit of a limb with that last one!
Without commenting on Haskell, et al., I think it's important to start with "good models of processes" and let these interact with the best we can do with regard to languages and hardware in the light of these good models.
I don't think the "stuckness" in languages is other than like other kinds of human "stuckness" that come from being so close that it's hard to think of any other kinds of things.
Thanks! That helps reaffirm my thinking that "good models of processes" are important, even though implementations will always have limitations. Good to know I'm not completely off base...
A good example for me has been the virtual memory pattern, where from a process's point of view you model memory as an ideal, unlimited virtual space. Then you let the kernel implementation (and hardware) deal with the practical (and difficult) details. Microsoft's Orleans implementation of the actor model has a similar approach that they call "virtual actors", which is interesting as well.
My own stuckness has been an idea of implementing processes using hierarchical state machines, especially for programming systems of IoT type devices. But I haven't been able to figure out how to incorporate type check theorems into it.
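For what it's worth, here is a rough sketch of that idea applied to a made-up IoT thermostat (the state and event names are illustrative assumptions, not from the comment above). The key property of a hierarchical state machine is that events a substate doesn't handle bubble up to its enclosing state:

    class State:
        def __init__(self, name, parent=None, handlers=None):
            self.name = name
            self.parent = parent
            self.handlers = handlers or {}   # event -> name of the next state

        def handle(self, event):
            if event in self.handlers:
                return self.handlers[event]
            if self.parent is not None:
                return self.parent.handle(event)   # delegate to the enclosing state
            return None                            # ignored at the top level

    # "operational" is the parent; its substates inherit the power_off handler.
    operational = State("operational", handlers={"power_off": "off"})
    idle        = State("idle",    parent=operational, handlers={"too_cold": "heating"})
    heating     = State("heating", parent=operational, handlers={"warm_enough": "idle"})
    off         = State("off",     handlers={"power_on": "idle"})

    states = {s.name: s for s in (idle, heating, off)}
    current = idle
    for event in ("too_cold", "power_off", "power_on"):
        nxt = current.handle(event)
        if nxt:
            print(f"{current.name} --{event}--> {nxt}")
            current = states[nxt]

Type-checking the transition tables is exactly the part this sketch leaves open, which matches the difficulty described above.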
At my office a lot of the non-programmers (marketers, finance people, customer support, etc) write a fair bit of SQL. I've often wondered what it is about SQL that allows them to get over their fear of programming, since they would never drop into ruby or a "real" programming language. Things I've considered:
* Graphical programming environment (they run the queries from pgadmin, or Postico, or some app like that)
* Instant feedback - run the query, get useful results
* Compilation step with some type safety - will complain if their query is malformed
* Are tables a "natural" way to think about data for humans?
* Job relevance
Any ideas? Can we learn from that example to make real programming environments that are more "cross functional" in that more people in a company are willing to use them?
results = []
for user in table_users:
    if user.is_active:
        results.append(user.first_name)
vs:
SELECT first_name FROM users_table
WHERE is_active
It's unfortunate that the order of the clauses in SQL is "wrong" (e.g. you should say FROM, WHERE, SELECT: Define the universe of relevant data, filter it down, select what you care about), but it's still quite easy to wrap your mind around. You are asking the computer for something, and if you ask nicely, it tells you what you want to know. Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.
> It's unfortunate that the order of the clauses in SQL is "wrong"
SQL is written goal-oriented.
You start with what you want (the goal). Then you specify from where (which can also be read as "what", since each table generally describes a thing) and finally you constrain it to the specific instances you care about.
SELECT the information I want FROM the thing that I care about WHERE condition constrains results to the few I want
Having said that, I would personally still prefer it in reverse like you say. I can see the value of how SQL does it, though, especially for non-programmers who think less about the process of getting the results and more about the results they want (because they haven't been trained to think of the process, like programmers have).
It makes sense for someone who isn't thaaaaat technical to start with "well, I want the name and salary of the employee but only those that are managers": SELECT name, salary FROM employee WHERE position = 'manager'
Admittedly even that isn't perfect and I assume that it wouldn't take much for someone to learn the reverse.
Along this point, C# and VB.NET have SQL-like expressions that can be used for processing, called LINQ [1]. They even get the order of the clauses correct!
A feature like this may help your programmers who are used to thinking in terms of filter -> select -> order.
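A rough Python analogue (not LINQ itself, and reusing the hypothetical table_users from the example earlier in the thread) shows the clause-order contrast: a list comprehension reads select -> from -> where, whereas LINQ query syntax reads from -> where -> select.

    active_names = [u.first_name          # select
                    for u in table_users  # from
                    if u.is_active]       # where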
> Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.
Procedural vs. functional phrasing in no way changes the basic fact that if you ask a computer the wrong question it'll give you the wrong result.
"go through the list of all users and add the ones which are active to a new list"
vs.
"the list I want contains all active users from the list of all users"
To play devil's advocate, Prolog is considered much more similar to SQL than any other language, and I suspect it would have an extremely high learning cost. That may be me being biased due to learning procedural languages first. At the same time, I consider myself well versed in SQL.
I think Prolog suffers in that comparison mostly because of its much more ambitious scope. Most non-developer/DBA people have no concept of what a SQL query is actually doing, whereas most nontrivial Prolog programs require conceptualizing the depth-first-search you're asking the language to perform in order to get it right. If you restricted your Prolog world to the kind of "do some inference on a simple family tree database of facts" that people first learn, Prolog would be pretty easy too.
But a list comprehension is a declarative construct, which can be best appreciated when porting some list comprehensions into loops. Especially nested comprehensions.
Totally meaningful difference! With the list comprehension, you're still telling the machine how to go about getting the data; there is an explicit loop construct. With SQL, I'm simply declaring what results I want, and the implementation is left to the execution engine.
For instance, the SQL query can be parallelized, but not so with the Python list comprehension. If you wanted to create a version that could be run in parallel in Python, you'd have to do it with a map()/filter() construct. Ignoring readability for a sec (pretend it's nice and elegant, like it would be in e.g. Clojure), you are still specifying how the machine should accomplish the goal, not the goal itself.
filter(lambda x: x is not None, map(lambda u: u.first_name if u.is_active else None, table_users))
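If one did want the parallel version alluded to above, a minimal sketch (not the commenter's code; it assumes table_users is a picklable list of records with .first_name and .is_active attributes) could use multiprocessing:

    from multiprocessing import Pool

    def first_name_if_active(user):
        # A plain module-level function (not a lambda) so multiprocessing can pickle it.
        return user.first_name if user.is_active else None

    if __name__ == "__main__":
        with Pool() as pool:
            names = [n for n in pool.map(first_name_if_active, table_users)
                     if n is not None]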
My main reason for teaching it was that it was a skill that helped me immensely as a journalist, in terms of being able to do data analysis. Because I learned it relatively late in my career, I thought it'd be hard for the students but most of them are able to get it.
Even though I use relatively little SQL in my day to day work, it's my favorite thing to teach to novices. First, it has a similar data model to spreadsheets, so it feels like a natural progression. Secondly, for many students, this is the first time that they'll have done "real" programming and the first time that they learn how to tell a computer to do something rather than learn how to use a computer. In Excel, for example, you double click a file and the entire thing opens. With SQL, you're required to not just specify the database and table, but also each and every column...it's annoying at first, but then you realize that there is power in being explicit.
The main advantage of teaching SQL over, say R, as a first language is that SQL's declarative syntax is easy to follow AND you can do most of what you need with a limited subset of the language...for instance, I don't have to teach variables and loops and functions...which is good because I don't even know how to really do those in SQL (just haven't had the need when I can work from R or Pandas).
When a beginner student fucks up a basic Python script, there are any number of reasons for the failure that are beyond the student's expected knowledge. When a novice student fucks up a SQL query, it's easier to blame the mistake on the student (e.g. misspelling of names/syntax).
What are the main factors that encourage (and help) non-programmers to use SQL?
We provide a low-code platform (SQL) to organize data and build custom applications for specific workflow requirements. We are assuming that teaching/educating/training, combined with lots of sample SQL code and real-world examples, is helpful to non-programmers for using SQL.
My guess would be that there is a lot of interesting public data available in SQL/CSV/Excel formats. If a journalist can browse that data efficiently they can probably find some interesting stories and leads.
Just a thought: Is it mostly select statements that your colleagues write? Because if they do, they might not fear accidentally altering the data. I found that new programmers can get confused by the difference between things that are immutable and those that aren't.
>>I've often wondered what it is about SQL that allows them to get over their fear of programming
That's barely programming. Even by the most lenient definition, what they do isn't programming.
Firstly, SQL is a little like Excel macros: it lowers the barrier to entry for basic twiddling. Got a SQL client (Toad, etc.)? You can throw together a snippet or two quickly. Anything beyond that gets difficult: tricky joins, subqueries, troubleshooting big queries, optimization problems, and so on. Beyond that, writing reusable code, test discipline, and a range of other tasks that make code run for years are what your everyday work as a programmer is.
Sure, you could saw a log of wood once in a while, but don't confuse that with being a full-time carpenter.
As someone pretty close to this camp, it comes down to your last bullet point - needing to do it, in my opinion. A smaller subset of those people will also learn VBA for the same reason - it helps them get their job done. The benefit those two have is that they are either built into the tools already (VBA), or a DBA does most of the setup and the user mostly just runs queries against it and doesn't have to worry too much about indexing, performance, schemas, etc. (SQL). If I were to try to turn them onto Python, it'd be an effort to get it installed and then get them to use the command line.
With SQL, you get a complete solution to your problem immediately (the data you want is returned). So, high value return on effort motivates people to learn it.
Previously you've mentioned the "Oxbridge approach" to reading, whereby--if my recollection is correct--you take four topics and delve into them as much as possible. Could you elaborate on this approach (I've searched the internet, couldn't find anything)? And do you think this structured approach has more benefits than, say, a non-structured approach of reading whatever of interest?
There are more than 23,000,000 books in the Library of Congress, and a good reader might be able to read 23,000 books in a lifetime (I know just a few people who have read more). So we are contemplating a lifetime of reading in which we might touch 1/10th of 1% of the extant books. We would hope that most of the ones we aren't able to touch are not useful or good, etc.
So I think we have to put something more than randomness and following links to use here. (You can spend a lot of time learning about a big system like Linux without hitting many of the most important ideas in computing -- so we have to heed the "Art is long and Life is short" idea.)
Part of the "Oxbridge" process is to have a "reader" (a person who helps you choose what to look at), and these people are worth their weight in gold ...
The late Carl Sagan had a great sequence in the original Cosmos where he made a similar point about how many books one could read in a lifetime:
"If I finish a book a week, I will read only a few thousand books in my lifetime, about a tenth of a percent of the contents of the greatest libraries of our time. The trick is to know which books to read."
General question about this figure, which I've seen before:
> read 23,000 books in a lifetime
As a very conservative lower bound, a person who lives to the age of 80 would have to read 0.79 books per day, from the day they were born, to reach this figure.
Or, to put it another way, who has read 288+ books in the last year?
I'm quite sceptical about this figure. Any thoughts as to how this might be possible? Are the people Alan mentions speed-reading? Anyone else know similarly prolific readers?
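A quick back-of-the-envelope check of those figures (assuming the same 80-year span as above), just to make the arithmetic explicit:

    books = 23_000
    years = 80
    print(books / years)             # 287.5 -> the "288+ books per year" figure
    print(books / (years * 365.25))  # ~0.79 books per day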
Yes, it is possible. It is partly developing a kind of fluency that is very similar to sight-reading music (this is a nice one to think about because you really have to grok what is there to do it, and you have to do it in real time at "prima vista").
Doing a lot of it is one of the keys! Doing it in a way that various short and long-term memories are involved is another key (rapid reading with comprehension of both text and music is partly a kind of memorization and buffering, etc.)
I don't think I've read 23,000 books in 76 years, but very likely somewhere between 16,000 and 20,000 (I haven't been counting). Bertrand Russell easily read 23,000 books in his lifetime, etc.
As someone who has read at least one book per day, if not more, since the age of 6: yes, it is possible. I can read between 100 and 200 pages per hour, depending on the book.
You reach a storage and money problem fast (ebooks are a savior nowadays). And you tend to have multiple books open at the same time.
How does it work? There are several strategies. First, I read fast. Experience and training make you read really fast. Secondly, you get a grasp of how things work and what the writer has to say. In a fiction book, it is not unusual for me to skip a chapter or two because I know what will happen inside.
Finally... good writers help. Good writers make reading a breeze and are faster to read. They present ideas in a concise and efficient way that follows the flow of thinking.
I will gladly take more questions if you have some :)
Secondly, it is the only way I can absorb information in a way that works. Talks, videos, podcasts, etc. are too slow for me; they lack a good throughput of information and meaning, which means I tend to either drop out or complete in my head what the speaker is saying.
About applying knowledge: yes, every day, in my life. Once you hit a good amount of knowledge and have a nice way to filter it, think about it, and deal with it, things become nice. Understanding a problem comes faster. You can draw links between different situations or use ideas from other fields in yours.
As for training... I read. That is all. I began when I was 5 and never stopped, so I have nearly always been like this. The more you read, the more you train your brain to read, and your mind to understand how to deal with knowledge and information: filter it, classify it, absorb it, apply it.
For non-fiction, yes, it happens. Lots of books just repeat the same thing over and over again. When you begin a chapter and can complete what will be said in the next 20 pages just from your understanding of the whole situation, reading it is a waste of time. And it would make me bored and drop out of "The Zone".
I keep track in my brain. I have the advantage of always being able to remember whether I have read something just by looking at the back cover and the first lines. I have yet to forget a book I have read. I cannot remember all the technicalities, of course, but enough to know whether I have read it before or not.
I reread the books I really like or need when needed anyway, mainly during vacations.
It depends on your definition of "reading a book."
Wait, what?
I've been reading a book called, I kid you not, "How to Read a Book: The Classic Guide to Intelligent Reading." Adler and Van Doren identify four levels of reading:
1. Elementary: "What does the sentence say?" This is where speed can be gained.
2. Inspectional: "What is the book about?" The best and most complete reading given a limited time. Not necessarily reading a book from front to back; essentially systematic skimming.
3. Analytical: The best and most complete reading given unlimited time. For the sake of understanding.
4. Syntopical: Reading many books on the same subject at once, placing them in relation to one another, and constructing an analysis that may not be found in any of the books.
Recent research (along with past research) has cast doubt on the plausibility of extreme speed reading [1].
I don't mean to contradict Alan; no doubt he's a fast reader. But if you're actually reading an entire book every day or two, you're spending a lot of every day reading.
Was it in The Future of Reading [1] perhaps? From page 6:
In a very different approach, most music and sports learning only has contact with a one on one expert once or twice a week, lots of individual practice, group experiences where “playing” is done, and many years of effort. This works because most learners really have difficulty absorbing hours of expert instruction every week that may or may not fit their capacities, styles, or rhythms. They are generally much better off spending a few hours every day learning on their own and seeing the expert for assessment and advice and play a few times a week.
A few universities use a process like this for academics—sometimes called the “tutorial system”, they include Oxford and Cambridge Universities in the UK.
Apologies for rambling on a bit - but I also have some questions about VPRI. As far as I can gather, it was never the intention to publish the entire system (the whole stack needed to get "Frank" running)? If so, I'd like to know why not? Were you afraid that the prototypes would be taken "too seriously" and draw focus away from the ideas you wanted to explore?
The VPRI reports, and before that some of the papers on Croquet (especially the idea of "TeaTime", which might be described as event-driven, log-based, relative time with eventual data/world-consistency), are fascinating, and I'm grateful for them being published. Also the OMeta stuff [o] is fascinating (if anything, I think it's gotten too little mind-share).
It seems to me, that we've evolved a bit, in the sense that some things that used to be considered programming (display a text string on screen), no longer is (type it into notepad.exe) -- it's considered "using a computer". At the same time some things that were considered somewhat esoteric is becoming mainstream: perhaps most importantly the growing (resurging?) trend that programming really is meta-programming and language creation.
ReactJS is a mainstream programming model that fuses HTML, CSS, JavaScript, and at least one templating language - and in a similar vein we see great adoption of "transpiled" languages, such as CoffeeScript, TypeScript, ClojureScript, and more. HN runs on top of Arc, which is a Lisp that's been bent hard in the direction of HTTP/HTML. I see this as a bit of an evolution from when the most common DSLs people were writing for themselves were ORMs - mapping some host language to SQL.
In your time with VPRI - did you find other new patterns or principles for meta-programming and (micro) language design that you think could/should be put to use right now?
Other than web developers' tendency to reinvent m4 at every turn in order to program HTML, CSS, and JS at a "higher" level, and the aforementioned ORM trends -- the only somewhat mainstream system I am aware of that has a good toolkit for building "real" DSLs is Racket (which shows if one contrasts something like Sphinx, which is a fine system, with Racket's Scribble [2]).
Do you think we'll continue to see a rise of meta-programming and language design as more and more tools become available, and it becomes more and more natural to do "real" parsing rather than ad-hoc munging of plain text?
On the "worse is better" divide I've always considered you as someone standing near the "better" (MIT) approach, but with an understanding of the pragmatics inherent in the "worse is better" (New Jersey) approach too.
What is your actual position on the "worse is better" dichotomy?
Do you believe it is real, and if so, can there be a third alternative that combines elements from both sides?
And if not, are we always doomed (due to market forces, programming as "popular culture" etc) to have sub-par tools from what can be theoretically achieved?
I don't think "pop culture" approaches are the best way to do most things (though "every once in a while" something good does happen).
The real question is "does a hack reset 'normal'?" For most people it tends to, and this makes it very difficult for them to think about the actual issues.
A quote I made up some years ago is "Better and Perfect are the enemies of What-Is-Actually-Needed". The big sin so many people commit in computing is not really paying attention to "What-Is-Actually-Needed"! And not going below that.
Exactly -- this is why people are tempted to choose an increment, and will say "at least it's a little better" -- but if the threshold isn't actually reached, then it is the opposite of a little better, it's an illusion.
What advice would you give to those who don't have a HARC to call their own? what would you do to get set up/a community/funding for your adventure if you were starting out today? What advice do you have for those who are currently in an industrial/academic institution who seek the true intellectual freedom you have found? Is it just luck?!
I don't have great advice (I found getting halfway decent funding since 1980 to be quite a chore). I was incredibly lucky to wind up quite accidentally at the U of Utah ARPA project 50 years ago this year.
Part of the deal is being really stubborn about what you want to do -- for example, I've never tried to make money from my ideas (because then you are in a very different kind of process -- and this process is not at all good for the kinds of things I try to do).
Every once in a while one runs into "large minded people" like Sam Altman and Vishal Sikka, who do have access to funding that is unfettered enough to lead to really new ideas.
Hi Alan, the question that troubles me now and that I want to ask you is:
Why do you think there is always a difference between:
A. the people who know best how something should be done, and
B. the people who end up doing it in a practical and economically-successful or popular way?
And should we educate our children or develop our businesses in ways that could encourage both practicality and invention? (do you think it's possible?). Or would the two tendencies cancel each other out and you'll end up with mediocre children and underperforming businesses, so the right thing to do is to pick one side and develop it at the expense of the other?
(The "two camps" are clearly obvious in the space of programming language design and UI design (imho it's the same thing: programming languages are just "UIs between programmers and machines"), as you well know and said, with one group of people (you among them) having the right ideas of what OOP and UIs should be like, and one people inventing the technologies with success in industry like C++ and Java. But the pattern is happening at all levels, even business: the people with the best business ideas are almost never the ones who end up doing things and so things get done in a "partially wrong" way most of the time, although we have the information to "do it right".)
We were lucky in the ARPA/PARC communities to have both great funding, and the time to think things through (and even make mistakes that were kept from propagating to create bad defacto standards).
The question you are asking is really a societal one -- and about operations that are like strip mining and waste dumping. "Hunters and gatherers" (our genetic heritage) find fertile valleys, strip them dry and move on (this only works on a very small scale). "Civilization" is partly about learning how to overcome our dangerous atavistic tendencies through education and planning. It's what we should be about generally (and the CS part of it is just a symptom of a much larger much more dire situation we are in).
So you're rephrasing the question to mean that you see it as 'hunter gatherer mode' thinking (doing it in a practical and short-term economically-successful way) vs. 'civilized builder mode' thinking (doing it the way we know it should be done), that they are antagonistic, and that because of the way our society is structured, 'hunter gatherer mode' thinking leads to better economic results?
This ends up as a pretty strong critique of capitalism's main idea that market forces drive the progress of science and technology.
Your thinking would lead to the conclusion that we'd have to find a way to totally reshape/re-engineer the current world economy to stop it from being hugely biased in favor of "hunter gatherers that strip the fertile valley dry" ..right?
I hope that people like you are working on this :)
As a high school teacher, I often find that discussions of technology in education diminish 'education' to curricular and assessment documentation and planning; however, these artifacts are only a small element of what is, fundamentally, a social process of discussion and progressive knowledge building.
If the real work and progress with my students comes from our intellectual back-and-forth (rather than static documentation of pre-existing knowledge), are there tools I can look to that have been/will be created to empower and enrich this kind of in situ interaction?
This is a tough one to try to produce "through the keyhole" of this very non-WYSIWYG poorly thought through artifact of the WWW people not understanding what either the Internet or computer media are all about.
Let me just say that it's worth trying to understand what might be a "really good" balance between traditional oral culture learning and thinking, what literacy brings to the party, especially via mass media, and what the computer and pervasive networking should bring as real positive additions.
One way to assess what is going on now is partly a retreat from real literacy back to oral modes of communication and oral modes of thought (i.e. "texting" is really a transliteration of an oral utterance, not a literary form).
This is a disaster.
However, even autodidacts really need some oral discussions, and this is one reason to have a "school experience".
The question is balance. Fluent readers can read many times faster than oral transmissions, and there are many more resources at hand. This means in the 21st century that most people should be doing a lot of reading -- especially students (much much more reading than talking). Responsible adults, especially teachers and parents, should be making all out efforts to help this to happen.
For the last point, I'd recommend perusing Daniel Kahneman's "Thinking, Fast and Slow", and this will be a good basis for thinking about tradeoffs between actual interactions (whether with people or computers) and "pondering".
I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....
>I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....
This is the meta-thing that’s been bugging me: how do we help people realize they’re “missing their actual potential as thinkers”?
The world seems so content to be an oral culture again, how do we convince / change / equip people to be skeptical of these media?
Joe Edelman’s Centre for Livable Media (http://livable.media) seems like a step in the right direction. How else can we convince people?
Marijuana helped me realize there was a lot about myself I didn't understand and launched my investigation into more effective thought processes. I've become much more driven and thoughtful since I began smoking as an adult.
I stopped assuming I knew everything, and a childlike sense of wonder returned to my life. I began looking beyond what was directly in front of me and sought out more comprehensive generalizations. What do atoms have in common with humans? What does it mean to communicate? Do we communicate with ecosystems? Do individuals communicate with society? What is consciousness and intelligence? Is my mind a collection of multiple conscious processes? How do the disparate pieces of my brain integrate into one conscious entity, how do they shape my subjective reality?
I found information, individuals, and networks to be fundamental to my understanding of the world. I was always interested in them before, but not enough to seek them out or apply them through creative works. I discovered for myself the language of systems. I found a deep appreciation of mathematics and a growth path to set my life on.
I was able to do this exploration at a time when my work was slow and steady. It came along a couple years ago when I was 25, which I've heard is when the brain's development levels off. I feel lucky to have experienced it when I did because I was totally unsatisfied with my life before then.
Since then I've found work I love at a seed stage startup where I've been able to apply my ideas in various ways. I have become much more active as a creator, including exploring latent artistic sensibilities through writing poetry and taking oil painting classes with a very talented teacher. I've found myself becoming an artist in my work - I've become the director and lead engineer at the startup and am exploring ways to determine and distribute truth in the products we sell, and further to make a statement on what art is in a capitalistic society (even if I'm the only one who will ever recognize it). I've also become more empathic and found a wonderful woman and two pups to share my life with, despite previously being extremely solitary. Between work and family I have less time for introspection now, but I expect I'll learn just as much through these efforts.
Ultimately, I've learned to trust my subconscious. I was always anxious and nervous about being wrong in any situation before, but now I trust that even if I am wrong in the moment, my brain can figure out good answers over longer stretches of time.
I don't know how far cannabis led me down this path but it definitely gave me a good strong push.
This is almost exactly my experience! I don't think HN talks about it much, but cannabis is a great way to approach intuitive depth on subjects. For me it was ego, math, music, civics and information theory concepts.
When I started, it was at a job that I absolutely hated (rewriting Mantis to be a help desk system), and it helped me get out of it by opening up a better understanding of low-level systems. That eventually led to high-frequency trading systems tuning and some pretty deep civics using FOIA.
Not that it was a direct contributor, but I do consider it a seed towards better understanding of the things around me. I don't necessarily feel happier, but I feel much more content.
In seeking to consider what form this “‘really good’ balance” might take, can you recommend any favored resources/implementations to illustrate what “real positive additions” computers and networking can bring to the table? I’m familiar with the influence of Piaget/Papert - but I would love to gain some additional depth on the media/networking side of the conversation.
Thank you for your thoughts. I feel similarly about the cultural regression of literacy.
With a good programming language and interface, one -- even children -- can create from scratch important simulations of complex non-linear systems that can help one's thinking about them.
I wish this were a better platform for fluid discussion, but I'll dig into your writings and talks (Viewpoint/Youtube/TED/elsewhere?) to gain a better understanding of your thoughts on these topics.
-- I was surprised that the HN list page didn't automatically refresh in my browser (seems as though it should be live and not have to be prompted ...)
It certainly helps when reading long replies, that's for sure. I do think a mini-update box with "click here to load" like on stackoverflow for replies or edits would be an interesting idea.
Of course, the 90's style is pretty hacker-hipster as well...can't deny that.
Imagine:
1. trying to read something long, or
2. going off to a follow a link and to come back and respond,
only to find that the page has been refreshing while you looked away. Now you have to scroll about to find the place you were at in order to respond or to continue reading the comments.
This is maybe the most Alan-Kay-like response so far. Short, simple, but a tiny bit like a message from an alternate dimension. "No, no, I'm not asking you to build the also-wrong solution someone else has tried. I'm saying: solve the problem."
Also feels like worse-is-better vs. the right thing. How much engineering effort and additional maintenance would be required to develop and support such a time-model? A lot. Alas, let us re-create software systems to be radically simpler so that we can do the right thing! Still waiting for Urbit and VPRI's 10k line operating system ... but that's what Alan stands for in our industry: "strive to do the right thing," or as you put it, "solve the problem".
Jaron Lanier mentioned you as part of the, "humanistic thread within computing." I understood him to mean folks who have a much broader appreciation of human experience than the average technologist.
Who are "humanistic technologists" you admire? Critics, artists, experimenters, even trolls... Which especially creative technologists inspire you?
I imagine people like Jonathan Harris, Ze Frank, Jaron Lanier, Ben Huh, danah boyd, Sherry Turkle, Douglas Engelbart, Douglas Rushkoff, etc....
What turning points in the history of computing (products that won in the marketplace, inventions that were ignored, technical decisions where the individual/company/committee could've explored a different alternative, etc.) do you wish had gone another way?
Just to pick three (and maybe not even at the top of my list if I were to write it and sort it):
(a) Intel and Motorola, etc. getting really interested in the Parc HW architectures that allowed Very High Level Languages to be efficiently implemented. Not having this in the 80s brought "not very good ideas from the 50s and 60s" back into programming, and was one of the big factors in:
(b) the huge propensity of "we know how to program" etc., that was the other big factor preventing the best software practices from the 70s from being the start of much better programming, operating systems, etc. in the 1980s, rather the reversion to weak methods (from which we really haven't recovered).
(c) The use of "best ideas about destiny of computing" e.g. in the ARPA community, rather than weak gestures e.g. the really poorly conceived WWW vs the really important and needed ideas of Engelbart.
I get (a) and (b) completely. On (c), I felt this way about NCSA Mosaic in 1993 when I first saw it and I'm relieved to hear you say this because although I definitely misunderstood a major technology shift for a few years, maybe I wasn't wrong in my initial reaction that it was stupid.
I didn't begin to get it until the industry started trying to use browsers for applications in the late '90s/early 2000's. I took one look at the "stateful" architecture they were trying to use, and I said to myself, "This is a hack." I learned shortly thereafter about criticism of it saying the same thing, "This is an attempt to impose statefulness on an inherently stateless architecture." I kept wondering why the industry wasn't using X11, which already had the ability to carry out full GUI interactions remotely. Why reject a real-time interactive architecture that's designed for network use for one that insisted on page refreshes to update the display? The whole thing felt like a step backward.
The point where it clobbered me over the head was when I tried to use a web application framework to make a complex web form application work. I got it to work, and the customer was very pleased, but I was ashamed of the code I wrote, because I felt like I had to write it like I was a contortionist. I was fortunate in that I'd had prior experience with other platforms where the architecture was more sane, so that I didn't think this was a "good design." After that experience, I left the industry.
I've been trying to segue into a different, more sane way of working with computers since. I don't think any of my past experience really qualifies, with the exception of some small aspects and experiences. The key is not to get discouraged once you've witnessed works that put your own to shame, but to realize that the difference in quality matters, that it was done by people rather like yourself who had the opportunity to put focus and attention on it, and that one should aspire to meet or exceed it, because anything else is a waste of time.
My reference to X11 was mostly rhetorical, to tell the story. I learned at some point that the reason X11 wasn't adopted, at least in the realm of business apps. I was in, was that it was considered a security risk. Customers had the impression that http was "safe." That has since been proven false, as there have been many exploits of web servers, but I think by the time those vulnerabilities came to light, X11 was already considered passe. It's like how stand-alone PCs were put on the internet, and then people discovered they could be cracked so easily. I think a perceived weakness was that X11 didn't have a "request-respond" protocol that worked cleanly over a network for starting a session. One could have easily been devised, but as I recall, that never happened. In order to start a remote session of some tool I wanted to use, I always had to login to a server, using rlogin or telnet, type out the name of the executable, and tell it to "display" to my terminal address. It was possible to do this even without logging in. I'd seen students demonstrate that when I was in school. While they were logged in, they could start up an executable somewhere and tell it to "display" to someone else's terminal. The thing was, it could do this without the "receiver's" permission. It was pretty open that way. (That would have been another thing to implement in a protocol: don't "display" without permission, or at least without request from the same address.) Http didn't have this problem, since I don't think it's possible to direct a browser to go somewhere without a corresponding, prior request from that browser.
X11 was not the best designed GUI framework, from what I understand. I'd heard some complaints about it over the years, but at least it was designed to work over a network, which no other GUI framework of the time I knew about could. It could have been improved upon to create a safer network standard, if some effort had been put into it.
As Alan Kay said elsewhere on this thread, it's difficult to predict what will become popular next, even if something is improved to a point where it could reasonably be used as a substitute for something of lower quality. So, I don't know how to "bring X11 back." As he also said, the better ideas which ultimately became popularly adopted were ones that didn't have competitors already in the marketplace. So, in essence, the concept seemed new and interesting enough to enough people that the only way to get access to it was to adopt the better idea. In the case of X11, by the time the internet was privatized, and had become popular, there were already other competing GUIs, and web browsers became the de facto way people experienced the internet in a way that they felt was simple enough for them to use. I remember one technologist describing the browser as being like a consumer "radio" for the internet. That's a pretty good analogy.
Leaving that aside, it's been interesting to me to see that thick clients have actually made a comeback, taking a huge chunk out of the web. What was done with them is what I just suggested should've been done with X11: The protocol was (partly) improved. In typical fashion, the industry didn't quite get what should happen. They deliberately broke aspects of the OS that once allowed more user control, and they made using software a curated service, to make existing thick client technology safer to use. The thinking was, not without some rationale, that allowing user control led to lots and lots of customer support calls, because people are curious, and usually don't know what they're doing. The thing was, the industry didn't try to help people understand what was possible. Back when X11 was an interesting and productive way you could use Unix, the industry hadn't figured out how to make computers appealing to most consumers, and so in order to attract any buyers, they were forced into providing some help in understanding what they could do with the operating system, and/or the programming language that came with it. The learning curve was a bit steeper, but that also had the effect of limiting the size of the market. As the market has discovered, the path of least resistance is to make the interface simple, and low-hassle, and utterly powerless from a computational standpoint, essentially turning a computer into a device, like a Swiss Army knife.
I think a better answer than IoT is education, helping people to understand that there is something to be had with this new idea. It doesn't just involve learning to use the technology. As Alan Kay has said, in a phrase that I think deserves to be explored deeply, "The music is not in the piano."
It's not an easy thing to do, but it's worth doing, and even educators like Alan continue to explore how to do this.
This is just my opinion, as it comes out of my own personal experience, but I think it's borne out in the experience of many of the people who have participated in this AMA: I think an important place to start in all of this is helping people to even hear that "music," and an important thing to realize is you don't even need a computer to teach people how to hear it. It's just that the computer is the best thing that's been invented so far for expressing it.
I had a similar experience to yours and was comfortable coding web pages via cgi-bin with vi. :-)
That is why I am now very interested in containers and microservices, in both the local and network senses.
As a "consumer", I am also very comfortable communicating with people via messaging apps like WeChat and passing Wikipedia and GitHub links around. Some of them are JavaScript "web apps" written and published on GitHub by typing on my iPhone. Here is an example:
I don't think networked X11 is quite the web we'd want (it's really outdated), but it does seem better than browsers, which as you point out are so bad you want to stab your eyes out. Unfortunately, now that the web has scaled up to this enormous size, people can't un-see it and it does seem like it's seriously polluted our thinking about how the Internet should interact with end users.
Maybe the trick is something close to this: we need an Internet where it's very easy to do not only WYSIWYG document composition and publishing (which is what the web originally was, minus the WYSIWYG), but really deliver any kind of user experience we want (like VR, for example). It should be based on a network OS (an abstract, extensible microkernel on steroids) where user experiences of the network are actually programs with their own microkernel systems (sort of like an updated take on PostScript). The network OS can security-check the interpreters, enforce quotas, and deal out resources, and the microkernels that deliver user experiences like documents can be updated as what we want to do changes over time. I think we'd have something more in this direction (although I'm sure I missed any number of obvious problems) if we were to actually pass Alan Kay's OS-101 class as an industry.
We actually sort of very briefly started heading in this direction with Marimba's "Castanet" back at the beginning of Java, and I was WILDLY excited to see us trying something less dumb than the browser. Unfortunately, it would seem that economic pressures pushed Marimba into becoming a software deployment provider, which is really not what I think they were originally trying to do. Castanet should have become the OS of the web. I think Java still has the potential to create something much better than the web, because a ubiquitous and very mature virtual machine is a very powerful thing, but I don't see anyone trying to go there. There's this mentality of "nobody would install something better." And yet we installed Netscape and even IE...
BTW, I do think the security problems of running untrusted code are potentially solvable (at least so much as any network security problems are) using a proper messaging microkernel architecture with the trusted resource-accessing code running in one process and the untrusted code running in another. The problem with the Java sandbox (so far as I understand all that) is that it's in-process. The scary code runs with the trusted code. In theory, Java is controlled enough to protect us from the scary code, but in practice, people are really smart and one tiny screw-up in the JVM or the JDK and bad code gets permissions it shouldn't have. A lot of these errors could be controlled or eliminated by separating the trusted code from the untrusted code as in Windows NT (even if only by making the protocol for resource permissions really clear).
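To make the trusted/untrusted process split a bit more concrete, here is a toy Python sketch (not a real sandbox, and all names are made up): the untrusted code runs in its own process and can only request resources over a narrow message protocol, while the trusted side decides what to grant.

    from multiprocessing import Process, Pipe

    ALLOWED_FILES = {"public.txt"}

    def untrusted_worker(conn):
        # No direct file access here; it can only send requests and read replies.
        conn.send(("read", "public.txt"))
        print("worker got:", conn.recv())
        conn.send(("read", "/etc/passwd"))
        print("worker got:", conn.recv())
        conn.close()

    def trusted_broker(conn):
        while True:
            try:
                op, name = conn.recv()
            except EOFError:      # worker closed its end; we're done
                break
            if op == "read" and name in ALLOWED_FILES:
                conn.send("contents of " + name)  # stand-in for real file I/O
            else:
                conn.send("denied")

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        worker = Process(target=untrusted_worker, args=(child_end,))
        worker.start()
        child_end.close()         # so the broker sees EOF when the worker exits
        trusted_broker(parent_end)
        worker.join()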
A lot of the VPRI work involved inventing new languages (DSLs). The results were extremely impressive but there were some extremely impressive people inventing the languages. Do you think this is a practical approach for everyday programmers? You have also recommended before that there should be clear separation between meta model and model. Should there be something similar to discipline a codebase where people are inventing their own languages? Or should just e.g. OS writers invent the languages and everyone else use a lingua franca?
Tricky question. One answer would be to ask whether there is an intrinsic difference between "computer science" and (say) physics? Or are the differences just that computing is where science was in the Middle Ages?
In physics, you can tell you're making progress because you can explain more things that happen in nature. How can you tell when you're making progress in computer science?
To me it seems like "computer science" lumps together too many different goals. It's like if we had a field called "word science" that covered story-writing, linguistics, scientific publication, typesetting, etc.
Now that it is "morning", I'm not sure that I can do justice to this question here...
But certainly we have to take back the term "computer science" and try to give it real meaning as to what might constitute an actual science here. As Herb Simon pointed out, it's a "science of the artificial", meaning that it is a study of what can be made and what has been made.
Science tries to understand phenomena by making models and assessing their powers. Nature provides phenomena, but so do engineers e.g. by making a bridge in any way they can. Like most things in early engineering, bridge-lore was put in "cookbooks of practice". After science got invented, scientist-engineers could use existing bridges as phenomena to be studied, and now develop models/theories of bridges. This got very powerful rather recently (the Tacoma Narrows bridge went down just a few months after I was born!).
When the first Turing Award winner -- Al Perlis -- was asked in the 60s "What is Computer Science?", he said "It is the science of processes!". He meant all processes including those on computers, but also in Biology, society, etc.
His idea was that computing formed a wonderful facility for making better models of pretty much everything, especially dynamic things (which everything actually is), and that it was also the kind of thing that could really be understood much better by using it to make models of itself.
Today, we could still take this as a starting place for "getting 'Computer Science' back from where it was banished".
In any case, this point of view is very different from engineering. A fun thing in any "science of the artificial" is that you have to make artifacts for both phenomena and models.
(And just to confuse things here, note how much engineering practice is really required to make a good theory in a science!)
Thanks for the answer! It seems like there's a distinction here between exploring how models can/should be built (a mathematical/philosophical task), helping people create and understand these models with computers (a design/engineering task), and using these models to formulate and test hypotheses about ourselves and the world (a scientific task). Maybe the lack of science is because we haven't figured out the math/philosophy/design/engineering parts yet!
Thanks. I've been thinking about your questions. I might be misreading you but I think that the answer is probably yes to both. So we should try to get out of the middle ages by inventing new theories and criticising and testing them like physics. But maybe just the physicists should do that. In the meantime the engineers should focus on being able to communicate clearly with the best tools that are currently available.(part of which is restricting their desire to invent)
Engineering is wonderful -- but think of what happened after real science got invented!
Today's "computer science" is much more like "library science" than it should be on the one hand, and too much coincident with engineering on the other (and usually not great engineering at that).
It's way past time for our not-quite-a-field to grow up more in important ways.
Agreed. It's really motivating to have someone who has shown a few times what can be done continuing to push for better. It's also helpful for you call a spade a spade when you talk of reinventing the flat tire. If more people would recognise both of these then maybe we could have a better future and more stable engineering present (rather than framework/language of the week!)
Computer science is defined by information theory, and we already have mathematical proofs binding together information theory with the laws of quantum physics (such as the example of the minimum energy needed to erase one bit of entropy from memory, something which is bounded by the ambient temperature).
Sort of. There's quite a few theories operating on computer science as we know it today. Especially in software and hardware. Examples include model-driven development, flow-based programming, lambda calculus, state machines, logic-oriented systems, and so on. The mathematical models involved underlying structuring and verification of anything built in these can be quite different although often with some overlapping techniques or principles. There's also been lots of work in high-assurance systems going from requirements and design specifications in a rigorous, mathematical (even mechanical) way down to an implementation in HW, SW, or both. None of them cite information theory. Heck, the analog computers might be outside of it entirely given they implement specific, mathematical functions with continuous operation on reals. I know Shannon had a separate model for them.
So, given I don't study it or read on it, I'm actually curious if you or anyone else has references on where information theory impacts real software development over the years. I study lots of formal methods & synthesis research but never even see the phrase mentioned. I've been imagining it's in its own little field working at a strongly theoretical level making abstract or concrete observations about computers. Just don't see them outside some cryptography stuff I've read.
EDIT to add an example below where Bertrand Meyer presents a Theory of Programs that ties it all to basic set theory.
Yes, that was what I was driving at. Anyone could do physics in the Middle Ages -- they just had to get a pointy hat. A few centuries later after Newton, one suddenly had to learn a lot of tough stuff, but it was worth it because the results more than paid for the new levels of effort.
Hi Alan!
I've got some assumptions regarding the upcoming big paradigm shift (and I believe it will happen sooner rather than later):
1. focus on data processing rather than imperative way of thinking (esp. functional programming)
2. abstraction over parallelism and distributed systems
3. interactive collaboration between developers
4. development accessible to a much broader audience, especially to domain experts, without sacrificing power users
In fact, the startup I'm working at aims exactly in this direction. We have created a purely functional visual<->textual language, Luna ( http://www.luna-lang.org ).
By visual<->textual I mean that you can always switch between code and graph and vice versa.
Data like that sentence? Or all of the other sentences in this chat? I find 'data' hard to consider a bad idea in and of itself, i.e. if data == information, records of things known/uttered at a point in time. Could you talk more about data being a bad idea?
Data without an interpreter is certainly subject to (multiple) interpretation :) For instance, the implications of your sentence weren't clear to me, in spite of it being in English (evidently, not indicated otherwise). Some metadata indicated to me that you said it (should I trust that?), and when. But these seem to be questions of quality of representation/conveyance/provenance (agreed, important) rather than critiques of data as an idea. Yes, there is a notion of sufficiency ('42' isn't data).
Data is an old and fundamental idea. Machine interpretation of un- or under-structured data is fueling a ton of utility for society. None of the inputs to our sensory systems are accompanied by explanations of their meaning. Data - something given, seems the raw material of pretty much everything else interesting, and interpreters are secondary, and perhaps essentially, varied.
There are lots of "old and fundamental" ideas that are not good anymore, if they ever were.
The point here is that you were able to find the interpreter of the sentence and ask a question, but the two were still separated. For important negotiations we don't send telegrams, we send ambassadors.
This is what objects are all about, and it continues to be amazing to me that the real necessities and practical necessities are still not at all understood. Bundling an interpreter for messages doesn't prevent the message from being submitted for other possible interpretations, but there simply has to be a process that can extract signal from noise.
This is particularly germane to your last paragraph. Please think especially hard about what you are taking for granted in your last sentence.
Without the 'idea' of data we couldn't even have a conversation about what interpreters interpret. How could it be a "really bad" idea? Data needn't be accompanied by an interpreter. I'm not saying that interpreters are unimportant/uninteresting, but they are separate. Nor have I said or implied that data is inherently meaningful.
Take a stream of data from a seismometer. The seismometer might just record a stream of numbers. It might put them on a disk. Completely separate from that, some person or process, given the numbers and the provenance alone (these numbers are from a seismometer), might declare "there is an earthquake coming". But no object sent an "earthquake coming" "message". The seismometer doesn't "know" an earthquake is coming (nor does the earth, the source of the 'messages' it records), so it can't send a "message" incorporating that "meaning". There is no negotiation or direct connection between the source and the interpretation.
We will soon be drowning in a world of IoT sensors sending context-or-provenance-tagged but otherwise semantic-free data (necessarily, due to constraints, without accompanying interpreters) whose implications will only be determined by downstream statistical processing, aggregation etc, not semantic-rich messaging.
If you meant to convey "data alone makes for weak messages/ambassadors", well, ok. But richer messages will just bottom out at more data (context metadata, semantic tagging, all more data). Ditto, as someone else said, any accompanying interpreter (e.g. bytecode? - more data needing interpretation/execution). Data remains a perfectly useful and more fundamental idea than "message". In any case, I thought we were talking about data, not objects. I don't think there is a conflict between these ideas.
It contravenes the common and historical use of the word 'data' to imply undifferentiated bits/scribbles. It means facts/observations/measurements/information and you must at least grant it sufficient formatting and metadata to satisfy that definition. The fact that most data requires some human involvement for interpretation (e.g. pointing the right program at the right data) in no way negates its utility (we've learned a lot about the universe by recording data and analyzing it over the centuries), even though it may be insufficient for some bootstrapping system you envision.
I think what Alan was getting at is that what you see as "data" is in fact, at its basis, just signal, and only signal; a wave pattern, for example, but even calling it a "wave pattern" suggests interpretation. What I think he's trying to get across is there is a phenomenon being generated by something, but it requires something else--an interpreter--to even consider it "data" in the first place. As you said, there are multiple ways to interpret that phenomenon, but considering "data" as irreducible misses that point, because the concept of data requires an interpreter to even call it that. Its very existence as a concept from a signal presupposes an interpretation. And I think what he might have been getting at is, "Let's make that relationship explicit." Don't impose a single interpretation on signal by making "data" irreducible. Expose the interpretation by making it explicit, along with the signal, in how one might design a system that persists, processes, and transmits data.
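To make that relationship concrete, here is a toy Python illustration (all names hypothetical) of the contrast between a bare signal and the same observations bundled with an explicit interpreter, using the seismometer example from above:

    raw_samples = [0.02, 0.03, 1.7, 2.4]   # "signal": recorded numbers, nothing more

    class SeismometerReading:
        """Carries the recorded samples together with one way to interpret them."""
        def __init__(self, samples, units="mm/s"):
            self.samples = list(samples)   # the plain data stays accessible
            self.units = units

        def looks_like_quake(self, threshold=1.0):
            # One bundled interpretation; other interpreters can still be
            # pointed at self.samples, so the data is accompanied, not hidden.
            return any(abs(s) > threshold for s in self.samples)

    reading = SeismometerReading(raw_samples)
    print(reading.looks_like_quake())  # True
    print(reading.samples)             # the raw observations, as before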
If we can't agree on what words mean we can't communicate. This discussion is undermined by differing meanings for "data", to no purpose. You can of course instead send me a program that (better?) explains yourself, but I don't trust you enough to run it :)
The defining aspect of data is that it reflects a recording of some facts/observations of the universe at some point in time (this is what 'data' means, and meant long before programmers existed and started applying it to any random updatable bits they put on disk). A second critical aspect of data is that it doesn't and can't do anything, i.e. have effects. A third aspect is that it does not change. That static nature is essential, and what makes data a "good idea", where a "good idea" is an abstraction that correlates with reality - people record observations and those recordings (of the past) are data. Other than in this conversation apparently, if you say you have some data, I know what you mean (some recorded observations). Interpretation of those observations is completely orthogonal.
Nothing about the idea of 'data' implies a lack of formatting/labeling/use of common language to convey the facts/observations, in fact it requires it. Data is not merely a signal and that is why we have two different ideas/words. '42' is not, itself, a fact (datum). What constitutes minimal sufficiency of 'data' is a useful and interesting question. E.g. should data always incorporate time, what are the tradeoffs of labeling being in- or out-of-band, per datom or dataset, how to handle provenance etc. That has nothing to do with data as an idea and everything to do with representing data well.
But equating any such labeling with more general interpretation is a mistake. For instance, putting facts behind a dynamic interpreter (one that could answer the same question differently at different times, mix facts with opinions/derivations, or have effects) certainly exceeds (and breaks) the idea of data. Which is precisely why we need the idea of data, so we can differentiate and talk about when that is and is not happening -- am I dealing with facts, immutable observations of the past ("the king is dead"), or just temporary (derived) opinions ("there may be a revolt")? Consider the difference between a calculation involving (several times) a fact (date-of-birth) vs a live-updated derivation (age). The latter can produce results that don't add up. 'date-of-birth' is data and 'age' (unless temporally-qualified, 'as-of') is not.
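To make the date-of-birth vs. age point concrete, here is a minimal Python sketch (the names and dates are invented for illustration): a derivation qualified with an explicit as-of time is reproducible, while a "live" age can disagree with itself within one calculation if the clock rolls over between reads.

    from datetime import date

    DATE_OF_BIRTH = date(1990, 6, 15)   # a fact: recorded once, never changes

    def age_as_of(dob, as_of):
        # derivation qualified by an explicit point in time -- reproducible
        return as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))

    def age_now(dob):
        # live derivation -- the answer depends on when you ask
        return age_as_of(dob, date.today())

    # Two evaluations of the fact-plus-as-of form always agree:
    assert age_as_of(DATE_OF_BIRTH, date(2016, 6, 20)) == age_as_of(DATE_OF_BIRTH, date(2016, 6, 20))

    # Two evaluations of the live derivation inside one calculation may not,
    # if the calendar happens to roll over between them:
    first, second = age_now(DATE_OF_BIRTH), age_now(DATE_OF_BIRTH)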
When interacting with an ambassador one may or may not get the facts, and may get different answers at different times. And one must always fear that some question you ask will start a war. Science couldn't have happened if consuming and reasoning about data had that irreproducibility and risk.
'Data' is not a universal idea, i.e. a single primordial idea that encompasses all things. But the idea that dynamic objects/ambassadors (whatever their other utility) can substitute for facts (data) is a bad idea (does not correspond to reality). Facts are things that have happened, and things that have happened have happened (are not opinions), cannot change and cannot introduce new effects. Data/facts are not in any way dynamic (they are accreting, that's all). Sometimes we want the facts, and other times we want someone to discuss them with. That's why there is more than one good idea.
Data is as bad an idea as numbers, facts and record keeping. These are all great ideas that can be realized more or less well. I would certainly agree that data (the maintenance of facts) has been bungled badly in programming thus far, and lay no small part of the blame on object- and place-oriented programming.
I think in the Science of Process that is being related as a desirable goal, everything would necessarily be a dynamic object (or perhaps something similar to this but fuzzier or more relational or different in some other way, but definitely dynamic) because data by itself is static while the world itself is not.
Not only is your perception based on an interpreter, but how can you be sure that you were even given all of the relevant bits? Or, even what the bits really meant/are?
Of course the selection of data is arbitrary -- but Rich gives us a definition, which he makes abundantly clear and uses consistently. All definitions can be considered arbitrary. He's not making any claim that we have all the relevant bits of data or that we can be sure what the data really means or represents.
But we can expound on this problem in general. In any experiment where we gather data, how can we be sure we have collected a sufficient quantity to justify conclusions (and even if we are using statistical methods that our underlying assumptions are indeed consistent with reality) and that we have accrued all the necessary components? What you're really getting at is an __epistemological__ problem.
My school of thought is that the only way to proceed is to do our best with the data we have. We'll make mistakes, but that's better than the alternative (not extrapolating on data at all.)
Isn't the interpreter code itself data in the sense that it has no meaning without something (a machine) to run it? How do you avoid having to send an interpreter for the interpreter and so on?
Thank you! I started to think along those lines too thanks to Carl Sagan's novel Contact. That was the first thing that came to mind.
Now the question is, what if there are "objects" more advanced than others, and what if an advanced object sends a message concealing a Trojan horse?
I think this question was also brought up in the novel/movie too...
I think this is a real-life, practical showstopper for developing this concept...
I think the object is a very powerful idea for wrapping "local" context. But in a networked (communication) environment, it is still challenging to handle "remote" context with objects. That is why we have APIs and serialization/deserialization overhead.
In the ideal homogeneous world of Smalltalk, it is less of an issue. But if you want a Windows machine to talk to a Unix machine, the remote context becomes an issue.
In principle we can send a Windows VM along with the message from Windows and a Unix VM (docker?) with a message from Unix, if that is a solution.
Along this line of logic, perhaps the future of AI is not "machine learning from big data" (a lot of buzz words) but computers that generate runtime interpreters for new contexts.
The association between "patterns" and interpretation becomes an "object" when this is part of the larger scheme. When you've just got bits and you send them somewhere, you don't even have "data" anymore.
Even with something like EDI or XML, think about what kinds of knowledge and process are actually needed to even do the simplest things.
Sounds pretty much like the problem of establishing contact with an alien civilization. Definitely set theory, prime numbers, arithmetic and so on... I guess at some point, objects will be equipped with general intelligence for such negotiations if they are to be true digital ambassadors!
It's hard for me to grasp what this negotiation would look like. Particularly with objects that haven't encountered each other. It just seems like such a huge problem.
I don't really know anything at all about microbiology, but maybe climbing the ladder of abstraction to small insects like ants. There is clearly negotiation and communication happening there, but I have to think it's pretty well bounded. Even if one ant encountered another ant, and needed to communicate where food was, it's with a fixed set of semantics that are already understood by both parties.
Or with honeybees, doing the communication dance. I have no idea if the communication goes beyond "food here" or if it's "we need to decide who to send out."
It seems like you have to have learning in the object to really negotiate with something it hasn't encountered before. Maybe I'm making things too hard.
Maybe "can we communicate" is the first negotiation, and if not, give up.
I remember at one point after listening to one of your talks about TCP/IP as a very good OO system, and pondering the question of how to make software like that, an idea that came to mind was, "Translation as computation." I was combining the concept that as implemented, TCP/IP is about translation between packet-switching systems, so a semantic TCP/IP would be a system that translates between different machine models, though, in terms of my skill, the best that I could imagine was "compilers as translators," which I don't think cuts it, because compilers don't embody a machine model. They assume it. However, perhaps it's not necessary to communicate machine models explicitly, since such a system could translate between them re. what state means. This would involve simulating state to satisfy local operation requirements while actual state is occurring, and will eventually be communicated. I've heard you reference McCarthy's situation calculus re. this.
Well, there's the old Component Object Model and cousins ... under this model an object a encountering a new object b will, essentially, ask 'I need this service performed, can you perform it for me?' If b can perform the service, a makes use of it; if not, not.
Another technique that occurs to me is from type theory ... here, instead of objects we'll talk in terms of values and functions, which have types. So e.g. a function a encountering a new function b will examine b's type and thereby figure out if it can/should call it or not. E.g., b might be called toJson and have type (in Haskell notation) ToJson a => a -> Text, so the function a knows that if it can give toJson any value which has a ToJson typeclass instance, it'll get back a Text value, or in other words toJson is a JSON encoder function, and thus it may want to call it.
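A tiny Python sketch of the "can you perform this service?" negotiation described in the two comments above (the class and method names here are invented, not any real protocol):

    import json

    class JsonEncoder:
        def can(self, service):
            return service == "to_json"
        def perform(self, service, value):
            assert service == "to_json"
            return json.dumps(value)

    class Client:
        def use(self, other, value):
            # Ask before assuming: only invoke the service if the peer claims it.
            if hasattr(other, "can") and other.can("to_json"):
                return other.perform("to_json", value)
            return None   # negotiation failed; fall back or give up

    print(Client().use(JsonEncoder(), {"x": 1}))   # the value encoded as a JSON string
    print(Client().use(object(), {"x": 1}))        # None: this stranger can't help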
The Internet Archive (http://archive.org) is doing the same thing. They have old software stored that you can run in online emulators. I only wish they had instructions for how to use the emulators. The old keyboards and controllers are not like today's.
I think for so many important cases, this is almost the only way to do it. The problems were caused by short-sighted vendors and programmers getting locked into particular computers and OS software.
For contrast, one could look at a much more compact way to do this that -- with more foresight -- was used at Parc, not just for the future, but to deal gracefully with the many kinds of computers we designed and built there.
Elsewhere in this AMA I mentioned an example of this: a resurrected Smalltalk image from 1978 (off a disk pack that Xerox had thrown away) that was quite easy to bring back to life because it was already virtualized "for eternity".
This is another example of "trying to think about scaling" -- in this case temporally -- when building systems ....
The idea was that you could make a universal computer in software that would be smaller than almost any media made in it, so ...
Over all of history, there is no accounting for what "the mainstream" decides to believe and do. Many people (wrongly) think that "Darwinian processes" optimize, but any biologist will point out that they only "tend to fit to the environment". So if your environment is weak or uninteresting ...
This also obtains for "thinking" and it took a long time for humans to even imagine thinking processes that could be stronger than cultural ones.
We've only had them for a few hundred years (with a few interesting blips in the past), and they are most definitely not "mainstream".
Good ideas usually take a while to have and to develop -- so when the mainstream has a big enough disaster to make it think about change rather than more epicycles, it will still not allocate enough time for a really good change.
At Parc, the inventions that made it out pretty unscathed were the ones for which there was really no alternative and/or no one was already doing: Ethernet, GUI, parts of the Internet, Laser Printer, etc.
The programming ideas on the other hand were -- I'll claim -- quite a bit better, but (a) most people thought they already knew how to program and (b) Intel, Motorola thought they already knew how to design CPUs, and were not interested in making the 16 bit microcoded processors that would allow the much higher level languages at Parc to run well in the 80s.
It seems that barriers to entry in hardware innovation are getting higher and higher due to high-risk industrial efforts. In the meantime, barriers to entry in software are getting lower and lower due to improvements in tooling for both software and hardware.
On the other hand due to the exponential growth of software dependency, "bad ideas" in software development are getting harder and harder to remove and the social cost of "green field" software innovation is also getting higher and higher.
How do we solve these issues in the coming future?
But e.g. the possibilities for "parametric" parallel computing solutions (via FPGAs and other configurable HW) have not even been scratched (too many people trying to do either nothing or just conventional stuff).
Some of the FPGA modules (like the BEE3) will slip into a Blades slot, etc.
Similarly, there is nothing to prevent new SW from being done in non-dependent ways (meaning the initial dependencies to hook up into the current world can be organized to be gradually removable, and the new stuff need not have the same kind of crippling dependencies).
For example, a lot can be done -- especially in a learning curve -- if e.g. a subset of Javascript in a browser (etc) can really be treated as a "fast enough piece of hardware" (of not great design) -- and just "not touch it with human hands". (This is awful in a way, but it's really a question of "really not writing 'machine code' ").
Part of this is to admit to the box, but not accept that the box is inescapable.
Thank you Alan for your deep wisdom and crystal vision.
It is the best online conversation I have ever experienced.
It also reminded me of inspiring conversations with Jerome Bruner at his New York City apartment 15 years ago. (I was working on some project with his wife's NYU social psychology group at the time.) As a Physics Ph.D. student, I never imagined I could become so interested in the Internet and education in the spirit of Licklider and Doug Engelbart.
You probably know that our mutual friend and mentor Jerry Bruner died peacefully in his sleep a few weeks ago at the age of 100, and with much of his joie de vivre beautifully still with him. There will never be another Jerry.
Information in "entropy" sense is objective and meaningless. Meaning only exists within a context. If we think "data" represent information, "interpreters" bring us context and therefore meaning.
Thank you - I was beginning to wonder if anyone in this conversation understood this. It is really the key to meaningfully (!!) move forward in this stuff.
The more meaning you pack into a message, the harder the message is to unpack.
So there's this inherent tradeoff between "easy to process" and "expressive" -- and I imagine deciding which side you want to lean toward depends on the context.
So the idea is to always send the interpreter, along with the data? They should always travel together?
Interesting. But, practically, the interpreter would need to be written in such a way that it works on all target systems. The world isn't set up for that, although it should be.
Hm, I now realize your point about HTML being idiotic. It should be a description, along with instructions for parsing and displaying it (?)
TCP/IP is "written in such a way that it works on all target systems". This partially worked because it was early, partly because it is small and simple, partly because it doesn't try to define structures on the actual messages, but only minimal ones on the "envelopes". And partly because of the "/" which does not force a single theory.
This -- and the Parc PUP "internet" which preceded it and influenced it -- are examples of trying to organize things so that modules can interact universally with minimal assumptions on both sides.
The next step -- of organizing a minimal basis for inter-meanings -- not just internetworking -- was being thought about heavily in the 70s while the communications systems ideas were being worked on, but was quite to the side, and not mature enough to be made part of the apparatus when "Flag Day" happened in 1983.
What is the minimal "stuff" that could be part of the "TCP/IP" apparatus that could allow "meanings" to be sent, not just bits -- and what assumptions need to be made on the receiving end to guarantee the safety of a transmitted meaning?
I don't think it's too late, but it would require fairly large changes in perspective in the general computing community about computing, about scaling, about visions and goals.
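One naive way for a reader to picture the "minimal assumptions on both sides" idea -- purely a sketch, not what Alan or Parc built, with every name invented: an envelope carrying uninterpreted bits plus provenance and a reference to an interpreter, which the receiver applies only if it already holds and trusts it.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Envelope:
        payload: bytes          # uninterpreted bits
        provenance: str         # e.g. "seismometer-7, recorded 2016-06-20"
        interpreter_id: str     # which interpreter the sender had in mind

    # The receiver's own, locally trusted interpreters:
    KNOWN_INTERPRETERS = {
        "utf8-text": lambda b: b.decode("utf-8"),
    }

    def receive(env):
        # Commit to nothing about the payload's meaning unless we already
        # hold (and trust) the named interpreter.
        interp = KNOWN_INTERPRETERS.get(env.interpreter_id)
        if interp is None:
            return ("opaque", env.provenance, env.payload)
        return ("interpreted", env.provenance, interp(env.payload))

    print(receive(Envelope(b"the king is dead", "courier-1", "utf8-text")))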
Data, and the entirety of human understanding and knowledge derived from recording, measurement and analysis of data, predates computing, so I don't see the relevance of these recent, programming-centric notions in a discussion of its value.
Wouldn't Mr. Kay say that it is education that builds the continuity of the entirety of human understanding? Greek philosophy and astronomy survived in the Muslim world and not in the European, though both possessed plenty of texts, because only the former had an education system that could bootstrap a mind to think in a way capable of understanding and adding to the data. Ultimately, every piece of data is reliant on each generation of humans equipping enough of their children with the mindset capable to use it intelligently.
The value of data is determined by the intelligence of those interpreting it, not those who recorded it.
Of course, this dynamic is sometimes positive. The Babylonians kept excellent astronomical records though apparently making little theoretical advance in understanding them. Greeks with an excellent grasp of geometry put that data to much better use very quickly. But if they had had to wait to gather the data themselves, one can imagine them waiting a long time.
This kind of gets into philosophy, but a metaphor I came up with for thinking about this (another phrase for it is "thought experiment") is:
If I speak something to a rock, what is it to the rock? Is it "signal," or "data"?
Making the concept a little more interesting, what if I resonate the rock with a sound frequency? What is that to the rock? Is that "signal," or "data"?
Up until the Rosetta Stone was found, Egyptian hieroglyphs were indecipherable. Could data be gathered from them, nevertheless? Sure. Researchers could determine what pigments were used, and/or what tools were used to create them, but they couldn't understand the messages. It wasn't "data" up to that point. It was "noise."
I hope I am not giving the impression that I am a postmodernist who is out here saying, "Data is meaningless." That's not what I'm saying. I am saying meaning is not self-evident from signal. The concept of data requires the ability to interpret signal for meaning to be acquired.
Yes, I think if we could get rid of this notion we could probably move in interesting directions. Another way to look at it: if we take any object of sufficient complexity in the universe, how could it interact with another object of sufficient complexity? If we look at humans as first-order augmentation devices for other humans, it's notable that the complexity of their internal state is much higher than the complexity of the input they receive in any sufficiently small time frame (whatever measurement you decide to take). Basically, the whole state is encoded internally, by means of successive undifferentiated input. In that sense, for example, neural networks don't work with data as such; "data" presupposes an internal structure that, from the standpoint of the network itself, is absent from the input. It is the network's job to convert that input into something we can reasonably call "data". Moreover, this knowledge is encoded in its internal state, which is essentially the "interpreter" bundled in.
Another angle I like to think from: TRIZ has the concept of an ideal device, something that performs its function with minimum overhead; better still, the function is performed by itself, in the absence of any device. If we imagine the computer (in a very generic sense) to be such a device, it stands to reason that ideally it would require minimal, or even no, input. That means we don't need to encode meaning or interpretation into it through directed formal input. The only way for that to happen is for the computer to have a sufficiently complex internal state, capable of converting directed (or even self-acquired) input into whatever we can eventually call "data".
This logic could possibly be applied to some minimal object: we could look for a unit capable of performing a specific function on a defined range of inputs, building the meaning from its internal state. The second task would be to find a way to compose those objects, given that they have no common internal state, and to build systems in which the combination of those states yields a larger possible field of operation. A third interesting question: how can we build up the internal state of another object, if we want to feed it input requiring interpretation further down the line, building up from whatever minimum we already have?
Sure, the message matters insomuch as it contains any information the receiver might be able to receive, but that doesn't guarantee it will be received, so how much does a message really matter? I don't see how the sender matters that much (unless perhaps the sender and receiver are linked, for example, they exchange some kind of abstract interpreter for the message). But does the message matter on its own if it is encrypted so well that it is indistinguishable from noise to any but one particular receiver? It's just noise without the receiver. I'm not sure what was meant, but this is the best I can do in understanding it.
data isn't the carrier, it isn't the signal (information), and it certainly isn't the meaning (interpretation). A reasonable first approximation is that data is _message_.
Like many here, I'm a big fan of what you've accomplished in life, and we all owe you a great debt for the great designs and features of technologies we use everyday!
The majority of us have not accomplished as much in technology, and many of us, though a minority, are in the top end of the age bell curve. I'm in that top end.
I've found over the years that I've gone from being frustrated with the churn of software/web development, to completely apathetic about it, to wanting something else- something more meaningful, and then to somewhat of an acceptance that I'm lucky just to be employed and making what I do as an older developer.
I find it very difficult to have the time and energy to focus on new technologies that come out all of the time, and less and less able as my brain perhaps is less plastic to really get into the latest JavaScript framework, etc.
I don't get excited anymore, don't have the motivation, ability, or time to keep up with things like the younger folk. Also, I've even gotten tired of mentoring them, especially as I become less able and therefore less respected.
Have you ever had or known someone that had similar feelings of futility or a serious slowdown in their career? If so, what worked/what didn't and what advice could you provide?
Thank you for taking the time to read and respond to everyone you have here. It definitely is much appreciated!
I'm a fair bit closer to the right hand side of the age curve than the left. My advice: Look at the brevity of Alan Kay's responses. When I was young I would have soared past them looking for the point. Now I see that one sentence and I weep. Why didn't anyone say that 20 years ago?
Maybe they did. I was too busy being frustrated with the churn of software development. All my time and energy was focused on new technologies that came out all the time. My young plastic brain spent its flexibility absorbing the latest framework, etc.
Now that I have lost the motivation, ability and time to keep up with things like the younger folk, I can finally listen to the older folk (hopefully while there are still folk older than me to listen to).
These days I'm trying just to write code. All those young people have soared past the wisdom of their elders looking for the point. It's still there. Don't look at the new frameworks, look at what people were doing 10, 20, 30, 40, 50, 60 years ago. How does it inform what you are doing?
I was fortunate to grow up during a time when Alan Kay was a well-known figure in the personal computing world, and while what he said didn't make sense to me at the time, it still interested me intensely, and I always wondered what he meant by what he said. Strangely enough, looking back on my younger experience with computers, I think I actually did get a little bit of what he was talking about. It's just that I came to understand that little bit independently from listening to him. I didn't realize he was talking about the same thing. It wasn't until I got older, and got to finally see his talks through internet video that I finally started seeing that, and realizing more things by listening to him at length. Having the chance to correspond with him, talk about those things more in-depth, helped as well.
The way I look at it is just take in how fortunate you are to have your realizations when you have them (I've had my regrets, too, that I didn't "get" them sooner), and take advantage of them as much as you can. That's what I've tried to do.
I think we need a system, website, TV show, etc. in which experiences could be posted and rated. The best ideas and past experiences would rise to the top. You could vote and push things into view.
For example, your years of experience have realized that "yet another framework" is not the answer. We need a slower churn. But if the goal is to sell books ... well, now we are fighting capitalism.
There is a very old reading list online I made for the company that is now Accenture -- and this was the subject of a recent HN "gig". I think there is a URL for this discussion in this AMA.
I'm preparing a presentation on how to build a mental model of computing by learning different computer languages. It would be great to include some of your feedback.
* What programming language maps most closely to the way that you think?
* What concept would you reify into a popular language such that it would more closely fit that mapping?
* What one existing reified language feature do you find impacts the way you write code the most, especially even in languages where it is not available?
So, given work like that, what remaining tough problems are there before you would find a metaprogramming system safe and acceptable? Or do we have the fundamentals available but you just don't like the lack of deployment in mainstream or pragmatic languages and IDEs?
Note: Just dawned on me that you might mean abstract programming in the sense of specifying, analyzing, and coding up abstract requirements closer to human language. Still interested in what gripes or goals you have on that end if so.
"Meta is dangerous" so a safe meta-language within a language will have "fences" to protect.
(Note that "assignment" to a variable is "meta" in a functional language (and you might want to use a "roll back 'worlds' mechanism" (like transactions) for safety when this is needed.)
This is a parallel to various kinds of optimization (many of which violate module boundaries in some way) -- there are ways to make this a lot safer (most languages don't help much)
I've always felt that the meta space is too exponential or hyper to mentally represent or communicate. Perhaps we need different lenses to project the effects of the meta space on our mental model. Do you think this is why Gregor decided to move towards aspects?
I don't think Aspects is nearly as good an idea as MOP was. But the "hyperness" of it is why the language and the development system have to be much better. E.g. Dan Ingalls put a lot of work into the Smalltalks to allow them to safely be used in their own debugging, even very deep mechanisms. Even as he was making these breakthroughs back then, we were all aware there were further levels that were yet to be explored. (A later one, done in Smalltalk was the PIE system by Goldstein and Bobrow, one of my favorite meta-systems)
Aside from metaprogramming, from reading the "four reports" document that is the first Google link, it seems PIE also addresses another hard problem. In any hierarchically organized program, there are always related pieces of code that we would like to maintain together, but which get ripped apart and spread out because the hierarchy was split according to a different set of aspects. You can't get around this problem because if you change what criteria the hierarchy is split on in order to put these pieces near each other, now you've ripped apart code that was related on the original aspect. I've come to the conclusion that hierarchical code organization itself is the problem, and we would be better served by a way to assemble programs relationally (in the sense of an RDBMS). It seems like PIE was in that same conceptual space. Could you comment on that or elaborate more on the PIE system? Thanks.
Good insights -- and check out Alex Warth's "Worlds" paper on the Viewpoints site -- this goes beyond what PIE could do with "possible worlds" reasoning and computing ...
This is a very interesting paper. Its invocation of state space over time as a model of program side effects reminds me of an idea I had a couple years ago: if you think of a program as an entity in state-space where one dimension is time, then "private" object members in OO-programming and immutable values in functional programming are actually manifestations of the same underlying concept. Both are ways to create fences in the state-space-time of a program. Private members create fences along a "space" axis and functional programming creates fences along the "time" axis.
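A toy Python illustration of those two "fences" (everything here is made up for the example): a name-mangled private member fences state off along the "space" axis, and a frozen value fences it off along the "time" axis.

    from dataclasses import dataclass

    class Counter:
        def __init__(self):
            self.__count = 0        # "space" fence: outside code can't reach in
        def bump(self):
            self.__count += 1
            return self.__count

    @dataclass(frozen=True)
    class Point:                    # "time" fence: the value can never change,
        x: float                    # so every observer at every moment sees
        y: float                    # the same thing

    c = Counter()
    c.bump()
    # c.__count                     # would raise AttributeError from outside the class
    p = Point(1.0, 2.0)
    # p.x = 3.0                     # would raise FrozenInstanceError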
And you get to use "relational" and "relativity" side by side in a discussion.
A lot of interesting things tend to happen when you introduce invariants, including "everything-is-a" invariants. Everything is a file, everything is an object, everything is a function, everything is a relation, etc.
I'm guessing safe meta-definition means type-safe meta-programming.
For example in Lisp, code is data and data is code (aka homoiconicity). This makes it very convenient to write macros (i.e. functions that accept and return executable code).
Unsafe meta-programming would be like the C pre-processor, whose aptness for abuse makes it a leading feature of IOCCC entries.
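Python isn't homoiconic, but a rough analogue of the "code is data" idea (and of why structured rewriting is safer than textual substitution) can be sketched with the standard ast module:

    import ast

    tree = ast.parse("result = 2 + 3")      # the program text becomes a data structure

    # Treat the code as data: walk the tree and rewrite additions into
    # multiplications -- a crude stand-in for a macro.
    class AddToMul(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if isinstance(node.op, ast.Add):
                node.op = ast.Mult()
            return node

    new_tree = ast.fix_missing_locations(AddToMul().visit(tree))
    namespace = {}
    exec(compile(new_tree, "<rewritten>", "exec"), namespace)
    print(namespace["result"])              # 6, not 5 -- the rewrite changed the meaning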
Me too. But if he doesn't answer it he may mean how languages don't have a well designed meta protocol. See the one they built for CLOS in that good book.
This reminded me of an interesting dream I had. I dreamt I created a nice language with a meta protocol. In working with the language and using this protocol I changed the language into a different language which gave me insights on changing that language -- all through meta protocols. I woke up having a distinct feeling of what it means to not be plodding around in a Turing tarpit.
Many mainstream programming tools seem to be moving backwards. For example, Saber-C of the 1980s allowed hot-editing without restarting processes, along with graphical data structures. Similarly, the ability to experiment with collections of code before assembling them into a function was an advance.
Do you hold much hope for our development environments helping us think?
You could "hot-edit" Lisp (1.85 at BBN) in the 60s (and there were other such systems). Smalltalk at Parc in the 70s used many of these ideas, and went even further.
Development environments should help programmers think (but what if most programmers don't want to think?)
Hot-editing updates behavior while keeping state, causing wildly unpredictable behavior given the way objects are constructed from classes in today's languages. The current approach to OO is to bootstrap fresh state from an external source every time the behavior changes so guarantees can be made about the interaction between behavior and state. It seems to me the equivalent of using a wheelchair because you might stumble while walking, the concern is genuine, but the cure is possibly worse than the affliction.
I don't know what the solution is. Perhaps a language with a fundamentally different view of objects, maybe as an ancestry of deltas of state/behavior pairings, somewhat like prototypes but inheriting by versioning and incrementally changing so that state and behavior always match up but still allowing you to revert to a working version. Likely Alan has some better ideas on what sort of language we need.
I use hot-editing in python by default and I find it incredibly useful (now I feel crippled when I'm on a system without it). There are times when I need to reload the state completely but it's pretty rare (changing something that uses metaclasses, like sqlalchemy, is one such place).
Maybe there's something about the style I've adopted that lends itself more to hot-editing but it's definitely a tool I'd hate to be without.
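For anyone curious what that looks like in practice, here is a self-contained sketch using the standard importlib.reload (the module name "hotmod" is arbitrary):

    import importlib, pathlib, sys

    pathlib.Path("hotmod.py").write_text("def greet():\n    return 'hello'\n")
    sys.path.insert(0, ".")
    import hotmod
    print(hotmod.greet())       # 'hello'

    # "Edit" the source while the process keeps running, then reload in place:
    pathlib.Path("hotmod.py").write_text("def greet():\n    return 'bonjour, edited live'\n")
    importlib.reload(hotmod)
    print(hotmod.greet())       # 'bonjour, edited live' -- no restart; other state in
                                # the running process is kept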
It's pretty poor quality listening but you should get the point. You can send me an email (see my profile) if you wanted to go through it in more detail.
Yes. I think they have been slowly getting better.
Visual Studio has let you do hot code editing for over a decade now, they call it "Edit and Continue"[0]. Only works for some languages (C#, Visual Basic/C++). It also lets you modify the program state while stopped on a break-point with code of your devising.
Most browsers also let you adhoc compose and run code without modifying the underlying programs.
Thanks to hardware performance counters, profilers are now able to profile code with much less impact on performance (eg: no more adjusting timeouts due to profiler overhead). Network debuggers are getting better at decoding traffic and displaying it in a more human readable format (eg: automatic gzip decompression, stream reassembly, etc).
I don't know in what context "hot editing" was used to start this thread, but what I read in it is the idea that you can change code while it's running. Edit and continue has a different feel to it, because it works by a different method, by literally patching memory that the suspended thread is going to execute. It has the convenience of stopping the execution of the program before the patch is done. What "hot editing" in, say, Smalltalk has been able to do is you can have a live program running, you can call up a class that the thread uses, change the code in a method, compile it, while the thread is still running, and instantly see the change take effect. The reason it can do this is that method dispatch is late-bound. In .Net it's bound early. Late binding allows much more of a sense of experimentation. You don't have to stop anything. You just change it like you're changing a setting in an app., and you can see the change instantly. This gives you the feel that programming is much more fluid than the typical "stop, edit, compile, debug" cycle.
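Since Python's method dispatch is also late-bound, the Smalltalk-style experience can be imitated directly: replace a method on a live class and every existing instance picks up the new behavior on its next call (a toy example with invented names):

    class Greeter:
        def greet(self):
            return "hello"

    g = Greeter()
    print(g.greet())            # "hello"

    def greet_v2(self):
        return "bonjour"

    Greeter.greet = greet_v2    # "recompile the method" while g is still alive
    print(g.greet())            # "bonjour" -- the existing instance sees the change at once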
Kind of, but it was clunkier. You could Break out of an executing program, edit the code you wanted, and then type CONT to continue execution from the break point. The state from that point forward might not be what you want, though. At least inside VS it tries to revert state so that the revision executes as if the state came into it "clean."
I think all languages today annoy me -- just put me down as a grump. They seem to be at a very weak level of discourse for the 21st century. (But a few are fun when looked at from the perspectives of the past e.g. Erlang ...)
I would hope not. But, based on your work and other comments here, I think we'd agree that our collective sights have been set lower for no good reason. If the goals of the past cannot be fully realized, what hope can we have for those of today and tomorrow?
Concretely, I follow existing languages like Agda, Lean, Haskell, and Rust for pushing the envelope on language semantics, compiler ingenuity, and library abstractions; and http://unisonweb.org/ and http://www.lamdu.org/ for pushing the envelope on the programming workflow itself. While I don't believe editors and languages are orthogonal problems, I do believe there is enough independence to make pursuing these fronts separately in parallel worthwhile.
[Of all of those, http://unisonweb.org/ might especially fit your interests, if I understand them correctly.]
I tried to skim-read through the Unison About page, but all I saw was an under-designed variation of Jetbrains MPS for a single language. I assume you have spent longer with the project - do you care to summarize the differences?
We met at a retreat last fall, and it was a real treat for me to hear some fantastic stories/anecdotes about the last 50 years of computing (which I have only been directly involved with for about 1/10th of). Another one of my computing heroes is Seymour Cray, whom we talked about a bit, along with your time at Chippewa Falls. While a lot of HN'ers know about you talking about the Burroughs B5000, I (and I bet most others) would have had no idea that you got to work with Seymour on the CDC 6600. Do you have any particular Seymour Cray/6600 stories that you think would be of interest to the crowd?
Thanks again for doing this, and I hope to be able to talk again soon!
Seymour Cray was a man of few words. I was there for three weeks before I realized he was not the janitor.
The "Chippewa OS" is too big a story for here, but it turned out that the official Control Data software team failed to come up with any software for the 6600! Hence a bunch of us from Livermore, Los Alamos, NCAR, etc. -- the places that had bought the machine -- were assembled in Chippewa Falls to "do something".
Perhaps the most interesting piece of unofficial software was a multitasking OS with graphical debugger for the 6600 that had been written by Seymour Cray -- to help debug the machine -- in octal absolute! I had the honor of writing a de-assembler for this system so we ordinary mortals could make changes and add to it (this was an amazing tour de force given the parallel architecture and multiple processes for this machine).
And it was also a good object lesson for what Cray was really good at, and what he was not so good at (there were some really great decisions on this machine, and some really poor ones -- both sets quite extreme)
1. What do you wish someone would ask you so that you could finally share your thoughts, but nobody has broached as a subject?
2. (This question is about interactive coding, as a dialogue).
Human dialogs (conversations) are interactive. I think in the past computers were limited, and computer languages had to be very small (as compared with any human language + culture) so that a programmer could learn what the computer could do. But now that services can be connected (programming as a service?), would it make sense to have a dialogue? My example is that in the 1980s it wouldn't have made sense for any programming language to have a function called double() that just multiplies by 2. There's * 2 for that.
But in 2016, it makes sense for a beginner to write "and double it" and considerably less sense for a beginner to have to learn x *= 2 if they wanted to double a number.
Human language is also ambiguous. It would make sense for an interactive language to ask:
"Did you mean, set x equal to x multiplied by 2?" which most people would select, but maybe someone would select
"Did you mean, set x equal to the string "x" appended to the string "x"?"
For these reasons: do you think it would make sense to have an interactive programming language that is connected with a server you "talk" with interactively?
Or should programmers still have to learn a fixed programming language that has no room for interpretation, but instead a strict meaning?
Among other things, this means programmers can never write "it", "that", "which" to refer to a previous thing (since the referent could be ambiguous if the compiler doesn't confirm.) But every human language includes such shorthand.
I'd love to hear your thoughts regarding a connected, interactive programming process similar to the above (or just on whatever lines).
1. I actually don't think this way -- my actual interior is a kind of "hair-ball" and so questions are very helpful.
2. Let's do a dialog about this. Note that taking the process far enough starts resembling prayer to a deity and gives rise to many kinds of hedging. Math is completely expressible in ordinary language, but instead, the attempt to make it less ambiguous leads to conventions that have to be learned.
That said, have you thought about (say) objects needing to negotiate meaning with each other?
Okay! I'm up for a dialog with Alan Kay if Alan Kay is up for a dialog with me :)
>gives rise to many kinds of hedging.
100% agreed. I have a great example of hedging. Human languages are usually quite ambiguous. But there's an exception: when there is a legal document, then if it gets to court the other party can take any ambiguity and turn it around. (For example argue that an "it" refers to other than the closest possible referent.) As a result, legal documents are very unambiguous. This makes them very, very explicit. Implicit contexts, and human culture, are difficult - they get lost with time. If you've ever read Shakespeare's dialogues, they're hard to understand without heavy glossing just 400 years later. But we actually have a copy of Shakespeare's will. Here it is: http://www.cummingsstudyguides.net/xWill.html
Without a single gloss or footnote and without modernizing the spelling (as we do for plays) you can understand nearly 100.00% of it 400 years later. Without even modernized spelling, you could likely turn it into "code" (it's very similar to code) with complete unambiguity.
But the actual "conversation" that led to that, in the room where Shakespeare was talking with the lawyer, if we had a transcript, would be likely impenetrable to us without careful reading and maybe footnotes -- just like the dialogue of Shakespeare's plays. Imagine calling up your lawyer and leaving a voicemail describing what you want in some document. That might lead to a brief dialog and then a draft for your approval. That dialog would be hard to understand, possibly even for an outsider today.
So the question is: is there room for an 'agent' (service) that builds up a shared context with someone interacting with it, then produces something for outside consumption? That might mean that the user can say "it" or "unless", but the service turns "it" into a referent (and makes sure it got the right one) and turns "unless" into "; if ( not ) { }" for outsiders. I say this because this is only one interpretation of "unless". Another interpretation is that an exception might happen that you want to address and then not continue...
>Math is completely expressible in ordinary language, but instead, the attempt to make it less ambiguous leads to conventions that have to be learned.
This is very true and extremely interesting. When people rearrange an equation in symbolic form (crossing out common factors, etc), they are doing complex symbol processing that isn't linguistic in nature. I think it's a different tangent from the one I'm asking about -- after all, would it be common for anyone to write "The limit of the function f of x, as x tends to 0" instead of the common lim notation?
So the line of thinking with symbolism is extremely powerful, and after all isn't that why we have whiteboards that don't have neat lines on them for you to write sentences into? Diagrams / symbols / pictures are all very powerful and aspects of thinking. This part is tangential to what I was thinking of.
It would be interesting, though, if the interactive process could produce a diagram for you to rearrange if you wanted. I don't know if you know electrical engineering (probably!) but imagine being able to ask an interactive service "I'd like a simple circuit that lights an LED from a battery", and you get one -- as well as some questions about whether you really didn't need any fuses in it? What sized batteries you were talking about? That it's a DC circuit, right? And so forth. It's a separate question whether you could rearrange the results.
Of course, if you are allowed to say something like "I'd like a circuit around an ARM processor where all the components cost under $30 in quantities under 1000, including PCB setup costs" -- that's like praying to a deity!
Is there room for some level of interaction between "double x" and praying?
Wolfram Alpha certainly suggests there is. Although it doesn't ask you anything back / isn't interactive, I've certainly been shocked at some of the things it was able to interpret.
For example, I could ask it how far sound travels in 10 ms, so that I could judge how large a perceived physical offset effect introducing 10 ms of latency would cause. Well, I just tried it again so I could link to you, and it didn't get "how far does sound travel in 10 ms", it didn't get "distance sound travels in 10 ms", but on my third try, "speed of sound in air * 10 ms", it got me the answer --
its interpretation was:[1]
>speed of sound in dry air at 20 °C and 1 atmosphere pressure×10 ms (milliseconds)
and it gave me 3.432 meters.
What is interesting is that there was nothing interactive in this process, just me guessing until it got what I meant. It didn't ask me anything back. For me, the three phrases I just quoted are equal. For Wolfram Alpha, it misinterpreted the first two quite badly, and got the third one easily.
So the question is - could such a process be applied to programming? Could the user try to write "lowest common factor of a and b" have the compiler completely miss, try "least common factor of a and b", have the compiler completely miss, try "least common multiple of a and b" and finally have the compiler get it? Because that's not how programming works today. At all. (Well, in actual practice it kind of is thanks to Google - but it's not what goes on in the IDE.)
So it would be interesting to know if some progress could be made along these lines.
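As a toy sketch of that guess-and-confirm loop (the phrase table and fuzzy matching below are just stand-ins for illustration, not a proposal):

    import difflib, math

    PHRASES = {
        "least common multiple of a and b": lambda a, b: abs(a * b) // math.gcd(a, b),
        "greatest common divisor of a and b": math.gcd,
    }

    def interpret(request):
        guesses = difflib.get_close_matches(request, PHRASES.keys(), n=1, cutoff=0.5)
        if not guesses:
            return "I don't understand. Could you rephrase?"
        # A real system would ask "Did you mean ...?" here and wait for confirmation
        # before binding the phrase to an implementation.
        return guesses[0], PHRASES[guesses[0]]

    print(interpret("lowest common factor of a and b"))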
>have you thought about (say) objects needing to negotiate meaning with each other?
No, it's a tough one. For simplicity, I thought of the current output being boilerplate code (like existing C/C++ code), so that the headache you just mentioned doesn't need to be thought about :-D
What you ask here is very interesting to me, because some time back I got to thinking about what Alan Kay talked about with trying to negotiate meaning, in reference to using Licklider's concept of "communicating with aliens." Kay (IIRC) used a simpler concept of trying to order a burrito from a vendor, where neither the vendor nor the customer understands the other's language. He said by using gestures, you can get the idea across, because you can reference common concepts in each other's heads, and finally come to agreement about ordering a burrito.
So, I started on a "sketch" for a language for a fairly simple task (converting between encoding schemes) where the code was suggestive, rather than prescriptive, and it just used a declarative style. One example is I just said, "a5 b6 10..." (including the double-quotes, and the trailing "...") to try to suggest a string containing a sequence of hex numbers of indeterminate length. I was saying, "Expect this type of information."
As I did this, and thought about what else the language would need to express, I came to the exact same question you did: "There could be a variety of interpretations of this. Maybe the dev. environment should ask the programmer questions, 'Did you mean...?'" The programmer could respond in some way meaning "Yes" or "No." If no, the dev. environment could ask about other possible meanings it knows about, based on the syntax used.
I could see this getting complicated very quickly. What I hoped for was that the language would give a programmer the freedom to try expressions that made sense to them, in hopes that the language environment with which the programmer communicated would understand them, and that the language environment would try as "hard" as it could to understand them. However, not being skilled in AI, I wondered whether the forms of expression I was thinking about were constrained enough in meaning that I could completely code a scheme whereby, looking at syntax "signals" in the code, it could reason through the expressions to make good guesses about their meaning, and then test them by formulating intelligible questions about them to confirm what the programmer was getting at.
Going through that exercise told me I had a lot to ponder, learn, and explore with this idea, because I didn't come up with any ready answers that felt satisfactory. The best I could come up with was that I would code the language to allow a few possibilities with the syntax "signals," in combination with other "signals" that I could anticipate. But if an expression went beyond the coded meanings, I'd have to have the language say, "I don't understand what you're talking about. Please try expressing it differently." That would lead to a kind of guessing game between the programmer and the language, with the programmer quickly getting the idea, "There have got to be some rules to this system. Why don't I just look at its programming, and figure out how I use its rules to express what I want," which would defeat what I was going after. The whole thing would be an artifice, rather like an Eliza simulation.
Hey Alan, you once said that lisp is the greatest single programming language ever designed. Recently, with all the emergence of statically typed languages like Haskell and Scala, has that changed? Why do you think after being around for so long, lisp isn't as popular as mainstream languages like Java, C or Python? And lastly, what are your thoughts on MIT's switch to use Python instead of Scheme to teach their undergraduate CS program?
I should clarify this. I didn't exactly mean as a language to program in, but as (a) a "building material" and (b) especially as an "artifact to think with". Once you grok it, most issues in programming languages (including today) are much more thinkable (and criticizable).
The second question requires too long an answer for this forum.
The one thing that makes LISP great is the "functions are data" formulation.
Many of the "current best paradigms" of Computer Science are actually fads. LISP was a fad of the 1980s. Java was a fad of the 1990s. NoSQL databases are a current fad. It doesn't mean that one is better than another. It's human nature to think that new technology must be better than old technology. In fact, the programming environment on the LISP machines of the 1980s was far better than anything we had till the early 2000s, despite being "old".
Do you still see an advantage of using Smalltalk (like Squeak/Pharo) as a general purpose language/tool to build software or do you think that most of its original ideas were somehow "taken" by other alternatives?
Smalltalk in the 70s was "just a great thing" for its time. The pragmatic fact of also wanting it to run in real time, fast enough for dynamic media and interactions, and to have it fit within the 64KByte (maybe a smidge more) Alto, rendered it not nearly as scalable into the future in many dimensions as the original ideas intended.
We have to think about why this language is even worth mentioning today (partly I think by comparison ...)
Mmm yeah. The exploration, instant feedback and minimalist syntax are features that I wish more people would value.
I "secretly" think that Self would have achieved that too (and even better because is not constrained to the artificial abstraction of classes) but it never had a chance due to its unsuccessful IDE.
Our cognitive system converges too much to objectify things to ignore. We compulsively do that. There is something about objects that fits our cognitive system better. It's a waste if we don't take full advantage of it.
LISP machines, Solo, Edison, Oberon... quite a few systems and languages had that capability if the users and/or developers so desired. In the write-up he gave me, Kay seemed to suggest it had a unique combination of OOP support, conceptual brevity, and especially the late binding. I mean, good performance and stuff too. Those other things were considered key advantages over other systems I named.
Maybe it's also easier to match to hardware than LISP or FP DSLs along the lines of Haskell. I'm a little out of my depth there, though. I just remember the LISP machine and OS crowd having to innovate hard to fight performance issues, PreScheme & T being exceptions where low-level was easy.
Have you spent any time studying machine learning and how it might affect the fundamental ways we program computers? Any thoughts on how the tooling of machine learning (TensorFlow, ad hoc processes, etc) could be improved?
In a recent talk, Ivan Sutherland spoke along the lines of, "Imagine that the hardware we used today had time as a first-class concept. What would computing be like?" [1]
To expand on Sutherland's point: Today's hardware does not concern itself with reflecting the realities of programming. The Commodore Amiga, which had a blitter chip that enabled high-speed bitmap writes with straightforward software implementation, brought about a whole new level in game programming. Lisp machines, running Lisp in silicon, famously enabled an incredibly powerful production environment. Evidence is mounting that the fundamental concepts we need for a new computing have to be ingrained in silicon, and programmers, saved from the useless toil of reimplementing the essentials, should be comfortable working in the (much “higher” and simpler) hardware level. Today, instead of striving for better infrastructure of this sort, we are toiling away at building bits of the perpetually rotting superstructure in slightly better ways.
The more radical voices in computer architecture and language design keep asserting in their various ways that a paradigm shift in how we do infrastructure will have to involve starting over with computing as we know it. Do you agree? Is it impossible to have time as a first-class concept in computing with anything short of a whole new system of computing, complete with a fundamentally new hardware design, programming environment and supporting pedagogy? Or can we get there by piling up better abstractions on top of the von Neumann baggage?
[1] This is from memory. Apologies for a possible misquotation, and corrections most welcome.
What is HARC currently working on? Is it for now a continuation of the old CDG Labs / VPRI projects or are there already new projects planned / underway?
Also, how do you organize and record your ideas? Pen and paper? Some kind of software? What system do you use? I ask because I'm fascinated by the idea of software that aids in thought, collaboration, and programming - meshing them all together.
I've seen elsewhere (https://news.ycombinator.com/item?id=11940007) that you agreed that many mainstream "paradigms" should be "retired". Retiring implies that you replace. In particular, I'm curious what you would like to see filesystems or the Unix terminal replaced with?
We know you are not a big fan of web. Regardless how we got here, what is your view on how we should address the real world decentralization problems in the context of http://www.decentralizedweb.net/ ?
One of several major mistakes with the web has to do with thinking that the browser is some kind of application "with features" -- if you think of the actual scale of the Internet (and they didn't) you realize that at the very least, the browser, etc, has to be more like an operating system, and with as few features as possible: really to safely run encapsulated modules and deal out resources. It is crazy that after more than 20 years of the web that this CS101 principle still can't be done ... and it runs on machines that can do it....
In your point of view, on top of the current hardware infrastructure, how do we build a decentralized network operating system (contrary to the old centralized timesharing system)?
Take a look at the Internet itself -- and then take a look at Dave Reed's 1978 PhD thesis at MIT (can be found via the CSAIL website) -- we used many of these ideas in the Croquet project
Do you see further research paths for metacompilers [1] to reduce code and enable customizable user interfaces?
With containers and hypervisors now in desktop OSes (Windows, Mac OS, Linux), could an open-source research OS (e.g. KSWorld) be packaged for developers and end-users who want to test your team's experimental UIs?
Is there long-term value in "private machine learning" where some data and algos are focused on user/owner interests, with "public machine learning" providing variably-trusted signals to user-owned algos for intelligence augmentation?
Hi Alan, do you still do coding (any kind of, for any purpose) these days? If you do, what's your comfortable setup (say, language, editor, tools and etc)?
I think one thing that scared him away was the word "relevant"... VIM might have as well (very primitive). Emacs tries to do some things that are worth doing, but it brings the user into "textland," not "systemland."
More "textland" than anything else, though that has to do with the medium we're using to communicate "across space and time." What Alan has advocated is that the medium we're using in this particular instance should be a version of "systemland."
It's difficult at this point to come up with an example, since we don't have it yet (that I know of), but to give you an idea, take a look at Lively Kernel https://www.lively-kernel.org/
There's a lot of economic pressure against building new systems. Making new hardware and software takes longer than building on the existing stuff. As time goes on, it gets harder and harder to match the features of the existing systems (imagine the effort involved in reimplementing a web browser from scratch in a new system, for example), not to mention the massive cost benefits of manufacturing hardware at a large scale.
Many people working in software realize the systems they use are broken, but the economics discourage people from trying to fix them. Is it possible to fix the economics? Or maybe we need more people able to resist this pressure?
One start to this is not to do "a web browser" -- or other software -- with such bad initial conceptions (see elsewhere above). There was nothing necessary about this.
We have a technology in which hacks are easy and can be really easily multiplied by the billions and sent everywhere. If we translate this into the world of medicine and disease and sanitation, what should we do? How far should we go?
(But it is still really hard for me to understand the nature and force of the resistance to "real objects" -- which -- as actual virtual encapsulated machines -- were invented to deal with scaling and replacement, to parallel the same kinds of thinking we did for the Internet with physical objects (called "computers").)
I guess I'm thinking about this from the perspective of someone trying to make a computer for the general public. How will you convince anyone to buy and use a computer that can't browse the web?
(And I doubt this is a problem that will go away with time; the web is big enough that it seems unlikely to go anywhere anytime soon.)
To provide a little hope, I think it should be pointed out that the Frank project (VPRI) achieved web browsing that is compatible with the existing web protocol, using real objects. What it allowed is an extension of browsing on the web. So, it's not as if the two (good architecture and bad architecture) can't both exist in the same space, and be used at the same time. I think the key to answering your question is asking will people in the general public find the impetus to understand the limited nature of bad architecture, and (on one possible track) either use the good architecture to make the experience better, or (on another possible track) come up with a better computing/content architecture that does away with the web as we know it altogether?
Several companies making game consoles have done it repeatedly. Phone companies do it. People were actually buying dedicated word processors for a while despite the existence of MS Word and the Internet. There are devices that only let you read books. There was one popular computer that could only play about 5GB of music.
See the pattern connecting them all? That it will be a niche market to begin with doesn't mean there's no market or that it's not worthwhile.
Here's one for you, given the success of gaming and entertainment products: an all-in-one computer combining rapid iteration, memory safety, efficiency, and HW acceleration for common things (esp. graphics); a Python or BASIC (e.g. DarkBASIC) designed for gaming, with libraries for common features; a port of a game-creator program plus examples and artwork to draw on; tutorials a la Realm of Racket or Land of LISP that teach you the language by building successive game modules of increasing complexity; and the ability to live-patch and debug the games a la LISP, with failure isolation so there's no lost work or long waits between runs.
Think people would buy it? Especially people new to programming who would find C++, Java, web stacks, and so on daunting with low-reward steps in the learning process? Could such a HW/SW combination be a 180 for them in motivation and learning?
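To make that last live-patching point concrete, here is a minimal sketch in Python. It assumes a hypothetical game_logic.py sitting next to it that defines update(state, dt); it only shows the shape of the idea (state survives reloads, a broken edit doesn't crash the loop), not a real game runtime.

    # Minimal sketch of "live patch without losing work": the game state lives
    # outside the reloadable module, and a broken update is isolated instead of
    # killing the loop. Assumes a hypothetical game_logic.py defining update(state, dt).
    import importlib
    import time
    import traceback

    import game_logic  # hypothetical module being edited while the game runs

    state = {"x": 0.0, "score": 0}   # persists across reloads

    while True:
        try:
            importlib.reload(game_logic)         # pick up edits made since last frame
            game_logic.update(state, dt=1 / 30)  # logic may change; state persists
        except Exception:
            traceback.print_exc()                # failure isolation: report and keep running
        time.sleep(1 / 30)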
Besides objects, one truly revolutionary idea in Smalltalk is the uniformity of its meta facilities -- an object knowing about itself and being able to tell you.
I see so many dev resources burnt just because people build boring UIs or persistence bindings by wiring them up MANUALLY in traditional languages. All of this is a no-brainer when enough meta info (objects and relations) is available and a program is reflected as data, as in Smalltalk (not dead text). You can transform not only your data but also your code. Pharo is now taking some additional steps to enhance reflection (metalinks, slots, etc.).
What do you see as the next steps in using metadata for (meta)programming ...?
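As a tiny illustration of the kind of "no-brainer" meant above, here is a sketch in Python rather than Smalltalk/Pharo: given one described model, both a trivial text "form" and a persistence mapping fall out of the same declaration instead of being wired by hand. The Person model and SQL mapping are invented for the example.

    # Sketch of deriving boring plumbing from metadata: one declaration, and both
    # a "UI" and a DDL statement are generated by reflecting over it.
    from dataclasses import dataclass, fields

    @dataclass
    class Person:                     # made-up model
        name: str = ""
        age: int = 0
        email: str = ""

    SQL_TYPES = {str: "TEXT", int: "INTEGER", float: "REAL"}

    def render_form(obj):
        # A trivial "UI": one labeled line per described field.
        return "\n".join(f"{f.name:>8}: [{getattr(obj, f.name)}]" for f in fields(obj))

    def create_table_ddl(cls):
        cols = ", ".join(f"{f.name} {SQL_TYPES[f.type]}" for f in fields(cls))
        return f"CREATE TABLE {cls.__name__.lower()} ({cols});"

    print(render_form(Person(name="Ada", age=36)))
    print(create_table_ddl(Person))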
I think one of the biggest blindnesses of our field is scaling (and I'm not sure quite why). This leads to an enormous amount of effort at roughly the same scales of some good ideas decades ago. (I think my background in molecular biology -- which continues to a very small extent -- helped this a lot. At some point one has to grapple with what is more or less going on and why it more or less works so amazingly well.)
What would a programming language be like if we actually took the many dimensions of scaling seriously ... ?
Would you consider the actor programming paradigm to be a good scalable model? It largely matches what I observe in both nature and where we seem to be headed with software engineering (containers in the cloud, near-trivial redundancy, stability, and scalability when properly designed). When I consider society I see a complex network of distributed actors, and when I consider my mind/brain I see the same. At this point in my philosophical development I am definitely resonating with the actor model, but I'm sure you are more familiar with this paradigm than I am - if not actors, where would you recommend searching?
Suppose we have a good model of atoms -- how much of this will be a good way to think about living systems? (Or should we come up with better and more useful architectural ideas that are a better fit to the scales we are trying to deal with -- hint: in biology, they are not like atomic physics or even much of chemistry ...)
A point here is that trying to make atomic physics better will only help a little if at all. So trying to make early languages "much better" likely misses most if not all of the real progress that is needed on an Internet or larger scale.
(I personally think that much enterprise software even today needs architectural ideas and languages that are different in kind from the languages of the 60s and 70s (meaning most of the languages used today).)
(preface: Riffing wildly here -- and may have gone in a different direction than your analogy's original intent --)
So, regardless of whether or not actors are a good pattern, what we need is scale-free patterns?
I can see how getting hung up on actors as a programming language feature would impede that.
How can we make the jump to scale-free, though?
- With actors, historically, we seem to have gravitated to talking about them in terms of a programming language feature or design problem -- while in some sense it implies "message passing", we usually implement the concept at scales of small bits of an in-memory process.
- With processes in the unixish family, we've made another domain with boundaries, but the granularity and kind of communication that are well-standardized at the edges of process aren't anywhere near what we expect from the languages we use to craft the interior of processes. And processes don't really compose, sadly.
- With linux cgroups, things finally go in a tree. Sorta. (It's rough trying to stack them in a way where someone arbitrarily deep in the tree can't decide to take an axe directly to the trunk and topple the whole thing). Like processes, we're still handling granularity of failure domains here (better than nothing), but not defining any meaningful or scalable shepherding of communication. And we still haven't left the machine.
I'm sold that we need some sort of architectural ideas that transcend these minutiae and are meaningful at the scale of the-internet-or-larger. But what patterns are actually scalable in terms of getting many systems to consensually interoperate on them?
I'm twitchy about trying to define One True Pure Form of message passing, or even intent passing, which seems to be a dreamier name that still converges at the same limits when implemented.
But I dream that there's a few true forms of concurrent coordination pattern that really simplify distributed and asynchronous systems, and perhaps are scale-free. Perhaps we haven't hit them yet. Words like "actor" and "agent" (divorced of e.g. programming language library) sometimes seem close -- are there other concepts you think are helpful here?
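For reference, here is a minimal sketch of the in-process "actor" scale mentioned above: an actor as nothing more than a mailbox plus a loop that handles one message at a time. It is written in Python, the Counter behaviour is made up, and it deliberately says nothing about the hard distributed/scale-free questions.

    # Minimal in-process "actor": a mailbox and a one-message-at-a-time loop.
    import queue
    import threading
    import time

    class Actor:
        def __init__(self):
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)                    # asynchronous: the sender never waits

        def _run(self):
            while True:
                self.receive(self.mailbox.get())     # one message at a time, no shared state

        def receive(self, msg):
            raise NotImplementedError

    class Counter(Actor):
        def __init__(self):
            self.count = 0                           # set state before the loop starts
            super().__init__()

        def receive(self, msg):
            if msg == "inc":
                self.count += 1

    c = Counter()
    for _ in range(1000):
        c.send("inc")
    time.sleep(0.5)                                  # crude: let the mailbox drain
    print(c.count)                                   # -> 1000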
One of many problems with trying to use Unix as "modules" and "objects" is that they have things that aren't objects (like strings, etc.) and this makes it difficult to arrange various scales and extensions of use.
It's not so much "scale-free" but this idea I mentioned elsewhere of "find the most difficult thing you have to do really nicely" and then see how it scales down (scaling up nicely is rarely even barely possible). This was what worked with Smalltalk -- I came up with about 20 examples that had to be "nice", and some of them were "large" (for their day). We -- especially Dan Ingalls and Ted Kaehler -- were able to find ways to make the bigger more general things small and efficient enough to work uniformly over all the scales we had to deal with.
In other parts of this AMA I've mentioned some of the problems when extended to the whole world (but go for "galactic" to help thinking!)
Almost nothing in today's languages or OSs is in the current state of "biology".
However, several starts could be to relax from programming by message sending (a tough prospect in the large) to programming by message receiving, and in particular to program by intent/meaning negotiation.
And so forth.
Linda was a great idea of the 80s, what is the similar idea scaled for 40 years later? (It won't look like Linda, so don't start your thinking from there ...)
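For readers who haven't met Linda, here is a toy, single-process sketch of its coordination idea in Python: processes interact only through a shared bag of tuples, matched by pattern (None is the wildcard here). This is just to anchor the 1980s idea being referred to, not the scaled-up version being asked for.

    # Toy tuple space: out() adds a tuple, take() removes a matching one (blocking).
    import threading

    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):                       # add a tuple to the space
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def _match(self, pattern, tup):
            return len(pattern) == len(tup) and all(
                p is None or p == t for p, t in zip(pattern, tup))

        def take(self, pattern):                  # Linda's "in": remove a matching tuple
            with self._cond:
                while True:
                    for tup in self._tuples:
                        if self._match(pattern, tup):
                            self._tuples.remove(tup)
                            return tup
                    self._cond.wait()

    space = TupleSpace()
    space.out(("task", 42))
    print(space.take(("task", None)))             # -> ('task', 42)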
Q: How do you think we can improve todays world (not just with technology)? What do you think is our species way forward? How as a civilization can we 'get to higher level'? Specifically, I'm interested in your views on ending poverty, suffering, not destroying the Earth, improving our political and social systems, improving education etc. I understand that these are very broad topics without definitive answers but I'd love to hear some of your thought about these.
Thank you and I just want to mention that I appreciate your work.
"What Fools these Mortals be!" Puck meant that we are easy to fool. In fact we like to be fooled -- we pay lots of money to be fooled!
One way to look at this is that the most important learning anyone can do is to understand "Human beings as if from Mars" -- meaning to get beyond our fooling ourselves and to start trying to deal with what is dangerous and counterproductive in our genetic (and hence cultural) makeups. This is quite different than what most schools think they are supposed to be about -- but the great Jerome Bruner in the 60s came up with a terrific curriculum for 5th graders that was an excellent start for "real anthropology" in K-5.
I was casting around recently looking for something to post to HN about it, but there's surprisingly little on the web. (I haven't yet watched the National Film Board documentary on it, which was the only substantive source I could find.)
I've been constantly surprised about how what I called "object-oriented" and "system-oriented" got neutered into Abstract Data Types, etc. (I think because people wanted to retain the old ways of programming with procedures, assignment statements, and data structures). These don't scale well, but enormous amounts of effort have been expended to retain the old paradigms ...
I think another part of it was that people wanted to go on using systems the way they had been, with the operating system as a base layer on which things happen, not as a foundation for building a different kind of system. To create code, you put it in files, files go in directories, you compile it into a static executable, which becomes temporarily dynamic when executed, and then goes back to being static when it exits, because it was based on a static image. The OS makes it difficult for the executable to update the state of its own image, assuming that trying to do so is a mistake. You run it on top of the OS, not in the OS, not as part of it.

It's the recapitulation of the application metaphor, where the app only exists while someone is using it, and then all its state goes away when they're done, unless a small piece of it is serialized (flattened into raw bytes, with no meta-code) for later retrieval. With that kind of setup, the thinking is that there's no need for inter-entity relationships that become part of a larger whole. Eventually people wanted application systems with those capabilities anyway, and so more cruft got added onto the pile, trying to create versioned late binding in the midst of a system designed for abstract data types.
When I talk to people about OOP, I explicitly try to distinguish it from ADTs and systems like I've described above, though I'm at a bit of a loss to come up with a phrase for what to call languages that people commonly call "OO." I've sometimes called them procedural languages with an extra layer of scoping, or procedural languages that work with ADTs, but that's a mouthful.
Looking at your 1972 Dynabook paper [1], would you make any changes to the Dynabook vision now? Also, what do you see as the biggest missing pieces (software or hardware) today? What current software gets closest to the vision?
I remember a few weeks back, you said that you wanted to take a closer look at the Urbit project (www.urbit.org). Just wondering if you had gotten the chance to do so, and, if so, what your thoughts were.
I find the whole project (politics aside, though i wish this caveat was unnecessary) stunningly beautiful, especially the focus on verbalization (#spokenDSL), and having Alan Kay be interested in it is flattering even though I have no relation whatsoever :)
I could not find any books that you have personally recommended reading at https://news.ycombinator.com/item?id=11803165. It might be because there would be too many of them. I would be specifically interested in computers category.
As a community, we often think on quite short time-scales ("What can we build right now, for users right now, to make money asap?"). I feel like you've always been good at stepping back, and taking a longer view.
So what should a designer or a developer be doing now, to make things better in 10 years, or 100 years?
Not sure how seriously to treat that question, but...
It seems like a cheap answer, but it needs more people, with more perspectives, trying out various answers to this question.
I suspect the greatest long-term leverage comes from providing people relief from the cognitive load of fighting for food, shelter, health, transportation, and education.
Those are heavy goals that lots of people have spent lots of time shifting only very slowly. The little-Alan-Kay-in-my-mind replies, "We need better thinking-tools in order to do it faster", which I can half-believe, but beyond that, I get stuck. I don't see how to work backwards from that to a first step that can actually be taken.
Immediate, mutual, lucid comprehension of the shared challenges facing humanity (and our fellow beings) that lie beyond the traditional domain of both nations and weak, present-era international political bodies, so as to avoid further needless destruction and suffering. Unfortunately, so many people live on the treadmill of money that the motivational deficit perhaps must be addressed prior to the educational.
Better tools, where "better" is the most difficult to define, but simpler is a good direction.
I see equality in computing/storage/access as another great need, but teaching the world to write javascript isn't the solution, and "AI"s which do your shopping aren't either. I'm not sure where to place or how to phrase my imagined future, but the path there isn't a slow ascend (or decline, think "phoenix" - we're slowly approaching a locked in world). Maybe what we need is inspiration, a grand vision of the new world.
We also need to look up from our phones and teach our young to understand, see and act upon the world in greater time scales, unlearn our short-sighted perspectives and teach the next generation to care for their future generations, something we're failing to do.
Edit append: We also need culture (music, arts, etc.) to help us think.
I recall reading an article about 10 years ago describing a PARC research project in which networked computers with antennae were placed throughout a set of rooms, and the subject carried a small transmitter with them from room to room. As the computer in each room detected the transmitter, it triggered actions in each room. I think it was called "ambient computing."
Does this ring a bell for you? I have searched for this article recently and not been able to find it again.
Yes, this idea was originally Nicholas Negroponte's in the 70s. The Parc version was called "Ubiquitous Computing" and was led by Mark Weiser in the 80s ...
Thanks! This research was my first thought when I saw big companies starting to come out with things like smart watches, hyper-local beacons, and home automation products.
But strangely, it seems like companies are still treating those all as separate lines. For example, Apple markets their home and watch projects totally independently of one another. But I have to think that they are working toward ubiquitous computing, much as they worked toward the Dynabook over the decades.
The people working in R&D at these companies (i.e. Apple) are to varying extents aware of this history. Indeed there are different "brands" for a set of concepts that have been evolving since the 70s, IoT being the most popular term in public discourse as of the past few years, and "ubiquitous computing" having fallen almost completely out of favor except in academic circles.
Genevieve Bell and Paul Dourish wrote an interesting book a few years ago that argued that Nicholas Negroponte's and Mark Weiser's visions had indeed come to pass, as predicted over and over again, just by different names and under guises that we didn't recognize: https://www.amazon.com/Divining-Digital-Future-Mythology-Ubi.... The story continues!
This is the only version accessible to most people today, where the contextually driven actions are performed by your smartphone, not by the swarm of devices in your vicinity.
A few devices issuing orders, and a bunch of dumb devices taking orders.
You may have come across it already, but you might be interested in "Time Reborn" by Lee Smolin. His opinion is that mathematics is a fine tool but it has led physics to the "block universe" perspective: a reversible universe driven by pure immutable laws. He claims that time and spontaneous change are important.
It crossed over with some of my experience of programming. Recently he wrote "The Singular Universe and the Reality of Time" with Roberto Unger, but I haven't read that yet.
> They need a much better idea of time (such as approaches to McCarthy's fluents).
I am not sure what the time problem is for functional programming, but I reckon the Elm language/framework solves problems with time in a very elegant way with its flavour of FRP and Signals.
In Elm, you can play back your UI interactions in a debugger as they happened and watch the variables as they would have been!
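The mechanism behind that kind of replay is simple enough to sketch outside Elm: with a pure update function and a log of timestamped inputs, any earlier state can be recomputed by re-running the log up to that moment. A minimal Python sketch (the counter "app" and event names are made up, and this glosses over everything Elm actually does for you):

    # Pure update + recorded inputs = time travel.
    def update(state, event):
        return state + 1 if event == "click" else state

    event_log = []                      # (time, event) pairs, recorded as they happen

    def dispatch(t, event, state):
        event_log.append((t, event))
        return update(state, event)

    def state_at(t_query, initial=0):
        """Replay only the events up to t_query to reconstruct that moment."""
        state = initial
        for t, event in sorted(event_log):
            if t <= t_query:
                state = update(state, event)
        return state

    s = 0
    for t, ev in [(1, "click"), (2, "click"), (5, "click")]:
        s = dispatch(t, ev, s)
    print(state_at(3))                  # -> 2 : the state as it was at time 3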
Do you believe that the gap between consuming software and creating software will disappear at some point? That is, do you expect we will soon see some homoiconic software environment where the interface for using software is the same as the interface for creating it?
I feel like the current application paradigm cannot scale, and will only lead to further fragmentation. We all have 100+ different accounts, and 100+ different apps, none of which can interact with each other. Most people seem to think that AI will solve this, and make natural languages the main interface to AI, but I don't buy it. Speech seem so antiquated in comparison to what can be achieved through other senses (including sight and touch). How do you imagine humans will interact with future computer systems?
One of the most interesting processes -- especially in engineering -- is to make a model to find out what it is that you are trying to make. Sounds a little weird, but the model winds up being a great focuser of "attempts at intent".
Now let's contemplate just how bad most languages are at allowing model building and having optimizations being orthogonal rather than intertwined ...
One of the concepts I've heard you talk about before in interviews and the like is simulation. I think simulation is huge and we should be seeing products that cater towards it, but largely aren't.
Do you still think simulation is an important promise of the computer revolution, and are there any products you know of or ideas you have that are/would be a step in the right direction?
Yes, and I don't really. An interesting one that has been around for a few years -- that can be used by the general public, children, etc. -- is NetLogo (a takeoff from StarLogo). There is also a StarLogo Nova that is worth looking at.
But these are not really general simulation languages of the kind we need in 2016 and beyond ...
I need to read what you've said regarding simulations and pseudotime, but I've been wondering if this is along the same lines as an HDL like VHDL/Verilog/SystemVerilog.
I do chip implementations in those languages, which of course involves simulation and a time concept. I get the idea that is not the sort of simulation you mean, but I'm not sure. The other thought I had was simulation more in the SPICE sense, where you iterate toward a solution in steps.
Any chance you could briefly clarify, and/or toss out a resource that points in the right direction?
What would a simulation of an epidemic look like? Or a simulation of ants finding food via pheromone trails. Or simulation of dye diffusing in water? Or a simulation of an object being dropped?
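For the first of those, a tiny agent-based sketch in the NetLogo spirit shows how little machinery the core of such a simulation needs. All parameters and the contact model are invented for illustration; a real model would of course be more careful.

    # Toy SIR epidemic: each agent is Susceptible, Infected, or Recovered,
    # and the whole "rule" is a few lines applied to every agent each step.
    import random

    N, STEPS = 200, 100
    INFECT_PROB, RECOVER_PROB, CONTACTS = 0.05, 0.02, 3

    agents = ["S"] * (N - 1) + ["I"]            # one initial infection

    for step in range(STEPS):
        for i, state in enumerate(agents):
            if state == "I":
                for _ in range(CONTACTS):       # meet a few random agents
                    j = random.randrange(N)
                    if agents[j] == "S" and random.random() < INFECT_PROB:
                        agents[j] = "I"
                if random.random() < RECOVER_PROB:
                    agents[i] = "R"
        if step % 20 == 0:
            print(step, {s: agents.count(s) for s in "SIR"})

The ants, dye, and falling-object questions have the same shape: local rules on many simple agents, with the interesting behavior emerging from running them over time.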
How do you feel about the role of computing technology in society today ? is it still important or should be work on other domains (education, medicine, ecology, and the industrial tissue that is our interface to reality nowadays).
While I'm at it, just did a MOOC about Pharo (~ex squeak) and ST was indeed a very interesting take on OO (if I may say so ;). So thanks for you and your teammates work along the years (from ST to STEPS).
I watched an OOPSLA talk where you described how tapes were used when you were in the air force. The tape readers were simple because they stupidly followed instructions on the tapes themselves to access the data. You seemed to like this arrangement better than what we have with web browsers and html, where the browser is assumed to know everything about the format. One way to interpret this is that we should have something more minimal like a bytecode format for the web, in place of html.
So I'm interested on your take: Is WebAssembly a step in the right direction for the web? (Although it's not meant as a replacement for html, maybe it will displace it over time).
This is a perfect example of "systems blindness" in the face of real scaling.
What should a browser actually know in order to scale (and why didn't the browser folks realize this? (rhetorical question) -- this knowledge was already well around by the early 90s).
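One toy way to picture the alternative (not anyone's actual design): the host ships with only a tiny, fixed instruction set, and each payload carries both its data and the program that knows how to present that data, so the host never has to learn "formats." A Python sketch of the shape of the idea; real safety would of course require real isolation and resource control, which this does not attempt.

    # The host knows only this tiny stack machine -- no document formats.
    def run(program, data):
        stack, out = [], []
        for op, arg in program:
            if op == "load":   stack.append(data[arg])   # push a field of the payload's data
            elif op == "push": stack.append(arg)
            elif op == "concat":
                b, a = stack.pop(), stack.pop()
                stack.append(str(a) + str(b))
            elif op == "emit": out.append(stack.pop())
        return "\n".join(out)

    # A payload: data plus the program that interprets it (the "tape" idea).
    payload = {
        "data": {"title": "Hello", "body": "The medium carries its own reader."},
        "program": [
            ("push", "# "), ("load", "title"), ("concat", None), ("emit", None),
            ("load", "body"), ("emit", None),
        ],
    }
    print(run(payload["program"], payload["data"]))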
Learn a lot of other things, and at least one real science and one real engineering. This will help to calibrate the somewhat odd lore aka "computing knowledge". I would certainly urge a number of anthropology courses (and social psychology, etc), theater, and so forth. In the right school, I'd suggest "media theory" (of the "McLuhan", "Innis", "Postman" kind ...)
Whenever I've gone to an anime con, it seems like the older genre masters like Yoshiyuki Tomino (Gundam) were always urging the audience to get to know something besides anime/manga. Specifically to go out and get involved in something to create media about, so as to avoid producing something completely self-referential and navel-gazing. That also seems to apply to the medium of programming. (As in: Do we really need another To-Do app?)
It's also related to what Scott Adams urges. It's pretty hard to get to be in the top 10% of a single field. It's much easier to be in the top 25% of two different fields, which would make you one of the top 10% of that interdisciplinary combination.
> It's also related to what Scott Adams urges. It's pretty hard to get to be in the top 10% of a single field. It's much easier to be in the top 25% of two different fields, which would make you one of the top 10% of that interdisciplinary combination.
This is really great, I've been thinking about that for years. Which Scott Adams said this, and where is it from?
The best tech founders often seem to have this quality. They're by no means top 10% in any individual category, but they're strong in technology and in some specific non-tech-related domain. In the right setting, dividing your attention between tech and a non-tech-related domain can actually make you more specialized in a way, not less.
Just after college, I took my first airplane trip, destination California, in search of a job. I was seated next to a businessman who was probably in his early 60s. I suppose I looked like an odd duck with my serious demeanor, bad haircut and cheap suit, clearly out of my element. I asked what he did for a living, and he told me he was the CEO of a company that made screws. He offered me some career advice. He said that every time he got a new job, he immediately started looking for a better one. For him, job seeking was not something one did when necessary. It was a continuing process.
This makes perfect sense if you do the math. Chances are that the best job for you won't become available at precisely the time you declare yourself ready. Your best bet, he explained, was to always be looking for a better deal. The better deal has its own schedule. I believe the way he explained it is that your job is not your job; your job is to find a better job.
This was my first exposure to the idea that one should have a system instead of a goal. The system was to continually look for better options.
There's coming up with ideas: learn to dream while you are awake, the ideas are there.
There's coming up with a good idea: learn how to not get buried in your ideas (most are mediocre down to bad even for people who have "good idea skills"!)
I write down ideas in notebooks to get rid of them. Every once in a while one will capture a different point of view.
And, there's the Princeton Tea joke of scientists comparing what they did for ideas. One says "I have them in the middle of the night so I have a pad by my bed". Another says "I have them in the shower so I have a grease pencil to write them on the walls". Einstein was listening and they asked him about his ideas. He said "I don't know, I've only had two!"
(Some people are better at filtering than others ...)
Bob Barton got most of his great ideas in sleeping dreams.
Most of my ideas come in "waking dreams" (this is a state that most children indulge in readily, but it can be retained in a more or less useful way -- I don't think you quite get into adulthood by retaining it, so it's a tradeoff).
Main thing about ideas is that, however they come, most of them are mediocre down to bad -- so steps have to be taken to deal with this major problem.
Apparently aphantasia doesn't affect dreams (from research I've read): when sleeping I do have visual dreams, but awake there's nothing but black, even with eyes closed, no matter how hard I try to imagine a picture.
Not sure I understand what you're implying, I'm not blind, and only recently realized other people actually see pictures in their mind while awake. But dreaming while awake would be awesome if achievable.
I'm saying that you don't see what is on your retina, but something that is manifested by your brain (and this is why you can see images when you dream).
So I'm guessing that there is a path for you to imagine images. It could be very similar to how most people who think they are tone deaf can be helped to hear pitches quite accurately.
Ah, yes, I'm hoping this is true as well hence my interest in what you said about learning to dream while awake. If you don't have any specific recommendations I'll continue researching on my own.
My personal experience is it's not as much "seeing" as "feeling" whatever I "see". Like with open eyes staring at but not looking at something, instead mentally focusing on peripheral, where the peripheral is more of an idea than the impression of photons hitting your retina. I often almost feel like I'm drawing the diagrams or interfaces I might think of, with sweeping gestures and so on. I definitely think it's something that gets better with practice.
I mean, I CAN have visuals like that, as I don't have aphantasia. But I don't believe most of my ideas originate from the visual imagining. It usually starts with concepts, is connected through logic, and then along the way images may or may not be generated, depending on what I am ruminating about.
I'm not Alan, but I saw one of his talks[1] about how they did invent all those things we're using now, which you are very much referring to.
What he said was important was to have a big horizon, at least 10 years. Anything less would not do.
Then envision what should be possible in that time. "It would be ridiculous if $X wasn't possible in $Y years" and then backtrack on what would technically be required to make that happen.
Rinse, repeat. He said that with good company support and a good team, you often found yourself delivering in half that time nonetheless (but again, you should never constrain yourself to that little, or else you will stress about performing instead of thinking properly about the problems needing to be solved).
Alan: Hope I didn't butcher your message too much :)
10 years is too close -- 30 years out will generally lose the pernicious connection to the present and "how do we get there?". The idea is to "go out there and bring the ideas back" rather than try to go from the present to the future.
You seem very disappointed and upset with the way computing has gone in the last decade. Speaking as a younger (15) and more satisfied (I still think the UNIX abstraction is pretty solid, despite what others may say) programmer, how do you not get depressed about the way technology is going?
Also, what do you propose to eliminate the "re-inventing the flat tire" problem? Should every programmer be forced through a decade of learning all of the significant abstractions, ideas, and paradigms of the last 50 years before they write anything? Because I don't see another solution.
I do get depressed -- how could one not? -- the trick with depression is to not allow it to take you into inaction.
Re: Unix etc. try to imagine computer systems without "operating systems" as they are thought of today (hint: look at the Internet, etc.).
The basic heuristic here is to avoid the trap of "when you criticize something you are implicitly buying into its very existence!". First try to see if there is anything worth existing! (OSs are not necessary ...)
How long does it take to learn real science? And shouldn't computer science be a real science?
Another way to look at this is that anyone could be a doctor until recently (really recently!) because no one knew what was going on. And a lot of damage was done (and is still being done.) Once some real knowledge is obtained, we can't afford to have random practitioners dabbling into important things. (The idea that this might be OK is another pop culture delusion and desire ...)
...and I'm not sure I agree. The idea that code from randoms shouldn't be put into critical infrastructure is certainly true. But the idea that people shouldn't tinker, hack on code, and learn, even if they end up with non-optimal, possibly broken code, is a dangerous one, because that's how people learn. And yes, they should dabble in important things. How else do they learn them? It's then our job, as the Real World, to analyze our dependencies, and make sure we can trust them to be well written, and that they aren't just toys.
As for whether operating systems are necessary, by most common definitions (being, as I understand it, a set of software that defines a common base of abstractions over hardware, so as to allow multiple other pieces of software to access said hardware at a higher level of abstraction), yes, they are, if you don't want to go mad. But I get the feeling that isn't what you meant...
In short, I can respect your opinions, but I do not agree with all of them. Much of this derives from the mindset that tinkering with even the most important things, regardless of skill, is important to learning. If that makes me stupid, so be it.
It's not the learning process that is in question, but the ramifications of really extending the industrial revolution to distributing broken code.
(No language has ever been more set up to allow tinkering with everything for the purpose of learning than Smalltalk -- I think you probably realize this.)
I think most people would not recognize most existing OSs in use from your definition in your second paragraph.
Okay. What is the definition of the OS in your opinion?
And thank you for taking the time to disagree with me respectfully. So many do not.
And I do recognize the tinkering capabilities of Smalltalk. This is why I found your points odd.
I have yet to learn Smalltalk. The full environment, as opposed to text-editor development, feels odd to me, especially after struggling with monsters like Eclipse. Also, SBE is several years out of date, and there isn't really much in the way of good documentation for those who don't already know what they're doing.
Is this because web browsers are all we need? (I'm thinking about ChromeOS as an example.)
Currently, OSs are needed to create web browsers. Do you think there will be a time when a web browser will be able to compile itself (without requiring an "OS")?
Also, will the web servers also not require an "OS"?
It's worth pondering the real implications of real objects as real virtual machines intercommunicating by messages (they were inspired in part by the pervasive world-wide networking that was part of Licklider's vision).
One way to think about this is that "hardware objects" are merely caches for the software objects that will embody the processes and intents.
As a cache, each computer needs a small amount of code to deal with resources of time and space and i/o to and from the net. This could be called a "micro-kernel", but in an object world, it is also objects, not a "stack". For "doing things", we can imagine a system of "real objects" residing on one or more of the hardware caches. What mix depends on the individual resources of the caches.
The wonderful Gerry Popek did a first pass at this kind of caching architecture that worked over a mixture of machine types in the 1980s -- it was called LOCUS -- and there is an excellent book from MIT Press that explains the issues and how it works.
Its main limitation was that it was made as an extension of Unix processes, but it definitely proved the concept of how this kind of really distributed "objects" with cached resources worked. (I tried to get Apple to buy this system when I first got there in 1984.)
Bottom line is that there is nothing that resembles any of the huge monolithic OSs of today that "try" to make software dependent on them rather than to allow software to run everywhere and move everywhere.
So "OSs are not necessary".
(Smalltalk at Parc did not run on top of an OS ... etc.)
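To illustrate one consequence of the "hardware as cache" stance above, here is a small Python sketch: a sender names only an object and a message, and whether that object currently lives on this machine or another one is a detail hidden behind the registry. Registry, Account, and the fake "network" forwarding are all invented for illustration; a real system would also move the objects themselves between caches.

    # Location-transparent message sending: the sender's code is the same
    # whether the object is local or on another "cache" (machine).
    class Registry:
        def __init__(self):
            self.local = {}       # object-id -> object on this machine
            self.remote = {}      # object-id -> registry of the machine that has it (fake network)

        def send(self, obj_id, message, *args):
            if obj_id in self.local:
                return getattr(self.local[obj_id], message)(*args)
            # "Remote": in reality, a network hop to wherever the object is cached.
            return self.remote[obj_id].send(obj_id, message, *args)

    class Account:
        def __init__(self, balance): self.balance = balance
        def deposit(self, n):
            self.balance += n
            return self.balance

    machine_a, machine_b = Registry(), Registry()
    machine_b.local["acct-1"] = Account(100)
    machine_a.remote["acct-1"] = machine_b        # A only knows where to forward

    print(machine_a.send("acct-1", "deposit", 25))  # -> 125, same call either way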
Hi Alan. What are your thoughts on how rapidly GUIs are evolving nowadays? Many apps/services revamp their UI fairly often and this oftentimes hurts muscle memory for people who have just about gotten a workflow routine figured out.
Also, what big UI changes do you foresee in the next 10 years - or would like to see. Thanks.
1. It is known that you read a lot. Do you plan to write a book? You have been a big inspiration for me and I would love to read a book from you.
2. What is your opinion about the Self programming language (http://www.selflanguage.org)? I've read the „STEPS Toward The Reinvention of Programming“ pdf and this feels related, especially with the Klein interpreter (http://kleinvm.sourceforge.net/).
Trygve Reenskaug (MVC inventor) and Jim Coplien (Patterns/Hillside) developed the DCI paradigm as a way to model code around the roles played by objects, rather than around the concrete type (class) of each object.
Alan, if you are aware of DCI, what is your response to it?
It (or more specifically, Jim Coplien) claims to build on your vision of OOP, but also criticises "emergence" within the OO vision. (Personally, I think those problems are design issues apart from the OO model.)
In FOAM (the Feature-Oriented Active Modeler) we replace classes with Models, which are just collections of Axioms. There are pre-built Axiom types for standard things like methods and properties, but also for things like imports, exports, traits, listeners, topics, templates, actions, inner-models/classes, etc. Axioms are themselves modeled, so you can create new types as required. In the end, you still end up with a class, but it's defined/built with an extensible composition of objects rather than by a more limited and static class definition.
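This is not the actual FOAM API, but the shape of the idea can be sketched in a few lines of Python: a "model" is a list of axiom objects, each of which knows how to install itself into the class being built, and new axiom types are just new classes. The Property/Method/Point names are made up for the example.

    # A class built from a list of modeled "axioms" rather than a static definition.
    class Property:
        def __init__(self, name, default=None):
            self.name, self.default = name, default
        def install(self, cls_dict):
            cls_dict[self.name] = self.default

    class Method:
        def __init__(self, name, fn):
            self.name, self.fn = name, fn
        def install(self, cls_dict):
            cls_dict[self.name] = self.fn

    def build_model(name, axioms):
        cls_dict = {"axioms_": axioms}          # the model stays inspectable at runtime
        for axiom in axioms:
            axiom.install(cls_dict)
        return type(name, (object,), cls_dict)

    Point = build_model("Point", [
        Property("x", 0),
        Property("y", 0),
        Method("moved", lambda self, dx, dy: (self.x + dx, self.y + dy)),
    ])

    p = Point()
    print(p.moved(2, 3))    # -> (2, 3)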
I'm part of a generation that didn't grow up with the PDP but had LOGO and BASIC available on computers and calculators.
With the Amstrad CPC it was possible to interrupt a program and change a few lines of code to make it do something else, which was a great way to stay interested. And with a calculator it was possible to code formulas to solve or check problems.
But how would you teach programming today to a kid?
Would you choose a particular medium such as a computer, a raspberry pi or even a tablet?
And if I may, do you recommend any reading for bedtime stories?
It's time to do another children's language. My answer a few years ago would have been "in part: Etoys", "in lesser part: Scratch (done by some of the same people but too much of a subset)".
Are there any instructions for getting the Frank system (that you show off in talks) running on a Linux computer? Even instructions with some missing steps to be filled in would beat recreating things from scratch from glimpses.
I find it much easier to explore with concrete copies that can be queried with inputs and outputs, even if they are far from their platonic ideals. For example, the Ometa interpreter I wrote recently was much easier to make by bootstrapping the creation of an initial tree from the output of an existing implementation of Ometa.
Do you believe everyone should be taught, or exposed to, programming in school? I'm afraid that universal inclusion of programming in the curriculum would have the opposite effect and make the next generation despise programming, in the same way some people feel about math today.
Everyone should get fluent in "real science" and many other things come along very nicely with this -- including the kinds of programming that will really help most people.
What do you think about interactive theorem proving (ITP)? Assuming that you are aware of it, is it something that you have tried? If yes, how was your experience, and which system did you try? If no, why not? What do you think about ITP's role in the grander scheme of things?
Semi-offtopic: I've always been fascinated by the origins of religions, their evolution, potential re-definitions and conflicts with other ideas. Specifically, I like thinking about how the founding members of a set of ideas, might retrospectively analyze the entire history of their ideas, and the "idea set"'s metamorphosis into a religion whose followers now treat it as dogma.
I have this, entirely unprovable, theory that most founders of these types of "idea sets" are actually poly-ideological, i.e. giving weight to all possible ideas, and just happened to be exploring ideas which made the most sense at the time.
While I enjoy your thoughts on "object oriented", "functional", etc., I'd love to hear your thoughts about the philosophy of religion and its origins (i.e. a slightly meta version of the conversation around "object oriented", "functional", etc.). You may be one of a handful of humans able to provide me more data. Is this a topic that interests you, and is it something you think about? If it is, then regarding the dogma you potentially accidentally helped instigate:
Did you and your peers intend for it to become dogma? The rest of my questions sorta assume you did not.
Retroactively, do you feel it was inevitable that these ideas, i.e. popular / powerful / effective ideas, which were especially effective at the time, became dogma for certain people, and potentially for the community as a whole?
Either retroactively or at the time, did you ever identify moments when the dogma/re-definitions were forming/sticking? If so, did you ever want to intervene? Did you feel you were unable to?
Do you have any lessons learned about idea creation/popularization without allowing for re-definitions / accidentally causing their eventual turn into dogma?
Again, if this type of conversation doesn't interest you, or if, because it could potentially be delicate, you'd rather not have it in public, I'd understand.
Bob Barton once called systems programmers "High priests of a low cult" and pointed out that "computing should be in the School of Religion" (ca 1966).
Thinking is difficult in many ways, and we humans are not really well set up to do it -- genetically we "learn by remembering" (rather than by understanding) and we "think by recalling" rather than actual pondering. The larger issues here have to do with various kinds of caching Kahneman's "System 1" does for us for real-time performance and in lieu of actual thinking.
1. Spaced repetition can make the recalling - and thus the thinking and pondering - easier. It can certainly make one more consilient, given the right choice of "other things to study" e.g. biology or social psychology, as you've mentioned in an earlier comment.
2. It takes quite a bit of training for a reader to detect bias in their own cognition, particularly the "cognition" that happens when they're reading someone else's thoughts.
What to do about System 1, though? Truly interactive research/communication documents, as described by Bret Victor, should be a great help, to my mind, but what do you think could be beyond that?
I think that the "training" of "System 1" is a key factor in allowing "System 2" to be powerful. This is beyond the scope of this AMA (or at least beyond my scope to try to put together a decent comment on this today).
There's a recursive sense in which "training" "System 1" involves assimilating more abstractions, through practice and spaced repetition, such as deferring to the equations of motion when thinking about what happens when one throws a ball in the air. Going as far as providing useful interfaces to otherwise difficult cognitive terrain (a la Mathematica) is still part of this subproject. The process of assimilating new abstractions well enough that they become part of one's intuition (even noisily) is a function of time and intense focus. What do you see as a way to aggregate the knowledge complex and teach further generations of humans what the EEA couldn't, fast enough that they can solve the environmental challenges ahead? What's HARC's goal for going about this?
Personally, I've found that discovering "hazy" intuitive connections between otherwise dissonant subjects/ideas (such as the mentioned physics example) cements new concepts at a System 1 level quickly if done early in the learning process. It's also surprising how far one can go on such noisy assimilations alone as well, before needing to dig deeper.
For big problems, "finding the problem" is paramount -- this will often suggest representation systems that will help think and do better. A language that allows you to quickly make and refine your sense of the context and discourse -- i.e. to make languages as you need using tools that will automatically provide development environments, etc. and allow practical exploration and progress.
What do you think about a "digital Sabbath," [1] specifically in the context of touchstones like:
Engelbart's Augmenting Human Intellect [2]
Edge's annual question, How is the Internet Changing the Way you Think? [3]
Carr's Is Google Making Us Stupid? [4]
...and other common criticisms of "information overload"
Candles, wine and bread aren't technologies? Hard to take this seriously. (And I like to play music, and both music and musical instruments are technology, etc.)
A better issue is not getting sucked into "legal drugs" that have no nutritional value.
"We are already stupid" -- this is why things could be much much better but aren't. We have to start with ourselves, and a positive way to do this is to ask "what can real education really do to help humanity?"
Hi Alan, do you think privacy on the web should be guaranteed by design or malleable such that in special cases the government can look up your google searches and see if you follow terrorists on twitter? When I say guaranteed by design, I mean should people be creating a system to obfuscate, encrypt, and highly confuse the ability of people who wish to track/deduce what people are doing on the web?
Is there any site that lists all arguments from all sides and reaches a conclusion? If they don't reach a conclusion, do they have an issue-tracking system and leave the issue open for anyone to find easily and respond to?
Debates should have a programming language, have CI for new arguments, have unit tests to check logic, have issues tracked and collaborated on GitHub.
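A toy sketch of what "unit tests for an argument" could mean mechanically: every claim declares its premises, and the check flags any claim that rests on something neither granted nor supported. This is not a serious model of argumentation, just the CI-for-arguments shape; the example claims are invented.

    # Each claim maps to the premises it depends on; an empty list means "granted".
    argument = {
        "CO2 is rising":                  [],
        "CO2 traps heat":                 [],
        "warming will continue":          ["CO2 is rising", "CO2 traps heat"],
        "policy X is the best response":  ["warming will continue", "X is cheapest"],
    }

    def unsupported(claims):
        issues = []
        for claim, premises in claims.items():
            for p in premises:
                if p not in claims:
                    issues.append(f"'{claim}' rests on undeclared premise '{p}'")
        return issues

    for issue in unsupported(argument):
        print(issue)   # flags the undeclared "X is cheapest" premise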
The problem with most arguments is that the arguers assume they are in a valid context (this is usually not the case, and this is the central problem of "being rational"). Another way to look at it is: "Forget about trying to win an argument -- use argumentation to try to understand the issues better and from more perspectives ..."
I think another way of looking at what you said is that people can feel threatened or diminished by certain arguments, even if the arguments aren't directed at them, when certain cherished premises are challenged. Part of learning to argue well is to learn to take those feelings as signals that the argument needs to be looked at closely and seriously, but not as truth. It's not the final word; more checking needs to be done to see whether one's own premises are close to reality or not. Of course, that always needs to be done, but from what I see, many people are not equipped to deal with an argument that hits a nerve, much less to listen to ideas that don't agree with each other and consider the argument based on its content, not on attitudes ("what it sounds like or feels like").
Since "fast and slow" have been a part of the discussion here, perhaps what I'm describing has to do with the relationship between "System 1" and System 2?
"There aren't any really good ones" -- so computing major students especially should have to learn 4 or 5 very different ones and write fairly major systems in them.
We should take another pass at design for both majors and non-majors ...
You mentioned Bob Barton's lecture a few times [1], emphasizing the role he played in debunking some of your (and his own) ideas about computing. Could you touch on some dogmas that haven't yet been evoked in this thread or in your videos on YouTube? Either from the 70s or today. Let me link to Ted Nelson's remarks about the tradition of files/lumps [2] for a start.
Bullet-point may not be the ideal form for an answer but feel free to get away with that :)
Well, what is thinking about then? What was the mistake the Greeks made? In this video[0] you said thinking is not about logic and that this was the mistake the Greeks made.
It's basically confusing "math" (which they had and like all of us were overwhelmed by how neat it is and how much you can do with "thinking rationally") with "science" which they didn't really have. "Math" is hermetic and Science is a negotiation between our representation systems and "What's out there?"
I.e. it is really difficult (didn't happen) to guess the "self-evident" principles and operations that will allow you to deduce the universe. And being super-rational without science is one of the most dangerous things humans have come up with.
Alan, what is your view of deep/machine learning as an approach to programming? The DeepMind Atari player is only 1500 lines of code, because the system learns most of the if/thens. Software is eating the world, but will learning (eventually) eat the software?
This is one of the best threads HN has ever seen and we couldn't be more thrilled to have had such an interesting and wide-ranging discussion. I know I'm not the only one who will be going back over the wealth of insights, ideas, and pointers here in the weeks to come.
Alan, a huge and heartfelt thanks from all of us. The quality and quantity (over 250 posts!) of what you shared with the community surpassed all expectations from the outset and just kept going. What an amazing gift! Thank you for giving us such a rich opportunity to learn.
(All are welcome to continue the discussion as appropriate but the AMA part is officially done now.)
Thanks to all who made this happen. It was really mind bending trying to parse and understand both the questions and responses, like simultaneously traveling to both the past and the future of computing.
Since then all other HN threads have felt so lightweight, and it is only now that that feeling is starting to wear off ...
We should find a way to mine some of the rich veins of discussion here—perhaps picking one thing and going into it in more detail. Every time I read or hear Alan I end up with a list of references to new things I'd never heard of before.
I agree. Already read several hundred of them. Looks like I gotta come back for another 200 or so. Just so many interesting subthreads here, with who knows what impact waiting to happen. Thanks to you and HN for arranging it.
You have inspired me deeply, thank you. I love working with man's greatest invention, but I have a deep sense of dread. HN is very good about projecting a fantasy about the future, that technology can solve all problems. I would love to see a world where people use computers to compute. However, global warming is a real threat and my biggest fear is that our pop-culture will prevent us from solving our problems before the chance to solve them is taken away from us.
With such a huge threat to humanity on the horizon, do you maintain a sense of optimism here? Or will humanity forget how to "compute" the same way Europeans forgot how to make Roman concrete?
Do you think we'll ever have a programming language that isn't fully text based and gets closer to a direct mapping of our own thoughts than current systems? If so any ideas what properties this language would have?
A few years ago I asked another visionary, Marvin Minsky, if he thought that, in the future, we'd do our programming in something other than plain text. He said, "If it's good enough for Aristotle and Plato, it's good enough for me."
I've seen similarities between your work (OOP) and that of Christopher Alexander (Patterns).
Do you have anything to say about how your/his works tie together?
(Note that Alexander's work is perhaps even more misrepresented in software than OOP has come to be).
For example, he talks a lot about how anything that is to evolve naturally, or even properly serve the human component, must be composed of living structure.
video of C.A. addressing the software community's (mis-)application of his work at OOPSLA:
https://youtu.be/98LdFA-_zfA
Here is a PDF of a version with an added preface containing that disavowal. His reasoning (a focus on human application rather than just on the method itself) makes sense to me, but I'm still excited to read the methods.
At the OOPSLA keynote speech in 1997, you mentioned that "The Art of the Metaobject Protocol" was one of the best books written in the past ten years. Any new candidates for "best" books?
How do you seek out the people you choose to work with, now or in the past? Is it an active process, or do you find interesting people naturally glom around a nucleus of interesting work?
I would work on the early grades for all students, especially gifted ones. The epistemological stance you wind up with gets set fairly early -- not in stone but also hard to work with -- and the early grades are where we should be putting our resources and efforts.
So many of the questions point up the problems with the format and media for an AMA. So many of the best questions are out of the scope ...
But, let's just pick one biggie here: how about taking on "systems" as a lingua franca and set of perspectives for thinking about: our universe, our world, our cultures and social systems, our technologies, and the systems that we ourselves are?
This would be one very good thing to get 21st century children started on ...
"Big History is an emerging academic discipline which examines history from the Big Bang to the present. It examines long time frames using a multidisciplinary approach based on combining numerous disciplines from science and the humanities,[1][2][3][4][5] and explores human existence in the context of this bigger picture.[6] It integrates studies of the cosmos, Earth, life, and humanity using empirical evidence to explore cause-and-effect relations..."
I'm rather a fan, though I've only explored it quite briefly.
If you're not familiar with the Santa Fe Institute and its work, I suspect you'll find it fascinating. The general rubric is "complexity science", applied across a large number of fields. Geoffrey West and Sander van der Leeuw are two other fellows. Founders included Murray Gell-Mann and Kenneth Arrow.
"SFI's original mission was to disseminate the notion of a new interdisciplinary research area called complexity theory or simply complex systems. This new effort was intended to provide an alternative to the increasing specialization the founders observed in science by focusing on synthesis across disciplines.[4] As the idea of interdisciplinary science increased in popularity, a number of independent institutes and departments emerged whose focus emphasized similar goals."
And, since many of your responses point at challenges to the AMA (or HN) format, what would your preference be? Is there an existing platform or model that fits, or is there a set of requirements that such a platform would need to meet?
On friends winding up at SFI: that's good to hear multiple ways -- for your friends, your familiarity, and yet another endorsement of the Institute.
Thinking in Systems (and not just Meadows' book) is something I'd also like to see developed more fully. Big History is more than that, but it's also one logical development -- systems pervasive throughout the academic curriculum. I think that's a powerful concept.
There's also the possibility that many people don't and cannot get systems thinking. Another author I've been reading, William Ophuls (most especially Plato's Revenge), discusses this in the context of Jean Piaget's theory of cognitive development, and comes to sobering conclusions about facing social challenges given the typical cognitive foundations of the population. Basing your Solution to the World's Problems on "all the children are above average" is bound to fail.
My understanding was that you were there at the keynote where Steve Jobs launched the iPad. From what we've heard Steve came up to you after the event and asked you what you thought (implicitly acknowledging your work on the Dynabook).
Subsequent interviews suggested you thought that the iOS range of products "were the first computers good enough to criticise".
My question is: what has to happen next for the iPad to start achieving what you wanted to do with the Dynabook?
I won't presume to speak for Alan, but my understanding of what he's said is very different historically. As I recall, he said that the first Macintosh was "the first computer worth criticizing" (something like that). When Jobs introduced the iPhone, he asked Alan whether it was "worth criticizing," and he said no, but if Jobs were to make it a certain size (I forget the dimensions he mentioned, but one of them was 8"), he would "own the world." This suggested the iPad, which did become very popular, though arguably the iPhone became even more so.
I remember one thing Alan said (this was a couple of years after the introduction of the iPhone, but I think it was before the iPad) that relates to your question: "I wish they'd allow people to program the darn thing," which I took to mean, "program on the darn thing!"
Tablet computers existed long before the iPad, though they were bulky and expensive. I used to write software that ran on Telxon tablet units back in the mid-1990s, which used WiFi. A bit later they got enough memory and processing power that we were able to run Microsoft Windows on them, though they cost several thousand dollars each (not a consumer price point). I remember a video of a reunion of former Xerox PARC employees (around the year 2000; it's on the internet) in which Chuck Thacker held up a Tablet PC running Squeak and said, "This is a Dynabook right here." That sort of thing is more of a Dynabook than the iPad, because the software development licensing restrictions for iOS don't even allow people to share code from one unit to the next, because it's considered a security hazard. Apple let up on the restrictions such that people could write code on them, in an environment such as Scratch, but they're not allowed to share code from unit to unit. Instead, they have to jump through hoops, posting code on a web server for others to download.
Part of what the Dynabook was supposed to do was allow people to share code without the thought that it was a security hazard, because it would be designed to be a safe environment for that sort of thing. The iPad has the hardware form factor of a Dynabook (if I may say so), but its system ideas are far from it. It's designed as a consumer product where people are supposed to use it for personal interaction, gaming, consumption of digital content, and little else.
I am completely ignorant. So, it's going to take a lot more than three books :) How about I blindly guess at my own answer and ask for alternatives?
1) Start with books like: How To Read A Book, Thinking Fast and Slow
2) Then, move on to books like: The Secret of Childhood, Instead of Education, Mindstorms
3) Then, you are ready for: Flow, The Children's Machine
Thanks for coming back to answer more questions.
If you've missed 'Hare Brain, Tortoise Mind', I do recommend it. IMHO, Gladwell's Blink was HBTM with most of the material replaced with funny stories.
Hi Alan. A lot has been made of the visions that your colleagues and you had that ultimately became the fabric of our industry.
Did you have any ideas, predictions or visions which ultimately didn't play out? (And any ideas on why?)
Thank you very much for your contributions to our industry. Anyone blessed to be working in this field today owes you an enormous debt of gratitude. You have made a dent in the universe.
Our most unrealistic assumption was that when presented with really good ideas most people will most definitely learn and use them to make progress. In reality, only a tiny percentage can do this, and it is an enormous amount of work to deal with the norms.
Very insightful. There's a large gap between the intellectually curious early adopters and the majority in the middle. I just recall how long people were still auto-paying AOL when better options were available. :-)
What is your take on the future of our general relationship with technology in light of the new optimistic view of AI with recent advances in machine learning? I can't help but think we are over-estimating the upside and under-estimating the problems (social, economic, etc.), much like the massive centralization of the net has had downsides highlighted by Snowden and others.
You have stated before that the computer revolution hasn't happened yet. It seems we stopped trying in earnest back in the early 1980's. Why?
And what could be done to re-spark interest in moving forward?
My gut feeling says that it would require a complete overhaul in almost every layer in the stack we use today and that there's reluctance to do that. Would you agree to some degree with that?
Yes, of course; we agree. Let me refine my question:
In our not-quite-an-industry, we seem to laud attempts to optimize the artifact of a residual hack, and we are absolutely dismissive of attempts to rebuild the stack as being too ambitious. And the problem with being dismissive is that it's a judgement without trial. We have precious few "crazy professors" and no tolerance for them.
What can we do?
There's the 2020 group in San Francisco. Is that kind of meetup the right direction?
Things have happened in the past when passionate and confident people with chops have decided to do something. These have not always correlated with "What is actually needed" (often not) but they usually get things into a state where people who are mainly trying to advance their own goals see some advantage (including "tech chic").
We are certainly not in a position where this can't happen a few more times.
Looking back, I've been struck not by how few really good researchers there are, but more so by how few really good managers of researchers there have been, and even more so by how really really few good funders there have been.
Maybe too simplistic, but in my view great funding has caused great stuff. So the funders should get the gold medals rather than the researchers! (Think about it: the good funders give out the gold in advance knowing full well that even if they are very lucky, 70% of the gold will turn to lead in just a few years!)
Bob Taylor has been generally praised as a great manager, and I believe them of course. But when I heard stories of his management style, it seemed to go against every instinct we have on how to foster creativity. Could you comment on that?
Also, in terms of funding, I wonder -- haven't things changed? Wasn't funding more important in, say, the 1960's and 70's due to the cost of computer time, especially at the processing level that would let you "see the future." A $1000 computer today is not so different, in terms of power, from a similar machine 5 years ago, right?
Things have happened in the past when passionate and confident people with chops have decided to do something.
But wasn't the past more open? When a field is just forming, everything is crazy, so nothing is. It's only when it has solidified (and in exactly the wrong direction) that you would more expect to find a "crazy professor" having more of an impact, right?
If you are computing on a $1000 computer, you are computing in the past. Part of the idea here is that you want to do research and development on supercomputers of the present that will give you the resources that will be available at lower prices in the future.
With salaries as they are today -- and real estate much worse -- it is actually more expensive to fund the same kinds of research today.
The past was similar to today in that most people in computing back then were orbiting around some local vendor's and local fad's notions of computing. And it was a lot harder to make computers and other tools back then. It wasn't a crazy professor having an impact back then, but a whole research community that was required to make an impact.
The stories I know came from Dealers of Lightning and what I remember (correctly I hope) is that the weekly meetings were quite contentious and that he would even encourage haranguing the presenter. Is that true? Did the people there feel it as healthy constructive criticism? I would worry that this would more likely stifle creative thought than encourage it.
If you are computing on a $1000 computer, you are computing in the past.
I know that this was true in the past but it's my contention that hardware has had Moore's Law where software has stagnated, and that the passing of time has meant that now current software doesn't often take full advantage of the hardware available. If you did architect based on where the puck is going (optimized for multi-core, no main memory) current hardware wouldn't slow you down the same way as it did before and sometimes you would even see a performance gain.
The research community was "an arguing community", and knew how to argue "in good ways" (i.e. no personal attacks, only trying to illuminate the issues, etc.) This worked pretty well almost always ... (The weekly meetings were not for being creative, but for discussion ...)
Let me respectfully disagree with your contention. The point that is missed is not what the hardware could do, but what can you do without optimizing. It is very very hard to put on the optimization hat without removing the design hat, and once you've removed the latter you are lost.
The key to the Parc approach was to be able to do many experiments in the future without having to optimize. (There was a second part to this "key" but I'll omit it here)
I should have assumed, given the results that PARC had, that it must have been that way. It's fantastic that I just got the answer to that question from... you.
For my second point -- I definitely didn't mean optimization that way. I guess the word I meant to use was "targeted", as in "targeted for multi-core" but I see your point and respect it fully.
I'm thrilled and deeply, deeply honored to have had this time with you, Dr. Kay. Thank you so much again. If you're ever in the Bay Area on the first Saturday of the month and feel inclined to stop briefly by the 2020 group meeting (which is an attempt at a replacement for the Future of Programming Workshops that used to take place at Strange Loop), we would be over the moon. You can reach out to Jonathan Edwards if you need the details.
The basic principle of both points above is that "problem finding" is the hardest thing in "real research", so a lot of things need to be done to help this ... (this is a very tough sell and even "explain" today -- almost everyone is brought up -- especially in schools -- to solve problems, rather than to actually find good ones) -- and virtually all funders today want to know what problems their fundees are going to solve - so they underfund (to the point of 0!) the finding processes ...
The ARPA/Parc process "funded people, not projects" -- and today this seems quite outre to most.
As you probably know, Jonathan is doing some work with us ...
Yes, I know about Jonathan; that's why I mentioned him. I didn't want to publish the details of the group here, and he's been something like an adviser to the group.
And that group is probably also the reason for my views. I would never qualify to be in HARC, and yet I don't want to just be content to scorn the state of the industry and the state of the art. I see that there's also interesting research coming from garages and weekend projects. I feel that "problem finding" is somewhat interchangeable with "point of view", and that can come from surprising sources. Funding might be needed for the hardware, but my own experience, for what it's worth, hasn't borne that out. The main component then is time, and while weekends aren't much, they'll have to suffice. The last step is to see if we can't get further as a community by giving the kind of feedback that was so critical at PARC, and that is why the group was created.
As Alan has said, the importance of funding is more about finding the problem. There's a lot of wandering and false leads involved (though, finding out the false leads is nevertheless valuable. It's still new knowledge). In terms of hardware, the thing to anticipate is what computing power will be necessary for accomplishing things decades from now, and paying the bill for that, not considering what one can buy from a computer retailer today, because that doesn't create the excuse to think of what computers will be able to do.
Using funding to find the problem is inefficient; side projects are a much more distributed and efficient method and they don't need funding.
As for hardware, you're repeating what's clear. What I'm saying is that you used to have to purchase the future but now you can mimic those coming changes and, while slower, offer perfectly acceptable performance.
I guess I'm trying to lead Alan to say something to the effect that the next PARC can be groups, working together, on their own time, over Skype. I'm not sure he believes that but I do.
Hi Alan, what do you think of the Unison project? [1]
On the surface it's a structured editor for a type safe language in which it's impossible to write an invalid program, but the author has some pretty lofty goals for it.
Since you care about education so deeply, but also seem to be critical of a lot of recent technological developments that appear to be more accessible than some of the systems / approaches we had in the past:
Do you think it is a reasonable path to embrace technologies with shortcomings -- perhaps even to reduce the expressiveness of UIs, such as hiding the ability to multitask, or to employ the power of games to "draw us in [and keep us spellbound]" (where [this] part may be a danger) -- if this can form points of entry for young people who may otherwise not have found their way into technology? (Think of smartphones in rural areas without other reliable means of guaranteeing access to information to large numbers of people, e.g. a lack of mentors and role models.) Or would it be more promising to focus instead on developing alternative means of "on-boarding"?
A prime UI design principle -- which is also an education principle -- is that you have to start where the learners/users are. ("All learning happens on the fringes of what you know" -- David Ausubel)
For children especially -- and most humans -- a main way of thinking and learning and knowing is via stories. On the other hand most worthwhile ideas in science, systems, etc. are not in story form (and shouldn't be). So most modern learning should be about how to help the learner build parallel and alternate ways of knowing and learning -- bootstrapping from what our genetics starts us with.
And -- everything we are immersed in causes "normal" to be reset, for most people invisibly. This should be a conscious part of the "helping learning" process.
I have read a lot about you and your work at Xerox.
Do you enjoy travel? What continents have you been to? What's your favorite country outside the US?
How many hours did you sleep per day during your most productive research years? 'Cos I usually wonder how very productive people seem to achieve much more than others within the same 24 hours we all have.
I used to sleep about 5 hours a night well into my 50s, but then started to get respiratory infections, especially brought on by plane travel. After many years, my sleep habits were examined and I was told to get at least 8 or more hours a night. This is hard, so I usually make up for it with an afternoon nap. This has cured the infections but has cut down the number of active hours each day.
Imagine we have already gotten to a place where software hasn't been built like pyramids but with some serious object architecture. Let's just assume a Benjamin Franklin turned this computer science into a real science (as good old Ben did with electricity). We're there. There are now trillions of objects within my reach; without creating a huge indexing table, and depending on data, how do I reach the object I am looking for?
This was a huge problem on the internet, and Google had a solution that many tolerated and even accepted. In biology you have structures, and many cells compose bodies, but as I understand it, the objects would be like functioning cells scattered around on the floor. Maybe give me a general and vague idea of what the process for this input could look like: "discover songs that I might like but have never heard before" (I have a hard time dealing with abstractions).
I've heard you frequently compare the OOP paradigm to microbiology and molecules. It seems like even Smalltalk-like object interactions are very different from, say, protein-protein interactions.
How do you think this current paradigm of message-sending could be improved upon to enable more powerful, perhaps protein-like composition?
This is a worthy goal, and I think quite possible. Note that Biology requires a lot of organization in order to do this, so it is likely not to be straightforward from where we are. But we had to make the Internet -- etc. -- self-healing in many respects (we had to go to dynamic stabilities rather than trying to make perfect machines ...)
Do you think that genetic programming and machine learning are effective avenues to pursue regarding this? Or is that introducing unnecessary complexity in many/most cases?
I think "real AI" could help (because it could also explain as well as configure). Systems that can't explain themselves (and most can't) are a very bad idea.
Because when things go sideways a human can't fix anything without copious amounts of reading/testing/poking around? I'm taking your meaning of "explain" literally here, which might be shortsighted.
Either way, the idea of machines or systems as "living" and able to communicate intent and process, even if only within their own "umwelt", is really interesting. Even a taste of that would make modern systems easier to debug and understand, if not more robust (which would be a better starting point for many systems anyway I suppose).
I believe he is referring to something like expert systems explanations, which was the holy grail 20 years ago (I don't know if it has been achieved), and as opposed to neural networks which are more like black boxes (at least to me).
Ah I see, that's quite interesting. So the idea of a system that could explain its own decision-making and inferences?
Neural networks definitely are black boxes, at least at an individual level. Sure, the concept remains the same generally, but the internals are different and hidden from case to case.
I am sometimes involved with mentoring younger people on STEM projects (Arduino etc.). It's all the buzz. But I heard one of my younger relatives lament the tendency of young people to gravitate towards a quantitative field of study/training -- is there too much hype? "Learning to Code" is a general movement that is helping many youth improve their career prospects. Do you think it's being effective in improving education on a meaningful scale?
What kind of educational initiatives would you like entrepreneurs (of all shades) to come up with? Do these need to be intrinsically different in different parts of the world?
Finally, as a person who gets scared away by bureaucracy ("the school district") -- what would you advise? School districts don't always make the best technology investments with precious dollars.
It's a big problem along many dimensions. A good start is to try to get a handle on what you think should be required. One hint is to forget about vocational goals and focus on "adults as real citizens in a large diverse society"
Hello Mr. Kay, Are you still going to be active with VRI or CDG now that HARC! has formed?
PS:
I once ran into you in Westwood and you invited me to check out the CDG lab. Unfortunately I missed you when I came by. I'm always tempted to try again, but I'd hate to interrupt the serious thinking of the fellows stationed up there.
I think there is a lot of potential here (think 10 years and you might get something useful in the first 5). This will require great will on the part of those trying to make it happen (the market is not demanding something great -- and neither are academia or most funders).
Something that I find striking about you and your work is your cross discipline approach to hardware, software, and "humanware".
Can you speak about people and subjects from fields other than computer science which have inspired you, and how they have changed you as a person and technologist?
I grew up in a house filled with books and parents and relatives who were in many fields. This ruined me for school (and that helped, though it was a struggle).
Do you think we're yet at a position where we could catalog a set of "primitives" that are foundational to programming systems? (Where "systems" are fundamentally distributed and independent of software platform, programming language or hardware implementation)
Hi Alan,
Since you may spend another day answering questions ...
I've got some more for you :-)
What do you think about the different paradigms in programming?
And what do you think about type theory, etc?
Bonus question:
I am trying to develop a new general programming system for children.
I was inspired by Smalltalk and ELM.
http://www.reddit.com/r/unseen_programming
It is a graphical system that uses function blocks connected with flow-logic. So basically it is functional, but much simpler.
The function-blocks form a system, very similar to classes/objects in Smalltalk.
What do you think about such a system, and what tips do you have about designing a new language?
A good heuristic for designing programming languages is to try to take the largest most complicated kinds of things you want to do, work them out, and then see if there is a "language lurking".
Most people make the big mistake of lovingly making small neat examples that are easy to learn -- these often don't scale at all well. E.g. "data" in the small "seems natural" but the whole idea scales terribly. And so forth for most of the favorite paradigms around today.
The problem with the idea of procedures acting on data structures is that as a system scales up, it gets more complex in terms of its data structure, and the amount of code that must operate on it, and be dependent on it. As that happens, it gets harder to change, both in terms of the structure, and the procedures that work on it. The dependencies between the structure and procedures grow in number and type. The attempt to understand it creates a cognitive load that makes it difficult and inefficient to keep track of them (if not impossible), and keep consistency in how they operate. Secondly, the amount of code that's required to operate up to spec. becomes so voluminous that it creates a cognitive load that is too much to handle, in terms of finding and fixing bugs.
Part of scaling is understanding the relationship between what is necessary to express to carry out the complete, intended model, and the number of relationships (the best I can express this is "in chunks") that we can keep track of simultaneously. Modern engineering in other fields of endeavor understands this notion of cognitive load and complexity, in terms of trying to organize resources such that a constructed structure can carry out its intended function well as a result of principled organization methods.
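To make the scaling point above concrete, here is a minimal sketch (an invented illustration, not anything from Smalltalk or from this discussion) of the dependency problem: every procedure that reaches into a shared data structure depends on its exact shape, whereas an object that only answers messages can change its representation without disturbing its callers.

    # Style 1: procedures acting on a bare data structure.
    # Every function depends on the exact shape of the dict; change the
    # representation (say, store cents instead of dollars) and each one
    # must be found and updated.
    def make_account(owner):
        return {"owner": owner, "balance": 0.0}

    def deposit(account, amount):
        account["balance"] += amount

    def report(account):
        return "%s: %.2f" % (account["owner"], account["balance"])

    # Style 2: an object that only answers messages.
    # The representation (integer cents here) is a private detail; callers
    # depend on the protocol (deposit/report), not on the layout.
    class Account:
        def __init__(self, owner):
            self._owner = owner
            self._cents = 0  # internal choice, invisible to message senders

        def deposit(self, amount):
            self._cents += round(amount * 100)

        def report(self):
            return "%s: %.2f" % (self._owner, self._cents / 100)

    if __name__ == "__main__":
        a = make_account("Ada")
        deposit(a, 10)
        print(report(a))
        b = Account("Ada")
        b.deposit(10)
        print(b.report())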
I get the impression from the book Dealers of Lightning that Bob Taylor played an indispensable role in creating Xerox Parc. What are the Bob Taylors of today up to, and why aren't they doing something similar?
Edit: just noticed HARC and YC-Research. I'll check it out.
"Dealers of Lightning" is not the best book to read (try Mitchell Waldrop's "The Dream Machine").
That said, Bob Taylor cannot be praised too highly, both for Parc and for his earlier stint as one of the ARPA-IPTO directors.
Simple answer: There aren't a lot of Bob Taylors in any decade (but there are some). Big difference between then and now is that the funders were "just right" back then, and have been "quite clueless" over the last 30 some odd years. An interesting very recent exception is what Sam Altman is doing -- this is shaping into the biggest most interesting most important initiative since the 70s.
Hello Alan,
In light of the poor results of the OLPC project, in reference to the Children's Machine, leaving aside commercial factors, do you think the Sugar user interface is appropriate for the task? If not, how can it be improved, what is good/bad about it?
A related question: in your opinion, what were the successes and failures of the OLPC project, what openings and obstacles contributed to that, and where do we go from here?
I've studied the Sugar design and source code, and programmed Sugar apps and widgets, including wrapping the X11/TCL/Tk version of SimCity [1] in a thin Sugar python script wrapper to make it into an activity, working on reducing the power consumption of the eBook reader, and developing pie menu widgets for Sugar in Python/GTK/Cairo/Pango [2].
My take on the OLPC project is that they were tackling so many interdependent problems at once, both software and hardware, that it was impossible to succeed at its original lofty goals. But it was also impossible to achieve those goals without tackling all of those problems at once. However, a lot of good came out of trying.
It was like the "Stone Soup" folk story [3], that brought together brilliant people from many different fields.
A great example of that effect was that we were able to convince Electronic Arts to relicense SimCity under GPLv3 so it could be used on the OLPC. [4]
One of the many big goals was reducing power consumption, which cross-cut through all parts of the system, requiring coordination of hardware, software, and the firmware in between.
Some of the designs were brilliant and far ahead of their time, especially Mary Lou Jepsen's hybrid display, and Mitch Bradley's Open Firmware Forth system.
RedHat modified Linux to support a tickless kernel, consolidating periodic interrupts together to run at the same time so they didn't each wake the CPU at many different times. [5]
Many of the solutions to problems the OLPC project was working on have benefited other, more successful platforms.
John Gilmore credits the OLPC with lighting a fire under laptop vendors to make competing low-power, low-cost laptops like the Chromebook a reality.
Some of the ideas were ridiculous, like the silly crank to charge it.
Sugar had too many dependencies on all the other stuff already being in place and working flawlessly. And it was far too ambitious and revolutionary, while still being layered on tons of old legacy cruft like X11, Python, GTK, etc.
I love Python, but the OLPC came at a time when it would have been better to implement the entire user interface in JavaScript/HTML.
Sugar app developers realized they needed the services of a deeply integrated web browser (not to mention the ability to run in any desktop or mobile web browser outside of the Sugar ecosystem), but the overhead of plugging xulrunner into Python and integrating JavaScript and Python via XP/COM and GTK Objects was just too astronomically complex, not to mention horribly wasteful of power and memory and simplicity.
You have to navigate the trade-offs of building on top of old stuff, and building new stuff. And I think Sugar chose the wrong old stuff to build on top of, for that point in time. Python and Cairo are wonderful, but JavaScript won, and Cairo migrated down the stack into the web browser rendering layer, HTML Canvas component, etc.
Also there was no 3D acceleration (or OpenGL/WebGL), which was a big disappointment to game developers, but at the time it was necessary to keep power usage low.
I'll try to specifically address your question about Sugar's appropriateness for the task and what's good and bad about it. I'll quote some stuff I wrote about it when I was porting SimCity to Sugar (prefixed by ">"), and then make some retrospective comments (no prefix): [5]
>Sugar is based on Python, and uses the GTK toolkit, Cairo rendering library, Pango international text layout library, and Hippo drawing canvas, and many others useful modules. Once SimCity is integrated with Python, it will be great fun to create a kid-friendly multi-player user interface that's totally integrated with the OLPC's unique hardware design (like the hires mono/color LCD screen, which flips over into book mode with a game controller pad) and Sugar's advanced features, like scalable graphics, journaling, mesh networking, messaging, collaboration, and (most importantly) applying Seymour Papert's philosophy of "Constructionist Education" to SimCity.
Sugar was trying to reinvent far too many wheels at once.
Python was a great choice of languages, but it was in the process of being eclipsed by JavaScript. Python is strong at integrating native code (its API is easy to use, then there's SWIG, Boost, and many other ways to wrap libraries), and meta-integrating other code (GTK Objects, COM, XP/COM, etc).
But there's an overhead to that, especially when you mix-and-match different integration layers, like for example if you embedded a web browser in a Sugar interface, registered event handlers and made calls on DOM objects, etc.
On top of Cairo for graphics and Pango for text, which are two wonderful, solid, well-tested, widely supported libraries used by many other applications, Sugar had its own half-baked object oriented graphics drawing canvas, "Hippo", which was written in a mish-mash of Python and GTK Objects. Nobody should have to learn how to wrangle GTK Objects just to draw a circle on the screen.
And there's this thing about universal object oriented graphics representation APIs, which is why the OLPC didn't support PEX, X11 PHIGS Extension (Programmer's Hierarchical Interactive Graphics System), and why we're not all using GKS (Graphical Kernel System) terminals instead of web browsers.
As a Sugar programmer at the time, all parts of the system were in flux, and it was hard to know what to depend on. For the Sugar pie menus, I stuck to the solid Python/Cairo/Pango APIs, with a thin layer of GTK Objects and event handlers.
As for all that stuff about journaling, mesh networking, messaging, collaboration: great ideas, hard problems, fresh snow, thin ice. As Dave Emory says: "It's food for thought and grounds for further research."
Sugar activities are implemented in Python.
What I had to do with SimCity to integrate it into Sugar was to take an existing X11/TCL/Tk application, wrap it in a Sugar activity that just launched it as a separate process, and then send a few administrative messages back and forth.
That was also the way Scratch/eToys and other monolithic existing applications were integrated into Sugar.
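For readers who haven't seen a Sugar activity, a minimal sketch of that wrapping pattern might look like the code below. This is an invented illustration, not the actual SimCity wrapper; the executable path is hypothetical, and the Sugar calls are my recollection of the old Python activity interface, so treat them as assumptions.

    # Hypothetical sketch: a Sugar activity that does almost nothing except
    # launch an existing X11 application as a separate process and stop it
    # when the activity closes.
    import subprocess

    from sugar.activity import activity  # old Sugar Python API (assumed)

    class LegacyAppActivity(activity.Activity):
        def __init__(self, handle):
            activity.Activity.__init__(self, handle)
            # Launch the legacy X11/TCL/Tk program as its own process.
            self._child = subprocess.Popen(["/usr/bin/legacy-simcity"])

        def can_close(self):
            # The simplest "administrative message": ask the child to quit.
            if self._child.poll() is None:
                self._child.terminate()
                self._child.wait()
            return True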
The idealistic long term plan was to refactor SimCity to make it independent of the user interface, and plug it into Sugar via Python, then finally re-implement the user interface with Sugar.
As progress towards that goal, which was independent of Sugar, I stripped out the UI, refactored and reformatted the code as C++ independent of any scripting language or UI platform, and then plugged it into Python (and potentially other languages) with SWIG. I then implemented a pure GTK/Cairo user interface on top of that in Python (without any Sugar dependencies), and developed some interfaces so you could script your own agents and zones in Python. (As an example, I made a plug-in giant PacMan agent who followed the roads around, turning at corners in the direction of the most traffic, eating cars thus reducing traffic [7], and a plug-in Church of PacMania whose worshippers generate lots of traffic, to attract the PacMan to its neighborhood, and sacrifice themselves to their god [8]!)
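To give a rough idea of what "scripting your own agents in Python" can look like on top of such a kernel, here is a tiny sketch. The Agent base class and the city.is_road / city.traffic_at calls are hypothetical names invented for illustration; the real bindings differ.

    # Hypothetical plug-in interface: the simulation kernel calls tick() once
    # per step and the agent decides where to move next.
    class Agent:
        """Base class a scripted agent would subclass (illustrative only)."""
        def __init__(self, x, y):
            self.x, self.y = x, y

        def tick(self, city):
            raise NotImplementedError

    class RoadWanderer(Agent):
        """Follows road tiles, turning toward the neighbor with the most traffic."""
        def tick(self, city):
            neighbors = [(self.x + dx, self.y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if city.is_road(self.x + dx, self.y + dy)]
            if neighbors:
                self.x, self.y = max(neighbors, key=lambda p: city.traffic_at(*p))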
As it turned out, the SimCity kernel plugged into Python was also useful for implementing a SimCity web server with a Flash web client interface, which is a much better architecture for an online multi player game than the half-baked, untested collaboration APIs that Sugar was developing.
At the high level, there were a lot of great ideas behind Sugar, but they should have been implemented on top of existing systems instead of developed from scratch.
>The goals of deeply integrating SimCity with Sugar are to focus on education and accessibility for younger kids, as well as motivating and enabling older kids to learn programming, in the spirit of Seymour Papert's work with Logo. It should be easy to extend and re-program SimCity in many interesting ways. For example: kids should be able to create new disasters and agents (like the monster, tornado, helicopter and train), and program them like Logo's turtle graphics or Robot Odyssey's visual robot programming language!
Even if we didn't achieve those goals for Sugar, we made progress in the right direction that has its own benefits independent of Sugar.
Choose your lofty goals so that when projected onto what's actually possible, you still make progress!
I am familiar with your port and found it unplayable on the XO laptop (although I commend you on your apparently painful task of making it run in the first place!).
While I appreciate your thoughts on the OLPC, I am more interested in Alan's thoughts on Sugar.
It's great meeting you, and wonderful getting some honest feedback from somebody who's used it. Thank you! I also hope Alan has a chance to answer your question, and my four.
Could you please tell me more specifically about what made it unplayable for you? What was the nature of the problem? Did you remember to disable disasters? ;)
Please don't blame it on Sugar -- the user interface was based on a 1993 version of TCL/Tk, so it looks pretty klunky since it was designed to emulate Motif, whose widget design (according to Steve Strassman) is from the same style manual as the runway at Moscow International Airport [1].
Here's a demo of SimCity running on the OLPC [2] -- does that show any of the problems you had that made it unplayable?
Once it passed EA's QA regime, I didn't put any more effort into the TCL/Tk user interface, instead refactoring it to remove TCL/Tk and plug in other GUIs. Have you given the pure GTK/Cairo interface a try?
What was totally unplayable was the X11 based multi player feature [3], which I removed from the OLPC version, since no child should be forced to wrangle xauth permissions on the command line, and David Chapman's MIT-MAGIC-COOKIE-1 tutorial isn't suitable for children [1]. I also disabled the Frob-O-Matic Dynamic Zone Finder [3 @ 3:35], since that was a prank I played as a tribute to Ben Shneiderman [4].
Again, thanks for the feedback, which I appreciate!
Hi Alan, are you envisioning a way to participate/connect-to YC Research as an independent researcher? I don't mean as an associate since many of us have the daily focus in startups but as a place where our ideas and code would be better nurtured.
1. Do you think the area of HCI is stagnating today?
2. What are your thoughts on programming languages that encapsulate Machine Learning within language constructs and/or generally take the recent advancements in NLP and AI and integrate them as a way to augment the programmer?
I have been designing and hacking my own languages (to varying degrees of completion) for almost as long as I have been programming. A lot of the time, their genesis is a thought like, "what if language X did Y?" or, "I've never seen a language that does this, this, and that... I wonder if that's because they're insane things to do?"
When you're working on a system, how do you approach the question, "Is this really useful, or am I spinning my wheels chasing a conceit?" Is the answer as simple as try it out and see what happens? Or do you have some sort of heuristic that your many years of experience has proven to be helpful?
You've said here a few times here that maybe "data" (in quotes), is a bad idea. Clearly data itself isn't a bad idea, it's just data. What do you mean by the quotes? That the way we think about data in programming is bad? In what context?
I've been thinking & reading about data flow programming & languages -- Datalog, Lucid, Dedalus/Bloom, etc. -- in the context of big data & distributed systems, and the work that Chris Granger has been doing on Eve, the BOOM lab at Berkeley, etc. -- and that seems like a lot of really good ideas.
What's your opinion on data flow/temporal logic - and how does that square with "maybe data is a bad idea"?
Just to say one more time here: the central idea is "meaning", and "data" has no meaning without "process" (you can't even distinguish a fly spec from an intentional mark without a process).
One of many perspectives here is to think of "anything" as a "message" and then ask what does it take to "receive the message"?
People are used to doing (a rather flawed version of) this without being self-aware, so they tend to focus on the ostensive "message" rather than the processes needed to "find the actual message and 'understand' it".
Both Shannon and McLuhan, in very different but both tremendously useful ways, were able to home in on what is really important here.
Most humans are quite naive about this -- but it is endlessly surprising to me -- and depressing -- to see computer people exhibit similar naivete.
For example, the extent to which most code today relies on "outside of code" programmer views (and hopes) is astounding and distressing.
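To make the "data has no meaning without process" point concrete, here is a tiny illustration of my own: the bytes are identical, and the "message" only appears once a particular receiving process is chosen.

    # The same two bytes, received by three different processes.
    raw = b"\x41\x00"

    as_le_int = int.from_bytes(raw, "little")  # 65
    as_be_int = int.from_bytes(raw, "big")     # 16640
    as_utf16  = raw.decode("utf-16-le")        # "A"

    print(as_le_int, as_be_int, repr(as_utf16))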
Do you have any thoughts or favourite authors on the topic of technology and innovation, and the process of that specifically?
I've been particularly interested lately in the works of the late John Holland, W. Brian Arthur (of PARC & Stanford), J. Doyne Farmer, Kevin Kelly, David Krakauer, and others (many of these are affiliated with the Santa Fe Institute).
In particular, they speak to modularity, technology as an evolutionary process, and other concepts which strike me being solidly reflected in software development as well. Steve McConnell's Code Complete, for example, first really hammered home to me the concept of modularity in design.
In your paper "A Personal Computer for Children of All Ages", where you introduce the concept of the DynaBook, you explicitly say that the paper should be read as a work of science fiction. I understand that you're a big fan of science fiction. Do you draw any inspiration from science fiction when inventing the future?
Thanks,
Kevin
Related: I've given a talk on "What Computer Scientists can Learn From Science Fiction":
I was a big fan of science fiction in the 40s and 50s -- pretty much literally "read everything" -- and tapered off in the 60s -- partly because science was taking more of my time than science fiction, and partly because in many areas science and technology over-ran sci-fi, and partly because my favorite authors were writing less (and writing less good stuff).
I admire the compactness and power of Smalltalk. What advice would you give language designers looking to keep the cognitive load of a new language low? What was your design process like, and would you do it that way again?
What kinds of learning do you want your prospective programmers to go through? It's what you know fluently that determines cognitive load. In music you are asking for an answer in a range from kazoos to violins.
The main thing a language shouldn't have is "gratuitous difficulties" (or stupidities).
That said, it's worth thinking about the problems of introducing things in our culture today that even have the learning curves of bicycles, let alone airplanes ...
Or ... what is the place of tensor calculus in a real physical science?
Hi Alan, this is a bit of a long shot, but I'd like to try anyway. I've been following CDG from early on, and am really interested in the exploratory research that's going on in there.
I'm a 20 year old computer science student looking for an internship, would it at all be possible to pursue an internship at CDG? My primary selling point is that, given the right environment, I have a lot of motivation.
I understand this is not the right place to discuss these matters, but I know it's highly likely that this message will be read here, I am happy to take this topic elsewhere.
Thank you for spending some of your time here and writing your thoughts.
I would like to ask you for some advice.
The idiom "Everything old is new again" is currently picking up steam, especially in the hardware and software scene.
Amazing stuff is happening but it is being drowned in the mass pursuit of profit for mediocrity in both product and experience.
What would you say to those who are creating wonderful (and mostly educational) machines but finding it difficult to continue due to constraints and demands of modern life?
Most don't have the privilege to work at a modern day Xerox PARC. Then again there is no modern day Xerox PARC.
When I write code it is usually either "kiddicode" for future "kiddilanguages" or "metacode" (for future languages ...)
I did have a lot of fun last year writing code in a resurrected version of the Notetaker Smalltalk-78 (done mostly by Dan Ingalls and Bert Freudenberg from a rescued disk pack that Xerox had thrown away) to create a visual presentation for a tribute to Ted Nelson on his 70th birthday:
https://youtu.be/AnrlSqtpOkw?t=135
This particular system was a wonderful sweet spot for those days -- it was hugely expressive and quite small and understandable (my size). (This was the Smalltalk system that Steve Jobs saw the next year in 1979 -- though with fewer pictures because of memory limitations back then).
Can you confirm that a message is declarative information sent to a recipient, which can (or cannot) react to it?
And what is your opinion about inheritance in OOP? Is it an absolutely essential feature in an OOP language?
I like "messages" as "non-command" things. And I left out inheritance in the first Smalltalk because I didn't think Simula's version was semantic enough (and still don't).
The idea of a "generalization" from which you can get instances is good. Now the question -- then and now -- is how can you define the "generalization". We didn't do it well back then, and I don't know of a system that does this well enough today.
How satisfied are you with the tablets that finally satisfied your vision (did they?) of a personal computer? How much were you able to infer about how they would work? Any lessons from this?
I'm not satisfied. They have more computing power and display resolution and depth than my minimums back then, but don't pay strong enough attention to "services" (one of the simplest they don't do well enough -- and they could -- is to really allow a person who knows how to draw to draw (and this means allow a person who doesn't know how to draw to learn to draw)).
There are a lot of similar quite blind spots in today's offerings, and none of them are at all necessary.
2. Why do you think current programming paradigms are bad?
3. What changes to current operating systems need to happen?
[2] My view is that you want to pass terse but informative information to a compiler in order for optimizations to take effect, and there are three roads programming languages take: abstracting away by layering, which burdens the programmer with unraveling everything (C++); abstracting away from the hardware so much that specifics are hidden (most high-level languages); or something similar to C.
You've been a long-time proponent for creating educational software (e.g squeak etoys) helping teach kids how to program and have been fairly critical of the iPad in the past. What are your thoughts on Apple's new iPad Swift playground (http://www.apple.com/swift/playgrounds/) in teaching kids how to learn how to program in Swift?
Do you think UI aesthetics are important in software for kids?
That was an elucidating dialog; and I have appreciated watching recordings of your lectures as well.
I realize I am replying a bit after the fact, so I am writing more on the off chance that you may read this rather than further the dialog. It seems unlikely that I will have an opportunity to directly collaborate with you given the present realities of my existence, but I deeply appreciate your contributions to the field of computing and education and for sharing your knowledge and wisdom.
Hi Alan, I'm a CS student thinking about graduate school.
1. Would you suggest going into a popular field that excites me but already has lot of brilliant students? (for example AI and ML)
Or rather into a not-so-popular field where maybe I can be of more help? (for example computational biology)
2. If I had to choose between studying my current favorite field at an average research group, or another still interesting field with a top group, would you suggest going with the latter for the overall learning experience?
Please try to avoid too much planning for your future at this point and just try to get to a place where things are going on. Any good educational experience will take someone who is trying to get from A to B and instead get them to C (note that an under-educated person is not in a good position to make big choices -- so learn more!)
You've been involved in visual programming environments like GRAIL and Etoys for kids. What do you think of the current state of visual programming for both kids and adults?
I wasn't involved in GRAIL, but was (and am) a huge admirer of what these people were able to do (and when they were able to do it). Worth really looking into the history here!
The current state of programming (visual or not) for children and adults is not good enough.
Hi Alan - the innovation from PARC appears to be the result of a unique confluence of hardware, software, market forces, recent government research investment, and Michelangelo-level talent for bringing big ideas to fruition.
Do you think that any factors that were significant back then are going to be difficult to reproduce now, as HARC gets started? Conversely are there novel aspects of today's environment that you wished for at PARC?
Parc in the 70s was an outgrowth of the ARPA projects that started to be set up in 1962. Bob Taylor was a factor for both, and wanted young researchers who had already imbibed the mother's milk of "the ARPA dream". This created a culture that never argued about what the general vision and goals were, and also was able to argue in good ways about how to get there.
Such a homogeneous culture organized around a particular vision doesn't exist today (that I know of), and it means that places like HARC will have to do some of the culture building that was done in the ARPA projects (I think of the HARC initiatives as being more like the ARPA projects than like Parc at this point)
Do you think some kind of more "flexible/fluid" syntax can help dyslexic people? Sometimes they are excellent at problem solving and architecture, but the micromanagement of syntax limits them.
From your comments, it's clear that you are not happy with the state of programming languages as it stands.
You mentioned that the current languages lack safe meta-definition and also that the next generation of languages should make us think better.
Apart from the above, could you mention more properties or features of programming languages, at a high level of course, that you consider should be part of the next generation of languages?
I've written and talked about this over the last decade or so. I think a huge problem in even thinking about this is that the people who should be thinking about it have gotten very fluent in many ways of "programming" that are almost certainly not just obsolete but make it very difficult to think about what are likely to be the most important issues of today.
If we look at CAD->SIM->FAB in various engineering fields -- mechanical, electrical, biological, etc. -- we see something more like what is needed. From another view, if we look at the great need for designing and assessing, etc., we can see that the representations we need for "requirements", "specs", "legalities", etc. have to be debuggable, have to run, and might as well just flow into a new kind of "CAD->SIM->FAB" process for programming. A lot of what has to happen is to replace mainstream "hows" with "whats" (and have many of the hows be automatic, and all of them "from the side").
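As a small illustration of trading "hows" for "whats" (my own example, not anything from the discussion): the first function below spells out how to walk, filter, accumulate, and sort; the second states only what is wanted and leaves the how to the machinery underneath.

    # "How": an explicit, step-by-step recipe the programmer must get right.
    def overdue_customers_how(invoices, today):
        result = []
        for inv in invoices:
            if inv["due"] < today and not inv["paid"]:
                result.append(inv["customer"])
        result.sort()
        return result

    # "What": a declarative statement of the desired result; iteration order
    # and accumulation strategy are left to the underlying machinery.
    def overdue_customers_what(invoices, today):
        return sorted(inv["customer"]
                      for inv in invoices
                      if inv["due"] < today and not inv["paid"])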
It seems that dynamic, or at least sloppily typed, languages like JavaScript and Python have become more and more popular. Do you think typeless/dynamic languages are the future? I personally really like "classless OOP" [0].
It would be great to have a notion of type that would actually pay for itself in real clarity. For example, what would be really useful in many dimensions is "semantic types" rather than value oriented types (which I don't think are particularly valuable enough).
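One possible reading of "semantic types" versus value-oriented types (an invented illustration, not necessarily what Alan means): a value type only says an argument is a float, while a semantic type says what the quantity means, so nonsensical combinations can be caught rather than silently accepted. The Meters/Seconds names are hypothetical.

    # Hypothetical sketch: types that carry meaning, not just representation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Meters:
        value: float

    @dataclass(frozen=True)
    class Seconds:
        value: float

    def average_speed(distance: Meters, time: Seconds) -> float:
        """Meters per second; the signature documents meaning, not just 'float, float'."""
        return distance.value / time.value

    print(average_speed(Meters(100.0), Seconds(9.58)))
    # Swapping the arguments would be flagged by a type checker, whereas two
    # bare floats could be exchanged silently.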
When will we get better at saying what we mean? I don't think this is just important when speaking with computers, but also in human-to-human interaction.
What is the best interface for computer programming? I have settled on the keyboard with an emacs interpreter for now, but I'm curious if you believe voice, gestures, mouse or touch are or will be better ways of conveying information?
Did you guys ever talk about Man-Computer Symbiosis in terms of the computer unfairly benefiting some men over other men?
One example could be, give me money and I'll give you a computer that can translate English to Spanish.
Another example could be, Apple share holders profit from iphone sales, and the iphone UI leads naive/normal people to think texting while driving is ok.
Thank you along with everyone for coming back to answer all the questions. A very interesting approach, not something I'd considered but definitely worth trying.
I would love to hear your thoughts on how to "train" "System 1", in order to make "System 2" more powerful. Not necessarily here, due to the time factor, but if you find some time to think more deeply on this, please let me know and we can think through this together.
I came up with an idea that seems a bit like the Dynabook. It helps the user to understand design decisions. Here's a short video about it (under 2 mins):
What are your thoughts on Ethereum and DAOs (Decentralized Autonomous Organizations)? Do you believe they will lead to a new way to think about and distribute software? It kind of reminds me of the "fifth generation computer", with constraint/logic programming, smart contracts and smart agents.
A big problem with the general industry is the lack of subsidized "sabbatical years" -- these are a good way to recharge and re-orient. Xerox, amazingly, used to have them for employees. But I haven't heard of any such thing recently.
If you think about life as needing "renewables" then you shouldn't allow yourself to be "stripmined" (easier said than done).
What do you think about the EAST paradigm, which tries to revamp the original spirit of OOP you stated?
Do you think that the machine learning community suffers from the syndrome of "normal considered harmful"? Like using vendor hardware instead of designing their own (FPGAs, for instance).
A bit late to this AMA, but I've been watching Alan's videos on Smalltalk/Squeak, and I'm wondering why the OLPC didn't use Squeak as the basis for its software (instead of the Linux/Sugar/Python combo that they went with).
Why haven't machine learning and neural networks been applied to programming languages with as much interest as human languages? Wouldn't AI augmentation of writing computer code lead to faster breakthroughs in all other fields within computer science?
If there are lots of resources more or less available, then a lot of "hunting and gathering" types can do things with them, and some other types can see about what it takes to make resources rather than just consume them. The former tends to be competitive, and the latter thrives on cooperation.
The biggest problems are that the "enterprisers" very often have no sense that they are living in a system that has many ecological properties and needs to be "tended and gardened".
Not an easy problem because we are genetically hunters and gatherers, we had to invent most of the actual sources of wealth, and these inventions were not done by the most typical human types.
Yet another thing where education with a big "E" should really make a difference (today American education itself has pretty much forgotten the "citizenship" part, which is all about systems and tending them).
I'm curious whether you think it might be an important/interesting direction for program editors to depart from character sequence manipulation to something along the lines of AST editors. Or is this only a red herring, and perhaps not so deep a change?
I'm curious if you've read much about Activity Theory. (in particular, Yrjö Engeström's Learning by Expanding.) I feel like it's compatible with much of what I've heard you discuss in lectures. Is it something you have an opinion on?
Logo is ~50 years old now, Squeak 20, and OLPC ~10. Do you know innovators who are now in their 20s, 30s, and 40s and who at least partly credit their mental development to childhood exposure to Logo, Etoys, Mindstorms, Turtle Geometry, etc.?
I'm not someone who would be widely considered an innovator, but I had a few of the very first microcomputers that came out, along with some accompanying books on how to program them in BASIC, which was very important to the start of my career. The only free games I had early on were the ones I wrote myself, though it was only a few years before I was copying commercial games from others in an Apple II computer club. I also had the 1979 Big Trak -- the real-world version of Logo; in fact, if you were to attach a piece of sidewalk chalk to the back of it, it would be even more similar. I've often thought of getting the newer version for my daughter, even though it's not exactly the same: https://www.amazon.com/BBT-BIGTRAK-Big-Trak/dp/B0035IZ85G/
If you're looking for ideas for getting youth into programming, I've had the most success with Scratch: https://scratch.mit.edu/ , but I think Legos and Minecraft (or the free imitation, Exploration Lite) -- things that you build with -- are also important. Read "Jeff Bezos on the best gift he's ever received": http://www.marketplace.org/2014/12/08/business/jeff-bezos-be... And of course, getting kids into music is a great thing for creativity.
Q: I've always been a big fan both of text console accessible UI's like CLI's and REPL's as well as of GUI's. In my mind they each clearly have a different mix of strengths and weaknesses. One way a user might have a bit of the "best of both worlds" potentially is an app or client featuring a hybrid input design where all 3 of these modes are available for the user to drive. Any thoughts on that?
I'm writing a paper in my free time about some architectural ideas in this area and would love to hear your thoughts. Feel free to tell me this is a FAQ and that I should go read a particular book/paper of yours, and/or to get off your lawn. :-)
OK, that's what I thought you meant. Thank you, sir! I began to realize after I hit return that it was probably a redundant question; I apologize. I get a little starstruck when dealing with people whose work I've admired for so long. Each time, boom, my effective IQ probably just drops right through the floor. :-)
One of your influences on me as a software engineer has been to strive for solutions which make certain things easier and all things possible because of architecture choices, while still allowing best-of-both-worlds modalities -- e.g., that which is to UIs as DSLs are to a general-purpose programming language.
Too weak a model of meaning on all counts. Not a new idea, and still they did it again. (This is not an easy problem, and machine learning won't do the job either.)
What's the next step to improve remote working? Face-to-face still seems to be so superior for relationship building and problem solving, despite the wealth of video conferencing, social, and collaboration tools we have. I don't want to wear goggles...
I skype frequently. To feel connected, simply having the camera imbedded in the monitor itself is enough to convince the other person that I'm looking at them in the eye.
I remember Sama saying that you thought the missing component of remote tools could be something chemical so I was hoping to hear something along those lines. Discovering such mechanisms would be truly exciting!
Do you think the object ~ biological cells metaphor can be related somehow to automated programming using GP or neural networks? (I've sometimes imagined neural networks as networks of many small objects with probability-based inheritance)
Biggest thanks for helping create the modern computer and its peripherals and for advocating programming for children! Computers are what I enjoy the most as a hobby and what I make my living off of.
Have good parents (and probably some pre-dispositions). Then be really stubborn about wanting to understand what is going on (no one else really wants you to)
I'd like to go deeper into your notion of "pop cultures" vs. "progress", in the context of innovation, but also the arts. Can you recommend some readings that might fill out those concepts?
Try to understand the nature of "pop culture", especially as it relates to "traditional cultures", to human genetics, etc. What does a pop culture want? (What do people in a pop culture really want?) And why?
What are "developed cultures" all about? What are the strengths and weaknesses etc.
What kinds of criticism obtain and help in various kinds of cultures?
Alan, while trying to come up with a good question, I learned you are a musician. Great! As a fellow musician (also jazz and classical) I'm curious whether you feel this has influenced your engineering.
[Do you prefer "[computer] science", "design", or another term? I personally get a bit queasy about calling it "science" when the field isn't centered around experiments and their analysis. Moreover "sciencing" is not a verb :).]
I like "computer science" when doing real "computer science". Science is making explanatory models from and about phenomena and looking for more phenomena.
Most of the time I think I'm "designing".
In English "sciencing" can be a verb. And there's the great line from "The Martian" -- "I'm going to have to science the shit out of this!"
What TV show or movie have you seen that has realistically portrayed advanced computer technology, or is growing into it? In other words, now that we have Amazon Echo is the Forbin Project more realistic?
It would be great for the programs themselves to somehow be "literate". That said, it is quite a bit of work to write an essay (Don Knuth makes it look easy, but ...)
Hi Alan, if you are familiar with Go, what do you think about its simplicity as a language? Is it something other languages should start thinking about in their design?
I would take a different perspective that puts as "higher forces" things like:
-- what helps thinking about things in general, about problems, and resolving them (epistemological concerns, which include the whole environment as intrinsic to "language")
-- representational matchups to what we are trying to model and create dynamic inference processes for (mathematical concerns -- this is why "mathematics" is a plural, in real math you invent maths when needed ...)
-- orthogonal axes for many areas, including meaning and optimizations, including definitions and meta definitions, debugging, reformulation, etc. (pragmatic concerns for eventually winding up with workable artifacts)
Do you have an opinion on text-based vs. visual programming languages? I think the latter are good for learning, but they feel impractical in my day-to-day job. Is there a sweet spot?
Could you recommend a small number of historic papers in computer science for undergrads to read so that they can have a bit more context for the state of modern tech? Thanks!
Did you ever experiment with 'late-binding' hardware, i.e. something akin to FPGAs today? Could that be considered the progression of the Alto's microcode design?
Possibly related to that is the idea of 'late-binding' of physical items (i.e. 3D printing). It's interesting to think about the combination of computer-driven synthesis and manufacturing of physical/computational products (i.e. smart materials).
Is the Duruflé Requiem hindered or helped by a full pit? Does Chip excite you the way the OLPC XO did/does? Salutations/felicitations, and appreciation for the AMA.
Steve was not the kind of person to have friends, but he and I were "pretty friendly" right up until his death -- this is partly because our lives intertwined closely a few times -- not just for Apple, but also for Pixar, and then later as I tried to get him back to real education as an Apple goal.
I had the great -- and lucky -- benefit of falling into a well established research community -- ARPA-IPTO -- in the 60s with roots going back into the 50s and 40s. They "knew how to make progress" and I learned most of what little I understand about "process" from growing up in it.
It's worth thinking about what scales and what doesn't scale so well. For example, names are relatively local conventions. We could expect to have to find better ways to describe resources, or perhaps "send processes rather than messages". Think about what's really interesting about the way Parc used what became Postscript instead of trying to define a file format for "documents" for printers ... (a programming language can have far fewer conventions and be more powerful, so ...)
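To make the "send processes rather than messages" idea concrete, here is a rough sketch (Python purely for illustration; every name in it is made up): instead of standardizing a rich document format that each printer must parse, the sender ships a tiny program, and the receiver only has to keep a small, stable interpreter -- roughly the PostScript move.

    def run_print_job(program, canvas):
        # The receiver's small, stable vocabulary -- the only "convention" needed.
        vocabulary = {
            "moveto": canvas.moveto,
            "lineto": canvas.lineto,
            "show": canvas.show_text,
        }
        for op, *args in program:
            vocabulary[op](*args)   # interpret the shipped process; no file format to parse

    class Canvas:
        def moveto(self, x, y): print(f"move to ({x}, {y})")
        def lineto(self, x, y): print(f"line to ({x}, {y})")
        def show_text(self, s): print(f"draw text: {s!r}")

    # The sender composes whatever behavior it wants out of the vocabulary.
    job = [("moveto", 10, 10), ("lineto", 100, 10), ("show", "Hello")]
    run_print_job(job, Canvas())

The sender gains expressiveness without the receiver having to agree to ever-richer data formats.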
Granted, names have their problems. But as a tactical solution URLs can be used to send high-level messages or at least as a name resolution scheme to initiate communication between two or more OO systems. And URLs are just names... (credit where it is due: I first heard the thought expressed by you and it gets the job done until something better comes along)
Admittedly this does not (directly) address the issue of sending processes (which could be handled indirectly as payload.) Or am I missing the bigger picture you're driving at here?
My example definitely presumed things like service discovery, which gets you to URLs. But I think I see the larger point you're making: what if you don't even know what kinds of services are available (my example assumes you already know), and once told of their existence, how do you negotiate their capabilities and usage? My example, as I envisioned it, completely falls apart here without some major plumbing that might require running an open sewage line through the kitchen -- i.e. probably not the best way to do it.
P.S. I have found the team's work on STEPS quite thought provoking. If anyone could find the time once things settle down, it would be most appreciated if some quick docs re: how to reproduce (i.e. build) the Frank environment could be put together. (if they already exist, a pointer to them would be helpful)
One thing I've found about the web, as it exists, is that the people who set up a system for the outside world to use change it over time. The URLs change. The CGI their service will accept changes. As things exist now, a program must use "sticky" names, and if a name changes, all of a sudden the connection is broken, even though the exact same functionality still exists. It would be good if a program could find the functionality it needs based on a functional description of what it needs, rather than a name. That gets more to what's really important. I once complained to Alan that Smalltalk had the same problem. If I change the name of a class, all of a sudden all the code in the system that needs that class could no longer find it, even though I had changed none of its functional code. This seemed like an extremely brittle scheme for finding resources. The name is not the important thing about what a program needs. It's just a reference point that does nothing. Names are still good, because they allow us to quickly identify resources, but they should mainly be for us, not the program, because when you really think about it, a program doesn't care what something is called.
Actually, this is not quite true about Smalltalk code (which is linked to its class). But referring to things is also done via variables and selectors of various kinds, and these are names which have to be locally known. Smalltalk can also find quite a few things by description (for example it can find things like the "sine function" via examples of inputs and outputs).
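A rough sketch of finding a function by examples rather than by name (Python purely for illustration; the registry and names are invented, and this only captures the flavor of what Squeak does): give the system a few input/output pairs and let it search a registry of known functions for ones that match.

    import math

    # Hypothetical registry of known unary functions; the names are incidental.
    REGISTRY = {
        "sine": math.sin,
        "cosine": math.cos,
        "square root": math.sqrt,
        "double": lambda x: 2 * x,
    }

    def find_by_examples(examples, tolerance=1e-9):
        """Return the names of registered functions consistent with the examples."""
        matches = []
        for name, fn in REGISTRY.items():
            try:
                if all(abs(fn(x) - y) <= tolerance for x, y in examples):
                    matches.append(name)
            except (ValueError, TypeError):
                pass  # function not applicable to these inputs
        return matches

    # "Which function sends 0 -> 0 and pi/2 -> 1?"
    print(find_by_examples([(0, 0.0), (math.pi / 2, 1.0)]))   # ['sine']

The caller never names the thing it wants; it only describes behavior.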
> it can find things like the "sine function" via examples of inputs and outputs
I think this is a really important idea. On one hand, it can save us from re-inventing code which already exists (e.g. "this existing code will satisfy your test suite"), help us discover relationships between existing things ("the function 'sine' behaves like the function 'compose(cosine, subtract(90))'"), and aid refactoring/optimisation/etc.
On the other hand, it could also help us discover services/information which we could not obtain by ourselves. For example, discovering a database mapping postcodes to latitude/longitude.
It's also closely related to inductive programming (e.g. inductive logic programming, inductive functional programming, or even superoptimisation), where combinations of existing functions are checked against the specification. Of course, that leads down the path to genetic programming, and on to AI and machine learning in general!
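And a toy version of the "check combinations of existing functions against a specification" idea (again Python, all names hypothetical; real inductive-programming systems are far smarter about pruning the search): enumerate short pipelines of primitives and keep the ones that satisfy the example-based spec.

    import math
    from itertools import product

    # Primitives the search is allowed to compose; purely illustrative.
    PRIMITIVES = {
        "sin": math.sin,
        "cos": math.cos,
        "deg2rad": math.radians,
        "negate": lambda x: -x,
    }

    def compositions(depth):
        """Yield (description, function) for every pipeline of up to `depth` primitives."""
        names = list(PRIMITIVES)
        for n in range(1, depth + 1):
            for combo in product(names, repeat=n):
                def fn(x, combo=combo):
                    for name in combo:
                        x = PRIMITIVES[name](x)
                    return x
                yield " . ".join(reversed(combo)), fn

    def synthesize(spec, depth=2, tol=1e-9):
        """Return descriptions of pipelines consistent with the (input, output) spec."""
        return [desc for desc, fn in compositions(depth)
                if all(abs(fn(x) - y) <= tol for x, y in spec)]

    # Spec for "sine of an angle given in degrees": 0 -> 0, 90 -> 1, 30 -> 0.5
    spec = [(0, 0.0), (90, 1.0), (30, 0.5)]
    print(synthesize(spec))   # e.g. ['sin . deg2rad']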
I misspoke. I just tried doing what I described, with String. Squeak notified me about "obsolete references" to the original class name in existing source code. It's been a long time since I did this. I might've seen that before when I did this, and I guess it left me with the impression that it was not capable of adapting to the change by itself. The name change didn't cause a problem in continuing to use String, even in the compiler, such as doing in a workspace: 'hello world ', 'hi there'.
I am familiar with Method Finder. I've used it several times with varying degrees of success. Sometimes I found that what I was trying to describe couldn't be expressed with the conventions it uses. I am also familiar with the fact that all of that functionality is accessible from a set of objects, but my understanding is you have to explicitly use those objects in your code to look up by description. If you just say "Classname new", you're not going to get that. That's what I was talking about (at least trying to :) ) in my previous comment.
BTW, I read Ted Kaehler's paper on using the aforementioned objects to try to access objects, if I remember correctly, within Squeak strictly by description, as a means to try out methods for "computing through negotiating with objects." It seemed to come with a little difficulty, as I remember him talking about some number of "destructive actions" that happened as a result. Interesting experiment. I had the thought around then that in order for negotiation to happen safely, the receiving system would need to put objects in a "test harness" to prevent such destructive actions.
Working with Method Finder was one of the examples I used as inspiration for trying to "sketch" a language where the idea was I could "program by suggestion," which I described in an earlier comment here (https://news.ycombinator.com/item?id=11945188).
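One way to picture the "test harness" idea mentioned above is a sketch like this (purely hypothetical, in Python; it is not how Squeak or Method Finder actually works): run the suggested operation on a deep copy of the receiver first, and only let it touch the real object if the trial result passes whatever check the negotiating parties agreed on.

    import copy

    def try_safely(obj, operation, looks_ok):
        # The "harness": a disposable copy absorbs any destructive actions.
        trial = copy.deepcopy(obj)
        result = operation(trial)
        if looks_ok(trial, result):
            return operation(obj)   # only now touch the real object
        raise RuntimeError("trial run rejected; real object left untouched")

    # Example: a suggested operation that appends an element.
    data = [1, 2, 3]
    suggested = lambda xs: xs.append(4)
    # Accept only operations that leave the receiver non-empty.
    try_safely(data, suggested, lambda trial, _result: len(trial) > 0)
    print(data)   # [1, 2, 3, 4]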
> it can find things like the "sine function" via examples of inputs and outputs
I really like the idea, and while reading through the comments this question came to mind.
Names are important for us. They synthesize what a "thing" is, and they help us organize knowledge.
So even though it might be true that for computers names are not that important (I can describe what I want), I believe they are for us humans when trying to understand a given system.
We grow systems both for humans and for computers.
What's your take on this?
How would you find balance between the two?
Thanks, Alan. I guess sometimes it's hard to think about that level of scaling when working in the industry... or at least on projects that are not that massive.
I assume you are talking along the lines of "call by meaning" when you mention that names are relatively local, right?
As for "send processes rather than messages", isn't that what objects are about?
I mean...sending the real thing, not just "data" as part of a message. That reminds me of the Burroughs 220 and "delivering video + codec together" example you mention in your talks.
I'm afraid I'm missing the point about "sending processes rather than messages".
The modularity thing sounds pretty much like well-designed objects to me, but it seems that you're drawing a distinction between that and processes.
What do you have in mind or, better said, which could be a concrete example of it?
The question is whether a "message" has enough "stuff" to reify into a real process (so it can help interpretation and negotiation) or whether the receiver has to do all the work (and thus perhaps has to know too much for graceful scaling of the system).
How would we communicate with an alien civilization? How would we establish a common frame of reference from which to establish a communication protocol. Think along those lines...
Re. consider "send processes rather than messages"
Interesting thought to ponder!... What came to mind is Smalltalk-80 kind of did that, though the processes (blocks) were not in the "driver's seat" of the interaction. They came as "payload."
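A minimal sketch of "the process comes as payload" (hypothetical, written in Python rather than Smalltalk; none of these names come from the thread): the message carries a closure, but the receiver stays in the driver's seat and decides whether, when, and with what arguments to run it.

    class Receiver:
        def __init__(self):
            self.inbox = []

        def receive(self, selector, payload=None):
            self.inbox.append((selector, payload))

        def step(self):
            # The receiver chooses the moment of execution and the arguments.
            for selector, payload in self.inbox:
                if selector == "apply" and callable(payload):
                    print(payload(21))
            self.inbox.clear()

    r = Receiver()
    r.receive("apply", payload=lambda x: x * 2)   # the block travels as payload
    r.step()                                      # prints 42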
Pay a lot of attention to realizing you (and all of us) are in "boxes". This means what we think of as "reality" is just our "beliefs", and that "the present" is just a particular construction. The future need have no necessary connection to this present once you realize it is just one of many possible presents it could have been. This will allow you to make much more use of the past (you can now look at things that didn't lead to this present, but which now can be valuable).
I personally vouch that this isn't fake. (But anyone familiar with Alan's work could tell that just from reading his comments here—who could possibly fake these?)
HN has always worked informally. We don't have 'processes'; it isn't clear that we need them, and the thought of having to set them up makes my soul cry. But anyone who has concerns about fakes, abuses, and the like can always get an answer by emailing hn@ycombinator.com. We appreciate such emails because occasionally they point out actual abuses that we need to take care of!
This is assuming the real one wouldn't have articulated anything unique and contextually valuable. (Unless you've got some sure-fire method to determine which statements are objectively so and that's what's getting confused.)
In these kinds of cases you can send an email to the mods at hn@ycombinator.com. Most of the time dang replies very soon. (And if the story is a fake, they'd usually remove it from the front page.)
Let's give Dan Ingalls the majority of the credit here. I will admit to "seeing" what was possible, but Dan was able to make really great compromises between what Should Be vs what would allow us to make great progress in the early 70s on relatively small machines (give Chuck Thacker the majority of the credit here for similar "art of the possible" with the Parc HW).
I liked the MOP book because they carried the model more deeply into the actual definitions -- I very much liked their metaphor that you want to supply a "region of design and implementation space" rather than a single point.
It doesn't suck "getting old" -- and you only find out about stamina by trying to do things ...
(We are fortunate that most of what is "new" is more like "particular 'news'" rather than actually "new". From the standpoint of actual categorical change, things have been very slow the last 30 years or so.)
But, it's also worth looking at places where there's been enough change of one kind or another to constitute "qualitative". This has certainly happened in many areas of engineering and in science. How about in computering?
Can you explain more about what in UIs you think has gone downhill? I've seen you refer to this idea in quite a few of your comments and it would be great to get insight on what aspects you think need improving/exterminating/rethinking.
But let's see -- how about UIs giving up on UNDO, not allowing or not showing multiple tasks, not having good ways to teach how to use the UI or the apps, ...
A PhD program is very much about the context of forefront ideas and people. I lucked into one -- that got me to realize that this is what prospective grad students should be looking for.
The Dynabook was/is much more of a "service idea" than a physical manifestation (there were actually 3 physical ideas for it). Today's tablets don't fulfill the service ideas.
"Most ideas are mediocre down to bad" -- and mine certainly have been.
Beyond ideas, I would do some process things differently, especially those that could have been done better if I had been able to understand people better.
This is a spectacular thread. I gave up everything and traveled to the U of U in 1976 because alankay1 had done his thesis there. They got so much right in such a short time -- Eliot Organick (Multics), Tony Hearn (Reduce, symbolic OS for TI 92, 89). All inspired by Alan Kay.
Nygaard (along with Møller-Pedersen) later designed the Beta language. A very fine "object oriented" language that tried to unify the concepts of classes, procedures and structures into "patterns". Seemed like a far better language than C++, Java etc. but it never quite took off. I wonder if it "lost" because a) it was not brashly promoted by an American company or b) it used (# ... #) instead of { ... } :-)
1) To become as mainstream in a general-purpose way as the OOP and functional programming paradigms are. One of the problems I know of is a lack of composability.
Historically, (1) has not particularly depended on merit, and (2) would require a lot (because "nice atoms" were more interesting back in the 60s than today).
alankay1 says UI's have declined in usefulness. But aren't there hordes of UI programmers today, getting paid handsomely for their "work"? How to explain this?
Thanks for all your questions, and my apologies to those I didn't answer. I got wiped out from 4:30 of answering (I should have taken some breaks). Now I have to. I will look at more of the questions tomorrow.
Thank you very much for your time here. More importantly, thank you for your contribution to the field. All of us here live and work in a more exciting place because of the visions that you pursued and built. We are better for it.
You say the problem with Xerox is that they were only interested in billions (instead of trillions).
Should we currently be interested in quadrillions, upper trillions, or perhaps larger? Once we become interested in an appropriately large number, what preparations should we be taking so that we can operate at that level? Do we just start putting product out there and collect the value on the open markets, or do we need to segment markets to maximize value? Can you tell us about any other mistakes you feel Xerox might have made in realizing the value of PARC?
I don't think it was hyperbole and I am being 100% serious. You make a good case that PARC contributed $35+ trillion in value, of which Xerox was only able to capture a portion.
If we extrapolate out an exponential trend from forty years ago, a quadrillion seems like it might be about right. If I'm doing my math correctly, it is only about $100k of value per person. Companies will soon be doing $1 trillion in annual revenue, so a 1000x multiplier doesn't seem out of the question.
I guess my question is: what needs to be done to realize, operate, and maintain the enormous potential? But, maybe it is just something that happens as a result of the changes?
Maybe we can talk about ways context should be changed?
I still don't walk into a store with my phone and then quickly walk out with exactly those items I predetermined. Occasionally I will use my phone as a list, but a small piece of paper is easier to reference while in the store. I don't ever see anyone else doing any better and most never use their phone or any other device.
I think this means that we do not yet have true personal computers.
What might be the most important context to change/solve?
What role do people like Terry A. Davis (and his TempleOS) serve in imagining what's possible in computing? I'm thinking of Jaron Lanier's idea of society getting "locked in" after certain technical decisions become seemingly irreversible (like the MIDI standard).
It is not unlikely that you will never get to the same league as 'the inventors'.
Most people are mediocre at everything they do and always will be. Most likely that includes you. We live in a culture that doesn't just tell everyone they can easily outgrow mediocrity; it tells people they are above it from the very start.
The following advice doesn't apply to geniuses or those who can be, but it applies to the majority who will read this:
The best you can do is look back on your life and see if there is only mediocrity. If there is, you have to be honest with yourself and recognize if it's so because of factors that you can still change, or not. For most people reading this the latter will be the case, which means you simply have to live with it and stop trying to influence the world, because everything you end up doing is going to make things worse (for you, and everyone else).
Although I can imagine ways to read this comment as a positively intended one, (a) they're not obvious and (b) this is too far afield and doesn't belong in this thread. So we've detached this subthread from https://news.ycombinator.com/item?id=11943951 and marked it off-topic.
At first, I was tempted to see this as a disappointing comment, promoting the idea of "If you didn't understand this early, you're not going to, no matter how hard you try, so stop trying." But I think what you could really be saying (correct me if I'm off) is that just because you want to do something doesn't mean you know how to do it. Trying to change things for the better takes serious preparatory work that takes time: learning alternate perspectives that help you understand what's really going on, and what tendencies people have that can be used, if they're willing, to help them realize it. If you don't have time for that, acknowledge it, and also understand that you're not going to have a positive impact on that goal, because you haven't gotten out of your common-sense perspective and natural urges, which will only make the problem worse, since you won't understand what you're doing. The comment Alan made at https://news.ycombinator.com/item?id=11940497 is indicative of this concern. In defense of this thread, though, I think everything Alan has been talking about here is an attempt to guide people toward learning how to learn, so that those who find the opportunity can give that advice a try and see whether this really is for them or not.
This comment is so annoying - the perfect HN mix of pretentious and nihilist.
If you really think that average people trying to create anything will make things worse, then you're probably fine with the entertainment-addiction cycle in the first place. Maybe you work for supercell.
I believe this comment is not directed to people who are trying to create something, but to people who feel entitled/special because they are trying to create something they perceive as improvement for humanity.
Really? Because I quite enjoy my "wasted" time. I can't imagine what situation you could possibly be in where it's not enough for you not to "waste" time being happy yourself -- you need other people to do the same. You are what's wrong with the world.