Those are the ones that come to mind when I think of simple and very powerful tools. There are many others. So I think Alan Kay and friends have demonstrated that we have made things complicated for dubious reasons.
I've read the OMeta paper, the object models paper, and most of the VPRI papers. I'm generally sympathetic to their point of view -- I hate bloated software.
But what has been built with these tools? If OMeta were really better, wouldn't we be using it to parse languages by now? They already have a successor, Ohm, which is interesting, but I also think it falls short.
I'm searching desperately for a meta-language to describe my language, so I would use it if it were better.
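To make that concrete: the core idea OMeta and Ohm are built around is a PEG-style grammar with ordered choice. Here is a minimal sketch of that idea as plain Python parser combinators -- the helper names (lit, seq, choice) and the toy grammar are mine for illustration, not OMeta's or Ohm's actual API:

    def lit(s):
        """Match a literal string at the current position."""
        return lambda text, pos: (s, pos + len(s)) if text.startswith(s, pos) else None

    def seq(*ps):
        """Match parsers in sequence; fail if any one fails."""
        def parse(text, pos):
            out = []
            for p in ps:
                r = p(text, pos)
                if r is None:
                    return None
                val, pos = r
                out.append(val)
            return out, pos
        return parse

    def choice(*ps):
        """PEG ordered choice: the first alternative that matches wins."""
        def parse(text, pos):
            for p in ps:
                r = p(text, pos)
                if r is not None:
                    return r
            return None
        return parse

    # greeting <- 'hello' (' world' / '!')
    greeting = seq(lit('hello'), choice(lit(' world'), lit('!')))

    print(greeting('hello world', 0))  # (['hello', ' world'], 11)
    print(greeting('hello!', 0))       # (['hello', '!'], 6)
    print(greeting('goodbye', 0))      # None

The ordered choice is what makes these grammars unambiguous by construction; everything beyond that (semantic actions, error reporting, left recursion) is the part that's hard to get right.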
I think they generally do the first 90%... but skip out on the remaining 90% (sic) you need to make something more generally useful and usable.
I think this is the case with all software. None of it is 100% or even 95%. This is why I've given up on learning anything language-specific. If you understand the concepts then you'll be able to re-create the pieces you need in your language of choice, because most of the time the other 5% or 10% is context-dependent.
I tend to agree, and most of my shell is either written from scratch or built by copying and adapting small pieces of code that are known to work. I take ownership of them rather than using them as libraries.
That is even the case for the Python interpreter, which is the current subject of the blog.
I'm not quite sure, but I think OMeta falls even shorter than the typical piece of software. I'm not saying it's not worthwhile -- it seems like a good piece of research software and probably appropriate for their goals. But to say it generalizes is a different claim.
This is the first thing that came to my mind while watching the video. Web development has become so overly complicated. I believe it is mostly due to working around limitations of the platform and the lack of a de facto standard for how web development should be done. In contrast, for smartphone development, the way to do it is, broadly speaking, the way Android and Apple provide. The web has no single vendor to dictate that, and what we have now is the result.
I think a lot of it is a lack of documentation. This has two effects.
1) People just get started and get used to not really understanding the interface they are working with. Who hasn't felt like they were playing whack-a-mole with CSS layout? This means the lack of conceptual clarity doesn't even register as notable.
2) Nobody ends up noticing just how complex the rules are.
It was 9 years from when I started looking for a tutorial like http://book.mixu.net/css/ to when I actually found it. That is...really quite bad.
He actually did lead a project which took this on: STEPS. (I think this is the last annual report on the project: http://www.vpri.org/pdf/tr2012001_steps.pdf) They did build a functional proof of concept which was significantly smaller/less complex than Smalltalk/Squeak, which were predecessor projects he and his team worked on. Unfortunately, it's not based on the trinity of files, curly brackets, and semicolons, so it's not likely to take the mainstream computing world by storm.
His critical error is evident in his comparative analysis that places Physics and Programming on the same level. The systems that underlie the natural sciences are givens. The entire kettle of software complexity boils down to the fact that software engineering must first create the 'terra firma' of computing. That is the root cause of the complexity in software: it lacks a physics.
He tackles your question during the Q&A. He and his PhD student, Dan, talk about how so much of the effort to create 'terra firma', as you call it, is caused by the terrible hardware sold to us by Intel. He argues that much of the hardware we have is just software that's been crystallized too early. If he had a machine more like an FPGA, he could build all of these abstractions in powerful DSLs right down to the metal.
In physics, we don't know what the fundamental rules are; we can only see complicated outcomes and have to infer (guess) what the rules might be.
In computing, we know what the fundamental rules are (universal computation; whether that's Turing machines, lambda calculus, SK logic, etc., they're all equivalent in power), but we have to deduce what the complicated outcomes are.
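As a toy illustration of "equivalent in power" (purely a sketch, nothing authoritative): the S and K combinators alone form a universal basis, and you can write them as ordinary Python closures, with the identity function falling out as I = S K K.

    # S and K as plain Python closures -- purely illustrative.
    S = lambda f: lambda g: lambda x: f(x)(g(x))
    K = lambda x: lambda y: x

    I = S(K)(K)        # S K K x  =  K x (K x)  =  x, i.e. the identity

    print(I(42))       # 42
    print(I('hello'))  # hello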
>In computing, we know what the fundamental rules are
Only in a limited way, because we're making systems that involve people, and the important and relevant aspects of human nature must go far deeper than our present understanding.
There have been numerous logics and unified methods for specifying, synthesizing, or verifying software. The problem isn't that we don't have one; the problem is the intrinsic complexity of the domain. It leaks through in all the formalisms: a formalism gets ugly if you want to automate it, and it gets clean only with a lot of manual labor.