My view of this is: all of us who have written a lot of code feel intuitively that there is something wrong with how coding is done. It feels like we are repeating too much work every time, reinventing the wheel for the umpteenth time. It feels like the tools and notations we have are missing something fundamental and thus force extra menial work that is not essentially necessary. Eve was one of multiple attempts at converting that intuition into actual knowledge and an actual product. They weren't able to do it, same as Intentional Software and others before them, but they explored uncharted territory using that intuition, and that is an interesting exploration. Kudos to them for trying. And hopefully someone will find the underlying core concepts we're missing to make that leap into future programming!
FWIW I agree that something feels wrong and repetitive, but I don't think Eve addressed my problem at all.
A big part of my pain, and the pain I've observed in 15 years of industry, is programming language silos. Too much time is spent on "How do I do X in language Y?" rather than just "How do I do X?"
For example, people want a web socket server, or a syntax highlighting library, in pure Python, or Go, or JavaScript, etc. It's repetitive and drastically increases the amount of code that has to be maintained, and reduces the overall quality of each solution (e.g. think e-mail parsers, video codecs, spam filters, information retrieval libraries, etc.).
There's this tendency for languages to want to be the be-all and end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages (as in the Unix philosophy).
Some languages want to be their own operating system, but empirically that doesn't work. The JVM had this explicit goal of "making Windows irrelevant", but instead it ended up as just another process on a Unix system!!!
(Something like Mirage in OCaml is an extreme example of this, although I think it's a pretty interesting project.)
This only really comes up after you become at least an "intermediate programmer". When I started, I wanted everything to be in my favorite language so it would be easy for me to understand and modify. But now I realize that it's more efficient to be able to reuse the best tool for the job PERIOD, not the best tool for the job in language X.
Microsoft actually had some degree of success with this with COM. You can do a surprising amount of automation from JavaScript or Visual Basic in much the same way. But IMO the "language cacophony" situation is worse on the server side, even though the problem there is to some extent easier, because you have more computing resources.
In this regard, Eve actually makes things worse! I'd say every new language makes things locally better, but globally worse, because you never have a system in just one language. If you think you do, then your view of the system is too narrow.
I'm chipping away at the edges of this problem with http://www.oilshell.org/, but honestly it's not close... I chose to replicate the Unix shell first -- a big success story, though one with many flaws -- and then maybe many years from now, extend it with a form of integration tighter than bytes over pipes.
It was important for us to treat Eve (at least initially) as the whole world, because part of our task was understanding the implications of building programs in this different fashion. Datalog applications have a rich design space for tooling, and after 3 years poking around in the space we had barely scratched the surface. The interoperability between a Datalog world and the current state of things is another great research question, and one we made some good progress on with v0.4, but there was still much left to do in the end.
Has there been any discussion about open-sourcing Eve? It's a shame to see the project abandoned without at least making it available for other people to try to maintain. It may not find the right set of maintainers, but if the project shuts down without putting the code out there, it will definitely die in darkness.
Microsoft's CLR followed that path in the VM space, with similarly good results. Tools like Racket and mbase do this with DSLs that share an underlying Lisp-like language, avoiding the weaknesses of standalone DSLs.
Actually, on Windows we have come full circle: those CLR ideas, which were originally conceived for the COM Runtime, have been brought back into COM as UWP.
IBM mainframes also follow the multi-language environment philosophy.
> A big part of my pain, and the pain I've observed in 15 years of industry, is programming language silos. Too much time is spent on "How do I do X in language Y?" rather than just "How do I do X?"
Basically, it's right not to make everything a nail just because you've got a hammer.
But also, if you've only got a hammer and limited ways to get another tool, the question very much is "how can I build a table with just nails?", even though I know it might be easier with screws.
If you have a limited number of developers with limited pre-existing knowledge of languages, it's not so unwise to look for solutions inside the language range of your team.
If only one person knows the language one of your components is made in, the risk of becoming unable to maintain that thing is pretty high.
Also, if you have only one developer, it's pretty wise to limit the languages used to those he is very proficient in -- which might be one or two, maybe three -- and avoid context switching for syntax and core library functionality.
As a counterpoint, this "being interoperable with everything" mindset is already predominant. Everyone has an interop story, and some languages are all about interop (Lua, and before it, Python).
The worst-case scenario shows up in Vernor Vinge's sci-fi as the programmer-archaeologist (see https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky): all the code that could be written has been written, and programmers of the future are more like archaeologists sniffing it out.
In that future, there is no room for green fields, which I find to be very depressing.
Shouldn't line-of-business software naturally trend from being a creative pursuit to a standard engineering discipline or trade, where defined systems are applied to a semi-novel problem using known patterns and rules? Rediscovering the whole thing every time is certainly inefficient.
I think there will always be room for green fields, though. Even in very old fields scientists and inventors still discover new approaches and solve new problems on a daily basis. Yes, this will move to the fringes, but is that a bad thing? I'd rather spend 10% of my time on research and reimplementation and 90% on a Fun New Problem/Approach rather than 90% of my time reinventing the wheel and 10% of my time on new stuff.
I'm pretty sure ML eventually eats the programming cake, though it might not be in our lifetimes. Definitely by the time we have developed FTL travel.
We have reached a very pronounced local maximum with our current programming practices and ecosystems. Reinvention can get us out of it, though it is more likely to fail than not.
Strongly agreed. I've had a fairly long career and worked in dozens of languages. Languages like Crystal (statically typed, but with strong type inference that makes them feel like a dynamic language) feel like they're from 10 years in the future. Using languages from the ML family (OCaml, F#, Haskell, PureScript, Elm, Idris, Agda) makes me feel like I've gone 25 years into the future and been set down in front of a computer. Especially watching Idris write my program for me just based on the type signatures -- that blows my mind every time it works (which it often does).
Well, you can take it too far for sure, but I don't think we're even close. I disagree with most of what you wrote:
- There will still be green fields because there are new problems to solve. New languages (and new platforms) are justified for new problems. For example, you will have languages for machine learning, for quantum computing, for computing with security, etc. Microsoft has an interesting P state machine language which adds something new.
I would love to free up some of my brain's real estate to learn those new domains/languages, rather than doing the same thing over and over again in different languages.
Although, as I understand it, Eve was a Datalog, and I think that adds something "new". Making that Datalog interoperate might have been more successful than building a cohesive platform around it (though perhaps contrary to their goals).
- "the being interoperable mindset if predominant". I don't agree, e.g. every language has its own package ecosystem. They interoperate (poorly) with C, but not any other language really. The JVM ecosystem is more or less completely distinct than the Go one, although they cover a similar problem space. Likewise for say Rust and Go.
- Python's interop story is not great; it is littered with implementation details. There are 10+ wrappers like SWIG, CFFI, ctypes, CLIF, etc. on top of the Python/C API because it's so prickly (a minimal ctypes sketch follows this list). Alternative implementations like PyPy don't get much adoption, and there is version lock-in with Python 2 vs. 3 -- a great example of where beginners get stuck in mind-numbing detail and complexity.
- Lua was designed with good C interop, but that's about it too. If anything, it's the exception that proves the rule. It's not a very popular language either -- at least 10x less popular than Python, Ruby, JS, etc.
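To make that concrete, here's a minimal sketch (my own toy example, not taken from any of those wrappers) of the raw FFI layer they all sit on top of: calling a C function from Python with ctypes, on a typical Unix system.

    # Calling the C library's strlen() directly via ctypes.
    # Every signature must be declared by hand -- the kind of
    # implementation detail that SWIG/CFFI/CLIF try to hide.
    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))  # load libc (Unix)

    libc.strlen.argtypes = [ctypes.c_char_p]  # declare the C signature
    libc.strlen.restype = ctypes.c_size_t     # or results marshal incorrectly

    print(libc.strlen(b"hello"))  # -> 5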
So if you agree that much of programming is doing the same thing over and over again in different languages, then it follows that adding new languages doesn't help that. It might help beginners, but it doesn't make things better for working programmers, who write most of the code in the world.
I understand that for beginners, you want a coherent experience. Beginners can't learn two languages at once -- that's a very good argument. But I do think there is an economic problem with trying to make a single "cohesive platform" to solve all those problems. Beginners can imagine something that is logically 10 or 20 lines but will take 100K, 1M, or 10M lines of code to implement on the back end.
I think that is the problem Eve ran into -- it's simply too much work to try to interpret extremely high level programs at the level of the user's intention.
For example, I just wrote a 20-line piece of JavaScript in Google Apps Script, embedded in a spreadsheet, that sends e-mails. It was a surprisingly good experience -- I had low expectations. But I bet there's at least 1M lines of code in the background making it all work (probably more, having worked at Google), and there's a whole team of SREs, etc. Some of the code in there is 10-15 years old. It's a huge job.
In other words, I think Eve was a full-fledged platform, and platforms have problems getting off the ground without a billion dollars. Even if your platform is backed by a big company, it might be hard to gain traction. Google App Engine provided a lot of value and a nice experience, enough that Snapchat was built on it, but I think it faltered in the limited availability of runtimes (it couldn't run PHP for a long time, etc.) In other words it didn't interoperate with the rest of the world enough, and I imagine Eve interoperated even less.
I'm curious what you folks think of things like Racket and its ability to create DSLs for various problem spaces. It seems like this would let you stay in one language yet have the flexibility to create the right tool for specific jobs.
The same goes for its macros. Do they give you the ability to define what you want and have the language create the code for you?
Caveat: I'm new to programming and just now learning Racket, so it has me all excited but I barely know what I'm talking about. This thread has been very interesting to read, so I'm curious what others think of languages with the flexibility to adapt. Maybe it creates a nightmare for maintenance or for understanding other people's code. I'm just curious why Racket's approach, or something like it, hasn't taken off.
I don't have much experience with Racket, but I've read a lot of the papers. I think they're exploring exactly the right problems.
I'm not convinced Lisp is the best substrate for it. It makes more choices than you think (number representations, etc.) -- it's not completely neutral.
S-expressions turn out not to be the lowest common denominator -- strings are! "Everything is a file" in Unix means that everything is a big lump of bytes -- e.g. as opposed to the early Mac file systems which had metadata too.
The web is very much an extension of the Unix philosophy -- composing languages, ad hoc string manipulation, etc. with HTML/JS/CSS and dozens of other DSLs. Of course, this architecture has a lot of downsides, and that's what I'm working on in Oil.
-----
Other anecdotes:
There's a research shell called "Shill" built on top of Racket. I heard that they moved off of it for some reasons related to Racket's runtime.
Also, I heard that Racket is thinking of replacing its runtime with Chez Scheme (which, as I understand, is one of the more impressive Scheme runtimes, having been in industrial use for 20+ years).
So I think that Racket is good within its own world, its own runtime, but weaker when it has to interoperate with the rest of the world. Unix and the web are the rest of the world... and they don't speak s-expressions, so s-expressions really offer no advantage in that case.
-----
If you think about it, Unix already has the same architecture as Racket. It's a bunch of DSLs (shell, make, C, C preprocessor, linker scripts, etc.) on the same runtime (the kernel, which talks to the hardware). It's just that you are forced to parse everywhere and serialize. That is definitely annoying, and problematic. But empirically, that design seems to be more evolvable / viral / robust.
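To illustrate (a toy sketch of mine, not Oil code): even gluing two standard tools together from Python means serializing to bytes on one side and re-parsing on the other.

    # Unix composition: programs talk in byte streams, so every hop
    # serializes and the receiver must re-parse. Here: ls | wc -l.
    import subprocess

    ls = subprocess.run(["ls"], capture_output=True)
    wc = subprocess.run(["wc", "-l"], input=ls.stdout, capture_output=True)

    count = int(wc.stdout.decode().strip())  # re-parse the count from bytes
    print(count)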
Anything you can do with macros, you can also do with a traditional lexer/parser/compiler. Macros make things easier, but conceptually the architecture is the same.
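As a toy illustration of that point (my own example, nothing Racket-specific): a "macro" implemented as an ordinary compiler pass over a parsed tree, using Python's ast module.

    # A macro-like rewrite done as a traditional compiler pass:
    # bump every integer literal N to N + 1 before the code runs.
    import ast

    class IncrementInts(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                return ast.copy_location(ast.Constant(node.value + 1), node)
            return node

    tree = ast.parse("print(1 + 2)")
    tree = ast.fix_missing_locations(IncrementInts().visit(tree))
    exec(compile(tree, "<demo>", "exec"))  # prints 5: (1+1) + (2+1)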
Parsing is difficult for sure, but I do think that most programmers have an unnecessary aversion to it. I hope to tackle this problem a bit with Oil, i.e. make it easier to write correct, fast, and secure parsers.
Anyway that's my take, hope it helped!
EDIT: I should also say to not take this as discouragement from learning Racket. My first college class was SICP and it had a huge and permanent effect on my thinking. (I learned from reading Racket papers that my TA went on to work on Racket itself.)
I just didn't use Lisp for any industrial job thereafter. But that doesn't mean the experience wasn't valuable. I would definitely learn it for the ideas.
I see all these efforts to recreate programming with a new way of doing it as foolishness; it's the "let's rewrite the system from the ground up" fallacy writ large. Want to see some really interesting "visual" functionality that you can actually use to write useful apps? Take a look at the things IntelliJ is doing with Scala: https://blog.jetbrains.com/scala/2018/03/27/intellij-scala-p... . Today one programs in a way that makes extensive use of the GUI, to the point that the code is difficult to work with in a plain text editor, but everything has been added as a progressive enhancement, starting from things that were known to work and addressing concrete pain points.
I love some of the work Alan Kay and others are doing at http://vpri.org/. His work on STEPS is trying to produce a whole working environment in 20 KLOC through heavy use of DSLs.
I hate programming silos. I think the world would be a much better place with good platforms, where libraries can interoperate easily and everything feels integrated. Emacs is a watered-down reminder of this approach.
I think one of the main problems is that we still describe too much of "How" we want to do something, rather than "What we want to do", ie too many implementation details are specified.
Programming languages have moved toward higher layers of abstraction over time: from punch cards, to assembler, to C, to C++, to Java/C#, etc.
We haven't made the next leap yet and come up with a mainstream declarative language that allows us to abstract away things like threading, tasks, and optimizations.
If programming were even more about declaring intent, a lot of work could be simplified, I think. We could also put more intelligence into interpreters/compilers and even adapt execution to different scenarios.
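A small sketch of the direction I mean (plain Python, my own example): declare the mapping you want and let the runtime decide the scheduling, instead of hand-managing threads.

    # "What, not how": declare a mapping over inputs and leave thread
    # creation, scheduling, and teardown to the executor.
    from concurrent.futures import ThreadPoolExecutor

    def work(n):
        return n * n  # stand-in for an expensive task

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(work, range(8)))  # runtime picks the "how"

    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]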
One of the biggest problems is the flat one-dimensional aspect of text files.
Overly graphical solutions are clunky and not concise enough. However, overly flat environments like a plain text file also come with huge problems.
What I have wanted forever is a smart environment that looks similar to a text editor but under the hood it's more of a structure editor (wayyy easier said than done).
Two main problems with text are:
1. parsing
2. the inability to easily associate metadata with parts of the program without visual/syntax clutter
I'll expand on each point.
Parsing: parsing implies syntax and language structure, and that causes inflexibility. The problem domain is infinite, but we have only so many characters to work with. This creates some blessed first-class concepts like "strings" and puts everything else at a disadvantage.
A smart structure editor can bypass the parsing problem. It comes at the cost of some fluidity in the authoring process because effectively you would have to say what you are about to place somewhere before placing it. So instead of typing "string" you would have to somehow say "here comes a string", then provide its value.
The problem with this is that the fast text-based editing and typing we are used to become difficult and clunky.
Metadata:
This is a big topic, but to summarize: it gives us a way to have information present without it getting in the way.
For example, consider an assignment 'name := "John"'. Stuck in a flat environment, if you want to make that assignment a "const", you have to turn it into 'const name := "John"'.
In many cases there are lots of attributes and extra information that you would like to attach to a piece of the program.
You don't necessarily want to invent words and keywords and language rules for them. You also don't necessarily want to always see all of them.
If we could easily attach metadata to the pieces of the program it would obviate the need for many language complexities as many things could be turned into metadata key values.
So if you wanted to make a name binding constant you could set "is mutable := no" on that part.
Once you have the ability to associate metadata, arbitrary or not, with any piece of the program, it opens up so many opportunities.
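A minimal sketch of what I'm imagining (hypothetical names, just to illustrate): represent the program as nodes carrying an open-ended metadata map, so "const" becomes a key/value pair instead of a keyword.

    # Hypothetical structure-editor node: a binding plus an open-ended
    # metadata map, instead of encoding every attribute as syntax.
    from dataclasses import dataclass, field

    @dataclass
    class Binding:
        name: str
        value: object
        meta: dict = field(default_factory=dict)

    node = Binding("name", "John")    # name := "John"
    node.meta["is mutable"] = False   # replaces a 'const' keyword
    node.meta["note"] = "rename me"   # arbitrary tooling metadata

The editor could then decide which keys to render, so the metadata is there without cluttering the view.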
I honestly don't feel this way most of the time when doing pure FP. There are a few tools I wish were better, but my code reuse is astounding, the refactorability is high, and it is easy to jump into unfamiliar code bases. Just my humble opinion from the experiences I've had.
I think that's because FP does a number of things right, like immutability, which leads to better composability of programs. These two aspects were what gave Eve a lot of its nice properties.
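As a toy illustration (mine, nothing to do with Eve's actual implementation) of why immutability helps composition: pure functions over immutable values snap together into pipelines, because no stage can reach back and mutate another's data.

    # Pure functions over immutable data compose without hidden state:
    # each stage's output is simply the next stage's input.
    from functools import reduce

    def compose(*fns):
        return lambda x: reduce(lambda acc, f: f(acc), fns, x)

    normalize = compose(str.strip, str.lower)
    print(normalize("  Hello "))  # "hello" -- no input was mutated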