Emacs-ng: Emacs with Deno runtime and TypeScript (github.com/emacs-ng)
177 points by karlicoss on March 15, 2021 | hide | past | favorite | 147 comments



The Emacs community, like the Lisp community with which it overlaps, is very conservative, in the sense that it doesn't throw things away quickly or make huge changes lightly. For that reason, I'd be very surprised if this took off — even though it sounds like it would be mostly backwards-compatible. I think Emacs types would shy away from building JavaScript into their editor, having V8 be the engine, adding more layers of abstraction to understand and maintain, and splitting the extensions between Emacs Lisp and JavaScript. I'm also not convinced that a significant part of the community really wants this.


If anything, having Elisp support on Guile would make far more sense; with a JIT and non-blocking I/O, Emacs would be usable enough even under a high subprocess load (cough, Gnus).

I am not an Emacs user, but I like Scheme (scm user here) under nvi, and a lot of people could profit from a modern Guile-based Emacs environment.


I'm not sure. Guile-Emacs has been talked about for ages but never materialized. Scheme is nice and all from a language purity sense, but emacs lisp just feels so very practical for its use case.


I meant Guile supporting Elisp; it already supports a few languages besides Scheme.


There is a branch for Elisp support in Guile: https://git.savannah.gnu.org/cgit/guile.git/log/?h=wip-elisp

It works in principle but is slow due to some impedance mismatches that have to be bridged. In particular, Emacs has a special internal string format that is not native UTF-8.

Andy Wingo is doing a great job with JIT support for Guile. We'll have to see whether Guile-Emacs can become fast enough to replace native Elisp. At this time it doesn't look like it...


> I think Emacs types would shy away from building JavaScript into their editor, having V8 be the engine, adding more layers of abstraction to understand and maintain, and splitting the extensions between Emacs Lisp and JavaScript. I'm also not convinced that a significant part of the community really wants this.

I'm reminded of when an old C hand tells me that Rust will never replace C because it's too complex, brittle, and frustrating for the kinds of applications C programmers use C for. I always tell them: Rust isn't aimed at you -- it's aimed at your replacement.

The Emacs-ng team doesn't have to convince the Emacs community -- they only have to convince their replacements. Much of computing in the very near future will be built on two languages: Rust and JavaScript. By basing Emacs-ng on those two languages, they just opened Emacs up to extension and hacking by a huge community who have no interest in touching Emacs's ancient, doddering Lisp nor its C underpinnings (C being, inherently, unsafe at any speed to work in). So ugly as it is, from a social standpoint it's absolutely the right approach and may well overtake GNU Emacs in terms of ecosystem size and vibrancy by the late 2020s.


> I'm reminded of when an old C hand tells me that Rust will never replace C because it's too complex, brittle, and frustrating for the kinds of applications C programmers use C for. I always tell them: Rust isn't aimed at you -- it's aimed at your replacement.

I don't think this is a fair comparison. Emacs is a specific piece of software where implementation details are part of the offering. I don't care if parts of the Linux kernel get rewritten in Rust; I have never (extensively) looked at its code, and I'll like it just as much either way.

Being able to effortlessly examine and change other people's code is a big thing for me in Emacs. Many big Emacs users make money programming in Lisps too, it's not just some vague preference they're not willing to act on.

To implement a real mode, you WILL have to call elisp code all the time. Emacs does almost nothing asynchronously. Every tiny thing in Emacs triggers a lot of elisp that assumes sequential execution. You cannot do all the hard work in an isolated context and call Emacs once you're done. Most of Emacs performance issues have to do with latency, not long running computations. With libgccjit, native-comp Elisp is consistently 4-5 times faster (yes, really), but the actual perceived improvement is much less. Can hardly tell a difference. Emacs still hangs at all the same stuff and there's no easy fix. You'd have to completely change how Emacs works at which point it may as well be something different. I don't see how this kind of a thing can incrementally take over.

If you want to make my life better find me a way to fix long lines or get rid of GC pauses without breaking everything.


> Much of computing in the very near future will be built on two languages: Rust and JavaScript.

Ah, another Silicon Valley guru. Good luck with that.

Good luck replacing C and Unix, which are everywhere, and not a tiny niche compared to these hipster JS trends. Not even close. EVERY telecom backend is tied to C and Unix for standards and for defining and exchanging data via protocols.

And well, even though Scheme is a niche, it has good stuff such as Guix, Artanis, and a JIT.


Says Gartner


> Much of computing in the very near future will be built on two languages: Rust and JavaScript.

It will be interesting to see if this prediction pans out. There have been similar predictions about C++, Java, Tcl, PHP, Perl, etc. It seems “the one true language” never emerges, but it is always just around the corner. Though I really hope the future is more polyglot, because that is just way more fun.


Am I the only one who thinks JavaScript is living on borrowed time? Once WebAssembly matures and has a consistent story for DOM interop, JS will lose its status as the language we all have to know and be forced to compete head to head on its merits. I wouldn't bet on JS in that scenario.


> Though I really hope the future is more polyglot, because that is just way more fun.

I hope so too, but... I've worked with people who really don't want to touch anything besides JavaScript. Such people could be coaxed to use Rust to write kernel drivers and the like, due to its safety guarantees -- but not C, nor C++. JavaScript is in everything. Both of the major DEs for Linux embed JavaScript interpreters. Microsoft is adding it to their office suite, to coexist with and eventually supplant VBA. Inasmuch as developers are still writing desktop apps, odds are good they're Electron apps. It's as near to a universal language as we've had in decades. Instructions are being added to ARM to better support JavaScript.

The dream of the Lisp machine (which Emacs to some extent embodies) is dead. Computers of the future will be, largely, JavaScript machines.


Is someone who only wants to write JavaScript really going to be coaxed to write kernel drivers in any language? Yes there is a dominant JS crowd, but I don’t really see why they should affect what developers in other areas choose.


These people are delusional; they think they can do low-level device-driver tinkering with just JS...

Eh, no. Not even close.

Back in the day I wrote a personal patch for BTTV in order to support my video card with different tuner and radio settings. It was damn hard for a C newbie like me. For these people it would be a nightmare.


Yeah, and now imagine if you didn’t have to worry about simple things like memory safety or overflow or null pointer checks or so many C problems that Rust fixes.

The point of Rust is to enable newbies to write systems software.


>That's what Go has been built for, but if you trust a newbie writing low-level stuff, bad shit will happen sooner or later.


So the solution is what, exactly? To isolate people and gatekeep them from learning systems programming?


Yes? This is a meritocracy; if you don't know how to do low-level stuff, head to college, or learn it properly.


tech is not a meritocracy.


It is.


- Tcl and Lua interpreters are everywhere, too.

- On JS machines, I doubt it, because if some language better than JS comes along, that could lead to huge losses for ARM CPU producers. And that will happen, sooner or later.

>I hope so too, but... I've worked with people who really don't want to touch anything besides JavaScript.

These people are professionally dead if they want to create something more performant than a bullshit, slow JS application.

Qt5 is being used in LOTS of professional software. Not even WASM Google Earth can come close to the performance of the Google Earth Pro used in offices for professional tasks.


> The Emacs-ng team doesn't have to convince the Emacs community -- they only have to convince their replacements.

Those are using VS Code already, aren't they?


Not quite. You are able to extend VSCode using TS/JS for sure, but it is not even remotely customizable to the extent that emacs is.

There is definitely ample room for a hacker-centric text editor with a minimal core where every aspect of UI and behavior can be customized through TS.

This is not really a criticism of VSCode. If you want to prevent the kind of cross-extension conflicts and instability that has plagued emacs since forever, a restricted extension runtime is absolutely the best way to go. But there are certainly people who would want to *build* an editor uniquely tailored to their preferences out of low-level blocks and deal with the complexity that comes with that.

Atom was expected to fill that niche, but its web-based UI is too slow for most large projects.


> Much of computing in the very near future will be built on two languages: Rust and JavaScript.

JavaScript runtimes will be written in C++, and will run on OSes written in C.

"Built on" is the wrong phrase to use here, because the actual foundations aren't Rust or JavaScript and will never be.


Maybe Linux won’t switch over, but it’s not unlikely that Apple would consider Rust for portions of their kernel, given that they only have to support a limited number of devices and have a strong LLVM culture already. Google’s Fuchsia OS also uses Rust heavily.


> will run on OS's written in C.

C++ is slowly but surely displacing C on the OS side too. The field is obviously full of, hmm, time-proven code, but new projects tend to use C++.


I've been hearing this for 30 years, and it hasn't happened yet.

Wake me up when it happens.


It's true that they're conservative, but there have been non-trivial forks in its history. If a fork proves itself by doing something significant that the original can't or won't, things can get interesting! I wouldn't write this one off so quickly.


And let's not overlook that it doesn't make elisp any faster. This isn't going to bring a performance improvement for those who use elisp regularly for productive work.


While technically true, you can still improve the performance of existing Elisp code by redefining certain commonly used Elisp functions to run in Deno, where they perform better.

Sure, the Elisp bytecode itself doesn't evaluate any faster, but that distinction isn't really important here. The end-to-end performance of certain functions can be improved in a backwards-compatible way.
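To make the idea concrete, here is a self-contained TypeScript sketch of the mechanism being described. Note the `defun`/`funcall` registry below is a hypothetical stand-in for emacs-ng's actual Lisp interop API (not reproduced here); the point is only that Elisp resolves functions by name at call time, so a name can be transparently rebound to a body running in the JS runtime:

```typescript
type ElispFn = (...args: unknown[]) => unknown;

// A toy function-cell table: Elisp looks functions up by name at call
// time, which is what makes transparent redefinition possible.
const functionCells = new Map<string, ElispFn>();

function defun(name: string, fn: ElispFn): void {
  functionCells.set(name, fn);
}

function funcall(name: string, ...args: unknown[]): unknown {
  const fn = functionCells.get(name);
  if (fn === undefined) throw new Error(`void-function: ${name}`);
  return fn(...args);
}

// The original "Elisp" definition...
defun("my-sum", (xs: unknown) =>
  (xs as number[]).reduce((acc, n) => acc + n, 0));

// ...later shadowed by a (supposedly faster) implementation. Existing
// callers of (my-sum ...) pick up the new body automatically, which is
// why this is backwards-compatible.
defun("my-sum", (xs: unknown) => {
  let total = 0;
  for (const n of xs as number[]) total += n;
  return total;
});

console.log(funcall("my-sum", [1, 2, 3, 4])); // 10
```

The same indirection is what lets C-implemented primitives already coexist with Elisp today; swapping in a JS body is just another occupant of the function cell.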


If we're entertaining writing elisp functions in a faster language to improve overall performance, then that has already been done, and is still being done, by writing functions in C.

I don't see the point of polluting Emacs with Javascript.


Instructing users who want high-performance code in Emacs to write their code in C and rebuild their Emacs binary, or figure out how to make it a dynamic module, is about the most user-hostile way I can imagine to improve Emacs performance.


Well, basically this is someone trying to implement VSCode in Emacs ...

Ain't gonna get far once the main developer loses interest in the project. Emacs users use Emacs because they want Lisp and don't care about web rendering and javascript runtimes ... Oh yeah and Rust is thrown in too for good measure.

A good 90% of Emacs code is in the extensions, which are in Lisp, and nobody is going to rewrite them in JavaScript even if the code ran 50x faster.


Nah I don't use emacs cos I want lisp. I use it because it was the only thing that made any sense at all on the HP-UX machines I got access to back in 1993. A side effect of that is that emacs does everything I need (with a little bit of pandoc on the side), and I haven't had to learn anything new since.


> A side effect of that is that emacs does everything I need

I'm certain you're telling the truth. And I'm sure that's the case for a lot of old-timer emacs users. But I have the feeling it's less and less true.

People that have used emacs for a long time don't need anything else, so they don't want any change.

People for whom emacs doesn't cover everything want change, but clash against the first group of people (who have been there/contributed for longer), and in the end migrate somewhere else.


I slowly moved everything I need into emacs. That's not because it's superior in all places. It's just that it's the same interface, all the time for me. So I don't have to learn a new editor every now and then.

But my way of working is: many little projects in various languages, many text reports with LaTeX/Markdown, lots of notes (orgmode), a bit of email, a bit of IRC. In a traditional business, I'd have to use a beefier IDE such as IntelliJ (that beats emacs without a doubt) and maybe Word. But for the rest, emacs fills all the little holes...

One area where I find emacs lacking is appointments (org mode is too big for little things such as quick reminders for today's stuff); a good calculator (calc is very clumsy if you compare it to SpeedCrunch, for example); a good calendar (the emacs calendar is a nightmare to use; for example, why on earth doesn't it display what happens on a given day below the calendar, instead of forcing me to hit 'd', which opens a new, mostly empty buffer?); and a good console on Windows (on Linux, vterm is mostly perfect). Email support is OK with Wanderlust, but a nightmare to set up.

Also, since it's very old, there's this warm, comforting feeling that it will last forever. And I'm also a GPL zealot, which helps too :-)


I've used Emacs for 2 years and haven't had to write a single elisp function.


I've used Emacs for... 10 years now? I didn't customize it for an embarrassingly long time, but by now I'm comfortable writing elisp code. It is a game changer.

Given enough time I can make Emacs do pretty much anything. I just don't have that much free time (anymore).


I like Emacs for everything except elisp. I never learnt it despite a few attempts, it feels odd and orthogonal to everything else I do. My .emacs is cobbled together from googled snippets.

I would love a normal extension language, that also isn’t a hack (like Python integration). I’m not a JS guy but I’ll take it over lisp.

I’m not sure what the web render things are, but if it means some kind of browser integration, that sounds good too. I use org mode, often with html export and latex maths, having it render quickly and seamlessly would be a win for me.


Basically this means that `apt install emacs-nox` now takes 2 GB of downloads, pulls in hundreds of fast-changing, hardly audited dependencies, and probably isn't available in sane distros. No, thanks.


Well, that would be bad, agreed. But as I see upside in the project, I certainly won't bemoan it as useless ahead of time.


The demand for lisp in emacs is overstated by a vocal minority. Most people using emacs do it for the things which are not lisp-specific: the features, the plugins, the community. All of that works without touching elisp as a user. And even the shallow elisp contact that most users have when writing their init.el could be handled by any other language.

Though, it's true that all the elisp code out there is a heavy burden. But how much of it is actually still used? And how fast would a prospering community really replace it? Hacking-friendly tools tend to have wild-growing ecosystems. Emacs itself is not shy of having many clones of similar feature sets and regular new implementations of old but still popular features.

In that view, I don't think a new language would be a real problem for the long-term prosperity of emacs. It might even be a benefit, as it mixes in new blood, more people, and a broader ecosystem from outside the emacs scope.


Emacs's speed was good enough 45 years ago; with modern processors it's just lightning fast.

You can have C / Rust / any_lang extensions like ripgrep to boost some intensive workloads as well.


I don't know if I agree with this. I love Emacs and nothing has convinced me to give up on it yet, but it is a chore to use on large projects. Helm, projectile, and the like always work with passable performance at worst, but opening a jsx file for the first time literally freezes the entire program for 5 seconds. It's really annoying and I've no idea what causes it.


“Eight megs and constantly swapping” isn’t as much of an issue these days, it’s true...


Eight megs fit in the L3 cache nowadays :)


>Emacs speed was good enough 45 years ago

Uh, in the age of the 386 and 4 MB of RAM, even on a 486 with 8 MB, you chose between running X and running Emacs. Of course, you could run jmacs perfectly.


Rust isn't "thrown in", it's what Deno is built with.


No, Deno is built with V8, and V8 is written in C++.


V8 is the only C++ component of Deno. The other bits of the runtime are written in Rust, and Rust is the native extension language.


The "other bits" are the ones that don't matter.

It's like saying that a browser skin is "built on Rust" when inside it's still Webkit and Chromium.


This doesn't require rewriting extensions; it is backwards compatible.


It does not improve elisp performance, however. So it's pointless for those who use and rely on elisp.


I don’t think that’s a fair characterization; you should be able to redefine widely-used Elisp functions to run in Deno. One that immediately comes to mind is the crappy HTTP client that Emacs ships with. (I have not tried this, just speaking hypothetically.)


... That still isn't an improvement of elisp speed. It might improve the performance of Emacs, but it's not improving the performance of elisp.


Now we're just quibbling. No it doesn't improve the speed at which the Elisp interpreter can churn through Elisp bytecode. We can however dynamically improve the speed at which some Elisp functions return, which would have the net effect of speeding up consuming Elisp code.


Not entirely sure I see the point.

I would like to see Emacs/Guile. Emacs needs a more modern elisp (or in this case, Scheme) implementation. It's much easier to make it compatible with the mountain of existing (and useful!) elisp code if you are also in a form of Lisp. That's still not trivial.

It also needs a better front-end renderer, across platforms. I could understand if the "NG" part were just about that. But WebRender is an optional feature. Then again, it looks and runs perfectly fine for me on Linux; it's when I'm using it on OS X that I see the warts. This is doubtless in no small part due to patches being refused because they are for proprietary platforms.

Async I/O is needed but I am not sure we need to bring Typescript and a full JS environment to get that.


Came to say the same; guile emacs was attempted in the past.

It doesn't make sense to have multiple interpreters jury-rigged together (the horror!), and it also doesn't make sense to throw away all the lisp by having no interpreter for emacs lisp. Guile supports emacs lisp for just this purpose.


I see a lot of parallels between Emacs' problems now and Vim's problems a few years ago: opaque maintenance/contributions, poor performance inherited from core design decisions made decades ago, the burden of backwards compatibility for legacy systems, etc. I think Emacs users and the community would benefit from a ground-up modern rewrite much like Neovim did for Vim, especially compared to this project which adds even more layers to maintain. It certainly addresses performance issues, but Emacs seems to have enough problems with things breaking spontaneously - I don't think adding a JavaScript runtime to the mix will do any bit of good there.


The last time I had things randomly break under me due to elisp was a long long time ago. Such that, I really can't remember it.

And as a user of emacs, the stability has been quite nice. The speed has mostly only hurt around things I'd expect to hurt. (Long long long lines, for example.)

I can see the allure of language servers. But to me they are basically a modern cscope, with JSON as the transport. Not bad, but hardly magical or unforeseen. (Granted, getting the deep pockets of Microsoft behind them has been nice.)

What sort of spontaneous breaks are you referencing?


I tend to agree with this - Emacs is my daily driver and the way I use it, it's really 1) a tightly integrated window manager/text editor powered by elisp, and 2) a great community providing additional functionality. It would be relatively easy to make something to replace (1) for me, but (2) is tough. I'd love to know how far we could get with an updated editor and rendering system, with backward compatibility for emacs code via some sort of emacs/new-system translation layer - I was really excited for xi-editor for that reason because it seemed like it might be a good platform for those kinds of experiments.


This sounds like a description of Emacs most of a decade ago, not long after I started using it, rather than of Emacs today.


Port of Emacs to Rust: https://github.com/remacs/remacs


Emacs-ng is actually a fork of remacs. Many of the same people are involved.


Neovim is not a rewrite.


The upcoming 0.5 release will feel like a rewrite to some folks though, the entire config system is Lua-first right down to support of an init.lua instead of init.vim in vimscript. I've been converting my setup over to Lua and it's so much more understandable and sane vs. some bespoke scripting language that made more sense in 1990.

(it should also be mentioned the old vimscript support isn't going away, you can still use old configs)


No, but it tackles the same aforementioned issues:

> opaque maintenance/contributions, poor performance inherited from core design decisions made decades ago, the burden of backwards compatibility for legacy systems, etc.

The main goal of NeoVim is to ditch the backwards compatibility and simplify the codebase. Sure, it isn't a from-scratch rewrite, but it's a very deep fork.


>Sure, it isn't a from-scratch rewrite, but it's a very deep fork.

Not really. I contribute to Neovim, and in my experience the architecture is the same as in Vim. The features that were added in Neovim first do have a different architecture and are better designed, though. However, these new features make up only a tiny fraction of all of Vim's features.


The magic of it is that it just keeps on working. All those big changes never caused any instability, not even once.


I wonder how the performance compares to gccemacs, with its native code compilation of elisp. That would be the fair comparison.

https://www.emacswiki.org/emacs/GccEmacs


The readme says it’s based on native-comp, so I think compiled lisp stuff should be no worse than gccemacs.


"emacs-ng's JS implementation clocks in over 50 times faster than emacs 28 without native-comp for calculating fib(40). With native-comp at level 3, JS clocks in over 15 times faster."
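For context, a fib(40) benchmark of this kind is presumably the classic doubly-recursive Fibonacci (the exact benchmark code isn't quoted in the thread), which mostly measures raw function-call and arithmetic throughput rather than anything editor-specific:

```typescript
// Naive doubly-recursive Fibonacci: exponential time, so fib(40)
// makes on the order of 300 million calls — a decent stress test of
// an interpreter's or JIT's call overhead, and nothing else.
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55
```

Which is also why such numbers should be read narrowly: as the thread notes elsewhere, most perceived Emacs slowness is latency in mixed Elisp/redisplay paths, not long-running pure computation like this.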

I missed the native-comp note the first read through. Thanks!


Finally, the Emacs runtime can catch up with modernity and make all the size jokes relevant again.


>it's an ecosystem of powerful tools and approaches that Emacs just doesn't have currently.

Guile-Emacs should.


Yeah, let me know when that gets released in a form that's usable as a daily driver. I'll install it on my production-ready Hurd system, right alongside the first version of GIMP that doesn't send professional designers into fits of rage.

Even then, there is no other ecosystem that has quite the volume of powerful tools and approaches that HTML/CSS/JS have.


>Even then, there is no other ecosystem that has quite the volume of powerful tools and approaches that HTML/CSS/JS have.

GNU Artanis; forget Hurd, get Guix. Also, on HTML/CSS/JS: they are still 20 years behind the Pascal IDEs from Borland or Lazarus, and don't get me started on NPM and dependencies.

On speed and features, Qt5 slaps JS so hard that claiming you can get something as performant is ridiculous.

And I use OpenBSD and CWM, but FFS, JS is a toy compared to, for example, the Lazarus IDE, QtCreator, and their compilers/backends, which run fast-as-fuck software even on scrap computers.

And Turbo Pascal/Lazarus has had RAD development features for 20 years, as I said. Something much faster and more native (totally native :p) than anything any JS toolkit has ever tried to do. Add a control? Drag and drop it, edit the bindings. Cross-compiling? Lazarus does that even for damn Windows 95, from a Linux machine. And so on.


And to complete the circle back to sexprs, run ClojureScript on the JS engine.


I'm curious: if this has direct TypeScript support, does this mean it can use VS Code parts and plugins? Effortless plugin installation would be a killer feature... A similar advantage could come from direct reuse of VS Code parts, like the code for language servers or the Monaco editor component.

This could unfold into a project which demands relatively little work while pushing it to a level where it can compete with VS Code. It could become a good bridge between modern VS Code and old GNU Emacs that way.


> Effortless plugin-installation would be a killerfeature

How is that different from what Emacs package managers provide?


They are not really effortless, especially when distributions like Doom or Spacemacs are used. Sure, today it's easier than 10 years ago, but compared to modern solutions it's still pretty backward.

In VS Code you can search for new plugins from inside the application (which emacs also allows), and it shows all the information about a plugin in-app, including pictures and animations (which emacs does not allow). VS Code also shows which bindings, functions, and settings the plugin adds (in emacs this is only indirectly possible, with effort).

And the best part: it just works. With emacs there are always a gazillion problems with plugins, be it some conflict or just the additional work necessary to make them work. Though this is mostly a solvable problem with emacs, it just does not get solved. The Emacs community's solution for this is the use of distributions, which come with their own problems and only support the more popular plugins.


The inclusion of Deno/WebRender almost turns this into a web browser. I wonder if this project could be imagined another way: instead of turning emacs into an application runtime, port emacs to be an SPA. It would still get to take advantage of all the latest technology, and with the new FileSystem APIs emerging it would have native file system access.


Looks interesting. Some examples showing how JavaScript integrates with the emacs runtime would be helpful.


I agree that using a JS runtime makes more sense than Emacs's weird e-lisp runtime, with all of its dynamic-scope weirdness.

But dynamic scope, and e-lisp, isn't the only thing weird about Emacs. Emacs also calls "files" "buffers", and calls "windows" "frames" and "frames" "windows". It has weird keyboard shortcuts. Every third command also copies to the clipboard as a side-effect, which means you constantly obliterate the contents of the clipboard while getting ready to paste what you thought was there. Oh, but to address this problem, it makes the clipboard into a ring of clipboards, so that when you replace the clipboard with something else, you can still access the other clipboards in a ring by hitting the paste command multiple times in a row. Oh, but if you press another key in between, you get lost in clipboard madness.

Emacs also has an "undo", but no "redo" command. Except that when you "undo", it actually pushes the "undo" itself onto the list of "things to undo", so that you can undo that undo, in order to redo. But people also need to undo multiple undos before getting interrupted with a redo, so Emacs only undoes the undos if you press some other key after undoing a few times in a row. But hey, this means it doesn't need a "redo" command.

So, I agree that the elisp runtime is a weird thing about Emacs, and something that is rational to change. But the thing is... if you're going to change that... why not everything else about Emacs?

I mean, at some point Emacs looks a lot like Bitcoin Core, or Wikipedia -- a beautiful historical community with weird consensus rules that flowered into a very exotic, strange, and usually-functional artifact. Although sometimes it doesn't work as well as the newer technologies.


> But dynamic scope, and e-lisp, isn't the only thing weird about Emacs. Emacs also calls "files" "buffers", and calls "windows" "frames" and "frames" "windows".

Using "buffer" as a name for text being edited is hardly worthy of the title weird, vim does it too. Lots of other text editors as well. "Buffer" != "file", the first exists in the memory of the text editor process, the second exists on the file system. Their contents may differ, and one may exist without the other.

Likewise, calling the subdivisions of the screen "windows" is something which vim does too, and likely other editors also. For a text mode editor, it makes perfect sense. When you then port such an editor to a GUI, you end up with "windows" which are subdivisions of the GUI "window" – vim has that too, not just emacs. (Unlike Emacs, vim doesn't have the "frame" concept, because vim doesn't appear to support multiple GUI windows.)


> Using "buffer" as a name for text being edited is hardly worthy of the title weird, vim does it too. Lots of other text editors as well. "Buffer" != "file", the first exists in the memory of the text editor process, the second exists on the file system. Their contents may differ, and one may exist without the other.

It doesn't matter. When people talk of editing documents in Word, they speak of opening a file, not a buffer. That's the metaphor people are used to. It may not make sense, but saying "well, ackshually, files refer to the copies on disk while buffers refer to the copies in memory" just confuses people more. Have we learned nothing since Macintosh 1984? Actually, we haven't; we've gone backwards. The spatial Finder was based on very deep psychological notions of how humans relate to the world: through spatial orientation and object manipulation. Meaning that there is only one place on the screen where a given file appears, and it appears at the same place every time unless the user moves it. In short, it works like an actual desktop with real physical pieces of paper. And Apple brushed it aside in favor of NeXT's Unixoid "file manager".

> Likewise, calling the subdivisions of the screen "windows" is something which vim does too, and likely other editors also. For a text mode editor, it makes perfect sense. When you then port such an editor to a GUI, you end up with "windows" which are subdivisions of the GUI "window" – vim has that too, not just emacs.

Again, it doesn't matter. The ordinary user's concept of a "window" maps directly to a GUI window, so if you expect your program to be used in a graphical environment without confusing the shit out of people, you need to adopt that environment's terms and concepts.

Meet. Your. Users. Where. They. Are.


> Meet. Your. Users. Where. They. Are.

I think you're missing the point. Emacs is an editor tied to programmers; most of its users are programmers. It's not a general-purpose text editor.

And while macOS as an operating system for general-purpose usage has done incredibly well due to its intuitive, design-first approach, when it comes to developers many of "your users" have pointed out that its interface and philosophy are restricting.

> It doesn't matter. When people talk of editing documents in Word, they speak of opening a file, not a buffer.

But nothing in emacs says you can't open a file. It's just that the contents, once opened, are in a buffer. And if you open files outside of emacs they'll be opened as independent windows. By the same token we shouldn't be able to have a frame split into windows, because Word opens each file in its own window? The naming is a necessity for disambiguation.

And in fact, the idea that what you have open is a file does introduce problems. How many people do you know who *ucked up because they didn't save changes? They learned the hard way what a buffer is, something every program uses (even Photoshop, or other software), but never learned its name, just that you should save every N minutes...


Meet your users where they are is great advice for consumer apps you want people to onboard to quickly, where users only interact with them periodically.

We're talking about the power tools of our profession. Nobody designs a band-saw to meet their users where they are either. These are apps we spend 8hrs+ per day living in, and frankly the buffer wording is superior because a buffer is not a file, but a file is in a buffer.

Meeting your users where they are is how we got everybody moving from Textmate to Atom and now to VSCode as the new hotness that would finally kill Emacs and vi. And yet that's the mentality that prevented those newer editors from having the legs to really last. Meanwhile, other power tools like Eclipse, which absolutely meets nobody where they are, are going strong too.


> Meeting your users where they are is how we got everybody moving from Textmate to Atom and now to VSCode as the new hotness that would finally kill Emacs and vi. And yet that's the mentality that prevented those newer editors from having the legs to really last.

I'd like to think that too, but the sad fact is that most developers these days use VSCode. It is the new hotness that killed Emacs and vi. Emacs and vi are both mere footnotes compared to VSCode in terms of market and mind share, and even Eclipse has been largely supplanted within its niche by the far easier IntelliJ suite.

I suppose vi and Emacs will be around as long as they have diehards still using them, but they're not capturing new users' hearts and minds like they used to. Most professional developers I've met hear Emacs and think it's something that last ran on dusty old DEC iron -- and that includes people who worked at DEC in the 80s! The fact that Emacs is still maintained and runs on modern systems came as a genuine surprise to them.

Emacs and vi held on for as long as they did because there was a significant contingent of developers and technical people still familiar with a traditional Unix environment up until the 2000s or so. As some of these people aged out of the work force, they were replaced by an even larger contingent of younger developers, who only ever grew up with Windows and Mac, and have a reasonable expectation that programs work the Windows and Mac way. By and large they don't even have much interest in Linux on the desktop. Linux is for servers/the cloud. Their dev workstation is a Mac. And Emacs and vi are relics from the before times, whose 1970s UIs just aren't worth grappling with because there are better options available. Those are the people Emacs has to reach if it is to stay relevant. Which means that one way or another it will be "modernized" whether we like it or not.


>Most professional developers I've met hear Emacs and think it's something that last ran on dusty old DEC iron -- and that includes people who worked at DEC in the 80s! The fact that Emacs is still maintained and runs on modern systems came as a genuine surprise to them.

Bullshit, a lot of Linux/Unix developers used Emacs in the 90's and early 00's.

I was there.


> Emacs and vi are both mere footnotes compared to VSCode in terms of market and mind share

According to StackOverflow's 2019 developer survey [0], 50.7% of respondents used VSCode and 25.4% used Vim. So that doesn't really support your claim that vi is a "mere footnote". 25% market share isn't 50%, but it is still decent and more than a mere "footnote".

[0] https://insights.stackoverflow.com/survey/2019#development-e...


Wow even Emacs doing better than I would have thought at 4.5%.

People overestimate the value of massive community and apply a winner takes all mentality too often. You don't need a giant community, and almost by definition giant communities tend to have a lower average user quality leading to lower quality of libraries and support (looking at you JS ecosystem).


This. And those Mac zealot Gen-Zers know shit about what happened in the late 90's and early 00's, when people used vim and Emacs like crazy, especially under X, because Emacs was really comfortable and fast under a mouse while keeping its keyboard bindings.

They think the world revolves around them with JS and hip editors, while in the end they are the utter minority in serious IT/dev environments.


> Meet. Your. Users. Where. They. Are.

But who are your users?

I started using emacs back in the 1990s, when I was in high school; I actually started using it under OS/2 (as part of EMX), later switched to running it under Linux. I never had trouble with the concept or name "buffers". I just read the manual and the manual explains everything very clearly. (I really loved the GNU Emacs manual, it was a pleasure to read.) In more recent years, I've abandoned emacs completely for vim, but vim calls them buffers too. I am totally at home with a text editor with "buffers", and changing the name isn't going to make anything less confusing for me – if anything, the change is going to confuse me.

Tools like emacs and vim are not targeted at "ordinary" users. If a tool isn't aimed at "ordinary" users, it shouldn't be judged for not meeting their needs, since it was never trying to.

I even wrote a text editor myself (half-finished personal project, I've never released it publicly, don't know if I ever will, it was really just to prove to myself I could do it). And it has "buffers" too, because that is the concept and terminology I am comfortable with.


You're underplaying your hand, you know. Why leave out that cut, copy, and paste have keyboard shortcuts in Emacs that are used nowhere else, and can't be rebound to match muscle memory in any way that's really useful?

I mean - if you're going to ignore everything that makes Emacs great on your way to cavil about how it's different, why not go all out, right?


> cut, copy, and paste have keyboard shortcuts in Emacs that are used nowhere else

Try `C-w`, `C-k` and `C-y` inside a Bash shell sometime, you might be surprised.

`M-w` is an exception, though.


Oh, interesting. I knew about C-s and C-r there, and knew C-k as kill to end of line, but didn't know yank worked there too - thanks for the tip!


> Emacs also has an "undo", but no "redo" command

They don't have default bindings (unfortunate), but try installing a recent build of Emacs 28 and binding these commands to the keys of your choice:

- undo-only

- undo-redo
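For example, a minimal init-file sketch (the key choices here are arbitrary examples, not defaults):

```elisp
;; Hypothetical bindings; pick whatever keys suit your muscle memory.
;; `undo-only' never redoes; `undo-redo' undoes a previous undo.
(global-set-key (kbd "C-z") #'undo-only)
(global-set-key (kbd "C-S-z") #'undo-redo)
```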


I like emacs, and I'm not a lisp wizard, but it's an amazing editor. Sure, it has problems (long lines can grind you to a halt, although I've never halted emacs even with GB-sized single-line files; I did have to kill buffers though, and there are probably other things; on the other hand, I did grind plenty of other editors with much less), but...

> Emacs also calls "files" "buffers", and calls "windows" "frames" and "frames" "windows"

Well, as someone pointed out, buffers are buffers, files are files, why would your editor try to fool you regarding the fact that what you have in front of you is in a buffer (that might be persisted into a file) and not exactly a file? Every buffer tells you if it's pointing to some file...

Frames aren't windows. Like tabs in a browser aren't browsers or windows either?

> It has weird keyboard shortcuts.

Yeah, the worse part of it all is you can't even change them, right?

> Oh, but if you press another key in between, you get lost in clipboard madness.

Yep, sometimes that happens; on the other hand, the times it doesn't, being able to cycle through things in ways that aren't easy in other editors is great.

> Emacs also has an "undo", but no "redo" command

This ties back to the ring design? You can switch the direction you're going with two key-strokes. Granted, if you change the buffer (outside of just navigating the ring or copying into buffer, save, etc) then it does get weird, but it's usually pretty consistent and useful. It's like having a step by step git to your inputs into the buffer.

But this, this, this:

> I agree that using a JS runtime makes more sense than Emacs's weird e-lisp runtime, with all of its dynamic-scope weirdness.

If the JS runtime, with the millions of hours and dollars poured into making it performant (the engine itself, the libraries, and general language improvements), didn't have an edge in some things over runtimes created decades ago (and mostly maintained by volunteers), it would probably be the biggest facepalm in programming history.

Having said all this, I'm happy people keep experimenting and building stuff for others to use, even if in JS, or Rust, or Zig, or brainfuck.


it is fascinating to see how (particularly in older software / systems) radically different decisions were made about what is nowadays considered "fundamental" UX, and to imagine what the world would have been like if this other thing happened to become the consensus standard.

Blender is, I think, another example of this... It's quite good, but relative to the other 3D editing tools available, it has an absolute space-alien UI. Quite internally consistent; a bear to get into if you learned 3DS Max or Maya first.


> Blender is, I think, another example of this... It's quite good, but relative to the other 3D editing tools available, it has an absolute space-alien UI.

When did you last start Blender? This used to be true. Nowadays it's at least on par with commercial tools if not better.


I use Blender all the time; it's a great tool. What I mean is that its default keybindings, UI manipulation metaphors, etc. are notably different from the applications that live closer to 3DSMax and Maya on the family tree of 3D editors.

* Moving an object in Blender: Select, press 'G', now the mouse floats the object without the need to hold a mouse button down, optionally hit 'x', 'y', or 'z' to constrain to those axes. You can also take advantage of the 3D cursor.

* Moving an object in 3DS Max: Select to bring up manipulation widgets, click and drag the image of the widgets to move the object.

Starting to learn Blender if you learned 3D scene creation in Max is like starting to use emacs if you learned word processing in Microsoft Word. The metaphors aren't categorically better or worse; they're very different.


Compare the recursive Fibonacci function against Scheme with Guile's JIT.


As far as I understand Guile-emacs is pretty much dead in the water, it hasn't seen any substantial development in quite a while. Your best bet for better emacs performance is the gccemacs branch, that integrates the libgccjit and builds off the existing emacs bytecode infrastructure and compiles it to native code. It is much much further along than guile-emacs and on track to be merged into emacs master at some point.

That would cover one major point of complaint with emacs performance, the other would be getting some form of real threading and parallelism into emacs so that long-running elisp doesn't block the UI thread, but that is a pretty huge task. With that said, I am sure that the community realizes this and is slowly working their way towards this.


EDIT: as pointed out, this is run with Guile 2.2, which is pre-JIT. Results to be posted later in this thread...

Okay.

    $ cat fib.scm
    (define (fibonacci n)
      (if (<= n 1)
          n
        (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))
    (fibonacci 40)
    $ time guile fib.scm
    guile fib.scm  8.52s user 0.01s system 99% cpu 8.559 total
Node also uses V8.

    $ time node fib.js 
    node fib.js  1.29s user 0.01s system 100% cpu 1.296 total
But okay, you wanted deno.

    $ time deno run fib.js
    deno run fib.js  1.41s user 0.02s system 99% cpu 1.423 total
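(The fib.js source wasn't posted; presumably it's just a direct translation of fib.scm, something like:)

```javascript
// Presumed fib.js: naive doubly-recursive Fibonacci, same shape as fib.scm.
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
console.log(fibonacci(40)); // 102334155
```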
Meanwhile, using `emacs --script fib.elc` to evaluate byte-compiled (fib 40) takes about 32s. And `emacs --script fib.el` takes about 2x that, at 64s.
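(Similarly, the fib.el being timed here is presumably the direct transcription:)

```elisp
;; Presumed fib.el: same shape as fib.scm above.
(defun fib (n)
  (if (<= n 1)
      n
    (+ (fib (- n 1)) (fib (- n 2)))))
(fib 40)
```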

edit: for reference, the equivalent C code runs in 0.27s on my laptop.

edit edit: okay, okay, compiling the Guile code before running it speeds it up to an exciting 8.16s execution time, still about 6x slower than deno.


This got me curious about other Scheme implementations.

Chicken compiles to C

    $ csc fib.scm
    $ time ./fib

    real 0m8.461s
    user 0m8.383s
    sys  0m0.061s
Racket has a modern JIT

    $ cat fib.rkt 
    #lang racket
    (define (fibonacci n)
      (if (<= n 1)
          n
          (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))
    (fibonacci 40)
    $ time racket fib.rkt
    102334155

    real 0m1.396s
    user 0m1.142s
    sys  0m0.127s
For reference

    $ sysctl -a | grep .brand_string
    machdep.cpu.brand_string: Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz


This got me interested in other lisps, as well. Looks like sbcl gives (time (fib 40)) to be about 1.7 seconds on my machine. (Ryzen 5 3600x.)

For lulz, I clocked a dumb loop version.

    (defun fib (n)
        (loop repeat n
              for a = 1 then b
              for b = 1 then c
              for c = (+ a b)
              finally (return a)))
And... well, serves as an amusing reminder that tree recursion has its downsides. :D

Edit: I confess the speed of the loop got me curious and.... javascript has numeric overflow. That would be a hell of a surprise if you weren't expecting it.

Edit2: (I realize overflow isn't the right term, but it is amusing that I can calc the 10000th fib in elisp... not so much in javascript. And yes, even elisp falls over sooner than sbcl does.)
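For the curious: plain JS numbers silently lose integer precision past Number.MAX_SAFE_INTEGER (2^53 - 1), which the Fibonacci sequence reaches around fib(79), but BigInt (an ES2020 feature, so present in current Node/Deno) stays exact. A sketch:

```javascript
// Iterative Fibonacci on BigInt: exact at any index,
// where plain doubles go inexact past 2^53.
function fibBig(n) {
  let a = 0n, b = 1n;
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}
console.log(fibBig(100)); // 354224848179261915075n
```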


>And... well, serves as an amusing reminder that tree recursion has its downsides. :D

OFC I would rewrite that as an iterative form (and any Schemer here could :D), but here we are testing performance :D.


Agreed. I didn't mean that necessarily as a criticism of anything. Just hard to have an intuition for just how much it slows down.

I suppose there is a bit of something in there about using the right algorithm. But I can understand picking one that will take time for comparing speed. I'm not sure either is really representative of work you expect an editor to do, though. Curse of benchmarks.


The speed of the naive recursive fib scales with the number it computes, which is roughly 1.6 ^ n (the eigenvalue of the matrix for the recurrence relation is the golden ratio, phi).

Intuition: if you were to expand and flatten the recursive calls, you'd end up with something like

(+ 1 0 1 1 0 1 1 0 ... 1 0)

with exactly (fib n) 1s (base case (fib 1)), and no more than (fib n) 0s (base case (fib 0)).

Simple example:

     (fib 3)
     = (+ (fib 2) (fib 1))
     = (+ (+ (fib 1) (fib 0)) 1)
     = (+ (+ 1 0) 1)
     = (+ 1 0 1)


  With scm, using (trace proc):
 
 scm -f fib.scm
 
 call fib 9
    call fib 8
      call fib 7
        call fib 6
          call fib 5
          retn fib 5
          call fib 4
          retn fib 3
        retn fib 8
        call fib 5
          call fib 4
          retn fib 3
          call fib 3
          retn fib 2
        retn fib 5
      retn fib 13
      call fib 6
        call fib 5
          call fib 4
          retn fib 3
          call fib 3
          retn fib 2
        retn fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
      retn fib 8
    retn fib 21
    call fib 7
      call fib 6
        call fib 5
          call fib 4
          retn fib 3
          call fib 3
          retn fib 2
        retn fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
      retn fib 8
      call fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
        call fib 3
          call fib 2
          retn fib 1
          call fib 1
          retn fib 1
        retn fib 2
      retn fib 5
    retn fib 13
  retn fib 34
  call fib 8
    call fib 7
      call fib 6
        call fib 5
          call fib 4
          retn fib 3
          call fib 3
          retn fib 2
        retn fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
      retn fib 8
      call fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
        call fib 3
          call fib 2
          retn fib 1
          call fib 1
          retn fib 1
        retn fib 2
      retn fib 5
    retn fib 13
    call fib 6
      call fib 5
        call fib 4
          call fib 3
          retn fib 2
          call fib 2
          retn fib 1
        retn fib 3
        call fib 3
          call fib 2
          retn fib 1
          call fib 1
          retn fib 1
        retn fib 2
      retn fib 5
      call fib 4
        call fib 3
          call fib 2
          retn fib 1
          call fib 1
          retn fib 1
        retn fib 2
        call fib 2
          call fib 1
          retn fib 1
          call fib 0
          retn fib 0
        retn fib 1
      retn fib 3
    retn fib 8
  retn fib 21
  55
 
 scm -f fib-iter.scm
 
 call fib-iter 1 0 10
    call fib-iter 1 1 9
      call fib-iter 2 1 8
        call fib-iter 3 2 7
          call fib-iter 5 3 6
          retn fib-iter 55
        retn fib-iter 55
      retn fib-iter 55
    retn fib-iter 55
  retn fib-iter 55
  55


Right. I "know" how tree recursion scales up. It is still neat to "see" the impact on the speed.

That is, knowing all of these facts, it is still impressive how fast you can get (fib 1000000) using iteration and bignums.


Indeed, that runtime is roughly O(n^2) when dealing with bignums, since addition scales linearly with the bit length of the operands, and the bit length increases linearly with the number of iterations (~0.7 bits per iteration).

Which is still not fast -- on my laptop, Python takes about 9s for fib(1000000). But compared to an exponentially-growing function, that's a blink of an eye.
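A quick sanity check of that ~0.7 figure (log2 of the golden ratio ≈ 0.694), sketched with an iterative BigInt fib in JS:

```javascript
// Bit length of fib(n) should be about 0.694 * n.
function fibBig(n) {
  let a = 0n, b = 1n;
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}
const bits = fibBig(10000).toString(2).length;
console.log(bits / 10000); // ≈ 0.694 bits per iteration
```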



Apologies, I should have acknowledged there are ways to get around that today. My guess is most folks writing javascript are not aware of BigInt. And will only become so when it bites them.

Granted, this is also true of fixnum in common lisp. At least "biting you" with that should only be performance. (I can't think of many other ways for that one to get you.)

Edit: I should further ack that "getting around" it is as simple as adding "n" to one of the constants. So, it is nicely done.


I don’t think I’ve seen BigInt in the wild Wild West of JS at all, so that’s probably true that most folks are not aware of it.

I wonder if there’s some room for improvement since using GMP is pretty standard for most languages.

Edit: forgot to link to a BigInt comparison in major language implementations http://www.wilfred.me.uk/blog/2014/10/20/the-fastest-bigint-...


Chez Scheme gives:

    > (time (fibonacci 40))
    (time (fibonacci 40))
    no collections
    1.364130664s elapsed cpu time
    1.364456065s elapsed real time
    0 bytes allocated
    102334155


Chicken's default is fairly unoptimized; try higher optimization levels.


Oh dang...you're right

    $ csc -optimize-level 5 fib.scm 
    $ time ./fib

    real 0m5.506s
    user 0m5.489s
    sys  0m0.010s
I could get a few fractions of a second better playing with the stack size (4 MB was the best, 8 MB or above would segfault)

    $ csc -optimize-level 5 -stack-size 4m fib.scm 
    $ time ./fib

    real 0m5.295s
    user 0m5.277s
    sys  0m0.011s
Everything else was negligible or extremely varied.


Yep, to get beyond that you basically have to stop writing scheme and start writing a tortured variant of Chicken, for performance. ;)

I stopped writing Chicken scheme code a few years ago; it was fun due to the ease of interoperating with C, but I found Chibi a better fit for that niche eventually.


        guile --version?
EDIT: Guile3 brings JIT support...

Guile always compiles code before running it, but the JIT only exists from v2.91.x onward, and is on by default in v3.


Ah. This is on Guile 2.2, apparently. Installing guile3 and rerunning...


Better, but still slower by about 1.5x.

    $  time guile3 fib.scm
    guile3 fib.scm  2.01s user 0.02s system 99% cpu 2.047 total
edit: apparently my cpu was still being throttled post-build or something, as the initial runs were around 3s. Rerunning slightly later, it was noticeably faster.


Well, not bad, considering Guile3 and its JIT are pretty recent releases compared to the enormous work of a company as huge as Google, with a full team working on V8 across years.

Guile3 will improve that, too.


> Meanwhile, using `emacs --script fib.elc` to evaluate byte-compiled (fib 40) takes about 32s

Which Emacs version is this?

Any chance you want to try the native-comp branch and post its numbers here as well?


This was emacs 27.1 (which is the version in the Arch repo).

I can install native-comp via AUR. It'll take some time.


I think this is what you're looking for.

    time emacs --script fib-0b03fe9a-6b386e71.eln    
    emacs --script fib-0b03fe9a-6b386e71.eln  22.47s user 0.09s system 99% cpu 22.664 total
About 50% faster, which is nothing to sneeze at, but still appreciably slower than most other things I tried.

(I copied the .eln from the cache to confirm that's what I'm running against; I got a similar speedup from running against fib.elc.)


Asking since it wasn't mentioned: was fib compiled with comp-speed 3?

That should matter considerably in this benchmark.

PS you can also run a wider set of benchmarks to compare against stock Emacs using:

https://elpa.gnu.org/packages/elisp-benchmarks.html

Here are some somewhat outdated results:

http://akrl.sdf.org/gccemacs.html#org4297f0f


Hi Andrea (right?),

IIUC, you're saying the fib benchmark gets optimized out at speed 3 (and thus runs faster than all the implementations discussed here).

But what about speed 2, which is the current default? If we're 15 times slower on this benchmark, does that mean that Node has much cheaper function calls, or something like that?

Because we have to keep Emacs Lisp functions advice-able (which I agree is a good thing)?


IMO benchmarking is a way broader topic.

Specifically a single fibonacci example is not a good performance indicator by any means.

For instance, let's assume V8 is on average faster at running this kind of nanobenchmark (probably, at least for now, it is), but how much does it cost to convert non-trivial data structures from Lisp to JS and back in a real case?

I expect there will be a lot of back and forth in the system if JS and Lisp get mixed and have to cooperate.

How much does it cost to go through foreign function calls?

These are just initial thoughts that are not accounted for at all here.

And even benchmarking nano-benchmarks can be surprisingly tricky ;) :) https://github.com/emacs-ng/emacs-ng/issues/187#issuecomment...


Sure.

My questions here are about whether these results indicate further optimization potential for native-comp (in the default configuration, hopefully).

emacs-ng is an interesting experiment, but I think I will only be able to seriously consider it if it comes to reimplementing the Elisp VM on top of V8. Then it will be a single runtime, and little to no data structure conversion and FFI will be required.

In the meantime, I'll keep following the native-comp progress ;), as well as dreaming of Web Workers in stock Elisp API.


Thank you. That's an interesting result.


I’m not convinced emacs performance is that bad. It’s pretty highly optimised for typing at a normal rate though I agree the data structures aren’t great for more graphical things (lots of properties, random access inserts, even just long lines). But then the way to improve this is to improve the data structures rather than replacing emacs lisp.

Emacs performs better than many other applications at important objective measures, eg the time between pressing a key and the character showing up on screen. You might expect vim to be good at this but many terminal emulators are optimised for throughput rather than latency and are slower than gui emacs.

Certainly legacy is a problem but I worry that replacing eg font-lock with something with better performance properties (does this actually do that?) means throwing out the baby with the bath water as so many modes depend on these things.

Typically the reason that emacs becomes slow is due to an overload of features or bad asymptotics. Examples:

- global-auto-revert-mode plus lots of buffers, plus some on a slow file system like NFS, or maybe lots of open dired buffers

- some mode using a configured alist that works fine for small lists but really sucks for big ones (ie perf is linear in size of config, but often this comes hand in hand with trying to process big inputs so it can be worse.) I think this happened with spacemacs and which-key-mode for example

- flyspell + flycheck + a build running in the background + some autocompletion server. Even a powerful computer can be slowed down by this. Some of these are likely to slowly improve over time as they become more asynchronous. Others are problems you're likely to have with any editor (a nice emacs thing was that it was easy for me to add some advice to pin my build so it wouldn't use all my CPU. It can be really hard to poke around in other plugin systems to do that sort of thing without a config option for it)

- just piling on the modes with something like spacemacs and not paying attention to the performance of the whole. I think doom-emacs shows a lot of promise here.

I do think there are advantages that could be made by modernising emacs’ core and improving data structures (whatever happened to remacs?)

I feel like the fundamental design of emacs is good however and I like:

1. keymaps and the command loop

2. Buffer-local variables, advice, dynamic scoping

3. Documentation

4. Fundamentally text-based interface

Obviously you should disregard all of this though as I am very biased due to my emotional reaction to getting rid of emacs lisp which, frankly, you can pry from my cold dead hands.

————————————

Some other thoughts I had on emacs lisp as a good language for a text editor:

1 https://news.ycombinator.com/item?id=19343908

2 https://news.ycombinator.com/item?id=22881597

3 https://news.ycombinator.com/item?id=18605001

On the “fast” measurement that actually matters: https://news.ycombinator.com/item?id=23432292


> I’m not convinced emacs performance is that bad.

In my experience with modern hardware - a gaming PC - Emacs is noticeably sluggish compared to most other editors in a few specific ways: particularly syntax highlighting. When is the last time you used a different editor?

> whatever happened to remacs?

It's still there: https://github.com/remacs/remacs/wiki/Progress

As far as I can tell, its goal isn't to change anything about the core design though; just to re-implement the core in Rust.


The last time that page was edited was 2 years ago. The last commit wasn't particularly recent either.


remacs is dead, they don't accept merge requests. For some reason they did not find it reasonable to mention this in the README, so sometimes potential contributors arrive and do unnecessary work until someone tells them it's "stalled".


One fundamental thing I run into fairly often is Emacs's horrific performance on long lines. Not really an issue for code, but looking at large JSON or text files routinely almost crashes my Emacs.


Agreed: for most of my day-to-day tasks, emacs is performant enough, subject to the constraints of my CPU vis-à-vis interpreting the underlying lisp... but I do find myself actively avoiding long single-line JSON blobs, because of the reliable repeatability of emacs locking up and becoming unusable on a long line.

It’s not enough to stop me using emacs, but it is enough to make me go out of my way to avoid it (and to grumble about).


As an Emacs user, I just have no use for this, and I get the feeling most Emacs users won’t either.


I will add to this that I also have no reason to begrudge something like this. It is scratching an itch for those involved. Kudos on that, and best of luck with it. I just can't pretend this is solving a problem I have. :D


This looks very cool


I personally look forward to the day when every program I run embeds JavaScript and an HTML engine in it. Then we will be living in an age of true modernity in software development.

Systemd-ng, anyone?


I'm sure Samsung, Micron and every other company selling DIMMs are looking forward to that day too. How much more memory would everything need?

My beefy desktop computer is often already struggling just to run Chrome.

Emacs used to stand for "Eight Megabytes And Constantly Swapping"; maybe they could now name it Egacs ("Eight Gigabytes ...").


In the days when the 486 was nothing more than "junk" compared to a Pentium II, running Emacs with SICP in Texinfo format was the lightweight approach compared to opening a full browser to read HTML pages.

Ditto with Groff+Mom occupying a few MB (totally doable in 1997) vs a gigabyte LaTeX install taking several minutes to render a PostScript file, and then running X (if at all) just to display the result and nothing more.

Nowadays even Qt5 software looks lightweight compared to some Electron monsters...


Is this satire?


You're not thinking modern enough; what we need is uboot-ng!


That's Gnome3-4.


I don't like this one bit. Good luck to the author and team, but I despise attempts to bring emacs "into the future."



