Internally, we've been replacing Draft.js with Lexical. To be clear though, they're different projects with little API compatibility. We hope to add some docs explaining how we've approached our upgrade path in the future.
To expand on this further, I'll pull this from another HN thread:
Draft.js was built a long time ago, when many of the concerns around making contentEditable work stemmed from patching up browser support. Today, it's nowhere near as bad: we can leverage modern events and tackle things from a different point of view.
One of the core things we've tried to do is make the developer experience and performance better. Draft.js pulled in a lot of JavaScript, and much of it was hard to reason about because of the lack of types. ImmutableJS just didn't scale the way we would have liked, and from our experience, developers didn't really like using it all that much. Draft.js also had a block-based approach, which quickly fell apart when you wanted to do something more complex. Not to mention compatibility problems with React 18+, and the countless issues that came from depending on ReactDOM for rendering while fighting browser extensions that want to take control of the DOM away from Draft.
With these things in mind, we looked at how we could keep the good ideas from Draft, Slate, and ProseMirror, and also invent some new ideas of our own. Lexical doesn't have any dependencies, so you can use it with Svelte or Solid (once their bindings have been created), or any other framework of your choice. Lexical also doesn't need ImmutableJS, which means the APIs are fully typed in Flow and TypeScript, catching a whole class of issues before runtime. Lexical is also around 22kB min+gzip, so it's far smaller than Draft. Typing performance in our testing is around 30-70% faster compared to Draft.
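To give a feel for what "no dependencies" means in practice, here's a rough sketch of driving Lexical's core API without any framework binding. The host element and namespace are just illustrative, and a real editor would still register the plain-text or rich-text behaviours on top of this:

```ts
import {
  createEditor,
  $getRoot,
  $createParagraphNode,
  $createTextNode,
} from 'lexical';

// Any contentEditable element can act as the editor surface; this one is
// created on the fly purely for illustration.
const host = document.createElement('div');
host.contentEditable = 'true';
document.body.appendChild(host);

// createEditor() is framework-agnostic; no React or ReactDOM involved.
const editor = createEditor({
  namespace: 'demo', // illustrative namespace
  onError: (error) => console.error(error),
});
editor.setRootElement(host);

// All state changes go through editor.update(); the $-prefixed helpers are
// only valid inside an update or read.
editor.update(() => {
  const paragraph = $createParagraphNode();
  paragraph.append($createTextNode('Hello from plain TypeScript'));
  $getRoot().append(paragraph);
});

// React to state changes without any framework binding in the loop.
editor.registerUpdateListener(({ editorState }) => {
  editorState.read(() => {
    console.log($getRoot().getTextContent());
  });
});
```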
> typing performance in our testing is around 30-70% faster compared to Draft.
At how many "lines of code"? I'm skeptical that the scaling from 10k to 100k to 1 million lines is linear, so I'm curious what 30-70% actually means.
For example, VSCode tried a new text buffer and benchmarked it at different source text sizes. There is a critical point past which their PieceTree implementation keeps scaling, whereas the line-based text buffer approach does not.
Mostly referring to high-traffic surfaces here, like Facebook Feed or Messenger. The % varies depending on the environment and the previous Draft.js implementation (and its plugins), but overall it's been very positive, especially on low-end devices that were going way over 16ms of response time per keypress.
This is my point. Facebook surfaces include stuff like editing a post, sending a message. All of these have relatively small character limits versus a text editor that handles millions of lines with tens of millions of characters.
In the case of text editors, one-object-per-line implementations work fine for small files. They'll even edge out the overhead of more esoteric implementations that require trees: lookups are O(1) versus O(log n) in the depth of the tree, but insertions get slow because you have to shift all the array elements. Again, for a small number of lines, that's fine. For small data (tens of thousands of characters), even a giant string will do fine, and within tens of milliseconds, too.
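To make that trade-off concrete, here's a toy sketch of the one-object-per-line approach (not taken from any of these libraries, just an illustration):

```ts
// Toy one-object-per-line buffer, purely to illustrate the trade-off above.
class LineBuffer {
  private lines: string[];

  constructor(text: string) {
    this.lines = text.split('\n');
  }

  // O(1): a plain array index, which beats walking a tree for small files.
  getLine(n: number): string {
    return this.lines[n];
  }

  // O(n) in the number of lines: splice() shifts the whole tail of the
  // array. Unnoticeable at 10k lines, painful at 100k-1M lines when it
  // happens on every keystroke that adds or removes a line.
  insertLine(n: number, text: string): void {
    this.lines.splice(n, 0, text);
  }
}
```

A piece tree or rope keeps both lookup and insertion around O(log n), which is where a crossover point like the one in the VSCode benchmark comes from.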
Facebook Blog / Article posts may get relatively long, but still not into the megabytes (metadata, not images).
My point of contention is the claim and marketing that the library is suitable as a high-performance text editor implementation. It depends on what the constraints and use case are. For Facebook posts and small editor widgets, I'll believe that. There is a hint that it can serve as the foundation for an IDE, but likely only a small snippet editor or a toy IDE implementation. The devil is in the details for a "serious" IDE / text editor like VSCode / Ace. It's an apples-to-bananas performance claim.
> low-end devices that went way over 16ms response time per keypress
Shaving milliseconds off keypresses for users on low-end devices is mostly a Facebook and FANG concern. This performance gain is invisible and unimportant to someone looking to build an IDE whose users are on powerful devices. For people editing huge files, the bottlenecks are caused by data structure choices, which I don't think will be mitigated by the high-level, convenient programming paradigm Lexical offers.
I think it would be helpful to explicitly say that draft.js is deprecated here [1]. I was recently caught out by this, and started using draft.js without realising it’s been abandoned.
Great news to hear Lexical replacing Draft. Also great that you're handling `beforeinput` (like Slate).
I work on a text expander browser extension and supporting Draft is a pain [1].
Are there any plans to make inserting text from the "outside" easier in Lexical? Maybe exposing the Lexical instance on the DOM node (like CKEditor does)? Right now we're using `execCommand` to support Lexical.
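Roughly what that looks like on our end today, as a sketch (the selector and expansion text are just illustrative; there's no Lexical-specific hook on the DOM node, which is exactly the problem):

```ts
// Focus the editor's contentEditable and go through execCommand, which runs
// the browser's native editing pipeline, so the editor sees the insertion as
// if the user had typed it. execCommand is deprecated but still widely
// supported, which is why an exposed editor instance or a supported
// insertion API would be much nicer.
function insertTextAtCaret(editable: HTMLElement, text: string): void {
  editable.focus();
  document.execCommand('insertText', false, text);
}

// Usage: the selector is illustrative; a Lexical editor is just a
// contentEditable element with no guaranteed attribute to hook onto.
const target = document.querySelector<HTMLElement>('[contenteditable="true"]');
if (target) {
  insertTextAtCaret(target, 'expanded snippet text');
}
```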