Goodbye, Node.js Buffer (sindresorhus.com)
217 points by ingve on Oct 24, 2023 | 98 comments



The proposal for native base64 support for Uint8Arrays is mine. I'm glad to see people are interested in using it. (So am I!)

For a status update, for the last year or two the main blocker has been a conflict between a desire to have streaming support and a desire to keep the API small and simple. That's now resolved [1] by dropping streaming support, assuming I can demonstrate a reasonably efficient streaming implementation on top of the one-shot implementation, which won't be hard unless "reasonably efficient" means "with zero copies", in which case we'll need to keep arguing about it.
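To make the layering concrete, here is a rough sketch of a streaming encoder built on the one-shot method (purely illustrative; `toBase64()` is the proposed API and the final design may differ). It buffers input to 3-byte boundaries so each one-shot call emits complete base64 quartets, at the cost of one copy per chunk:

    class Base64Encoder {
      #pending = new Uint8Array(0);

      push(chunk) {
        // Prepend any leftover bytes from the previous chunk.
        const combined = new Uint8Array(this.#pending.length + chunk.length);
        combined.set(this.#pending);
        combined.set(chunk, this.#pending.length);
        // Encode only whole 3-byte groups; carry the remainder forward.
        const usable = combined.length - (combined.length % 3);
        this.#pending = combined.slice(usable);
        return combined.subarray(0, usable).toBase64();
      }

      flush() {
        return this.#pending.toBase64(); // final partial group, with padding
      }
    }

Whether that copy-per-chunk counts as "reasonably efficient" is exactly the open question.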

I've also been working on documenting [2] the differences between various base64 implementations in other languages and in JS libraries to ensure we have a decent picture of the landscape when designing this.

With luck, I hope to advance the proposal to stage 3 ("ready for implementations") within the next two meetings of TC39 - so either next month or January. Realistically it will probably take a little longer than that, and of course implementations take a while. But it's moving along.

[1] https://github.com/tc39/proposal-arraybuffer-base64/issues/1...

[2] https://gist.github.com/bakkot/16cae276209da91b652c2cb3f612a...


Thanks for doing this!

I had to convert a ReadableStream to base64 recently, and I was shocked at how much boilerplate it required in 2023:

    if (isReadableStream<Uint8Array>(value)) {
      const chunks = [];

      for await (const chunk of value) {
        chunks.push(chunk);
      }

      if (chunks[0].byteLength) {
        const length = chunks.reduce(
          (agg, next) => agg + next.length, 0
        );

        // Make the same species TypedArray that ReadableStream gave us.
        value = new (chunks[0].constructor as Uint8ArrayConstructor)(length);

        for (let i = 0, offset = 0; i < chunks.length; i++) {
          value.set(chunks[i], offset);
          offset += chunks[i].length;
        }
      } else {
        throw new Error(`Unrecognized readable stream type: ${ chunks[0].constructor.name }`);
      }
    }

    return `data:application/octet-stream;base64,${
      btoa(
        (value as Uint8Array).reduce(
          (result, charCode) => result + String.fromCharCode(charCode),
          ''
        )
      )
    }`;

I had to learn/implement a lot of little details to simply change the encoding from binary to its most common text representation. That's a day I would have rather spent doing product work.


This specific proposal will only really help with the last part - it will let you write

    return `data:application/octet-stream;base64,${(value as Uint8Array).toBase64()}`;
but you'll still have to do the work of reading the stream to a buffer yourself.

There is a _very_ early stage (as in, it's literally just an idea one person had, which may never happen) proposal [1] to do zero-copy ArrayBuffer concatenation, which would further simplify this - once you'd collected the chunks you could `value = new Uint8Array(ArrayBuffer.of(...chunks.map(chunk => chunk.buffer)))` instead of doing manual concatenation.

Finally, there's the Array.fromAsync proposal [2] and/or async iterator helpers proposal [3] (which I am also working on), which would make it easier to collect the chunks. Putting these together, you'd get something like

    if (isReadableStream<Uint8Array>(value)) {
      const chunks = await Array.fromAsync(value);

      if (chunks[0].byteLength) {
        value = new Uint8Array(ArrayBuffer.of(...chunks.map(chunk => chunk.buffer)));
      } else {
        throw new Error(`Unrecognized readable stream type: ${ chunks[0].constructor.name }`);
      }
    }

    return `data:application/octet-stream;base64,${(value as Uint8Array).toBase64()}`;

[1] https://github.com/jasnell/proposal-zero-copy-arraybuffer-li...

[2] https://github.com/tc39/proposal-array-from-async

[3] https://github.com/tc39/proposal-async-iterator-helpers


It's nice that in an imaginary future that code would be shorter, but it's unfortunate that the conceptual understanding needed to write it isn't. You'd still need to know how to juggle a whole bunch of related concepts - ReadableStream chunks, Uint8Arrays, ArrayBuffers - to write that transformation.

Why does de-chunking a byte array need to be complicated:

    new Uint8Array(ArrayBuffer.of(...chunks.map(chunk => chunk.buffer)))
especially when chunking is specified by the platform in ReadableStream?

-----

You have made me realize I don't even know what the right venue is to vote on stuff. How should I signal to TC39 that e.g. Array.fromAsync is a good idea?


Yeah, in your case I think most of the complexity is actually on the ReadableStream side, not the base64 side.

The thing that I'd actually want for your case is either a TransformStream for byte stream <-> base64 stream (which I expect will come eventually, once the simple case gets done; it's also easy in userland [1]), or something which would let you read the entire stream into a single Uint8Array or ArrayBuffer, which is a long-standing suggestion [2].
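That second option would essentially be a built-in version of this userland sketch (assuming the stream is async-iterable, which is true in Node and newer engines):

    async function streamToUint8Array(stream) {
      const chunks = [];
      let length = 0;
      for await (const chunk of stream) {
        chunks.push(chunk);
        length += chunk.length;
      }
      // Concatenate into one contiguous array.
      const result = new Uint8Array(length);
      let offset = 0;
      for (const chunk of chunks) {
        result.set(chunk, offset);
        offset += chunk.length;
      }
      return result;
    }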

---

> Why does de-chunking a byte array need to be complicated

Keep in mind the concat proposal is _very_ early. If you think it would be useful to be able to concat Uint8Arrays and have that implicitly concatenate the underlying buffers, [3] is the place to open an issue.

---

> You have made me realize I don't even know what the right venue is to vote on stuff. How should I signal to TC39 that e.g. Array.fromAsync is a good idea?

Unfortunately, it's different places for different things. Streams are not TC39 at all; the right place for suggestions there is in the WHATWG streams repo [4]. Usually there's already an existing issue and you can add your use case as a comment in the relevant issue. TC39 proposals all have their own Github repositories, and you can open a new issue with your use case.

Concrete use cases are much more helpful than just "this is a good idea". Though `fromAsync` in particular everyone agrees is good, and it mostly just needs implementations, which are ongoing; see e.g. [5]. If you _really_ want to advance a stage 3 proposal, you can contribute a PR to Chrome or Firefox with an implementation - but for nontrivial proposals that's usually hard. For TC39 in particular, use cases are only really valuable for pre-stage-3 proposals.

[1] https://github.com/lucacasonato/base64_streams/blob/7c4ed815...

[2] https://github.com/whatwg/streams/issues/1019

[3] https://github.com/jasnell/proposal-zero-copy-arraybuffer-li...

[4] https://github.com/whatwg/streams

[5] https://bugs.chromium.org/p/v8/issues/detail?id=13321


It's good to see that people care about native JS standards. I'm a bit concerned about how Bun seems to be adding their own proprietary APIs to their JS runtime, and I can't help feeling that it could be intentional vendor lock-in. Since they are venture-backed, I'm sure they want some ROI from the project eventually, and if they start charging money it will be harder to switch away from the proprietary APIs without refactoring your project.


It's really because we have some wonderful collaboration between vendors now, and they don't want to have to come up with their own APIs.

https://wintercg.org/


Cough. Vercel.


The accusation of malice goes a bit far, but I agree it's a problem whether it's intentional or not. Bun takes a slapdash approach to shipping features and APIs, which means things get out fast but also rubs me the wrong way re: long-term implications and vision


I’ll vouch for non-malice. Bun’s creator frequently tweets about hypothetical APIs he may want to introduce, often seeking feedback explicitly. The ideas are always earnest “would you find this helpful?” sorts of things. (For what it’s worth, I frequently reply that these hypotheticals are a bad idea when they break expectations etc, at least if I can fit my reasoning in tweet length. I think this has been at least marginally successful in pushing back in some cases.)

The problem, with Bun but really with the ecosystem at large, is that shipping stuff is (still) the de facto way that APIs congeal into something resembling actual standards.


Yeah. I follow him on twitter too, and it's both impressive and distressingly chaotic the way he spitballs and then settles on APIs in the form of twitter surveys


That works just fine if you are the main person working on something.


seems to work for twitter's CEO - though he does sometimes ignore what the survey says


> can't help feeling that it could be intentional vendor lock-in.

it's always vendor lock-in. can never trust companies, mate!


It's still OSS, not proprietary APIs. Feel free to fork it in the future, once they need to make their money back.

If it's faster than node and the API to do what I need is better (Bun.FileSystemRouter comes to mind), I don't see why not to switch over.

Heck, I would accept even less backward compatibility to get rid of transpiling, that completely obscene political mess they made of modules and treating TS as a 2nd-class citizen.

The amount of time I'm still hit by ESM vs CJS is insane. Truly a Python 2-vs-3 moment for the Node.js community.


The term "proprietary" does not always refer to software licenses, but that's one of the common causes of something to be proprietary.


How would you define it?

The wiki article for it in the context of software (https://en.wikipedia.org/wiki/Proprietary_software) seems to pretty much stand it in opposition to open licenses, but I also have a sort of gut feeling that it could also refer to interfaces in OSS that are not designed to be re-implementable. Not sure about that though.


It depends on the context I guess.

In the context of not software licensing, I would define it roughly like you said. Interfaces that are designed in a way that is not compatible with existing software or easily implemented by others.

An example would be Qt, the GUI framework for C++. It has its own implementation of stuff that already exists in the standard library, like the string container for example (std::string). You can't use standard C++ types with Qt, and you can't use Qt types with non-Qt C++ libraries or types.

I am not a C++ or Qt expert, so there may be some level of compatibility between the types when using generics or in general, but from my very tiny use of Qt it seemed like you had to use their types.


"Nonstandard and controlled by one particular organization." <https://en.wiktionary.org/wiki/proprietary>

XBL, a precursor to Web Components in use in Firefox for 15+ years before any ordinary webdev was talking about shadow DOM, was open source in every sense, for example. It was also proprietary.

See also: "Problems with XUL".

> XUL is a proprietary technology developed by Mozilla and only used by Mozilla.

<https://mozilla.github.io/firefox-browser-architecture/text/...>


I don't know when that page was written, but there were other developers using XUL before it was deprecated. The only one that comes to mind right now is ActiveState and their Komodo Edit/IDE. But there used to be more developers, and I recall there being a dedicated page that listed those other products built on XUL.

The documentation of it, and the jump in complexity when it came to XPCOM (which was written in C++), were the reason (in my opinion) why platforms like Electron got popular instead of XUL.


Not relevant to this topic.


I think it's relevant as the linked post states that Mozilla was the single user of the XUL runtime.


> Not relevant to this topic.


I had to deal with binary data in a project recently and all the options made my head spin. File, Blob, Buffer, ArrayBuffer, Uint8Array, and so on.

Was very confused on what to use!


- Blob: Immutable raw data container with a size and MIME type, not directly readable.

- File: Like a Blob, but with additional file-specific properties (e.g., filename).

- ArrayBuffer: Fixed-length raw binary data in JavaScript, not directly accessible.

- Uint8Array: Interface for reading/writing the binary data in an ArrayBuffer, exposing it as 8-bit unsigned integers.

- Buffer: Readable/writable raw binary data container in Node.js (a subclass of Uint8Array).
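A quick sketch of how these connect (browser globals; top-level await just for brevity):

    const bytes = new Uint8Array([72, 105]);                // view over an ArrayBuffer
    const rawStorage = bytes.buffer;                        // the ArrayBuffer underneath
    const blob = new Blob([bytes], { type: 'text/plain' }); // immutable, MIME-typed container
    const file = new File([blob], 'hi.txt');                // a Blob plus a filename
    const back = new Uint8Array(await blob.arrayBuffer());  // read a Blob back into bytes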


Nit: "fixed-length" is no longer true as of very recently [1].

[1] https://github.com/tc39/proposal-resizablearraybuffer


Now it's bounded-length.


A File is a Blob, every file is `instanceof Blob`.


TBH I prefer Buffer as an abstraction. Too bad it didn't make it into the native JS standard. I can't think of many use cases in JS land where Uint8Array, Uint16Array, Uint32Array, or Int8Array would be absolutely necessary. They seem more useful for perf optimizations. For WebAssembly? Surely a plain array of numbers can be used in most cases. The main use case for Buffer IMO is to convert between different formats like Base64 or hex, or to send raw binary data over a transport; ultimately we just need an object to represent binary. It doesn't seem appropriate for a high-level language like JS to concern itself with the details of CPU architecture which warrant thinking of binary in terms of 8 bits, 16 bits, or 32 bits in the first place. As an abstraction, it's rather arbitrary to treat these numbers as special. It really seems to come down to a marginal performance optimization.


Typed arrays are essential for web apps that use WebGL and WebGPU. Being able to send this type of data to run computations on the GPU can give you 1000x speed up.

You can see it in action on this WebGL fluid simulator[0] by PavelDoGreat.

[0] https://github.com/PavelDoGreat/WebGL-Fluid-Simulation


This isn't an argument against Buffer though. Raw memory is raw memory.


This simulator is awesome.


> I can't think of many use cases in JS land where Uint8Array, Uint16Array, Uint32Array, or Int8Array would be absolutely necessary.

Buffer is a subclass of Uint8Array.


Sized numerics are useful even in high level, loosely and dynamically typed languages. Two examples are binary (de)serialization and OpenGL. In Python, libraries for those generally use sized numerics.

High level languages will still want to do low level things


Buffer is a utility. It combines several abstractions that are common in nature into one highly useful class. I quite prefer using it as well.

This is what I feel the node people get right over the ES people, the ES ideology is so abstracted and pushes everything out into small utility classes so you have to create a handful of different objects with odd combinations of methods to get one useful conversion done.

The ES people also seem to have no love for the CLI or for the types of debugging and testing done there. As a result, I almost always choose the Node-created abstractions over the ES-specified ones.


> For WebAssembly? Surely a plain array of numbers can be used in most cases.

So a list of floats? No! Let's not use floats everywhere just because JavaScript does.


> buffers expose private information through global variables, a potential security risk.

Does JavaScript's security model let you effectively sandbox scripts running in the same context from each other? If not, then why does this matter?


Consider a web server serving multiple ongoing requests from different users (with separate permissions). Each of them uses a buffer at some point (either a Buffer or a Uint8Array)

If the buffer doesn't have any cross-contamination with global state, there's no way one user could access another's data (because it's behind an object reference that never comes into the scope of the request logic for the other user). But if it did, and a malicious user found some other kind of vulnerability, they could potentially access data across scopes.


Replace buffer in your example with a plain object or just an array - the rest of it still stands, right?

There's something other at play:

> buffers expose private information through global variables, a potential security risk.

This links to the following piece of code [1]:

    // Somewhere in your code
    const privateBuf = Buffer.from(privateKey, 'hex');

    // Rogue package can access
    Buffer.from('1').buffer

I've just run it in node, and my god am I shocked!

    const privateBuf = Buffer.from('DEADBEEF', 'hex');
    Buffer.from('1').buffer
    // ArrayBuffer { [Uint8Contents]: <2f 00 00 00 00 00 00 00 de ad be ef 00 > ....

[1] https://github.com/nodejs/node/issues/41588#issuecomment-101...


From a cursory read, it matters because all Buffer.from(...) calls use a shared buffer, and buffer over-reads are a much more common vulnerability than easy access to arbitrary memory.

Not a security expert etc.


I find buffer useful for its conversion functions to/from different encodings, e.g., `Buffer.from(data, 'hex')` or `Buffer.toString('base64')`. Is there a good way to do this with `Uint8Array`?


People are working on bringing Base64/Hex conversion to JavaScript: https://github.com/tc39/proposal-arraybuffer-base64

I also provide a package to make the transition easier: https://github.com/sindresorhus/uint8array-extras (Feel free to copy-paste the code if you don't want another dependency)
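For anyone curious what's involved, here is a minimal sketch of both directions using only standard APIs (btoa is safe here because the intermediate is a binary/Latin-1 string, not Unicode text):

    function uint8ArrayFromHex(hex) {
      const bytes = new Uint8Array(hex.length / 2);
      for (let index = 0; index < bytes.length; index++) {
        bytes[index] = Number.parseInt(hex.slice(index * 2, (index * 2) + 2), 16);
      }
      return bytes;
    }

    function uint8ArrayToBase64(bytes) {
      let binary = '';
      for (const byte of bytes) {
        binary += String.fromCharCode(byte);
      }
      return btoa(binary);
    }

(Slow for huge inputs, but dependency-free.)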


Thanks for all the utility belts you've provided to the ecosystem.

It's insane to me that something as simple as concatenating an array needs a library, but as I've shown upthread, Uint8Arrays are way too complicated to work with.


You can use ‘atob’ and ‘btoa’ functions for some of that.


Those functions are fundamentally broken: https://developer.mozilla.org/en-US/docs/Glossary/Base64#jav...

See the whole section on converting arbitrary binary data and the complex ways to do it.


Although those functions operate on "binary strings", not Uint8Arrays, and there is no especially clean way that vanilla JS exposes to convert between the two that I am aware of.


Polyfill Buffer.


I would prefer not to install any dependency if possible; I don't see the necessity of installing a polyfill just to avoid the built-in one because it's Node-specific.


I wonder why you would introduce an extra dependency for the base64 example. It seems more trivial than left pad.

https://github.com/sindresorhus/uint8array-extras/blob/cbf24...


> // Required as `btoa` and `atob` don't properly support Unicode: https://developer.mozilla.org/en-US/docs/Glossary/Base64#the...


Buffer's `slice` is nice to have in many cases though. The pool makes it generally faster. And `allocUnsafe` is a great feature! I like Buffer.


As discussed in another thread, `subarray()`[0] fills the same purpose.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
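E.g.:

    const bytes = new Uint8Array([1, 2, 3, 4]);
    const view = bytes.subarray(1, 3); // no copy; shares the underlying buffer
    view[0] = 9;
    console.log(bytes); // Uint8Array(4) [ 1, 9, 3, 4 ]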



Personally, I won't make the switch until the utility methods are supported natively. I don't think it's worth the extra package overhead to execute `buf.toString('base64')`. The second that lands, I'm all-in on abandoning buffers.


I agree with using Uint8Array (although it's really ArrayBuffer that's the key difference), but what is up with the author using an NPM package to do something as trivial as checking if an object is a Uint8Array? `Uint8Array.prototype.isPrototypeOf` should be perfectly adequate and doesn't involve adding an attack vector to your application.


`Uint8Array.prototype.isPrototypeOf` and `instanceof Uint8Array` do not work across realms (frames, Node.js VM, etc).

Feel free to copy-paste the function to your own code base if you don't want the dependency:

    const objectToString = Object.prototype.toString;

    export function isUint8Array(value) {
      return value && objectToString.call(value) === '[object Uint8Array]';
    }


I understand what you're saying, but that's actually in support of my point. This is still extremely trivial code to implement and, from what I can tell, doesn't warrant downloading an NPM package. Have we already forgotten the left-pad fiasco?

This isn't meant as a personal attack on anyone, but we really need to frown upon needless dependencies, especially given the growing number of malicious NPM packages.


You're talking to someone who has published well over a thousand packages, many of them tiny.

I suspect your philosophies are irreconcilable.


No one is forcing you to use it. You can choose to reimplement the code yourself or you can choose to copy-paste the code. I made the package for my own convenience as I need to transition a lot of packages from `Buffer` and I don't want to maintain duplicates of the code in every package. Others are free to use the package or not.


Hey, that's totally fine if that's what you want to do, especially if it's for your own convenience. What I'm trying to communicate really has nothing to do with whether anyone is being forced to install anything. My point is that there are easily avoidable problems inherent to pulling in packages hosted elsewhere, and that programmers should think twice before suggesting a third-party package for something that can be written by hand in a few minutes. That's all I'm saying. For your own use, this makes a lot of sense. If it were me, I would avoid sharing it, and I hope more programmers move away from relying heavily on other people's packages for tiny units of functionality. But I probably wouldn't have been vocal about that here if I had known your intent with that package (or that you even wrote it, which perhaps I missed somewhere).


What do you mean by "across realms"?

Is that just another way of saying `Uint8Array.prototype.isPrototypeOf` and `instanceof Uint8Array` are not available in all JS environments?

I guess what I'm asking is the definition of a "Javascript Realm" in case I'm thinking it's something different.


https://weizmangal.com/2022/10/28/what-is-a-realm-in-js

Examples of this are frames in the browser and the `vm` module in Node.js.
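A quick way to see the failure, using the vm module:

    import vm from 'node:vm';

    // The new context is a separate realm with its own Uint8Array class.
    const foreign = vm.runInNewContext('new Uint8Array(1)');

    console.log(foreign instanceof Uint8Array); // false
    console.log(Object.prototype.toString.call(foreign)); // '[object Uint8Array]'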


Ah thanks, I thought "realms" sounded familiar, and that helps clear things up a bit. Also, Lavamoat and SES look really interesting; thanks for the link.


> `Uint8Array.prototype.isPrototypeOf` and `instanceof Uint8Array` do not work across realms (frames, Node.js VM, etc).

That sounds like a bug in those implementations.


That’s a reasonable intuition, but it’s not a bug. Global scopes are isolated between realms by design, and that applies to built-ins as well as their prototype chains.


Somewhat related: who has played with performant JS dataframe libraries? I mostly want to serialize from pandas/polars to JS in a fast way, with some very minor selection and indexing operations on the JS side. Ideally I could find a library that does dataframe-like operations, reading base64-encoded Buffers/Arrays into its JS objects and then doing the right things.

I'm investigating https://arrow.apache.org/docs/js/ https://github.com/vega/falcon https://github.com/pola-rs/nodejs-polars https://github.com/uwdata/arquero


I added a GitHub issue about this to my project literally this morning. I'd love to talk to someone dealing with the same stuff. I have baked dataframe -> JS serialization utils so many times over the past decade. I never want to do much in JS, just simple selection and filtering. But you end up needing both sides of the serialization to do it right.


nodejs-polars is Node-specific and uses native FFI. polars can be compiled to Wasm but doesn't yet have a JS API out of the box.

As for the fastest way to serialize Pandas data to the browser, you should use Parquet; it's the fastest to write on the Python side and read on the JS side, while also being compressed. See https://github.com/kylebarron/parquet-wasm (full disclosure, I wrote this)


I've been using the native Uint8Array that Deno provides and it's been great overall. From time to time I may need to convert away from a Buffer type and it's a little bit of a headache, but I think the ecosystem is supporting Uint8Array very well, at least on the server.


Having just spent the past few months dealing with the utter carnage caused by updating an old Node project to use modern dependencies, I predict doing this will cause a bloodbath as some libraries update but don't correctly specify semver for the breaking changes, or other libraries that depend on the Buffer usage not bothering to limit their dependencies to a major version unexpectedly breaking when `npm update` is run.

I wonder what unmaintained libraries will have to be dropped this time as the ecosystem grinds on and causes them to break?

(How did we get to a place where everything is this bad?)


> (How did we get to a place where everything is this bad?)

Not vetting the bedrock you chose to build upon; deciding to rely on the unreliable.

("Everything" is an overstatement.)


What's better?


Vetting it.


My absolute go-to issue these days with Node is the insistence on having the same types/interfaces as the browser. The absolute sh*show of trying to make a FormData request in Node work with native Node streams is mind-boggling, just because fetch/FormData needs to be EXACTLY like in the browser. Sure, but how about I do not want to put a 100 MB file into memory just because I want to use native fetch....
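For what it's worth, one workaround (in Node 19.8+, where fs.openAsBlob exists) is handing fetch a file-backed Blob, so the file doesn't have to be buffered up front:

    import { openAsBlob } from 'node:fs';

    const form = new FormData();
    // The Blob is backed by the file on disk rather than an in-memory copy.
    form.append('file', await openAsBlob('./big-file.bin'), 'big-file.bin');
    await fetch('https://example.com/upload', { method: 'POST', body: form });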


Personally, I believe it's better to keep on using the proven Buffer vs. some person's package. We don't need another left-pad. Also, this is another reason to use a strongly typed language.

You call a function that returns a Buffer. The interface changes and you now get a Uint8Array.

You will never know.


Is that what he is suggesting? I thought the article just wanted people to use Uint8Array instead of Buffer.


I recently ran into an issue where I needed to store a Uint8Array as a BYTEA in Postgres. When I stored it directly, the Uint8Array would be modified and made bigger. When I converted it to a Buffer and then stored it, it would work. Still not sure why that was the case.


I wonder if that's a lack of good support for Uint8Array in the client you're using with postgres?


Not sure. I was using Sequelize for the ORM, but don't know if it was a Sequelize or Postgres issue.


A total tangent, but it is always fun when two totally separate things you're doing connect in a weird way. I just installed some node library and then I see this unrelated blog post that is written by the creator of that library.


The author has written a lot (1000+) of libraries, and earned a certain amount of controversy for how they manage them, and whether all those packages are necessary. I think the most recent controversy was that they switched all of their packages over to the new module system before the ecosystem was ready.

It's one of those names where I tend to raise an eyebrow when I hear it. This looks like something similar to the module/require changes, where the overall change is good, but a slower transition may help people more.


Geez you aren’t kidding, this guy is prolific. I’ve seen some of his other projects on here but they all seemed to be Mac apps. Surprised to see someone churning out so many of those also has time to build and maintain 1000+ npm projects. Also this guy seems to have started the awesome list trend.

I do have to wonder how many of those projects are essentially a wrapper around some native functionality just with their preferred API. Regardless, that level of scale is impressive.

Thanks for the heads up though, I'll approach with caution.


Great post, like good docs; clear, informative, and with examples.


Can someone tell me when to use Uint8Array vs ArrayBuffer in my method signatures?

Like, I would think that replaces a lot of Buffer usage... but the article mentions it exactly zero times.


The problem is that Buffer is just too damn convenient. I can do everything with a Uint8Array, but everything takes about 4-5x more code for no discernible benefit.


So the only important difference is that Buffer creates a mutable view at slice(). It could use a better/distinct name for sure, e.g. mutableView(). But do you actually always want to copy a slice, even when it's big?

This ideological pedantic purism which eliminates practical use cases is in my book just that - impractical pedantic purism. It’s easy to advocate for when your job is shaping landscapes without having to walk these.

And yes, I’m writing this still having flashbacks from ESM-only “transition”, which was more like throwing everyone into freezing waters with the promise it will warm up eventually.


> So the only important difference is that Buffer creates a mutable view at slice(). It could use a better/distinct name for sure, e.g. mutableView(). But do you actually always want to copy a slice, even when it's big?

I think you are missing the point. Mutable and copied slices both have their uses, and maybe a method that sometimes copies and sometimes shares does too. The problem with Buffer is that it overrides a method with well-defined semantics, then violates those. Buffer is a subclass of Uint8Array, but not a subtype, nor Liskov-substitutable.

In practical terms: when you get a Uint8Array, and even check that it in fact is a Uint8Array, you no longer know what .slice() does, even though Uint8Array's documentation explains how every Uint8Array behaves, because that documentation is now wrong. If the Uint8Array you get is a Buffer, then it is both a Buffer and a Uint8Array, but its .slice() behaves differently than expected.
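Concretely, in Node:

    const buf = Buffer.from([1, 2, 3]);
    const u8 = new Uint8Array([1, 2, 3]);

    buf.slice(0, 1)[0] = 9; // Buffer#slice returns a shared view
    u8.slice(0, 1)[0] = 9;  // Uint8Array#slice returns a copy

    console.log(buf[0]); // 9 - the original was mutated
    console.log(u8[0]);  // 1 - the original is untouched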


> So the only important difference is that Buffer creates a mutable view at slice().

No, it also (from the article):

> introduces numerous methods that are not available in other JavaScript environments. Consequently, code leveraging Buffer-specific methods needs polyfilling, preventing many valuable packages from being browser-compatible.


The polyfill argument is good, but it addresses a world where you just push HTML, CSS, and JS files. We live in a different one. It is valid, but more on the wishful-thinking side than the practical one. And when you avoid calling it a "polyfill", it simply turns into a useful compatibility/extension library. I find it strange to argue against one dependency but then import a few others, with the only difference being taxonomy.


Probably less common, but I write a lot of JS code interfacing with devices over serial connections. This makes Buffer's convenience methods for encoding and decoding data types in both big and little endian ordering really useful to me.

It looks like I could accomplish the same thing using a DataView of an ArrayBuffer, but I don't see enough of a benefit to justify converting everything to this approach.
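For reference, the DataView equivalent of Buffer's writeUInt16LE/writeUInt16BE looks roughly like this (values and offsets made up):

    const view = new DataView(new ArrayBuffer(4));
    view.setUint16(0, 0x1234, true);  // little-endian, like buf.writeUInt16LE
    view.setUint16(2, 0x1234, false); // big-endian, like buf.writeUInt16BE

    console.log(new Uint8Array(view.buffer)); // Uint8Array(4) [ 52, 18, 18, 52 ]
    console.log(view.getUint16(0, true).toString(16)); // '1234'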


I guess this is our new nonsense after the ESM nonsense. Come on man start fucking all the npm packages as soon as possible. Don't forget to throw away backward compatibility.


Ok, first we screwed buffers by making them globally tracked instead of just a piece of memory. Now it's time to break all binary modules again.


> Buffer also comes with additional caveats. For instance, Buffer#slice() creates a mutable segment linked to the original Buffer, while Uint8Array#slice() creates an immutable copy, resulting in possible unpredictable behavior.

I can see how the Buffer behavior here would be preferable in cases where performance is important or memory constrained environments.


Uint8Array has this too, but it's called `.subarray()`. The problem is that Buffer is a subclass of Uint8Array, but changes the behavior of the `.slice()` method.


> Buffer also comes with additional caveats. For instance, Buffer#slice() creates a mutable segment linked to the original Buffer, while Uint8Array#slice() creates an immutable copy, resulting in possible unpredictable behavior.

> The problem is not the behavior of the Buffer#slice() method, but the fact that Buffer is a subclass of Uint8Array, but changes the behavior of an inherited method.

But this is an implementation detail, not specified behavior. Changing method behavior in subclasses is a key aspect of inheritance.


Completely different behavior is not an implementation detail.

Changing method behavior in subclasses is part of inheritance, but it shouldn't confuse or mislead. In the case of Buffer and Uint8Array, the altered `.slice()` functionality isn't a mere implementation detail; it's a significant deviation. This inconsistency can lead to unexpected bugs, especially for those who assume similar behavior based on the inheritance hierarchy. It's crucial for reliability that such fundamental behaviors remain predictable across subclasses.


Any behavior that is not defined in the spec[0] is, by definition, an implementation detail. Relying on undefined behavior is a recipe for bugs. If you need an immutable array, and the spec doesn't require the returned array to be immutable, you should create one yourself.

[0] https://tc39.es/ecma262/multipage/indexed-collections.html#s...


You are reading the wrong spec. That is `Array#slice`, not `TypedArray#slice`.

Correct spec: https://tc39.es/ecma262/multipage/indexed-collections.html#s...

Steps 14.g.i to 14.g.ix detail the transfer of data from the original TypedArray (O) to the new TypedArray (A). It involves reading values from the original and writing them to the new array's buffer, effectively duplicating the data segment. The process ensures both arrays are distinct with separate memory spaces.



