BigInt: Arbitrary precision integers in JavaScript (github.com/tc39)
75 points by fagnerbrack on Nov 9, 2018 | 60 comments



What is the value of JSON.parse(JSON.stringify(BigInt(Number.MAX_SAFE_INTEGER) + 2n))?
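
(For the record, in current engines the inner JSON.stringify throws before JSON.parse ever runs:)

    JSON.stringify(BigInt(Number.MAX_SAFE_INTEGER) + 2n);
    // Uncaught TypeError: Do not know how to serialize a BigInt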

> Finally, BigInts cannot be serialized to JSON.

Hmm, ok. I guess that's fine. But what about large numbers coming from over the network? Can we get a BigInt-aware JSON.parse() standardized ASAP?


No. This is not related to parsing; it’s related to the JSON format.

JSON is a stable interchange format, and for that reason we’ve got a huge base of tooling that uses and emits it. It’s all interoperable because there is just that one syntax.

So if you say “add a BigInt-aware JSON.parse() API”, you’re saying “I want to transmit non-JSON strings but claim they’re JSON”.

That breaks:

* old browsers - not supporting BigInts would mean they couldn’t parse any of your “json” data, even if you weren’t using the BigInts, e.g. feature detection wouldn’t work.

* all shipped products that read json, because as with the browsers they could not parse any “json” that contained invalid data.

[edit:

Ok, let's try to do something about the endless downvotes:

Say you do JSON.parse("9007199254740994"), i.e. 2^53 + 2.

Should this parse to a BigInt or a Number? In JavaScript today it will parse as a Number, without losing precision, but it is beyond the safe integer range in JS (not every integer of that size is exactly representable). You can see this by doing console.log(9007199254740995) and seeing that the output is 9007199254740996. So what happens if I do the following:

JSON.parse("[9007199254740992, 9007199254740993, 9007199254740994, 9007199254740995]")?

The first and third are exactly representable as a double; the second and fourth are not. So should this parse as Number, BigInt, Number, BigInt? Or BigInt, BigInt, BigInt, BigInt? What value should be the trigger for treating an integer as a BigInt vs. a Number? And what if we're doing

JSON.parse("[9007199254740992, 9007199254740993, 9007199254740994, 9007199254740995]").map(x=>x+2)

If any of the values get interpreted as a BigInt this will throw. But in existing browsers it won't.
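
(For the record, a console session in a current engine, showing both the silent rounding and the mixing error:)

    JSON.parse("[9007199254740992, 9007199254740993]").map(x => x + 2);
    // [9007199254740994, 9007199254740994] -- the ...993 was rounded at parse time

    9007199254740993n + 2;
    // TypeError: Cannot mix BigInt and other types, use explicit conversions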

]


> It’s all interoperable because there is just that one syntax.

And that syntax and specification leaves open the possibility of arbitrary precision integers. From the JSON RFC[1]:

> This specification allows implementations to set limits on the range and precision of numbers accepted.

Those limits vary by implementation. The subsequent paragraphs warn, of course, that not all implementations support sending or receiving numbers outside of certain ranges, and explicitly call out the contiguous integer range of an IEEE double as a range that gives good interoperability.

> “I want to transmit non-Json strings but claim they’re json “.

Those strings are already JSON.

There are serializers out there that support arbitrary precision integers. Python, for example, will happily serialize such integers:

  In [6]: json.dumps(2 ** 512)
  Out[6]: '13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084096'
(This will parse in today's browsers, too, but in general you'll get an imprecise result, as JavaScript's Number can only approximate most integers of that size. Numbers beyond about 1.8e308 parse to Infinity.)

[1]: https://tools.ietf.org/html/rfc8259#section-6


In JS your number would drop precision, because it doesn't have the 'n' suffix. Because you can't mix and match numeric types, JSON.parse() cannot distinguish between a Number and a BigInt.

so take:

"[{x:0,y:1},{x:2, y:3456789....}}]"

What should JSON.parse() do? There's no reason for it to parse any of the values before the final extra-precision one as anything other than regular Numbers, so if it parses the large one as a BigInt, mixing them would break.

> (This will parse in today's browsers, too, but you'll get an imprecise result, as JavaScript's Number cannot represent that value exactly. Larger numbers result in Infinity.)

Your example of python output is exactly the thing that would cause a problem. Your data will work in all existing browsers - it just loses precision. But in any browser that tries to parse it as a bigint it will result in failures, as now any subsequent arithmetic will fail because you can't (for good reasons) mix and match BigInt with non-BigInt values.

You also can't actually make a decision about whether a value should be parsed into a BigInt, vs a Number.

Basically, take something like 2^54 -- 18014398509481984 -- should this be parsed as a Number, or as a BigInt? It's exactly representable as a double, but that double is also what 2^54+1 (which is not exactly representable) rounds to. So should that be parsed as a BigInt or as a Number? Without a suffix (not compatible with the existing spec) they cannot be distinguished, and so to retain backwards compatibility it must be parsed as a Number.
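
A quick console check of that example (results as reported by current engines):

    18014398509481984 === 18014398509481985  // true: 2^54 and 2^54+1 are the same double
    JSON.parse("18014398509481985")          // 18014398509481984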


Indeed, so it would seem like the obvious solution is just to have an alternative parsing function, or perhaps for it to accept options.

e.g.

    JSON.parse(str, null, { useBigint: true});


Yup, this is exactly the problem. Currently JSON.parse() will silently round large integers to the nearest representable double. This behavior made sense when there was only one numeric type in the language, but BigInt opens the door for arbitrary precision integers.


What is the one language? JSON-serialized data can be consumed in many different languages, which have very different numeric data types. The JSON spec even says:

> JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of number types of various capacities and complements, fixed or floating, binary or decimal. That can make interchange between different programming languages difficult. JSON instead offers only the representation of numbers that humans use: a sequence of digits. All programming languages know how to make sense of digit sequences even if they disagree on internal representations. That is enough to allow interchange.

and later more precisely defines a number in a way that does not restrict its maximum size.

> A number is a sequence of decimal digits with no superfluous leading zero. It may have a preceding minus sign (U+002D). It may have a fractional part prefixed by a decimal point (U+002E). It may have an exponent, prefixed by e (U+0065) or E (U+0045) and optionally + (U+002B) or - (U+002D). The digits are the code points U+0030 through U+0039.

http://www.ecma-international.org/publications/files/ECMA-ST...


But you can't determine (without a suffix) whether a number you encounter is meant to be a BigInt or a floating point value that simply has no fractional part (or lost it to rounding).


You're getting downvotes because what you've said is incorrect. This wouldn't be changing/violating/breaking the JSON format - it already allows for arbitrary precision.

However changing the behaviour of `JSON.parse` would indeed be a breaking change. That's an issue with the JS spec and web compatibility - not with JSON. The solution there is simple - either make a new function with new rules or add the ability to pass in options to `JSON.parse`.


The spec breakage is that you’d need the n suffix to indicate it’s a bigint.


No, you wouldn't. You'd just put the number in without the "n". In JSON, an integer is an integer - there's no limit to its size and no need to specify whether the integer is big or small.

The "n" notation is just to ensure compatibility within JS - has nothing to do with representing the number in JSON.


This isn't about parsing the JSON format but the mapping between JSON datatypes and your language datatypes.

JSON numbers are arbitrary precision; how you choose to represent those numbers in your language varies widely, and is up to you.

Adding hooks into JSON.parse would actually solve a whole class of problems related to datatype mappings. Right now the mappings are statically defined by whoever implements the parse method, which turns out not to be very flexible.

What if I want to represent my JSON array as an ArrayBuffer or a SortedArray?

What if I want to represent my JSON numbers as strings, or as my library's number type for working with currency?

What if I want to map my hashes to my own HashMap?

Being able to say JSON.parse(str, { array: SortedArray, number: BigInt }) would be very nice.


JSON.parse already supports that through the reviver parameter.


It doesn't quite do what I'm looking for, since it only allows you to transform data after it's already been mapped to a type. So if my integer can't fit into the Number type without loss of precision, then my reviver function is going to get the lossy number, so it's already too late to do a conversion to BigInt.

I need to actually hook into the parser so I can say, "hey when you're about to parse a number pass the token to me and I'll take it from here."
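
A minimal sketch of why the reviver is too late (the value shown is what a current engine hands the reviver):

    JSON.parse('{"big": 9007199254740993}',
      (key, value) => (typeof value === "number" ? BigInt(value) : value));
    // { big: 9007199254740992n } -- the trailing 3 was already rounded away
    // before the reviver ever saw the value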


How about the backwards compatible way of:

{"value": "90071992547409949007199254740994n"}

Which would get parsed into a BigInt. This specifically means the value isn't a number but a BigInt. All JSON consumers that don't support that convention get strings; all JSON consumers that do get BigInts (when parsing).
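
As a sketch, a consumer opting into this could handle it with the existing reviver today (the trailing-"n" string convention is just the suggestion above, not a standard):

    JSON.parse('{"value": "90071992547409949007199254740994n"}',
      (key, value) =>
        typeof value === "string" && /^-?\d+n$/.test(value)
          ? BigInt(value.slice(0, -1))
          : value);
    // { value: 90071992547409949007199254740994n }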

The only downside I see is existing json that has strings containing a number and a "n" at the end.


> The only downside I see is existing json that has strings containing a number and a "n" at the end.

Imagine being on the receiving end of that bug!


The other problem is that this would be a bizarre JavaScript-specific quirk, and would fail to interop with JSON implementations in other languages.


I'm not sure I agree; I think the BigInt type as proposed does not exist in other languages.


The JSON spec doesn't bound numbers in any way, and plenty of languages have support for arbitrarily large numbers (for example, Python's number type).


This would require changing the JSON spec (good luck) as it doesn't fit the grammar for JSON numbers.


Yes, it doesn't fit the grammar for ordinary JSON Numbers, but since it is plain old standard JSON there is nothing wrong with it: it just puts some demands on the consuming application. Instead of

    {"boring Number not greater than 2^53": 1234}
you could send something like

   {"exciting huge BigInt": {"value": "12(…)89", "type": "BigInt"}}
where you have all the hints you need about what to do with it. You can even invent some funny perks, like

    {"exciting huge BigInt serialized to base36": 
      {"value": "12(…)yz", "type": "BigInt", "base": 36}
    }
assuming you could create some parseBigInt(radix) function.
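
For what it's worth, a minimal sketch of such a helper (parseBigInt and the base-36 payload are hypothetical; BigInt() itself only understands base 10 plus the 0x/0o/0b prefixes):

    function parseBigInt(str, radix) {
      // accumulate digit by digit, since BigInt() takes no radix argument
      return [...str.toLowerCase()].reduce(
        (acc, ch) => acc * BigInt(radix) + BigInt(parseInt(ch, radix)),
        0n
      );
    }

    parseBigInt("zz", 36); // 1295n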


This seems like it unnecessarily couples the JSON representation to the language types. JSON numbers are already arbitrarily precise. How you represent the type is up to the parser.

    JSON.stringify(5n) === "5"
    JSON.parse("5") === Number(5)
    JSON.parse("5000000000000000000")  // => Error
    JSON.parse("5000000000000000000", { use_bignum: true }) === 5000000000000000000n

A project that does something like this:

https://www.npmjs.com/package/json-bignum


Just pass your own replacer/reviver to JSON.stringify/parse to use something like `{ "$bigint": '0f0a0e...' }`.
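
A rough sketch of that approach (the "$bigint" key and hex encoding are just the convention suggested above, not anything standard):

    const replacer = (key, value) =>
      typeof value === "bigint" ? { $bigint: value.toString(16) } : value;

    const reviver = (key, value) =>
      value && typeof value === "object" && "$bigint" in value
        ? BigInt("0x" + value.$bigint)
        : value;

    const json = JSON.stringify({ big: 2n ** 64n }, replacer);
    // '{"big":{"$bigint":"10000000000000000"}}'
    JSON.parse(json, reviver).big === 2n ** 64n; // true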


I don’t see how BigInts can’t be serialized to JSON? Isn’t the “n” suffix exactly that — a serialization?


An integer suffixed with the letter "n" is not valid JSON and will not parse if you follow the spec. The "n" suffix is how you write BigInts in JavaScript, not JSON.


How JS chooses to show the string interpretation of the datatype has nothing to do with whether you can map the type into an abstract JSON number, which you can.

JSON.stringify(1n) === "1"

JSON.stringify(100000000000000000000n) === "100000000000000000000"


Why does it matter? I can't serialize/deserialize Map, Set, Date, etc., nor can I serialize/deserialize class Foo or Bar. If you want to serialize higher level concepts, build a high level lib. Don't clutter JSON.


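  // e.g. JSON.stringify(1n) in a Chrome console (triggering call assumed for context):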
  VM143:1 Uncaught TypeError: Do not know how to serialize a BigInt
      at JSON.stringify (<anonymous>)
      at <anonymous>:1:17


Since this is stage 3, browsers are beginning to implement it. If you want to try it, Chrome has had it since version 67 [1].

[1]: https://developers.google.com/web/updates/2018/05/bigint


Not trying to be too snarky, but does that mean that JavaScript is going to be the first language that gets bigints before getting actual integers? How is that a reasonable sequence of steps in language evolution?


> before getting actual integers

Javascript has always had "actual integers". As long as you stay within Number.MIN_SAFE_INTEGER...Number.MAX_SAFE_INTEGER (-9007199254740991..9007199254740991), integer values and integer arithmetic are exact.

http://2ality.com/2013/10/safe-integers.html


To get the operators to behave like integer operations you have to round-trip through typed arrays, however. For example:

    let a = new Uint32Array(3);
    a[0] = 5;
    a[1] = 2;
    a[2] = a[0] / a[1]; // the division itself happens in doubles (2.5),
                        // but storing it into the Uint32Array truncates to 2


How is a bigint not an actual integer? If you mean restricting it to a certain number of bits - there are many other languages that avoid exposing low-level system details like that. Python and Ruby are two easy examples, and I'm sure there are many more.


Having an abstraction that transparently switches between fixnums and bignums is fine. This is what Ruby or Python or Lisp languages do. And the abstraction is purposefully a thin one - no one is really hiding that there are fixed-width integers underneath, because this distinction has huge performance implications.


That isn’t fair - JavaScript has Numbers that are already used on billions of sites, so they can’t be broken. An important part of evolving a language that is always transmitted as source code is that you can’t break the existing code. Python tried with Python 3 - that attempted break now means all systems have Python 2 and Python 3 installed, and there are libraries that can’t be used together because they target different language versions. C and C++ can change the language in breaking (-ish) ways, because changing the language syntax or semantics doesn’t affect shipped (e.g. compiled) programs.

It also has a full set of 32-bit operations - “64-bit” used to be “big int” territory and required software implementations.

The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can’t distinguish between 1(an int) and 1(not an int), and as a result you can’t determine whether a given arithmetic operation should be the integer version or the floating point one. So floating point makes the most sense, unless you want arbitrary precision floating point, which just isn’t feasible for performance or sanity reasons. Especially a few decades ago.

So we get BigInt - or more correctly, arbitrary precision int. That has a distinctly different use case from regular arithmetic numbers, and so is always a distinct type.

So yeah, going floating point, and then bigint is a perfectly reasonable evolution of the language.

As an aside, I don’t think it’s first, because I believe Python uses arbitrary precision ints, or floating point. Haskell defaults to arbitrary precision as well (although it supports fixed precision ints too).


> JavaScript has numbers, that are already used on billions of sites, so that can’t be broken.

For purposes of doing silly things in browsers, sure. But then someone wants to count money with it, and you suddenly have articles on HN about it, and half of their points can be summarized by "watch out for floating point errors". In the rest of industry, the rule of thumb is, "don't use floats for money".

> The choice to only support floating point numbers also makes sense - you have a dynamic language with no type annotations, so you can’t distinguish between 1(an int) and 1(not an int), and as a result you can’t determine whether a given arithmetic operation should be the integer version or the floating point one.

No, it doesn't (at least not for those reasons). Dynamic languages aren't untyped; in those languages, it's the values that have types. I may not know when reading the code whether x has a float value or string value, but I can query for that at runtime. The way to tell whether or not 1 is an int or a float should be by means of functions like is_int() or is_float(). Sane programming languages handle this fine (hint: writing 1 usually means you want an int, writing 1.0 suggests floating point). Hell, even PHP can handle this fine.

At this point I hope that whoever implements bigints in JS realizes that "small" bigints like 123n can be implemented as actual machine integers for much performance gain, and this way we'll get fixnums through a back door.


The JavaScript number type can be used like a 32-bit integer for the most part, including bitwise operators and modulus.

I agree that pure 32 and 64 bit integers would be nice, though. Especially since I have need for 64 bit bitwise integer math. Bigint surprisingly seems to be able to do that, I'm just not sure if the performance will be okay.

But being able to use bigints with binary literals is awesome: 123n & 0b1111n => 11n
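
For 64-bit bitwise/wrapping arithmetic specifically, BigInt.asUintN / BigInt.asIntN clamp a BigInt back to a fixed width, e.g.:

    const u64 = (x) => BigInt.asUintN(64, x);  // wrap to unsigned 64-bit
    u64((2n ** 64n - 1n) + 1n);                // 0n -- wraps around like a uint64
    (123n & 0b1111n) === 11n;                  // true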


I use them in a Vulkan API for node.js to handle 64-bit interoperability when mapping Vulkan memory. Mapping Vulkan memory returns a numeric address to the memory region. To handle the address I use BigInt, which can then be used to create an ArrayBuffer as a direct JS-side memory view, where you can write e.g. your texture data into. See [0] for how it's used on the node side and [1] for the C++ implementation using the V8 API.

[0] https://github.com/maierfelix/node-vulkan/blob/master/exampl...

[1] https://github.com/maierfelix/node-vulkan/blob/master/genera...


So, you can add a string to a BigInt, but not a Number? Their 'explanation' for not allowing Number and BigInt to be mixed is that you could lose precision. Not if you return a BigInt.


That seems a pretty odd choice in both cases. Adding a number to BigInt should return a BigInt. I am also always wary of languages that allow mixing of strings and numbers. From my experience this can cause a lot of problems.


It's hilarious, because:

  "1" + 1 === "11"
  "1" - 1 === 0
  "1" + 1n === "11"
  "1" - 1n: Exception


Our testers write a lot of scripts in PHP, and I have spent countless hours debugging stuff like this. A special kind of fun is figuring out which expressions count as TRUE or FALSE. JavaScript seems to have the same problems, just slightly different.


Shouldn't this be called either "arbitrary size integers" or "arbitrary precision integer arithmetic"?

Integers by definition already have infinite precision...


JavaScript actually uses floats (IEEE doubles) to describe integers, so with very large integers you lose precision.

This does 2 things. It allows for arbitrary size integers (whereas before, JavaScript could only represent integers exactly up to Number.MAX_SAFE_INTEGER, i.e. 2^53 - 1), and it represents all these integers with perfect precision.
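
For example (results as printed by a current engine):

    10 ** 20 + 1     // 100000000000000000000 -- the +1 is lost to rounding
    10n ** 20n + 1n  // 100000000000000000001n -- exact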


As someone mostly stuck in C, C++, C#, assembly, and BASIC, this is new to me.

In those languages, an integer is actually infinite precision with limited range. A quick Google shows that the "number" primitive in JavaScript is a double. Are you telling me JavaScript has NO integer type?

Maybe I haven't seen enough languages yet, but that seems pretty crazy to me.


I agree with you on all points. I don't think I've ever written more than 100 lines of JavaScript, but a lot more of the C-like stuff.

Yet, I do see the morbid sense in only using floats.


No, you’re right. There is no integer-like type in JavaScript, only floating point.


Domain terminology - integer on a computer means fixed precision.

BigInt is, for whatever reason, the most common name used.


Integer in all programming languages I've seen so far means infinite precision with limited range.

Floating point numbers are used for values that have finite precision but typically much larger range.

I guess I'm making a distinction between precision and range. I was pretty sure they couldn't be used interchangeably. Now I'm seeing "precision" being used to describe the size of the integer. I feel like that's a bit of a misnomer.


Arbitrary precision is the correct terminology. I can already represent any size integer I want on any computer. I could just declare that MAXINT represents Graham's number (for example). That doesn't mean that I can represent every integer between zero and Graham's number, though.


But it's not the precision that's arbitrary it's the range. If your number format represents 0-100 and you want to represent 200 then your number format isn't imprecise, it's limited range.


The "range" is merely established by convention. If your computer supports 100 distinct values then you could say those values represent the integers [0,100) or you could say they represent the even integers in [0,200). The range can be arbitrarily large, but the precision is fixed.


Hopefully JavaScript will get a new Number system before 2038, which is the new Y2k problem.


What about BigDecimal?


I wonder the same. The readme says:

"The / operator also work as expected with whole numbers. However, since these are BigInts and not BigDecimals, this operation will round towards 0, which is to say, it will not return any fractional digits."

What BigDecimal are they referring to? I don't find any TC39 BigDecimal proposal, only third party libraries.


There isn’t a BigDecimal proposal anywhere, the authors are just making it clear that their proposal is limited to integers.


> There isn’t a BigDecimal proposal anywhere

Why? I find it so weird there still is no BigDecimal in JavaScript.


I actually didn’t look hard enough: there is a proposal[0] for something that could be called ‘BigDecimal’ in ECMAScript, but it’s a stage 0 proposal[1]. As for why a decimal type hasn’t progressed further in ECMAScript you’ll want to read the notes [2] from the meeting where the latest proposal was discussed.

[0] https://docs.google.com/presentation/d/1jPsw7EGsS6BW59_BDRu9...

[1] https://github.com/tc39/proposals/blob/master/stage-0-propos...

[2] https://github.com/rwaldron/tc39-notes/blob/b8da60318b564f13...


It is relatively common for software to need integers larger than 53 bits. It seems quite uncommon for software to require decimal arithmetic. Do you have a motivating example?


Anything that deals with money. Every e-commerce or fintech app.



