64-bit bank balances ‘ought to be enough for anybody’? (tigerbeetle.com)
245 points by todsacerdoti on Sept 19, 2023 | 362 comments



Specialising in financial software, I have spent the past 2 decades fighting over this with countless people, teams and companies.

For accounting you should only ever use an arbitrary precision math library with the ability to specify rounding rules. If your programming environment/language does not have one, it is unsuitable for accounting, billing, invoicing, payments, etc.

Having the underlying library is, of course, not enough. You also need to be able to properly store (data structures, databases), transfer (wire formats, data structures), process (order of operations, rounding rules), present (UI toolkit) and so on.

I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correctly (and I have worked for companies ranging from small startups just needing invoices and very simple billing, to risk management departments of the largest financial institutions on Earth, processing trillions of dollars daily).


The article is about data types for storage, not for intermediary values used as part of calculations, though. Are you proposing that everybody is storing monetary values "wrong", too?

And as a meta-point:

> I have been fighting over this with countless people, teams and companies. [...] I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correctly [...]

Are you absolutely sure that you are the only person who understands how to do accounting arithmetic on computers correctly?

My guess would be that the status quo is a combination of a lot of legacy code and procedures, but more importantly of differing priorities.

Maybe you value arithmetic correctness much more than the industry average, compared to simplicity of procedures (sometimes these need to be published in regulatory texts or even laws) or compatibility with other entities and their procedures?


A shocking number of people (edit: who implement billing related software) are unaware of how many decimal places of accuracy their local tax code requires to calculate VAT or sales tax correctly. And those things are often specified in terms of arithmetic correctness.

I once had our CFO stand behind me while I talked him through every step of our VAT calculations, because he was legally responsible if I made us round it wrong, to the wrong number of digits. Had we done something grossly incompetent like using floats for those calculations, it most certainly would have been wrong; but so would it have been if I had used fewer than five digits past the decimal point, or failed to round in the right direction after that.

It's usually not hard, but it requires being aware that you need to look up the right rules. And know better than using floats.
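
To make that concrete, here is a minimal Python sketch of that kind of two-stage rounding (the 17.5% rate, the 5-decimal intermediate step, and the half-up mode are illustrative assumptions, not any particular tax authority's actual rules):

  from decimal import Decimal, ROUND_HALF_UP

  net = Decimal("123.45")
  rate = Decimal("0.175")
  # keep the intermediate VAT amount to 5 decimal places...
  vat = (net * rate).quantize(Decimal("0.00001"), rounding=ROUND_HALF_UP)  # 21.60375
  # ...then round the invoiced figure to whole pennies
  due = vat.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)              # 21.60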


A shocking number of people who work with software that handles money are just winging it. I worked on a project once that handled payment processing functionality for other products at the company (it was a B2B SaaS where clients could take payments through our software). We also handled payments related billing, since the clients would owe fees to us and to our payments gateway for their transactions.

The payments gateway we worked with calculated their fees to a thousandth of a cent, but of course we could only bill whole cents. So basically every billing period there would be a < $0.01 balance due that carried over to the next bill, and every so often the carryover would add up to a full cent that needed to be charged. When we implemented our MVP we (engineering) explained this to the product and business teams, and it blew their minds. Our suggestion was to have a tooltip on the 1 cent charge with a link to a help article explaining how the accounting worked, but they were strongly against it and had us list the 1 cent as something like "other fees" with no explanation. They seemed convinced it was a thing that would happen only rarely, even as we were telling them otherwise. Anyway, that 1 cent charge just infuriated clients for some reason, and every month or two we'd get bug reports about it or requests to explain why it kept showing up. Fun times...


Why was it listed separately at all?

If the month's charges were 406.783228 and I get a bill for 406.79 then that seems perfectly good.

If I get a bill that says 406.78, plus a separate 0.01, that's weird.


> Why was it listed separately at all?

Because the bill included a detailed breakdown of all the fees by type and transaction. Typically our clients were charged a fixed monthly fee, a fixed authorization fee charged every time a payment was attempted (even if it was declined), and a fee applied to successful payments that was a percentage of the payment amount. There were other fees for things like processing chargebacks, but IIRC they were the same for everyone.

> If the month's charges were 406.783228 and I get a bill for 406.79 then that seems perfectly good.

Yeah, we definitely weren't allowed to round up and keep the change. I can't claim to know all the details involved but I suspect that doing so would've at least violated our contract with the payments gateway. Might've actually been illegal.

> If I get a bill that says 406.78, plus a separate 0.01, that's weird.

That's not how it worked. For the sake of simplicity let's assume that your activity is always the same, therefore you have new charges totaling exactly $406.783228 every month.

* Month 1: You owe $406.783228, your bill is $406.78, a balance of $0.003228 rolls over.

* Month 2: You owe $406.786456, your bill is $406.78, a balance of $0.006456 rolls over.

* Month 3: You owe $406.789684, your bill is $406.78, a balance of $0.009684 rolls over.

* Month 4: You owe $406.792912, your bill is $406.79, there's a $0.01 "other fee" line item on the bill, and a balance of $0.002912 rolls over.
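
For the skeptical, a few lines of Python reproduce that carryover exactly (ROUND_DOWN because sub-cent amounts are held back, not rounded to the nearest):

  from decimal import Decimal, ROUND_DOWN

  CENT = Decimal("0.01")
  monthly = Decimal("406.783228")
  balance = Decimal("0")
  for month in range(1, 5):
      balance += monthly
      billed = balance.quantize(CENT, rounding=ROUND_DOWN)
      balance -= billed            # the sub-cent remainder rolls over
      print(month, billed, balance)
  # month 4 prints: 4 406.79 0.002912  <- the "other fees" penny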


It's unlikely that it would be illegal, or violate the contract with the payments gateway, if you charged the customer 406.78, and you made up the $0.003228 discrepancy so the gateway is paid off.


I wasn't talking about rounding up. I meant exactly the same thing you're saying about month 4.

> Because the bill included a detailed breakdown of all the fees by type and transaction.

Was it all rounded [down] to the nearest penny?

That type of bill can already fail to add up to the total very easily, like x.xx4 + x.xx4 + x.xx4. So I'm still not sure why there was a need to have a line item to explain this single penny.

Was there only a single charge on each bill that used fractional pennies? So that this was the only time that things wouldn't add up perfectly?


Canada abolished the penny some years ago. If some retail item in a store costs $2.03 and you insist on paying with cash, you will have to use a nickel and pay $2.05.

Yet, life goes on; nobody goes to jail.


Generally such rules are only about cash, though.


> Anyway, that 1 cent charge just infuriated clients for some reason, and every month or two we'd get bug reports about it or requests to explain why it kept showing up.

There is an obvious alternative here, which is to just absorb that fraction of a cent, so the customer doesn't see it. Handling bug reports and infuriated clients costs more.

You could even just round up to the penny and ding the customer; if there is a 0.3 cent fraction coming from the payments gateway, turn it into a customer-facing penny, thereby collecting an extra 0.7 cents.

That way, too, there would almost certainly be zero complaints and bug reports.

If the customer is supposed to pay $103.45395 every month, you could turn it into $103.46, pocketing an extra $0.00605, or into $103.45, where you're out $0.00395.

Think about it; when you eat at a restaurant, the prices for a meal are often round, like $11.50, even though the restaurant's expenses are down to the penny. Why is that? Because they arbitrarily set the price. They don't say, oh, our ground beef supplier charges to the penny, so lunch will have to be $11.57.

Oh, the business teams found the approach puzzling---but what do they know, right? If they were smart, they would be software engineers.


> Oh, the business teams found the approach puzzling---but what do they know, right? If they were smart, they would be software engineers.

Well in this case the business teams selected the 3rd party payments gateway that the company would work with, negotiated the contracts with them, and worked with the 3rd party to set up how the customers would be charged. They and/or the product team determined that we'd use the 3rd party system to handle the billing, because building it in house wouldn't generate any new revenue. They weren't stupid, but they chose an approach to payments processing that was pretty low level (because it generated more revenue, of course), without anyone at the company having a good understanding of low level payment processing details. The engineers learned because we kind of had to, but three years into the project (when I left) business/product would still routinely struggle with the details.

So anyway, WRT billing, the task handed to engineering was: pull the billing detail from the 3rd party's API and assemble it into a statement that can be handed to a client. We had zero control over how the charges were generated, nor could we apply any rounding to the total.


If someone had a gun to my head saying, don't hide the sub-penny slices from the billing, I would just do the billing in thousandths of a cent, rather than make "leap cents" appear every couple of bills:

  Amount owing: $123.45678

  Please pay one of: $123.46 (a credit  of $0.00322 will be applied)
                     $123.45 (a balance of $0.00678 will carry forward)
On the next statement, if they paid $123.46:

  Previous balance: (  $0.00322) [credit]

  New charges:        123.45678  { here we have a detailed breakdown }

  Amount owing:       123.45356

  Please pay one of: $123.46 (a credit  of $0.00644 will be applied)
                     $123.45 (a balance of $0.00356 will carry forward)
etc.

That's literally "pull the billing detail from the 3rd party API and assemble it into a statement". Since the billing detail from the 3rd party API is in thousandths of a cent, that implies the statement must have thousandths of a cent.

If the two payment options were determined to be too confusing, one of the two could be dropped.


> I would just do the billing in thousandths of a cent, rather than make "leap cents" appear every couple of bills

Again, that was not something we could control.


I think that's a case for just dropping that cent. As long as it's your own money and not tax you're underreporting, it's not a problem, provided it's consistent.


Carrying the remainder microcents over to the next bill seems like overkill, and not necessarily correct. Accounting regulations and accepted practices have well-established rules on rounding, like banker's rounding (1).

You could probably treat the extra microcents as belonging to the next pay period. Though that's annoying: if I close an account, I'd expect it to be paid in full, not to have a few microcents remaining.

1: https://stackoverflow.com/questions/45223778/is-bankers-roun...


>A shocking number of people are unaware of how many decimal places of accuracy their local tax code requires to calculate VAT or sales tax correctly.

What is your jurisdiction? In Canada, I can't for the life of me imagine the CRA would remotely care about decimal-point accuracy. In fact, most of their online forms explicitly remove the decimals.


UK. The rules may have changed now; it's a long time since I implemented the rounding rules here, but the last time I did, it required 5 decimal places of accuracy. The rules also used to specify how you needed to account for line items vs. sub-totals in your invoices, to ensure you didn't find any "workarounds" to shave off some pennies of tax. (In fact, the last time was while the tax authority was still called the Inland Revenue, which it hasn't been for years.)

For aggregate totals of your VAT liability across your total set of invoices, you'd be fine with rounding up to the nearest pound, to the Inland Revenue's benefit. For individual invoices however, you were required to stick to very specific rounding rules.


> For aggregate totals of your VAT liability across your total set of invoices, you'd be fine with rounding up to the nearest pound, to the Inland Revenue's benefit

On personal tax forms you have to round in the taxpayer's favour. If your income is 12345.67 you round it to 12345. If your expense (say Gift Aid) is 12345.67 you round it to 12346.

Surprised it's the other way with VAT, but then I do very little with tax other than click a few buttons and confirm "yes, you have to tax me as I have children".


I'm not sure it's the other way with VAT, as I said it's been many years and the rules may well have changed multiple times.

The key, though, is that with taxes, if at all unsure you've got the rules right, the safest option is to round in the tax office's favour.

It's in general a lot less painful to explain an overpayment than underpayment if something is broken.

Of course, better yet, get it right.


That's for final amounts though, right?

I bet they care about you not throwing away decimals in intermediate calculations for VAT or sales tax.


I think the point is that the integer arithmetic implementation your CPU provides is wrong in at least one jurisdiction, so (for example) the machine code in the article is wrong.


> done something grossly incompetent like used floats for those calculations

So, in another life I worked on reporting software for a foreign branch of a US bank. You've heard of the bank. You would probably recognize the CEO's name, in fact.

We had been fucking this up for years. I fixed it. We had some customers who yelled at us because our reports were "wrong" i.e. they were double checking our work and apparently making the same mistake. They could not be reasoned with. Bear in mind, we're talking about differences of pennies, or a few dollars on very large transactions. Some of our customers insisted we were calculating the values incorrectly and demanded we "fix" it.

What do you think happened next? You have one guess.


> A shocking number of people are unaware of how many decimal places of accuracy their local tax code requires to calculate VAT or sales tax correctly.

I would imagine almost no-one knows this (-: What's a shocking number?


> What's a shocking number?

Almost every developer I've worked with who hasn't implemented invoicing or billing at least once and had their finance team yell at them for producing wrong numbers...

(and I'll edit this to add the limitation "who implement billing related software" - it's still true, and closer to my intended point)


> A shocking number of people ... are unaware of how many decimal places of accuracy their local tax code requires to calculate

A shocking number of people who create tax codes have no idea how many decimal places they are using.

It's probably better now, but I recall having to reverse engineer the tax tables to figure out how many decimal places of accuracy were used, and what rounding rules were used, so we could match their numbers.

These numbers would change from year to year, with no change in the underlying tax codes.


I always hated tables. At least when I last had to implement UK VAT rules the rules were very precisely defined. But I've had to deal with stupid tables implementing rules they couldn't be bothered to spell out before. Yikes.


In my experience, these are often written off as error.

If the error is less than their hourly salary rate, it isn't even worth mentioning.

If it's worth a day or two of salary, it's nice to fix, but never a priority.


For it to be written off as error you need to know the discrepancy, which means you need to know what it's supposed to be. When e.g. calculating the VAT or sales tax you owe the government, if the rounding deviates from legal requirements then unless it's in their favour you can be in for a bad time.


I think for most large businesses there are pretty considerable error bars here. If you say you owe 1,000,000 a year in VAT and the government says you owe 1,010,000, it's cheaper to pay the difference than to dig into why it's off.


If you report 1,000,000 and certify that it's the right number, and the government audits you and finds you should have paid 1,010,000, then depending on jurisdiction you might be entirely OK, or you might find you're paying not just the difference and interest, but also a fine, plus bearing the cost of additional audits going forward; and your finance director will not appreciate having to address questions aimed at figuring out whether anything criminal was involved. Repeat the mistake a few times, and the level of scrutiny will escalate.

There's a reason that in 28 years of working in software, the only thing the finance teams I've worked with have obsessed over is whether or not we get the VAT calculations right; the "sticker price" of the discrepancy has never been what they worry about. For calculations that do not involve getting tax amounts wrong, they often couldn't care less about much bigger discrepancies, but get tax wrong in the wrong jurisdiction and it's a lot of pain.


It's strange. In Russia, a small error in VAT will get you a letter from the tax service: "pay us the small error voluntarily, or we will schedule an inspection". The letter is automatically generated by the ASK-NDS system (translates as Auto Check VAT).


How do they know there's an error?


Cross-checks with your counterparties. For every bit of incoming VAT there should be outgoing VAT from your supplier, and for your outgoing VAT there should be incoming VAT for your client and/or a sale to a private individual.

If your incoming VAT is not matched by outgoing VAT from your supplier, you will be charged.

If your supplier declared VAT but failed to pay it, you have a choice: either you pay it, or you will be inspected to prove that it was not fake.

Every sale to a private individual in Russia must be uploaded to the tax service cloud. You (as a customer) can check your receipt online or in an app, and get a reward for reporting tax evasion.

This system boosted VAT revenue 1.5x in a few years.


Why would it be wrong to use floats?

That would have been my default assumption


When you're doing floating point arithmetic on a computer, it will approximate and round certain values in ways that don't match the way humans do it when they're, e.g. doing accounting.

So you need to run a massive physics simulation really fast? Yes, floats are great.

You need to calculate taxes on a massive corporation's fiscal year? Bad idea.

Some libraries advertise "arbitrary precision", many computer systems have a "decimal" type intended for currency, etc. and then they won't make all the same mistakes, but as the OP said you still need to control rounding rules and make sure they match the law.
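
The canonical demonstration, in Python:

  from decimal import Decimal

  print(0.1 + 0.2)                                 # 0.30000000000000004
  print(0.1 + 0.2 == 0.3)                          # False
  print(Decimal("0.1") + Decimal("0.2"))           # 0.3
  print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True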


> You need to calculate taxes on a massive corporation's fiscal year? Bad idea.

That depends on whether the hundred-billion-dollar corporation cares about being off by a dollar.

And by "off" I mean "different from how humans round", not necessarily further away from an infinite-precision calculation. In fact at "massive corporation" level I would guess that binary floating point is more accurate than a typical fractional penny system.


> That depends on whether the hundred-billion-dollar corporation cares about being off by a dollar.

How many hundred billion dollar corporations are private? Public companies would care a great deal about accounting accuracy.


Is it worse for a hundred billion corp to be off by a dollar than for a hundred million corp to be off by a third of a penny?


It's not so much how much it is off, but that it's off at all. If the numbers don't add up, then they don't add up. If there's any kind of difference, it has to be found and accounted for, and it becomes a needle-in-a-haystack search. Think about trying to find $0.05 spread across hundreds of thousands of transactions due to rounding issues.


I'm going to let you in on a secret...

Every single publicly listed company, every single one of them, is off when it comes to calculating their taxes by way more than just a dollar. And I don't mean clever accounting tricks or tax avoidance schemes, I just mean in terms of actual mistakes being made.


If they could just pay the dollar and never have to worry about it again, sure. But the point is for them to have confidence that the math is unimpeachable and identical to whatever auditor or tax official would compute at every step of the way so you don't just have to guess at correctness with some waving of hands.


Surprisingly common values like 0.1 don't have a precise representation in binary for most formats, including standard floating point number formats. See https://0.30000000000000004.com/ for more detail than you can shake a stick at.

Also, if the local tax code states using 5 decimal places for intermediate values, then you will introduce “errors” using formats that give greater precision as well as those that give less precision. Having worked on mortgage and pension calculations I can state that the (very) small errors seen at individual steps because of this can balloon significantly through repeated calculations.

Furthermore, the name floating point gives away the other issue. Floating point numbers are accurate to a given number of significant figures not decimal places. For large numbers any decimal places you have in the result are at best an estimate, and as above any rounding errors at each stage can compound into a much larger error by the end of a calculation.
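
Both failure modes are easy to show in Python (a float64 carries roughly 15-16 significant decimal digits):

  # small fractions are inexact...
  print(f"{0.1:.20f}")        # 0.10000000000000000555
  # ...and large magnitudes lose the cents entirely
  big = 1e16
  print(big + 0.01 == big)    # True: adding a cent changes nothing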


IEEE standard floating point uses a binary mantissa.

And binary has trouble representing fractions that are common in prices:

  $ bc
  obase=2
  scale=20
  1/5
1/5 in binary is a repeating binary fraction: 0.0011001100110011...

Just as you can't express 1/3 or 1/7 precisely as a non-repeating decimal fraction, you can't express 1/5 and 1/10 as a non-repeating binary fraction. As a result, most prices involving cents in currency cannot be expressed precisely as binary floating point numbers.

edit: fixed formatting


The biggest issue is that you now need programmers who know about epsilon comparisons and error propagation when working with inexact numbers. Then you need to know when to fudge the visual representation of your inexact number (and you probably also need to understand when your programming language / libraries fudge the output for you).

FP numbers have their uses, but they're better reserved for scientists doing actual scientific stuff, not for representing what are actually tiny numbers (in the grand scheme of things) which can be represented perfectly by other means.


If there's an applicable law or regulation that says "you must do x", and you do y (and that yields different results), you'll get into trouble, even if your way yields "better" or "more accurate" results.

This is not to say that using floats and rounding correctly necessarily does yield different results, by the way (although most likely it will) – but if they do differ, you're going to have a bad time using floats.



> 0.3-0.2-0.1

-2.7755575615628914e-17

And now you overdrafted


Floating point calculations without some final rounding step before presentation/export/storage are almost always wrong, since you're implying much more precision than is justified by your source data.


The problem isn't rounding the final result. The problem is that the source data itself can't be accurately represented.

There is no floating point value equal to 0.3.


That’s not a problem by itself.

You can represent 0.3 as 0.300000…0004, which rounds to 0.3 again in the end.

But you need to reason about the number and nature of intermediate operations, which is tricky, since errors usually accumulate and don’t always cancel out.


> That’s not a problem by itself.

No, it really is the original sin here.

> since errors usually accumulate and don’t always cancel out.

The problem is that from the system's perspective, these aren't "errors". 0.3000000....4 is a perfectly valid value. It's just not the value that you want. But the computer doesn't know what you want.


> The problem is that from the system's perspective, these aren't "errors".

When I say "error" here I mean the mathematical term, i.e. numerical error, from error analysis, not "error" as in "an erroneous result".

There is a formalism for measuring this type of error and making sure it does not exceed your desired precision.

> It's just not the value that you want.

My point is exactly that if you're looking at 0.300000...4, you aren't done with your calculation yet. If you stop there and show that value to a user somewhere (or are blindly casting it to a decimal or arbitrary precision type), you are using IEEE 754 wrong.

You know that your input values have a precision of only one or two sub-decimal digits, in this example, so considering more than ten digits of precision of your output is wrong. You have to round!

It's the same type of error that newspapers sometimes make when they say "the damage is estimated to be on the order of $100 million (€93.819 million)".

Yes, this is often more complicated and error-prone (the human kind this time) than just using decimals or integers, and sometimes it will outright not work, since it's not precise enough (which your error analysis should tell you!). But that doesn't mean that IEEE 754 is somehow inherently unsuitable for this type of task.

As a practical example, Bitcoin was (according to at least one source) designed with floating point precision and error analysis in mind, i.e. by limiting the domain of possible values so that it fits into double-length IEEE 754 floating point values losslessly – not because it's necessarily a good idea to do Bitcoin arithmetics using floating point numbers, but to put bounds on the resulting errors if somebody does it anyway: That's applied error analysis :)


Do it a million trillion times and we're talking cents overdrafted (almost)


If a rounding error put me a million-trillionth of a cent into my overdraft, I’m pretty sure my bank would still activate that $20/mo overdraft fee :P


If you just add up the errors, sure. The bigger risk is tipping values the wrong direction right before applying a rounding step, or ending up with an error right before multiplying a now-wrong per-unit value by some large-ish factor.

Often these things are not a big problem on their own, but they later get compounded because someone does something stupid like passing these imprecise values around to be distorted further all over the place.

And sometimes the reason it doesn't become a legal problem turns out to be because your finance department quietly works their way around it by expending expensive manpower accounting for discrepancies that shouldn't be there in the first place, and so increases the cost to the business by many magnitudes over the loss the developers might have assumed to be the worst case (if they're aware of the discrepancy at all).

This is one of those things you can get away with many times, many places, with no ill effects. But when it finally bites you it can get expensive and/or really bad to deal with, and it's fixed by simply never doing money calculations on datatypes with imprecise arithmetic, and having a five minute conversation with your finance team about what your local rules for rounding tax amounts are.


- "or end up with an error right before multiplying a now wrong per-unit value with some large-ish factor."

Where in financial accounting do people multiply an amount of money by a multiplicand larger than order-of-unity?


In accounting, no, while preparing input to the accounting in the form of generating invoices, I've lost count (sorry) of the number of times I've seen people doing tax calculations etc. on unit prices and then multiplying by number of units ordered, and then further compounding potential issues by adding up these numbers from multiple invoice lines. None of which is usually the right thing to do, all of which you often "get away with" without causing sufficient discrepancies, and so which people often fail to catch in testing. Until you suddenly don't.


- "multiplying by number of units ordered,"

Yeah, that's one example. I wasn't imaginative enough; thanks!


Decimal systems have to round too, so that's a pretty weak dismissal.


Laws are typically written by humans, and we use base 10, not base 2. We think $0.03 is an exact number, but floats can’t represent 0.03 exactly.


I think the historical interpretation is also relevant. The systems that did accounting before digital computers used base 10, so the first computerized systems for accounting used base 10 as well. This legacy extends to the point that mainframes often had (and I believe still have) special decimal math instructions. One way to accomplish this is BCD (binary coded decimal), where numbers are stored in base 10 using a 4-bit encoding per digit; I believe this can be arbitrary precision, but I don't have any experience myself. Some hardware also has decimal32 and decimal64 floating point support, which is part of recent versions of the IEEE 754 spec [1]. Databases also often have a DECIMAL type for doing calculations on money values [2]. So I think it's not just that laws say it should be a certain way, but also that it is important to maintain consistency between systems over time.

1: https://en.wikipedia.org/wiki/Decimal64_floating-point_forma... 2: https://dev.mysql.com/doc/refman/8.0/en/precision-math-decim...


Floats lose precision unexpectedly with certain fractions that are perfectly representable in decimal, and also with certain integers once you get high enough.

The standard in ad-tech (not sure about banking) is to use int64s representing either microdollars or microcents, for a max capacity of roughly 9.2*10^12 or 9.2*10^10 dollars.
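
A sketch of the idea (plain scaled integers; the numbers here are illustrative):

  MICRO = 10**6                     # microdollars per dollar

  cpc = 2_375_000                   # $2.375 cost per click, exact
  spend = cpc * 1_234               # integer math, no drift
  print(spend / MICRO)              # 2930.75 (display only)
  print((2**63 - 1) // MICRO)       # ~9.2e12 dollars of headroom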


Floats are an imperfect representation of real numbers, and as such there are infinitely many real numbers that cannot be accurately represented with floats (and doubles).

It gets even worse when you start doing calculations on floats/doubles.

These inaccuracies are OK for a lot of things. Graphics often uses floats, and the errors are small enough that they don't matter.

But currency absolutely needs to be accurate, and for that reason floats/doubles are inappropriate.


Floats are not how you do money math, if you run into anyone trying to do money math and they say they use floats, there is a 99.999999% chance they are wrong.

> Maybe you value arithmetic correctness over simplicity of procedures

Lots of places (not typically the USA) have this codified in law. For example, do a web search for: EU money rounding rules. You will find several different rounding and precision rules, depending on the context of what you are doing with the money, all from places like the Central Bank and the EU Commission.

It's mostly US developers that are clueless here, because US laws are fuzzy at best, and the general rule is, you do whatever your bank/regulatory authority does, and if they don't happen to know (and I've met several that don't), then you have to figure it out yourself.

In the USA, we use decimal.ROUND_HALF_UP, because we have seen in practice this is what our USA-based banks & govt tend to do in the wild. It should be noted that IEEE 754 recommends decimal.ROUND_HALF_EVEN as its default rounding rule. https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules

In other places, we do whatever their laws require, or treat them like the USA and do whatever our local bank/govt authority tends to do in practice.
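
For the curious, the difference between those two modes only shows up on exact ties, e.g. with Python's decimal module:

  from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

  cent = Decimal("0.01")
  x = Decimal("2.665")   # an exact tie at the third decimal
  print(x.quantize(cent, rounding=ROUND_HALF_UP))    # 2.67
  print(x.quantize(cent, rounding=ROUND_HALF_EVEN))  # 2.66 (6 is even)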


I agree with the parent. Almost everyone thinks floats are fine. I work in lending, and many of my coworkers, who claim to have CS degrees, do not understand floats at all.


GP isn't just saying "don't use floats", though. (And even that is only a heuristic: It's possible to get correct results using floats, but you need to be very diligent about when and how you round, so in practice it's easiest to just avoid it.)

They're saying that only arbitrary precision arithmetic is acceptable, and additionally claiming that everybody else in the world gets money arithmetic wrong.

I doubt both of these statements, and especially the assertion that there's exactly one "correct" way of doing arithmetic with money.


Arbitrary precision decimals are the best solution in most cases. I have worked at multiple companies where people represent currency with floats and then wonder why they get strange results in some cases.


I have a personal anecdote on this subject. A long time ago I worked at a bank and I had to calculate a large number of accounts regarding agricultural loans. These were state sponsored loans. When I finished my task (this was a Java job), I found that sometimes the results were off by $0.01. So I asked my boss how I should do the rounding, to which he replied that an error of up to $1 was acceptable. If I recall correctly, the amounts were in the hundreds and thousands.


On my first day in a new company, not even a senior dev yet, I met with the head accountant. I asked about her top problems; she said her top problem was that the application would produce a different invoice on screen, a different invoice when printed as PDF, and a different invoice in the accounting software. About 1% of all invoices were affected, but due to the amount of billing they were doing (telecommunications and advertising) they needed 3 FTEs just to correct the invoices.

And correcting the invoices meant playing with numbers so that at least the PDF and accounting software agreed on the total value and tax.

She also said they had at least 2 different employees and an external company look at it and not able to fix it. She also told me not to bother because she does not believe the problem can be fixed (that's what she was told).

I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF). And of course it would send the invoice to the accounting software which calculated the invoice in a different way still.

I ran couple of experiments to reverse engineer how the accounting software did the calculations -- the exact order of them and the exact rounding rules. Then I built a small module that captured those calculations. Then I changed all doubles to arbitrary precision.

It took two days and the problem was fixed, but it took a couple more days before the accounting department actually believed it.


> I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF).

That's actually kinda normal, in some industries.

For example, an amazon marketplace seller's warehouse management system might not be tightly integrated with Amazon's basket/checkout display logic.

In some situations the results of recalculating are supposed to be different. For example, if there's a "5% off when you buy 3 widgets" offer and you check out with 2 widgets in your basket, the offer doesn't apply. But if you checked out 3 widgets, thus getting the offer, then the seller found they were low on stock and could only send you 2, you should get the 5% discount on those 2.


> It took two days and the problem was fixed but it took couple more days before accounting department actually believed it.

Those 3 people that now didn't have a job probably weren't all that happy xD


If there ever was a reason to apply DRY at all costs, this is one.


> I looked at the software, it had two separate copies of the invoice calculation (separate for on screen and for printing to PDF). And of course it would send the invoice to the accounting software which calculated the invoice in a different way still.

Your setup made it sound like something crazy and inane, like "the PDF printer used on those machines changed floating point rounding modes" or what have you.


In olden times we would "print to PDF", as in: have a little piece of code to format the document that would be sent to a printer, which could be a PDF printer.

In this particular case we had two separate pieces of code, one running on the client (for on screen presentation) and one on the backend (to create the PDF on a shared location and produce a download URL).


There was the famous "Xerox photocopier changes digits sometimes" bug, but that's not what's happening here.


The hero I want to be


It is easy to be a hero when the company is shitty.

My career advice is to work in a field / company / job where you can be somewhere in the top 10-20% of all employees. Just don't overdo it; if you are top 1%, you are probably aiming too low and could be working in a better-paying, more rewarding field / company / job.

For a lot of my career I was working for financial institutions like banks. A lot of really badly managed projects with definitely not top level developers. Easy to be a top performer. I really like helping people and projects and it was working well for me especially when it was easy for me to provide valuable help.

I got hired once for a really good company with really top performers and suddenly I lost the status that I was so used to. I was keeping up with my work, sure, but I was no longer a shiny star. I got back to working for banks.


Exactly. I think many people start overthinking things in banking. Most accounting/finance departments are ok with rounding pennies every month.

I run a Commercial Real Estate Servicing platform, where we are accruing interest on large balances daily. Our method is to not do the rounding daily, but add up all the numbers for a given period, say a month, and then round to the penny and create a single adjustment rounding transaction along with it. Accounting departments love us for it.

If we rounded daily before storing the amount, accounting usually has to make an adjustment of at least a few pennies every month. With our method, it's roughly $0.01 per year with monthly periods, usually adjusted at the very end. Which, on a $20MM loan, is very well within the bounds of acceptable.
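
A sketch of that scheme in Python (the function name and the half-up mode are my assumptions, not necessarily what the platform actually does):

  from decimal import Decimal, ROUND_HALF_UP

  CENT = Decimal("0.01")

  def close_period(daily_accruals):
      """Sum the unrounded daily accruals, round once at period end."""
      exact = sum(daily_accruals, Decimal(0))
      posted = exact.quantize(CENT, rounding=ROUND_HALF_UP)
      return posted, posted - exact   # total + one rounding adjustment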


I love how small details like this matter: always rounding 2.50 up would significantly skew the numbers toward higher values, so there are modes like ROUND_HALF_EVEN that round ties to the nearest even digit, splitting them between up and down.


Yeah. I worked on a lending platform that used floats (!!!), and their response when I brought this up was that as long as the result was within something like $10, it was not an issue.

I brought up specific math problems that floats couldn't handle, and they weren't fazed.


The real reason is that Catherine Zeta-Jones and James Bond actually did implement a program in Malaysia that collects all those rounding errors in a separate bank account. And since it went global, affecting everyone, everyone thinks it is simply normal.

Or it is because rounding errors happen and accounting is a bitch. The first option makes for a better movie plot though.


I guess I’m old because I credit that plot to Superman III. TIL it’s called Salami Slicing https://en.wikipedia.org/wiki/Salami_slicing_tactics#


Man, you and I remember Office Space differently! /s


Whenever I input my tax data on forms, it always rounded to the nearest dollar. It was strange that accuracy didn't seem to be a big priority.


I imagine over millions of returns it probably evens out. Also, for the majority of taxpayers in the US, the tax table does things like:

AGI is 40,001 to 40,025 then your tax is X dollars.

Being accurate to the penny isn't worth the trouble.


Yes, practically I can see why. But when my W2 contains cents, and my tax forms make me sign for accuracy under penalty of whatever blah blah, seems pretty odd that they don't accept my cents and then do their own internal rounding. Maybe it was just the tax software I was using.


Sounds like the start of a movie script



I'm assuming you did the correct thing and engaged in an Office Space-esque penny-stealing operation after learning this? :)


I had such a case too; the solution is simple:

round in favor of the bank / financial institution you are working for


No, it does not work this way. Any good accountant will see a difference in values, even the smallest one, as a sign of an incorrect calculation. It does not matter which way it goes; they will feel compelled to figure out what is wrong. At least a good accountant will.


There is no accounting error; let's say the customer has purchased / used a service for 1.433 USD.

You issue an invoice for 1.44 USD (aka, amount due), then the 1.44 USD is used as a basis for accounting and is all consistent.

Then, if you are a nice company and the situation applies in your case, you may issue a credit in favor of the customer for 1.44 USD - 1.433 USD that will be used as a discount on a future invoice

The best part is that the moment where you decide whether or not to issue the credit invoice is the perfect moment to track the rounding errors, and even keep a very detailed journal of the entries (e.g. for auditors).


You added 10 of those items to inventory with a total value of 14.33, then you sold each individually, for a total value of 14.40. After those transactions, your inventory account has -0.07 on it, but there aren't any items at all there.


From experience, those differences surface during stock taking. And most companies are really bad at that. And when they surface, they are corrected by inventory adjustments (in units of measure, not value, which is a different can of worms). As long as those adjustments aren't too extreme, nobody really cares.

A good accountant will sooner or later investigate those rounding errors, though, as they will show up somewhere ultimately. And a general policy of rounding in one direction is the last thing you want an auditor to find.


So accountants are like number detectives, doing what's essentially debugging work just like a coder would?


They design the system, detect the failures, explain and correct them. So their work is even more like programming than your comment implies.


One can use more than 2 decimal places for rates, but final invoicing has to be two decimal places (at least in most countries). Banks/tax authorities don't carry anything beyond 2 decimals.

Eg: fuel is usually priced with extra decimals, so 4 gallons x 25.5444 USD/gallon gives $102.1776 to 4 dp, but will be billed as $102.18.
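
In code that's a single rounding step at billing time; a Python sketch (half-up assumed):

  from decimal import Decimal, ROUND_HALF_UP

  amount = Decimal("4") * Decimal("25.5444")                # 102.1776
  billed = amount.quantize(Decimal("0.01"), ROUND_HALF_UP)  # 102.18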


Any rounding discrepancies simply get posted to a rounding-error account. For example, Oracle's ledger will complain if there is rounding and no rounding-error account to post to. https://support.oracle.com/knowledge/Oracle%20Cloud/2411363_...


That is absolutely not how it works. Rounding is always exactly specified in the underlying contract, and you need to implement the correct rounding. For example, here is the rounding table for compounding calculations in the ISDA definitions (these are very standard for a wide range of contracts, but this particular table is for various overnight swap rates used in interest rate derivatives) [1].

[1] https://globalmarkets.cib.bnpparibas/app/uploads/sites/4/202...


Interesting side note: ASME also has a standard for rounding on engineering drawings. A few years back I had to build a custom function in Excel to match the standard, because our calcs weren't matching the customer's calcs.


And how do they round ties? Does 0.00005% round to 0.0001%? Is tie breaking usually included in such contracts?


For these benchmarks, yes, that is defined.

Where issues could come in is when these things are multiplied with a bunch of other numbers (each number with defined rounding, but not after each operation) and then have some defined rounding at the end. There, different computer numerics could give slightly different results, but those can easily be resolved on settlement (for small stuff that stays well within the back offices; at least that is how I remember it).

Also, it's not totally unusual for one or both participants to forget about some rounding they might have agreed bilaterally, if it was some one-off etc.


I've worked in banking.

I assure you, I would have had a million bugs filed on that before it even hit production.


Heh. But usually it's the other way around, unless they remembered to specify it the right way. Forgiving $0.01 to each of N customers is cheaper than dealing with an irate customer.


People care less about the dollar value than about reconciling. If there is some external system they should match, they really want it to match exactly.



I did join somewhere that could do it correctly, because they had some very long-running POS software. It could even do things like "split bill three ways" correctly, allocating both the spare penny from the division and the tax calculation, such that you could add the bills back together again and get the same numbers as the split bill.

Using a "money" class that stores things as integer pennies gets you a long way there. "Division" and its close friend "multiply by noninteger number" are the only real problems, so you need to be careful not to provide a generic method and instead methods like divideWithLocaleTaxRounding(). You also need to check whether you're supposed to apply tax per-item or you can do it to the whole bill.

I think we had an "apply tax to the whole bill, then re-distribute it to line items" method, which guaranteed that the total would be correct.
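
A sketch of that re-distribution idea in integer pennies (my own reconstruction, not their code): compute the bill-level tax once, hand out each line's floor share, and give the remainder to the last line so the line items always sum back to the bill total.

  def distribute_tax(total_tax, line_totals):
      """Allocate total_tax (pennies) across lines, proportionally."""
      grand = sum(line_totals)
      shares, allocated = [], 0
      for line in line_totals[:-1]:
          share = total_tax * line // grand   # floor of the exact share
          shares.append(share)
          allocated += share
      shares.append(total_tax - allocated)    # remainder to the last line
      return shares                           # sum(shares) == total_tax

  distribute_tax(200, [999, 999, 1002])       # -> [66, 66, 68]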

There are reasonable arguments for "integer decimal fraction of a penny" as the correct unit. Digikey prices some parts in 0.1 of a penny or cent, for example.

Attempting to convert things between binary "fractions" (floating point) and decimal fractions will result in misery.

I think we also had a "rational number" class for storing things like "1/3".


Integer cents is fantastic for POS software and similar things that deal with at most a few thousand dollars at a time. The place where it starts to fail is when the absolute numbers get really large, not really small. Think "United States Federal Reserve" or "UBS". Then remember that some of these institutions need to deal with accounts denominated in Zimbabwean dollars.


Really small can be an issue when it gets really small e.g. tarsnap’s accounting is pretty infamously done in attodollars (1e-12). U64 only has room for 19 decimal digits so you’re limited to 7 figures up.


Tarsnap's pricing is in picodollars (1e-12) but the accounting is attodollars (1e-18). Tarsnap uses 128-bit balances -- 64 bits of attodollars and 64 bits of whole dollars.

If I ever have someone paying me more than 2^64 dollars, I'll rewrite Tarsnap's accounting system.


Funny thing, I actually wrote one POS application (magstripe + EMV chip & pin + contactless).

EMV uses integers encoded as BCD for money. If I remember correctly, in most cases it is 6 bytes, i.e. 12 digits. That is more than 2^32 can hold (most POS machines were 32-bit ARM until relatively recently).

The terminal I worked on had a 32-bit ARM. Rather than convert 12-digit numbers to 32-bit integers, I decided to write my own arithmetic library that did operations directly on BCD strings of arbitrary length (though in practice EMV only allows 6 bytes for the amount anyway).
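
For illustration, decoding such an amount is simple (this Python sketch decodes BCD to an int rather than doing arithmetic directly on the BCD string, as described above):

  def bcd_to_int(data: bytes) -> int:
      """Decode packed BCD, two digits per byte, to an integer."""
      n = 0
      for b in data:
          n = n * 100 + (b >> 4) * 10 + (b & 0x0F)
      return n

  bcd_to_int(bytes.fromhex("000000012345"))   # 12345, e.g. minor units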


The current US national debt represented in integer cents requires 52 bits. It can trivially increase 4,000x before we need to worry about 64-bit balances.


Peak inflation in Argentina was 20262.80%.

So, not as much margin as one might think.


Python can deal with 1000+ digit integers and you can store them as strings in the database. Not sure about other languages.


Rounding monetary values is a complex, opinionated and business-defined operation. Sometimes it's even orthogonal to the datatype used.

Things get out of hand when you need to round multiple different things that have to sum up at the end.

For example:

- Items in an invoice are rounded and summed. (eg. $1.1234 * 5.678kg)

- Payments of an invoice can be paid in multiple installments, with interests that are also rounded (eg. 1.77% per month).

- The value paid of interest *per item* must match the total value paid of interest in all installments of all invoices in the same period.


Would IEEE decimal128 be sufficient (instead of arbitrary precision)?

"Formally introduced in IEEE 754-2008, it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations."

https://en.wikipedia.org/wiki/Decimal128_floating-point_form...


Hypothetical solutions that do not exist are none of my concern.

Did you know different countries and different currencies have different rounding rules, for example for tax-related calculations? Does "IEEE decimal128" support this? Unless you can get all countries on our planet to agree on a single standard, any solution that does not allow specifying rounding rules is pretty much useless (unless you want to implement rounding yourself, which tends to be very tricky -- I know because I attempted this a couple of times).


This is not hypothetical.

Yes, of course IEEE decimal supports setting the rounding mode. The authors of the spec aren't ignorant of what's needed for financial and tax computations.

Use fe_dec_setround from ISO/IEC TR 24732, "Extension for the programming language C to support decimal floating-point arithmetic".

The modes listed at https://www.ibm.com/docs/en/zos/2.5.0?topic=functions-fe-dec... are:

  FE_DEC_DOWNWARD
    rounds towards minus infinity
  FE_DEC_TONEAREST
    rounds to nearest
  FE_DEC_TOWARDZERO
    rounds toward zero
  FE_DEC_UPWARD
    rounds toward plus infinity
  FE_DEC_TONEARESTFROMZERO
    rounds to nearest, ties away from zero
  _FE_DEC_AWAYFROMZERO
    rounds away from zero
  _FE_DEC_TONEARESTTOWARDZERO
    rounds to nearest, ties toward zero
  _FE_DEC_PREPAREFORSHORTER
    rounds to prepare for shorter precision


The authors of the spec made some provisions, and very likely all of them are useful and correct; the issue is how programmers will use them, and in some cases there isn't even a "correct" solution that everyone uses.

A classic example in invoicing is an item that is advertised for 60.00 (to the final user) VAT 10% included.

If you try making an invoice for that sum in a few programs you will find three or four way it is implemented.

Some will have 54.54+5.45=59.99, some will have 54.54+5.46=60.00, some will have 54.55+5.46=60.01 (and possibly a "discount" of 0.01), some will have 54.545+5.455=60.00, some will have 54.5454545+5.4545455=60.00.
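
Two of those variants spelled out in Python, showing where they diverge (the rounding modes are assumptions about what the respective programs do):

  from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

  CENT = Decimal("0.01")
  gross, rate = Decimal("60.00"), Decimal("0.10")
  net = (gross / (1 + rate)).quantize(CENT, rounding=ROUND_DOWN)     # 54.54

  tax_from_net = (net * rate).quantize(CENT, rounding=ROUND_HALF_UP)
  print(net + tax_from_net)      # 54.54 + 5.45 = 59.99
  print(net + (gross - net))     # 54.54 + 5.46 = 60.00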


Yes, it isn't possible for the software to know your local accounting laws and practices.

My point is it's probably better to use existing, well-tested provisions than to build your own from scaled integers, to get whichever of those results you need.

As a bonus, you might get hardware support in the future.


Yes, I understand what you are saying, I was highlighting that those (if adopted) would only fix (maybe) part of the problem, they are just (better) tools.

At the end of the day what I want (and I presume any other customer wants) is a correct invoice with the correct net, tax and total, and this will only happen when (if) the programmer understands the base issues and uses the correct library/algorithm/whatever.


This list seems to be missing something, since AFAIK IEEE floating point also specifies round-to-even (bankers' rounding) for round-to-nearest ties. Unless 'FE_DEC_TONEAREST' is that; the documentation does not say.

https://en.wikipedia.org/wiki/IEEE_754#Roundings_to_nearest

EDIT: apparently IEEE does not specify a "round-to-odd" for ties despite this having been used for banking in the UK :/

https://en.wikipedia.org/wiki/Rounding#Rounding_half_to_odd


For goods items, Danish customs require specifying 3 decimals for weights under 1kg, otherwise no decimals. Off the top of my head I don't recall exactly how they expect rounding to be done, I'd guess towards infinity.

Many duties are calculated based on net weight, and often the net weight per goods line is the result of a calculation, for example you're importing N items with a per-item weight of X. If you have a large number of goods items above 1kg but less than 10kg that has weight-based duties, the rounding mode can matter a lot.

None of the rounding modes mentioned captures this below/above 1kg split, so you have to do this in code anyway. Might as well do the rounding there too, to be sure some injected code doesn't mess up the expected rounding mode or similar[1].

[1]: https://irrlicht.sourceforge.io/forum/viewtopic.php?t=8773


Sure, you'll need to handle special cases yourself. But perhaps you don't have to handle all the cases yourself?

As I understand it, one of the new things in IEEE 754 is the idea of a "context", which stores this information. This can be global, but does not need to be. With Python's decimal module it is a thread-local variable.

If you are concerned about, say, mixing thread-local and async, you can also use context methods directly, like:

  >>> import decimal
  >>> x = decimal.Decimal("54.1234")
  >>> y = decimal.Decimal("987.340")
  >>> x+y
  Decimal('1041.4634')
  >>> decimal.getcontext()
  Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
  capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero,
  Overflow])
  >>>
  >>> c1 = decimal.Context(prec=4)
  >>> c1.add(x, y)
  Decimal('1041')
  >>> c2 = decimal.Context(prec=5)
  >>> c2.add(x, y)
  Decimal('1041.5')
  >>> c3 = decimal.Context(prec=5, rounding=decimal.ROUND_DOWN)
  >>> c3.add(x, y)
  Decimal('1041.4')
I don't know what the C or C++ API proposals are.


What do you mean by hypothetical solution that does not exist? This is actually specified.

IEEE does specify multiple rounding modes. Does it make more sense to use an existing spec, or roll your own numeric library with rounding modes?


I'm going through this pain right now in helping my kid with homework in rounding and discovering my internalized rules are different from what they're being taught now. EXACT same pain as yours, depending on the precision and rounding rules of your pain scale.


Four and below rounds down to zero, five and above rounds up to 10.

What has changed? Or how different were your internalised rules?


Round to the even number, which eliminates a bias toward the higher number. This is the way Python does it.
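
E.g. Python's built-in round() ties to even:

  print(round(0.5), round(1.5), round(2.5), round(3.5))   # 0 2 2 4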


Well, you did turn your hypothetical ignorance into a practical one; I've got a calculator using it... an actual physical object.


Must be a SwissMicros!


> I have never in my life joined a software project for any organisation that was able to do basic arithmetic on money correctly

This observation should tell you that it's actually quite viable to be off, as long as the errors are small enough.


Quite right. Either this OP spends every day writing letters to banks, shops and credit card companies complaining about the fractional cents they have been cheated out of (probably in green ink), OR they should be able to recognize that plenty of people have good-enough solutions for this.


As someone who works in fintech, my observation is not that the caveat is "as long as the errors are small enough" but rather "as long as whoever governs the business logic is aware of the impact".

The size of the errors almost always is a large factor in their decision, but ultimately the software we write exists to serve the needs of the business, and if the business decides that larger errors are okay for some reason, then so be it.


The world runs on "good enough."


True, but the world also runs on standards (whether explicitly defined or customary), and doing things differently from everybody else makes it painful to work together.

Sometimes there's also value in doing something objectively poorly, but in a predictable and well-understood way.

Unilaterally starting to "do numbers better" sounds like a recipe for, let's say, interesting times in the finance/accounting world.


It's viable because customers generally have no recourse and companies don't care as long as the problem is in their favor.


Yeah even in university in my intro-level accounting classes they said in the real world nobody cares about discrepancies less than a dollar (and that amount scales with the size of the business). I don't know if that's actually true and I wasn't an accounting major so I don't know what they said in the more advanced classes.

But if I imagine myself as a business owner I would be annoyed with my accounting firm if they spent billable hours chasing down a discrepancy of a few pennies.


Years of practice have led me to this practical wisdom:

As long as things are consistent, no one cares if you are correct. If you lose a penny in the backend calculation, and the frontend shows the amount without the penny, and the email contains the amount without the penny, and the PDF download contains the amount without the penny, no one will care that there should be a penny there.

It becomes problematic if some places are wrong and some are right, and they are not consistent. You won't get credit for being right in only some places.

As long as the error is small enough to be inconsequential, being consistent is more important than being correct.


What are the specific challenges to writing financial software? What are common mistakes you see? What are common data structures for representing money (both the common incorrect implementations but also the correct implementations)?

Also, provided you have a data structure that can represent money, you should presumably be able to serialize that data structure and store or send it just like any other data, right? Why do you need special database or wire format support? The trivial example is to marshal it to JSON and put it on disk or write it to the network using the protocol suite of your choice, right?


Strings (JSON) and decimal/numeric (DB) are enough to passively store amounts. Calculations and rounding are going to be funny, though. E.g. split a $10 bill into 3 equal parts, store them, then sum them back up to $10.


Are there actually some systematic approaches to handle these cases? Or are there some libraries making this easier? We have so many places where we keep track of the offset and spread it over the items afterwards to mitigate this. It's annoying, as it's most often a two-step process, and it becomes even more complex when you have constraints on the numbers, like applying discounts.


In this particular case, it's easier to think of the problem as "allocating $10 among 3 parties" rather than "dividing $10 among 3 parties." The latter insinuates equal distribution. Often, having the 3 parts sum back up to $10 is more important than giving an extra penny to one person and not dividing equally.

What I've used before is "Express the value in pennies. Divide by X (number of parties). Take the whole number part and give it to each person evenly. Take the modulus and distribute it one by one to each remaining party, until there is nothing left."

Example: allocate $10.01 among 3 parties:

1. You have 1001 pennies

2. Divide by 3 - give each person 333 pennies

3. Take the modulo 1001 % 3 - you have 2 pennies remaining. Time to distribute them round-robin!

4. Give one penny to person A. You have 1 penny remaining.

5. Give one penny to person B. You have no pennies remaining.

If you do this many times, randomly establish the order of the parties each time you round-robin them. In that case, no one will be systematically under-allocated, just because their name starts with the letter Z.

This is not a universal truth, and each situation is different. It may not apply to other kinds of money dividing situations.
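
A minimal sketch of that procedure in Python (the function name and the shuffle-based fairness policy are illustrative, not a standard API):

  import random

  def allocate(total_cents, parties):
      """Split an integer number of cents so the parts sum back exactly;
      the remainder is handed out one cent at a time in random order,
      so nobody is systematically under-allocated."""
      share, remainder = divmod(total_cents, len(parties))
      result = {p: share for p in parties}
      for p in random.sample(parties, remainder):
          result[p] += 1
      return result

  # Allocate $10.01 (1001 cents) among three parties:
  print(allocate(1001, ["A", "B", "C"]))  # e.g. {'A': 334, 'B': 334, 'C': 333}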


I never said you need special database or wire format support. Maybe, if you are using JSON, transfer monetary values as strings rather than decimal or float data types. Some databases can actually handle money; some can't. Those that can't usually require you to store money as strings to avoid losing information. When you get warned by your DBA that the database can't do arithmetic on strings, tell them "thank you, it could not do correct arithmetic anyway".


Just use integers, and fields ending with "_cents". There is no ambiguity, rounding problems are not even an issue, serialization and deserialization will pose no problem when dealing with external actors, and you can still sum money easily in your db requests.

Problems will arise when you'll do multiplicative operations on money, for instance when working out taxes. There are precise rounding rules to apply, and the solution is to tackle these issue one abstraction layer above, on the operations rather than the values, because some time you'll want to carry out rounding between each tax operation, sometimes at the end, on one line or on a whole batch of transactions.

Other problems you'll bump into stem from the asynchronous nature of money flows. You won't realize it with credit cards (well, you'll figure it out soon enough when you stumble upon "race conditions"), but this becomes explicit when dealing with mandates, direct bank transfers or checks. You need to move the money out of sight of the user into a ledger specific to that person that holds transactions being processed (which can take days or weeks in some cases) and move it back to the original ledger if the transaction fails. Otherwise you'll bump into issues of double spending. This is sometimes out of your control (I had the unfortunate experience of issuing withdrawals multiple times on a big bank's payment processor API, and these dunces sent the money multiple times).

Use idempotency keys profusely, as well as (distributed) locks. The hardest part is not getting your part right, it's handling external actors' bad implementations. Also, fuck HTTP with hooks. Some actors do not even make sure you received the webhook. The non-binary aspect of HTTP is a bullshit argument and I'd gladly trade it for a binary protocol with strong quality of service like MQTT, since I'll have to implement some kind of smart broker when issuing orders to a shitty HTTP API anyway.


You said something to the effect of “it’s not enough to have financial libraries, you also need storage (e.g., db) and transfer (e.g., wire formats)” which suggests that you can’t just use a standard library to dump JSON to a file or through an HTTP connection.

But yes, I can see how databases not having a money type with corresponding routines is going to make life harder.


If your database has a MONEY or CURRENCY data type, use that.

Just as you should use proper DATE or DATETIME data types for time, and not roll your own with strings, integers, seconds-from-epoch, or any other scheme, at least if you want to keep your sanity.


> If your database has a MONEY or CURRENCY data type, use that.

But hopefully only after understanding how it treats these, and whether that's compatible with your requirements.

> Just as you should use proper DATE or DATETIME data types for time and not roll your own

I'm always happy to use the database's date/time format – if it's actually implemented in a sane way and is compatible with my data.

For example, I work with data provided by external partners that specifies dates:

Sometimes they're only specifying the year as a single-digit integer (and you have to guess which one they mean, based on the current date and the hope that the files they send you are not older than 5-10 years). Sometimes there is no year at all. Sometimes the timestamps have an implied timezone, sometimes they're UTC, and sometimes they're supposed to be UTC, but really are in some unspecified local timezone.

In these cases, it can indeed be better to store these as strings and defer interpretation until you actually process them.


What would be the problem with TigerBeetle's approach to use a smaller and configurable unit of measure so that you only have to deal with integers?


The article and the comment you're responding to aren't even talking about the same thing, so it's futile to discuss the pros and cons.

The article is talking about serialized representations, i.e. how you store amounts in a database. The comment is talking about how to arrive at amounts as part of arithmetic calculations, e.g. determining interest, percentage fees etc.


Many decades ago I worked in a Java team that was storing some money values as doubles and some as BigDecimals. When I asked why... they said that at the start they didn't think they'd have any issues with doubles. Years later, they still had lots of tech debt from manually rounding doubles into BigDecimals.


As much as SAP has problems, this is a solved problem there. In my experience it is sometimes implemented incorrectly, but that gets discovered very quickly, since it's about money. I encountered this when people did some simple calculations on the frontend side and then submitted to the ERP system, where it was discovered because of the resulting inconsistencies.


For very simple billing, arbitrary precision sounds like overkill, as do rounding rules and order of operations.


Oh, sure, because when you are a small company you don't care that people get correct invoices.

I can certainly sympathise with this stance. There are about a billion things you could do better, but you have limited time to do anything, so you have to prioritise. And if one invoice in ten thousand is incorrect by one cent, and only one client in ten thousand who received a wrong invoice will actually notice, then it is hard to argue you should be spending time fixing this one problem.

Just don't say you can do accounting correctly on floats and we will remain friends.


You can make the same arguments against fixed precision decimal types. My systems represent currencies to 4 decimal places. At that level of precision, rounding/order of operations errors could accumulate much faster than with a 64 bit float.

Decimals are still the way to go, you just have to pick a level of precision acceptable for your application.

My management definitely does not want me spending my time chasing errors over fractions of a penny. The only time those errors are discovered is when I compare the output of new code against old code.


Let me guess, the last 10 times you had to move jobs it was because of a difference in opinion with your boss about the importance of correcting one-cent-errors in one invoice out of every ten thousand?


Haha... no. But I may be focused far more on reliability than 99.99% or so of developers.

The way I solve this problem isn't by constantly hopping projects. I try to find projects that actually require extreme reliability so that I can be doing what I want in an environment where there is a business case for it.


For simple accounting I've always used integers and done all operations in cents, only converting on the frontend. What's my downside here? I guess it wouldn't support unit prices of less than a penny.


If you have different currencies you need to keep track of the number of decimals used, e.g. the yen has 0 decimals, bitcoin has 8, etc. It could even change over time, like the Icelandic ISK did in 2007. If you have different services with different knowledge about this, you're in big trouble. Also, prices can have an arbitrary number of decimals up until you round them to an actual monetary amount. And if you have enough decimals, the integer solution might not have enough bits anymore, so make sure you use bigints (also when parsing JSON in JavaScript).

Example in js: Number(9999999.999999999).toString() // => 9999999.999999998

And make sure you're not rounding using Math.round

Math.round(-1.5) // => -1

or toFixed

(2090.5 * 8.61).toFixed(2) // => 17999.20, should have been 17999.21

8.165.toFixed(2) // => 8.16, should be 8.17

The better solution is to use arbitrary precision decimals, and transport them as strings. Store them as arbitrary precision decimals in the database when possible.


Also many types of operations could give you the wrong result from incorrect rounding. E.g. let's say you're calculating 10% of $1.01 ten times and adding the result together. The correct result is $1.01, but with your method you will get $1.00.
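
Concretely, in Python (amounts in integer cents):

  >>> sum(round(101 * 0.10) for _ in range(10))  # round each 10% slice
  100
  >>> round(101 * 0.10 * 10)                     # round once at the end
  101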


The correct answer will depend on the specifics of your environment. In some places, tax is calculated per line item. If you go to a dollar store and buy 10 items with 7.3% sales tax, it adds up without those 0.3¢ bits. In other places, the tax is supposed to be calculated on the total for the tax category in the sale. If you wanted to keep it by line item you'd need the extra digits of precision.
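
A sketch of that difference in Python, using the 7.3% rate from the example (the helper function is hypothetical; the real rounding mode comes from your tax code):

  from decimal import Decimal, ROUND_HALF_UP

  RATE = Decimal("0.073")

  def tax_cents(cents):
      # Round 7.3% tax to whole cents, ties away from zero.
      return int((cents * RATE).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

  items = [100] * 10                           # ten $1.00 items
  per_line = sum(tax_cents(c) for c in items)  # 10 * round(7.3c) = 70
  on_total = tax_cents(sum(items))             # round(73.0c)     = 73
  print(per_line, on_total)                    # 70 73 -- same sale, different tax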


Well, yes, which is why you need to be in control of your rounding and not just let the width of data type you chose for the implementation dictate that.


I enjoyed Mark Dominus's blog post [0] about the billing system he co-wrote, Moonpig. It restates much of the other responses, namely that ignoring infinitesimal errors/rounding would have instilled a culture of, at a minimum, doubt. Perhaps another way to see this is to look at a visualization [1] of the discontinuous coverage that floating point gives to the numbers we want to represent.

[0] https://blog.plover.com//prog/Moonpig.html#fp-sucks

[1] https://observablehq.com/@rreusser/half-precision-floating-p...


Having an arbitrary precision library configured to use cents is not enough.

Over a full year it needs to keep track of all the rounding it does when it pays you interest and when that rounding reaches a penny it's supposed to pay you that penny.
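
A sketch of that carry-forward bookkeeping in Python (the daily rate and account size are made up):

  from decimal import Decimal, ROUND_DOWN

  def pay_daily_interest(balance_cents, daily_rate, carry):
      # Pay only whole cents now; carry the sub-cent remainder forward.
      exact = balance_cents * daily_rate + carry
      paid = exact.quantize(Decimal("1"), rounding=ROUND_DOWN)
      return int(paid), exact - paid

  carry = Decimal("0")
  for _ in range(365):
      paid, carry = pay_daily_interest(100_000, Decimal("0.0000137"), carry)
  # Whenever the accumulated rounding reaches a full cent, it gets paid out.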


"For accounting you should only ever use arbitrary precision math library"

I'm a bit surprised by this advice. I thought the common wisdom was to use a decimal type like BigDecimal in Java.


BigDecimal is an arbitrary precision type so using BigDecimal is how you would follow said advice in Java.


BitInteger and BigDecimal are arbitrary precision types in Java


nobody cares about precision in financial calculations except for nerds. particularly areas like reporting tend to have significant errors, and not just sigma.

this is now a fintech anecdotes thread, but my first ever fintech job was part of a two-man special privileges team directly under a director of X at one of the sifis. we were supposed to cut across multiple departments and through red tape, with the main goal of eliminating significant overhead in certain processes (we brought some multi-day multi-step calculations down to 8 minutes in one instance, 2 minutes in another; that kept us on a vendor list for a very very very long time). i didn't know how things were done, so i used bigdecimal throughout, with a lot of precision and explicit rounding rules. i did back-of-napkin numerical analysis for at least the inner loop code. we duplicated work done by other departments (!!!), things like instrument pricing, because we couldn't wait for their batch jobs to complete. at the end we were getting anywhere from slightly to wildly different results. it took a lot of conversations with business 1) to show that our calculations were correct, 2) for them to realize what the sources of errors were and where they were coming from, and 3) for everyone to just kind of go eeeehh, not a big deal.

i wrote a translator for a subset of j/k to java bytecode using java asm. i was pretty proud of that system, because it allowed pricing rules to be expressed in what i thought was a much more readable way, with dynamic reload without restart. but man, i am very very sorry for the developers who inherited that system.


Serialization as a byte string ought to be good enough if the storage layer doesn't support arbitrary precision natively.


> arbitrary precision math library with ability to specify rounding rules

I got you:

https://godocs.io/math/big#example-RoundingMode


In my experience most financial firms just use binary floats/doubles

Fixed point decimal if you're lucky (or unlucky, since fixed point sucks)

Arbitrary precision decimal floating point essentially never used


> For accounting you should only ever use arbitrary precision math library with ability to specify rounding rules

Do these rounding rules need to vary by jurisdiction?


Absolutely. IIRC most tax codes have specific rounding rules. Also e.g. some countries have done away with pennies or penny equivalents and have rules about how you are supposed to handle that, etc.


Not sure about that. In Germany, especially among SMEs, no one cares about cents. Your tax reports are done in rounded Euros anyway. People, companies, and the tax and finance authorities are well aware of rounding issues and the different ways to round, which is why no one really cares about cents.

Besides that, using BigDecimal with two decimal places is sufficient in the java world imho. Depends on your use case. I'm entirely sceptical of people claiming general things. Depends on the requirements I'd say.


Only final values are rounded to full Euros (down, if I remember correctly).

If a large taxpayer starts rounding down as part of intermediate calculations of their tax liability, I think they'd get some questions.

But yes, rounding does happen a lot – what's important is that everybody uses the same, transparent rules for that, or it becomes impossible to double-check somebody's books, tax declaration, invoice etc.


Well, how much money can you save by efficient rounding? Surely not so much that anyone would bother. There's a thing called "Kaufmännisches Runden" (commercial rounding), which is kind of cheating as well.

> But yes, rounding does happen a lot – what's important is that everybody uses the same, transparent rules for that, or it becomes impossible to double-check somebody's books, tax declaration, invoice etc.

They don't, that's why it doesn't matter that much ;)


> Well, how much money can you save by efficient rounding?

You and me? Probably a few cents.

A bank or a large corporation selling things billed in sub-cent amounts? Single-digit percentages of their gross revenue, i.e. many millions.

Just as a very simple example: I'm your bank/phone provider/..., and I'm charging you a flat fee of 1.9 cents per transaction/call/... You're my customer and make a billion such transactions per year.

Option 1: 1000000000 * 0.019 = 19000000, you owe me $19,000,000.

Option 2: There are no fractional cents, so let's just round up each individual billing event. 1000000000 * 0.02 = 20000000, you owe me $20,000,000. Cool, a free extra million for the company!

This is why these things are precisely regulated when it comes to sales tax/VAT, for example.

> There's a thing called "Kaufmännisches Runden", which is kind of cheating as well.

Always depends on which side you're on. If you're getting a refund, it can work in your favor! Importantly, it's a precisely defined rule so that it's not possible to cheat in the implementation.


Same in the Netherlands. There is an official rule that you don't have to do mathematical rounding to get to whole euros; you can round in whichever direction is more favourable to you. That rule is probably there just to save the tax service some extra work and IT costs.


they'd probably care if you're compounding interest daily on that rounded up euro


"All right, so when the subroutine compounds the interest, it uses all these extra decimal places that just get rounded off. So we simplified the whole thing and we just... we round them all down and just drop the remainder... into an account that we opened."


Glad you enjoyed the reference!


The hyperinflation example is quite interesting because:

1) If it happens, it happens rapidly, and you don't want to implement this in a hurry

2) If it happens, the global economy could well be melting down, and your financial institution will have other priorities to attend to

3) Retaining existing staff, and hiring new engineers, will be challenging at best

You really don't want to be implementing this change in those circumstances.


I think it's the worst example. I very much doubt we will ever see the price of, say, a car approach a billion billion dollars (or cents). It's too many zeros for people to work with on a daily basis. I think what will happen is that, at regular intervals, the old hyper-inflated currency will be replaced with a new currency that's, say, 2^32 times less than the original one.

Besides, if you think a billion billions might be reached (64 bits) then why wouldn't a billion billion billion billions also be reached (128 bits)? 128 bits seems just as arbitrary as 64 bits in this context.


> It's too many zeros for people to work with on a daily basis. I think what will happen is that, at regular intervals, the old hyper-inflated currency will be replaced with a new currency that's, say, 2^32 times less than the original one.

I lived through hyperinflation (in Brazil). What happened was that, at regular intervals, the old hyper-inflated currency was replaced with a new currency that's 10^3 times less than the original one. Cutting decimal zeros is better for humans, and three is convenient because it matches the usual three-digit grouping (that is, 123.450,00 becomes 123,45). This had to be done because, otherwise, calculators would become useless (most common pocket calculators had only eight digits).


Before 2009 would you have predicted that a 100 Trillion Dollar bill would never be issued by a central bank?


Germany had notes and stamps in the trillions of Marks in 1924; nothing new there.


2^63 / 100 trillion = 9.2E4, so even in this case you have a few decimal positions left.


You may be overlooking the potential for someone to own more than one of those $100T bills. Apparently it was worth about 30 USD at the time.


If $100T = 30 USD, then probably nobody will mind if we round to the nearest $1B. Just like how I have no way to compensate someone for less than 0.01 USD.


The government cares; the precision necessary for taxes is sub-cent, as mentioned in some replies to the top comment on this post.


By the time this is a problem the currency has collapsed and no one is using it anymore.


I can imagine some contractual obligation being litigated in those cases... on the other hand, maybe attorney's fees would far outstrip any sum being litigated for.


I was under the impression that BCD was generally recommended for money, often because of (IEEE?) rounding and machine precision/epsilon:

* https://en.wikipedia.org/wiki/Binary-coded_decimal

* https://en.wikipedia.org/wiki/Machine_epsilon


BCD is better than IEEE 754 floating point, but simple integers with an implied decimal point is much better than either of those.


BCD makes stored value -> human-readable trivial, at the cost of complicating math on those values.

So it's useful for applications where you're mostly doing human input/output <-> stored value.

But as soon as you do any non-trivial math on those values, using (fixed point?) integers wins. At the cost of a simple stored value <-> human readable conversion.

I'd think most financial applications fall into the category "do math, so integers win over BCD".


I honestly don't understand the argument made here or in the parent comment.

Why is BCD decimal128 worse at math than a fixed-point integer? Are you saying it's less CPU efficient? Are you saying some fixed-point integer math operations are more accurate than dec128?

I've seen this asserted several times, both in the post and in comments, but I've never seen a single concrete example of it being better. Can someone provide an example?


Most CPUs don't have BCD math instructions, so you need multiple instructions, with probably around a 10-100x slowdown in math.


Do you know what BCD is? If you do, it's pretty obvious why BCD is significantly slower than fixed-point integers.


I know what BCD is, I know they are slower.

What I was asking for is: when they said it was just "better", are they specifically saying "CPU computation is a bottleneck, thus BCD is not as good as fixed-point integers"? Which is fine if so, I just would like that to be stated clearly. In my line of work, BCD CPU time is NEVER the bottleneck, it never will be, and it is likely that while the CPU computes the BCD operation it would still be stalling on prefetching the next instruction from main memory anyway.

But maybe, for their specific ledger database, it is better. If so, show the benchmark and how it impacted their specific code. But don't expect me to just accept that fewer CPU instructions for math operations directly translates to "more desirable".


While IEEE 754 does have curiosities, the 1/10 problem the article points out isn't really addressed by anything mentioned in the article (or here). BCDs have exactly the same problem with e.g. 1/3.

What you really want is a rational (fractional value: numerator and denominator) of some form.


no, integers are generally used. BCD is fixed-point in any case so it's just an inefficient integer representation.


I'm under the impression that people use decimal floats from IEEE 754-2008, which is Cowlishaw's General Decimal Arithmetic at https://speleotrove.com/decimal/decarith.html .

As I understand it, the regulations related to money can require specific rounding modes and a specific number of digits for intermediate representations. These are much easier to manage with, eg, Python's decimal module than doing everything as integers.

For example, at https://news.ycombinator.com/item?id=36687627 I pointed to US law at https://www.law.cornell.edu/cfr/text/7/1005.83 with:

  (3) Divide the result in paragraph (a)(2) of this section by 5.5, and round
  down to three decimal places to compute the fuel cost adjustment factor;

  (4) Add the result in paragraph (a)(3) of this section to $1.91;

  (5) Divide the result in paragraph (a)(4) of this section by 480;

  (6) Round the result in paragraph (a)(5) of this section down to five decimal
  places to compute the mileage rate.
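
With Python's decimal module, steps (3) through (6) map almost directly (the input value here is a placeholder, not real data):

  from decimal import Decimal, ROUND_DOWN

  a2 = Decimal("2.0354")  # placeholder for the paragraph (a)(2) result

  # (3) divide by 5.5, round down to three decimal places
  factor = (a2 / Decimal("5.5")).quantize(Decimal("0.001"), ROUND_DOWN)

  # (4)-(6) add $1.91, divide by 480, round down to five decimal places
  rate = ((factor + Decimal("1.91")) / 480).quantize(Decimal("0.00001"), ROUND_DOWN)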


why add the complexity of bcd when you can just put it in units (e.g. cents) where there's no decimal point?


What do you do if heavy deflation causes the government to release a $0.001 coin? A cent isn't a fundamental unit.

Edit: as another hypothetical, what if the $0.001 coin is released to support micropayment use cases?


Actually, that would break so many things (the cent is the smallest unit in law and in custom where that is currently the case) that governments might prefer to issue a whole new currency instead of stirring that pot.


I thought the smallest unit in the US was the mill. https://en.wikipedia.org/wiki/Mill_(currency)

> https://www.law.cornell.edu/uscode/text/31/5101 says "United States money is expressed in dollars, dimes or tenths, cents or hundreths,[1] and mills or thousandths. A dime is a tenth of a dollar, a cent is a hundredth of a dollar, and a mill is a thousandth of a dollar."

> [1] So in original. Probably should be “hundredths,”.

About the only time you see values given in mills is with gas prices, like $4.999/gal, though often denoted as tenths of a cent. It's also indirectly used in property taxes.


This year on our local ballot there is a tax levy to approve an increase to the property tax by 5 mills per 100,000 dollars of assessed property value. Allegedly allowing them to raise a few million over the next 10 years to pay off an addition to the school.


These systems use something like 10^-8 as the implied decimal in the real world. Some forex exchanges even go to 10^-9


Fiat currencies can go to 10^-4, I think; cryptocurrencies go to 10^-18. And those are the atomic units, like cents. P.S. Crypto systems use uint256 internally, and that type doesn't exist in most languages. Using int64 can work out sometimes, but will usually break soon enough.


In a situation like that I think it would be better to have the system not work until it’s fixed than to potentially lose precision and work with untested inputs


Worth noting that hyperdeflation has never happened, which is kind of interesting because it theoretically could.


I believe that economists generally think that a small amount of inflation is good for the economy, and that deflation is bad (because it leads to reductions in spending and investment, potentially causing a vicious circle). It's also relatively easy to counteract - just print more. In order for hyperdeflation to occur you'd need a currency where the issuing body didn't believe deflation was bad, or didn't care.


Bitcoin has a hyper deflationary monetary policy hard coded in.


Yup, and as a result, nobody wants to actually treat it as the digital currency that it was originally set out to be.

Anybody who truly thinks Bitcoin could hit $100K value certainly doesn't want to spend them.


It's not really symmetric. It's not hard to imagine a situation where everyone stops believing that something (a currency) is worth anything anymore. But why would somebody come to believe that nothing except the currency is worth anything?


Not sure what exactly you'd call hyperdeflation, but something like it occurred in communist Czechoslovakia in 1953.

https://cs.wikipedia.org/wiki/%C4%8Ceskoslovensk%C3%A1_m%C4%...


I think you could easily convert every stored value to the new unit of measure by multiplying by an appropriate factor. In your example that would mean multiplying all the old values by 10. But when you're designing such software, you could also be conservative and use a smaller unit than cents (which many financial systems probably already do, since there are already things priced in fractions of a cent, like gasoline).


Different currencies already have different rules. Any good accounting software isn't going to assume US cents. Update your currency table to include a version of USD in increments of $0.001. Update the amounts on a need-be basis. ( Multiply by 10 and change the currency type)


Division. 1/x is a common operation in finance (particularly in trading), and you'll get all sorts of trouble if you try to express everything in cents.

So, you'll need subpenny fractions (e.g. 8 decimal points), or BigDecimal, or decimal-normalized floats.


Because it is very common to need to deal with fractions of a cent in intermediate calculations.


Decimal float has a larger range and a more natural representation when debugging than fixed point. Either way works well though.


Every financial system I've seen uses either decimal floating point or integers. Using normal float is just asking for trouble.


I've seen floats used for money in production. Not a financial system per se, but it moved amounts around external systems that often included financial ones. It worked surprisingly well given the amounts involved (anywhere from tens to thousands of EUR/USD), and when I asked about the off-by-±0.01 errors every now and then, I was told they were "not worth fighting" by the customers.


Most customers may not care, but reports and automated processes likely will when a customer is 'in debt' thanks to small differences, and the tax authority in certain countries also cares when the sum of the differences is large enough.


Float can be justified where performance is more important than accuracy. Which does happen sometimes in the financial world.


I've never seen anyone use floating point for money in their bespoke applications


You _can_ use floating point if you are very careful and know what you are doing and know about decimal normalization (see e.g. OpenHFT implementation for high-frequency trading: https://github.com/OpenHFT/Chronicle-Core/blob/ea/src/main/j...)

But if you are not an expert, you better stick to BigDecimal and absorb the performance costs.


Nice share! Interesting sorcery in their code:

> final double residual = df - ldf + Math.ulp(d) * (factor * 0.983);


Every banking client did use double for the actual calculation phase of their derivatives trades.

(And some of our back-end systems then did ludicrous broken wrong-headed rounding to turn them into fictional currency values... Ho hum.)


QuickBooks almost certainly used float in the early versions of their currency conversion. We saw lots of accumulated rounding errors that really couldn't be explained any other way.

That was maybe 15 years ago - hopefully they've fired that programmer and fixed it in the meantime. We don't know, because we don't use QuickBooks any more.


Nubank used floating point - not in their real backend systems, but somewhere in their app. It was a bit amusing when people on Twitter found out certain very specific, arbitrary-looking amounts were not able to be transferred, then the computer scientists noticed what they had in common.


Count yourself lucky.


I worked on some software in the past where I converted doubles to C#'s decimal type. Unfortunately it wasn't so easy on the shop's old PHP stuff, but there I was more concerned with fixing their SQL injection issues than their rounding errors.


> For the same reason, digital currencies are another use case for 128-bit balances, where again, the smallest quantity of money can be represented on the order of micro-cents (10-6)… or even smaller.

To give a concrete example, Ethereum has a lot of precision (1 ether = 10^18 wei), and there are like 120M ether, so that's more than 64 bits just to represent the supply.
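
A quick sanity check (Python integers are arbitrary precision, so this is exact):

  >>> supply_wei = 120_000_000 * 10**18
  >>> supply_wei.bit_length()  # needs 87 bits, well beyond int64
  87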

Idk though, is this a real concern with traditional finance? Zimbabwe had their huge bills, but I assume the small ones were unused. Just like you round US cents, you could probably round their currency at some cutoff. If you were processing Ether outside the blockchain like just some other currency, you'd probably round somewhere too. If you look at Stripe's API for example, they do everything with fixed-size (JSON's 53-bit) integers defined as cents or whatever.


> Idk though, is this a real concern with traditional finance? Zimbabwe had their huge bills, but I assume the small ones were unused. Just like you round US cents, you could probably round their currency at some cutoff. If you were processing Ether outside the blockchain, you'd probably round somewhere too.

Yeah... I started reading this article and was waiting for the crypto angle, because in normal finance I don't see how it could plausibly matter (disclaimer: not my domain).

The only reason I could imagine needing to track this level of precision on fractional units of currency was the sort of financial trickery that is enabled by cryptocurrencies.


Yes, we actually held out on the digital currency angle.

What tipped the scales was realizing that this also applies in normal finance, especially to exchanges (not crypto) that operate with high precision: https://news.ycombinator.com/item?id=37573939


Even in cryptocurrency, I feel like the level of precision is mainly aspirational + cuz they can. Who cares about 1 wei in the context of a transaction that costs around 100 trillion wei on its own? The smallest unit of ether I've ever seen in any kind of UI is 1 gwei (1 billion wei).


Forgot to also add: because various parts of a cryptocurrency ecosystem are much harder to change than in traditional finance, I can see why they throw in a ton of 0s and use 256-bit ints, just in case.


> Idk though, is this a real concern with traditional finance

CME has contracts priced in 1/128ths of a dollar

they also have a tendency to add another power of 1/2 every 10 years or so

this causes problems for fixed point, but amusingly it works perfectly well with floating point


One annoyance with floating point binary is when it happens to not line up so well with floating point decimal, for example

  >>> 0.1 + 0.2
  0.30000000000000004
which is still gonna round to 0.3, but ugh. Also, during aggregations, it's not nice how you get different answers depending on the order of operations, when it shouldn't matter.


on CME they're not decimal though, they're binary fractions

so floating point works fine as floating point is binary fractions


Oh right, I didn't make the obvious connection to 128.


I'd like a report back from TigerBeetle on how many applications they actually support where the high-order 64 bits are nonzero. I would note that the entire US GDP is less than 10^15 cents, and that 2^64 accommodates just shy of 10^19 in signed integers. So, even if your database had a justification for thousandth-of-a-cent transactions (not intermediate results, but recordable transactions), you'd still need transaction entries larger than the US annual GDP to roll over into the high-order 64 bits.

TigerBeetle may have made the right choice for some market, but I predict that there are vanishingly few sales calls where this becomes an important selling point, unless it's potential customers wondering why they are wasting all those bits and checks for a whole lot of freakin' zeros.


There's one small part of the article that IMO is sort of key to understanding the why:

> [for every account] we keep two separate strictly positive integer amounts: one for debits and another for credits

It is not sufficient that their integer type is able to handle individual transactions. Their integer type must be able to handle the sum of the absolute value of all transactions that have occurred on an account. And I think it's easy to come up with realistic situations where you hit that.

So say you take the NYSE, which trades about ~$18 billion per day [0]. This is ~1.8 trillion cents, or about 2^51 millicents. After 2^12 business days (~16 years) you'll already be hitting the limit. (This is just a toy example, of course.)

[0] https://www.nyse.com/trading-data#:~:text=The%20New%20York%2....


To be fair, I used to think like this too, but chatting with multiple large exchanges/brokerages made us realize this was a thing.

> less than 10^15 cents

There are systems that don't work in terms of cents (cf. the examples of issues in many of the comments here), or even in thousandths of a cent, but with significantly more precision.

In other words:

Where you run into problems with 10 ^ n integer scaling is when n is large. When n is large, you aren't left with sufficient room in the remaining bits to represent the whole-number part. In trading systems, for instance, you can easily hit 10 ^ 10 scaling to represent fractionally traded prices.

Concretely, if you need to scale by 10 ^ 10, then your whole-number part is 2 ^ 64 / 10 ^ 10 = 1,844,674,407, which isn't terribly large.
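
Or, checking in Python:

  >>> 2**64 // 10**10  # whole units of headroom left with 10^-10 scaling
  1844674407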


I'm not sure I understand what you're saying. Are you saying that these exchanges are trading at prices with a precision of 10 ^ -10 dollars (or other currency)? I saw private exchanges pricing to hundredths of a cent (a long time ago; it's been a decade and a half since I worked on Wall Street or in the City) or yen, but never any finer than that. Even then, all transactions were recorded in whole cents.

I'm sure you've done your homework, and I'm long out of the finance business, but even so, I think the applications where this matters are, as financial applications go, very unusual.


Yes, exactly. And we were surprised by this too. But it can be typical for prices that are fractionally traded. See https://news.ycombinator.com/item?id=37574440


Why do exchanges use such high precision for fractions?


To represent prices that are traded fractionally.


But nothing can do that exactly, for example 1/3 has an infinitely long decimal or binary representation. So why round to 10^-10 as opposed to something like 10^-3?


For sure. We didn't pick 10^10 scaling. It's just what some massive brokerages/exchanges actually use. The fact that these were not necessarily crypto made us take note.

At the same time, you can understand that 10^10 scaling is at least significantly more precise. And I can imagine these things are viral too: who you trade with also determines your minimum resolution. You can always downsample in presentation, but once downsampled in storage, it's impossible to upsample.

It also wasn't the only use case. But it tipped the scales.


Yeah, was just curious if the brokerages explained their reasoning in detail. Even if it's just viral, someone big had to have a reason to start the trend.


Hey! Thanks for your curiosity. This was news to us too. And I think we spent about a year thinking about this before we swapped for a bigger piggy! :)


Some languages do have proper support for ratios, so if you define x = 22/14, the value stored is 11/7. Multiply later by 7 and you get 11.

You can maintain exact calculations through an entire data pipeline this way, as long as your base numbers are all integers and ratios, and optionally have one rounding step at the end if you want a decimal value. Most math languages and lisps do this.

There must be libraries for other languages that can do it too, but it’s much nicer to work with when it’s built-in.
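
Python, for instance, ships this in the standard library as fractions.Fraction:

  >>> from fractions import Fraction
  >>> x = Fraction(22, 14)  # stored in lowest terms
  >>> x
  Fraction(11, 7)
  >>> x * 7
  Fraction(11, 1)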


Known to gophers as a "big rat": https://pkg.go.dev/math/big#Rat


Yeah, I thought about that. Wonder how efficiently a database can be tweaked to support this, cause that might matter even more than language support.


I find the “no negatives” very confusing even in a double entry ledger for reasons the last sentence of that paragraph hints at:

> When you need to take the net balance, the two balances can be subtracted accordingly and the net displayed as a single positive or negative number.

This means you have to reconcile credits and debits using a different type than each of the columns is in. Not a huge deal by itself, but now that secondary type has to be a weird 129 bit integer (or larger) to ensure it can represent every intermediate sum of credits and debits.

If they had just sacrificed 1 bit, they could have used a single signed 128-bit data type for all storage and computation.

I suspect they have to enforce the "all debits and credits are positive" invariant in a number of places anyway (or risk coercing a negative signed 128-bit input into an unsigned 128-bit ledger entry), so why not sacrifice a bit that will probably never be used otherwise anyway and have a uniform structure for numeric types?


Using a single signed 128-bit integer does not solve this. What happens when you have to add two of these together? Do you need a 129-bit signed integer? What if you need to add 64 of these together; do you need a 134-bit signed integer? The point is that 128 bits is more than large enough to hold the sum of quadrillions of transactions in Zimbabwean dollars. Nobody is switching to a 129-bit number to hold the results of sums; they are explicitly not handling the case where any account is anywhere close to 10^38 of the base unit.


Yes, we could have explained this more! (Joran from TigerBeetle here)

The motivation is to preserve information, and I think this becomes less confusing once you understand that accounting is in fact a type system.

I hope this comment makes this more clear: https://news.ycombinator.com/item?id=37571942


Man, this reminds me of my first work project: we rebuilt our department's time clock in 2006. I knew about inaccuracies in floating-point math, so I thought I was taking precautions, but in the end my first native PHP float implementation introduced enough drift in just the hours field that we'd lose $40 a year. I rebuilt it with a Decimal object and it lasted until the entire organization bought its own COTS solution.

Now I'm trying to remember if I watched Office Space before or after...


It sounds like they use a rational number with a fixed power-of-10 denominator. Much like if they used Go's *big.Rat type with a fixed denominator of, say, 100 or 100000 depending on the currency, and moved the numerator from a 64-bit integer to a 128-bit integer.

I personally don't see the advantage this has over a decimal128 numeric value. In either situation, if you have 10 / 3, you will get 3.33 and need to round. Ultimately, the math concepts in finance are different than in abstract math. If for some reason you need to divide $10 by 3, it should result in three numbers: 3.33, 3.33, and 3.34.

But fundamentally, I don't see how their fixed integer math fixes something as fundamental as that, over a standard decimal 128 representation.


> If for some reason you need to divide $10 by 3, it should result in three numbers: 3.33, 3.33, and 3.34

Exactly! We don't try to "fix this math problem"; instead, we just want to reduce the surface area by using plain integers instead of decimal floating points.


I'm a bit surprised that these two concepts are seen together. I'd have thought that financial math and data management would always be done with some level of abstraction, so it doesn't matter at all what your computer's implementation is.

Then again, 128 bits is plenty for microcents or whatever the minimum unit is.

I’m guessing back in the day when things were 32 bit or less, there were entire financial database implementations that handled this abstraction?


If you actually want to do something useful and accurate and auditable and performant and space-efficient, because you have a lot of money values to bank with, you really don't want too much abstraction.

My experiences in various bits of banking include juniors ignoring advice NOT to store currency in floating-point values and then coming whining that arithmetic is broken, and tech dudes in a lab deciding that every single FX flow in an investment bank should have 2MB of (unshared) calendar hidden inside its abstraction, which made some individual trades too big to load even for powerful machines...

Fixed point calcs in integers are good.


Floating point feels like an incredibly grokkable concept that was just not taught well for a long time. Maybe too mathematically (of course). Or maybe that was just my experience.

I feel like any dev team should pick up a copy of this for on-boarding: https://jvns.ca/blog/2023/06/23/new-zine--how-integers-and-f...


Floating point being grokkable doesn’t make it any more suitable for this application.

Floating point is inherently an approximation - your bank balance should not be an approximation.


> Floating point is inherently an approximation - your bank balance should not be an approximation.

I think this is a perfect example of bad floating point teaching. Floating point is not an approximation in any sense. If the numerical result of your calculation is representable in floating point you will get an exact answer always. And for results that aren't representable you decide exactly what should be done about that. It's like saying integers are an approximation because 5/2 == 2.


In programming, an `int` perfectly represents the integers between INT_MIN and INT_MAX.

A `float` on the other hand, approximates the real numbers. It can perfectly represent exactly 0% of them.

Having control over the rounding behavior is meaningless - floats cannot correctly represent any non-contrived calculation. Exact representation is important in financial systems.


> floats cannot correctly represent any non-contrived calculation

Like adjusting all your financial calculations to use microcents and partitioning instead of division to keep the result representable by integers? Neither can represent 1/3, even shifted. When you want to do exact calculations with floats (and you can), you just have to set yourself up so that the result is exactly representable; it's not as intractable as you make it seem.

> A `float` on the other hand, approximates the real numbers

Okay so that's not at all what they do, they represent subsets of the reals, just like how integers represent a subset of the reals. Even arbitrary precision libraries can only represent a subset of the rationals.


> they represent subsets of the reals

Sure. My point is that this subset is useless. Because trying to add, subtract, multiply, or divide members of this set will result in a number outside the set.

> When you want to do exact calculations with floats, and you can, you just have to set yourself up so that the result is exactly representable, it's not as intractable as you make it seem.

In the general case, you absolutely cannot. Let's look at some examples.

In forex trading, you need 9 digits after the decimal place in the price. So right off the bat, a valid price like 1000000.000000001 cannot be represented by a float. If the exchange sends your system that price, your system is guaranteed to be wrong.

Let's say you start at a representable price, like 1000000.0, and want to tick it up or down by the tick size, say 0.025. The result of that addition/subtraction is not representable, so you cannot calculate and round prices correctly.

If you don't have control of your inputs, and you need precision, floats will never work.


I think you're getting the impression that my stance is "floats are usable for all problem domains" when it's really "floating point arithmetic is not the same as approximate calculations."

Nearly all mathematical calculations cause the result to be outside the range of integers. You can't do much else other than subtract without accounting for edge cases. No matter what tool you use, you must work with your chosen representation and around its limitations, and make sure your domain can be modeled exactly. For example, Python's base random function chooses a floating point uniformly in the range [0, 1), but it achieves this by requiring that the result be a multiple of 2^-53, which is exactly representable, so rounding doesn't introduce bias.

> If you don't have control of your inputs

Well, you clearly do to some degree, because you're sure you can model anything you might receive with fixed-size, fixed-precision integers. I'm not saying this means you can just switch to floats, but that you're doing the same thing: mapping the real-life problem domain exactly to a subset of the reals that is closed under the operations you want to perform.


> When you want to do exact calculations with floats, and you can, you just have to set yourself up so that the result is exactly representable, it's not as intractable as you make it seem.

Can you expand on what you mean by that? Whenever I have dealt with calculation errors (either in fixed or floating point) "set yourself up so that the result is exactly representable" has been the key problem to prevent errors accumulating.

I obviously disagree on some other points, but circular discussions go nowhere!

Edit to add: Essentially I'm fishing for tactics. A big one in graphics development is detailed here: https://developer.nvidia.com/content/depth-precision-visuali... - but in fixed point I got very used to working out how to premultiply variables depending on their expected ranges.


TBF, be careful here. The IEEE floats can represent a subset of integers in their range exactly. For example, 64 bit floats can represent the range of 32 bit ints accurately (and more).

That said, it is bizarre to claim that if the result can be represented you get the exact result, when the core problem is that the result cannot be represented because the representation is an approximation.


> The IEEE floats can represent a subset of integers in their range exactly. For example, 64 bit floats can represent the range of 32 bit ints accurately (and more).

I know, I am being a little facetious. A double has a 52-bit mantissa (53 bits with the implicit leading one), so it can exactly represent integers up to 2^53.

Still, as a percentage, a float can represent 0% of the reals. There is an infinite number of values it cannot represent, even if we give it lower and upper bounds. Whereas an int can represent 100% of the integers within a lower and upper bound.


More to the point binary fp absolutely is a bad approximation to, and does poor arithmetic on, common legal sub-1 decimal currency values, ie those that do not have an exact binary fp representation.


Indeed. I'm just making a tangential comment. Definitely want to work in unsigned fractional cents.


> be done with some level of abstraction so it doesn’t matter at all what your computer’s implementation is.

This still doesn't do that, its just integer math. It doesn't matter how your CPU implements it, as long as `x = a + b` gives the correct value for x.

You don't have to immediately reach for an abstraction layer when you hear the word "bits" ;)


But without a sufficient stratigraphy of abstractions, I can't ever dig down if I run into issues! ;)


> I’m guessing back in the day when things were 32 bit or less, there were entire financial database implementations that handled this abstraction?

This issue is what COBOL was designed to solve. Fixed decimal point arithmetic in base 10


Handled in hardware on some machines (e.g. VAX)


>Surprisingly, we also don’t use negative numbers (you may have encountered software ledgers that store only a single positive/negative balance). Instead, we keep two separate strictly positive integer amounts: one for debits and another for credits.

It's funny I've always thought of two-column bookkeeping as a kludge that was invented because the author was unaware of negative numbers. But here there's actually a justifiable technical reason why they're superior (in this specific context)! History is a silly thing


Accounting for computer scientists* is a good read if you've got some interest in accounting (and have a CS background) and are wondering why accounting is just so danged complex.

* https://martin.kleppmann.com/2011/03/07/accounting-for-compu...


Oh I'm familiar B)

Used to use this CLI tool "ledger" to keep the books for a small nonprofit I was treasurer-ing for. Awesome tool, and (in conjunction with this read and a few others) it taught me a ton about all that good GAAP stuff


This is also the way to do it if you want a number that multiple nodes can edit independently.

When you sync back up, do a big tally at the end, and that's your final number.

(This constitutes a CRDT, and is known as the Positive/Negative Counter).
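
A minimal PN-Counter sketch in Python (the class and method names are illustrative):

  class PNCounter:
      """Two grow-only maps of per-node totals; merge takes the max."""
      def __init__(self):
          self.incs = {}  # node id -> total increments seen from that node
          self.decs = {}  # node id -> total decrements seen from that node

      def add(self, node, n):
          self.incs[node] = self.incs.get(node, 0) + n

      def sub(self, node, n):
          self.decs[node] = self.decs.get(node, 0) + n

      def merge(self, other):
          # Element-wise max makes merging commutative, associative, idempotent.
          for mine, theirs in ((self.incs, other.incs), (self.decs, other.decs)):
              for node, n in theirs.items():
                  mine[node] = max(mine.get(node, 0), n)

      def value(self):
          return sum(self.incs.values()) - sum(self.decs.values())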


The idea is similar/equivalent to a stack of git deltas yeah? Each line is a change of one of two types and then you combine them to get a net change?


I'm actually not too familiar with git deltas, but that does sound very similar to an Observed/Removed Set, another CRDT.


How are they superior though? What do they gain by doing this (other than added complexity)?


Hey! Joran from TigerBeetle here.

(I know this only because I somehow happened to major in Accounting back in university.) When you represent a general ledger of accounts, there are always two positive columns for amounts in any given account: one for debits, the other for credits.

The golden rule is that you always add to either column. You always preserve information.

To see why two columns (or two integer balances) preserves more information, take this example:

  1. An account A with a debit balance of $1m and a credit balance of $1m, and
  2. An account B with a debit balance of $0 and a credit balance of $0.
Account A contains volume information, in that you can immediately see not only that account A was transacted against, but with significant amounts. Conversely, account B shows no volume. It's even clear that there were no transactions between A and B.

Whereas, if you take the net of the two amounts, and reduce that to a single amount in storage (as opposed to only taking the net in presentation), and if you use negative numbers, you lose this information. It's a subtle thing.

The other angle here is to consider why some engineers shortcut to negative numbers in the first place. I find that it's usually because they haven't fully grokked that accounting is itself a type system. You get assets, liabilities, equity, income and expenses (as account types), and the debit/credit balances, when considering the type of an account, tell you further information. For example, did a bank account suddenly transition into overdraft? Different account types increase on different sides.

I think this is also the reason that you find accountants typically wrapping amounts in parentheses, and then specifying the DR or CR side (or type of account), rather than using a negative sign. It's a tradition of preserving information.


> You always preserve information.

Using two positive numbers preserves some information, but still destroys plenty. Just less than a single signed number would. Consider two accounts:

    1. An account A with a debit balance of $1m and a credit balance of $0, and
    2. An account B with a debit balance of $1m and a credit balance of $0.
One of those was opened by a lottery winner last week who did a single transaction dropping in their winnings. The other was opened by a retiree 60 years ago who has been dripping small deposits in for their entire working history. Which is which?

That distinction is still lost by summing debits and summit credits.

To really preserve information, you'd need the full list of all transactions. But, obviously, that comes at a significant performance cost. So the way to look at storing sums for debits and credits is that it's a trade-off which gives you a little more information than just a single balance but is still a relatively small fixed-size amount of data.


> You always preserve information.

This is the principle.

Not to suggest that the sum of debits and credits alone is sufficient to that end, but rather to explain why it's at least necessary.

> To really preserve information, you'd need the full list of all transactions.

Exactly. And so this is, of course, what TigerBeetle also does (with full durability).

And, at the same time, TigerBeetle doesn't decompose account balances to a single integer, for the reasons given.

Both of these are important.

To be clear, the context for this thread is the latter: Why not decompose account balances to a single positive/negative integer?


Reference for the "justifiable technical reason"?


I'm referring to the OP, which discussed the technical justification for their choice. An employee from the firm also elaborated a little more elsewhere in the replies.


I once argued for using BigDecimal instead of Doubles in invoicing software but had a hard time coming up with a practically relevant example.

Is there an example where it makes a noticeable difference (at least one cent in the final result) that does not involve unrealistic amounts or numbers of items?

I'm not arguing for Doubles, just collecting arguments to convince.
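
One concrete case with everyday numbers is rounding an amount that lands on a half-cent boundary, e.g. 50 line items at $0.0535 each = $2.675. Sketched in Python (the same contrast holds for Java's BigDecimal vs double):

    >>> round(2.675, 2)   # the double nearest 2.675 is 2.67499999999999982...
    2.67
    >>> from decimal import Decimal, ROUND_HALF_UP
    >>> Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    Decimal('2.68')

That's a full cent of difference in the final figure, and tax rules often dictate which of the two answers is legally correct.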


I'm curious about this too. If you were engaged in millions of arithmetic operations then I can see how inaccuracies might accumulate in theory, but in practice floating point operations are intentionally designed to minimize that.

And for everyday individual transactions it's hard to see a problem. Maybe the problem is more when you're summing up every single financial transaction for the year? But even in that case, if the smallest financial resolution is a cent, all the floating point noise seems like it would be occurring many decimal places beyond. Even if you're dealing with billions of dollars.

To be clear, I wouldn't do it myself -- I'm too risk-averse, too afraid of unknown unknowns. But it is hard to see what actual real-life negative consequences there would be for 99.9% of businesses, unless I'm missing something? Like the parent commenter, I'm looking for where I'm wrong here.


The part of using float for this that concerns me is not epsilon. It is all of the other weird edge cases & states. I don't like that 2 flavors of infinity and something that isn't a number are explicitly representable.


Technically you can have noticeable differences before you even make it to 1 cent. For example, if you're trying to determine if a result of a calculation is negative, 0, or positive for whatever reason. With floats/doubles, you would probably need to consider "0" to actually be "a number sufficiently close to 0, if not 0 exactly" and then remember to handle that everywhere.

It can also be noticeable if you're just trying to calculate something like "is the invoice paid off". Maybe your view layer is showing $0.00 balance to the end user, but the backend hasn't correctly rounded off those extra bits from a floating point calculation, so your backend logic is now saying the invoice is not actually fully paid off, even if the end user has no idea what they could possibly still owe.
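
A toy version of that failure mode, assuming a backend doing naive float arithmetic:

    invoice_total = 0.30          # what the customer owes
    paid = 0.10 + 0.20            # two partial payments
    balance = invoice_total - paid
    print(balance)                # -5.551115123125783e-17, not 0.0
    print(balance == 0.0)         # False: the backend still sees an open invoice
    print(30 - (10 + 20) == 0)    # True with integer cents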


Fundamentally, the problem with representing money as integers is division. The set of integers is not closed under division; as mathematicians would put it, the integers do not form a field. For example, 10/3 is not an integer. Making the fundamental money unit small does not solve the problem; it merely bounds the size of the rounding error. Division followed by multiplication can amplify rounding errors.

A clean solution would be to use rational numbers of the form y/x, stored as a pair of 64-bit numbers. That takes the same 128 bits of space as the TigerBeetle proposal, but has the advantage of exact arithmetic with no rounding errors.

EDIT: typos
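
Python's fractions module demonstrates the behaviour (with the caveat that Python's integers are unbounded; with a fixed 64-bit numerator and denominator, long chains of arithmetic can overflow the pair and force rounding after all):

    from fractions import Fraction

    print(Fraction(1, 10) * 3 == Fraction(3, 10))   # True: exact
    print(0.1 * 3 == 0.3)                           # False: binary doubles round

    share = Fraction(10, 3)   # exactly 10/3
    print(share * 3)          # 10: division then multiplication round-trips exactly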


We could also tax everything above 32 bits, good for people, good for machine


Imagine if the most money you could have is $43 million (rounding!), and it rolled over to 0 if you exceeded that. You'd be paying accountants to keep you as close to the limit as you dare, instead of just cheating on taxes. But not paying them too much, as you're only a multimillionaire.


I guess at some point you just hire accountants to throw mini basketballs at mini hoops. Most efficient way to keep you below $43 million.


The real trickle down


> While some may argue that a 64-bit integer, which can store integers ranging from zero to 2^64, is enough to count the grains of sand on Earth, we realized we need to go beyond this limit if we want to be able to store all kinds of transactions adequately.

64-bit integers are barely enough anyway. If the world-wide patrimony is somewhere near $100tn or $200tn, that's a bit more than 10^16 cents (so let's round the exponent up to 17), and that's just two digits short of the maximum that can be expressed with 64-bit signed integers!

If you want to sum things like inter-bank debt, or if we have a lot of growth or hyperinflation, we might need to express anything between 1¢ and $1e18 or $1e19, but the max range with signed 64-bit integers is ±9.2 x 10^18 cents, i.e. about ±$9.2 x 10^16, if we still need to represent cents. So 64 bits is starting to be a tad too small for financial purposes.

Even unsigned 64-bit integers are not that comfortable, but signed integers are needed if you're dealing with assets and liabilities.

128-bit is definitely more than needed, but it's a lot easier to deal with than variable-length integers. One might still use variable-length encodings to save space on the wire or on disk, but in memory, 128-bit integers are the way to go.
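
Back-of-the-envelope check of the headroom, in Python:

    i64_max = 2**63 - 1                # 9_223_372_036_854_775_807, about 9.2e18
    world_cents = 200 * 10**12 * 100   # ~$200tn of world-wide wealth, in cents
    print(i64_max // world_cents)      # 461: only ~2.5 orders of magnitude spare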


A reminder that this still isn't the correct way of handling money. The correct way is to put it in the type system [1].

All the best,

-HG

[1] https://web.archive.org/web/20211014094900/https://ren.zone/...


This only puts currency codes in the type system, and uses rational numbers for amounts.

This is definitely useful because you can have the type system tell you if you've implemented e.g. exchange rate conversion incorrectly. But it's also a hassle because you need to reify currency values discovered at runtime as types, which isn't pretty [1].

[1] https://github.com/runeksvendsen/order-graph/blob/eef0006cba...
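
A crude runtime approximation of the idea in Python; a language with richer types, like the Haskell in the link, can reject mixed-currency sums at compile time instead:

    from dataclasses import dataclass
    from fractions import Fraction

    @dataclass(frozen=True)
    class Money:
        amount: Fraction
        currency: str   # e.g. "USD", "EUR"

        def __add__(self, other):
            # The check a real type system would perform statically:
            if self.currency != other.currency:
                raise TypeError(f"can't add {other.currency} to {self.currency}")
            return Money(self.amount + other.amount, self.currency)

    usd = Money(Fraction(1), "USD")
    eur = Money(Fraction(1), "EUR")
    print(usd + usd)   # Money(amount=Fraction(2, 1), currency='USD')
    # usd + eur        # raises TypeError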


Yes, and to your first point, TigerBeetle also supports multiple ledgers, explicitly where each ledger is for a separate unit or currency (with isolation enforced by TigerBeetle across different units/currencies).

Thus, TigerBeetle doesn't preclude the use of money types at a higher layer (you would have seen that we tried to emphasize this also in the post).

Another way to think of this, is that we focus on the storage of accounting information by providing high-performance accounting primitives (e.g. execute 8k double entry transactions all within 5ms in a single network roundtrip DB query).

But the accounting policy (rounding etc.) remains the responsibility of the application/organization, since this may differ according to requirements/jurisdiction.


You also need to handle the "origin" or "flavour" of money. Future governments may place various sanctions and limitations on money. So green dollars will be better, Russian dollars not so great, and so on. Some money may be owed to a local VIP and should not be confiscated... All of that may be mixed in a single bank account.


Yes, we do also have various user_data fields for accounts/transactions to record the:

  who / when / where / why / what / how / how much
https://docs.tigerbeetle.com/reference/transfers


It's interesting that even the current financial system fits so snugly into 2^64! Global GDP = $100T = $10^14. If we imagine global assets being worth 10x GDP, that's $10^15. 2^64 ≈ 1.8 x 10^19. If people care about measuring values to a precision of $0.0001, that's about the limit.


Reminds me of the Rosetta page https://www.rosettacode.org/wiki/Currency for various ways to handle a related problem.


>This not only avoids the burden of dealing with negative numbers (such as the myriad of language-specific wraparound consequences of overflow… or underflow),

Is this supposed to be some kind of joke?

Unsigned underflow wraps around to the highest number. Meanwhile, signed arithmetic gives you a negative number, which code that assumes positive values tends to handle more gracefully.
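
Python's integers don't wrap, so to see both behaviours you have to mask; a sketch of u64 vs i64 semantics, not any particular language's types:

    MASK64 = (1 << 64) - 1

    def u64_sub(a, b):
        return (a - b) & MASK64

    print(u64_sub(0, 1))   # 18446744073709551615: a huge, plausible-looking balance
    print(0 - 1)           # -1: a negative result that a range check can catch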


An argument over fractional pennies costs full dollars of peoples' time.

- Anonymous


I once worked on a banking terminal for a large bank. My first attempt was sent back because there weren't 'enough zeros on the buy button'. It could do £999,999,999


Reserve Bank of Zimbabwe "100 Trillion Dollars" hahahahaha

LOL. Is that real? OMG, that's incredible. Imagine if this trend kept going: what a sign for humanity if new SI prefixes had to be minted because inflation drove currencies to insane new denominations, rather than because cat videos, IoT, and porn bulged yottabytes out of bounds?


Zimbabwe underwent a period of hyperinflation.

https://en.wikipedia.org/wiki/Hyperinflation_in_Zimbabwe

They eventually stopped printing currency.


The only good thing that Bitcoin ever did was create a real world use case that breaks shitty software handling financial numbers.

Either the programmers have to do the right thing or tell their boss that they can never support Bitcoin.

Both options give me a warm, fuzzy feeling inside when I watch them in real time.


Intel should release a new premium CPU with AccountMax Cores (just add a few registers for 256-bit math)

Of all the problems to solve, the size of the registers is the most trivial.


I despair at the state of CS these days.


And the Fed says, "Hold my beer!"



Next, people will want 128-bit/128-bit rational arithmetic for money. Then fractions will work right.


cf. https://news.ycombinator.com/item?id=37573939

(TLDR: It's the 10^10 precision in trading systems that gets you.)


Still not enough for storing Ethereum/ERC20 balances (256-bit)... :(


ETH uses 256-bit integers for accounting.



Perhaps I'm the odd one out here, but most financial systems I've had the chance to work with don't actually use numeric types to store values; they use strings or other comparable types. Numeric values are passed to all outside interfaces, but the internal states are written in a way where no bit-level issues meddle with the values. I'm wondering what the experience of the wider audience here is?


> internal states are written in a way where no bit-level issues meddle with the values

... "bit-level issues"? You're eventually going to need to use those bits to add, multiply, subtract, or divide the numbers.

Using character strings to represent numbers is certainly one way to do arbitrary precision/ BigInteger (like the article describes). But you might want to postpone transcoding it to characters until you decide to export your numbers across a boundary. Otherwise all of the arithmetic operations you do have to suffer this round trip each time.
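
That is, with string-typed storage every operation pays a parse and a format, something like this sketch using Python's decimal module:

    from decimal import Decimal

    def add_money(a: str, b: str) -> str:
        # parse -> arithmetic -> format, on every single operation
        return str(Decimal(a) + Decimal(b))

    print(add_money("19.99", "0.01"))   # "20.00"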


A shocking number of devs are simply afraid of bits and bytes.


Sounds slow as hell, what kind of financial systems have you worked with? I write trading software, and pretty much everyone uses integers with implied decimal place, similar to TFA.

How do you know you convert your internal string to a float/int correctly? You have to deal with the numeric types eventually


I've never written any trading software, but out of curiosity, have you done any tests comparing something like a GMP arbitrary-precision number vs a raw integer type?

Obviously the raw integer (64-bit or 128-bit) will go faster since it has CPU-level assistance, but I'm curious what that actually ends up being in practice, especially at the scale of trading. Is it 10% faster? 10x faster? Somewhere in between?


If by "bit-level issues" you mean bit flips (which you should be worried about), a string doesn't help much - "4" is 0x34 ASCII/UTF-8, flip lowest bit and you get "5" (0x35).

I'd imagine if I have to protect against bit-level fuckery something like this would be a better representation:

    struct Dough {
        amount: u64,
        amount_again: u64,
        amount_just_to_be_really_sure: u64,
    }
that and hardware-level protection of course


Storing the same value the same way doesn't really help against implementation bugs. If you're going that route, better to store it in multiple formats (u64, int, or string) to protect yourself from a CPU, interpreter, or library bug. In practice, just store the amounts as integers and, if you are really worried, add a checksum (if a total-balance check is not enough).


I think what the OP is talking about isn't software bugs, but rather hardware errors like a bit flip caused by external interference.


That pretty much guarantees being the victim of multiple coding bugs...


Maybe, if you're an incredibly bad programmer. Add operator overloading and a method to extract the number, and you never have to look at the internals again.


They're talking about floating point precision issues, same thing as the article, and explaining another way to deal with it that doesn't require thinking as low-level as bits like the article is mainly talking about.


That's the kind of thing error-correcting codes were invented for.


Not true for any of the financial systems I've worked on, from the credit/derivatives/FI/etc. desks of some of the largest investment banks down to the systems of the virtual card e-money issuer in the UK that I founded.


Derivatives are generally subject to some fairly large uncertainty in valuation, for example bid/offer spreads are usually many orders of magnitude larger than floating point error. When the derivative expires it does have some very fixed value but the investment bank will have made enough money off the trade to "generously" round up the float to the nearest cent.


Profit/margin was not larger than the ulp for single-precision float values for derivatives by the time I got there, so we all were using doubles.

The rounding-for-settlement issue that you describe is, I think, separate.


Single precision ulp is one part in ten million? I'm not sure which derivatives you worked with but that's an exceptionally small profit margin. For most derivatives a margin of one part in ten thousand would be considered small.


They use integers to store cents or fractions of cents, and that's it (or the equivalent of MySQL's DECIMAL type if not using integers).


When you say "they": I could tell you exactly what format we stored card balances in, including the implied point position for different currencies (not the same for GBP and, say, JPY), and none of it involved DECIMAL!


> They use integers to store cents or fraction of cents or DECIMAL as alternative if not using integers

Seemed rather clear; what's the difference from what you said?


I used 'they' to refer to my clients and my start-up. You seemed to be making claims about those particular implementations, which seemed a bit odd.


I have never encountered such a system. Integers used for fixed-point representation do not suffer from "bit-level issues".


Is fixed-point the right term?

As far as I know, fixed-point numbers have a fixed fractional part in bits, but this is different from using integers with a multiplicative factor, like 100, to correctly represent a fractional part in hundredths.


No, it's not different; what you describe is exactly what fixed point is.
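
Put differently, "fixed point" only means the scaling factor is fixed; nothing requires it to be a power of two. A decimal fixed-point sketch in Python:

    SCALE = 100        # amounts stored as integer cents
    price = 1999       # $19.99
    total = price * 3  # arithmetic stays in exact integers
    print(f"${total // SCALE}.{total % SCALE:02d}")   # $59.97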


Oh, this does sound odd indeed. I can only assume this is very slow, and voluminous on the storage side.

In a similar vein, the only "tricky" solution I saw once was a financial system that stored rational numbers as fractions, and used fractions for all computations too. A decimal representation was only used for the final results (end-user UI, APIs, etc.). I still think it was overkill.


Interesting, were the denominators always powers of 10? Or any positive integer?


The financial systems you've used never needed to add two numbers together?


I see your 10^14 Zimbabwe Dollar Note and raise you this 10^20 Hungarian pengő,

https://en.wikipedia.org/wiki/Hungarian_pengő (caption: "100 million trillion (100 quintillion) pengő (1946)")

At its nadir (US$ 1.0 = 4.6*10^29 P), a reasonable-sized transaction denominated in, for example, micro-pengős, would have overflowed a uint128_t.
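
Checking that claim with a quick sketch:

    u128_max = 2**128 - 1                # about 3.4e38
    micro_pengo_per_usd = 4.6e29 * 1e6   # about 4.6e35
    print(1000 * micro_pengo_per_usd > u128_max)   # True: $1000 overflows a uint128_t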

Conclusion: just use floating point, it's inflation-proof.


At some point denominations will be 2^20 USD per note. As inflation grows, in the USA and elsewhere, 64 bits won't be enough. Money is an imaginary store of value we all agree on. The actual number changes.


Wouldn't we just re-base the currency at that point?


That “just” is doing some very heavy lifting. That could potentially involve having to update almost every financial system worldwide given the US dollar’s position as a global reserve currency.


If inflation averages 5% per year, a factor of 10^17 (what are the biggest USD notes actually in use?) is just over 800 years away, which is far enough that we might legitimately not have a moon any more let alone dollars.

If hyperinflation brings that date closer, the dollar will probably also stop being the global reserve. Or possibly the hyperinflation will be caused by its ceasing to be the global reserve.
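
The 800-year figure checks out:

    import math
    print(math.log(1e17) / math.log(1.05))   # ~802 years at 5% annual inflation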


> 800 years away, which is far enough that we might legitimately not have a moon any more

Wait, what? 800 years is absolute peanuts for the fate of orbital bodies. Wikipedia's Timeline of the Far Future estimates the demise of the moon in 7.59 billion years; still in orbit around the Earth (albeit a much higher one), it gets destabilized by the expanding sun and eventually swallowed by it, along with the Earth. If they aren't swallowed by the sun, they estimate 65 billion years before we lose the moon in one way or another.

https://en.wikipedia.org/wiki/Timeline_of_the_far_future


https://en.wikipedia.org/wiki/Self-replicating_spacecraft

Even if the unit of replication is a factory under human control with no novel AI, one that can only replicate with significant human oversight of already-existing mining, processing, and manufacturing equipment, it doesn't take insane reproduction rates to disassemble the moon in 800 years.


That's like saying "5 years away is far enough that Mount Rushmore might be gone by then!" because we could nuke it if we wanted to. Why would we do that!? We like the moon being there! Do these examples really help illustrate how far away 800 years, or 5 years, are?


Lots of people want to build megastructures, and the moon is convenient material; I suspect that when the capabilities make it seem like a serious possibility, people will discover the problems and then do it anyway just like with almost all the other environmental issues to date that compete against economic interests.

However, the main point of the example is "800 years is too far ahead to plan for how much the US dollar might inflate", by way of demonstrating how extremely things can change. As far as I know, no fiat currency has existed that long, and only three country-like entities have [0].

Mount Rushmore isn't likely to be targeted by nukes, but I strongly suspect that it is defended against vandals (politically motivated or otherwise) armed with dynamite or similar categories of explosives. That said, if you've always wanted to go and have not yet done so, you should, as I could've said much the same thing about the World Trade Centre 22 years and a fortnight ago.

(I wonder whether a single unfriendly nuke on US soil would cause an economic shock? Normally the assumption would be about what else might come with it.)

[0] https://www.brainscape.com/academy/longest-lasting-empires-w...


The Federal Reserve aims for 2% a year; their adjustments just haven't been working.


I love how you're bringing the US dollar's reserve currency status into a hypothetical where the US Treasury is issuing million dollar denominations. We can assume that some other parts of the world's financial system will have required fixing first.


Yes, but I think a "new dollar" is more likely than the US putting up with million-dollar bills. I guess it's a tough call which one is more embarrassing.



