I debated this with my boss at my first programming job (this was 20+ years ago). He thought 1/0 should be 0 rather than an error because "that's what people expect". My argument was from mathematical definitions (the argument which this blog post picks apart).
In retrospect, I see his point better - practical use trumps theory in most language design decisions.
I haven't changed my mind but the reason has shifted more toward because "it's what a larger set of people expect in more situations" rather than mathematical purity.
1/0 = 0 is usually not a practical thing; it's there so that the output of the division operator stays within the type and you don't get crashes (a "feature" of ponylang and gleam, e.g.). It's kind of a PL wonk thing.
It's not at all a good idea for very important practical reasons as I outline in a reply to parent.
Well, if the requirement is to stay in the type, you could extend the type to include the point at infinity. That satisfies both programmer and the mathematician.
The original purpose of defining it to be NaN/Inf in floating point was exactly that. You'd do all the work and then check whether the result was NaN/Inf at the end, without having to check every intermediate result.
I assert stopping immediately is much more practical. In many cases, you waste considerable amounts of processing power to reach a conclusion you often won't be able to use.
Rust and Haskell can solve this fairly well: both styles of dealing with errors are easily accessible.
Go solves this really badly.
As for specifically what to do about division: the right default depends on your application. Either way is defensible, and I would rather work on making it easy to pick either style in the language of your choice, than to worry too much about what the default should be.
> I don't want to handle errors after every division and division doesn't crash.
You can have one or the other.
You can't have both without the risk of nasal demons. Unless the result of the operation is business-safe to throw away.
That's why having the default / do both is a poor design choice by gleam and pony. Someone will reach for / and encounter demons. AFAICT the other languages that do this are not intended for real-world prod use. By default, / should force the developer into either a crash or an error return that must be unwrapped. If you want some sort of opt-in "logic-unsafe /", fine, but call it something else, like </> e.g.
It's not about what I think zero division yields; I've taken a math class before. It's just about representation within the type system. If division can return infinities, we can't safely combine division with other functions that expect ints and floats.
Most languages throw an error instead, but there are tradeoffs there too. If you've decided not to throw an error, you should at least return a usable number, and zero makes more sense than -1 or 7 or a billion or whatever.
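For instance, Python is one of the languages that raise; a quick REPL check (both integer and float division raise):

>>> 1 / 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> 1.0 / 0.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero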
You could also build the number stack from the ground up to accommodate this edge case, and make it so all arithmetic functions can handle infinities, infinitesimals and limits. I've come across a racket sublang like that but it's nearly unusable for the normal common things you want to do with numbers in code.
They're valid according to a spec that doesn't mean I want one showing up when I'm trying to calculate the area of a semicircle or whatever. In the context of getting one by surprise in simple arithmetic they are approximately as bad as zero. Either way you have to decide how to handle it and there are tradeoffs of different approaches, as the article discusses. It's not about someone just being ignorant of basic math like the comment I was replying to implied.
>In the context of getting one by surprise in simple arithmetic they are approximately as bad as zero.
I don't think so, because getting 0 in a larger expression might yield a result that looks plausible, leading to hidden bugs. Inf and NaN both are good because they necessarily propagate all the way up to the end result, making it obvious that something went wrong.
Technically, it is possible for floating-point Inf to stop propagating prior to the final result, depending on the operation. For example, 1/Inf produces zero, as does exp(-Inf).
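Both cases are easy to check in Python (whose floats are IEEE 754 doubles):

>>> import math
>>> 1 / math.inf
0.0
>>> math.exp(-math.inf)
0.0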
But those are cases where the larger a value is, the less it contributes to the final value.
i would not expect 1/0 to be zero. as you divide by smaller numbers, the quotient gets bigger, so i can't understand why someone would expect /0 to be zero.
Paraphrasing you: "If I have five apples and were to divide them among 0 people, how many does each person get?" This sums up one approach to this problem, and can be thought of in a more intuitive manner than the limit approach. The answer could be zero. Or 1. Or 37. In fact, any number makes as much sense as the question. Which is why either an exception is raised, (or +- Inf is returned for floats, but that's just the limit approach). But perhaps it would be more fun just to return a random number on divide by zero :)
Like everything in life, it depends...
For example:
Storage has 5 items that need to be processed.
5 items need to be split equally between available processes.
There are currently 0 available processes, so 5 / 0 = 0 items to be processed is more correct than either 5 or NaN or infinity.
Your example is quite vague (e.g. are we dealing with an integer number of items and processes?) and in general if something looks kinda like a division it doesn't mean it is exactly division. Just like in math, we have the power to simply say: if COND -> divide normally, else -> do something else.
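To make that concrete, here is a minimal Python sketch (hypothetical function name) of the "if COND -> divide normally, else -> do something else" approach for the items/processes example above:

def items_per_process(items: int, processes: int) -> int:
    # Decide explicitly what "no available processes" means for this domain,
    # rather than relying on whatever the / operator does with a zero divisor.
    if processes == 0:
        return 0  # nothing can be handed out right now
    return items // processes

items_per_process(5, 0)  # 0
items_per_process(5, 2)  # 2 (integer division; the remainder stays queued)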
I agree wholeheartedly. I think the issue stems from 0 meaning both 0 of "something"/"a concept" and "nil."
If I have 5 apples and divide them into 0 buckets of apples, that makes sense. If I have 5 apples and divide them into 0 buckets of tractor, that doesn't make sense.
It's more like you had five apples and divided them among zero people, which means not even you get to keep them. They were thrown in the trash instead. The answer is zero.
A stateful expectation of existence is what the denominator describes, but if you forced it to describe people, then you'd phrase it as "how many (ghosts) could possess (any number of apples)?"
Which would be infinite, since ghosts occupy no space and can't interact with physical reality.
As a proportion, compared to nonexistence, any quantity of something is infinitely greater than nothing, so if not n/0, how would you express you expect not the absence of a thing, but its nonexistence?
The OP uses finite fields and fields are basically harmonic structures (think modular math). Assume the field is the numbers 0..n-1 MOD n. At (n-1) + 1 you will get n MOD n which is equivalent to 0 MOD n which is 0. Now assume n-1 approaches infinity; is 0 not ∞ ?
One intuition could be: As you divide 1 by negative numbers of smaller and smaller magnitude, you get negative numbers of increasing magnitude. At 0, the positive infinity of 1/0 is met by the negative infinity of 1/-0 and their average is 0.
> He thought 1/0 should be 0 rather than an error because "that's what people expect"
So I saw this in action once, and it created a mess. Private company had a stupid stock dividend mechanism: every shareholder received some fraction, dependent on fundraising, of a recurring floating pool of shares, quantity dependent on operating performance. (TL; DR Capital was supposed to fundraise, management was supposed to operate. It was stupid.)
One quarter, the divisor was zero for reasons I can't remember. This should have resulted in no stock dividend. Instead, the cap-table manager issued zero-share certificates to everyone. By Murphy's Law, this occurred on the last quarter of the company's fiscal year.
Zero-share certificates are used for one purpose: to help a shareholder prove to an authority that they no longer own any shares. Unlike normal share certificates, which are additive, a zero-share certificate doesn't add zero shares to your existing shares; it ambiguously negates them. In essence, on that day, the cap-table manager sent every shareholder a notice that looked like their shares had been cancelled. Because their system thought 1 / 0 = 0.
If you're dividing by zero in a low-impact system, it really doesn't matter what you output. Zero. Infinity. Bagel. If you're doing so in a physical or financial or other high-impact system, the appropriate output is confused puppy.
Huh? The article shows why 1/0=0 is mathematically sound, and then considers an error preferable in a programming context anyway, because practicality. It’s the opposite of the reasoning you’re describing.
> The article shows why 1/0=0 is mathematically sound
It does not, because it is not. And the “real mathematicians” that he quotes aren’t supporting his case either, they’re just saying that there are cases where it’s convenient to pretend. If you look at the Wikipedia page for division by zero you may find “it is possible to define the result of division by zero in other ways, resulting in different number systems”: in short, if it’s convenient, you can make up your own rules.
> in short, if it’s convenient, you can make up your own rules.
Yes.
People find it confusing that there is no simple model that encapsulates arithmetic. Fields do not capture it in its entirety. The models of arithmetic that describe it end up being extremely complex.
Arithmetic is ubiquitous in proofs of other things, and people like the author of this blog cannot get over it.
Reality is weird, inconsistent, and weirdly incomplete.
"Making up your own rules" is literally what mathematics is, though. Using that as a counterargument to using a specific set of axioms tells me you don't understand mathematics.
>”Making up your own rules" is literally what mathematics is, though.
We don’t make up arbitrary rules, though. Well…so-called mathematicians who study systems with completely arbitrary rules are just jerking off. The rules that most mathematicians use are based on our intuitions about what can’t be proven but “has to be” true.
As long as lim(1/x)_x->0 = inf, 1/0 = 0 doesn't make a whole lot of sense, mathematically speaking.
I might be wrong but I don't think it was addressed in the article either.
There's a great Radiolab episode[0] that talks about divide by zero in perhaps more conceptual terms.
KARIM ANI: If you take 10 and divide it by 10, you get one. 10 divided by five is two. 10 divided by half is 20. The smaller the number on the bottom, the number that you're dividing by, the larger the result. And so by that reasoning ...
LULU: If you divide by zero, the smallest nothingness number we can conceive of, then your answer ...
KARIM ANI: Would be infinity.
LULU: Why isn't it infinity? Infinity feels like a great answer.
KARIM ANI: Because infinity in mathematics isn't actually a number, it's a direction. It's a direction that we can move towards, but it isn't a destination that we can get to. And the reason is because if you allow for infinity then you get really weird results. For instance, infinity plus zero is ...
LATIF: Infinity.
KARIM ANI: Infinity plus two is infinity. Infinity plus three is infinity. And what that would suggest is zero is equal to one, is equal to two, is equal to three, is equal to four ...
STEVE STROGATZ: And that would break math as we know it. Because then, as your friend says, all numbers would become the same number.
Then take 10 and divide it by -10 = -1. 10 / -5 = -2. 10 / -0.5 = -20.
So from the other side of the y-axis it behaves the exact opposite. It goes to minus infinity. So at x=0 we would have infinity and minus infinity at the same time. Imho that is why it is undefined.
And you're exactly right, 0/0 is NaN in 754 math exactly because it approaches negative infinity, zero (from 0/x), and positive infinity at the same time.
I always thought the answer to verbal query "let y=1/x, x=0, find y" was "Well, the answer is the Y axis of the plot". Surprising that people have to be reminded that X can be signed. I've had similar conversation IRL.
It's equal (as in, comparing them with == is true), but they are not the same value. At least in IEEE 754 floats, which is what most languages with floating point numbers use. E.g., in JS:
I think you're misunderstanding me. They are the same value, but a different representation. The equivalence of the value can be shown with math, and has nothing to do with the implementation details of IEEE 754.
[wrong] 3a. 1 == 2 (assumes Infinity - Infinity == 0, which is false)
[ok] 3b. Infinity == Infinity
So starting from Infinity + 1 == Infinity + 2 gets you nowhere interesting.
And that quote is a great example of what I hate about every pop-sci treatment of mathematics:
> Because infinity in mathematics isn't actually a number, it's a direction
Any time someone says "actually, in mathematics, ..." they're talking out of their ass. No matter what comes after, there is a different system of math that makes their statement false. There are plenty of branches of mathematics that are perfectly happy with infinity being a "number", not a "direction". What even is a "number" anyway?
It's even worse than that. The other issue is what happens when you've got a negative number as the numerator (number on top). Then the smaller the denominator (number on bottom), the more negative the result. -10/10 = -1. -10/5 = -2. -10/0.5 = -20. So if you divide by zero, it's obviously negative infinity! And it's positive infinity! At the same time.
The arguments around limits are addressed towards the end (under "Update 8/12/2018"):
> > If 0/0 = 0 then lim_(x -> 0) sin(x) / x = sin(0) / 0 = 0, but by L’Hospitals’ Rule lim_(x -> 0) sin(x) / x = lim_(x -> 0) cos(x) / 1 = 1. So we have 0 = 1.
> This was a really clever one. The issue is that the counterargument assumes that if the limit exists and f(0) is defined, then lim_(x -> 0) f(x) = f(0). This isn’t always true: take a continuous function and add a point discontinuity. The limit of sin(x) / x is not sin(0) / 0, because sin(x) / x is discontinuous at 0. For the unextended division it’s because sin(0) / 0 is undefined, while for our extended division it’s a point discontinuity. Funnily enough if we instead picked x/0 = 1 then sin(x) / x would be continuous everywhere.
Similar examples can be constructed for any regular function which is discontinuous (e.g. Heaviside step function).
It's fine. Infinity isn't a real number, so 1/x isn't continuous at 0, so it doesn't matter what the value of 1/0 is. All your open sets still behave the way you expect. Whether you choose "this function is undefined here" vs "it's impossible to ever reach the value of this function at this value, under any assumptions I'll ever care about" is purely a matter of convenience.
As others have pointed out "larger and larger" is the same when it is negative too. So I think people are just going: positive infinity + negative infinity = 0.
Intuitively nice in a sense but I honestly think '0' is misrepresenting what is going on here. I'm ok with it being ' "+ and/or -" infinity' as a new definition.
Programmatically I think it should result in a NULL or VOID or similar. I mean, by definition it has no definition.
But once you go past zero, it flips suddenly anyway, so you could just as well have it be intuitively "halfway between the positive and negative infinities", which is at least fun and could spawn a few "Why does 1/x suddenly go to zero" articles on HN in 2053.
Well if you consider 1/z as a function of a complex coordinate it definitely makes a lot of sense to set it to infty. That identifies +infty and -infty if you restrict yourself to the real numbers.
I was also looking for this. And would like to add: lim(-1/x)_x -> 0 = -inf
That is (in my opinion) the whole point why it is actually undefined. On one side of the y-axis it goes to infinity, on the other to minus infinity. I don't see a solution to this and therefore always have accepted that it is undefined.
explains Lean's behavior. Basically, you use a goofy alternate definition of division (and sqrt, and more), and to compensate you have to assume (or prove based on assumptions) that the things you will divide by are never zero.
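For illustration, a small sketch of that convention, assuming Lean 4 with Mathlib (lemma names are Mathlib's and may shift between versions):

import Mathlib

-- Core Lean already defines natural-number division by zero to be zero:
#eval (1 : Nat) / 0        -- 0

-- Mathlib extends the same convention to fields such as ℝ:
example : (1 : ℝ) / 0 = 0 := div_zero 1

-- Lemmas that genuinely need a nonzero divisor take it as a hypothesis:
example (x : ℝ) (h : x ≠ 0) : x / x = 1 := div_self h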
Hillel's pedantry is ill-taken, though, because he starts off with a false accusation that the headline tweet was insulting anyone.
Also, 1/0=0" is sound only if you change the field axiom.of division, which is fine, but quite rather hiding the ball. If you add " 1/0=0" as an axiom to the usual field axioms, you do get an unsound system.
My head-canon with dividing by zero is that 1/0 = undefined and 1/-0 = -undefined, and that's where I leave it because anything less funny than that seems like an impractical answer.
I find it odd that all of the mathematicians cited at the end are actually pretty much CS people, working on proof assistants. Kinda renders that section pointless, IMO (though the comment by Isabelle's author was interesting).
IMO, whether something like this makes sense is a separate matter. Personally I always just think of division in terms of multiplicative inverses, so I don't see how defining division by zero helps other than perhaps making implementation easier in a proof assistant. But I've seen people say that there are some cases where having a/0 = 0 works out nicely. I'm curious to know what these cases are, though.
Not that anybody asked me, but I think about it like this:
You have a field (a set of "numbers"). Multiplication is defined over the field. You want to invent a notion of division. Let's introduce the notation "a/b" to refer to some member of a field such that "a/b" * b = a.
As Hillel points out, you can identify "a/b" with a*inverse(b), where "inverse" is the multiplicative inverse. And yes, there is no inverse(0). But really let's just stick with the previous definition: "a/b" * b = a.
Now consider "a/0". If "a/0" is in the field, then "a/0" * 0 = a. Let's consider the case where a != 0. Then we have "a/0" * 0 != 0. But this cannot be true if "a/0" is in the field, because for every x we have x * 0 = 0. Thus "a/0" is not in the field.
Consider "a/0" with a=0. Then "a/0" * 0 = 0. Any member of the field satisfies this equation, because for every x we have x * 0 = 0. So, "a/0" could be any member of the field. Our definition of division does not determine "0/0".
Whether you can assign "1/0" to a member of the field (such as 0) depends on how you define division.
Consistency depends on your set of axioms. If you are willing to give up various nice properties of division, then you can obviously extend it however you like.
My gripe with arbitrary choices like this is that it pushes complexity from your proof's initial conditions ("we assume x != 0") into the body of your proof (every time you use division now you've split your proof into two cases). The former is a linear addition of complexity to a proof, whereas the latter can grow exponentially.
Of course, nothing is stopping you from using an initial condition anyway to avoid the case splitting, but if you were going to do that why mess with division in the first place?
Most definitions of division that I have seen use q * d + r = n, where q is unique and abs(r) < abs(d), which doesn't require the definition of an inverse. Rather, a d that exists for n = 1 and r = 0 can be labelled q's inverse, but it doesn't require a new definition.
Additionally, if inverses are defined as separate objects then what is 2 plus the inverse of 2? It doesn't simplify to 2.5 because there's no addition axiom for numbers and multiplicative inverses, or for that matter any rules for inverses with inverses. So you might have 1/2 and 5/10 but they're not equal and can't be multiplied together.
Sounds legit: infinity is singular and so is 0. I think one problem is also that division isn't the only mathematical operation which can produce dubious results, e.g. sqrt(x) and arctan(x), which have multiple branches, which is why there is often a separate arctan2(y, x) to select the correct branch. Oh well, and then there's just addition, which silently overflows in almost every programming language.
Without arbitrary precision numerics and functions which aren't explicit about corner cases it's always a simplification. However performance-/code-wise this is usually not feasible.
Yeah but this is basically about countable and uncountable sets. It's rather counter-intuitive that the rational numbers are countable. So it's possible to create even a bijective mapping between rational numbers and natural numbers. On the other hand real numbers (which number types in programming languages try to approximate) are uncountable
In uxn, the result of division of anything by zero is defined as zero (there are no error conditions in uxn). I did not know that Pony is also doing that. This is not a proper "division" (since it is not always a multiplicative inverse operation), but it does not necessarily have to be (and, as another comment mentions, the integer division operator in many programming languages is not a proper "division" either); it is something else which might use a "/" sign or the instruction name "DIV" or whatever.
I've always wondered what would happen if we defined /0 as a new symbol, for example 'z'. The same as we define sqrt(-1) as 'i'. So if you can do 4*sqrt(-1)=4i, you could also do 4/0 = 4z. These two seems similar, as in taking something that should not exist, and just letting it exists in a totally different and orthogonal domain.
I tried once to investigate the implications, but it quickly became far more complex than with 'i' and never went far. Still intrigued whether this is somewhat interesting or a total waste of time, though.
It's just a waste of time. The reason no value is conventionally assigned for division by zero is that assigning a consistent value doesn't help. When you want a value for that kind of expression at all, you'll want different values in different expressions.
In SQL, if you divide by zero, you get a NULL. If you divide by NULL, you get NULL (any operation involving a NULL yields NULL, even GROUP BY). I call it "a black hole zero", if it touches anything, that thing becomes a black hole zero.
Some languages will wrap division by zero in a special value, a NaN (not a number). You can then reason on top of that NaN if you want to.
So, in a sense, there are some people already doing practical stuff with substituting /0 for a new symbol.
4) The dyadic arithmetic operators <plus sign>, <minus sign>, <asterisk>, and <solidus> (+, -, *, and /, respectively) specify addition, subtraction, multiplication, and division, respectively. If the value of a divisor is zero, then an exception condition is raised: data exception-division by zero.
However, the "any operation involving NULL yields NULL" is standard:
1) If the value of any <numeric primary> simply contained in a <numeric value expression> is the null value, then the result of the <numeric value expression> is the null value.
This is, in some sense, calculus. Look at 0z, which is 0/0, which calculus treats with l'Hôpital's rule. Another way of looking at it is to say that 0 is dt; then z is 1/dt. Clearly we can have different 0s, so we might name another dx, then take dx/dt, which is an arbitrary derivative.
So one grain of sand is a heap and then when you remove that grain the heap disappears, but you only removed one grain from a heap so this is impossible because it is discontinuous. One solution is to wrap the problem in fuzzy logic with a 'heapness' measure.
Generalizing this type of solution we have a practice of wrapping paradoxes in other forms of logic. You would define an interface between these logics. For example in Trits (0,1,UNKNOWN) you could define an interface where you can change the type of NOT-UNKNOWN from Trit to Boolean. This would return at least some part of the entropy to the original domain, preserving a continuity. Wave Function Collapse is another example of translating from one logical domain to another.
Disagreed. 1/0 should be infinity, and computers should be able to handle these concepts. Just look at what 1/0.00000000000[etc]1 is. And no, it's not an error; you find that out with a very real and tangible example when you are developing a 3D engine and you want to make the camera look at vector [ 0, 0, 0 ]. Quick summary: you can't, you need to force-add a slight displacement so you can skip this silly error.
Whatever as long as the name does not imply that these are integers, because then it is just wrong. The same holds for overflowing results being clamped or resulting in smaller or negative values due to wraparound. These are not integers.
There is only one correct behavior for something named "int". Give the correct result or throw an error.
Agree `int` is the problem. This implies we're doing math over all integers, when in most languages what we're actually working with are bounded integers. (There's some counter-examples, Python and Haskell come to mind.) Calling them sane names like `i32` and `i64` makes it clear that overflow exists.
Those are all integers. https://en.wikipedia.org/wiki/Modular_arithmetic - "The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801." They have been integers for over 200 years now.
But if you write a + b and the result is wrapped around or saturated, it's not integer addition. It's something else and should be written in another way in code and have a different name. I am aware of modular arithmetic.
If you have a type named "int" with an operation called "addition", and that operation is not actually integer addition... it's wrong.
True correct behavior would have that if a > b, then a + c > b + c also holds true for all integers, but that isn't guaranteed for wrapping (or clamping.) (e.g. if 250 > 1, then 250 + 10 > 1 + 10 should be true, but with 8-bit wrapping you would get 4 > 11, which is false.)
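You can see that concrete case by simulating 8-bit wrapping in Python with a 0xFF mask:

>>> a, b, c = 250, 1, 10
>>> a > b
True
>>> (a + c) & 0xFF, (b + c) & 0xFF
(4, 11)
>>> ((a + c) & 0xFF) > ((b + c) & 0xFF)
False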
In combinatorics and discrete probability, `0**0 = 1` is a useful convention, to the point that some books define a new version of the operator - let's call it `***` - and define `a***b = a**b` except that `0***0 = 1` and then use the new operator instead of exponentiation everywhere. (To be clear, `**` is exponentiation, I could write `a^b` but that is already XOR in my mind.)
So one might as well overload the old one: tinyurl.com/zeropowerzero
This causes no problems unless you're trying to do real (or complex) analysis.
1/0 can cause a few more problems, but if you know you're doing something where it's safe, it's like putting the Rust `unsafe` keyword in front of your proof and so promising you know what you're doing.
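Python, for one, already follows that convention for its built-in exponentiation:

>>> 0 ** 0
1
>>> import math
>>> math.pow(0.0, 0.0)
1.0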
> It’s saying that Pony is mathematically wrong. This is objectively false.
Pff. The author wants to show off their knowledge of fields by defining a "division" operator where 1/0 = 0. Absolutely fine. I could define "addition" where 1 + 2 = 7. Totally fine.
What I can't do is write a programming language where I use the universally recognised "+" symbols for this operation, call it "addition" and claim that it's totally reasonable.
Under the standard definition of division implied by '/' it is mathematically wrong.
What they obviously should have done is use a different symbol, say `/!`. Obviously now they've done the classic thing and made the obvious choice unsafe and the safe choice unobvious (`/?`).
It's a question of usefulness. If in your problem domain "1+2=7" is the most useful definition, then by all means do that. Why does the semicolon terminate statements and not the universally agreed upon period? Why does the period denote member access? Why is multiplication not denoted by the universally agreed [middle dot / cross character] (strike out the one that is not universally agreed in your country). The design and semantics of a programming language ought to be in service of the programs we wish to express, and informed by our decades of experience in human ergonomics. Blind reverence to religions of yore does us no good. Mathematical notation itself has gone through centuries of development and is not universal, with papers within the same field using different notation depending on what strikes the author's fancy. To treat it as sacred and immutable is to behave most un-mathematically. Hell, you can still get into a nice hours-long argument about whether or not the set of natural numbers includes zero or not (neither side will accept defeat, even though there is clearly a right answer)!
> What I can't do is write a programming language where I use the universally recognised "+" symbols for this operation, call it "addition" and claim that it's totally reasonable.
As a programmer, you're right: we have standard expectations around how computers do mathematics.
As a pedant: Why not? Commonly considered 'reasonable' things surrounding addition in programming languages are:
* (Particularly for older programming languages): If we let Z = X + Y, where X > 0 and Y > 0, any of the following can be true: Z < X, Z < Y, (Z - X) < Y. Which we commonly know as 'wrap around'.
* I haven't yet encountered a language which solves this issue: X + Y has no result for sufficiently large values for X and Y (any integer whose binary representation exceeds the storage capacity of the machine the code runs on will do). Depending on whether or not the language supports integer promotion and arbitrary precision integers the values of X and Y don't even have to be particularly large.
* Non-integer addition. You're lucky if 0.3 = 0.1 + 0.2; good luck trying to get anything sensible out of X + 0.2, where X = (2 ^ 128) + 0.1.
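Both float examples are easy to reproduce in a Python REPL (Python floats are IEEE 754 doubles):

>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004
>>> x = 2.0 ** 128
>>> x + 0.2 == x
True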
> I haven't yet encountered a language which solves this issue:
Well, Python supports arbitrary precision integers. And some other niche languages (Sail is one I know).
I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.
For floats, I don't think it's actually unreasonable to use different operators there. I vaguely recall some languages use +. or .+ or something for float addition.
> Well, Python supports arbitrary precision integers. And some other niche languages (Sail is one I know).
As a Lisper, I very carefully chose an example to account for arbitrary-precision integers (so X + X where X is, say, 8^8^8^8 (remember, exponentiation is right-associative, 8^8^8^8 = 8^(8^(8^8)))).
> I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.
Being pedantic, it doesn't give the _correct_ answer either, because in mathematics 'ran out of memory' is not the correct answer for any addition.
Right, but you can never guarantee giving the correct answer. What if someone unplugs the power mid-computation? That's basically where running out of memory is (for a modern desktop system anyway).
What's wrong with that? It's mathematically undefined. SQL dialects typically return NULL for erroneous operations. Plus, it's not like it's returning 0 or some other numeric-typed value.
If all the inputs to the expression are not null, you should get some not null result so long as your well defined inputs meet other criteria of the expression (in bounds, etc.); I'd consider this a validation question and, failing validation, I'd want an exception. By simply returning NULL, you're really saying something else... that the inputs to the expression are valid, but result in something that cannot be determined. I don't think this is the case in 1/0; an exception is the proper signal since the construction of the expression itself is wrong (or the inputs invalid).
"SQL dialects typically return NULL for erroneous operations." I disagree with this. NULL does not mean erroneous, it simply means the definition is not yet known and therefore cannot be discussed beyond saying you don't know. That could be erroneous, but you don't know yet, all you have is a NULL.
If it's any comfort, I do agree that NULL is better than 0 or some other non-null result. I just don't think it's best and clouds the nature of the expression, the inputs to the expression, and ultimately is an incorrect result.
Also to be fair, MySQL had many more grievous foot-gun data quality issues in the past than this... though these things certainly did make it easier for a non-expert database developer to get something working without blow-up-everything errors.
My understanding was that it's "not allowed" rather than "undefined".
SQL returns NULL if any input value into an expression is NULL, not if an invalid operation is attempted. If the expression contains an error, SQL throws an error, it doesn't return NULL.
The SQL standard requires to error out in this case.
Also: I don't know of any system that would not result in an error when you try to divide something by zero.
If you actually write 1/0 in a manner that can be discovered through static analysis, that could just be a compile time error.
If you compute a zero, and then divide by it… I dunno. Probably what happened was the denominator rounded or truncated to zero. So, you actually have 1/(0+-e), for some type-dependent e. You have an interval which contains a ton of valid values, why pick the one very specific invalid value?
I think it would be possible and practical to use refinement types to statically prevent all divisions by 0. I think you could also do this to detect and prevent integer overflow.
Q on this post:
Is the field rule "Every element Except Zero has ... " (the 9th rule) defined with respect to the additive identity "zero" or the magical other undefined "Zero" that is the number we're all familiar with?
If so, how weirdly arbitrary that the additive zero is omitted for all multiplicative inverse definitions. (At least it seems to me). I always figured this was a consequence of our number systems, not of all fields.
> Every element EXCEPT 0 has a multiplicative inverse, a⁻¹, such that a*a⁻¹ = 1.
What is "0"? It's not defined in the axioms other than additive zero. Or is it multiplicative zero? (1?). Is it the number zero?
If it is the additive zero defined in axiom (3), then it just seems weird to me that additive zero is undefined for multiplicative inverse for all fields always and forever.
If it is the number zero, then how does that generalize to other fields?
If the answer is "Numbers are the first field and all fields generalize that", then I suppose we are referring to the number (0), and that's fine, as other fields are welcome to define their own larger definition of zero that includes the number (0) ... ?
The definition is that it is the additive identity for the field; eg x + a = x no matter what value x takes and what field you are considering. This must be unique; suppose a and b are both additive identities for a field, then b + a = b and a + b = a, but commutativity gives us a + b = b + a, resulting in a = b.
The reason the additive identity cannot have a multiplicative inverse is likewise fairly straightforward: once again using `a` as our additive identity, we have y.(x+a) = y.x for all x, y in our field; distributing on the LHS gives y.x + y.a = y.x for all x, y in our field; subtracting y.x from both sides finally gives us y.a = 0 for all y in our field. So y.a can never equal 1, which is exactly what a multiplicative inverse of `a` would require.
You would need to relax one or more of the field axioms to have a structure in which the additive identity can have a multiplicative inverse. I'm not aware of any algebraic structure of particular interest that would allow a multiplicative inverse of the additive identity, but in general if you're interested in reading more on this sort of thing I'd recommend reading about rings, commutative rings, and division algebras.
"Zero" is just a specific element of the field which satisfies being the additive identity as well as the rest of the properties of a field where 0 is mentioned. When the rest of the axioms refer to "zero" they refer to the exact same element of the set that is also the additive identity.
It's not the "number zero" because a field does not care about numbers, it's just elements of a set (which might be numbers like in R's case).
1 is not "multiplicative zero", it's the "multiplicative identity".
0 and 1 are just the shorthand we give for those elements, because those are the symbols we use in R, which is the most common field we deal with in everyday life.
Another version of this: P(A|B) is not defined for P(B)=0, but you can safely let P(A|B)P(B)=0 when P(B)=0. Half the time, these two terms appear as a pair anyway.
It's a version of "you can't divide by zero, but you can multiply the divisor on both sides of the equation and then use 0*a=0."
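One common place the pair shows up is the law of total probability: P(A) = sum_i P(A|B_i) * P(B_i). With the convention above, any partition element with P(B_i) = 0 simply contributes 0 to the sum, so it needs no special-casing.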
The article goes way too long to say 1/0 = 0 isn't exactly wrong because slash can mean anything you want it to mean. As the article points out, it isn't really "right" either, you could equally validly say it's a wreath of pretty flowers which smell bad.
In the computational domain we hold entropy in high esteem. Arbitrarily assigning a value of 0 does not preserve entropy. We could return a promise that eventually we will not overflow if we get to be very very clever (arbitrary time) so that we can maintain purity.
This sort of convenient semi-arbitrary extension of a partial function is ubiquitous in Lean 4 mathlib, the most active mathematics formalization project today. It turns out that the most convenient way to do informal math and formal math differ in this aspect.
I set this to zero and print a warning/error about the divide by zero to the log, with the data that caused it. That log would be sent to the business person to worry about.
If they ignore it, I do not care, it is the business problem anyway.
Saying 1/0=∞ means creating a new number system with ∞ as a number. Now you have to figure out all operations with ∞, like -1*∞, 0*∞, ∞*∞, ∞/∞, or ∞-∞.
Making wrong definitions creates contradictions. With 1*x=x, ∞/∞=1, the associative property x*(y/z)=(x*y)/z, and ∞*∞=∞: ∞*(∞/∞) = (∞*∞)/∞ = ∞/∞ = 1, but also ∞*(∞/∞) = ∞*1 = ∞, so ∞ = 1.
But why would we go from what obviously should be a very large boundless number and just replace it with 0. Our few comment discussion is why it’s undefined in a nutshell.
The main issue lies in weakening the field axioms to accommodate any strange new numbers. Instead, defining division by 0 to be 0 adds no new numbers, so the field axioms don't change (x/x=1 still requires x≠0). I hope you see the value in extending field theory instead of changing field theory.
If we add new numbers like ∞, -∞, and NaN (as the neighbor comment suggests with IEEE754-like arithmetic), now x/x=1 requires x≠0, x≠∞, x≠-∞, and x≠NaN. Adding more conditions changes the multiplicative inverse field axiom, and thus doesn't extend field theory. Also, now x*0=0 requires x≠∞, x≠-∞, and x≠NaN. What a mess.
The problem is simply that the definition is a lie.
I’m not suggesting that we add numbers or change the definition from undefined. I think undefined is a more accurate description of x/0, because x/0 is clearly far greater than 0.
That's a largely solved problem. IEEE 754 defines consistent rules for dealing with infinities. Even if you don't use the floating-point parts and made a new integer format, it almost certainly would make sense to lift the IEEE 754 rules as-is.
An IEEE 754-like arithmetic (transrational arithmetic, or transreal arithmetic) creates new problems due to adding new values. 0*x=0 now requires x≠∞, x≠-∞, and x≠NaN. (x/x)=1 now requires x≠0, x≠∞, x≠-∞, and x≠NaN, so this system doesn't satisfy the field axioms. NaN lacks ordering, so we lose a total order relation.
However, you get cool new results, like x/x=1+(0/x). Definitely some upsides.
Honestly this hurts my head but Hillel is inevitably correct. You can define an explicitly undefined operation to do whatever you like. But what’s the point? There’s no new mathematics you can do with it, no existing behaviours you can extend like this. Normally, when you divide by a small number, you get a large number. Now for some reason it goes through zero. Why not five? Why not seven?
Just because it’s formally consistent doesn’t mean it isn’t dumb.
Because exceptions are expensive, and functions with holes are dumb.
"Dumb" is purely a matter of aesthetic preference. Calling things "dumb" is dumb.
> Normally, when you divide by a small number, you get a large number. Now for some reason it goes through zero.
Zero is not a "small" number. Zero is the zero number. There is no number that is better result than 0 when dividing by 0; "Infinity" is not a real (or complex) number. This itself is GREAT reason to set 1/0 = 0.
It only ever bothers people who conflate open sets with closed sets, or conflate Infinity with real numbers, so it's good have this pop up to force people to think about the difference.
Sure, but there are infinite series that sum to a finite value. Perhaps a pertinent example would be summing all the distances between successive reciprocals, starting from 1:
Sum[1/x - 1/(x+1), {x, 1, ∞}] == 1
You do actually need infinity to arrive at that 1.
Consider that "lim -> inf" does not mean "it goes to infinity". The actual definition of such a limit has nothing to do with infinity as a value. So your argument about infinity is a red herring.
Or try it the other way: tell me what mathematics works better if 1/0=0 than if 1/0=5. If there's an aesthetic preference displayed here, it's for mathematics as a tool for reasoning.
Math major here: this is wrong. The expression 1/0 is NOT A NUMBER, even if you allow positive infinity or negative infinity. In particular, it is most certainly not 0.
Note that infinity would be a fine answer IF MATHEMATICS COULD BE CONSISTENTLY EXTENDED to define it to be so, but this cannot be done (see below). Note that using infinity does not "break" mathematics (as some have suggested below) otherwise mathematicians would not use infinity at all.
If we have an expression that is not a number, such as 1/0, you can sometimes consistently define it to be something, such as a number or positive infinity or negative infinity, IF THAT WOULD BE CONSISTENT with the rest of mathematics. Let's see an example of the standard means of getting a consistent definition of exponentiation starting with its definition on positive integers and extending eventually to a definition for on a much bigger set, the rationals (ratios of signed integers).
We define 2 ^ N (exponentiation, "two raised to the power of N") for N a positive integer to be 2 multiplied by itself N times. For example: 2 ^ 1 = 2; 2 ^ 2 = 4; 2 ^ 3 = 8.
Ok, what is 2 ^ N where N is a negative integer? Well we did not define it, so it is nothing. However there is a way to CONSISTENTLY EXTEND the definition to include negative exponents: just define it to preserve the algebraic properties of exponentiation.
For exponents we have: (2 ^ A) * (2 ^ B) ("two raised to the power of A times two raised to the power of B") = 2 ^ (A+B) ("two raised to the power of A plus B"). That is, when you multiply, the exponents add. You can spot check it: (2 ^ 2) * (2 ^ 3) = 4 * 8 = 32 = 2 ^ 5 = 2 ^ (2 + 3).
So we can EXTEND THE DEFINITION of exponentiation to define 2 ^ -N for positive integer N (so a negative integer exponent) to be something that would BE CONSISTENT WITH the algebraic property above as follows. Define 2 ^ -N ("two raised to the power of negative N") to be (1/2) ^ N ("one half raised to the power N"). Check: (2 ^ -1) * (2 ^ 2) = ((1/2) ^ 1) * (2 ^ 2) = 1/2 * 4 = 2 = 2 ^ 1 = 2 ^ (-1 + 2).
Ok, what is 2 ^ 0 ("two raised to the power of zero")? Again, we have not defined it, so it is nothing. However, again, we can CONSISTENTLY EXTEND the definition of exponentiation to give it a value. 2 ^ 0 = (2 ^ -1) * (2 ^ 1) = 1/2 * 2 = 1. This always works out no matter how you look at it. So we say 2 ^ 0 = 1.
I struggled with this for days when I was a kid, literally yelling in disbelief at my parents until they would run away from me. I mean, 2 ^ 0 means multiplying 2 times itself 0 times, which means doing nothing, so I thought it should be 0. After 3 days I finally realized that doing nothing IN THE CONTEXT OF MULTIPLICATION is multiplying by ONE, not multiplying by zero, so 2 ^ 0 should be 1.
Ok, is there a way to CONSISTENTLY EXTEND the definition of exponentiation to include non-integer exponents? Yes, we can define 2 ^ X for X = P / Q, where P and Q are integers (a "rational number"), to be the Qth root of 2 ^ P, that is, (2 ^ P) ^ (1/Q). All the properties of exponentials work out.
Notice how we can keep EXTENDING the definition of exponentiation starting from positive integers, to integers, to rationals, as long as we do so CONSISTENTLY with the properties of the previous definition of exponentials. I will not go into the details, but we can CONSISTENTLY EXTEND the definition of exponentiation to real numbers by taking limits. For example, we can have a consistent definition of 2 ^ pi ("two raised to the power of pi") by taking the limit of 2 ^ (P/Q) as P/Q approaches pi.
HOWEVER, IN CONTRAST to the above extension of the definition of exponentiation, there is NO SUCH SIMILAR CONSISTENT EXTENSION to division that allows us to define 1/0 as ANY NUMBER AT ALL, even if we allow extending to include positive infinity and negative infinity.
The limit of 1/x as x goes to zero FROM THE POSITIVE DIRECTION = positive infinity. Some example points of this sequence: 1/1 = 1; 1/0.5 = 2; 1/0.1 = 10; 1/0.01 = 100, etc. As you can see the limit is going to positive infinity.
However, the limit of 1/x as x goes to zero FROM THE NEGATIVE DIRECTION = NEGATIVE infinity. Some example points from this sequence: 1/-1 = -1; 1/-0.5 = -2; 1/-0.1 = -10; 1/-0.01 = -100, etc. As you can see the limit is going to NEGATIVE infinity.
Therefore, since positive infinity does not equal negative infinity, there is NO DEFINITION of 1/0 that is consistent with BOTH of these limits at the same time. The expression 1/0 is NOT A NUMBER, even if you include positive and negative infinity, and mathematics cannot be consistently extended to make it into a number. Q.E.D.
Yes, it's all good and nice that your types are sound and you don't have panics, but I feel like this could get you in trouble in the real world (gleam also uses this division convention, and people very much use gleam for "real world" things). Suppose you took an average over an unintentionally empty list (maybe your streaming data source just didn't send anything over the last minute due to a backhoe hitting a fiber in your external data source's data center) and took some downstream action based off of what you think is the rolling average. You could get royally fucked if money is involved.
Crashing would have been preferable.
1/0 = 0 is unsuitable and dangerous for anyone doing anything in the real world.
People are too scared of crashes. Sure, crashing is not ideal. Best is to do what the program is supposed to do, and if you can’t, then it’s better to produce a friendly error message than to crash. But there are far worse outcomes than crashing. Avoiding a crash by assigning some arbitrary behavior to an edge case is not the right approach.
Strongly agree here. IMO libraries should try hard to return sensible error codes (within reason, eg null pointer access is unrecoverable imo) but application code should just crash. And when a library returns an error code, default to just crashing if it fails until you have a compelling reason to do something more complicated.
Yes but here's the conflict. You design a typed language, you want the primary operators to be type stable so you can compose them. Then there's no room to return an error from a basic operation. So if your language also makes it a priority to NEVER CRASH, you are stuck.
Right, for arithmetic operations, you must have one of:
1. Might crash.
2. Result may not be what you’d expect from conventional math.
3. Inputs and outputs are different types.
4. Nonlinear control flow i.e. exceptions.
Division isn’t even particularly special here. If you have fixed-width integer types (as most languages seem to) then this is a problem for all the basic operators.
3 and 4 are attractive solutions but can get annoying or cause more bugs. (How many catch blocks out there have zero test coverage?) Between 1 and 2, 1 is usually much better.
For cases where the programmer wants 2, you can provide alternate operators. For example, Swift crashes on overflow with the standard operators, but has variants like &+ for modular arithmetic.
Yes, it's absolutely better to crash if you're in an unexpected state. I had to deal with a service once which had a top-level exception handler that ensured that all exceptions would simply log and let the service keep running. That's great for the majority of exceptions which reach that point because most of them are no big deal to push through.
But one time an exception came at just the right time to cause the internal state and database state to be out of sync. That caused data updates in the service from that point on to start saving bad data into the database. It took a few hours to notice the issue and by that point a lot of the persisted data was trashed. We had to take down the service, restore the database from a backup, and reconstruct the correct data for the entire day.
Fortunately the data issues here were low impact, but it could just as easily have been critical data that was bad. And having a business operate on incorrect data like that could cause far bigger issues than a bit of downtime while the service restarts.
OP didn’t say Gleam is dangerous in general. They said it’s dangerous anywhere around physical or financial values. Your app isn’t critically dealing with either, so it’s not really a retort to their point.
> keeping my client information's integrity is as important to me as keeping the financials
Nobody is questioning your intentions. People writing apps in memory-unsafe languages don’t give fewer shits. They’re just more prone to certain classes of errors.
> how the `1/0=0` problem can be entirely avoided
1/0 problems are generally expected to be entirely avoided. This is about where the system behaves unexpectedly, whether due to human error or the computer being weird.
Correct, these are all trade-offs we make when building a product. Choosing between the "1/0 crashes your program" problem and the "1/0 returns 0" problem is one such tradeoff.
All I was doing was clarifying the impression OP gave.
Now that we all know the details we can make whatever tradeoff we prefer.
Let's be clear. Gleam is still a bit of an esolang. If you had a company and onboarded a junior onto it would you expect them to know that 1/0 == 0? As a senior doing code review for said junior, would you be confident that you would correctly think through every corner case when you encounter the / operator?
It's the year of our Lord 2024; why is a new language shipping such a huge footgun out of the box in its stdlib?
> Gleam offers division functions that return an error type, and you can use those if you need that check.
Yes, but is that what any given developer will reach for first? Especially considering that an error-returning division is not composable?
The language puts people into a place where the instinctive design can cause very dangerous outcome, hard to see in a code review, unless someone on the team is a language lawyer. You probably don't want one of those on your team.
I think there's a reasonable argument for gleam to have an operator that does division resulting in zero but at the very least that should NOT be "/"
as so often, the really preferable solution would be to make it impossible to code the wrong thing from the start:
- a sum type (or some wrapper type) `number | DIVISION_BY_ZERO` forces you to explicitly handle the case of having divided by zero (see the sketch after this list)
- alternatively, if the division operator only accepted the set `number - 0` as type for the denominator you'd have to explicitly handle it ahead of the division. Probably better as you don't even try to divide by zero, but not sure how many languages can represent `number - 0` as a type.
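A minimal Python sketch of the first option (hypothetical helper, just to show the shape of the API): the caller gets either a number or None standing in for DIVISION_BY_ZERO, and has to deal with it before using the result.

from typing import Optional

def checked_div(a: float, b: float) -> Optional[float]:
    # None plays the role of the DIVISION_BY_ZERO variant of the sum type.
    if b == 0:
        return None
    return a / b

result = checked_div(1, 0)
if result is None:
    ...  # handle the divided-by-zero case explicitly
else:
    print(result)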
All Rust's primitive integer types have a corresponding non-zero variant, NonZeroU8, NonZeroI32, NonZeroU64, NonZeroI128 etc. and indeed NonZero<T> is the corresponding type, for any primitive type T if that's useful in your generic code.
Remember that "almost all" of the Reals are unrepresentable using finite sequences of symbols, since the latter are "only" countably infinite. The next logical step is probably the Radicals (i.e. nth roots, or fractional powers).
I know that nested radicals can't always be un-nested, so I don't think larger sets (like the Algebraic numbers) can be reduced to a unique normal form. That makes comparing them for equality harder, since we can't just compare them syntactically. For large sets like the Computable numbers, many of their operations become undecidable. For example, say we represent Computable numbers as functions from N -> Q, where calling such a function with argument x will return a rational approximation with error smaller than 1/x. We can write an addition function for these numbers (which, given some precision argument, calls the two summand functions with ever-smaller arguments until they're within the requested bound), but we can't write an equality function or even a comparison function, since we don't know when to "give up" comparing numbers like 0.000... == 0.000....
It's funny, I hold the exact opposite opinion, but from the same example: In the course of my programming career, I've had at least 3 different instances where I crashed stuff in production because I was computing an average and forgot to handle the case of the empty list. Everything would have been just fine if dividing by zero yielded zero.
What was the problem with crashing? Surely you had Kubernetes/GCP/ECS restart your container, or if you're using a BEAM based language, it would have just restarted
> Everything would have been just fine if dividing by zero yielded zero
perhaps you weren't making business decisions based on the reported average, just logging it for metrics or something, in which case I can see how a crash/restart would be annoying.
I imagine the problem was that it crashed the whole process, and so the processing of other, completely fine data that was happening in parallel, was aborted as well. Did that lead to that data being dropped on the floor? Who knows — but probably yes.
And process restarts are not instantaneous, just so you know, and that's even without talking about bringing the application into the "stable stream processing" state, which includes establishing streaming connections with other up- and downstream services.
It also seems more mathematically appropriate because it is as close to the limit of the reciprocal as one can get with that representation. Now please allow me to duck before being struck by the tomatoes of mathematicians.
It's probably due to how division is implemented, by shifting the divisor and subtracting it from the remainder. Subtracting (0 << n) leaves the remainder the same as it was and the corresponding bit in the quotient will be set at every step.
Intel's 80186 produced a result like that in one special case, because of a missing check in the microcode. This could be called a bug or an optimization: the "AAM" instruction was only documented as dividing by 10, but in fact takes a divisor as part of its opcode (D4 0A = divide by 10, as listed in the documentation; D4 00 = divide by zero). The normal divide instruction - as well as AAM on all other x86 processors - checks for zero and throws an exception.
> It's probably due to how division is implemented, [...]
Or rather how division could be implemented. RISC-V is an abstract instruction set architecture, not born from a concrete chip the way x86 was, but they are trying to make things easy on the hardware.
This article invents a new binary operation, calls it "division" and uses the "/" operator to denote it. But the article repeats multiple times that this new operation isn't a multiplicative inverse, so it's not actually division. For example, (a/b)*b=a isn't true for this new operation.
Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.
At the end of the day, the / that we have in programming has the same problem as this article's /: almost all programming languages will return 5/2 = 2 when dividing integers, even though 2 * 2 is not 5! Division is not defined for all integers, but it's just convenient to extend it when programming.
So if some languages want to define 1/0 = 0, we really shouldn't be surprised that 0*0 is not 1, we already had the (a/b)*b != a problem all along!
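The integer case from above, in a Python REPL (// is Python's integer division):

>>> 5 // 2
2
>>> (5 // 2) * 2
4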
> Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.
Reusing symbols in a different context is pretty common; taking a symbol that is already broadly used in a specific way (in this case, that `a/b` is defined for elements in a field as multiplying `a` by the multiplicative inverse of `b`) is poor form and, frankly, a disingenuous argument.
I am a professor of algebra at a research university. I make a point of teaching my students that `a/b` is NOT the same as multiplying `a` by the multiplicative inverse of `b`.
The standard example is that we have a well-defined and useful notion of division in the ring Z/nZ for any positive integer n, even in cases where we "divide" by an element that has no multiplicative inverse. Easy example: take n=8; then you can "divide" 4+nZ by 2+nZ just fine (and in fact turn Z/nZ into a Euclidean ring), even though 2+nZ is not a unit, i.e. admits no multiplicative inverse.
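A brute-force check of that easy example in Python (my own illustration, just searching for residues q with 2*q ≡ 4 (mod 8)):

print([q for q in range(8) if (2 * q) % 8 == 4])   # [2, 6] -- quotients exist
print([q for q in range(8) if (2 * q) % 8 == 1])   # []     -- 2 has no inverse mod 8

So there are residues q with 2q ≡ 4 (mod 8) even though no residue inverts 2.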
That's nonsense. a/b is a float in Python 3, and even in other languages a/b gets closer to its actual value as a and b get bigger (the "limit", which is the basis of algebra). So the four operations in programming generally do agree with the foundations of algebra. But a/0 = 0 is 100% against algebra. And it's very unintuitive. It's basically saying zero is the same as infinity, and therefore all numbers are the same, so why bother having any numbers at all?
Floats don't have multiplicative inverses, and the floating point operations don't give us any of the mathematical structures we expect of numbers. Floating point division already abandons algebra for the sake of usefulness.
Knuth vol 2 has a nice discussion of floating point operations and shows how to reason about them. Wilkinson's classic "Rounding Errors in Algebraic Processes" (1966) also has a good discussion.
If you were to define a/0 the most logical choice would be a new special value "Infinity". The second best choice would be the maximum supported value of the type of a (int, int64 etc). Anything else would be stupid.
(a/b)*b=a isn't true, but that's also not true for the math that you're thinking of. What is true is IF b≠0 THEN (a/b)*b=a. And this definition works just fine even if you define division by zero.
Also just to point out, the statement here really is a * b⁻¹ * b = a, which might make it more clear why b≠0.
There's no "if" in the division operation. Division is not defined for b=0. a/0 is a nonsensical quantity because the zero directly contradicts the definition of division.
Maybe someday there will be a revelation where somebody proposes that it's a new class of numbers we've never considered before, like how (1-1), (0-1) and sqrt(-1) used to be nonsensical values to past mathematicians. For now, it's not defined.
The limit of 1/x as x goes to zero diverges to plus or minus infinity depending on whether you approach from the right or the left. IEEE 754 uses a signed zero, so defining 1/+0 = +INF and 1/-0 = -INF makes sense. If you do not have a signed zero, arbitrarily picking either plus or minus infinity makes much less sense, and picking their "average", zero, seems more sensible. So x/0 is not actually +INF - even if you meant +0 and we forget about -0 - it is +INF or -INF depending on the sign of x, and NaN if x is +0 or -0.
The definitions in the floating point standard make much more sense when you read 0 and INF as "something so close to / so far from 0 that we cannot represent it", rather than as the actual concepts of zero and infinity.
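To see the signed-zero behaviour concretely without Python's ZeroDivisionError getting in the way, NumPy (assuming it is available) follows IEEE 754 here, aside from the RuntimeWarnings it emits:

import numpy as np
print(np.float64(1.0) / np.float64(0.0))    # inf   (plus a divide-by-zero warning)
print(np.float64(1.0) / np.float64(-0.0))   # -inf
print(np.float64(0.0) / np.float64(0.0))    # nan   (plus an invalid-value warning)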
In floating point a = b * (a / b) is not always a true statement.
>>> import random
>>> random.random()
0.4667867537470992
>>> n = 0
>>> for i in range(1_000_000):
...     a = random.random()
...     b = random.random()
...     if (a == b * (a / b)):
...         n += 1
...
>>> n
886304
For example:
>>> a, b = 0.7959754927336106, 0.7345016612407793
>>> a == b * (a / b)
False
>>> a
0.7959754927336106
>>> b * (a / b)
0.7959754927336105
This is off by one ulp ("unit in the last place").
And of course the division of two finite floating point numbers may be infinite:
>>> a, b = 2, 1.5e-323
>>> a
2
>>> b
1.5e-323
>>> b * (a / b)
inf
>>> a/b
inf
As a minor technical point, x/0 can be -INF if sgn(x) < 0, and NaN if x is a NaN.
TFA was about mathematics, not computer programs.
Mathematically, the one-sided limit of a/b as b approaches 0 is +/- INF, depending on whether a and b have matching signs. The limit represents the value that a/b asymptotically approaches as b approaches 0; a/b for b=0 is still undefined.
For a good example of why this needs to be undefined, consider that the limit as b approaches zero of a/b is both +INF and -INF, depending on whether b is "approaching" from the side that matches a's sign or the opposite side. At the exact singularity where b=0, +INF and -INF are both equally valid answers, which is a contradiction.
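Written out as one-sided limits, for a > 0 (the signs flip for a < 0):

\lim_{b \to 0^{+}} \frac{a}{b} = +\infty, \qquad \lim_{b \to 0^{-}} \frac{a}{b} = -\infty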
Also, in case you weren't aware, "NaN" stands for "not a number".
Pony is what prompted TFA to consider whether or not 1/0 should be defined. It's not what the article is about. Obviously anybody who writes a compiler can define / to have a specified behavior for a zero divisor; TFA is about whether that's correct. There's nothing significant about IEEE 754 choosing to define an operation that's nominally undefined, as it does not have any bearing on whether or not that behavior is correct.
In modern math, the concept of a field establishes addition and multiplication within its structure. We are not free to redefine those without abandoning a boatload of things that depend on their definition.
Division is not inherent to field theory, but rather an operation defined by convention.
It seems like you're fixating on the most common convention, but as Hillel points out, there is no reason we have to adopt this convention in all situations.
Multiplicative inverse happens to be a convenient way to define division in the reals, but there are cases when multiplicative inverses do not correspond to any notion of division. E.g. take a finite ring of integers, like what you’d use for cryptography or heck any operation on an `int`!
The one that excludes 0. It's not a terribly complicated thing to restrict domain: you don't expect, for example, complex values in real-valued functions.
Can you say more? If "0 is not an allowable value for b", then it seems to me that (a/b)*b=a isn't true for all values. Specifically, it's false when b=0.
IIUC, codeflo is arguing that the division operation defined in the article isn't "actual division" because (a/b)*b=a isn't true for all values. But I can't think of a definition of division that satisfies that criterion.
When we say "is not an allowable value", we are speaking about the domain [1]: all the values for which the function is defined. When we say "for all values", we implicitly mean for all values of the domain.
The parallel in programming would be the contract: you provide a function that works on a given set of values. Or the type: the function would "crash" if you passed it a value not of the type of its parameter, but it is understood that this won't be done.
(In what follows I'm referring to 1/x instead of a/b, to simplify things a bit.)
Another way of saying it is that the function is undefined for 0 (or on {0}). Then the property is true for all values on which the function is defined (saying so is redundant: the function can't be called outside its domain, and it is an error to try).
The domain is often left out / implicit, but it is always part of the definition of a function.
0 is not in the domain, so it's not to be considered at all when studying the function (except maybe when studying limits, but the function will still not be called with it).
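As a programming-side sketch of that "the domain is part of the definition" idea (my own illustration; Python's type system can't express "nonzero float", so the contract ends up enforced at runtime):

def reciprocal(x: float) -> float:
    # Contract: x must be nonzero. Calling this with 0 is the caller's error,
    # exactly like applying a partial function outside its domain.
    if x == 0:
        raise ValueError("reciprocal is undefined at 0")
    return 1 / x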
If "0 is not an allowable value for b", then (a/b)*b=a is not defined when b=0, so it is neither true nor false, since you had previously agreed that b=0 is not allowed (regardless of what "/" and "*" are meaning in this context).
This is all well and fine, but feels like a lot of words to say "it's a matter of definition".
The question is what definitions will be useful and what properties you gain or give up. Being a partial function is a perfectly acceptable trade-off for mathematics, but perhaps it makes it difficult to reason about programs in some cases.
I suppose the aim of the article is to point out that the issue is not one of soundness, which is useful, but I wish more emphasis had been put on the fact that it doesn't settle what 1/0 should do, with arguments addressing that question.