Mod and remainder are not the same (conery.io)
165 points by yread on Aug 23, 2018 | hide | past | favorite | 48 comments



In algebra, mod is not an operator. We say 17 mod 4 is 1, but what we write is something like

17 = 1 (mod 4)

to say that 17 and 1 have the same remainder when divided by 4. If you do all of your operations and comparisons mod 4, it doesn't matter if we call the number 1 or -3.

In real life, we often need the remainder function, and we usually need it with positive numbers (how many eggs will I have left if I put these into boxes of 12). We sometimes call this the modulo function, which is a slight abuse of notation from the algebraic meaning of modulo. For positive numbers, it works completely intuitively. There are a couple of natural ways to extend this to negative numbers - an extension which would be meaningless or irrelevant in algebra - and computer language authors have (usually) called one rem and one mod.
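A minimal Python sketch of the two extensions (the names `mod` and `rem` are mine, matching the convention described above):

```python
def mod(a, n):
    # floored-division convention: result takes the sign of the divisor
    return a - n * (a // n)  # Python's // floors toward negative infinity

def rem(a, n):
    # truncated-division convention: result takes the sign of the dividend
    q = a // n
    if q < 0 and q * n != a:
        q += 1  # bump the floored quotient up to the truncated quotient
    return a - n * q

print(mod(17, 4), rem(17, 4))    # both 1: the conventions agree for positives
print(mod(-17, 4), rem(-17, 4))  # 3 vs -1: they diverge for a negative dividend
```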


Strictly speaking, it's not the case that

17 = 1 (mod 4)

but rather

17 ≡ 1 (mod 4)

i.e. "17 is congruent to 1 mod 4", not "equal", though congruence mod n is an equivalence relation.


What do you mean by "17"?

The standard way of constructing modular arithmetic in "upper level" mathematics [0] is to define it over equivalence classes. For instance, let a ~ b (mod 4) be the expected equivalence relation for mod-4 arithmetic.

Then, define [x] = {a | a~x} to be the set of all numbers which are congruent to x mod 4.

At this point, we define [x]+[y] = [x+y] (along with the other arithmetic operations which are defined similarly). Then, there is just a bit of work to prove that the above definition is actually well defined (since the representative element we pick from both of the [x] and [y] sets is arbitrary, we must show that [x+y] is the same set regardless of which x and y we choose).

This means that, when we write 17 = 1 (mod 4), we are really saying [17]_4 = [1]_4; where [x]_n is the mod-n equivalence class containing x. The equal sign is literal, the numbers are not.

This type of implicit injection is actually fairly common in algebra. It is how we can view, for example, 17 as a natural number, real number, complex number, and polynomial without explicitly mapping it each time.
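A sketch of the construction in Python (the `ZMod` class name is mine): equality compares whole classes via a canonical representative, and addition is the well-defined [x] + [y] = [x + y]:

```python
class ZMod:
    """Equivalence class [x]_n of integers congruent to x mod n."""
    def __init__(self, x, n):
        self.n = n
        self.x = x % n  # canonical representative in [0, n)

    def __eq__(self, other):
        # two classes are equal exactly when their representatives coincide
        return self.n == other.n and self.x == other.x

    def __add__(self, other):
        # [x] + [y] = [x + y]; well defined whatever representatives we pick
        return ZMod(self.x + other.x, self.n)

# [17]_4 = [1]_4 -- the equal sign is literal, because the sets are equal
print(ZMod(17, 4) == ZMod(1, 4))               # True
print(ZMod(17, 4) + ZMod(3, 4) == ZMod(0, 4))  # True: [20]_4 = [0]_4
```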

[0] Eg. an introductory undergrad course in abstract algebra.


Yeah, I remember this notation completely messing with my brain back in my discrete math class.


Clock arithmetic.

Modulo 4: a clock with only four evenly-spaced marks on it.

17 is congruent to 1 (modulo 4) means:

Starting at the top of the clock, go clockwise one mark.

Starting at the top of the clock, go clockwise seventeen marks.

You'll end up at the same position.

That's congruence.
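The clock picture maps straight onto Python's % (which, for a positive modulus, always returns the canonical position):

```python
marks = 4  # a clock with four evenly-spaced marks, positions 0..3

def position(steps):
    # walk `steps` marks clockwise starting from the top (position 0)
    return steps % marks

print(position(1), position(17))  # 1 1: same landing spot, hence congruent
```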


I don't think that is quite true. Yes, for normal integers 17 and 4, you can say the above. But mod can also be used as an operation on (and into) the equivalence classes of Z mod 4.


For this reason, x % 0 should be x.


It's funny, I posted this same thing on the 1/0 = 1 thread the other day. It seemed to me that the logic was linked.

https://news.ycombinator.com/item?id=17737175


For polyglots, the problem is that % sometimes means mod, sometimes means rem, and sometimes means something else:

-7 % 2

7 % -2

admit four possible sets of answers.


This article focuses on the (rather uninteresting) case of taking the modulus/remainder of some positive dividend divided by a negative divisor.

However, the real distinction lies in how the two operations handle negative dividends: If n is positive, the modulo operator

  x mod n
always returns a number between 0 and n – 1. The remainder operator returns a number between –n + 1 and 0 if x is negative instead. For example,

  -17 mod 12 = 7
and

  -17 rem 12 = -5.
I have yet to see a case where the remainder operator is the one that you want. If a language only supports rem I find myself constantly writing

  (x rem n + n) rem n
(which is just

  x mod n).
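A sketch of that normalization in Python. Since Python's own % is already floored mod, the rem-style operator is emulated first (the helper names are mine):

```python
def rem(x, n):
    # remainder that takes the sign of the dividend, as C-family % does
    r = x % n  # Python's % is floored (sign of the divisor)
    if r != 0 and (r < 0) != (x < 0):
        r -= n
    return r

def mod(x, n):
    # the (x rem n + n) rem n trick; assumes n > 0
    return rem(rem(x, n) + n, n)

print(rem(-17, 12))  # -5
print(mod(-17, 12))  # 7, same as -17 % 12 in Python
```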


For this reason, Zig requires the programmer to specify which is intended when the operator is used on signed integers:

  var x: i32 = -17 % 12; //compile error: remainder division with '(integer literal)' and '(integer literal)': signed integers and floats must use @rem or @mod
  var y: i32 = @mod(-17, 12); //7
  var z: i32 = @rem(-17, 12); //-5


> (x rem n + n) rem n

Correct me if I'm wrong, but this is also attractive because it can be applied even to an "actual modulo" operator. That is, because "(x mod n + n) mod n" has the same behavior as the above, one need not even know which way a particular language's "%" operator behaves if one just always uses this filter.


a = b (mod n) is defined as n | a - b, that is, n is a divisor of a - b (a, b, n integers, n nonzero). Obviously, if p is prime and p | n, then also a = b (mod p).


I know. But programming languages usually don’t have the modulo relation but a modulo operator that sends each number to a special representative of its equivalence class. For positive numbers, everyone agrees that this should be the smallest nonnegative number (i.e. the representative between 0 and n – 1). However, there are two conventions for the representative of a negative number: Some programming languages return the largest nonpositive number (the article calls this the remainder), others return the smallest nonnegative number in this case as well (the article calls this the modulus). I chose to write mod and rem instead of % to be able to make this distinction.


I recall seeing these referred to as modp (positive) and mods (symmetric) back in University. This seems to be the clearest way to distinguish them. I didn't realize different popular programming languages used different versions for the % operator...

The case of a negative denominator doesn't seem to make sense when talking about modulo in an algebraic sense. I don't think you can have a Ring with a negative dimension. So I've never even thought about what % would do with a negative denominator (RHS). If I was using % with an unknown denominator as input, I could just as easily end up with it being 0 which would be a bad thing, so I think it's fair to say I always require it to be positive.


No, the symmetric version is yet a third variant. What people are calling “rem” is what you get with division that rounds toward zero, whereas what they are calling “mod” is what you get with division that always rounds down. A symmetric version is what you get with division that rounds to the nearest integer, and I have not seen any language with a built-in operator for it.

To the grandparent:

> sends each number to a special representative of its equivalence class

The problem is that many programming languages have a “remainder” type % operator which does not do this in the case where the dividend is negative. For instance,

    -1 % 3 == -1
    2 % 3 == 2
Instead, folks who want mathematically reasonable behavior need to implement their own. For instance in Javascript we can define

    mod = (a, b) => a - b*Math.floor(a/b)


Grandparent here: I know, that’s what I tried to explain (poorly) with the next sentences. Programming languages with a remainder operator choose two representatives for every equivalence class except [0] and return the one that has the same sign as the dividend.

I tend to define

  mod = (a, b) => (a % b + b) % b
because it works without going through floats – in languages that have integer types. I don’t know if there are actual advantages though.


Well, algebraically, Z/nZ and Z/(–n)Z are the same. However, you have again some choice of canonical representative (and you might choose differently for n and –n). But I agree that this is a rather rare case.


I don't think there is a formal universal distinction in programming between a "modulo operator" and a "remainder operator" in terms of behaviour. Wikipedia says that the result of a modulo operation varies between languages [0]. Furthermore, by mathematical convention as seen in the Euclidean division theorem, the remainder is always in [0, n).

[0]: https://en.wikipedia.org/wiki/Modulo_operation#Remainder_cal...


The key to this is in how you do integer division. Here is what happens in Python 2:

  >>> 19 % -12
  -5
  >>> 19 / -12
  -2

That is because Python2 integer division is something called "floored division". This can be seen as always rounding toward negative infinity. This is arguably the correct way to do integer division and in fact there has been a lot of argument about this over the years.

One very nice feature of floored division is that it does not have a weird discontinuity around 0 as "round toward 0" does. This discontinuity can sometimes cause problems in embedded applications involving physical motion.
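The discontinuity is easy to see by tabulating both quotients near zero (the `trunc_div` helper is mine, emulating C-style round-toward-zero division):

```python
def trunc_div(a, b):
    # round-toward-zero quotient, as C-family integer division does
    q = a // b
    if q < 0 and q * b != a:
        q += 1
    return q

for a in range(-4, 5):
    print(f"{a:3d}  floored: {a // 3:2d}  truncated: {trunc_div(a, 3):2d}")
# floored quotients step down uniformly: -2 -1 -1 -1 0 0 0 1 1
# truncated quotients sit at 0 for all of -2..2: a flat spot around zero
```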

So mod and remainder are the same. You just have to do the division right...


Guido van Rossum has written about the background of this in Python here: http://python-history.blogspot.com/2010/08/why-pythons-integ...



Yay lua!


> If I tell you that 9 is the result of my squaring operation, you can easily deduce that the input was 3.

Or minus 3.


> So: 5 mod 2 would be 1 because 5 divided by 2 is 4 with one left over.

No. 5 divided by 2 is 2, not 4, with one left over.


I'm willing to believe this was a typo.


Oh yes, 100%, but someone had to point it out :) One of my biggest fears is that some of my public stuff has obvious mistakes but nobody tells me out of politeness!


I posted a similar comment and had the same feeling. I'm glad you noticed/pointed it out too!


Derp. OP here - yep typo and corrected thank you!


The site seems to be down, but confusion about the difference arises in C# too.

C# does not have a mod operator, the % operator is a remainder operator.

https://blogs.msdn.microsoft.com/ericlippert/2011/12/05/what...



There are actually FOUR possible binary operations, all of which agree when both arguments are positive (e.g. f(13,5)=3 for all of them), and all of which satisfy the property that if f(n,d) = r, then r ≡ n (mod d). Namely:

• The first one always returns a positive number: for this f, when f(n,d) = r, it's true that 0 ≤ r < |d|. For example, f(13,-5)=3, and f(-13,-5)=2. Some programming languages that obey this: ALGOL, Dart, Maple, Pascal, Z3.

• The second one always takes the sign of the divisor (i.e. the denominator, if you write the division as a fraction n/d): for this f, when f(n,d) = r, it's true that |r| < |d|, and that r and d both have the same sign. For example, f(13,-5)=-2, and f(-13,-5)=-3. Some programming languages that follow this: APL, COBOL, J, Lua, Mathematica, MS Excel, Perl, Python, R, Ruby, Tcl.

• The third one always takes the sign of the dividend (i.e. the numerator, if you write the division as a fraction n/d): for this f, when f(n, d) = r, it's true that |r| < |d|, and that r and n both have the same sign. For example f(13,-5)=3, and f(-13,-5)=-3. Some programming languages that follow this: AWK, bash, bc, C99, C++11, C#, D, Eiffel, Erlang, Go, Java, OCaml, PHP, Rust, Scala, Swift, VB, x86 assembly.

• The fourth one takes the sign of the product, i.e. it's positive if both the dividend and divisor have the same sign, and negative otherwise: for this f, when f(n, d) = r, it's true that |r| < |d|, and that r and n*d have the same sign. For example, f(13,-5)=-2, and f(-13,-5)=2. I'm not aware of a programming language that does this, though I've seen some ad-hoc hand-rolled big-integer libraries that used to do this.

Of course in mathematics (number theory) we usually don't care, as, for example, modulo 5 (or -5 for that matter), we would say -13 ≡ -8 ≡ -3 ≡ 2 ≡ 7 ≡ 12... (mod 5), and so we can pick any of them and it doesn't affect the mathematics.
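A sketch of all four conventions in Python (the function names are mine, not standard); each reproduces the examples given above:

```python
def rem_euclid(n, d):
    # always nonnegative: 0 <= r < |d| (relies on Python's floored %)
    return n % abs(d)

def rem_floor(n, d):
    # sign follows the divisor; this is Python's own %
    return n % d

def rem_trunc(n, d):
    # sign follows the dividend, as in C-family languages
    r = n % d
    if r != 0 and (r < 0) != (n < 0):
        r -= d
    return r

def rem_prod(n, d):
    # sign follows the product n*d; rarely seen in the wild
    r = rem_trunc(n, d)
    if r != 0 and (r > 0) != (n * d > 0):
        r -= abs(d) if r > 0 else -abs(d)
    return r

for f in (rem_euclid, rem_floor, rem_trunc, rem_prod):
    print(f.__name__, f(13, -5), f(-13, -5))
# rem_euclid 3 2 / rem_floor -2 -3 / rem_trunc 3 -3 / rem_prod -2 2
```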


Common Lisp avoids the confusion of "which one am I using" by giving you two separate functions, MOD and REM: http://clhs.lisp.se/Body/f_mod_r.htm


Hmmm, having a go with er... Go:

  fmt.Println(19 % 12)
  fmt.Println(19 % -12)
  =======
  7
  7
Hmmm, let's try the built in math package then:

  fmt.Println(math.Mod(19, 12))
  fmt.Println(math.Mod(19, -12))
  =======
  7
  7
Looking at math.Mod()'s description:

  // Mod returns the floating-point remainder of x/y.
  // The magnitude of the result is less than y and its
  // sign agrees with that of x.
k, that seems like it's not supposed to be "modulus" then.

The Go language spec (https://golang.org/ref/spec#Arithmetic_operators) also clearly states it's remainder not modulus.

  %    remainder              integers
The manual approach does work:

  m := 19
  n := -12
  fmt.Println(((m % n) + n) % n)
  =======
  -5
Doing a bit of searching online shows some discussion in 2009, but no mention of any desire for a change:

https://github.com/golang/go/issues/448#issuecomment-6604976...

https://github.com/golang/go/issues/448#issuecomment-6604977...

Does anyone know if this has improved since?


Learned this the hard way when implementing a ring buffer in C++ after spending a lot of time doing things in Python.


Same here, only in Rust. Felt really rather stupid having managed to write code for 20 years without actually knowing this. Tons of knowledge is built on faulty assumptions.


In Swift % is the remainder operator [0][1] and there's also the `.truncatingRemainder(dividingBy:)` and `.remainder(dividingBy:)` methods on floating-point types [2].

[0] https://docs.swift.org/swift-book/LanguageGuide/BasicOperato...

[1] https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...

[2] https://developer.apple.com/documentation/swift/float/288616...


In many programming languages, a % b is simply a - b*(a/b), where '/' is integer (truncated toward 0) division. I'm alright with this because:

1. This computation is typically more efficient on silicon optimized for arithmetic.

2. This computation is unambiguous and more straightforward to people who are unfamiliar with modular arithmetic, especially when negative a and b are involved.

3. In contexts where the mathematical remainder 0 <= r < abs(b) is needed, b is typically always known. Knowing a % b and b, it is easy to recover r; but knowing b and r, one may not recover a % b (we don't know whether it was positive or negative).


I would argue that mod is more intuitive. I imagine the integers being partitioned into contiguous bins of size b, and computing a % b means counting which index in the bin it is. This can be done visually too!
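The bin picture can be checked directly in Python, whose floored % matches it: a // b names the bin and a % b the index inside it (an illustrative loop, not from the original comment):

```python
b = 3  # bin size
for a in range(-6, 7):
    # with floored division, a // b names the bin and a % b the index inside it
    print(f"{a:3d} -> bin {a // b:2d}, index {a % b}")
# e.g. -4 -> bin -2, index 2: the bins tile the integers without gaps
```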


Funny, just a couple of weeks ago I faced a similar problem at work. I ended up slightly changing the code and adding a comment that read something like "We do it this way because the operation we want is the modulus, not the remainder".


> Another question you could answer would be “if I start on a road trip at 6PM, what time would it be when I get to my destination 16 hours later?”. That would be 6 + 16 mod 12 which is 10.

So many things wrong here. Why would you represent 6PM as "6"? And then you mod 12, which will only give the correct response between 12AM and 11AM. If you'd said 18 hours later you'd think you were arriving at midnight. It's a bad example anyway; why would you care what hour you arrive but not which day?


Well, that's why I always define RED(x) (for reduce modulo) as ((((x) % MOD) + MOD) % MOD). Quite useful in algorithmic competitions, which often ask you for answer modulo some prime.


I've regularly been annoyed by -1 % n being negative but I've never seen a case where the negative number is on the right. Has anyone seen this before in real code?


I suspect this is why the topic is relatively obscure; situations where you have a negative divisor and need the remainder/mod are not common.


> situations where you have a negative divisor and need the remainder/mod are not common.

yep. But sometimes, when you have numbers that come from some external system, and you didn't check them or expect that the % operator will make it positive, you end up with this sort of problem. Such as having negative positions with CSS, or a game where you end up with negative values for coordinates.


> So: 5 mod 2 would be 1 because 5 divided by 2 is 4 with one left over.

I do not like this explanation and just don't understand what the author is getting at with this example. The answer of 1 is correct, but the explanation is just plain wrong and that makes it tough to agree with the author's thought process through the rest of the article.


So will JavaScript and C# be patched?


This is covered in more detail in the book Hacker's Delight. A recommended read.



