Gödel's incompleteness is not that complicated. The main insight Gödel had was that proofs, as well as provable statements, can be encoded as numbers. This is obvious to everyone reading HN -- all computing works by encoding things into numbers. And it's easy to see how you'd encode proofs, proof rules etc. into numbers. (hint: as flattened ASTs!)
The thing is, Gödel didn't live in an era of pervasive computing, so he came up with a wonky encoding based on products of powers of primes. This makes the proof technically challenging, but the fundamental idea is not that complex. What he was eventually able to do was encode a recursive statement of the form "this statement is not provable", where the "this" is kind of like a pointer back to the full statement. The rest is straightforward.
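The prime-power scheme is easy to sketch. Here's a toy version (the symbol codes here are made up for illustration; Gödel's actual numbering assigns fixed codes to each logical symbol, and codes are taken to be >= 1 so no exponent is lost):

```python
def primes(n):
    """First n primes by trial division (fine for toy inputs)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(seq):
    """Encode a sequence [s1, ..., sn] as 2^s1 * 3^s2 * 5^s3 * ..."""
    n = 1
    for p, s in zip(primes(len(seq)), seq):
        n *= p ** s
    return n

def godel_decode(n):
    """Recover the sequence by reading off prime-power exponents."""
    seq = []
    p = 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        seq.append(e)
        p += 1                      # advance to the next prime
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return seq
```

Because prime factorization is unique, the encoding is invertible: `godel_decode(godel_encode([7, 1, 4]))` gives back `[7, 1, 4]`. Once formulas and proofs are numbers, statements *about* provability become statements about arithmetic, which is the whole trick.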
This is true, obviously, but I can reassure you that the actual proof is very technical and long. Back in college I took a logic class that spent almost the entire semester going over this proof in its shortened form, i.e. using Rosser's trick. By using Rosser's trick one can prove a more general incompleteness theorem (based on Q) and more elegantly. It ends up stating very concisely that a theory cannot be all 3 of the following (any 2 or 1 or 0 is fine): consistent, complete, and recursively axiomatizable (while extending Q).
It's true that the "main idea" of the proof is "everything is a natural number", which is obvious to us programmers (it possibly wasn't obvious to anyone in Gödel's time). However, this is by no means the only trick that's used in the proof.
An attack is a reason to believe that you can solve the problem. I have no idea how I'd go about solving P=NP, but I did have some thoughts on provable security against transient execution attacks. Which is why I work on the latter but not the former.
Websockets are subject to the same origin policy. There's nothing you can do to violate the SOP via websockets that you wouldn't be able to do with regular HTTP or XHR.
This is not a peer-reviewed research paper. It seems to be a project report likely done by undergraduates. The paper is full of typos and it is not clear what the specific novelty of the proposal is (if there is any).
> This is not a peer-reviewed research paper. It seems to be a project report likely done by undergraduates. The paper is full of typos and it is not clear what the specific novelty of the proposal is (if there is any).
This is not a constructive comment.
The URL suggests this is a final(?) project paper for MIT's 6.857: Computer and Network Security course. The paper seems appropriate in style and length for this kind of setting.
When I clicked the link, I figured this would be a research paper from MIT. It took me a bit of reading to figure out it was just a student project report. That's why I posted the GP.
I stand by my comment that there doesn't appear to be anything particularly novel about this work. Especially with crypto it is important to stick with well-studied and well-understood primitives. There's a danger that someone looks at this and assumes it is being endorsed by MIT and implements it in their system.
This is a misapplication of that principle and runs the risk of turning into the toxic gatekeeping that I know first hand has kept many talented people out of the academic cryptography community. The authors of this work implement their system using existing proof of concept zkSNARK libraries that have been developed by researchers studying this area for roughly a decade or more. They are not rolling their own crypto and are guided by top notch cryptography researchers at MIT (albeit in a course setting).
Even supposing that there is such a thing as a dangerous time to experiment with applications of proof-of-concept work, this really isn't the case for zero-knowledge proof (or more specifically, zk-SNARK) systems right now. I have been in this field for many years and the chief complaint that I have heard from many researchers is that the tooling exists, yet people are not making heavy usage of it outside of ZCash. This, thankfully, has become less true within the last year or two. This work, even if it is written by students, is nice to see given this reality and can give researchers trying to optimize these development tools necessary feedback to improve them.
To put this into a larger research context, keep in mind that various funding agencies have probably spent at least $1 billion on FHE, ZKP, and similar novel crypto research in the last decade. Off the top of my head, this includes DARPA, ONR, NSF, and many more. The biggest barrier this area currently faces is fine tuning these tools to be performant and accessible to non-research level security engineers and developers. This includes determining which applications this technology may be useful for and where, if applied, it is surprisingly not useful. This is why there are continuing (multi-million $) programs to directly target these problems (DARPA SIEVE and IARPA HECTOR for example). This work by MIT students is, at least on the surface, a potentially useful datapoint that we can use to inform our decisions going forward (I personally would like to know more about their development experience using the tools).
Your point about the need to "popularize" zk-snarks etc. is well-taken and I upvoted you for it.
But that said, we seem to be talking past each other at this point. I don't see why my criticism of the OP was unconstructive. This is not actually a research paper, it has not undergone any sort of academic peer review, and if the goal is to present it as one example of the kinds of things that are possible to build with zkSNARKs, then it ought to be presented in that context.
I do want to note that it is possible to compose zkSNARKs in an application in ways that result in the composition not being zero-knowledge. Not saying that has happened here, but implying there's no danger with incorrect usage of zkSNARKs seems sketchy to me.
This is not a proof of why the product of negative numbers is positive. The reason why the product of negative numbers is positive is that we define multiplication to be that way.
Also, this post conflates the unary negation operator with negative numbers. The two are not the same. In so far as this post constitutes a proof (which IMO it does not), it is a proof about the behavior of the negation operator.
A good question to ask is why we made this specific choice of definition. Why should multiplication be defined such that -2*-3 = 6? This is a question that the post does shed some light on. If we'd chosen some other definition of multiplication, a lot of the "intuitive" properties of multiplication that hold over the natural numbers (such as the distributivity of multiplication over addition and subtraction) would no longer be true over the integers.
My point is that you cannot prove something that is true by definition. The OP trying to prove that the product of two negative numbers is positive is like asking to prove that 0 + 1 = 1 in Peano arithmetic.
The OP thinks that his "proof" is showing why multiplying negative values yields a positive result. But the proof is a load of nonsense because it assumes facts like distributivity of multiplication over addition and subtraction. It is literally impossible to prove that $\forall a, b, c \in Z. (a - b) * c = (a * c - b * c)$ -- distributivity of multiplication over subtraction -- without having already defined the meaning of a * b for all integers! This leads to a circular reasoning loop that the OP's "proof" can't get out of.
The thing to realize is that multiplication is not some magic operation handed down to us by god. It is just a binary total function defined over the integers. What the OP is trying to confusedly get at is the following:
1. There is an intuitive definition of multiplication as repeated addition over natural numbers.
2. It is not clear what the corresponding definition of multiplication over negative numbers is.
3. If we want to define multiplication as a total function over the integers, we need to define what the result should be when multiplying negative integers.
4. Specifically, with (3), we are taught in school that the result of multiplying two negative numbers should be positive, but it is not clear why this seemingly arbitrary choice was made.
Unfortunately, the OP is going about this all backwards. One cannot prove what the OP wants to prove. What one can instead do is argue that the specific (but seemingly arbitrary) definition that one has chosen for multiplication is a "good" choice because it has the same properties (distributivity etc.) as multiplication over natural numbers. At its core, this is a stylistic appeal about the "naturalness" of the definition.
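To make the "it's a definition, not a theorem" point concrete, here's a sketch of the standard set-theoretic construction of the integers: an integer is (an equivalence class of) a pair of naturals (p, m) standing for p - m, and multiplication is *defined* on pairs. The class and method names are mine, not from any textbook:

```python
class Int:
    """An integer as a pair (p, m) of naturals representing p - m."""

    def __init__(self, p, m):
        self.p, self.m = p, m

    def __add__(self, other):
        # (p - m) + (q - n) = (p + q) - (m + n)
        return Int(self.p + other.p, self.m + other.m)

    def __mul__(self, other):
        # The *definition* of integer multiplication:
        # (p - m)(q - n) = (pq + mn) - (pn + mq)
        return Int(self.p * other.p + self.m * other.m,
                   self.p * other.m + self.m * other.p)

    def __eq__(self, other):
        # (p, m) ~ (q, n) iff p + n == q + m, using only natural addition
        return self.p + other.m == other.p + self.m

neg2, neg3 = Int(0, 2), Int(0, 3)
assert neg2 * neg3 == Int(6, 0)   # (-2) * (-3) = 6 falls out of the definition
```

Note that negative-times-negative being positive isn't proved here; it's a consequence of the formula we chose for `__mul__`. What one can check (and this is the "naturalness" argument) is that this particular choice makes distributivity and the other familiar laws carry over from the naturals.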
Not every arithmetic property needs to be proved from Peano axioms. One can but it is tedious and unnecessary. A much better starting point is the set of field axioms where the distributivity property is already available as an axiom.
The assumptions made in the article are perfectly fine as per the field axioms. Granted, it would have been nicer if distributivity over addition was used instead of distributivity over subtraction. But it is not a big leap to derive distributivity over subtraction from the field axioms, by distributing multiplication over a positive number and the additive inverse of another positive number.
Wherever you see an assumption made about a negative number, just mentally replace it with the additive inverse of a positive number and you would be fine.
1. You literally cannot prove this fact from the Peano axioms because Peano arithmetic operates on natural numbers, not integers.
2. As I said in my original post at the top, negative numbers are different from the unary negation operator (the additive inverse in the field). The number -2 is an entity that exists by itself regardless of whether you've defined an additive inverse. It turns out that the additive inverse of every positive integer is the corresponding negative integer, but this follows from the definition of +, not the other way around.
3. Even if you give OP the benefit of the doubt regarding his dodgy proof, it is saying something about the additive inverse and its relation to multiplication. It is not saying why the result of multiplying two negative values must be positive.
4. The multiplicative operator over the field must already be defined for you to be able to prove distributivity over addition. You can't assume distributivity over an operator that is only partially defined.
--
Think about how you'd define a field. First, you need a set (let's call it Z), then you need two total operators over the set (+ and *), and two elements of the set (0 and 1), and each of these must satisfy specific properties (aka the field axioms). In particular, + and * must be defined for all members of Z, not just Z+, and further + and * must be distributive. These are all facts you need to prove about Z, *, +, 0, and 1, and only then do you have a field. You cannot work backward by assuming the field axioms (which are unfortunately named because they are not axioms at all but properties) to derive the definition of the field operators.
> You literally cannot prove this fact from the Peano axioms because Peano arithmetic operates on natural numbers, not integers.
Sure you can. With definitions! Define integers from natural numbers. Define rationals from integers. And so on. And so on.
> The number -2 is an entity that exists by itself regardless of whether you've defined an additive inverse.
I see a serious misunderstanding of this topic. Please read up on the field axioms and ring axioms if you haven't already done so. Then please check https://math.stackexchange.com/a/878844 which is arguably more rigorous than this post. But the essence is the same. This is more rigorous because the subtraction operator is not used anywhere. Only addition, multiplication and additive inverses have been used. Like another commenter said, if you just replace subtraction with addition of an additive inverse in the OP's post, things fall into place.
> You cannot work backward by assuming the field axioms (which are unfortunately named because they are not axioms at all but properties) to derive the definition o the field operators.
Of course, when we say they are field axioms we mean those properties hold true for the elements of the field. If you look at those properties, they talk about distributivity over the elements of the field, and additive inverses of the elements also belong to the field, so distributivity automatically applies to additive inverses too.
After that with a little algebra, "product of additive inverses of two elements is equal to the product of the two elements" comes out as a result (not a definition).
Of course, by "product" we mean whatever * represents. It is not necessarily the multiplication operator we see in numbers.
While this product of additive inverses property does follow from the field axioms, there is a short discussion in the blog comments at https://susam.in/blog/product-of-negatives/comments/ which shows that this property holds true for all rings too.
So it is not just numbers for which this property holds true but for all elements of fields and rings too. Quite simply, (-a)(-b) = (a)(b) in all rings where (-a) and (-b) are the additive inverses of a and b respectively.
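For completeness, the little bit of algebra behind that identity, using only the ring axioms (a sketch; it also uses $0 \cdot b = 0$, which itself follows from $0 = 0 + 0$ and distributivity):

```latex
\begin{align*}
ab + (-a)b &= \bigl(a + (-a)\bigr)\,b = 0 \cdot b = 0
  &&\Rightarrow\quad (-a)b = -(ab),\\
(-a)(-b) + \bigl(-(ab)\bigr) &= (-a)(-b) + (-a)b = (-a)\bigl((-b) + b\bigr) = (-a) \cdot 0 = 0
  &&\Rightarrow\quad (-a)(-b) = ab.
\end{align*}
```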
> If we'd chosen some other definition of multiplication, a lot of the "intuitive" properties of multiplication that hold over the natural numbers (such as the distributivity of multiplication over addition and subtraction) would no longer be true over the integers.
This is backward reasoning. The chosen definition of multiplication is not there to keep things "intuitive". If you start with the field axioms, the chosen definition of multiplication is pretty much dictated by the axioms. If you choose another definition of multiplication, you would end up with contradictions like 1 = 0 and such nonsense! And mathematicians abhor contradictions!
"Product of additive inverses of two elements is equal to the product of the two elements" is dictated by the field axioms in all fields.
> Also, this post conflates the unary negation operator with negative numbers.
-a is a standard way to represent the additive inverse of an element in a field.
The point about "unary negation operator" seems irrelevant.
In the real number field, the additive inverse of a positive real number is indeed the negative of that number. The negative of that number is also obtained by applying the unary negation operator to the positive number.
The additive inverse of 3.14 is -3.14. The unary negation operator applied to 3.14 gives us -3.14. I don't see how conflating the unary negation operator with negative numbers is any issue here.
> In the real number field, additive inverse of a positive real number is indeed the negative of that number. The negative of that number is also obtained by the application of unary negation operator on the positive number.
This is a fact that follows from the definition of +. But + needs to be defined before you can start making assumptions about what the additive inverse is. The set over which the field is defined (Z or R) already contains -3, -2 etc. and -3 * -2 or -3 + -2 needs to be defined when you're constructing the field. It then turns out that -3 is the additive inverse of 3. You can't use this when arguing about the definition of why applying * on negative 2 and negative 3 gives you the result positive 6. Because you need to define * over all members of the field before you construct a field in the first place.
> The set over which the field is defined (Z or R)
What? The set of all integers, Z, is not a field! R is. But Z isn't. Z is a ring, a commutative ring. I doubt you understand what a field is!
> already contains -3, -2 etc.
Yes, and those elements are literally the additive inverses of their positive counterparts. If you disagree with this, then the numbers -3, -2, etc. literally have no meaning.
> It then turns out that -3 is the additive inverse of 3.
Are you making this all up with your original research or do you have any proper literature written by a professional mathematician to back it up?
I am aware. It is typically represented as Z_5. They are called prime fields.
I highly doubt wsxcde meant prime fields in their comment though. wsxcde seemed to be talking about the set of all integers and the set of all real numbers in their comment. Only the latter is a field (and a ring) whereas the former is only a ring.
And (-a)(-b) = ab holds in rings (and thus fields).
Yeah, many red states have made it illegal to strike. And of course, this eliminates any possible leverage that employees have over employers to force better treatment and that is precisely the reason why Republicans have done this.
The main argument is that NIMBYs are outnumbered at the state level. They "win" at the local level because older, richer homeowners are overrepresented at community meetings.
NIMBYs also win in local votes. Most voters turn out for presidential elections. Far fewer turn out for state elections. And the voter turnout for local elections is frankly pathetic. The votes where San Francisco set a cap on new office space per year and made apartment buildings near the coast inordinately slow and expensive to build? Off-year elections where the NIMBYs have an electoral advantage.
Some people on the website are complaining that people are putting their video titles in English but the content is in some other language. This is not a lame attempt to get more clicks, but driven by the fact that: (i) Hindi typing works only somewhat well on mobile phones and does not work at all on most desktops/laptops in India, and (ii) search in languages like Hindi sucks even more. Given that most tech savvy Indians are bilingual anyway, it just makes sense to have titles in English so that people can search for them conveniently.
You'll see this is something even Hindi newspapers do. I pulled these two articles from the front pages of the Dainik Jagran and Dainik Bhaskar, two of the bigger Hindi newspapers: [1] and [2]. Very little of the content, except for the title, is in Hindi. This is so that people can search for content using English searches.
Sorry I had a typo. I meant to say very little of the content is in English. But the page title is in English and that matters because I can google for "Jagran UP budget" to get to the second article.
This is practical advice, but it also points to the reality of scientific funding today -- you can only get money to do things that you already know how to do. You must have an idea that is pretty much ready to go on day one. All that should remain are working out the details.
I can see why bureaucrats and government officials prefer it this way (accountability!) but I don't understand why we scientists put up with it.
Something a famous Hungarian ethologist (Csányi) said in a radio interview recently stuck with me: you propose the things that you already discovered in the grant application and then use the money to discover new stuff.
He generally had a grim view of academia and is senior enough to afford to speak his mind.
I actually think it's not as bad as some make it out to be, provided grants are funded at a reasonable rate. What's changed over the last couple of decades or so is that funding rates are single digit percentages. This is the real problem. My advisor said that when he was a younger researcher, it wasn't quite so hard to get funding so he had more time to pursue independent ideas.
Even as a much younger researcher, I feel that at any point in time, I usually have a couple of worthwhile new ideas that are quite likely to pan out and will get funded someplace or another. But if I have to put multiple months of effort into getting them funded, I have a lot less time available to generate the next couple of good ideas. This doesn't mean that people just stop doing research; most researchers are competitive workaholics. Instead, we'll just rehash the same old stuff in a new bottle: we target newer applications and/or chase the latest fads (e.g. adversarial ML) with the goal of getting some of our old ideas reused in a new domain.
This creates a negative feedback loop: the quality of research gets worse, so people think this stuff is not worth funding, which pushes funding rates even lower, which causes research to get worse still.
Unfortunately, scientists working on "useless" ideas wasting public money is a much too convenient bogeyman for politicians to give up.