I think one of the biggest mysteries (in my mind anyway) is how Mersenne found his original list of primes. Yes, some were shown to be incorrect and some he missed...but his finding as many as he did still seems amazing to me, and I don't believe this has ever been explained.
It seems to me like he had a private method for calculating primes (that was imperfect, sure, but also astounding for being developed in the 17th century).
To me it feels like when this is brought up, modern-day mathematicians are too quick to dismiss him based on the incorrect numbers and the missed primes, and to ignore the many correct ones and the question of _how_ he managed to find them. His method is what I would like to know.
> Yes, some were shown to be incorrect and some he missed
It's a small sample but it's worse than you make it sound, I think. He came up with four new numbers, given what was known at the time. Half were wrong and he missed three. Whatever method he'd come up with, it was pretty badly mistaken. It's curious, historically, but almost certainly mathematically uninteresting.
> Whatever method he'd come up with, it was pretty badly mistaken. It's curious, historically, but almost certainly mathematically uninteresting.
He did not just stumble onto two correct primes in the 17th century by chance. He had used a seemingly now-lost process that was semi-fruitful. This should be worthy of investigation, not dismissed as "mathematically uninteresting".
Mersenne somehow figured out M(127)...in the early 1600s, and no one to this day seems to know how. How is this not mathematically interesting?
> He did not just stumble onto two correct primes in the 17th century by chance.
M31 you can verify by hand, which really leaves us with one out of four. It's entirely possible he got one right by chance. It probably wasn't strictly by chance; he very likely convinced himself of some pattern that doesn't really hold. There are plenty of neat-looking lines in, say, an Ulam spiral, but those don't generate primes either.
> It's entirely possible he got one right by chance. It probably wasn't strictly by chance, very likely convinced himself of some pattern that doesn't really hold
This is exactly the attitude I alluded to in my earlier post. You think M(127) was found arbitrarily through chance or plucked from a likewise arbitrary pattern, which then allows you to completely write him off...as opposed to the possibility I'm asking about: that he actually did have an interesting method of analyzing primes which, while faulty, could nonetheless prove fruitful or even just interesting. It's this dismissiveness that I don't understand.
I guess I don't see the 'dismissiveness' bit. You have some hunch that there was something deeper to Mersenne's noodling. That's perfectly fine but I think both the sibling commenter and I have laid out a decent argument that, if you poke around the hunch a bit, there doesn't seem to be much there. The pattern, such as it is, is no good at identifying primes and even worse at picking out what we now call Mersenne primes. I'm not sure I follow what your argument is. It doesn't take much mathleticism to notice he picked four exponents close to doubling powers of two. The link in Wikipedia goes into a bit of detail:
Which seems plausible, or at least in the ballpark, and not terribly mysterious. He got a big number right. It happens.
That's not to say there aren't mathematical mysteries and curiosities of that nature. You have to look no further than Mersenne's contemporary and correspondent Pierre de Fermat for a solid one. He cranked out a few dozen theorems with next to nothing in the way of surviving proof. From the famous one on down, he got just about all of them right. People have been wondering aloud how exactly he pulled that off since at least Gauss's time. Nearly as famously, he got one wrong:
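Fₙ = 2^(2ⁿ) + 1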
These look, in form, reminiscent of Mersenne's numbers and the first few of them are also prime. F5 is not. In a fun connection, Euler, the same dude who showed M31 is prime showed F5 is not. You can almost imagine Mersenne and Fermat shaking their fists and yelling 'Euleeeeeeer!' Seinfeld-like, from their graves.
Knowing it's just 31 and 127 that were really new AND correct, does it still seem like Mersenne truly "somehow figured out M(127)"? To me it seems like he either applied an erroneous method or, more likely in my opinion, just did erroneous naive checking of these particular values until he felt confident enough to list them rather than rigorously prove them. It sticks out to us because M(127) is the highest of these that we retrospectively know to be valid.
> To me it seem like he either applied an erroneous method or, more likely in my opinion, just had erroneous naive checking
Of course the method was erroneous, that's not the point I am (apparently rather poorly) trying to get across. My point is that the method seems to have been lost to time, and that it is a shame, because while faulty for finding primes it may prove fruitful elsewhere, or may simply be interesting. And if he had a method of checking for primes in the 17th century, this would also be noteworthy and interesting, though it would raise the question of why he presumably stopped before reaching a thousand.
My base observation is relatively straightforward and I honestly don't understand the dismissiveness it is met with:
Sometime in the early 1600s, Mersenne found M(127). The 'how' is interesting to me because I don't find it likely that this was discovered by random luck, but rather predicated on a novel method he had developed which, while clearly very flawed, would nonetheless be curious to see. Our math and science textbooks are filled with various flawed methods; why then is this one of seemingly no concern?
Mersenne didn't "find" M(127) by using a method, he noted 2^n-1 was commonly prime for the first few well known values and wasn't able to immediately exclude M(127). That's the difference, not discovery by proof with some method rather lack of exclusion by method. Were the exclusions more reliable it'd be interesting but neither being able to exclude or include accurately is not.
The exact method doesn't matter, because even poorly performed naive division checking against the first couple thousand primes gets you a more accurate list of primes, so whatever he was doing isn't particularly interesting (and could have been as simple as something like that; we just can't say for certain how he messed up without a time machine).
I think computer benchmarks should compute something useful, like Mersenne prime verification. Millions of benchmarks are run every day, mostly on newer devices with decent computing power. This amounts to a large number of currently unused cycles.
Maybe it would be possible to create a blockchain technology in which the participants calculate something scientifically meaningful.
> all tape measures must be the exact same length.
Tape measures are useful even if they're not the same length.
Higher-end engineering tapes acknowledge that length isn't constant, and provide temperature coefficients, ranges of accuracy, and tensioning guidelines.
You can spend a lot of money on a tape measure which is more exact under more conditions. Most people don't need this.
Maybe it would be possible to design the scientific computational tasks so that they are very similar in terms of stress for the CPU, but not completely even. You could then state that the benchmark offers comparable results with a statistical error of something like 2% ± 0.5%. You could also average out the error by running the benchmark multiple times before you compare the score with other devices.
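A minimal sketch of that averaging idea in Python (the run count and the ops/sec scoring format are placeholders for illustration, not any real benchmark's method):

```python
import statistics
import time

def run_benchmark(task, runs=10):
    """Time `task` (any zero-argument callable standing in for the
    workload) several times and report the mean score with its spread,
    so scores stay comparable despite input-dependent variation."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        times.append(time.perf_counter() - start)
    mean = statistics.mean(times)
    spread = statistics.stdev(times) / mean * 100
    print(f"score: {1 / mean:.2f} runs/sec (spread ±{spread:.1f}%)")

# Example: a toy workload whose cost varies slightly from run to run.
run_benchmark(lambda: sum(i * i for i in range(1_000_000)))
```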
Is Mersenne prime verification really scientifically useful? At this point, getting one more number verified seems about as useful as stamp collecting.
Mersenne prime verification is useful for the science field of mathematics, where the results can be used to (in)validate theories and predictions about (Mersenne) prime numbers. Sure, it is a niche with little contact area with the rest of the world, but so were elliptic curves (now used in cryptography), set theory, and relational algebra (both heavily used in relational DB technology).
I don't see why we should be limited to Mersenne prime verification; there are many other scientific computational projects in various fields that could benefit from it: astrophysics, molecular biology, genetics, chemistry, climate study, cancer research, ...
You could run the exact same computation and still do something useful. The inputs do not have to be identical, but they can be coordinated (or random). Think of running a brute force search on some useful problem.
Inputs definitely need to be identical because different inputs may lead to different behaviours in branch predictors and memory access patterns, affecting the score.
The impact may be small, but I see no reason why the impact should be there in the first place just to satisfy some mathematicians' curiosity about special numbers.
Sorry, I meant running different inputs on a program that does not branch or have an input-dependent memory access pattern. Any computation can be written in such a form, although there might be a large overhead compared to programs that do branch.
There are certain problems for which different inputs do not require different amounts of computation. "Add one to a number between 128 and 256" is an obvious example. The question is whether there are useful problems with that property.
Even such a task is subject to very specific requirements, because adding a number between 128 and 256 may be enough to take another path in the microcode depending on whether the result overflows or not. I'm not saying this happens in practice, but I wouldn't be surprised if a future generation of processors did this and made the entire benchmark invalid from that point on.
A more likely scenario for such independent instructions would operate on an entire bit string, like boolean operators and vector instructions. I think you'd have a tough time producing any useful output from such an algorithm, though, because you wouldn't be able to do much with conditionals to keep the branch predictor score fair.
I don't think that there are any algorithms that could operate within a generic benchmark that could have random elements in them _and_ produce a useful result. Either the calculations are different and fair but meaningless, or they're exactly the same with the same result.
> I don't think that there are any algorithms that could operate within a generic benchmark that could have random elements in them _and_ produce a useful result.
One that springs to mind is Monte-Carlo sampled raytracing. An individual ray might take more or less time to compute, but the time to compute 10 million rays will statistically be roughly constant. You could even imagine averaging a bunch of renders of the same scene from different machines to get a lower-noise result, thereby demonstrating a benchmark combined with useful work.
Statistical predictability is the key.
(Confession - this isn't exactly theoretical. I sometimes have occasion to render light fields, and I shard the work over as many random workstations as I can get my hands on. It's always obvious which workstations are faster than others, even without making any special attempt to balance the workload. I think this is actually a workable concept.)
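A toy illustration of the idea (estimating pi rather than rendering; the names are made up for the example). Per-sample cost varies, but a fixed sample count makes total time statistically stable, and each worker's estimate is useful output that can be averaged with others':

```python
import random
import time

def monte_carlo_pi(samples, seed):
    """Stand-in for a Monte Carlo render: every worker runs the same
    number of samples with its own seed, so runtimes stay comparable
    across machines while the estimates can be averaged for less noise."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4 * hits / samples

start = time.perf_counter()
estimate = monte_carlo_pi(2_000_000, seed=1)  # each machine gets a distinct seed
elapsed = time.perf_counter() - start
print(f"pi ~ {estimate:.5f}, {2_000_000 / elapsed:,.0f} samples/sec")
```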
We already have an existence proof for useful tasks for which changing inputs do not invalidate benchmark results - namely cryptography, where algorithms must run in constant time to avoid timing side-channel attacks. If your hardware takes a non-constant amount of time to add 8-bit integers, your hardware is broken and should receive a benchmark score of zero. The interesting question is how much overhead would be involved in turning scientific tasks like the Mersenne prime search into benchmark-friendly workloads.
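The constant-time property in miniature, using Python's hmac.compare_digest (the naive version is the counterexample here, not something any real crypto library does):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as the first byte differs,
    # so runtime depends on the secret contents.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Runtime depends only on the lengths, not on where the bytes
    # differ - exactly the input-independence a benchmark would want.
    return hmac.compare_digest(a, b)
```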
Wow. That makes it sound like even a perfectly deterministic calculation cannot be compared across machines. Some will be good at some, some good at others, and whether one machine is overall better than another depends on their intended uses.
Which now that I think about it, of course it would be like that. But still, what a headache for someone who just wants The Best One.
Even a simple addition of two integers can take a different amount of energy to perform depending on the values involved. This in turn affects the temperature of the chip, which can cause thermal throttling. Computer architecture is more complex than most software engineers realize.
This is a common misconception. In order for a Proof-of-Work algorithm to be useful, there must be an opportunity cost (penalty) to mining on the wrong chain. If the mining algorithm pays off anyway it cannot be used.
Unless the miners themselves benefit from the work, there would still be an opportunity cost to them, no? If it happens to be useful to someone else that shouldn't matter, right?
If it is useful enough that someone is willing to pay for it as a service, then it fundamentally changes the incentive model and voids the security it provides.
Whether or not there's an opportunity cost does matter, but why should the mining algorithm "paying off anyway" mean it can't be used? If the mining algorithm has an independent payoff whether or not you mine on the main chain, then there's still the same opportunity cost of the mining rewards.
With a PoW that has "external rewards", the cost of attacking the chain falls in proportion to the external reward. You are correct that it does not fall to zero.
Patents allow mathematical research to be excludable and thus not a public good.
Now one can argue that mathematical patents provide more costs than benefits, which is a line of debate I am certainly open to. End that, and math research would be a public good.
But in the system we have now, a huge amount of mathematical research has generated a significant amount of money for very many people, from the RSA algorithm to Penrose patenting his tilings (and getting money from toilet paper makers that used his aperiodic tilings to make their TP more fluffy [1]).
What I think you mean to say is that the market consistently undervalues mathematical research. Unfortunately it's not so easy to actually demonstrate, because given the huge amount of math research produced, only a very small amount of it ends up being useful in the future, and often the original discoverer is long dead. This is similar to the situation with other arts and sciences. A small number of people drive the field forward and the vast majority of scientific publications are basically filler. Certainly the world would be better with more effective funding mechanisms than we have on offer, which is basically a game of capture-the-grant.
The penalty is still there: you pay the electric bill and it consumes time, but you would donate some cycles for useful scientific computational tasks without any benefit for yourself. If such an algorithm exists, then I don't see why it couldn't be used as a Proof-of-Work algorithm.
PoW works because showing a correctly mined block, i.e. one with a nonce that satisfies the many-initial-zeroes hash, proves that you have on average (you can be short-term lucky) used a certain amount of computing power.
It proves it because with the selected algorithm, there's no known shortcut to just doing a load of hashes, so if you found one it means you did a lot of hashing.
What problem would you insert instead that had the same properties?
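For concreteness, a minimal hashcash-style loop (SHA-256 and the 8-byte nonce encoding are arbitrary choices for the sketch):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(block_data || nonce) has
    `difficulty_bits` leading zero bits. With no known shortcut,
    a valid nonce is evidence of ~2**difficulty_bits hash
    evaluations on average."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # cheap for anyone to verify, costly to find
        nonce += 1

print(mine(b"example block header", 20))  # ~1M hashes expected
```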
We should also not forget that it gets harder and harder for the same benefit. Any amount of energy can be poured into a PoW system. Invent fusion, use it all in crypto.
"Rent computational resource" could work with a verification of resource(s) step at the end of a set duration (block) and all of the computations performed using fully homomorphic encryption. If after the beginning of a block a random start time up to some max (~10 min) was selected for which the proving nodes would perform an operation to verify the resources were available and calculate the block hash instead of continuously burning, then at that starting time and for a set proving duration the verification algo would run and at the end the block would be distributed. Given the FHE was either running a paying program utilizing rented resources or the verification algo continuously, then it should not be possible to tell when the verification algo starts and stops. The non-FHE "gas" alternative that incentivizes performing program results to include in the hash has to have enough of an expected value that it would be prioritized over using the time to further calculate a minimal hash and the associated expected value for finding it without those gas incentivized transactions. The question is the FHE option more efficient than gas?
The PoW algo could be Hashcash, which has not been "exhausted"? Not sure what "usage varied" means? To further elaborate on the proposal: from the block start time to verification start time x, the miner would run, say, a Mersenne solver; then from x to block end, the miner would run the verification algo, i.e. Hashcash. The Mersenne solver would have to run with less than or equal to the resource usage of the PoW algo, and if it couldn't match it, the PoW could also, for example, run in the background.

The computation is encrypted and the network traffic is encrypted, so there is no way to determine that the verification algo's time allocation is being "varied". Gas could still be used to prioritize work, but the idea is that most of the time the PoW algo is not running and some other, more economic algo is. The PoW is being used for Proof of Capacity, unlike the claimed Chia coin PoC, which continuously uses the capacity in a PoW manner rather than proving it and then offering or allowing it to be used for some other purpose in the interim.

If the issue is the desire for the economic algo to run continuously, then the nodes could be split into two or more groups. For two groups, one would run from time 0 to x and the other from 0 to y, with x < y, the y group including the block from the x group in the block it produces. The x group would continue from x + verify duration to x₂ + verify duration, including the y block in the x₂ block output, and so forth.
Sorry, I thought you meant the machines would be doing "real" work on behalf of some person renting it, rather than making Mersenne primes. If you have a fixed problem like that then of course it won't run out.
The problem then is more economics, who has an incentive to discover more Mersenne primes?
PoW is a closed feedback loop system. Doing PoW yields you a reward, that reward pays for energy, that energy runs your PoW. If PoW yields you anything else in addition to the normal reward, then for you as a miner the only rational thing is to run more PoW with it to get more rewards. That in turn will simply bump the difficulty such that any additional benefit yielded is matched by the additional energy expenditure required to get the reward.
One of the first I heard of was Primecoin [0], which still seems to be going along steadily. They've actually uncovered a number of world records [1] looking for Cunningham Chains.
These are not unused cycles, these are cycles used to benchmark devices. If you benchmark with Mersenne prime verification, you'll know how fast Mersenne prime verification is and that's it. Most people don't use their devices for Mersenne prime verification.
In case it’s of interest - Gapcoin’s [1] Proof of Work function arguably does useful work by searching for large prime gaps of record merit ... “As of December 2017, the largest known merit value and first with merit over 40, as discovered by the Gapcoin network, is 41.93878373 with the 87-digit prime 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227. The prime gap between it and the next prime is 8350” [2]
[1] https://gapcoin-project.github.io/
[2] https://en.wikipedia.org/wiki/Prime_gap
Benchmarking really does need repeatable inputs and outputs. Otherwise you can't make a meaningful comparison between the systems that ran the test.
You can use these distributed computing projects for burn-in testing, though. When you just want to run your computer hard to make sure it can manage N hours at 100%, and don't necessarily need to know how it compares to others, distributed computing can work (although there's some risk of wrong results, and therefore more work for the organizers, if you're testing your system in extreme conditions; those sometimes lead to miscalculations).
There has been a lot of work on "proof of useful work" coins. https://gridcoin.us/ is the popular one. The benchmark idea doesn't work for the reasons other commenters raise.
This is a verification; the next likely Mersenne prime, M(74207281), has already been discovered. Verification just involves eliminating all lower possibilities.
rank  prime             digits      discovered
48    2^57,885,161 − 1  17,425,170  2013 Jan 25
49*   2^74,207,281 − 1  22,338,618  2016 Jan 07
50*   2^77,232,917 − 1  23,249,425  2017 Dec 26
51*   2^82,589,933 − 1  24,862,048  2018 Dec 07
So we've already found three larger ones, but they have yet to be verified as the 49th, 50th, and 51st consecutive Mersenne primes. Verification seems of more interest to those following closely; the discovery of another, bigger one is what draws the attention.
I imagine, one day when we finally break the factoring problem, we are going to have a bunch of these primes. This is one of the things I pray to see in my remaining lifetime.
It's very possible we never break the factoring problem. Since we have no properly unified theory of physics (e.g., one that models the observer in QFT), there are outstanding unknowns with quantum computing that may make it so that Shor's algorithm is not practically useful.
I ask this question with genuine curiosity - can someone help me understand why this is important? I have the same question about a lot of esoteric fields of math - do we study them just out of curiosity, seeing how far we can follow a pattern? Or is there actual value to, for example, knowing what Mersenne primes are, etc.?
I guess my question is - is this applied in any way?
EDIT: I see from responses my comment created the impression that I conflate "purely intellectual" and "not valuable." That was not my intention, simply asking whether this has applications or not.
The only things "important" are having basic food to eat and a place to sleep. Everything else in life is "curiosity". It is incredibly annoying when the question of importance is brought up, typically when things like pure mathematics, or art, or humanities are being discussed. Why does no one ask, "can someone explain what's important about working for a FAANG company, what do they produce that's of any importance*.
The staying power of this project is very impressive, and even with the long intervals between finds (years), that doesn't seem to deter people from dedicating CPU power to the project.
Can you explain what makes finding Mersenne primes so CPU intensive?
Mₙ = 2ⁿ − 1 gives an easy-to-generate candidate set, and testing primality of a single number isn't that expensive. The 57,885,161st candidate being the 48th hit is getting pretty sparse, I guess, but it isn't that crazy. What am I missing? It seems the compute would be rather small compared to much of the "big data" / NN training done every day.
Because they are so big. Since 2¹⁰ > 10³, the decimal expansion of 2^57,885,161 − 1 has over 3 × 5,788,516 digits, so over 17 million.
You do not check whether that is a prime by trial division (https://en.wikipedia.org/wiki/Trial_division). If you could do a division every nanosecond and had a billion devices, that would still take way, way more time than till the heat death of the universe to do.
Because of the numbers 'in between': once you start testing that many numbers in such an enormous range, it eats up CPU power. There is some very beautiful math trickery which uses an FFT (to square the enormous numbers involved) to help determine whether a number is prime or not, and even though that's a massive optimization, the number of remaining cycles required is still enormous.
The algorithm for testing a Mersenne number for primality has the same running time as performing a pseudoprime test, so there's no point using such probabilistic tests. It is however useful to perform "trial division" to exclude small factors.
If you scroll to the very bottom there's some stuff on recent use of probable prime proofs, as well as a fairly detailed explanation of how the whole thing operates.
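For the curious, the specialized test being discussed is the Lucas–Lehmer test. A minimal sketch in Python (GIMPS's real implementation does the squarings with FFT-based multiplication, which this naive version omits):

```python
def is_prime(n):
    """Naive trial division; fine for small exponents."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p - 2 iterations of s -> s*s - 2 (mod M_p),
    starting from s = 4."""
    if p == 2:
        return True  # M_2 = 3, handled as a special case
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # the step GIMPS accelerates with FFTs
    return s == 0

# Exponents of the Mersenne primes up to M(127), the one Mersenne got right:
print([p for p in range(2, 128) if is_prime(p) and lucas_lehmer(p)])
# -> [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```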
That could really be a great thing! The problem is that Mersenne primes are "sparse", i.e. there aren't many of them. Finding one could be like minting a new block, but it would be so rare that it wouldn't be practical. For crypto, you would need something with many solutions, but then it's probably not so useful to find yet another solution...
You also, generally speaking, need something that is much easier to verify than it is to generate. I'm not saying it's impossible, but I don't think that "verifying" a protein folding in the "cryptocurrency" sense is a meaningful operation in the "useful" way you're hoping it to be.
What always fascinated me about this sequence was the huge (multiplicative) gap between the exponents: 2^127 - 1 and 2^521 - 1.
If that ratio (521/127 ≈ 4.1) had held again after 2^20,996,011 − 1, the next one would be around 2^86,133,241 − 1, which is bigger than the (provisional) 51st Mersenne prime, so we'd be stuck at "40th verified Mersenne prime, found in 2003".
Which really makes you wonder about the density of these primes in exponent terms ...
"Helping humanity" is notoriously hard to measure, not least because people who are going to live in the 25th century are also part of "humanity" and yet we do not know how their world is going to look like.
Mathematics in general is a bunch of stuff that often looks useless to laypeople or even professionals and may look so for 200 years until it very suddenly becomes relevant and crucial.
The computer that you used to post your comment relies on a lot of mathematical machinery developed in previous centuries. It is unlikely that Gauss or Euler could have predicted that their work (ink on paper) would one day be used to facilitate completely different modes of communication.
And it is well possible that some priest criticized them for wasting their talents on abstract things that don't help humanity as of 1800.
Apples and oranges. Naming more mersenne primes does not constitute discovering new mathematics. It's really very hard to argue that this is a good use of computing power.
Sure, pure mathematical theory has a tendency to become useful later, but finding specific Mersenne primes is not pure math.
It’s not known if there is an infinite number of Mersenne primes (though it is strongly suspected) - so an interesting mathematical object to discover would be ‘the largest Mersenne prime’ - but this process isn’t going to find that (it might find a candidate but it won’t find a proof). If there are indeed infinite Mersennes then this search is just guaranteed to keep finding more forever.
It’s like looking for green pebbles on a beach. It’s nice when you find one to add to your collection, but it’s not that surprising, and if you keep looking you’ll probably find more. You’re not advancing geological knowledge if you keep looking and finding more.
Yes but if you want to say if there are an infinite number of green pebbles on an infinitely large beach then the problem you need to solve is finding the distribution of the green pebbles on the beach. It's only possible to do that once you've found a few green pebbles and gets much easier if you've found many.
We don't know whether the search for further Mersenne primes eventually gives birth to a new algorithm which will enable further advances in other fields.
Cranking on the current best known algorithm for testing for Mersenne primes does nothing to advance the state of the art for other algorithms, though.
It’s like saying, we have a project that keeps sorting larger and larger sets using quicksort because we believe it will help advance the state of sorting algorithms. ‘We just sorted the largest ever set!’
I'm all for putting work to find things out and I agree with you, but for the sake of argument:
Say these are really relevant 200 years down the line, maybe even 100 or 50, does it really make sense to calculate them now, given that computers 50, 100 and 200 years from now will outperform ours by orders of magnitude?
Say you grab a CPU from the end of the '70s and run it from then until now: that'd be about 1.3 × 10¹⁶ operations, or about a month of processing on a modern 5 GHz CPU. That's 40 years to a month.
Mathematics tends to be upstream of technology. I would expect that improvements in mathematical knowledge in general will help us construct even better computers in the future. We are slowly approaching some physical limits, but algorithmic improvements may get us further speed improvements.
IDK about this particular scientific team, but if you have to run a long-running task, you will be a lot more careful about complexity and speed of your data and algorithms instead of relying on brute force. Maybe you will actually discover some real improvements of existing knowledge.
I can even see the effect throughout my IT life outside science. People who cut their teeth on processors with limited power and not enough RAM tend to produce much more efficient solutions even when programming modern computers. They just don't take the gigabytes and gigahertz for granted.
Agreed with all of this, well put. I’d like to add though that we don’t do math just because we anticipate there will be some practical utility in a different field down the line. Mathematics isn’t merely a prerequisite for engineering. Does it not suffice that we study math for the sake of it?
The Mersenne primes for example are a delightful little mathematical object. IMO they are immediately curious and interesting. I am very grateful that we continue to make progress in understanding them.
I once had a math PhD friend tell me that if they ever became aware of any practical applications to their research, it would probably lose all interest to them. I’ve always found that to be curious and a good reminder that others can have radically different beliefs to my own in unexpected ways.
The best example I can think of is GH Hardy's work on number theory. He was a pacifist and so specifically wanted to work on something that would not help the war effort. Fast forward 100 years and number theory now forms the basis of modern cryptography - securing everything from your banking details to military communications.
We often don't know what the applications are at the time, but that pure mathematics work very often forms the foundation of something important later on.
> Could someone tell me, as a layperson, how finding these numbers is helping humanity?
When people give such an argument, they usually mean "I am not interested in this topic" and use "why is this helping humanity" as an excuse for their ignorance.
I’d say it’s worse, they’re usually trying to imply that the people engaged in the activity aren’t sufficiently allied with their pet political or moral causes. If you engage with their tweet-sized sniping the conversation inevitably leads to “why aren’t they spending their time and resources addressing climate change/wealth inequality/poverty/etc.”
Such arrogance. It's funny that you mention climate change because this project is a non-negligible use of energy that is indefensible even in the context of advancing "pure mathematics." It's literally computation for the sake of computing more things that haven't been computed before. The only end result is that we can say we computed more things.
Thanks for the jab but you are wrong, this is not how I meant my question. I am genuinely curious, as someone who has benchmarked their computer with prime95 and is not particularly mathematically inclined.
"When people say that other people usually mean X when they ask something, they are just ascribing that motive to another person, even if it is not what the asker has meant."
In a different eon that might have been true, but nowadays compute power means electricity, which means global warming. Given that, "frivolous" work that just burns energy could potentially be postponed until we go fully renewable in our energy stack.
There seem to be many frivolous uses of power to complain about when you get to this level of power use.
GIMPS' ~1 PetaFLOPS [1] uses no more than a megawatt [2].
That's about 10 Gigawatt-hours (GWh) over a year [3].
The average US house uses about 10 MWh in a year [4] so GIMPS = the electrical power use of ~1,000 households. Portland reduced their power consumption by 20 GWh by switching their street lights to LEDs[5].
The electrical consumption for Superbowl parties is an estimated 75 GWh [6]. (I think Superbowl parties are far more frivolous.)
I see this question getting downvoted, and some of the responses are strangely hostile strawman attacks based on what those posters are guessing this person "really" means. I think this is a perfectly reasonable question.
A few years ago, I got a PhD in pure mathematics (low dimensional geometry and topology). As someone who loves pure math and spent many years devoted to it, I've had many conversations with mathematicians about whether we should be concerned with practical applications. One common argument, which several people have made here, is that there's been a lot of math that found practical application long after it was done. That really only applies to a very small proportion of math. The vast majority of math done has still never found application. So arguing that we should do that math because it may have applications later is kind of like arguing you should play the lottery since there's a greater than 0 chance that you'll win. Also, from the (admittedly biased) group of mathematicians I've spoken to about this, very few seemed to actually do math in the hope that it would be useful later.
Instead, people do math because it's interesting, beautiful, and challenging. I would argue this Mersenne Prime search is almost more like recreation. I could be wrong, since I never did much number theory, but I don't think this is advancing any research. It's just a fun hobby for people who enjoy math. I think doing math for the sake of its beauty is generally fine. I do have concerns about the climate aspect of this particular project though. This is a lot of power being consumed just for recreation. This project may not be big enough to have much of an environmental impact, but in general, I think we should be more mindful of not wasting large amounts of computing power just for fun.
I don't care about upvotes/downvotes but it was interesting that I got upvoted at the beginning and got a few useful responses, and then at some point a user above made an ad hominem attack and then the mood changed after my motivations were called into question. Suffice it to say, my question came from a place of curiosity not judgment.
RSA cryptography is based on the assumption that factoring numbers is a hard problem. Isn't it kind of nice to know that there are curious people trying their best to see how fast you can actually get?
Primality tests for Mersenne numbers are highly specialized. It's possible that they could advance factorization of general numbers, but extremely unlikely.