there is actually something rather special about the questions mathematicians focus on, but the jargon and technical detail get in the way. ultimately, the reason prime numbers are so fascinating is that they turn up in weird places, with strange properties, and connect wildly different branches of mathematics together.
the Riemann Hypothesis is an example of this. it starts from a pair of observations:
1. the sum of the logarithms of the prime numbers less than x approximates the line y = x to absurd precision. there's no obvious reason why this should be true, but the result just kind of falls out if you attempt to estimate the sum, even in really crude ways (a quick numeric check follows this list).
2. the sum of the reciprocals of the squares was calculated by Euler centuries ago. the way he solved the problem was to connect two different ways of writing sin(x) -- the Taylor series and a new idea he developed, which you may have tried in a calculus class: he wrote down sin(x) as an infinite product over its roots. comparing the infinite sum with the infinite product gives the exact value pi^2/6. a related result, which falls out of the unique factorization of the integers, is that you can factor sums like sum(1/n^2) over all the positive integers into a product over the reciprocals of the primes (the second sketch below checks both identities numerically).
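here's a quick numeric sanity check of observation 1 (my sketch, not from the original comment; the sieve helper is just for illustration):

    # Chebyshev's theta function: theta(x) = sum of log(p) over primes p <= x.
    # the claim is that theta(x) tracks the line y = x.
    from math import log

    def primes_upto(n):
        # simple sieve of Eratosthenes
        sieve = bytearray([1]) * (n + 1)
        sieve[0] = sieve[1] = 0
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
        return [i for i in range(2, n + 1) if sieve[i]]

    for x in (10**3, 10**4, 10**5, 10**6):
        theta = sum(log(p) for p in primes_upto(x))
        print(x, round(theta, 1), round(theta / x, 4))
    # the ratio theta(x)/x creeps toward 1 as x grows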
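and a check of observation 2 (again just a sketch, using a tiny trial-division prime generator to keep it self-contained):

    # two identities: sum over n of 1/n^2 equals pi^2/6, and it also equals
    # the Euler product over primes p of 1/(1 - 1/p^2).
    from math import pi

    def small_primes(limit):
        ps = []
        for k in range(2, limit):
            if all(k % p for p in ps if p * p <= k):
                ps.append(k)
        return ps

    partial_sum = sum(1.0 / n**2 for n in range(1, 10**6 + 1))

    euler_product = 1.0
    for p in small_primes(10**4):
        euler_product *= 1.0 / (1.0 - 1.0 / p**2)

    print(partial_sum, euler_product, pi**2 / 6)  # agree to ~5 decimals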
this second fact leads very directly to a proof of the first through some clever algebra, but it's far more general. in fact, we not only get an exact value for sum(1/n^2), but exact values for all sums of the form sum(1/n^s) where s is a positive even integer -- the odd powers remain an unsolved problem to this very day. these sums are hiding in the Taylor expansion of sin(x). so somehow, we've taken a seemingly arbitrary question about how to sum a certain series over the integers, connected it to a product over the primes, and tied both to an analytic function -- sin(x). already bizarre.
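the closed forms for the first few even powers, checked numerically (my sketch):

    # zeta(2) = pi^2/6, zeta(4) = pi^4/90, zeta(6) = pi^6/945
    from math import pi

    for s, closed_form in ((2, pi**2 / 6), (4, pi**4 / 90), (6, pi**6 / 945)):
        partial = sum(1.0 / n**s for n in range(1, 10**6 + 1))
        print(s, partial, closed_form)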
but it gets even stranger. if you let s vary as a complex number, and not just as a real number > 1, the function defined by the sum extends uniquely -- by analytic continuation -- to almost the whole complex plane, including values of s where the sum itself diverges. remember, we're nominally calculating sum(1/n^s) -- how can we get any result but infinity for sum(1/n^-2), i.e. sum(n^2)? more perplexing still, the value you can calculate from algebra and calculus over the complex numbers, following Euler's method and Riemann's extension of it, is 0 for every negative even power. this is also where the notion that the sum of the integers is -1/12 comes from: the continued function takes the value -1/12 at s = -1. this is the Riemann Zeta function.
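you can see the continued values with mpmath (assuming it's installed; a quick check, not part of the original argument):

    # the continuation vanishes at the negative even integers (the "trivial
    # zeros") and takes the value -1/12 at s = -1
    from mpmath import zeta

    print(zeta(-2), zeta(-4), zeta(-6))  # 0.0 0.0 0.0
    print(zeta(-1))                      # -0.0833333... = -1/12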
so now, we've connected a seemingly arbitrary sum over the integers to an analytic function on the complex plane via the prime numbers. and worse still, if we connect this back to the first point, we discover that this Zeta function tells us the order of the error term on the estimate for sum(log p, p < x). remember, this function hugs the line y = x. if every nontrivial zero of the Zeta function lies on the line Re(s) = 1/2 -- which is exactly what the Riemann Hypothesis asserts -- the error term is on the order of x^(1/2), up to log factors, and we can even calculate the constant factors. better, the zeros give an exact formula for that sum, which pins down all the prime numbers: it lets you calculate them not as a recursive function, but directly. all because some asshole in the 1600s posed a problem no one could answer until Euler: what's sum(1/n^2)?
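for the curious, here's the usual form of that exact formula -- von Mangoldt's explicit formula for the closely related function psi(x), the sum of log p over prime powers p^k <= x (a standard statement, not something spelled out above):

    \psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \log(2\pi) - \frac{1}{2}\log\left(1 - x^{-2}\right)

where rho runs over the nontrivial zeros of the Zeta function. if every rho sits on the critical line, each contributes a term of size x^{1/2}/|rho|, which is where the x^(1/2) error bound comes from.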
so this is why prime numbers fascinate mathematicians. nothing we understand about them is obvious. but being able to answer seemingly arbitrary questions about them unlocks whole fields of mathematics. and new techniques in completely unrelated fields of mathematics suddenly answer deep questions about the structure of the natural numbers.
if this topic interests you, I highly recommend this series of videos from zetamath: https://www.youtube.com/watch?v=oVaSA_b938U&list=PLbaA3qJlbE... -- he walks you through the basic algebra and calculus that lead to all of these conclusions. I can't recommend this channel enough.
most of the optimizations discussed are actually kind of obvious if you know how chess notation works in the first place. the reason the starting move "Nf3" makes any sense is that only one of the four knights on the board can possibly move to f3 on the first move. what the OP is doing is taking that kind of information density and representing it explicitly: first by assigning bits to each part of a possible move, then by paring the encoding down wherever the set of legal moves excludes entire categories of possibilities. there's another article that goes much further into how much optimization is possible here: https://lichess.org/@/lichess/blog/developer-update-275-impr...
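the core idea in miniature (a sketch assuming the python-chess package; the real encodings in these articles are more sophisticated):

    # naive encoding: 6 bits for the from-square + 6 for the to-square = 12 bits.
    # context-aware encoding: just store the move's index in the sorted list of
    # moves that are actually legal in the current position.
    import math
    import chess

    board = chess.Board()  # standard starting position
    legal = sorted(m.uci() for m in board.legal_moves)

    bits_needed = math.ceil(math.log2(len(legal)))
    print(len(legal), bits_needed)  # 20 legal first moves -> 5 bits, not 12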
or he just peaked early. there's no way to know whether he would have had any more insights, and no way of knowing that he wouldn't. he didn't, and that's all we have.
Einstein gave us GR, yet lived a long life and is still best known for work he did in his much younger days.
While I agree with the Einstein argument, odds are that a talented young scientist, given enough years to live, has a good chance of finding more than his early-career findings. At least his odds are better than if he dies young...
it also doesn't help that the fix for the increased difficulty devs face in implementing AsyncIterator (assuming the approach advocated in the OP winds up being the one the language team selects) relies on the as-yet (?) unstabilized generators/async generators feature. I'm not really sure why it's not available yet, since the necessary compiler features have been in place for years, but because it's not, this is kind of a hard pill to swallow.
you've misunderstood my point -- I'm saying manufacturer claims need to be verified. 10x doesn't mean all that much to begin with and, moreover, fan static pressure claims from manufacturers get verified by virtually no one. a shady manufacturer could straight up make up these numbers and no one would be the wiser because of that lack of independent verification. it's like if a CPU GHz figure was taken at face value, without testing.
The Linus Tech Tips video says that it's a commercial product for use in factories where dust and other airborne material mean they can't use fans for cooling (apparently, similar previous products from this company used passive cooling).
However, the marketing images on the product page don't match this; there are definitely a lot of images of use at home. I'm not sure whether it's a consumer product or not.
For small companies/startups selling a niche product, going direct to consumers is rarely worth the time. You have a small team that should be putting its effort into selling/packaging thousands of units with each call, not two. The company lights only stay on if the collective sales goals are met. And shipping to Joe Shmoe can easily cost money. This is why places like Adafruit exist: direct-to-consumer is a huge pain.
> Further analysis with mass spectrometry also found serotonin, a much more strictly controlled substance, in 26 percent of the tested supplements. According to the authors, the presence of unlabeled but significant quantities of serotonin could lead to serious side effects.
wait, you can buy OTC serotonin? that seems wild given the serotonin syndrome risks.
edit: it looks like it's indirectly via 5-HTP [1].
except no such neural structure has ever been found. humans have been using tools for longer than we've been human -- without solid evidence, grounded in real neuroscience, that the tool is interpreted as a social actor, this kind of claim rooted in an evolutionary argument is pseudoscience. people have been making arguments from evolution to justify all kinds of nonsense since Darwin (like racial hierarchies). which neural structure is posited to cause us to humanize our tools?
if anything, the historical evidence points in the opposite direction -- that people objectify far more than they humanize, even when the cost is measured in hundreds, thousands, or millions of lives. that's merely an observation, not a hypothesis or a claim about what people will do or about what they are capable of doing. we ought to humanize more often.
"We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists. Our understanding of the brain still has massive gaps in it, and I can testify from my own experience working for a psychology & neuroscience department (which includes one person particularly specializing in perception, from the very basic "light hitting the optic nerve" stage all the way to object categorization and recognition) that we still have a lot to learn in this area specifically.
It may very well be that there isn't a brain structure dedicated to this, and that would be fascinating, too! But to denigrate the people doing their best to understand this stuff 30 years ago as "pseudoscientific" just because they made an assumption about how plastic the brain was without our benefit of 20/20 hindsight is very much uncalled-for.
> "We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists.
proving a negative is, famously, quite hard. an unsolved problem, even. facial recognition is supported by a plethora of evidence beyond an argument from evolution. the notion that we humanize tools is one that, as yet, lacks that evidence. I urge people to be more skeptical of arguments from evolution. we understand very little about our evolution, and it's easy to insert our own worldviews and beliefs into such arguments, allowing them to state virtually anything we like in a plausible envelope with the shape of a scientific argument. I'm not just calling the argument about humanizing tools pseudoscience -- I'm applying that label equally to every other argument from evolution that lacks other motivating evidence.
I understand your original point was about a neural structure involved in humanization, not facial recognition, but I'm responding to the point you let the interlocutor derail this to.
> > "We haven't yet found a specific neural structure for recognizing faces" is far from evidence that no such structure exists.
> proving a negative is, famously, quite hard.
Whether structure or not, we do have very strong evidence that a mechanism of facial recognition exists as there are people who lack this mechanism to various degrees.
> The brain has even evolved a dedicated area in the neural landscape, the fusiform face area or FFA (Kanwisher et al, 1997), to specialise in facial recognition. This is part of a complex visual system that can determine a surprising number of things about another person.
> Your brain is a structure for learning structures.
And it does so by having specialized structures.
> It doesn't need to have a built-in module for recognizing faces; it wires up a face-recognition system on the fly, from visual data.
Except it does appear to have such a special structure, the Fusiform Face Area. If it did not, people with prosopagnosia wouldn't just have problems with recognizing faces, but would have more general pattern recognition problems.
> They performed electroencephalography (EEG) in 15 healthy adults who were observing pictures of either a human or robotic hand in painful or non-painful situations, such as a finger being cut by a knife. Event-related brain potentials for empathy toward humanoid robots in perceived pain were similar to those for empathy toward humans in pain. However, the beginning of the top-down process of empathy was weaker in empathy toward robots than toward humans.
So basically it seems we potentiate empathy toward similar kinds of beings and then maybe pattern-recognize that they are not similar to clamp down on the potentiated empathetic response?
I live in a space where I tend to talk with people about research every day or two (both academics and regular citizens). In case you value communication, know that you come across as unnecessarily confident, to the point of not seeming interesting to engage with. Take that as you'd like, coming from someone who learns from and shares with others regularly IRL.
In case you're curious why: "pseudoscience" has a real meaning (it's not just a punchy, authoritative word to toss around to shut down discussion), and your mention of social Darwinism comes across as a weirdly aggressive conversational closer.
> humans have been using tools for longer than we've been human
It depends on how you define 'human'. Our line split from chimpanzees ~7 million years ago (mya); we walked upright by ~6 mya. Tool use began ~2.58 mya (possibly 3.3 mya, depending on some uncertain evidence).
I mean, lots of animals use tools, including chimps. those tools aren't nearly as sophisticated, so it depends on how you define tool use, but the point still stands. this is all beside the point, though.
> How do we know it's automatic without knowing the quality of each request?
what purpose does the filter serve if it allows virtually all requests through? your response amounts to "trust the FBI, they don't need oversight".
> I think the FBI understands the process and doesn't send flimsy requests in the first place.
I would love to see your evidence for this.
> If it's a rubber stamp, why are any rejected?
because those requests were egregiously bad?
warrant approval is bad across the board, even when the requests are public[1]. what reason do we have to assume they're better when they're sealed? "the FBI is self-regulating" doesn't pass muster when we can see the warrant requests they put forward and get approved when the details are public.