My Grandfather Thought He Solved a Cosmic Mystery (theatlantic.com)
204 points by gone35 on Nov 1, 2018 | 106 comments



I certainly understand the harsh reality of seeing a glimpse of a solution and finding no logical path there.

For me, it's factoring large numbers in short times. I've spent 18 years, on and off, on the problem. Most of that time was spent coming up with novel ideas and then arriving at the same conclusions as everybody else.

I'm too obstinate and "stupid" to know that the problem is practically unsolvable - but fortunately I'm not obsessive enough to throw my life into it. It's basically a hobby I do during my downtime.

I like to call it mathematical finger painting, or nature walks through obscure numeric systems. Studying topics, geometries, ancient multiplication methods for fun, and generally scratching under the surface of "why" without knowing how.


I registered an account just to reply to your comment.

After reading the article in question, I was thinking about my own experience. I too have been working on prime factorization for the past 6 years in the same fashion that you have, and what you just wrote here describes perfectly how I feel about my progress and journey. Which makes me wonder... how many people out there are like us, working on the same problem while feeling the same way?

It's like the problem is laughing at you. Baiting you back in with its extreme simplicity. It truly is a mesmerizing problem that you can't help but feel is solvable if only one single Eureka moment is reached. Boy do I want this one solved...


Yeah, I’m not even close!! More like the movie Groundhog Day.

So far the journey has been really rewarding though. I’ve recently been focusing my efforts on remainders and decision paths.

The most invaluable lesson learned has been an understanding of the complexity of simplicity.

I’ve written some insanely complicated, convoluted and frankly weird ass code throughout all this.

For example: bignum math libraries, modified Euclidean geometry, imaginary-number libs, fractal factoring, Russian peasant multiplication, 3D volumetric fluids to triangle-area fitting, multiple geometric shape-fitting libraries, Galois fields, and lots and lots of weird what-if code that popped up in my head.

I’d always been good at algorithms, but straightforward head-first attacks on the problem always end in failure.

Now I’ve learned to build analysis, statistics and proofs by necessity. It’s completely changed my mindset on design. (Obviously I studied far more comp-sci than mathematics.)

Other than sieving and brute force, I think the only other general solution would be a sort of ‘halving’ problem.

The relation between factors and q is essentially a sum or area. You can subdivide either into n buckets and remainders, building a tree of nodes iteratively. At many nodes I have found some interesting patterns: after some iterations I find nodes that sum up to a factor. So now my question is finding that path every time.

Anyway, best of luck on your journey and enjoy the ride! Maybe in another few years somebody will finally slay this beast.


Also not the same problem, but I've been doing the same with the Traveling Salesman Problem for about 8 years now. It's a love-hate play date every Sunday morning.


Not the same problem, but I have been on and off thinking about infinite/recursive compression and compression of random data. Finding patterns in numbers can be very compelling.


I have gone down such rabbit holes before. However, it's easy to demonstrate you can't compress arbitrary unbiased random data without going lossy.

For every possible input 0 to X you need to map to a different 'compressed' output. So, if you want to handle every possibility, then some of your compressed versions need to be as long as, if not longer than, the original version. That's not a problem if there is some bias in how you're generating the data: just map the more common versions to shorter encodings and you get useful compression; even if it sometimes makes things worse, on average there is an improvement. But with unbiased random data you can't choose which subset is more likely, because every version is equally likely. Which is why it does not work.
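
To make the pigeonhole step concrete, here's a quick counting sketch in Python:

    # There are 2**n distinct n-bit inputs, but only
    # 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 strictly shorter bit strings,
    # so any lossless scheme must send at least one n-bit input to an
    # output that is n bits or longer.
    n = 32
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))  # == 2**n - 1
    print(inputs > shorter_outputs)  # True, for every n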


To say it another way: certainly you can figure out a compression algorithm for any one set of data by being clever and taking your time, but that algorithm will not apply to the general case and would be worthless. If you go about finding patterns in a random string, it is unlikely that another random string will have those patterns.


Concur. Said yet another way: one data set's entropy is another data set's negentropy.


I like your attitude. It feels like the appropriate way to explore the mysteries of Nature. I've only spent maybe dozens of hours thinking about prime numbers myself over the years, and I just can't shake the feeling that the only way to do what you're trying is with fantastical computers containing practically infinite bits, running all the calculations in parallel. But that doesn't detract, I think, from the fun of trying to think of tricks to pass the time.


I'd say that even if your answer is completely wrong, at least you can document WHY it was wrong. And maybe someone can open up a new vein of inquiry that branches off yours.

It reminds me of a sci-fi story where people who decided that they were tired of being emulated retired to a virtual mine where they tried to discover new mathematical proofs.

They didn't have to do it, since it served no real purpose. But it opened up new avenues of knowledge.


>>It reminds me of a sci-fi story

Could you mention the name?


Sounds like Diaspora by Greg Egan. If it’s not, Diaspora also has a virtual mine of mathematical proofs that is mined by the citizens, which are virtual “human” entities living in a simulated world.


I feel this way about getting rich with some super smart idea that is just, just beyond the grasp of my intellect... This may well stay true for the rest of my life, of course...

More down to earth, though: I'm a biologist, and whenever I get this feeling related to work, I have learned scientifically to trust it and keep at it; it usually does mean there is something of value in my ideas.


In high school my friend's Dad thought he had found a flaw in modern Physics understanding that would change everything. Physicists, he claimed, refused to see the truth because they were too loyal to Einstein. He sat me down and showed me flaws in a bunch of equations, which was duly impressive until I went to college and took Physics 101. Turns out he was ignoring things like redshift of light; when I plugged in the right equations the classical model held.

I tried to argue with my friend, but I had just proved that I was brainwashed by Big Phys. I learned that his Dad cornered anybody he could to evangelize his theories and had been doing so for years. He developed a theory that women and children were more receptive to his revolutionary ideas, so was in the process of writing a book that was more conversational in tone.

Last I heard he was still at it.


It's sad that he tries to validate his idea by evangelizing to laymen instead of trying to convince "big science" of his ideas.

My dad was and is the same way -- everyone is in some massive conspiracy plot to misguide the public. Typically there are demons or some other such things pulling the puppet strings. A few months ago he was convinced the world was flat. It's this whole Dunning-Kruger effect that seems to affect so many people.

I think there are generations of people who missed out on a proper education and a healthy skepticism.


"I think there are generations of people who missed out on a proper education and a healthy skepticism. "

I don't think it's education. Most conspiracy theorists I have met seem to have a deep discomfort with our world and this is their way of explaining it. I think other people turn to religion for the same reasons.


I'm going to inject some cynicism and say that what a number of them find unsettling is how famous they aren't.

They always think they have a big idea. It's never some minor correction to a big theory, which a PhD would be happy with (as long as it's major enough to be publishable); no, it's Shatter The Foundations or nothing. Therefore, they're being done out of their fame and fortune by some grand conspiracy, which, in itself, proves how important they are: If they weren't right, They wouldn't be conspiring against them, now, would they? That mindset certainly sells books, and helps on the lecture circuit, too.

I'm sure there are some honest crackpots, but I wonder how honest you can be when you come upon a bit of well-accepted physics you can't understand or simply don't like, and conclude that everyone else is wrong, and you're the only one who's correct.


> My dad was and is the same way -- everyone is in some massive conspiracy plot to misguide the public.

My dad was never this way until YouTube. It took him down the rabbit hole and spat him out onto informationclearinghouse.info.

I love my dad, but sadly these days I avoid discussing controversial topics with him.


That sounds more like mental illness than a lack of education.


That's not what the Dunning-Kruger effect is. The Dunning-Kruger effect does not cover people thinking they are more talented than experts, just people thinking they are more talented than they actually are.


It's impossible to judge without listening to both sides, but it's very unlikely that he found a flaw in an equation. Equations are easy to test with experiments. It's why the whole of physics is so addicted to predictions: numbers cannot lie. If he found a flaw in an equation, then he can just calculate, measure, and publish.


You can go astray without really becoming a crank. A true crank that can't let the crankery go develops delusions of persecution and conspiracy to keep the truth out.

Going astray may be as simple as not being able to provide the necessary clarity: the clear road map that leads from what is familiar to those in the field to the novel thing you are proposing.

I am talking of course of heavily mathematical fields. Not going to comment on the murkier situation outside of that.


The article wonders (in the 5th-to-last paragraph) if a different kind of rejection might help reduce the number of people who turn into cranks. Sometimes they have an idea and it seems like they can't communicate it, so maybe just saying "I understand your point even if I think it's wrong" could help.


Good point. In my experience, when communicating ideas, it helps to separate the nuances of “I hear/understand you” and “I agree with you”. For novel ideas, it takes a significant amount of time after you understand an idea to decide on it. In the meantime it is important to suspend judgement while exploring the idea and its consequences. Most people genuinely interested in ideas want to make sure their thoughts have been understood, and will happily receive a critique if it comes to be. Seeing acknowledgment that they’ve been understood allows them to relax and start listening. OTOH, being repeatedly dismissed without being understood leads them to feel persecuted for novelty.

Listening is an art.


You're absolutely right! People genuinely interested solely in ideas will be satisfied to be understood and accept any reasonable critique that comes in response.

One of the human failure modes that I witness on a regular basis is that people fail to separate "I hear/understand you" and "I agree with you" within their own heads. People have a tendency to assume that what they find convincing will be similarly convincing to another. As a result, if it's not convincing to another, this other person must not understand it properly and must just need to be educated on the subject. Or, more cynically, they aren't paying attention because they don't understand the importance of the work.

My current operating hypothesis is that vanishingly few people are genuinely interested solely in ideas. Almost everyone has some ego at stake. Being told that your baby is ugly, no matter how kindly, compassionately, or empathetically it is done, is rarely a positive experience. This is where people like Dale Carnegie come in.

Beyond the emotional aspects of it, there are other issues. It can be incredibly labor-intensive to delve deeply into a paper and determine how right or wrong it is. If it's wrong, it's even more work to understand precisely where and how it is wrong, and then give the kind of feedback a hypothetical egoless author genuinely interested solely in ideas would like. This can be a great deal to ask, unpaid, of people who have other things they would like to be doing. This is doubly true when most papers are patently wrong in some obvious way and the painstakingly handcrafted feedback is unlikely to be taken up in the manner an idealist could hope for. I'm sure you have a way to handle this with the art of listening, and I'm just not seeing it. Can you help me understand?

Again, you're completely right. It can definitely help to separate the ideas of understanding and agreement, and reasonable people interested in ideas will handle this gracefully! There just might be some room for nuance and perhaps a careful consideration of costs and burdens might be in order.


> if it's not convincing to another, this other person must not understand it properly and must just need to be educated on the subject. Or, more cynically, they aren't paying attention because they don't understand the importance of the work.

Here's a rule of thumb I try to follow: before holding a strong opinion towards one side, one must be able to articulate both (opposing) points of view. So my goal in any discussion is to reach that stage -- mutually, if possible. After that point, I'm willing to trust reasonable people to exercise independent judgement. At least we can get a shared understanding of which differing assumptions lead us to different conclusions, and see whether there are any tests which will help us validate those assumptions.

> Almost everyone has some ego at stake.

Aye, we're all humans! All we can do is strive to be a little better each day :-) The same with regards to listening to someone -- I try to listen carefully and respond sincerely with what puzzles me. It's up to them what they wish to do with that.

> It can be incredibly labor-intensive to delve deeply into a paper and determine how right or wrong it is. If it's wrong, it's even more work to understand precisely where and how it is wrong, and then give the kind of feedback a hypothetical egoless author genuinely interested solely in ideas would like. This can be a great deal to ask, unpaid, of people who have other things they would like to be doing.

Absolutely! I don't mean to imply that one should take every crackpot paper seriously. I guess each researcher has a circle of respected colleagues whom one can trust for a little bit of indulgence. Also, this is partly what professors are paid for -- so I would hope they're willing to spend some time with their maturing students, and their colleagues, who have proven track records as researchers.

An interesting related tidbit: Some of my physics professors were very sincere in asking eager grad students to have some humility and avoid spending much time on the murky swamp of quantum foundations (which has claimed many decades of smart researchers' lives)--and definitely not by ourselves, at least till we had a few manifestly productive years working on other things, and developed enough of a research sense to understand when to back away from unproductive lines of inquiry.


It doesn't matter how good of a listener one is. If you can't refute a claim with evidence to the contrary, your refutation will feel like persecution.

If a person can't be convinced with real evidence, that is the road to crack-pottery.


I find it helps a lot, when you're telling someone they're wrong, to first tell them how they're right.

Typically there's some agreement with a difference of either an underlying core axiom or some other tradeoff that the two people are weighing differently.

Pointing out the pieces you agree on helps the other person relax a little and open up when thinking about their position.


I'm pretty sure the kind of rejection doesn't matter on this.

http://web.mst.edu/~lmhall/WhatToDoWhenTrisectorComes.pdf is a fascinating read on this.

I suspect that a sound mind will deal with most kinds of rejection well, but an unsound one will deal with none of them well.


But that article says the opposite thing, that a certain form of rejection really does work:

"Thank you for developing a good approximation to the trisection of an angle! It has an average error of only 0.1 percent, according to this computer analysis of all angles from 0 to 180 degrees in steps of three. It compares favorably with past approximate trisections developed by X, Y and Z that work as follows..."


Okay, fair. Mostly I linked it to illustrate how many forms of rejection don't work. Basically rejecting someone is tricky business.


> I suspect that a sound mind will deal with most kinds of rejection well, but an unsound one will deal with none of them well.

I don't think this necessarily holds. I'd need to look at studies, but I feel that the more time you've invested into something, the worse rejection is, sound mind or not. And this would be compounded if you get the feeling the other person isn't even trying to understand you and your point of view.


The thing is, in mathematical fields, the reader is looking for a way to check what you are saying against what they know of the math involved. In pure math, for instance, you should be able to follow the proof, or it might be possible to construct a counterexample, or at least to think about counterexamples. If the reader can't even get that far, then maybe your paper is in trouble.

In physics you still need some exposition that leads to being able to work out the math. Ultimately it should lead to something that either fits with known theory or contradicts it, and in the case of a contradiction it might be experimentally confirmed or denied. You've got an uphill struggle otherwise.


I read the rejections in the article as saying "I don't understand you" or more specifically "you haven't made your case" rather than "you're wrong." I don't think they got to the point of deciding whether it was wrong or not.

This might be more common in fields like physics, where it comes down to the math. If you can explain the math, and it seems flawless, then it is understandable and may be publishable. But it could still be wrong, in ways that could be verified by experiment.


Maybe the rejection letters provided more detail as to why they thought his argument was invalid, which isn't captured in the article. If they did not, it seems to me the reviewers didn't offer specific criticisms, which would have been more useful.

I find these questions and paradoxes that illuminate the limits of logical reasoning quite fascinating.


It is hard to provide specific criticisms of a work one does not understand, and which makes no testable claims. If it is just one reviewer, then he may be missing the point, but if it is everyone, you have to give some thought to the possibility that it is you who is mistaken.

In principle, someone could be both right and incomprehensible, but there's no way forward unless you can get some people to both understand and agree with you. Even then, you could end up with one faction thinking it is on to something profound while another thinks they are mistaken, as is happening with Integrated Information Theory at the moment.


It seems that most people told him they did not understand what he was trying to say, and he persisted in writing incomprehensible papers. What else could people do for him? Publish a mess and hope that the validation doesn't lead to him sending them more stuff that nobody understood?


Except the paper is very readable; I just read it.

Without commenting on importance, novelty, truth... he clearly explains the (conditional) probability functions and performs substitutions, i.e. derives conclusions.


I did the same and concur.


I feel a little like I'm on this road myself. I've got a design for a type of fusion reactor that is simple and elegant, but I'm having a devil of a time convincing anyone that it's worth working on.

It feels like the Idea is stuck in limbo and I can't get the resources together to test it.

I feel like I have a responsibility to the Idea itself. As long as there is a chance that it could work, dropping it feels criminal.

It needs to be killed completely, through testing, or it's going to keep haunting me.


To convince someone to look at your fusion reactor plans, you have to first convince people that you'd be the kind of person capable of innovating in the field -- i.e., publish papers, produce smaller results. Success follows success. Not knowing you or your ideas at all, I can already tell you that it's extraordinarily unlikely that you've come up with something new and effective, or you'd know how to get it built already.


I know HOW to build it, I just don't have a million dollars lying around to actually do it. The prototype needs to be built inside an MRI machine.


How serious are you about this? Why should anyone believe you when you blindly assert that you know how to build it? If you can convince someone in the field then there's a chance that the experiments can be done to see if it would work, but you'd need to convince someone that you know what you're talking about.

So where's your evidence? The usual method is to have a track record, another method is to be able to state very clearly what the new/novel idea is for this to be different and workable. In that case, what's your elevator pitch?

If you're outside the system it boils down to that question - if someone gives you three minutes, you need to be able to convince them to invest more time to understand more detail.

Do you have a description like that?

Why is your idea different? What's new about it?

Why should someone invest the time to see how/whether it works?


I'm very serious, but very burnt out by many recent failures.

I would hope that nobody would blindly believe me, but would instead investigate the idea on its merits.

It's basically an inside out cyclotron... instead of ions circling a central point, all ion trajectories intersect a focus point once per period. If they collide at that focus point without fusing, they will still be on a trajectory that brings them back to the focus.

There's no new physics, and only basic college-level physics knowledge is required to understand the device.

Here is a simulation video I made using a program called simbuca that simulates ion motion in a Penning trap. https://www.youtube.com/watch?v=RT6nvmN7GB0

As you can see, that simulation was made 3 years ago, but has very few views. I've tried, but don't know how to move forward without funding further research myself... I've basically stopped looking for outside funding.


Sounds a little reminiscent of General Fusion's piston design: http://generalfusion.com/subsystems-fusion-energy-technology...

The essence: A sphere with an array of pistons compresses a central point.


That's funny, I had never seen that! It reminds me of a couple of things, water hammers and sonoluminescence:

https://en.wikipedia.org/wiki/Water_hammer

https://en.wikipedia.org/wiki/Sonoluminescence

The important equations are probably under:

https://en.wikipedia.org/wiki/Water_hammer#Slow_valve_closur...

So for high flow velocity and a small time delta, the pressure can be pretty astronomical. If you could somehow create a little bubble in the center of the Earth and let it collapse, you might get a little fusion, statistically speaking, before it flattens out to steady-state conditions -- and obviously a low probability of fusion, since we aren't standing on a star!
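
For a sense of scale: if I'm reading that section right, the slow-closure estimate is roughly ΔP ≈ ρ·L·Δv/Δt. Plugging in some made-up numbers, purely for illustration:

    # Inertial water-hammer estimate: dP ~ rho * L * dv / dt
    # (all numbers below are invented for illustration)
    rho = 1000.0  # water density, kg/m^3
    L = 100.0     # pipe length, m
    dv = 5.0      # velocity change, m/s
    dt = 0.01     # closure time, s
    print(rho * L * dv / dt)  # 5e9 Pa, i.e. ~5 GPa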

Edit: I realized one other connection here - I think this is what happens when red supergiant stars run out of nuclear fuel and can no longer produce enough heat to hold up their gas against gravity. The gas cools, collapsing into a small volume near the limit of a black hole's density, and under such tremendous pressure, much of the star's mass undergoes fusion simultaneously and releases 10 billion years' worth of energy in an instant, blowing the remaining material out into space. That's where the heavy elements in our bodies like iron (and even heavier elements that require added energy to fuse) come from:

http://cse.ssl.berkeley.edu/bmendez/ay10/2000/cycle/snII.htm...


I really like situations like these... places where you get singularities.. divide by zero situations that break things. It's where really interesting stuff happens that can be unexpected and lead to new ideas and understanding.


Yes, periodic compressions are a feature of both.

I've looked a lot at what they are doing and hope to replicate their story. If I can generate some "marketing neutrons" like they did, maybe funding will follow.


It seems like you might be neglecting repulsive forces that would alter the trajectories at pressures far below those necessary for fusion. In short, it's hard to see how your device would fuse anything; you'd need some way to constrict everything or it wouldn't collapse so neatly in each cycle. That constriction would have to be magnetic, would require a ton of energy and modeling to get right, and... then you're in the same hole as every other fusion researcher, except you have a weird geometry that hasn't been studied and isn't efficient.

Sorry.


A uniform magnetic field is used to cause the trajectories in the simulation. Because the ions are allowed to follow large, circular (helical in 3D) cyclotron trajectories, self-repulsion is minimal over a large portion of their motion.

At the focus, where repulsion is more of an issue, all ions are essentially flying straight at the focus (at the same time) with enough energy to pass through it easily unless they collide with another ion there.

There were some instabilities in some simulations I did where I cranked repulsion up to massive levels... It's beyond my expertise to interpret whether those instabilities are fatal, or would cause problems in an actual device. That simulation is here: https://youtu.be/NnbfgTP7v5M

One benefit of this weird geometry is that the walls of the device can be very far from any hot plasma. Energy losses should be minimal, and the self-intersecting trajectory gives ions multiple chances to fuse once accelerated.

It may not work, but I feel that it deserves to be looked at.


Hi. It might be good if you email me, as I'm not sure how much I can put into this post.

I've followed a similar path to the one you want to take. I got funding and built a fusion reactor design with some similarities (see US patent 5818891 for some details). What you are describing is more similar to a Migma design (https://en.wikipedia.org/wiki/Migma). If you are not already aware of Migma, I recommend researching it to see what the problems were. See if figure 3a looks familiar here http://www.rexresearch.com/maglich/migma.htm.

This type of design will have a few problems. First, to work the way you simulate, it will have to remain at low density. This is because of space charge (all those repulsions add up as you add more particles). Also, the particle velocities will smear out over time as the particles adopt a Maxwellian distribution (see https://dspace.mit.edu/handle/1721.1/11412). You can get around the space charge issue by putting electrons into the mix, but then you have a large source of energy losses (bouncing electrons emit photons that leave the area).

I'd love to see someone crack the self-sustaining fusion problem. If you would like to talk fusion, feel free to email me.


Wow, I like your idea... It's very similar in concept. Set up a system which naturally resonates at a specific frequency, then add energy at that frequency until something has to give...

I am familiar with Migma, which has self-intersecting orbits. I believe the main difference is that my ions will have a thermal distribution from the start, yet still be able to self-intersect. (Particle velocity does not change cyclotron orbit timing until much higher energies, so Rider's arguments are not as applicable.)

My hope is to exceed the Brillouin limit only very briefly, and that most of the time, the plasma will be diffuse enough that space charge won't be an issue...

The size of the focus should be the diameter of the electron beam, and electron beams can be very narrow. If the beam diameter is made very, very small (electron microscopes prove this is possible), density at the focus could be very high for very short amounts of time... higher than even the Z machine, yet it would quickly drop to levels low enough that space charge is not an issue.

The possibility of this periodic density spike is predicted in "Beyond the Brillouin limit with the Penning fusion experiment", which implies that local density can exceed the limit in either space or time. They chose space; I've chosen time.

I'd love to email you about this, but fear I'm falling well into crank territory as it is. I'm pontifier everywhere, including gmail.


If you look at that second simulation, about midway through you can see the problem I was talking about, which FiatLuxDave goes into in more detail. The repulsive force is keeping the density of the plasma very low, and you end up with a cloud of weakly interacting, orbiting ions that never fuse. The problem really is that the repulsive force increases enormously at very close range, so it may seem like the ions are colliding, but they're not regularly going to have the energy required to overcome Coulomb repulsion, which, remember, increases with the inverse of the square of the distance.

To overcome that you’d need to increase the strength of the magnetic containment, and shrink your container, but that will raise the density and change the behavior of your plasma while heating the walls of the chamber.


I'm not much of a physicist, but I find the idea intriguing. Let me see if I understand. When you're talking about "the ions" you mean positively charged ions — nuclei, right? But there have to be electrons around somewhere so the whole thing is electrically neutral, right? Where are the electrons, and what are they doing?


The deuterium nuclei are positively charged and are basically in a Penning trap. I don't think electrons would be stable there; they would be attracted to the positively charged electrodes at each end.

The deuterium would start out as a very diffuse gas, and a high-energy, narrow electron beam (pulsed at the cyclotron frequency) would be used to ionize it at the focus and add energy. The resulting deuterium plasma would be non-neutral. It would want to fly apart, but as it expands the self-repulsion should get much weaker, allowing the weak magnetic field to steer each ion back toward the focus again. (In 2D, cyclotron motion in a uniform magnetic field is circular.)
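
The pulse timing works because the cyclotron frequency f = qB/(2πm) has no velocity term, so (non-relativistically) every deuteron orbits with the same period. A quick sketch of the arithmetic -- the field values below are just illustrative examples:

    from math import pi

    q = 1.602e-19  # deuteron charge, C
    m = 3.344e-27  # deuteron mass, kg
    for B in (0.3, 1.5):  # example magnetic fields, tesla (assumed values)
        f = q * B / (2 * pi * m)
        print(f"B = {B} T -> f = {f / 1e6:.2f} MHz")  # ~2.29 and ~11.44 MHz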

There are many questions I can't answer about the technical details of how the plasma would evolve with repeated electron beam pulses, and what qualities the beam pulses should have to give the best result... I think it should be short duration, narrow, and high energy.

Likewise, I don't know how high the density can get before it starts to cause synchronization problems.


> The resulting deuterium plasma would be non-neutral

Meaning, it would be negatively charged because of the added electrons?

My guess is, you want the plasma to be neutral. (The term I've seen is "div-free" — you want the divergence of the E field to be everywhere zero.) But maybe removing the excess electrons wouldn't be too hard.

Anyway, if you want to chat about this more, feel free to email me — address in profile.


The nuclei would be positive ions... I don't believe free electrons would be stable in a Penning trap set up for positive ions. The strongly positive endcaps (my guess is 60 kV) would make it unlikely that they would stay in the same region as the positive ions, though I'm not sure.

The goal is for the ions to act like individual ballistic deuterium nuclei flying along a cyclotron trajectory. The density should be very low most of the time, but spike dramatically very briefly.

That line between high density and acting like individual particles is one of the reasons this device is difficult for me to model.


This reminds me of certain aspects of the Tri Alpha design. I think they're still hiring.

https://en.m.wikipedia.org/wiki/TAE_Technologies


Couldn't you produce exact diagrams of such a thing and model its properties to show that it would be able to effectively achieve a stable plasma?


I've done some simulations, but have taken them as far as I am able to without additional funding. The results were promising, but the cost of better software to MAYBE get better tests was insane... over $50k for a package that might not even do what I need it to do.

I've contemplated writing my own software from scratch, but that just seems daunting. I'm quite burned out.


You don't need $1MM; a picoSpin 45 retails for ~$3k [1].

As an alternative, you could homebrew a simple NMR yourself with a cow magnet or a neodymium magnet.

[1] https://www.fishersci.ca/shop/products/thermo-scientific-pic...


Sorry, but that just won't work.

The device I've looked at that I believe will work is a Hitachi Airis II. It has just about the right field strength, working volume and field uniformity for the prototype to have a chance of working. It's an older model and I have seen some for sale for approximately $30k.


That's going to be one heck of a prototype or one heck of a bang. Either way the results will be spectacular.


I meant you would know how to get funding for it, since nuclear physics research is a well established field.


I've looked at funding. I've talked to people in the field. There does not appear to be any money available from any source to fund any fusion projects other than Tokamaks or similar devices.

Tokamaks, by the way, were invented by an outsider with no previous domain knowledge...


People involved with certain “long term high capital expenditure” efforts have indicated to me that most venture capital funders and US DOE-like sponsors aren’t too interested in projects with potentially long time horizons. They told me they got the bulk of their capital from foreign sovereign wealth funds and university endowment investments.


Good article; it strikes a nice balance between a loving ode to a grandfather and an exploration of how science is much more conservative than one might think.

Reminds me of this article [0], and specifically this passage, which makes the point that one metric for judging “crackpot ideas” that seem to come out of nowhere is what they enable. Do they lead to the emergence of new results, or allow for the proof of things other than the direct thing the theorem sets out to prove?

”Usually when there is a breakthrough in mathematics, there is an explosion of new activity when other mathematicians are able to exploit the new ideas to prove new theorems, usually in directions not anticipated by the original discoverer(s). This has manifestly not been the case for ABC, and this fact alone is one of the most compelling reasons why people are suspicious.”

[0] https://galoisrepresentations.wordpress.com/2017/12/17/the-a...


The linked article at https://www.researchers.one/article/2018-10-6 seems well-argued, nothing outlandish. It doesn't talk about anything in quantum mechanics though. I was hoping to find that somewhere.


Yeah, it doesn't seem fair to call someone a crank if they have a lucid point with a subtle, hard-to-spot flaw; such arguments are typically wrong in interesting, useful ways.

Edit: At risk of self-congratulation, that's what I feel is happening to me on HN when I try to meticulously untangle a complicated issue where I feel people are talking past each other, and then get quickly downvoted but never responded to. Case in point:

https://news.ycombinator.com/item?id=14310444


I agree with you there; pointing out more precisely how people are talking past each other makes all sides pissed off...


Whoa, after my linking, that old comment went from -2 to -1! I thought it was way past the point that you could vote on it! (It's ~15 months old.)


It's there. Scroll down and click the icons.


The Oxford physicist mentioned in the article may well be Joy Christian, who has submitted numerous articles to arXiv giving what he claims to be a violation of Bell inequalities using Clifford algebras. You can find them just by searching arXiv for his name.

I spent some time reading these articles and the various refutations - and refutations of refutations - when I was a grad student, as I wanted to nail down where either he or Bell was fundamentally wrong. I never succeeded, but since Bell's proof was much simpler and more understandable, involved more elementary objects, and had been scrutinised for much longer without any serious flaws forthcoming, whereas Christian's proofs appeared to have some issues under debate, I came away thinking it likely that Christian was wrong.

Disappointing that I couldn't find the "central" flaw, if there was one. Sad that a talented scientist should spend so much of their career simply alienating themselves from the community in this way.


I don't understand what the paradox in the water/wine paradox is, after checking out its PDF (linked in the article, https://www.researchgate.net/publication/252476141_The_WineW...).

The problem is defined without giving the distribution(s). Specifying a distribution makes the problem mathematically solvable.

A uniform distribution for representing a ratio of 1/3 to 3 between two things does not match reality well because it gives more weight to one than the other, but I don't see why that's a paradox. You could generate random valid wine/water amounts (rejecting invalid combinations) and measure the distribution over many such experiments. You can define any distribution you want that makes more sense than those uniform ones. Ultimately, though, something is not specified: the distribution of the water amount and the distribution of the wine amount. Why is that a paradox?

EDIT: the real question is how the jug was filled up. Did someone pour a random amount of water first, and then a random amount of wine within the allowable limit? Or did they pour the wine first instead? Or did they pour random amounts of both, and discard the result if the ratio was not within the 1/3 to 3 limit? Or some other method? That determines the distribution. The distribution of the ratio is more like a derived distribution, derived from the distribution of the wine, the distribution of the water, and how those two distributions depend on each other based on the method used to fill the jug.


Agreed. This reminded me of a math book I read as a kid (sorry, don't remember the name anymore), which asked a seemingly simple question: "is a random triangle acute?"

The book then went over a few different methods of constructing random triangles (pick 3 random points, pick 2 random angles, pick 3 points on a circle, etc.) and showed how each approach would give a different answer to the problem.


This reminds me of a discussion I once contributed to in an AI research lab, which I got to participate in through a Research Experiences for Undergraduates program. The topic was genetic programming, and someone casually mentioned picking a genome (code) at random. They seemed to assume that would create an initially flat distribution. Whoa, hold up there, I said, you can't assume that. They asked what I meant.

I told them that both the way the "random" genome bits are picked and the way the genome bits are interpreted (the fitness function) change the distribution. For example, if the genome bits are picked as a random integer between 0 and 2^n-1, but the fitness function is a popcount of 1 bits, then the distribution will be hill-shaped, for the same reason that a 7 is more likely to be rolled with two dice than a 2 or a 12.
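
A quick simulation makes the hill shape obvious (a minimal sketch):

    # Uniform over integers 0 .. 2**n - 1, but binomial in popcount.
    import random
    from collections import Counter

    n, samples = 16, 100_000
    counts = Counter(bin(random.getrandbits(n)).count("1")
                     for _ in range(samples))
    for k in sorted(counts):
        print(f"{k:2d} ones: {'#' * (counts[k] * 500 // samples)}")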

Out of the researchers and undergrads there, I think about half nodded their heads as soon as I pointed it out, and the other half struggled to understand or accept the idea that interpretation could affect probability distribution, no matter how much exposition or supporting arguments I gave.

I didn't know about the water/wine paradox then; I wish I had. But now I wonder if part of the water/wine paradox (the fact it is even considered a paradox) has to do with that bimodal split I saw between people who understood that interpretation affects the probability distribution and those who could not or would not accept it.


The problem is that "uniform distribution" is hard to define. An even distribution of water/wine values is not the same as an even distribution of wine/water values.

Thinking in terms of possible states, you can choose an even distribution of x from 0 to 1 and define:

water/wine = x/(1-x)

wine/water = (1-x)/x

(which avoids being water-biased or wine-biased)


> water/wine = x/(1-x)

> wine/water = (1-x)/x

It's symmetrical but not "naturally" uniform. If we consider both quantities as independent and uniformly distributed over [0..1[, then a ratio near 1:1 becomes much more likely to occur than one near 1:1000000. I find that approach "the most natural". The probability P(water/wine < 2) is 7/8 in that case.
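
A quick Monte Carlo sanity check of that 7/8, conditioning on the stated 1/3-to-3 range (a rough sketch):

    import random

    kept = wins = 0
    for _ in range(1_000_000):
        water, wine = random.random(), random.random()
        # keep only mixtures with 1/3 <= water/wine <= 3
        if water <= 3 * wine and wine <= 3 * water:
            kept += 1
            wins += water < 2 * wine
    print(wins / kept)  # ~0.875, i.e. 7/8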


I agree. There are a number of ways to model the process of creating the mixture that will avoid the water vs. wine bias that is the essence of the "paradox" as stated.

The main lesson is that the Principle of Indifference is "not even wrong". (It's nonsense.)


So this is defining the random variable not as the ratio but as the implicit function that computes the ratio?

So we are really looking at the density function of the hidden variable X?

For the original problem, it feels like the interval constraints on the ratio are causing the issue, since they will be inverses of each other. But maybe I am not clear on that...


The problem is not completely specified. However, in the study of probability and statistics there is a tendency to charge ahead and offer answers to incompletely specified problems, using e.g. the Principle of Indifference. The wine/water paradox illustrates a flaw in that approach. The Two Envelopes problem illustrates another flaw.

The "Grandfather" in the story argued that we can consider the problem specified if we assume that "the set of all distinct possible states" are uniformly distributed -- which is reasonable and correct when solving problems in quantum mechanics but unreasonable in ordinary everyday situations.

Thinking along those lines, you can construct a set of all possible concentrations: (0 wine particles, N water particles), (1 wine particle, N-1 water), (2, N-2), ... (N, 0). That led to my x/(1-x) formulation, which has the advantage of avoiding the paradox as stated, but I would not argue it's a well-founded result.


Why is it not well founded? Here an abstract amount (so, a state) seems a more realistic measure than the ratio, which is a function of wine and water.


For one thing, there could be many distinct quantum states which correspond to the same water/wine ratio, so uniformly-distributed quantum states might not correspond to uniformly-distributed concentrations.

But more fundamentally, it seems a stretch to apply these quantum mechanical assumptions. One could interpret the question as "in all possible real-world scenarios in which such a test might be performed, how often would the concentration be in this range?" ... in which case one could consider all realistic models for how the mixture might have been created, and the likelihood of each of those actually being the case, and consider more and more things coming up with a more complete answer ... but one would never have a correct answer without actually stating some additional assumptions.


This seems just like Bertrand's Paradox

https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)

That is, given a random chord of a circle, what is the probability that it's longer than X? [1]

It's the same issue: how are you randomly choosing the chord? Random (uniformly chosen) intersection point with the radius? Two random (uniform) points on the circumference? People likewise make assumptions that, they argue, pin down how you have to choose the chord and derive solutions on that basis.

[1] In the official version, X = "side length of an inscribed equilateral triangle", but any value could raise the same issues.


The question is more about what kinds of distributions are in some sense "reasonable" for this setup. A uniform distribution for wine/water is problematic because it implies a non-uniform distribution for water/wine, and it is reasonable to expect that these distributions should be the same.

The underlying problem is still underspecified, and there is a class of distributions which have the appropriate symmetry, as described in the paper. The range of possible answers is still more limited under this symmetry assumption (that the ratio of water/wine and wine/water should have same distribution) than under no assumption (in which case any probability between 0 and 1 is acceptable).


For me that paradox just means that you can't just assume the distribution of a calculated property, because the (a)symmetries of the calculation will bite you in the ass.

Same thing: you can't just say "I don't know. Therefore 50/50."

The whole quantum angle doesn't make much sense to me, because no one assumes there's some distribution of derived properties. The whole of QM, as I understand it, is calculating those distributions from the underlying (a)symmetries of the model.


I would state it even more simply: an even distribution over the domain of a function is not necessarily even over its range.


Actually, since it's a ratio, the uniform distribution is in the logarithmic domain, i.e. over [−log(3), +log(3)], so the probability (assuming, for no good reason, that it is a uniform distribution) would be:

      log 3 − log 2
  1 − ───────────── ≈ 0.81546
      log 3 − log ⅓
or just over four fifths.

But as others have pointed out, the real answer is that the question is broken, since we need a distribution for x, not an interval for x.
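
For comparison, here is the arithmetic for the two uniform choices that generate the paradox, plus the log-uniform answer above (a quick sketch):

    from math import log

    # Uniform on the ratio water/wine over [1/3, 3]:
    p1 = (2 - 1/3) / (3 - 1/3)                # 5/8 = 0.625
    # Uniform on wine/water (water/wine <= 2 <=> wine/water >= 1/2):
    p2 = (3 - 1/2) / (3 - 1/3)                # 15/16 = 0.9375
    # Log-uniform on the ratio, as above:
    p3 = 1 - (log(3) - log(2)) / (log(3) - log(1/3))  # ~0.81546
    print(p1, p2, p3)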


As I understand it, the key is that we do not have the distributions, making the problem, apparently, ill-defined. If we add the Principle of Indifference the problem becomes defined enough to be solved (which is progress) but, with two equivalent points of view, we manage to produce two different solutions (well explained in thebzax's reply). Hence the paradox.


> the real question is how was the jug filled up

This is often the case in problems of this sort: "probability" is not really well-defined until you have specified how a particular system was prepared.


Yeah, my conclusion, too. There is no paradox.


"It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong. In that simple statement is the key to science." -Feynman


Surely this is more relevant to scientific theories, rather than a logical theory of interpretation of probability?


It seems that, in some ways, a "crank" is not defined by what they believe or say, nor by whether what they believe or say is true, but by the relation between themselves and their society. Had Freud been in a society that treated the idea of the Oedipus complex differently, perhaps he would have become a crank. Some, like Darwin, were too circumspect to become cranks, but plenty of respected scientists, whether their main ideas were right or wrong, were not "cranks" mainly because of the way their peers reacted to them, not because of anything about them, or their ideas.


Freud was definitely a crank. People back then just didn't know any better, so they never called him on his bs on the spot.

I was interested in psychology in high school, trying to understand the people around me. I even considered studying psychology. Then somebody gave me Freud to read. The horror I felt when I understood that his "theories" were just observations of the world, but through his deeply disturbed eyes and uniquely bent mind...

After that I decided that I don't want to have anything to do professionally with a trade that still hasn't properly distanced itself from, and apologized for, ever considering that this Freud guy had some merit.

Freud, the father of psychoanalysis.

Why not António Egas Moniz, the father of neurology, neurosurgery and psychiatry?


I know absolutely nothing about psychology except that I read some Freud. I don't have any opinions about psychology, or the nature of psychology, or Freud (tbh I'm not that interested either). But I'm not clear why I'm supposed to believe you over the psychology community. See, I'm not trying to be skeptical, I'm just trying to understand who is a crank, and you stating that Freud is a crank doesn't seem to make him one in my eyes. Am I wrong? Why?


You are not wrong. You shouldn't believe me over anyone else. I'm just sharing personal experience and opinion.

If you want to form an opinion about Freud please do research what psychological community thinks about him and why. Basically, "he was asking interesting questions and giving wrong answers but we remember and value him for the questions."

> I know absolutely nothing about psychology except I read some Freud.

For me it sounds like "I know absolutely nothing about piloting aircraft except I read some notes of a failed inventor who tried to build a flapping-wing plane before the Wright brothers flew, but no one remembers his name because aviation is not psychology."

Or "I know exactly nothing about solar system except I read some Ptolemy."

If you are interested in Freud, read Freud. If you are interested in psychology, read something way more modern.

What I find distasteful is that the psychological community didn't distance its trade from Freud enough, so that in 2018 you were under the impression that reading Freud counts in any way towards learning psychology.

And to address your more general point: how can you know whether Freud is the crank or I'm the crank, without knowing anything about the subject and without any interest in it?

You can't, but it's OK since you have no interest. And if you are pressed against the wall, you are safer with the opinion of the majority. Not even the majority of the psychological community -- just the majority of people. Freud was famous, so he can't be that bad.


Would it help if I told you the modern psychology community generally considers Freud a crank as well?


Just finished reading one of the guy's papers. I've read a lot of crackpots, and this guy's paper exhibited none of the typical features (megalomania, non sequiturs, grandiose claims, complaints about persecution or conspiracies to silence the truth, and so on). He might be wrong (I didn't read carefully enough to reach my own conclusions, and I'm well aware of my own limitations), or he might have gone too far from his own expertise to be taken seriously by the right people, but he's not a crackpot.


The basic idea doesn't sound crackpottish. There is only one correct representation of the water-to-wine ratio, and a uniform distribution over all possible arrangements of water and wine would give only a single answer. Sounds like the problem lies in how the principle of insufficient reason is being applied.


It strikes me instead as rather trivial.

Burnside's Lemma, a fairly elementary result in group theory / combinatorics, is regularly used to compute probabilities by enumerating symmetries of the system under study. It's not a logical stretch at all to extend this to continuous / infinite-dimensional systems, and I think most theoretical physicists would be aware of this.
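
A toy check of the lemma, counting 2-color necklaces of length 4 under rotation both by brute force and by averaging fixed points (a minimal sketch):

    from math import gcd
    from itertools import product

    n, colors = 4, 2

    # Burnside: number of orbits = average number of colorings fixed by
    # each rotation; rotation by k fixes colors**gcd(n, k) colorings.
    burnside = sum(colors ** gcd(n, k) for k in range(n)) // n

    # Brute force: canonicalize each coloring by its least rotation.
    def canonical(t):
        return min(t[i:] + t[:i] for i in range(n))

    brute = len({canonical(t) for t in product(range(colors), repeat=n)})
    print(burnside, brute)  # 6 6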


Very odd that a world-renowned physicist would be branded a crackpot for making this observation. Why would extending the principle of indifference to the continuous domain be controversial?


Well, we'd have to read what he was showing people to know. Maybe his presentation was deeply obfuscated, or maybe he was saying something slightly more subtle than what I mentioned. I'm just guessing from the hints in the article.

From some googling I did find this: https://arxiv.org/abs/quant-ph/0310073


This is an interesting conclusion:

"Probabilities in physics could also be said to be measures of information interpreted within the framework of a physical theory."


... not a physicist here, but I do research on applied information theory and statistics. This grandfather's paper is sort of interesting to me because it starts to creep into some of my areas of expertise a tiny bit, but he seemed to be approaching it from a totally different perspective (that of a physicist) that I'm less familiar with. So far I seem to fall into the camp of "I kind of get what he's talking about, but he's not quite connecting the dots, or isn't explaining himself well, or something."

To answer your question: in some cases, transferring probabilistic reasoning about discrete states to continuous states, or vice versa, is sort of trivial. But sometimes it becomes somewhat controversial, and often this is because it's unclear how the scale of information maps onto the scale of data, to put it murkily. When it's discrete, you know your observations have specific possible values, which in itself is a chunk of information. But when it's continuous, you have an infinite range of values, and what constitutes "indifference" can be unclear.

Think of a scale (like a kitchen or bathroom scale), for example. The scale will have a certain accuracy within a certain range, and as you go further outside that range, the accuracy will decrease further and further. So, to make an analogy with the water-and-wine paradox, let's say I tell you "here's this container of walnuts, that could be anywhere from 0 to 100kg. What's the probability its mass as measured on this scale is in a certain range?" What would be adhering to the principle of indifference? You could argue it would be a uniform distribution over the numbers from 0 to 100, but you could also say it's somehow uniformly distributed over the distinguishable units of the scale, which will not be uniform from 0 to 100 because the meaningfulness of the scale's numbers will be compressed in its most accurate range, and stretched in its less accurate range, and there will also be issues with machine precision, etc. If you were counting walnuts though, you've kind of implicitly fixed the scale to the nonnegative integers, and assumed your ability to count doesn't get fuzzy at some point. So you assume uniformity on one scale is uniformity on another.

E.g., if you think of the Jeffreys prior as one scaling of indifference: even with a Bernoulli variable, indifference isn't necessarily scaled as a uniform (https://en.wikipedia.org/wiki/Jeffreys_prior).
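
Concretely, for a Bernoulli parameter p the Jeffreys prior is Beta(1/2, 1/2), which piles weight near 0 and 1 rather than being flat (a minimal sketch):

    from math import pi, sqrt

    # Jeffreys prior for Bernoulli p: density 1 / (pi * sqrt(p * (1 - p))),
    # i.e. Beta(1/2, 1/2) -- invariant under reparameterization, yet
    # visibly non-uniform in p itself.
    for p in (0.05, 0.25, 0.5, 0.75, 0.95):
        print(f"p = {p}: density = {1 / (pi * sqrt(p * (1 - p))):.3f}")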

I think this paradox involves (along with maybe being underspecified or poorly posed) somewhat related information-scale mapping issues, about how you define uniformity and what your "possibility space" is, to be sort of Jaynesian about it.


So annoying that they didn't link to the full articles, in their various forms.



