
I believe in (some) American universities, any non-grad student who teaches a class is called a professor.


In the summer I had a grad student who was teaching the class as the primary teacher, and he was called "professor" (although I suspect his paycheck was just the standard grad check).


Here is a diagram demonstrating what I think OP is trying to say for why it is r^2 in 3 dimensions: http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/isq.html

In 2 dimensions it would be just ~r.
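If it helps, here's a tiny numerical sketch of that picture (my own toy example, not from the linked page): spread a fixed amount of flux over the surface it crosses at distance r, which is a sphere of area 4*pi*r^2 in 3D and a circle of circumference 2*pi*r in 2D.

    import math

    def intensity_3d(power, r):
        # Same total power spread over a sphere of area 4*pi*r^2 -> falls as 1/r^2
        return power / (4 * math.pi * r**2)

    def intensity_2d(power, r):
        # In 2D the "sphere" is a circle of circumference 2*pi*r -> falls as 1/r
        return power / (2 * math.pi * r)

    for r in (1, 2, 4):
        print(r, intensity_3d(1.0, r), intensity_2d(1.0, r))
    # Doubling r cuts the 3D intensity by 4x but the 2D intensity only by 2x.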


Right, so we are looking at area and not volume.


But note that even if some solution appears "pretty" or "obvious" mathematically for some specific scenario (like drawing the areas for the inverse square rule, or looking at a galactic rotation curve for a "fit" with MOND), it does not automatically follow that all of the measurements in all the experiments will match it.

The "better" theory must cover more, not less of the measurements. The one that at the moment certainly covers the most is dark matter, and the alternatives simply cover much less (or as I've posted even somewhere result in the wrong shapes). Once some alternative manages to cover the observations and bring even more predictive power, that one will eventually be accepted (although sometimes "one funeral at a time" was needed), even if it's less "pretty" and less "obvious."


The problem with the dark matter model is that it has more free variables, so epistemologically it's just "able to fit more." At some level it feels like that time in high school physics (a gravity lab with ball bearings) when one of my classmates proudly showed off a sixth-order Excel polynomial fit of five data points with an R^2 of 1. (That doesn't mean dark matter is wrong.)
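If anyone wants to recreate that Excel moment, here's a rough sketch with made-up numbers (nothing to do with actual dark matter fits): five points, a sixth-order polynomial, and a "perfect" R^2.

    import numpy as np

    # Five made-up (x, y) points, e.g. timings from a ball-bearing drop lab.
    x = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
    y = np.array([0.05, 0.20, 0.44, 0.78, 1.23])

    # Degree 6 > number of points - 1, so the fit is underdetermined
    # (numpy emits a RankWarning) and can pass through every point exactly.
    coeffs = np.polyfit(x, y, deg=6)
    y_hat = np.polyval(coeffs, x)

    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    print("R^2 =", 1 - ss_res / ss_tot)  # ~1.0: a "perfect" fit with zero predictive value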


> has more free variables

More than what exactly? As far as I know, Lambda-CDM (https://en.wikipedia.org/wiki/Lambda-CDM_model) is the best we have, i.e. the model with the minimal number of parameters that still covers most of the observations. If there were a better one, it would already be accepted.

But unless you try to inform yourself, you wouldn't believe how good it is compared to the alternatives. Really, really good, as in: it actually predicted future measurement results for the oldest signals observable to us, and when the measurements were later made, they matched.

> epistemologically

Ah... I'd guess then that you don't write about physics but some non-scientific belief system.


The Lambda-CDM model assumes the existence of a distribution of dark matter which is unique to each galaxy and empirically derived. That's potentially a (countably) infinite-dimensional vector of free variables for each galaxy, although models typically assume the dark matter is symmetrically distributed (which can still be a countably infinite-dimensional free-variable vector, as a function of rho and phi at least). In any case, it has enough free variables to account for two drastically different cases: "normal" galaxies, and diffuse galaxies, which apparently have no dark matter. How do you parameterize that without at least one free variable per galaxy?

> If there were a better one, it would already be accepted.

I see you've never actually been a scientist.


Firstly, you're going to run into walls if you try to fully understand why the mathematics of quantum mechanics is the way it is. Physicists have been at it for decades, and from what I can tell the picture is still not so clear. What has been done, though, is many thousands of experiments showing that the predictions the mathematical theory makes (even the wild, unintuitive ones) are correct.

Secondly, I think you're going too deep on this whole 'prediction' thing. The algorithms in quantum computing aren't trying to model something any more than a classical algorithm does. Currently, the notion of a quantum algorithm is just a series of special logic gates that act on qubits instead of bits. What the previous poster said is correct: quantum computers are like classical computers with randomization, except that instead of real probabilities they have what are called complex amplitudes. The way complex amplitudes behave lets the states interact slightly differently, since they can now 'interfere' with each other and cancel each other out in very specific situations. This gives quantum computers only slightly more power than classical ones (with randomization). (Most) experts don't think quantum computers can solve NP-hard problems; the power from interference only helps in very specific problems.
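Here's a minimal sketch of the interference point in plain numpy (a toy example, not any real quantum algorithm): apply the Hadamard gate to one qubit twice. After the first application both outcomes have equal amplitude; on the second, the paths leading to |1> cancel, which non-negative probabilities can never do.

    import numpy as np

    # Hadamard gate: a unitary matrix acting on a single qubit's amplitude vector.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])      # start in |0>
    state = H @ state                 # amplitudes [0.707, 0.707]: 50/50 if measured now
    state = H @ state                 # amplitudes [1, 0]: the |1> paths cancelled
    print(np.abs(state) ** 2)         # -> [1, 0], back to |0> with certainty

    # A classical random bit put through a fair mixing step twice stays 50/50;
    # the cancellation above has no probabilistic analogue.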


Many of the comments have pointed out this flaw in the argument. I will try to post my own summary of this flaw:

The problem is that this case against quantum computing due to the sheer scale of the number of parameters is also a case against randomized computing, but randomized computing certainly exists in the real world (a computer with a coin flipper). A 'state' in randomized computing over N bits is specified by a probability being assigned to each of the 2^N possible states of the N bits.

For example, to describe the distribution after flipping 100 coins, you require 2^100 real numbers to specify the probabilities for each outcome, since there are 2^100 combinations of heads and tails you can get. But it is very clear that we can do this in real life. The point is, you don't need 2^100 parameters with full precision to flip a coin 100 times.

To be more precise, we can write the state as a vector of length 2^N, and our operations (think logic gates) on the vector are stochastic matrices (the type of matrix required so that the operation maps states to states).

Quantum computing is no different, except that instead of real numbers for probabilities we use complex numbers, called amplitudes, and instead of stochastic matrices the operations are unitary matrices. The point is, the scale of the number of parameters is the same; the only thing that has changed is that the entries are complex numbers instead of real numbers.
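To make the parallel concrete, here's a toy numpy sketch (my own example): a 2-bit randomized state is a length-4 probability vector pushed through a stochastic matrix, and a 2-qubit state is a length-4 complex amplitude vector pushed through a unitary matrix. The bookkeeping is exactly the same size.

    import numpy as np

    N = 2                                   # two (qu)bits -> vectors of length 2^N = 4

    # Randomized computing: probabilities, stochastic matrices (columns sum to 1).
    p = np.zeros(2**N); p[0] = 1.0          # start in state 00 with certainty
    flip_first_bit = np.kron(np.array([[0.5, 0.5],
                                       [0.5, 0.5]]), np.eye(2))
    p = flip_first_bit @ p                  # 50/50 over {00, 10}
    print(p)                                # [0.5 0.  0.5 0. ]

    # Quantum computing: complex amplitudes, unitary matrices (orthonormal columns).
    psi = np.zeros(2**N, dtype=complex); psi[0] = 1.0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    hadamard_first_qubit = np.kron(H, np.eye(2))
    psi = hadamard_first_qubit @ psi
    print(np.abs(psi)**2)                   # also [0.5 0.  0.5 0. ] as measurement probabilities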

Now an argument might be that it is not possible to create gates with a very specific probability distribution with just a discrete and not very precise coin flipper, but it's actually possible to show that for randomized computing, you only need a constant factor more flips to get exponentially close to the distributions you want. A similar result was proved pretty early for the quantum case, and it is a foundational result. So this is not really a problem either.
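For the randomized case, one standard construction (not from the article, just the usual trick) is to compare fair coin flips against the binary expansion of the target probability p; truncating after k flips perturbs the output distribution by at most 2^-k, so the precision improves exponentially in the number of flips.

    import random

    def biased_bit(p, max_flips=64):
        """Sample a Bernoulli(p) bit from fair coin flips.

        Compare fair random bits to the binary expansion of p; the first
        disagreement decides the outcome. Truncating at max_flips introduces
        error at most 2**-max_flips.
        """
        for _ in range(max_flips):
            fair = random.getrandbits(1)          # one fair coin flip
            p *= 2
            p_bit, p = (1, p - 1) if p >= 1 else (0, p)
            if fair != p_bit:
                return 1 if fair < p_bit else 0   # fair=0, p_bit=1 means we landed below p
            # bits agree: outcome still undecided, keep flipping
        return 0  # truncation; the leftover uncertainty is at most 2**-max_flips

    # Quick check: the empirical mean should be close to 0.3
    print(sum(biased_bit(0.3) for _ in range(100000)) / 100000)

On average only two flips are needed before the bits disagree, which is where the "constant factor more flips" comes from.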


You won't be able to find a countable union of single-point sets that covers [0,1], so the third axiom you state is not violated.

In fact, as that axiom says, if you take such a countable union, the measure of the resulting set will be 0 (e.g. the rationals in [0,1]).

If you insist that the sum of the measures of all the points must be 1, you're not going to be able to make any continuous distributions at all. The thing is, measure does not work like that: you don't just add up the measures of all the points to get the measure of the whole set. Instead, measure is assigned to subsets of [0,1] directly (in particular, to all the sets in the sigma-algebra of your choosing). There are some laws which prevent you from assigning arbitrary probabilities to everything (such as the one you mentioned about countable unions), but there is nothing that says the measure of a set has to be the sum of the measures of its points.
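To make the countable-additivity point concrete (these are just the standard Lebesgue measure facts): the rationals in [0,1] are a countable union of points, so their measure is forced to be 0, while [0,1] itself is uncountable, so countable additivity never expresses its measure as a sum over individual points.

    \mu\big(\mathbb{Q}\cap[0,1]\big)
      = \mu\Big(\bigcup_{i=1}^{\infty}\{q_i\}\Big)
      = \sum_{i=1}^{\infty}\mu(\{q_i\})
      = \sum_{i=1}^{\infty} 0
      = 0,
    \qquad \text{while} \qquad
    \mu([0,1]) = 1.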

