Well, I don't think you're really qualified to disagree, considering that you're revealing ignorance of the relevant domain (computational complexity theory) by arbitrarily lumping very different problems together as "complex but still solvable".

Some problems are so hard (in the rigorous complexity-theoretic sense) that no amount of hardware is going to make a difference. For example, problems currently classed as NP-hard take, in the general case (as far as anyone knows how to solve them), an amount of time that grows exponentially with the problem size, so past a certain threshold they take too much time even given all the computers on earth.
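
To put rough numbers on that (a back-of-envelope sketch; the machine counts and problem size are made up for illustration):

    # Brute-forcing n independent yes/no choices means 2**n candidates.
    n = 100                             # say, 100 transfers to partition
    candidates = 2 ** n                 # ~1.3e30 subsets
    machines = 10 ** 9                  # a billion machines...
    checks_per_second = 10 ** 9         # ...each checking a billion subsets/sec
    seconds = candidates / (machines * checks_per_second)
    print(seconds / (3600 * 24 * 365))  # roughly 40,000 years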

The problem you'd have to solve here is basically the subset sum problem: given a set of transfers in and out of a mixer (let's say you already know which addresses it uses, which is not easy since it can make new ones for free), which subsets of the transfers going out have the same totals as which subsets of the transfers coming in? (From there you can identify an in/out set of addresses as belonging to the same person.)
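
For concreteness, here's a toy sketch of that matching problem (the amounts are invented, and a real chain would use integer satoshis rather than floats):

    # Brute-force the subset sum matching: find subsets of incoming and
    # outgoing transfers with equal totals. O(2^n) per side by construction.
    from itertools import combinations

    incoming = [5.0, 2.5, 7.25, 1.0]   # hypothetical deposits into the mixer
    outgoing = [3.5, 4.0, 7.25, 1.0]   # hypothetical withdrawals out of it

    def subset_sums(amounts):
        """Map each achievable total to every subset (by index) producing it."""
        sums = {}
        for r in range(1, len(amounts) + 1):
            for combo in combinations(range(len(amounts)), r):
                total = round(sum(amounts[i] for i in combo), 8)
                sums.setdefault(total, []).append(combo)
        return sums

    in_sums = subset_sums(incoming)
    out_sums = subset_sums(outgoing)
    for total in sorted(in_sums.keys() & out_sums.keys()):
        print(total, in_sums[total], out_sums[total])

Both tables grow exponentially with the number of transfers, which is exactly where the blow-up comes from.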

That problem likewise takes exponential time to solve in the general case. And since the mixer chooses the transfers, it can pick them so that solution partitions are hard to find (i.e. drive the instances into the part of the problem space where heuristics help the least).

Or they could take the opposite approach and add a random, time-varying tolerance (i.e. charge a fee that varies between x% and y% over time, or promise that you might get up to x% more or y% less than you put in) that makes the problem extremely underdetermined, so that there are arbitrarily many constraint-satisfying solutions and the aggregate data is uninformative.
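
A toy illustration of how even a small tolerance multiplies the number of "consistent" matches (again, every number here is invented):

    # With exact matching, few or no input subsets sum to a given payout;
    # with a +/- tolerance, many subsets become consistent with it.
    from itertools import combinations

    incoming = [1.3, 2.7, 3.1, 4.4, 5.2, 6.6]   # hypothetical deposits
    payout = 10.0                               # one observed withdrawal
    tolerance = 0.05                            # fee/bonus varies by +/-5%

    def consistent_subsets(amounts, target, tol):
        matches = []
        for r in range(1, len(amounts) + 1):
            for combo in combinations(amounts, r):
                if abs(sum(combo) - target) <= tol * target:
                    matches.append(combo)
        return matches

    print(len(consistent_subsets(incoming, payout, 0.0)))        # 0 exact matches
    print(len(consistent_subsets(incoming, payout, tolerance)))  # 4 plausible ones

And the ambiguity only gets worse as more transfers and more payouts are added to the pool.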

No, "Google solves complex problems" does not prove what you think it does.




> The problem you'd have to solve here is basically the subset sum problem: given a set of transfers in and out of a mixer (let's say you already know which addresses it uses, which is not easy since it can make new ones for free), which subsets of the transfers going out have the same totals as which subsets of the transfers coming in?

This isn't even the right problem, given the usage model that I posited and to which mootothemax replied: if you keep a balance stored in the mixing account and make deposits to it on a regular basis, then it's highly unlikely that outgoing payments will match incoming payments in the first place. How often do you currently deposit checks into your bank account in exactly the same amount as the outgoing payments you write immediately after making the deposit?

There'd be no conclusive correlating information here; payments into the mixing account would have different amounts and timestamps from payments going out of it, and in order to use inferences drawn from patterns as identifying information - e.g. someone orders a pizza from Mario's Pizzeria every Tuesday at 7 PM - you'd have to already have identifying information about the person you're trying to find in the first place, e.g. that I live near Mario's and happen to enjoy their pizza.

It's not just that the complexity of the problem increases with scale; it's also that the reliability of the correlations you can draw decreases with scale.


I think it's a bit rich to accuse me of not being qualified to disagree when you've decided to ignore the other half of the equation: that the transfers ultimately end up elsewhere, i.e. in the merchant's wallet.

I fear that as long as you and I fling accusations like this at one another, no good will come of this thread, so I propose we end it thus: I consider the problem solvable; you do not.


Indeed, your ignorance is every bit as good as my knowledge.


> Indeed, your ignorance is every bit as good as my knowledge.

Did you intend to be that self-deprecating? It's pretty nice to see someone being so humble, frankly.



