It's not quite a 1:1 correspondence. For example, high-quality video came to Linux somewhere in the 2.x kernel series, while Windows got it in 4.0. That said, I believe Linux got it about a year after Windows did.
I don't know how cryptography works, so I'd like to know why the key length for all cryptography schemes seems to go up in powers of 2. What's the reason one couldn't have a 1300 bit RSA key, for example?
Fast modular multiplications for crypto keys are done using Montgomery multiplication; if the numbers being multiplied are a power of two in size, the operations are all masks and shifts after the initial transformation.
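To make that concrete, here's a toy sketch of Montgomery multiplication in Python. This is illustrative only (Python big ints instead of fixed-size words, not constant-time, and the function and variable names are mine, not from any particular library), but it shows why the reduction is just a mask and a shift when R is a power of two:

```python
# Toy Montgomery multiplication sketch (not constant-time, not for real use).

def montgomery_params(n, k):
    """n must be odd; R = 2**k with R > n."""
    r = 1 << k
    n_inv = pow(n, -1, r)        # n^{-1} mod R (Python 3.8+)
    n_prime = (-n_inv) % r       # -n^{-1} mod R
    return r, n_prime

def montmul(a, b, n, k, n_prime):
    """Compute a*b*R^{-1} mod n, for a, b already in Montgomery form.
    The reduction uses only a mask (mod R) and a shift (div R)."""
    t = a * b
    m = (t * n_prime) & ((1 << k) - 1)   # mod R is a mask
    u = (t + m * n) >> k                 # div R is a shift
    return u - n if u >= n else u

# Usage: compute (x*y) mod n by converting into and out of Montgomery form.
n, k = 97, 8                 # toy modulus; R = 256 > 97, n odd
r, n_prime = montgomery_params(n, k)
x, y = 40, 7
xm = (x * r) % n             # into Montgomery form
ym = (y * r) % n
zm = montmul(xm, ym, n, k, n_prime)
z = montmul(zm, 1, n, k, n_prime)   # back out of Montgomery form
assert z == (x * y) % n
```

The expensive division by the modulus is replaced by division by R, which is free when R is a power of two; that's the sense in which the inner loop becomes all masks and shifts.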
Alcohol has many other disadvantages which are, IMO, worse than just the harm to personal health, though these normally only surface when it's overused. Can't say the same about marijuana though, and I agree with you about that.
Do you really need SSDs to do a thousand reads per second? I'm assuming this is with a hot cache, and an access pattern on the 10TB of data that's heavily skewed toward a much smaller portion of the total dataset. RAIDed 15K drives and the nature of the queries matter too. We don't really know enough about the whole setup to say what's possible.
Please elaborate. Solving linear equations with software is nothing new; this algorithm just does it faster. So I'd like to see prior art for what you're describing here; it sounds interesting. How exactly would one apply linear algebra to solving the first and third questions, for example?
Back in college a few years ago, we mostly used Simplex and other tableau-pivot-based algorithms for solving large linear programming problems. Wikipedia article on the Simplex algorithm: http://en.wikipedia.org/wiki/Simplex_algorithm
Most of the problems individuals face day to day are fairly simple to model and solve as linear programs. However, plenty of genuinely complex ones get solved every day too.
A few examples of large-scale linear programming problems:
- For Amazon.com: what quantity of each item should be stocked at each warehouse each day to minimize inventory while also optimizing for shipping time and cost to demand nodes (customers).
- For an airline: how to plan and schedule flights to all domestic and international airports to maximize profit.
- In shipping logistics: how to allocate trucks and set routes to minimize fuel cost while satisfying delivery time.
For complex systems, you can easily end up with a linear program with millions of independent variables (producing millions of rows in the linear system).
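For a feel of what those solvers actually do, here's a toy tableau simplex in plain Python. This is a textbook sketch under simplifying assumptions (all constraints are "<=" with nonnegative right-hand sides, so the all-slack basis is feasible); real solvers additionally handle degeneracy, cycling, and sparse million-row data:

```python
# Toy tableau simplex for: maximize c.x subject to A x <= b, x >= 0
# (textbook sketch; assumes b >= 0 so the initial all-slack basis is feasible).

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows: one per constraint (with slack columns), plus the objective row.
    T = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-ci for ci in c] + [0] * m + [0])   # objective row for maximization
    basis = [n + i for i in range(m)]             # start with the slack basis
    while True:
        # Entering variable: most negative objective coefficient (Dantzig rule).
        pivot_col = min(range(n + m), key=lambda j: T[m][j])
        if T[m][pivot_col] >= 0:
            break                                 # optimal
        # Leaving variable: minimum ratio test.
        ratios = [(T[i][-1] / T[i][pivot_col], i)
                  for i in range(m) if T[i][pivot_col] > 1e-12]
        if not ratios:
            raise ValueError("unbounded")
        _, pivot_row = min(ratios)
        basis[pivot_row] = pivot_col
        # Pivot: normalize the pivot row, eliminate the column elsewhere.
        p = T[pivot_row][pivot_col]
        T[pivot_row] = [v / p for v in T[pivot_row]]
        for i in range(m + 1):
            if i != pivot_row and T[i][pivot_col]:
                f = T[i][pivot_col]
                T[i] = [a - f * w for a, w in zip(T[i], T[pivot_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[m][-1]

# Example: maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18
x, opt = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, opt)   # optimum at x = 2, y = 6, objective value 36
```

Each pivot walks along an edge of the feasible polytope to an adjacent vertex; the warehouse and routing problems above are the same idea with millions of columns instead of two.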
Solving a linear system of equations is not the same as solving a linear program. The article is about the former, not the latter. Linear programming is an optimization problem, subject to constraints described by inequalities. Solving a linear system means finding values that satisfy relationships defined by equalities. There are lots of matrix multiplications involved in both, but other than that the solutions don't really resemble each other. For example, linear programming is not even known to be solvable in strongly polynomial time (simplex is exponential in the worst case, and the best known method is weakly polynomial), while the algorithm in the paper improves from one strongly polynomial result to another.
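To illustrate the equality side of that distinction: a linear system A x = b is solved directly by something like Gaussian elimination. This is a textbook O(n^3) sketch (not the paper's algorithm, which is about doing better asymptotically), with no objective function or inequalities anywhere in sight:

```python
# Textbook Gaussian elimination with partial pivoting for A x = b
# (dense O(n^3) sketch; not the paper's algorithm).

def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        if abs(M[col][col]) < 1e-12:
            raise ValueError("singular matrix")
        # Eliminate this column below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5,  x + 3y = 10  ->  x = 1, y = 3
print(solve([[2, 1], [1, 3]], [5, 10]))
```

There's exactly one answer (when A is nonsingular), which is why "faster" here means faster at the same exact task, rather than a better search over a feasible region.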
I just wanted to point out that it's not 90k concurrent users, but 90k total users (for some unknown definition of "user"). He does say it processes a couple dozen tweets per second at low load, but that's a far cry from 90k concurrent users. That's not to say Go wouldn't help in that case; it really depends on what the application is doing.