Great timing, supercomputer upgrade led to successful forecast of eruption (illinois.edu)
97 points by rbanffy on June 6, 2022 | 26 comments



I hiked Sierra Negra in summer 2019, less than a year after the eruption.

Of course, it could be pure hindsight, but the tour guide said that the 2018 eruption was anticipated by observable signs, and that our group was safe because there would be many signs before the next eruption.

I wish I knew about this model at the time so I could have asked a question or two.


I wonder if the real-time nature of the model makes it difficult to scale out to run on individual nodes (e.g. something like Folding@Home which sends off compute tasks to volunteer nodes across the world). I would imagine the government, universities, and so forth have lots of idling desktops and servers that could be clustered together into such a system.

Obviously the bureaucracy and management of such a system would be difficult, but it would be an inexpensive way to give these projects additional compute capacity.


Unfortunately the finite element software used in the forecast requires a commercial license to run.

https://www.comsol.com/products/licensing


Ethereum will someday (let's not argue about the timing) move to PoS. This will render tens of millions of GPUs idle.

I've been on a quest to find some sort of workload or task runner to take advantage of it and have been coming up empty-handed.

The issue is profitability. The capex has been spent... now we just need to find people willing to either risk opex or find profitable workloads.


These cards will simply move over to mining the next altcoin on the list, as they all did when Bitcoin was no longer profitable.


This time is different because none of the coins on the list have the effective potential of ETH (again, not for debate).

ETC has a 6.5-day deposit time on exchanges, which makes it pretty hard to mine, as small miners won't want to wait to get paid. This should come down as the hashrate (and security of the network) increases. However... without much network use (aka utility)... the only thing to do with it will be to mine and dump.

Thus, the price of all the alts will drop, profitability will go down and cards will turn off. I'm already seeing signs of people turning off their cards in the GPU Miners forums...

We saw a preview of this behavior when Grin came out...


This whole argument revolves around exchanges not having a 'higher tier' user account profile model, which is simply not the case. Basically all serious OTC desks running out of exchanges work on this model and are happy to accommodate depending on your liquidity. If you are making any decent amount of money mining, they will be more than happy to enable a more substantial rotating deposit/withdrawal profile for preferred customer accounts.


I was referring to the small miners. On lower-cap coins (everything but ETH/BTC, by a long shot), there isn't as much volume on the market, so even small farms will have a larger impact on price. Not all of them will have access to the OTC desks, and they also tend to react en masse... when one freaks out, they all freak out.


By the time the PoS migration occurs, those GPUs will not be cost-effective to run. Better to just buy cheap, newer, power-efficient GPUs.


This sounds similar to what we used in districts when I was still in the K12 sector. Something called "Dataseam" which was used for cancer research.

https://www.kydataseam.com/dataseamgrid


Out of curiosity, why is an article being written in 2022 about something that happened in 2017-18?


The actual scientific paper was just published:

https://www.science.org/doi/10.1126/sciadv.abm4261


"This takes an incredible amount of computing power previously unavailable to the volcanic forecasting community"

Could this workload be approximated with neural net models and run on TPUs?


Take a step back. TPUs accelerate any linear algebra, so they can accelerate many interesting simulations, not just neural nets. That is much more exciting even if it is tangential to your question. And to be honest, neural net surrogates are frequently just worse than a straightforward "real physics" simulation. There is some cool "Neural ODE" and "physics-informed neural nets" research that bridges the gap, though.
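For instance (a minimal sketch, assuming JAX is available; the operator and sizes here are made up for illustration), a plain "real physics" time step is just dense linear algebra that jax.jit compiles for CPU, GPU, or TPU, with no neural net involved:

    import jax
    import jax.numpy as jnp

    n = 512
    key = jax.random.PRNGKey(0)
    # Stand-in linear operator; a discretized Laplacian would go here.
    A = jax.random.normal(key, (n, n)) * 1e-3
    x = jnp.ones(n)

    @jax.jit
    def step(x):
        # One explicit update x <- x + A @ x: pure dense linear algebra.
        return x + A @ x

    for _ in range(100):
        x = step(x)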


> TPUs accelerate any linear algebra

Do they actually? Or do they just accelerate matrix multiplications of a very specific size?

I am not really a domain expert for scientific computing, but for example in finite element analysis, the tetrahedral stiffness matrix is 12x12. Presumably the matrices in fluid flow simulations, climate modeling, etc are also modestly sized, and the challenge has more to do with just how many total multiplications there are.

It is not at all clear to me that an accelerated 128x128 multiplication is helpful in these contexts.


What matters for TPUs today is whether the problem is sparse. TPUs are good at dense matrix multiplication (btw, the 128x128 is just the unit size; it still works with larger systems through the obvious composition of multiplications), with a modest amount of ALU and vector work and lots and lots of "tensor" (multidimensional array) traffic. And they are almost entirely 32-bit floats, not 64-bit.

So far TPUs have not really proved their worth for general-purpose simulations across a wide range of fields. They weren't built for that, and we have other systems that can run the workloads faster, although it's unclear what would happen if you took a really great team and had them work on a hard simulation problem (for example, my previous area was molecular dynamics, and TPUs can do great work on n-body simulations, but not so great on other parts of the force field).

If you have sparse mixed problems then CPUs are still the most cost effective. If you have dense matrix problems GPUs are the most cost effective. If you have problems that don't fit on other systems and you have a good team to optimize to the hardware, TPUs are an option and could be cost effective in principle.
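To make the tiling point above concrete (a sketch, assuming JAX; the sizes are made up, and the hardware does this composition for you; this just spells out the arithmetic), a large fp32 matmul decomposes exactly into 128x128 tile products:

    import jax
    import jax.numpy as jnp

    T, n = 128, 512                      # tile size, matrix size (illustrative)
    kA, kB = jax.random.split(jax.random.PRNGKey(0))
    A = jax.random.normal(kA, (n, n), dtype=jnp.float32)
    B = jax.random.normal(kB, (n, n), dtype=jnp.float32)

    C = jnp.zeros((n, n), dtype=jnp.float32)
    for i in range(0, n, T):
        for j in range(0, n, T):
            for k in range(0, n, T):
                # Accumulate one 128x128 tile product.
                C = C.at[i:i+T, j:j+T].add(A[i:i+T, k:k+T] @ B[k:k+T, j:j+T])

    assert jnp.allclose(C, A @ B, atol=1e-2)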


Presumably finite element analysis involves multiplying many such matrices? Hopefully in parallel?

If so, you can represent them as an Nx12x12 "tensor" for some large N (presumably proportional to the number of elements?), and I'm reasonably sure that's within the realm of what TPUs accelerate well.
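Something like this sketch (assuming JAX; the shapes are made up), where all the per-element 12x12 products collapse into one batched einsum:

    import jax
    import jax.numpy as jnp

    N = 100_000                                    # number of elements (made up)
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    k_elem = jax.random.normal(k1, (N, 12, 12))    # per-element stiffness
    u_elem = jax.random.normal(k2, (N, 12))        # per-element displacements

    @jax.jit
    def element_forces(k, u):
        # Batched matrix-vector products: f_i = k_i @ u_i for every element.
        return jnp.einsum('nij,nj->ni', k, u)

    f = element_forces(k_elem, u_elem)             # shape (N, 12)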


FEA is essentially solving the equation F = kx (plus cv and ma terms if you're going dynamic).

There are a few steps.

1) Discretize the problem, breaking the item into a mesh: maybe triangles/tetrahedra, but maybe also cubes, as the element math is easier.

2) For each element, apply a geometric transform to the element stiffness matrix to convert it from a unit stiffness matrix to global coordinates. Note that this step accounts for the shape of the element as well.

3) Assemble the global stiffness matrix by iterating over all the degrees of freedom of all the nodes and adding the contribution from each element stiffness matrix. This results in a (nodes × dof) square symmetric matrix that's generally sparse and banded toward the diagonal. If you're doing dynamic, the damping and mass matrices need to be assembled as well.

4) Solve the equation using some factorization method: LU or similar for static, or an eigenvalue solution for dynamic.

If you're doing nonlinear/plastic, then repeat generating the stiffness matrix at each iteration.
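A toy version of steps 1-4 (a sketch, assuming JAX; a 1D bar of rod elements with made-up properties): assemble the global stiffness matrix from 2x2 element matrices, fix one end, apply a tip load, and solve K u = F. A dense solve stands in for the factorization step; real codes exploit the sparse, banded structure.

    import jax.numpy as jnp

    n_elem, L, EA = 10, 1.0, 1.0                # made-up bar properties
    h = L / n_elem
    k_e = (EA / h) * jnp.array([[ 1.0, -1.0],
                                [-1.0,  1.0]])  # element stiffness (step 2)

    n_nodes = n_elem + 1
    K = jnp.zeros((n_nodes, n_nodes))
    for e in range(n_elem):                     # assembly (step 3)
        K = K.at[e:e+2, e:e+2].add(k_e)

    F = jnp.zeros(n_nodes).at[-1].set(1.0)      # unit load at the free end

    # Fix node 0 (u = 0) by deleting its row/column, then solve (step 4).
    u = jnp.linalg.solve(K[1:, 1:], F[1:])
    # Tip displacement should come out to F*L/(EA) = 1.0.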

It's been years since I did this, but at the time ('97) I went to a lecture that asserted that, up to that point, since the first Cray, the hardware improvements and the software improvements had each amounted to about a 10^6 speedup for FEA problems.


These are usually iterative methods.


They mention porting to machine learning, which I assume means neural nets.


ML also includes reinforcement learning and other types of training and inference.


Very cool. Hopefully learning to constantly crunch and simulate the fire hose of sensor data at places like this will lead to ever-better earthquake predictions too.


Hail Alma Mater! Nice work.


Haha, I feel the same way. It feels good to see Illinois (do people still call it UIUC?) in the scientific headlines.


Just graduated and, yes, we definitely still call it UIUC.


Glad to hear! Congrats on graduation. '07 alumnus here.



