> This isn't unusual or limited to supercomputers. Look at the LHC, look at the James Webb telescope. They had delays and overruns and problems too. This isn't "failing science", it's a project having project problems.
That is not comparable at all. They're not building the first supercomputer in the world, nor inventing much new tech aside from the usual progression of faster and smaller.
They weren't building the first space telescope or particle accelerator in the world either. I'm not quite sure what the thinking behind pulling that one out like a trump card was.
And believing that silicon process shrinks, and the new processor designs that take advantage of them, are nothing much new betrays an unfortunate misunderstanding of these technologies. To you, "smaller and faster" looks like your phone or PC getting a little cheaper and faster every few years. The technology that enables that is staggering: it pushes the limits of materials science, chemistry, and a bunch of fields of physics relating to electronics and photonics. Designing the chip requires, again, pushing the boundaries of hard problems in computer science and mathematics to model, compile, optimize, and verify the logic.
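To make "verify the logic" concrete, here's a toy sketch of equivalence checking, one of those hard problems. Real EDA tools do this symbolically (SAT solvers, BDDs) on designs with billions of gates; the brute-force version below, with made-up functions, only works because the circuit has three inputs:

```python
# Toy logic equivalence check: does an "optimized" circuit still
# implement the specification? (Hypothetical 3-input majority gate.)
from itertools import product

def spec(a, b, c):
    # Reference logic: majority of three inputs.
    return (a and b) or (b and c) or (a and c)

def optimized(a, b, c):
    # A rewritten form a synthesis tool might produce.
    return (a and (b or c)) or (b and c)

# Exhaustive check over all 2^3 input combinations. At billions of
# gates this blows up, which is why real verification is hard.
assert all(spec(*bits) == optimized(*bits)
           for bits in product((False, True), repeat=3))
print("equivalent on all 8 input combinations")
```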
The two most complicated machines ever made are the microprocessors on silicon chips and the factories which make them -- a single fab costs double what it took to build the LHC at CERN, twice the GDP of Somalia. And that does not include all the R&D cost to reach the point where they can be built.
The supercomputer required predicting these things years into the future and intercepting that technology. Intel ran into unforeseen delays which derailed the supercomputer; it's not all that surprising that such efforts don't always go smoothly. There are three companies left which can manufacture high-performance chips, and many more have fallen. Only a handful can design high-performance CPUs and GPUs.
> They weren't building the first space telescope or particle accelerator in the world either. I'm not quite sure what the thinking behind pulling that one out like a trump card was.
I thought it was obvious, but... the fact that a new CPU generation comes every few years while a bigger space telescope comes every 30 years? Why do you claim it is because of the tech, based on no source whatsoever, when in almost every case the reason for delays in just about any project is either mismanagement or engineering not being able to deliver what sales signed off on?
"Hurr durr big computers are hard" is not an argument. It's not even a "single machine" (which would be much more complex to realize), it's a bunch of networked nodes as most (all?) modern supercomputers are, which reduces the scale immensely, as once you get the single server it's just a question of interconnectivity (which Intel will buy from switch vendor most likely) and plumbing.
> And believing silicon technology process shrinks and new processor designs that take advantage of it is nothing much new betrays an unfortunate misunderstanding of these technologies. Smaller and faster to you looks like your phone or PC get a little cheaper and faster every few years. The technology that enables that is staggering. These push the limits of materials science and chemistry and a bunch of fields of physics relating to electronics and photonics. Designing the chip requires again pushing boundaries in hard problems in computer science and mathematics to model and compile and optimize and verify the logic.
They are nonetheless iterative. And Intel is still using essentially the same process node as the generation before, and the same as their consumer chips. Intel wasn't inventing a new material or a new way to make chips for this; they planned to use the same chips that will eventually land in servers. If anything, it looks like Intel found a clever way to fund their new architecture...
The project already changed direction twice (from 180 petaFLOP in 2018, to 1 exaFLOP in 2021, to 2 exaFLOP now), which leads me to believe it's mostly a project management issue. The CPUs the supercomputer was supposed to be built from are already on sale, as the last iteration was upgraded to the "new" 2022 generation.
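For scale, a quick back-of-the-envelope on those retargets (figures as quoted above):

```python
# Scale factors between the three announced targets (FLOPS).
pflop, eflop = 1e15, 1e18
print(f"180 PF -> 1 EF: {1 * eflop / (180 * pflop):.1f}x")  # ~5.6x
print(f"1 EF  -> 2 EF: {2 * eflop / (1 * eflop):.1f}x")     # 2.0x
```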
> I thought it was obvious, but... the fact that a new CPU generation comes every few years while a bigger space telescope comes every 30 years?
What's your question?
> Why do you claim it is because of the tech, based on no source whatsoever, when in almost every case the reason for delays in just about any project is either mismanagement or engineering not being able to deliver what sales signed off on?
I don't know what you're talking about, but you don't seem to have understood what I was saying. It is a technology-based project, and just like any big project which relies on advancing the state of the art (like the LHC and JWST), it can have problems, including mismanagement.
> "Hurr durr big computers are hard" is not an argument.
That wasn't my argument. Was that your argument for the LHC and JWST?
> It's not even a "single machine" (which would be much more complex to realize); it's a bunch of networked nodes, as most (all?) modern supercomputers are, which reduces the scale immensely: once you have the single server, it's just a question of interconnect (which Intel will most likely buy from a switch vendor) and plumbing.
I've worked on supercomputer bids before, on big SSIs (the old SGI Altixes) and on clusters, including one now in the top 10, and they all run code I've written. It is actually far, far more than just cabling a bunch of off-the-shelf boxes and switches together.
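One concrete reason, as a sketch with assumed numbers: at this scale, even very reliable nodes yield a machine that is failing somewhere almost constantly, and the system software (scheduling, checkpointing, the interconnect stack) has to survive that.

```python
# Toy reliability arithmetic: system MTBF shrinks linearly with node
# count (assuming independent failures). The per-node MTBF here is an
# assumption for illustration, not a measured figure.
node_mtbf_hours = 5 * 365 * 24   # assume one failure per node per 5 years

for nodes in (1, 1_000, 10_000):
    system_mtbf = node_mtbf_hours / nodes
    print(f"{nodes:>6} nodes -> something fails every ~{system_mtbf:.1f} h")
```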
> They are nonetheless iterative.
Certainly not. Some design shrinks and half-nodes are relatively small jumps, but full node transitions are not. This supercomputer bid was likely developed around 2013, soon after Intel scaled production of 22nm, and they expected it to be on 10nm or even 7nm.
> And Intel is still using essentially the same process node as the generation before, and the same as their consumer chips.
That's because they had so many problems and delays with their silicon, none of which was apparent in 2013: their roadmap blew out multiple times, in the end by many years.
> Intel wasn't inventing a new material or a new way to make chips for this; they planned to use the same chips that will eventually land in servers.
And their server business suffered badly as well during that time, for exactly the same reasons.
> The project already changed direction twice (from 180 petaFLOP in 2018, to 1 exaFLOP in 2021, to 2 exaFLOP now), which leads me to believe it's mostly a project management issue. The CPUs the supercomputer was supposed to be built from are already on sale, as the last iteration was upgraded to the "new" 2022 generation.
Big projects have big project issues. Project management quite likely had problems; nowhere did I suggest that was not the case. The LHC and JWST likewise had project management failures, cost overruns, re-scoping, etc.