The better analogy is the massive investment in fiber optic cable in the late 90s. The companies in that business (Global Crossing, WorldCom, etc.) went bust after investing tens of billions, but the capacity remained useful (at a >90% drop in price) for future internet services. Around 2000, as the bubble was bursting, only about 5% of capacity was in use, yet it proved essential to getting internet-first companies like Google, Amazon, and Netflix going.





I was initially agreeing with you, but I don't see NVIDIA, AWS, Microsoft, etc. going to zero (and WorldCom was unraveled by accounting fraud).

Sun Microsystems was sold to Oracle for $7B, and Netscape was acquired by AOL for $10B.



Yeah, Cisco didn't go to zero in 2000 and Nvidia won't go to zero. It will merely go down 90%.

When Cisco's stock bottomed out after the crash, it was still 150% higher than before the boom.

So will Nvidia be worth $5 trillion AFTER the AI bust?
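For what it's worth, here's the rough arithmetic if both of those claims held for Nvidia (purely illustrative, using only the figures mentioned in this thread):

    # Illustrative only: combine the "$5T peak", "drops 90%", and
    # "still 150% above pre-boom after the bust" figures from this thread.
    peak_cap_t = 5.0                           # ~$5 trillion peak, from the question above
    post_bust_cap_t = peak_cap_t * (1 - 0.90)  # a 90% drop leaves ~$0.5T

    # "150% higher than before the boom" means the post-bust value is 2.5x pre-boom,
    # so the implied pre-boom market cap would be:
    implied_pre_boom_cap_t = post_bust_cap_t / 2.5  # ~$0.2T

    print(f"post-bust: ${post_bust_cap_t:.1f}T, implied pre-boom: ${implied_pre_boom_cap_t:.1f}T")

Whether the Cisco pattern actually carries over depends on where Nvidia traded before the boom, which this thread doesn't pin down.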


Good old JDS Uniphase was one of the first individual stocks I bought. I mean, it had to go up, right? Fiber and dark fiber and the continual threat of the internet collapsing under load… better buy that stock!

WorldCom here. Ah, the Enrons of the internet.

I wonder how the analogy holds up given computational advances. Will a bunch of H100s be as useful a decade from now as fiber ended up being?

I might be wrong, but my understanding is that we're on a decelerating slope of perf/transistor and have been for quite a while. I just looked up OpenCL benchmark results for the 1080 Ti vs. the 4090: perf/W went up by only 2.8x despite the move from 16nm to 5nm. With perfect power scaling, we would have seen more than a 10x increase.
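Back-of-the-envelope on that, treating the marketing node names as if they were true linear dimensions (they aren't exactly, so this is only a rough sketch):

    # Rough Dennard-style scaling check (illustrative, not measured data).
    old_node_nm = 16.0   # GTX 1080 Ti process node (marketing name)
    new_node_nm = 5.0    # RTX 4090 process node (marketing name)

    # Ideal transistor density gain scales with the square of the linear shrink.
    ideal_density_gain = (old_node_nm / new_node_nm) ** 2   # ~10.2x

    # With perfect power scaling, perf/W would track that density gain.
    ideal_perf_per_watt_gain = ideal_density_gain

    observed_perf_per_watt_gain = 2.8   # the 1080 Ti -> 4090 OpenCL figure quoted above

    shortfall = ideal_perf_per_watt_gain / observed_perf_per_watt_gain
    print(f"ideal ~{ideal_perf_per_watt_gain:.1f}x vs observed {observed_perf_per_watt_gain}x "
          f"(~{shortfall:.1f}x short of perfect scaling)")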

Probably not. There will be better GPUs. We did use all those Kepler K10s and K80s fifteen or so years ago; they were OK for models with a few million parameters. Then Pascal and Volta arrived ten years ago with a massive speedup and larger memory, letting you train same-size models 2-4 times faster, so you simply had to replace all the Keplers. Then Turing happened, making all the P100s and V100s obsolete. Then the A100, and now the H100. The next L100 or whatever, with just more on-board memory, will make the H100 obsolete quickly.

One thing that's missing is that we've massively improved algorithmic efficiency lately, so an H100 will still be performant several years from now. The problem is that it will use more power and physical space than an outperforming future part, and so will eventually need to be scrapped.

The same applies to the railroad analogy used in the original article.


