Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.
He’s raised enough to compete at the level of Grok, Claude, et al.
He’s offering investors a pure-play AGI investment, from possibly one of the only organizations in a position to do so.
Who else would you give $1B to pursue that?
That’s how investors think. There are macro trends, ambitious possibilities on the through line, and the rare people who might actually deliver.
A $5B valuation is standard dilution, no crazy ZIRP-style round here.
If you haven’t seen investing at this scale in person, it’s hard to appreciate that capital allocation just happens with a certain number of zeros behind it & some people specialize in making the nine-zero decisions.
Yes, it’s predicated on his company being worth more than $500B at some point 10 years down the line.
If they build AGI, that is a very cheap valuation.
Think how ubiquitous Siri, Alexa, and ChatGPT are and how terrible/not useful/wrong they’ve been.
There’s not a significant amount of demand or distribution risk here. Building the infrastructure to use smarter AI is the tech world’s obsession globally.
If AGI works, in any capacity or at any level, it will have a lot of big customers.
All I’m saying is you used the word “if” a lot there.
AGI assumes exponential, preferably infinite and continuous improvement, something unseen before in business or nature.
Neither Siri nor Alexa was sold as AGI, and neither alone comes close to being a $1B product. GPT and other LLMs have quickly become a commodity, with AI companies racing to the bottom on inference costs.
I don’t really see the plan, product wise.
Moreover you say:
> Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.
Which is absolutely true, but that doesn’t imply more breakthroughs are just around the corner, nor does the current technology suggest AGI is coming.
VCs are willing to take a $1B bet on exponential growth with a $500B upside.
We regular folk see that and are dumbfounded, because AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does) and you can already see the logarithmic improvement curve.
That’s where the dismissive attitude comes from.
There are many things on earth that don't exist anywhere else in the universe (as far as we know). Life is one of them. Just think how unfathomably complex human brains are compared to what's out there in space.
Just because something doesn't exist anywhere in the universe doesn't mean humans can't create it (or can't create a machine that creates something that doesn't exist anywhere else), even if it might seem unimaginably complex.
> AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does)
Sure, but it doesn't have to continue forever to be wildly profitable. If it can keep the exponential growth running for another couple of rounds, that's enough to make everyone involved rich. No-one knows quite where the limit is, so it can reasonably be worth a gamble.
I couldn’t care less about the business side of technology.
I'm an engineer and a technophile, and as an engineer and a technophile it sours me to hear someone dangle sci-fi-level AGI as a pitch to investors when we're clearly not there right now and, in my opinion, this current wave of basically brute-force, statistics-based predictive models will not be the technique that gets us there.
It makes the cynic in me, and many others probably, cringe.
I’m curious if you’d be willing to share more of your personal context?
My intent is to be helpful. I’m unsure of how much additional context might be useful to you.
Investor math & mechanics are straightforward: institutional funds & family offices want allocations with firms like a16z because they get to invest in deals they could not otherwise access. The top VCs specialize in getting into deals that most investors will never get the opportunity to put money into. This is one of them.
For their Internal Rate of Return (IRR) to work out, at least one investment needs to return 100x or more on the entry valuation. VCs today focus on placing bets where that outcome is possible. Most investors aren't confident in their own ability to pick those winners, so they invest alongside lead investors who are. a16z is famous for that.
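To make that concrete, here's a rough back-of-the-envelope version of that math in Python, using only figures already in this thread (a ~$5B entry valuation, a ~10-year horizon, a 100x target); a sketch for illustration, not anyone's actual fund model:

    # Back-of-the-envelope venture math; illustrative numbers only.
    entry_valuation = 5e9    # reported post-money valuation, ~$5B
    target_multiple = 100    # the "100x or more" an outlier needs to return
    years = 10               # rough fund time horizon

    exit_valuation = entry_valuation * target_multiple
    annualized = target_multiple ** (1 / years) - 1

    print(f"implied exit valuation: ${exit_valuation / 1e9:,.0f}B")  # ~$500B
    print(f"implied annualized return: {annualized:.0%}")            # ~58% per year

That's where the "worth more than $500B at some point 10 years down the line" figure comes from: it's just the 100x multiple applied to today's valuation, and it implies roughly 58% compounded annual growth for a decade.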
There are multiple companies worth $1T+ now, so this isn’t a fantasy investment; it’s a bet.
The bet doesn’t need to be that AGI continues to grow in power infinitely, it just needs to create a valuable company in roughly a ten year time horizon.
Many of the major tech companies today are worth more money than anyone predicted, including the founders (Amazon, Microsoft, Apple, Salesforce, etc.). An outlier win in tech can have incredible upside.
LLMs are far from commoditized yet, but the growth of the cloud proves you can make a fortune on the commoditization of tech. Commoditization is another way of saying “everyone uses this as a cost of doing business now.” Pretty great spot to land on.
My personal view is that AGI will deliver a post-product world; Eric Schmidt recently stated the same. Products are digital destinations humans have to go to in order to use a tool and get a result. With AGI you can get a “product” on the fly, & AI has potentially very significant advantages in interacting with humans in new ways within existing products & systems, with no new product required. MS Copilot is an early example.
It’s completely fine to be dismissive of new tech; it’s common, even. What brings you here?
I’m here on HN because I love learning from people who are curious about what is possible & are exploring it through taking action. Over a couple decades of tech trends it’s clear that tech evolves in surprising ways, most predictions eventually prove correct (though the degree of impact is highly variable), and very few people can imagine the correct mental model of what that new reality will be like.
I agree with Zuck:
The best way to predict the future is to build it.
That's a very dismissive and unrealistic statement. There are plenty of investors investing in things such as AI and crypto out of FOMO who either see something that isn't there or are just pretending to see something in the hope of getting rich.
Obviously, there are plenty of investors who don't fall into this category. But let's not pretend that just because someone has a lot of money or invests a lot of money, it means they know what they are doing.
I suppose my phrasing was a bit harsh at the end. To be clear, I mean it doesn't follow that they know what they are doing on every investment. Investing misses happen! People are wrong!
I'm also confused by the negativity on here. Ilya had a direct role in creating the algorithms and systems behind modern LLMs. He pioneered the first deep learning computer vision models.
Even with Ilya demonstrating his capabilities in those areas you mentioned, it seems like investors are simply betting on his track record, hoping he’ll replicate the success of OpenAI. This doesn’t appear to be an investment in solving a specific problem with a clear product-market fit, which is why the reception feels dismissive.
I keep seeing praise for Ilya's achievements as a scientist and engineer, but until ChatGPT, OpenAI was in the shadow of DeepMind, and to my knowledge (I might be wrong) he was not that involved with ChatGPT?
The whole LLM race seems to be decelerating, and the hard problems around LLMs don't seem to have seen much progress in the last couple of years (?)
In my naive view, I think a guy like David Silver, the creator/co-lead of AlphaZero, deserves more praise, at least as a leader/scientist.
He even has lectures on deep RL from after doing AlphaGo: https://www.davidsilver.uk/teaching/
He has no LinkedIn and came straight from the game-dev industry before learning about RL.
I’m not optimistic about AGI, but it’s important to give credit where credit is due.
Even assuming the public breakthroughs are the only ones that happened, the fact that OpenAI was able to build an LLM pipeline, from data to training to production, at their scale before anyone else is a feat of research and engineering (and loads of cash).
I have this rock here that might grant wishes. I will sell it to you for $10,000. Sure it might just be a rock, but if it grants wishes $10k is a very cheap price!