
If Harris were president, this letter would probably say "We proudly support the future of American scholarship and inclusion over simple-minded emotional reactions".

If the Dems had put any strategy into winning elections, they would never have picked Hillary, Biden, or Harris.

Just in time for the midterm elections? Seems like the Dems have this handed to them on a silver platter, but they're so bad at winning elections, it probably still won't matter.

They're not going to suddenly become competent enough to implement something like that.

"You are probably not very exposed to non-native speakers."

If native speakers have to be exposed to speakers of other languages just to understand it, then it's not even English at that point. What it is, who knows.


Absolutely not happening. If robots get versatile enough to clean a random home, then they'll be good enough for higher-value work, like serving as robotic soldiers or building more robots. And if they're taking lots of jobs like that, then there's going to be tons of spare man-hours for busywork like cleaning.

That seems to say it'll eventually happen, the question is just on what timeframe. Three years? Five? Ten? Fifty?

You don't understand! Every erotic chatbot service keeps getting censored; what happened to CharacterAI just keeps happening. There's a serious supply shortage, do you really want people turning to Grok? The spice must flow!!!


"Billionaire investors are more irrational than me, a social media poster."


Zuckerberg has spent over fifty billion dollars on the idea that people will want to play a Miiverse game where they can attend meetings in VR and buy virtual real estate. It's like the Spanish emptying Potosi to buy endless mercenaries.


I mean, why do you think they have any idea how a completely new thing will turn out?

They are speculating. If they are any good, then they do it with an acceptable risk profile.


The correlation between "speculator is a billionaire" and "speculator is good at predicting things" is much higher than the correlation between "guy has a HN account" and "guy knows more about the future of the AI industry than the people directly investing in it".

And he doesn't just think he has an edge, he thinks he has superior rationality.


Past performance is not indicative of future results.

You would need ~30 years of continuously beating the market to be able to claim that you are statistically likely to be better than random chance.

Does your average speculator have 30 years of experience beating the market, or were they just lucky?
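For reference, here's one back-of-the-envelope way a number like "~30 years" could arise (my own assumptions, not a rigorous study): treat a manager's yearly excess return as noisy, and ask how long a track record must be before its mean is statistically distinguishable from zero. The usual shortcut is t-statistic ≈ information ratio × √years, which you can invert:

    # Rough sketch with hypothetical numbers: years of track record needed
    # before excess returns are statistically distinguishable from zero,
    # using the approximation t ~= information_ratio * sqrt(years).
    def years_needed(information_ratio, t_stat=1.96):
        return (t_stat / information_ratio) ** 2

    # An IR of ~0.4 is commonly cited as a good active manager.
    for ir in (0.5, 0.4, 0.35):
        print(f"IR {ir:.2f}: ~{years_needed(ir):.0f} years")
    # IR 0.50: ~15 years
    # IR 0.40: ~24 years
    # IR 0.35: ~31 years

Under those assumptions, a good-but-not-exceptional manager needs a 25-30 year track record, which is roughly where the "~30 years" figure sits.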


I haven’t heard that statistic before. And the formulation seems imprecise? Does continuously beating the market mean that every single minute your portfolio value gains relative to the market?


"You would need ~30 years of continuously beating the market to be able to claim that you are statistically likely to be better than random chance."

You use the word "statistically" as if you didn't just pull "~30 years" out of nowhere with no statistics behind it. And people become billionaires by making longshot bets on industry changes, not by playing the market while they work a 9-to-5.

"Does your average speculator have 30 years of experience beating the market, or were they just lucky?"

The average speculator isn't even allowed to invest in OpenAI or these other AI companies. If they bought Google stock, they'd mostly be buying into Google's other revenue streams.

You could just cut to the chase and invoke the Efficient Market Hypothesis, but that's easily rebutted here, because the AI industry is not an efficient market: there is neither information symmetry nor open access to investing.


"Having money is proof of intelligence"


It kinda is; at least I'd say a rich person is, on average, more intelligent than a poor person.


Anyone who believes this hasn't spent enough time around rich people. Rich people are almost always rich because they come from other rich people. They're exactly as smart as poor people, except the rich folk have a much, much cushier landing if they fail so they can take on more risk more often. It's much easier to succeed and look smart if you can just reload your save and try over and over.


Why do you think that? Do you have data or is it just, like, your vibe?


One can apply a brief sanity check via reductio ad absurdum: it is less logical to assume that poor individuals possess greater intelligence than wealthy individuals.

Increased levels of stress, reduced access to healthcare, fewer educational opportunities, a higher likelihood of being subjected to trauma, and so forth paint a picture of correlation between wealth and cognitive function.


Yeah, that's not a good argument. It might be true for the very poor, sure, but not for the majority of the lower-to-middle of the middle class. There's fundamentally no difference between your average blue-collar worker and a billionaire, except the billionaire almost certainly had rich parents and got lucky.

People really don't like the "they're not, they just got lucky" statement and will do a lot of things to rationalize it away lol.


> lower-to-middle of the middle class

The comparison was clearly between the rich and the poor. We can take the 99.99th wealth percentile, where billionaires reside, and contrast that to a narrow range on the opposite side of the spectrum. But, in my opinion, the argument would still hold even if it were the top 10% vs bottom 10% (or equivalent by normalised population).


Counterpoint: if this were true, rich people would always remain rich, and we would have an ossified society.

Intelligence is not the sole prerequisite to wealth or “being rich”.

People can specialize in being intelligent, educated, well read, and more - while still being poor.

And we know that most entrepreneurs fail, which is why VCs function the way they do.



It does seem like common sense that they would be linked. But there is also research:

https://thesocietypages.org/socimages/2008/02/06/correlation...


I think you're reading way too much into OpenAI bungling its 15-month product lead, but also the whole "1 AGI company will take off" prediction is bad anyway, because it assumes governments would just let that happen. Which they wouldn't, unless the company is really really sneaky or superintelligence happens in the blink of an eye.


I think OpenAI has committed hard to the 'product company' path, and will have a tough time going back to interesting science experiments that may or may not work, but are necessary for progress.


Governments react at a glacial pace to new technological developments. They wouldn't so much 'let it happen' as find that it had already happened and they simply never noticed until it was too late. If you are betting on the government having your back in this, then I think you may end up disappointed.


I think if any government really thought that someone was developing a rival within their borders they would send in the guys with guns and handle it forthwith.


They would just declare it necessary for military purpose and demand the tech be licensed to a second company so that they have redundant sources, same as they did with AT&T's transistor.


That was something tied to a bunch of very specific physical objects. There is a fair chance that once this thing really comes into being, especially if it takes longer than a couple of hours to shut it down or contain it, the genie will never ever be put back into the bottle again.

Note that 'bits' are a lot easier to move from one place to another than hardware. If invented at 9 am, it could be on the other side of the globe before you're back from your coffee break at 9:15. This is not at all like almost all other trade secrets and industrial gear: it's software. Leaks are pretty much inevitable, and once it is shown that it can be done, it will be done in other places as well.


This is generally true in a regulatory sense, but not in an emergency. The executive can either covertly or overtly take control of a company if AGI seems too powerful to be in private hands.


Are there any examples in recorded history of such nationalization of technology besides the atomic bomb?


While generally true, a lot of governments have not only definitely noticed AI, they're also getting flak for using it as an assistant and are actively promoting it as a strategic interest.

That said, any given government may be thinking like Zuckerberg[0] or senator Blumenthal[1], so perhaps these governments are just flag-waving what they think is an investment opportunity without any real understanding…

[0] general lack of vision, thinking of "superintelligence" in terms of what can be done with/by the Star Trek TNG era computer, rather than other fictional references such as a Culture Mind or whatever: https://archive.ph/ZZF3y

[1] "I alluded, in my opening remarks, to the jobs issue, the economic effects on employment. I think you have said, in fact, and I'm going to quote, ``Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity,'' end quote. You may have had in mind the effect on jobs, which is really my biggest nightmare, in the long term." - https://www.govinfo.gov/content/pkg/CHRG-118shrg52706/html/C...


Have you not been watching Trump humiliate all the other billionaires in the US? The right sort of government (or maybe wrong sort, I'm undecided which is worse) can very easily bring corporations to heel.

China did the same thing when their tech-bros got too big for their boots.


Humiliate? They're jostling for position and pushing each other out of the way to see who can buy the most government influence while giving the least. The only thing being humiliated here is the United States' reputation the world over. Those billionaires are making out like bandits; finally they really get to call the shots. That they give the doddering old fool some trinkets in return for untold access to power is the thing that should worry you, not that there - occasionally - is a billionaire with buyer's remorse. There are enough of them to replace the ones that no longer want to play the game.


* or governments fail to look far enough ahead, due to a bunch of small-minded, short-sighted, greedy, petty fools.

Seriously, our government just announced it's slashing half a billion dollars in vaccine research because "vaccines are deadly and ineffective", and it fired a chief statistician because the president didn't like the numbers he calculated, and it ordered the destruction of two expensive satellites because they can observe politically inconvenient climate change. THOSE are the people you are trusting to keep an eye on the pace of development inside of private, secretive AGI companies?


That's just it, governments won't "look ahead", they'll just panic when AGI is happening.

If you're wondering how they'll know it's happening, the USA has had DARPA monitoring stuff like this since before OpenAI existed.


> governments

While one in particular is speedrunning into irrelevance, it isn't particularly representative of the rest of the developed world (and hasn't been in a very long time, TBH).


"irrelevance" yeah sure, I'm sure Europe's AI industry is going to kick into high gear any day now. Mistral 2026 is going to be lit. Maybe Sir Demis will defect Deepmind to the UK.


That's not what I was going for (I was more hinting at isolationist, anti-science, economically self-harming and freedoms-eroding policies), but if you take solace in believing this is all worth it because of "AI" (and are in denial about the fact that none of those companies are turning a profit from it, and that there is no identified use case to turn the tables down the line), I'm sincerely happy for you and glad it helps you cope with all the insanity!


I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.

Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in, and regardless of whether you think there's "no identified use-case" for AI. Like a steamroller vs a rubber chicken. But probably Google's AI rather than OpenAI's, I think Gemini 3 is going to be a much bigger upgrade, and Google doesn't have cashflow problems. And if any single country out there is actually preparing for this, I haven't heard about it.


> I know, you wanted to vent about the USA and abandon the thread topic, and I countered your argument without even leaving the topic.

Accusing me of being off-topic is really pushing it: you want to bet on governments' incompetence in dealing with AI, and I don't (on the basis that there are unarguably still many functional democracies out there); on the other hand, the thread you started about the state of Europe's AI industry had nothing to do with that.

> Like how I can say that the future of USA's AI is probably going to obliterate your local job market regardless of which country you're in

Nobody knows what the future of AI is going to look like. At present, LLMs/"GenAI" are still very much a costly solution in need of a problem to solve/a market to serve¹. And saying that the USA is somehow uniquely positioned there sounds uninformed at best: there is no moat, and all of this development is happening in the open, with AI labs and universities around the world reproducing this research, sometimes for a fraction of the cost.

> And if any single country out there is actually preparing for this, I haven't heard about it.

What is "this", effectively? The new flavour Gemini of the month (and its marginal gains on cooked-up benchmarks)? Or the imminent collapse of our society brought by a mysterious deus ex machina-esque AGI we keep hearing about but not seeing? Since we are entitled to our opinions, still, mine is that LLMs are a mere local maxima towards any useful form of AI, barely more noteworthy (and practical) than Markov chains before it. Anything besides LLMs is moot (and probably a good topic to speculate about over the impending AI winter).

¹: https://www.anthropic.com/news/the-anthropic-economic-index


> the USA has had DARPA monitoring stuff like this since before OpenAI existed

Is there a source for this other than "trust me bro"? DARPA isn't a spy agency, it's a research organization.

> governments won't "look ahead", they'll just panic when AGI is happening

Assuming the companies tell them, or that there are shadowy deep-cover DARPA agents planted at the highest levels of their workforce.


You could have Googled "Darpa AI industry" faster than it took you to write this post, but it sounds like you're triggered or something.


> it sounds like you're triggered or something

Please don't cross into personal attack, no matter how wrong another commenter is or you feel they are.


I googled it, and I can't find support for the claim that DARPA is monitoring internal progress of AI research companies.

Maybe you can post a link in case anyone else is as clumsy with search engines as I am? After all, you can google it just as fast as you claim I can.


> OpenAI bungling its 15-month product lead

Do you mean from ChatGPT launch or o1 launch? Curious to get your take on how they bungled the lead and what they could have done differently to preserve it. Not having thought about it too much, it seems that with the combo of 1) massive hype required for fundraising, and 2) the fact that their product can be basically reverse engineered by training a model on its curated output, it would have been near impossible to maintain a large lead.


My 2 cents: ChatGPT -> Gemini 1 was their 15-month lead. The moment ChatGPT threatened Google's future Search revenue (which never actually took a hit, afaik), Google reacted by merging DeepMind and Google Brain and kicking off the Gemini program (that's why they named it Gemini).

Basically, OpenAI poked a sleeping bear, then lost all their lead, and are now at risk of being mauled by the bear. My money would be on the bear, except I think the Pentagon is an even bigger sleeping bear, so that's where I would bet money (literally) if I could.


Seems like OpenAI is playing it smart and slow, gradually entrenching itself in the US government.

https://www.cnbc.com/2025/08/06/openai-is-giving-chatgpt-to-...


That's probably their best bet, though the other AI companies are shaking hands too:

https://www.gsa.gov/about-us/newsroom/news-releases/gsa-prop...

Announced exactly 1 day before the $1 thing, to make everything extra muddled.

https://www.gsa.gov/about-us/newsroom/news-releases/gsa-anno...


Huh. That's interesting. I always thought it was Gemini because it's somewhat useful on one hand, and absolute shit on the other.


I like how this sounds exactly like a selectable videogame hero:

Undeterred by even the most dangerous and threatening of obstacles, Teemo scouts the world with boundless enthusiasm and a cheerful spirit. A yordle with an unwavering sense of morality, he takes pride in following the Bandle Scout's Code, sometimes with such eagerness that he is unaware of the broader consequences of his actions. Though some say the existence of the Scouts is questionable, one thing is for certain: Teemo's conviction is nothing to be trifled with.

