But that makes sense: technology makes headlines when it's exciting. On crypto I'd disagree that there have been advances; it's mostly scams and pyramid schemes, and it got boring and predictable in that sense, so once the promise and excitement were gone, HN stopped talking about it. Self-driving cars became a slow advance over many years, with people no longer claiming they were around the corner and about to revolutionize everything.
AI is now a field where the claims are, in essence, that we're going to build God in 2 years, make the whole planet unemployed, and create a permanent underclass. AI researchers are being hired at $100-300M comp. I mean, it's definitely a very exciting topic and it polarizes opinion. If things plateau, the claims disappear, and it becomes a more boring grind over diminishing returns and price adjustments, I think we'll see the same thing: fewer comments about it.
That's a bit of an edge case, powered by the absolute, lovely turbo-nerdery of a few dedicated souls. For anyone looking for them, they're called the 4K77 / 4K80 versions.
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probability. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
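Spelled out, the expected-value arithmetic that comment leans on looks like this (just a sketch of the argument; p stands in for the arbitrary tiny probability above, C for any finite cost of believing):

```latex
\mathbb{E}[\text{betting on AGI}] = p \cdot (+\infty) + (1 - p) \cdot C = +\infty
\qquad \text{for any } p > 0 \text{ and any finite } C
```

Any nonzero p swamps the finite cost, which is why the pitch feels immune to probability estimates.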
Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you religious, just in case."
This is explicitly not the conclusion Pascal drew with the wager, as described in the next section of the Wikipedia article: "Pascal's intent was not to provide an argument to convince atheists to believe, but (a) to show the fallacy of attempting to use logical reasoning to prove or disprove God..."
Did he say Pascal drew that conclusion and then remove it with an edit or something? As it's written now, it seems like you're correcting him for something he didn't post.
I know you're not being serious, but building AGI, as in something that thinks like a human (proven possible by the millions of humans wandering all over the place), is very different from "building god".
Except that humans cannot read millions of books (if not all books ever published) and keep track of massive amounts of information. AGI presupposes some kind of superhuman capability that no one human has. Whether that's ever accomplished remains to be seen; I personally am a bit skeptical that it will happen in our lifetime but think it's possible in the future.
Not sure about that one. I do agree with the AI bros that, _if_ we build AGI, ASI looks inevitable shortly after, at least a "soft ASI". Because something with the agency of a human but all the knowledge of the world at its fingertips, the ability to replicate itself, to think orders of magnitude faster and in parallel on many things at once, and to modify itself... really looks like it won't stay comparable to a baseline human for long.
Nothing was hypothesized about next-token prediction and emergent properties (they didn't know for sure that scale would let it generalize). "What if it's true" is part of the LLM story; there is a mystical element here.
Someone else can confirm, but from my understanding, no, they did not know sentiment analysis, reasoning, few-shot learning, chain of thought, etc. would emerge at scale. Sentiment analysis was one of the first things they noticed a scaled-up model could generalize to. Remember, all they were trying to do was get better at next-token prediction; there was no concrete plan to achieve "instruction following", for example. We can never truly say that going up another order of magnitude in parameter count won't achieve something (it could, for reasons unknown, just like before).
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
The Schaeffer et al. "Mirage" paper showed that many claimed emergent abilities disappear when you use different metrics: what looked like sudden capability jumps were often artifacts of harsh/discontinuous measurements rather than smooth ones.
But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.
The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
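A toy sketch of that metric artifact (all numbers invented for illustration, nothing here is from the paper): per-token accuracy improving smoothly with scale looks like a sudden capability jump once you score it with an all-or-nothing exact-match metric.

```python
import numpy as np

# Hypothetical models: per-token accuracy improves smoothly with
# log(parameter count). The sigmoid and its center are made up.
params = np.logspace(6, 12, 7)  # 1e6 .. 1e12 parameters
per_token_acc = 1.0 / (1.0 + np.exp(-(np.log10(params) - 9.0)))

# Exact match on a 20-token answer compounds the per-token accuracy,
# so the same smooth curve now looks like sudden "emergence".
seq_len = 20
exact_match = per_token_acc ** seq_len

for p, smooth, sharp in zip(params, per_token_acc, exact_match):
    print(f"params={p:9.0e}  per-token={smooth:.3f}  exact-match={sharp:.6f}")
```

The underlying capability changes smoothly the whole time; only the discontinuous metric makes it look like a phase transition.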
Didn't it just get better at next-token prediction? I don't think anything emerged in the model itself; what was surprising is how good next-token prediction itself is at predicting all kinds of other things, no?
I think at this point the better question is: why was fertility so high in the past? And I think the reasons were mainly that people _relied_ on their children to grow up, take over the farm, and take care of their parents. They were also bored to death, and children are fun. They had them for selfish reasons.
But nowadays, why would you have a child? For a middle-class+ family in a developed country, having a child is a six-figure expense over their lifetime, and it limits your career, holidays, etc. From a selfish point of view, it doesn't make a lot of sense.
I don't think it's the only explanation, but children are, individually, optional, so you can decide for selfish reasons to have them or not.
Unfortunately, uv is usually insufficient for certain ML deployments in Python. It's a real pain to install PyTorch/CUDA with all the necessary drivers and C++ dependencies, so people tend to fall back to conda.
Are there particular libraries that make your setup difficult? I just manually set the index and source following the docs (didn't know about the auto backend feature) and pin a specific version if I really have to with `uv add "torch==2.4"`. This works pretty well for me for projects that use dgl, which heavily uses C++ extensions and can be pretty finicky about working with particular versions.
This is in a conventional HPC environment, and I've found it way better than conda since the dependency solves are so much faster and I no longer experience PyTorch silently getting downgraded to the CPU version if I install a new library. Maybe I've been using conda poorly, though?
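For anyone hitting the same thing, the pinning I mean looks roughly like this in pyproject.toml (a sketch following uv's PyTorch guide; the project name, the CUDA 12.1 index, and the index name are illustrative, adjust to your driver stack):

```toml
[project]
name = "example-ml-project"        # hypothetical project
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["torch==2.4.0"]    # pin the version explicitly

# Route torch to a dedicated PyTorch wheel index instead of PyPI.
[tool.uv.sources]
torch = { index = "pytorch-cu121" }

[[tool.uv.index]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true                    # only packages pinned to it use this index
```

Because the index is marked `explicit`, only torch resolves from it and everything else stays on PyPI, which is what stops the silent CPU-build downgrade.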
This assumes the amount of work available is static. If the cost to produce software drops by 80%, then suddenly many more projects are viable, needing more engineers.
I'm a little embarrassed to admit this, but over the past 15 years of my career I have worked on a few products the world really didn't need (nor want). I was paid, and life went on, but it was a misallocation of resources.
I suppose now someone can build stuff the world doesn't need a lot more easily.
IMHO the US is turning into an oligarchy/autocracy like Russia. Musk is one of the oligarchs. We all know what happens to oligarchs when they get too close to eclipsing the top dog and wander near a window.
I hope you realize no one is buying "but our billionaires are the good ones!" anymore. Elon was blue until like five years ago. Remember when Bezos was too?
The difference is, rational liberal people who vote Democrat (and don't confuse liberals with leftists) understand that bad actors in a system that generally produces positive US economic growth through investment in people don't mean we have to tear everything down. The historical data is all there; it's undeniable, and not just in the stock market.
The whole reason Democrats are seen as weak is that there are plenty of criticisms of Democrats by other Democrats. That is a sign of a well-functioning party, since it keeps itself in check. Of course the clueless idiots in the US see this as weakness, but that's just how it is.
Meanwhile, Republicans fall unilaterally behind their daddies Trump and Musk, which is why people like you change the subject to billionaires instead of pointing out that a guy who does Nazi salutes is in charge of government spending cuts and managed to accidentally leak national secrets, the same thing that was a major campaign issue when Trump was running against Hillary.
Just to be ultra clear, in case you want to reply with some random talking point about Democrats being bad: there is absolutely nothing you can say at this point that will ever make the Democrats look as bad as Trump or Musk, which is impressive since it's only been about two months. If you find yourself talking to people who agree with you, just know that you are in an echo chamber, which is something your side used to make fun of people for being in.
Crazily enough, Hollywood actors may be wealthy, but most are closer in wealth to the average Silicon Valley engineer than to Musk or Bezos. Oligarchs are generally not celebrities; they're business tycoons who own a lot more capital and have a lot more ways to profit off the government or "the system".
I think this is a real problem, but your post is an exaggeration. There are cases of fraud in science. There is a reproducibility crisis in some areas. There are political angles and rent-seeking with respect to grants. But how widespread is it? You're assuming it's close to 100% without evidence. I don't claim to know the exact number, but intuitively yours is an extraordinary claim (so it would need extraordinary evidence). I think these issues affect some areas much more than others, and some regions more than others. I still believe science is the best method of inquiry into the natural world.
I'm not assuming it's close to 100%; I'm countering the GP's criticism of people who are skeptical about the title. He's saying "how dare you question these science experts!?" and I'm saying the reason people do that is that scandals like LK99 erode the credibility.
I think on average trusting the experts is the right thing. And by the way, LK99 is not even particularly damning; as far as we know, it was science working as intended.
I've watched it. I disagree. Zelensky calmly and reasonably asked JD Vance a question regarding his answer to the reporter. It was all fine until Vance started with the "frankly, I think it's very disrespectful" line. HE decided to escalate. What Zelensky asked was reasonable and pretty much in character for him. They know he's uncompromising in dealing with Putin. The _diplomatic_ position is to understand both sides and mediate, not to try to get one side to bow down to its aggressor.
I watched it once, and this is what I saw: Vance suddenly antagonizing Zelensky as if to entrap a known hothead, followed by two sly comments from Trump, "that's why I kept this going so long" and "this is going to be great television".
That's not true for the kind of searches we're talking about here. If you are looking for "best mechanical keyboard" or "reviews of Shimano bicycle gearsets", Reddit will regularly be of higher quality than the median Google first page.
The problem with reviews is that on Reddit, it's almost guaranteed that you're reading a PR company's post. Starting in 2020, marketing agencies have been openly advertising that they game Reddit threads for product placements. If you know anyone working in those departments, just ask around.
Then guru-influencer types started selling growth-hack tactics. Again, they openly discuss purchasing old Reddit accounts, how to write posts that are not obvious product placements, etc. For example, if you see a list of suggested products, it'll be:
1. Competitor
2. Your product
3. Competitor
With some pros/cons listed, in the hope of skewing the result towards the second choice.
There are exceptions, like tiny, heavily moderated hardcore subreddits, but I really wouldn't take product recommendations from Reddit very seriously.
> The problem with reviews is that on Reddit, it’s almost guaranteed that you’re reading a PR company’s post.
Sure. But in the comments you will find out that if you press control, m, and backspace at the same time, the keyboard explodes. Unlike Google, which, when you search about explodey keyboards, gives you 37 pages of "10 reasons why this is the best keyboard that totally doesn't explode".
The comments are absolutely astroturfed to fuck as well, but you're right, there's at least some small signal in there, whereas average Google results have an amount indistinguishable from zero.
And you think the top Google results aren't going to be PR astroturf too? SEO is the whole reason I append "reddit" in the first place; it just killed Google.