> Rather: WHATWG was founded because the companies developing browsers (in particular Google) believed that what the W3C was working on for XHTML 2.0 was too academic, and went into a different direction than their (i.e. in particular Google's) vision for the web.
It was Mozilla, Opera, and Apple. Google didn't have a browser then, hadn't even made the main hires who would start developing Chrome, and Hixie was still at Opera.
> The amount of goal post shifting is so amusing to see
Can you be specific about the goal posts being shifted? Like, the specific comments you're referring to here. Maybe I'm just falling for the bait, but non-specific claims like this seem designed just to annoy while offering nothing specific to converse about.
I got to the end of your comment, and counting all the claims you discounted, the only goal post I see left is that people aren't using a sufficiently excited tone while sifting fact from hype? A lot of us follow this work pretty closely and don't feel the need to start every post with "there is no need for excitement to abate, still exciting! but...".
> I am not saying any of this means we get AGI or something or even if we continue to see improvements. We can still appreciate things. It doesn't need to be a binary.
You'll note, however, that the hype guys happily include statements like "Vibe proving is here" in their posts with no nuance, all binary. Why not call them out?
Well, there's a comment here saying "I won't consider it 'true' AI until it solves all millennium problems"... That goalpost seems to be defining AI as not merely human-level but superhuman-level (e.g. 1-in-a-million intellect or harder).
Except nobody ever actually considered the "Turing test" to be anything other than a curiosity in the early days of a certain branch of philosophy.
If the Turing test is a goal, then we passed it 60 years ago and AGI has been here since the LISP days. If the Turing test is not a goal (which is the correct interpretation), nobody should care what a random nobody thinks about an LLM "passing" it.
"LLMs pass the Turing test, so they are intelligent (or whatever)" is not a valid argument, full stop. "The Turing test" was never a real thing meant to actually tell the difference between human intelligence and artificial intelligence; it was never formalized and never evaluated for its ability to do so. The entire point of the Turing test was to be part of a conversation about thinking machines in a world where that was an interesting proposition.
The only people who ever took the Turing test as a "goal" were the misinformed public. Again, that interpretation of the Turing test has been passed by things like ELIZA and Markov-chain-based IRC bots.
Is your argument that Terence Tao says it was a consequence of a known result and categorizes it as low-hanging fruit, but to you it feels like one of those things that's only obvious in retrospect after it's explained, so without "evidence" of Tao's claim you're going to go with your vibes?
The overhyped tweet from the Robinhood guy raising money for his AI startup is nicely brought into better perspective by Thomas Bloom (including that #124 is not from the cited paper, "Complete sequences of sets of integer powers"/BEGL96):
> This is a nice solution, and impressive to be found by AI, although the proof is (in hindsight) very simple, and the surprising thing is that Erdos missed it. But there is definitely precedent for Erdos missing easy solutions!
> Also this is not the problem as posed in that paper
> That paper asks a harder version of this problem. The problem which has been solved was asked by Erdos in a couple of later papers.
> One also needs to be careful about saying things like 'open for 30 years'. This does not mean it has resisted 30 years of efforts to solve it! Many Erdos problems (including this one) have just been forgotten about, and nobody has seriously tried to solve it.[1]
And, indeed, Boris Alexeev (who ran the problem) agrees:
> My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.
> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.[2]
Not to rain on the parade out of spite, it's just that this is neat, but not like, unusually neat compared to the last few months.
Reading the original paper and the Lean statement that got proven, it's kinda fascinating what exactly is considered interesting and hard in this problem.
Roughly, what the Lean theorem (and the statement on the website) asks is this: take some numbers t_i, for each of them form all the powers t_i^j, then combine them all into a multiset T. Barring some necessary conditions, prove that you can pick a subset of T summing to any number you want.
What the Erdős problem in the paper asks is to add one more step: arbitrarily cut off the beginnings of the t_i^j power sequences before merging. Erdős and co. conjectured that only a finite number of subset sums stop being possible.
"Subsets sum to any number" is an easy condition to check (that's why "olympiad level" gets mentioned in the discussion); it's the "arbitrarily cut off" part that the original question is all about, while "only finitely many disappear" is hard to grasp formulaically.
So... overhyped: yes; the actual Erdős problem not proven: yes; the usual math-olympiad-level problems being solvable by current AI, as this year's IMO showed: also yes (just don't get caught by https://en.wikipedia.org/wiki/AI_effect on the backlash, since olympiads are haaard! really!).
See, this is one of the reasons I struggle to get on board the AI hype train. Any time I've seen some breathless claim about its capabilities that feels a bit too good to be true, someone with knowledge in the domain takes a closer look and it turns out to have been exaggerated and meant to draw eyeballs and investors to some fledgling AI company.
I just feel like, if we were genuinely on the cusp of an AI revolution like it is claimed, we wouldn't need to keep seeing this sort of thing. A lot of the industry feels full of flim-flam men trying to scam people, and if the tech were as capable as we keep getting told, there'd be no need for dishonesty or sleight of hand.
I have commented elsewhere, but this bears repeating.
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you have trained the model, you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing around terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).
I should add that it also makes sense why it works: just look at the volume of human knowledge (the training data). It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language and intellect, that does the heavy lifting.
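To make the "completely mechanical" point concrete, here's a toy sketch (purely illustrative; the weights are random stand-ins rather than trained values, and real models just repeat this kind of arithmetic billions of times):

    import numpy as np

    # Toy illustration of the "completely mechanical" point above: one linear
    # layer plus a softmax, i.e. nothing but multiplies, adds, exponentials and
    # divisions you could in principle carry out by hand. Training would itself
    # be more of the same arithmetic.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))          # stand-in for "learned" weights
    x = np.array([1.0, 0.0, 0.0, 0.0])       # a toy input embedding

    logits = W @ x                           # multiply-adds
    probs = np.exp(logits) / np.exp(logits).sum()   # a "next-token" distribution
    print(probs)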
The crypto train kinda ran out of steam, so all aboard the AI train.
That being said, I think AI has a lot more immediately useful cases than cryptocurrency. But it does feel a bit overhyped by people who stand to gain a tremendous amount of money.
I might get slammed/downvoted on HN for this, but I'm really wondering how much of VC is filled with get-rich-quick cheerleading vs supporting products that will create strong and lasting growth.
I don't think you really need to wonder about how much is cheerleading. Effectively all public VC statements will be cheerleading for companies they've already invested in.
The more interesting one is the closed door conversations. Earlier this year, for example, it seemed there was a pattern of VCs heavily invested in AI asking the other software companies they invested in to figure out how to make AI useful for them and report back. I.e. "we invested heavily in hype, tell us how to make it real."
From my perspective, having worked in both industries and simply followed my passions and opportunities, all I see is that the same two bandwagons who latched onto crypto, either to grift or just to egotistically talk shit, have moved over to the latest technological breakthrough. Meanwhile, those of us silently working on interesting things are constantly rolling our eyes at comments from both sides of the peanut gallery.
The thing is, we genuinely are going through an AI revolution. I don't even think that's that breathless of a claim. The contention is over whether it's about to revolutionize our economy, which is a far harder claim to substantiate and should be largely self-substantiating if it is going to happen.
"I also wonder whether this 'easy' version of the problem has actually appeared in some mathematical competition before now, which would of course pollute the training data if Aristotle [Ed.: the clanker's name] had seen this solution already written up somewhere."
So in short, it was an easy problem that had already been solved thousands of years ago and the proof was so simple that it doesn't really count, and the AI used too many em-dashes in its response and it totally sucks.
> The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).
Right at the end:
> The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!
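For anyone who hasn't seen the trick before, here's a minimal sketch of the jitter-before-quantize idea being described (illustrative only; the falloff function and band count are made up, and this is not the demo's actual code):

    import numpy as np

    # Hard quantization of a smooth falloff shows visible banding; adding noise
    # of roughly one band's width before quantizing trades the bands for grain.
    rng = np.random.default_rng(1)
    width, bands = 16, 4
    falloff = np.linspace(1.0, 0.0, width)          # smooth light falloff across a row

    banded = np.floor(falloff * bands) / bands      # visible banding
    jitter = (rng.random(width) - 0.5) / bands      # per-pixel noise, about one band wide
    dithered = np.floor((falloff + jitter) * bands) / bands  # grainy, but breaks up the bands

    print(np.round(banded, 2))
    print(np.round(dithered, 2))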
Not only that. There is an inherent aliasing effect with this method which is very apparent when the light is close to the wall.
I implemented a similar algorithm myself, and had the same issue. I did find a solution without that particular aliasing, but with its own tradeoffs. So, I guess I should write it up some time as a blog post.
> In medicine, we're already seeing productivity gains from AI charting leading to an expectation that providers will see more patients per hour.
And not, of course, an expectation of more minutes of contact per patient, which would be the better outcome optimization for both provider and patient. Gotta pump those numbers until everyone but the execs are an assembly line worker in activity and pay.
I don't think that more minutes of contact is better for anybody.
As a patient, I want to spend as little time with a doctor as possible and still receive maximally useful treatment.
As a doctor, I would want to extract maximal comp from insurance, which I don't think is tied to time spent with the patient, but rather to the number of different treatments given.
Also, please note that in most of the western world medical personnel are currently massively overworked, so reducing their overall workload would likely lead to better results per treatment given.
I have no previous first-hand knowledge of this, but I vaguely remember discussions of AVIF in Google Photos on Reddit a while back, so FWIW I just tried uploading some AVIF photos and it handled them just fine.
They're listed as avif in file info and download as the original file, though inspecting the network in the web frontend, it serves versions of them as jpg and webp, so there's obviously still transcoding going on.
I'm not sure when they added support; the consumer documentation seems to be more landing site than docs unless I'm completely missing the right page, but the API docs list AVIF support[1], and according to the Wayback Machine, "AVIF" was added to that page some time between August and November 2023.
You are correct it is possible to upload AVIF files into Google Photos. But you lose the view and of course the thumbnail, defeating the whole purpose of putting them into Photos.
Given it's an app, they didn't even need Google Chrome to add support. AVIF is supported natively on Android.
> You are correct it is possible to upload AVIF files into Google Photos. But you lose the view and of course the thumbnail.
I'm not sure what you mean. They appear to act like any other photo in the interface. You can view them and they're visible in the thumbnail view, but maybe I'm misinterpreting what you mean?
I take a photo; the format is JPEG. It backs up to Google Photos, and the Google Photos app on Android renders the photo just fine.
I then convert that photo (via a local converter) to AVIF and Google backs it up. I can see the file in Google Photos on Android, but it doesn't render the image: full size or thumbnail, all I get is a grayed-out square. So I concluded the app doesn't support AVIF rasterizing.
I then gave up on the automation that converted all my JPEGs into AVIF, which in turn would have saved hundreds of gigabytes, given I have 10 years' worth of photos.
The experiment was done about 3 months ago; as of 2025, Google Photos on Android (latest version) would not render my AVIF photos.
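For context, the automation itself was nothing fancy; roughly this kind of batch job (a sketch only, assuming libavif's avifenc CLI, which is not necessarily the converter I used, with illustrative paths):

    import subprocess
    from pathlib import Path

    # Rough sketch of a JPEG -> AVIF batch conversion, assuming libavif's
    # `avifenc` CLI is installed. Paths and settings are illustrative only.
    src_dir = Path("~/Pictures/originals").expanduser()
    dst_dir = Path("~/Pictures/avif").expanduser()
    dst_dir.mkdir(parents=True, exist_ok=True)

    for jpg in sorted(src_dir.glob("*.jpg")):
        out = dst_dir / (jpg.stem + ".avif")
        if not out.exists():                       # skip files already converted
            subprocess.run(["avifenc", str(jpg), str(out)], check=True)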
Their standards position is still neutral; what switched a year ago was that they said they would be open to shipping an implementation that met their requirements. The tracking bug hasn't been updated.[2] The patches you mention are still part of the intent to prototype (behind a flag), similar to the earlier implementation that was removed from Chrome.
They're looking at the same signals as Chrome: a format that's actually getting use, has a memory-safe implementation, and will stick around for decades to justify adding it to the web platform, all of which have looked more and more positive since 2022.
> none of the other top labs release thier models like meta
Don't basically all the "top labs" except Anthropic now have open weight models? And Zuckerberg said they were now going to be "careful about what we choose to open source" in the future, which is a shift from their previous rhetoric about "Open Source AI is the Path Forward".
> Didn't some fake AI country song just get on the top 100?
No
Edit: to be less snarky, it topped the Billboard Country Digital Song Sales Chart, which is a measure of sales of the individual song, not streaming listens. It's estimated it takes a few thousand sales to top that particular chart and it's widely believed to be commonly manipulated by coordinated purchases.