Hacker News | michael_nielsen's comments

I see virtually no politics. Maybe don't follow accounts which post about politics?

I didn't follow anybody.

It was all I saw after creating an account (which was just after the US election results). Maybe I should give it another try.


You see posts even without following anyone smh

A good brief overview here from Tim Gowers (a Fields Medallist, who participated in the effort), explaining and contextualizing some of the main caveats: https://x.com/wtgowers/status/1816509803407040909


You have not understood the statement.


In that case, would you mind writing a more enlightening comment?


It's probably most clearly stated in the original post, but the point is that among any six people you can always find either three people who are all friends with each other, or three people none of whom are friends with each other. The non-obviousness comes from cases like someone having only one friend, or having two friends who aren't friends with each other, so I wouldn't call it blindingly obvious that a subset with one of these properties always exists. (I think the useful intuition is this: in a sparsely connected friendship graph it's easy to find a group who aren't friends at all, and in a densely connected graph it's easy to find a group of mutual friends. The theorem proves there's no middle ground: the graph is either dense enough that a mutual-friend group must exist, or sparse enough that a non-friend group must.)


Does this formulation work?

You have 6 wooden blocks, each can be 1 of 6 colors. There will always be either (A) 3 blocks of the same color, or (B) 3 blocks of different colors.

Iterating through all possibilities:

6 of 1 color - case A

5 of 1 color, 1 of another - case A

4 of 1 color, 2 of another - case A

4 of 1 color, 1 of another, 1 of yet another - case A and B

3 of 1 color, 3 of another - case A

3 of 1 color, 2 of another, 1 of yet another - case A and B

3 of 1 color, 1 of another, 1 of yet another, 1 of another another - case A and B

2 of 1 color, 2 of another, 2 of yet another - case B

2 of 1 color, 2 of another, 1 of yet another, 1 of another another - case B

2 of 1 color, 1 of another, 1 of yet another, 1 of another another, 1 of another another another - case B

all colors different - case B


Your formulation is not equivalent. The actual Ramsey theory formulation is something more like: you need to color, either red or blue, all of the edges of a fully connected graph on 6 vertices (there are 15 such edges). No matter which coloring you choose (there are 2^15 of them), there will always be a "triangle" between three nodes whose edges are all red or all blue. If you instead restrict yourself to a graph on 5 vertices, it's possible[1] to color the edges so that no triangle has all its edges the same color.

As an exercise, try repeating your same argument for 5 colors/blocks, and note that it still works, when it shouldn't.

[1] - https://commons.wikimedia.org/wiki/File:RamseyTheory_K5_no_m...
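To make the K5/K6 distinction concrete, here's a brute-force check (my own sketch, not from the thread; the function names are made up): it enumerates every red/blue coloring of K_n's edges as a bitmask and tests each for a monochromatic triangle.

```python
# Brute-force verification that R(3,3) = 6: every 2-coloring of K6's edges
# contains a monochromatic triangle, while K5 admits a coloring with none.
from itertools import combinations

def has_mono_triangle(n, bits):
    """bits encodes one red/blue coloring of K_n's n*(n-1)/2 edges."""
    edges = list(combinations(range(n), 2))
    color = {e: (bits >> i) & 1 for i, e in enumerate(edges)}
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True iff all 2^(n*(n-1)/2) colorings of K_n force a mono triangle."""
    m = n * (n - 1) // 2
    return all(has_mono_triangle(n, bits) for bits in range(2 ** m))

print(every_coloring_forced(5))  # False: a triangle-free coloring of K5 exists
print(every_coloring_forced(6))  # True: K6 always forces one
```

The K5 counterexample the search finds corresponds to the pentagon/pentagram coloring in the footnote's image: color the 5-cycle red and the remaining 5 edges blue.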


In your version, the color of a vertex is independent of other vertices. The real problem is entirely about connections to other vertices. The latter cannot be reduced to the former.


Just so you know: you've completely misunderstood the statement.


There's (at least) two meanings for AI: (1) Software systems based on extensions of the LLM paradigm; and (2) Software systems capable of all human cognitive tasks, and then some.

It's not yet clear what (1) has to do with (2). Maybe it turns out that LLMs or similar can do (2). And maybe not.

I can understand being skeptical about the economic value of (1). But the economic value of (2) seems obviously enormous, almost certainly far more than all value created by humanity to date.


There goes 50+% of my use.

"LLMs are no good for [use case X]" often means "I'm not very good at using LLMs for [use case X]".

With many powerful tools - violins, say, or carpentry tools - we know that it takes a long time and a lot of learning to achieve competent performance, much less virtuoso performance. Someone who spent ten hours learning the violin and concluded "Violins sound terrible" wouldn't have diagnosed a problem with violins, but with their own mastery. I certainly think current LLMs have some big intrinsic weaknesses, but also that what they are is quite subtle.


I think it strongly depends on the model.

I found that the original GPT-4 is better for brainstorming than GPT-4 Turbo, which in turn is better than GPT-4o.

If you're just using the web-based portal, you don't get much choice of which model to use or its temperature.


It was the records of $100 billion in retirement assets that were deleted by Google. Not a small deal, and far more than many small banks hold.

I am, unfortunately, a customer of the retirement firm, UniSuper.


Stoll has written a lovely 2010 mea culpa, originally a (now-vanished) comment at http://boingboing.net/2010/02/26/curmudgeony-essay-on.html.

I saved the comment. Quoting:

"Of my many mistakes, flubs, and howlers, few have been as public as my 1995 howler.

Wrong? Yep.

At the time, I was trying to speak against the tide of futuristic commentary on how The Internet Will Solve Our Problems.

Gives me pause. Most of my screwups have had limited publicity: Forgetting my lines in my 4th grade play. Misidentifying a Gilbert and Sullivan song while suddenly drafted to fill in as announcer on a classical radio station. Wasting a week hunting for planets interior to Mercury’s orbit using an infrared system with a noise level so high that it couldn’t possibly detect ‘em. Heck – trying to dry my sneakers in a microwave oven (a quarter century later, there’s still a smudge on the kitchen ceiling).

And, as I’ve laughed at others’ foibles, I think back to some of my own cringeworthy contributions.

Now, whenever I think I know what’s happening, I temper my thoughts: Might be wrong, Cliff…

Warm cheers to all,

-Cliff Stoll on a rainy Friday afternoon in Oakland"


He was wrong until he wasn't.

A lot of his words fell flat as incorrect prognostications in the 2000s and 2010s, but now that we're in the 2020s, I feel the heart and soul of what he was getting at rings true.

The bright-eyed luster faded, revealing the deeper truths.

> Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen.

Bingo.

> Lacking editors, reviewers or critics, the Internet has become a wasteland of unfiltered data. You don't know what to ignore and what's worth reading.

More true with each and every passing day.

> Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them—one's a biography written by an eighth grader, the second is a computer game that doesn't work and the third is an image of a London monument. None answers my question

Google is starting to feel like that, especially when looking for more than simple facts.

> Won't the Internet be useful in governing? Internet addicts clamor for government reports. But when Andy Spano ran for county executive in Westchester County, N.Y., he put every press release and position paper onto a bulletin board. In that affluent county, with plenty of computer companies, how many voters logged in? Fewer than 30. Not a good omen.

Computers won't make people interested in municipal issues. At the national level, it's closer to team sports with all the betting and emotional rivalry.

> Then there are those pushing computers into schools. We're told that multimedia will make schoolwork easy and fun. Students will happily learn from animated characters while taught by expertly tailored software. Who needs teachers when you've got computer-aided education?

Schools continue to slide. Phones and tablets grant access to vast educational resources, but most kids don't use them in this way.

> And you can't tote that laptop to the beach.

Gotta find fault in this one, though. I've once or twice been goaded into being on call during vacation. That's my own stupid fault, of course.


>> Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them—.... None answers my question

>Google is starting to feel like that, especially when looking for more than simple facts.

Honestly, if it weren't for Wikipedia, the web would be almost useless for getting basic facts quickly.


Yup, and the Japanese internet is said to be much, much worse in this regard. Being able to find everything and anything except the fact you've been looking for is kind of a running gag over there, or at least that's what I was told.


I've never heard that, though I can't say I've lived here long enough to be any kind of authority on the matter. There is a Japanese version of Wikipedia though, and my girlfriend frequently looks things up on it just like I do with the English version.


Take it with a grain of salt, then; I haven't lived in Japan for longer than a few weeks. I live in a German city with a large Japanese diaspora, so that's where I got my, possibly wonky, intel.


Went to the beach a while back and saw a family out on towels with kids. For the hour I was there, it was the two parents out frolicking in the waves while the kids, maybe 10 and 14, spent the entire time on their phones, though they occasionally commiserated with each other by sharing a picture or video (of their parents doing something embarrassing?).

Really, it wasn't that shocking. It's not like everyone loves going to the beach. Nobody was upset, just bored. But it seemed the exact opposite of the age dynamic I expected. It could have just as easily been reversed, but I would have still been just a little sad.

At the time this came out (1995), people were worried about "piracy" on newsgroups and murder for hire through pseudonymous accounts. Instead we got DMCA takedown fraud and SWATting.


I recall many trips to the beach as a kid where I sat and read a book while others swam.


100%. There are a lot of things the book got wrong, but how fucked up society got wasn't one of them.


I flagged this - it seems too clearly flame bait. If it was an honest mistake, my apologies. Disney had three movies in 2023 which took more than $200 million at the US box office.


They were all flops. They cost massively more to film and to market than that. And remember theater owners get half of the gross.


As much as I despised for example the first new Star Wars, The Force Awakens:

"The film grossed $2.07 billion worldwide, breaking various box office records and becoming the highest-grossing film in the United States and Canada, the highest-grossing film of 2015, and the third-highest-grossing film at the time of its release"


People forget that the movie came out nine years ago and shouldn't pass for "recent years", which is what the discussion above is about. The movie also sold primarily through hype to kids who grew up with the prequels, which had little to do with its quality. People, including me, still lived in denial back then. It wasn't until the second movie that my friends realized how terrible Star Wars had become and promised never to watch a Disney movie at the cinema again. It's a reputation Disney seems to have embraced, considering the countless discussions of their decline.


Well, part 2:

"It grossed over $1.3 billion worldwide, becoming the highest-grossing film of 2017"

and part 3:

"grossed over $1.077 billion worldwide, making it the seventh-highest-grossing film of 2019 "

Cannot really be called flops either. And The Mandalorian is highly successful as well, and probably some other titles too - I don't follow closely. My point is that while I share the criticism of how bad Star Wars became under Disney (I dropped out after they seriously introduced yet another Death Star), commercially these films were highly successful.


Brand erosion takes time, so "The Force Awakens" was seen by nearly every Star Wars fan giving it a try. But those who were disappointed were less likely to see its sequel, "The Last Jedi", which the numbers seem to show. If trust was further shaken by the quality of "The Last Jedi", then the numbers for its sequel, "The Rise of Skywalker", would reflect that. And if trust was further shaken by TV offerings like "The Mandalorian", "The Book of Boba Fett", or the "Obi-Wan Kenobi" show, then those too would progressively have less and less viewership, and streaming services like Disney+ fewer and fewer subscribers.

A business can do something that makes a ton of money and still tarnish its brand and its relationship with its fans. So those fans thinking it's a flop, even if it was a financial success, isn't quite wrong.


They were flops even if they made money, because the expectations for them were so sky-high.

Force Awakens? Everyone who had ever seen Star Wars went to see it (extended family had a tradition of seeing Star Wars movies when they came out). Later ones didn’t have that, and we’ve never seen the last one.

Elemental obviously outperformed expectations but was no Toy Story. Wish is not doing well and looks unlikely to recover.

We’re long gone from the era of every single Disney (or Pixar) animated film being an absolute instant classic and powerhouse.

(Part of this may be the huge number of live action remakes - even if financially successful they seem entirely forgettable).


Star Wars is destroyed beyond redemption by now, and the same goes for Indiana Jones. Pixar is also on a downward trajectory, and whoever says otherwise is deluding themselves.


Pixar is just a movie studio now, churning out basic animated movies. They’re no longer head-and-shoulders above everyone, and other studios are certainly competitive or even outclassing.

Turning Red and Teenage Kraken have a superficially similar plot and the Pixar one is much “better made” in many ways, but neither is earth-shattering.


And how much did those movies cost to make? I think the movies you are referring to were expected to make $500 million or more; they needed to make about $500 million just to break even. Disney did have some successes last year, but they aren't as impressive as you might think.


Box office is not the yardstick Disney uses; that's just the first phase of the Disney wheel. They make oodles of money in merchandise and theme-park content based on the same (expensive) IP as the movie. When they don't break even on the movie, they'll generally break even or make money on the IP behind it.


Here's the problem with that analysis: how do you attribute revenue to a specific movie? Will people attend the theme parks, or spend more at them, because of [movie X]? It's the same problem you have with streaming. Will people subscribe or stay subscribed to D+ longer because of [movie X]?

Until you can answer one or both in a repeatable, predictable way, we can wave our hands and say "it makes money later!" or "it doesn't make money later!" and neither is provable.

One other aspect that we CAN prove: streaming kills DVD sales. That's a revenue stream that is gone and won't be coming back so we have yet another deficit to fill.

Until then, Box Office and merchandising are the ONLY numbers that we, analysts, and stockholders can point at where "You put in $X and got out $Y" for their movie business. And as of right now, that puts Disney's 2023 numbers deeply negative.


To be clear, I totally agree with you. I think the success of theme parks and merchandise has been covering up mediocre IP from Disney for a while, and that fact is dangerous to their future prospects.

However, trying to balance this critique with some fairness to their strategy, it is difficult to disambiguate "the strategy isn't working" from "the strategy is helping us float across some mediocre years until we chance upon the next Frozen". It's kind of like VC returns, where it's 10 "%" of their IP (Star Wars, Mickey, Frozen, Toy Story, Marvel, etc.) that drive 90% of their performance. 2023 was definitely a poor "vintage" for Disney IP.

That being said, Disney has rebounded from many spells of mediocrity, and their theme parks, merchandise, and old IP (now monetized through Disney+ as you say) have kept them afloat through those poor periods.

Most recently they've only been able to jump-start the IP engines through acquisition (Pixar 2006, Marvel in 2009). I'm not a Disney shareholder myself, but I agree that the IP tap seems to be running dry and that's very concerning. I don't think Epic Games has anywhere near the value ceiling that Marvel and Pixar did.


> One other aspect that we CAN prove: streaming kills DVD sales. That's a revenue stream that is gone and won't be coming back so we have yet another deficit to fill.

Which is why Disney+ is its own streaming service. Keeps all the eggs in the same basket.


So far, streaming hasn't made nearly the same amount as DVD sales and it's ridiculously expensive to run one.

That said, licensing to other streaming services often does work. You get revenue for the cost of a contract vs having the infrastructure costs and nebulous ROI. You get the added benefit of direct attribution because you can tell "we licensed [movie X] for $X for Y years".


That would traditionally be the case, but the merchandising is bombing too, and (anecdata time) I can confirm this through personal observation: 80%-off sales of Star Wars merchandise in a local toy store, and my kids and their circle having a keen sense of which IP they like (unsurprising spoiler: it’s the stuff based on good movies, not the stuff based on bad movies).


I think the surest example of this is the Lego Star Wars toys more and more being obviously adult-targeted.

Not everything can be Frozen, but the pallet of Wish merchandise at Walmart is still there and now all marked down (except the Lego because they know that someone will buy it eventually for parts).

Elemental merchandising was completely non-existent and that was a mistake, people enjoyed that.


I'm not sure why they're so often so bad. I wonder if it's the Upton Sinclair effect; to paraphrase slightly: "It is difficult to get a person to understand something, when their hoped-for future wealth depends on not understanding it."


There are far far more dollars available to people that are on the "AI Safety" bandwagon than to those pushing back against it.

The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry, is getting things largely backwards AFAICT.

Folks that are stressing the importance of studying the impact of concentrated corporate power, or the risk of profit-driven AI deployment, and so forth are receiving very little financial support.


> There are far far more dollars available to people that are on the "AI Safety" bandwagon than to those pushing back against it.

> The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry, is getting things largely backwards AFAICT.

> Folks that are stressing the importance of studying the impact of concentrated corporate power, or the risk of profit-driven AI deployment, and so forth are receiving very little financial support.

IMO your comment doesn't substantively address michael_nielsen's comment, but I might be wrong. The following is how I understand your exchange with michael_nielsen.

The two of you are talking about three sets of people:

  Let A be AI notkilleveryoneism people.
  Let B be AI capabilities developers/supporters.
  Let C be people concerned with regulatory capture and centralization by AI firms.

  A and B are disjoint.
  A and C have some overlap.
  B and C have considerable overlap.
michael_nielsen is suggesting that the people of B are refusing to take AI risk seriously because they are excited about profiting from AI capabilities and its funding. (eg, a senior research engineer at OpenAI who makes $350k/year might be inclined to ignore AIXR and the same with a VC who has a portfolio full of AI companies)

And then you are pointing out that people of C are getting less money to investigate AI centralization than people of A are getting to investigate/propagandize AI notkilleveryoneism.

So, your claim is probably true, but it doesn't rebut what michael_nielsen suggested.

And I believe it's also critical to keep in mind that the actual funding is like this:

capabilities development >>>>>>>>>> ai notkilleveryoneism > ai centralization investigation


I'm not really trying to rebut Michael's argument -- I think it's true, to an extent, some of the time. But I think it's more true more of the time in the reverse direction. So I don't think it's a good argument. And more importantly, I think it fails to properly grapple with the ideas, instead using an ad hominem approach to discard them somewhat thoughtlessly.

On your last point, I do think it's important to note, and reflect carefully on, the extremely high overlap between those funding ai notkilleveryoneism and those funding capabilities development.


(this discussion is quite nuanced so I apologize in advance for any uncharitable interpretations that I may make.)

> I'm not really trying to rebut Michael's argument -- I think it's true, to an extent, some of the time. But I think it's more true more of the time in the reverse direction.

I understand you to be saying:

Michael: Pro AI capabilities people are ignoring AIXR ideas because they are very excited about benefiting from (the funding of) future AI systems.

Reverse Direction: ainotkilleveryoneism people are ignoring AIXR ideas because they are very excited about benefiting from the funding of AI safety organizations.

And that (RD) is more frequently true than (M).

IMO both (RD) and (M) are true in many cases. IME it seems like (M) is true more often. But I haven't tried to gather any data and I wouldn't be surprised if it turned out to actually be the other way.

> So I don't think it's a good argument.

I might be misunderstanding you here because I don't see Michael making an argument at all. I just see him making the assertion (M).

> And more importantly, I think it fails to properly grapple with the ideas, instead using an ad hominem approach to discarding them somewhat thoughtless.

I am ambivalent toward this point. On one hand Michael is just making a straightforward (possibly false) empirical claim about the minds of certain people (specifically, a claim of the form: these people are doing X because of Y). It might really be the case that people are failing to grapple with AIXR ideas because they are so excited about benefiting from future AI tech, and if it were, then it seems like the sort of thing that it would be good to point out.

But OTOH he doesn't produce an argument against the claim "AIXR is just marketing hype." which is unfair to someone who has genuinely come to that conclusion via careful deliberation.

> On your last point, I do think it's important to note, and reflect carefully on, the extremely high overlap between those funding ai notkilleveryoneism and those funding capabilities development.

Thanks for pointing this out. Indeed, why are people who profess that AI has a not insignificant chance of killing everyone also starting companies that do AI capabilities development? Maybe they don't believe what they say and are just trying to get exclusive control of future AI technology. IMO there is a significant chance that some parties are doing just that. But even if that is true, then it might still be the case that ASI is an XR.


I mostly agree with this. Certainly the last line!

I've been reflecting on Jeremy's comments, though, and agree on many things with him. It's unfortunately hard to tease apart the hard corporate push for open source AI (most notably from Meta, but also many other companies) from more principled thinking about it, which he is doing. I agree with many of his conclusions, and disagree with some, but appreciate that he's thinking carefully, and that, of course, he may well be right, and I may be wrong.


Thank you Michael. I'm not even sure I disagree with you on many things -- I think things are very complicated and nuanced and am skeptical of people that hold overly strong opinions about such things, so I try not to be such a person myself!

When I see one side of an AI safety argument being (IMO) straw-manned, I tend to push back against it. That doesn't mean however that I disagree.

FWIW, on AI/bio, my current view is that it's probably easier to harden the facilities and resources required for bio-weapon development, compared to hardening the compute capability and information availability. (My wife is studying virology at the moment so I'm very aware of how accessible this information is.)

