Then it is not a “very serious suggestion”. It is a thought experiment which should be taken with commensurate weight.
>Wrong
Explain what “skip a day of meat, do a year of LLMs” is then. If it’s not just an ad for feeling good about using LLMs, what is it?
>The 100,000 number was a throwaway hypothetical to make a point
>Two lines later he threw in a 2,000x too.
Alright, so he said that meat is both 2,000 times and 100,000 times worse than language models. He might have meant 100k, but he could also have meant 2k.
Do you have a real problem in real life where, if somebody called you and said “it’s gotten two thousand times worse” versus “it’s gotten a hundred thousand times worse,” the former would be fine and the latter alarming?
If yes, what is the problem? Why was it a problem at 1x? At 2,000x? At 100,000x? Why would it be a problem at 1x and at 100,000x but not at 2,000x?
> Explain what “skip a day of meat, do a year of LLMs” is then. If it’s not just an ad for feeling good about using LLMs, what is it?
You can stop being part of the problem if you do it. The problem still exists, but you are no longer part of it. You reduced it by more than your fair share. While the problem would stop existing if everyone made the same choice, there's no pretense that that's actually going to happen. LLM companies are not being excused by such an unlikely hypothetical.
j-lb also made an argument for not caring much about LLMs at all, but it was separate from the "skip a day of meat" argument. That's where the big multiplier comes in. But again, it's a separate argument.
I don't want to argue about the example ratio he used. The real ratio is very big if the numbers cited earlier are correct. So if you're going to sit here and say that 2,000x might as well be arbitrarily large, then I think you just joined the "LLM resource use doesn't matter" team: going by the above citation, 2,000x is in the ballpark of the correct number, so LLM use is 1 divided by something arbitrarily large, which makes it negligible. Congrats.
Just wanted to chime in and say you represented my case perfectly and got all my points (and their separation) 100%!
You're right, I never said we should not care about LLMs because we also "rightfully don't care about meat".
To me the whole AI resource discussion is just a distraction for people who want to rally against a new scary thing, but not look at the real scary thing that they've simply gotten used to over the years.
In a sense it's the “banality of evil”, or maybe the “banality of self-destruction”:
The “banality of evil” is the idea that evil does not have the Satan-like, villainous appearance we might typically associate it with. Rather, evil is perpetuated when immoral principles become normalized over time by people who do not think about things from the standpoint of others.
We've gotten so used to using huge amounts of resources in our day-to-day lives that we are completely unwilling to stop and reflect on what we could readily change. Instead we fight against the new and shiny, because it tells a better story, distracting us from what really matters.
In a sense we are procrastinating on changing.
It's not Skynet-like AI that is going to be the doom of humankind, but hot dogs, taking the car for your commute, and shitty insulation.