This is spot on because there can’t be two issues that exist simultaneously. There can only be one thing that wastes enormous amounts of energy and that thing is beef
You can try to misconstrue and ridicule the argument,
but that won't change the math: if you have one thing that causes 1 unit of damage, and another thing that causes 100,000 units of damage, then for all intents and purposes the thing that produces 1 unit of damage is irrelevant.
And any discussion that tries to frame them as somewhat equally important issues is dishonest and either malicious or delusional.
My guess, as I've expressed earlier in the comment chain, is that it's emotionally easier for people to bike-shed about the 0.01% of their environmental impact, than to actually tackle things that make up 20%.
And no, it's not only beef (which is a stand-in for meat and dairy); another low-hanging fruit is transport, like switching your car for a bike.
But switching from meat and dairy to a vegan diet would reduce your personal environmental impact, in terms of CO2, by up to 20%.
And about 80-90% of rainforest deforestation is driven directly or indirectly by livestock production.
So it's simply the easiest, most impactful thing everyone can do. (Switching your car for a bike isn't possible for people in rural areas, for example.)
>1 unit of damage, and another thing that causes 100,000 units of damage, then for all intents and purposes the thing that produces 1 unit of damage is irrelevant
You make a good point. A problem is only a real problem if you can’t find a bigger thing that makes it look small by comparison. For example, the worldwide concrete industry creates more CO2 than beef does, so there is no reason to stop eating beef if you enjoy it.
Now I know that some might say “all of this is cumulative” or “the material problems that stem from entrenched industries are actually a reason not to invent completely novel wasteful things rather than a justification for them”, but in reality only two things are true: only the biggest problem is real, and the only problem is definitely some other guy’s doing. If I waste x energy and my neighbor wastes y, a goal of reducing (x+y) is oppressive, whereas a goal where I just need to keep x lower than y feels a lot nicer.
I agree. Humans have been eating meat and doing construction for the entire history of civilization; those are not the sort of things that could be affected by posting online. LLMs, on the other hand, are new, largely in the hands of a small handful of companies, and a couple of those companies are bleeding cash in such a way that they might actually respond to consumer pressure. It is cynical to compare them to things we know will not change in order to justify a blanket excuse for them.
Seeing as wasteful models are integral to the revenue of companies like OpenAI and Anthropic, the more people tell them that the right business strategy is to perpetually build data centers and power plants, the less incentive they have to build models that run efficiently on consumer hardware.
They just suggested a different bike shed, one that, for the purposes of their argument, won’t ever get fixed. j-pb’s point is that running a bunch of generators 24/7 in Memphis is fine because people eat meat. Inefficient LLMs in the real world are okay because people could theoretically become vegan but have not. It’s just a thought experiment.
If something costs too much, and you find a way to completely pay for it, that's not bikeshedding.
And it's not a thought experiment. It's a very real suggestion. If you're worried about the resource cost from your personal use, doing something to 100% offset it lets you stop worrying.
> become vegan
For one day per year. Replacing a day you would have otherwise eaten meat. That is an extremely attainable action for anyone who cares about LLM resource use enough to strongly consider avoiding them. It's not something that "will not change".
By the way, your goal of running efficiently on consumer hardware isn't as great as it sounds. One of the best ways to improve efficiency is batching multiple requests, and datacenter hardware generally uses more efficient nodes and runs at more efficient clock speeds. There's an efficiency sweet spot where models are moderately too big to run at home.
And it really undermines your argument when you throw in this stupid strawman about Elon's toxic generators. You know j-pb was talking about typical datacenter resource use, not that. Get that insulting claim out of here.
It is only a “very real suggestion” if you believe that your argument might be effective.
Do you believe that “skip meat for a day, use LLMs for a year” will have a climate impact?
Because if not, then you agree with me that in this case theoretical vegans are just being used to justify more real consumption, not less.
>stupid strawman about elon's toxic generators
They exist in the real world, right now. It is a real phenomenon, and no matter how many vegans I imagine, it’s still there. I’m not really clear on why the real thing that’s really happening is a strawman, unless you think that the existence of that system is so bad that it undermines your position. Even then it wouldn’t be a strawman, just a thing that doesn’t support your position that using LLMs is categorically fine because you can picture a vegan in your head.
> Do you believe that “skip meat for a day, use LLMs for a year” will have a climate impact?
If "use LLMs for a year" is enough to count as having a climate impact (negatively), then yes, I believe "skip meat for a day, use LLMs for a year" is enough to count (positively).
I'd be tempted to write off both of those, but the whole point of your argument is to consider LLM resource use important, so I'm completely accepting that for the sake of the above argument.
There are no theoretical vegans involved.
And the suggestion doesn't even involve vegans, unless there's a massive contingent of Americans who only eat meat one day per year that I wasn't aware of.
And to get at what I think is your core objection: The fact that people can do this isn't being used to let companies off the hook. If only 2% of LLM users set up a meat skipping day, then LLM companies are only 2% let off the hook.
But at the same time let's keep a proportional sense of how big the hook is.
> They exist in the real world, right now. It is a real phenomenon
The strawman is you accusing people of supporting those generators.
> your position that using LLMs is categorically fine
>If "use LLMs for a year" is enough to count as having a climate impact (negatively), then yes, I believe "skip meat for a day, use LLMs for a year" is enough to count (positively).
Sorry, I should have clarified. In this case I meant “argument” as a thing that leads real people to either understand or agree with your position, not the construction of an idea in your mind.
With that in mind, do you think that “skip meat for a day, use LLMs for a year” will convince enough real people, in real life, to not eat meat that it offsets the emissions from LLM use?
Like imagine the future.
Since LLM use is a new category of energy use, you would have to convince people who haven’t already been convinced to skip meat by animal cruelty, health, philosophy, or existing climate concerns. People who were vegan before LLMs became popular obviously don’t count. The group of people that resisted decades of all that messaging will now make a meaningful adjustment to their consumption to cancel that out, and there will be enough of these new part-time/full-time vegans that it offsets the entire chatbot industry’s energy usage.
Do you imagine that being what happens?
If not, it’s just somebody advocating for increased consumption in real life by invoking imaginary vegans.
As somebody who’s spent years as a vegan, I am incredibly wary of “vegans can recruit” as a pitch. I’ve only ever heard it from people who have never tried to recruit in earnest, or from charlatans. Mostly I’ve heard it from people who are not, never have been, and have no interest in being vegan.
Edit:
>The strawman is you accusing people of supporting those generators.
That’s not what a strawman is, and it’s not an accusation, it’s an observation. If you say “I want subscription-based, online, batched, mega-high-compute language models,” you are advocating for that industry, and those generators are part of it. Saying you feel that they’re somehow special and different because they’re icky does not make them any different from the thing that you say is necessarily the future. That you want!
I think anyone that does get convinced and skip meat should be able to use LLMs without shame or guilt, while we continue to pressure everyone else to save resources and we continue to pressure LLM companies to save resources.
LLM companies only get let off the hook if a very large fraction of their users do the meat skip thing, which is not very likely but could theoretically happen.
LLMs being a new category of energy use should get them some extra scrutiny, but only some. Maybe 3x scrutiny per wasted kilowatt hour compared to entrenched uses? If our real motivation is resource use, and not overreacting to change, LLMs should get some pressure but most of the pressure should go toward preexisting wasteful uses.
Nobody is advocating to ignore LLMs. But we shouldn't overstate them too much either.
And the giving up meat defense is not a defense for the companies, it's a defense for individual users that actually do it.
Like, not an if-or-maybe thing: what do you see when you picture the future?
Do you think “skip meat for a day, use LLMs for a year” will produce enough new vegans to offset the energy usage and CO2 produced by the LLM architecture of your choice?
Not asking if you want it to happen or if it’s something you can imagine could happen; I’m asking if you think it will.
[_] yes
[_] no
Because if no, then the idea is just advocating for increased real consumption by invoking imaginary vegans!
Edit:
>LLM companies only get let off the hook if a very large fraction of their users do the meat skip thing, which is not very likely but could theoretically happen.
The person I was initially talking to took the position that LLM companies have negligible impact because people can be vegan. j-pb was saying that LLM companies shouldn’t be on anybody’s radar because, uh, meat is 100,000 times worse.
The person you hopped in to defend was saying that LLM companies do not and should not have a “hook” because meat eaters exist
> It was a yes or no question [...] I’m asking if you think it will
[x] no
> Because if no, then the idea is just advocating for increased real consumption by invoking imaginary vegans!
Wrong.
> The person I was initially talking to took the position that LLM companies have negligible impact because people can be vegan.
He said "LLMs are not the problem here", which is true.
And he was arguing for individual use being offset when he said "maybe use ChatGPT to ask for vegan recipes".
The top level comment was also about individual use. "I would really like it if an LLM tool would show me the power consumption and environmental impact of each request I’ve submitted."
The comments right before you replied were also about individual use. "lifestyle choice".
> j-pb was saying that LLM companies shouldn’t be on anybody’s radar because, uh, meat is 100,000 times worse.
The 100,000 number was a throwaway hypothetical to make a point, not a number he was applying to LLMs in particular. Two lines later he threw in a 2,000x too.
And what he said is that LLM companies are not "somewhat equally important". Which is true. He didn't say you should ignore them entirely, just to have a sense of proportion.
-
Edit: Here is an important distinction that I think isn't getting through. There are multiple separate points being made by j-pb:
Point A, about not eating meat for a day, is only excusing anyone that actually does it. It's not a hypothetical that excuses the entire company.
Point B, about the size of the impact, suggests caring less about LLMs based on raw resource use. Point B does not care about the relatively small group of people that take up the offer in Point A. Point B is just looking at the big picture.
Then it is not a “very real suggestion”. It is a thought experiment, which should be taken with commensurate weight.
>Wrong
Explain what “skip a day of meat, do a year of LLMs” is, then. If it’s not just an ad for feeling good about using LLMs, what is it?
>The 100,000 number was a throwaway hypothetical to make a point
>Two lines later he threw in a 2,000x too.
Alright, he said that meat is 2,000 times worse than language models as well as 100,000 times worse than language models. He might have meant 100k, but he could also have meant 2k.
Do you have a real problem in real life where, if somebody called you and said “it’s gotten two thousand times worse” versus “it’s gotten a hundred thousand times worse”, the former would be fine and the latter alarming?
If yes, what is the problem? Why was it a problem at 1x? At 2,000x? At 100,000x? Why was it a problem at 1x and 100,000x but not at 2,000x?
> Explain what “skip a day of meat, do a year of LLMs” is, then. If it’s not just an ad for feeling good about using LLMs, what is it?
You can stop being part of the problem if you do it. The problem still exists, but you are no longer part of it. You reduced it by more than your fair share. While the problem would stop existing if everyone made the same choice, there's no pretense that that's actually going to happen. LLM companies are not being excused by such an unlikely hypothetical.
j-pb also made an argument to not care much about LLMs at all, but it was separate from the "skip a day of meat" argument. That's where the big multiplier comes in. But again, separate argument.
I don't want to argue about the example ratio he used. The real ratio is very big if the numbers cited earlier are correct. So if you're going to sit here and say 2,000x might as well be arbitrarily large, then I think you just joined the "LLM resource use doesn't matter" team, because going by the above citation 2,000x is in the ballpark of the correct number, so LLM use is 1 divided by arbitrarily large, making it negligible. Congrats.
Just wanted to chime in and say you represented my case perfectly and got all my points (and their separation) 100%!
You're right, I never said we should not care about LLMs because we also "rightfully don't care about meat".
To me the whole AI resource discussion is just a distraction for people who want to rally against a new scary thing, but not look at the real scary thing that they have just gotten used to over the years.
In a sense it's the `banality of evil`, or maybe `banality of self destruction`:
The “banality of evil” is the idea that evil does not have the Satan-like, villainous appearance we might typically associate it with. Rather, evil is perpetuated when immoral principles become normalized over time by people who do not think about things from the standpoint of others.
We've gotten so used to using huge amounts of resources in our day to day lives, that we are completely unwilling to stop and reflect about what we could readily change. Instead we fight against the new and shiny, because it tells a better story, distracting us from what really matters.
In a sense we are procrastinating on changing.
It's not Skynet-like AI that is going to be the doom of humankind, but the hot dogs, taking your car for the commute, and shitty insulation.
> Whatever you need to tell yourself to keep eating meat buddy.
I’m not the one who brought up moralizing or food. I can’t really comment on your relationship with your diet, but it kind of seems like you saw somebody mention power usage and, unprompted, shared “well I don’t eat meat or cheese or yogurt”. So I guess keep that up while you use enough energy to power your home to write some code slower than you would without it?
I am not even sure most people could articulate their morals. It's not just that they have never heard of things like moral absolutism or consequentialism; people's understanding of sympathy and empathy has similarly atrophied.
I took a look at your resume to see if I would have relevant work for you but doesn't seem like it.
Maybe having vibecoding listed as a skill on your resume is a problem?
Alarm bells also go off when I see "GitHub (advanced)".
While you are powerless to change it, I would also be concerned reviewing this resume, as, with the sole exception of your consultancy, your longest tenure anywhere is just two years.
thanks. this is the fifth iteration of my resume in this last year's search. I'm clearly trying to push for ai-coding, as I think I was often overlooked for being too 'trad'. in reality I'm all-in on ai.
I understand that, and from the rest of your writing on your site that very much shows. I use AI professionally to great effect. Personally, I wouldn't put something similar in tone on my resume, and when I review resumes this language is not something I'm looking for either.
I'll point out that what is your reality in your job market might be far different from mine. I'm in Europe.
I try to screen out people who come across as zealots or dogmatic about just about anything. Everything can have its time and place, PHP included ;)
I look for people who are pragmatic, and I doubt I represent the "people who are hiring" pool to a great extent. But I am hiring, and I can just tell you what I see here and how I see it.
I am not sure if this will help you, but have an extended, deep conversation with ChatGPT about your resume. Tell it who you are, what you excel at, and list projects and technologies. Then, paste a couple of the job postings that did not work for you.
This might sound silly to you, but it absolutely works, because it will distill your experience better, ask you to re-arrange and generalize, and more importantly, it is far superior to us in finding unique key word combinations that work.
Look, I don't want to pile on, but I ran the latest version of your resume through ChatGPT with o3, and it pointed out several things to fix or improve, which, as a human who has interviewed 100s of candidates, I mostly agree with. Hopefully this is helpful.
Look I don't want to be rude, but your resume screams "Dear AI Overlords I beg of you, please hire me! See? I did all the AI things to please you! Please don't abandon me, sniff"
It's downright comical.
My biggest problem with your resume is that it feels oddly vague and empty... "API development"? Come on.
20 years of full-stack development experience? React is the de facto standard and, as of today, is 12 years old, yet it's absent. Absent! NodeJS? Absent!
Now think about what the key advantage of a highly experienced engineer is. Of course! It's the experience!
What you really should be doing is building a meta-resume that contains all of your marketable skills and experiences. Because you're experienced and know a lot of things, that resume will be too long, so you then tailor it to the job posting and cut out all the irrelevant parts to stay under two pages.
Since you are so obsessed with AI, you could even let the AI cut your resume down (don't let it write new things) and then just send it off. What you 100% certainly shouldn't do is let it write the resume itself.
I suggest putting vibecoding into the search bar on HN or YouTube to look at the critical side of how it's perceived. I'm not a professional coder, but based on hanging out here it seems somewhat looked down on by some? I'm guessing it's like how loads of people use ChatGPT to draft emails but would prefer you didn't know, or think it's a positive (again, I'm not a professional coder, so that's the best analogy I had).
There are a lot of small companies with home-built software they need maintained. Go to the small businesses with the largest buildings in your area. They have something whipped up that they need fixed, expanded, etc.
I assume they mean this feature is built into the JVM itself, whereas Kotlin's lateinit more or less "just" desugars into code you could otherwise write yourself.
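To make the "code you could otherwise write yourself" point concrete, here is a rough Java sketch of what a Kotlin `lateinit var` boils down to. The class name, property name, and the exact exception type are illustrative, not the actual Kotlin-generated bytecode (Kotlin throws its own `UninitializedPropertyAccessException`):

```java
public class LateinitSketch {
    // Backing field starts as null, like a Kotlin `lateinit var name: String`.
    private String name;

    public String getName() {
        // Kotlin inserts a check like this on every read of a lateinit property.
        if (name == null) {
            throw new IllegalStateException(
                "lateinit property name has not been initialized");
        }
        return name;
    }

    public void setName(String value) {
        this.name = value;
    }

    public static void main(String[] args) {
        LateinitSketch s = new LateinitSketch();
        s.setName("demo");
        System.out.println(s.getName());
    }
}
```

Reading the property before assigning it throws, which is the whole contract of `lateinit`: no nullable type in the API, but a runtime check instead of a compile-time guarantee.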
Wouldn't a potential workaround be to create a new barebones repository and push the repacked one there? Sure, people will have to change their remote origin but if it solves the problem that might be worth the hassle?
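Something like this, sketched with local paths standing in for the real remotes (the `/tmp/repack-demo` layout is just for illustration):

```shell
set -e
rm -rf /tmp/repack-demo

# Stand-in for the existing repository with history to repack.
git init -q /tmp/repack-demo/old
cd /tmp/repack-demo/old
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Repack the existing history into a fresh pack.
git repack -a -d -q

# Create the new barebones repository and push the repacked history into it.
git init -q --bare /tmp/repack-demo/new.git
git push -q /tmp/repack-demo/new.git HEAD:refs/heads/main

# Each existing clone then repoints at the new repository.
git remote add origin /tmp/repack-demo/new.git
git remote get-url origin
```

In practice the last step is the hassle mentioned above: every contributor runs `git remote set-url origin <new-url>` once, and anything keyed to the old URL (CI, issue links, forks) has to be migrated separately.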
Not to mention how much body language helps resolve tone when saying something in person. Telling someone to do a thing face-to-face can easily be neutrally charged. The same sentiment expressed in text can carry more weight: it can feel like an order and appear more urgent, leading to stress.
Do ship fast until it is known what the application will even be, what the feature set is, and what needs the user actually has, not just what it is believed they need.
Don't sink a bunch of time into "cleanly" exploring the fog of war. Gather as much insight as possible as soon as possible. Only then will you be equipped to actually architect for the domain as it is, rather than as you, the developer, think it might be.