It's not intended to be uncharitable - you clearly value many things I do (the world needs less attention-seeking, energy-greedy, shitty software).
I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.
My proposition is that without hands-on experience, your information is limited to media narratives, and it seems like the "AI is net bad" narrative is the source of your perspective.
Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.
But, I'm of the opinion that:
A) The technology is not hype, and is getting better
B) That it can, and will, be built -- Time horizon debatable.
C) That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
If anything, more people like you need to be engaging it to have grounded perspectives on what it could become.
> I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience.
Okay, I think I got your intent better, thanks for clarifying.
You can add discussion with other people outside software media, or opinion pieces outside media (I would not include personal blogs in "media", for instance, but would not be bothered if someone did), including people who have tried it and people who haven't. The media are also not uniform in their views.
But I hear you, grounded perspectives would be a positive.
> That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
I hear you as well, makes perfect sense.
OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.
The only way out I can see is inventing the thing that will make LLMs irrelevant but doesn't have their fatal flaws. That's quite the undertaking, though.
We would not be competing on an equal footing: LLM providers have been doing things I would never have dared even consider: ingesting a considerable amount of source material while completely disregarding its licenses, hammering everyone's servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, and burning an insane amount of money even on paid plans. It feels very brutal.
Can an LLM be built while avoiding all of this? Because otherwise, I'm simply not interested.
(Of course, the discussion has shifted quite a bit! The initial question was whether a dev not using LLMs would remain relevant, but I believe this was addressed at length in other comments already.)
My point on the initial discussion remains, but it also seems like we disagree on the foundations/premise of the technology.
The actions of a few companies do not invalidate the entire category. There are open models, trained on previously aggregated datasets (which, for what it's worth, nobody had a problem with being collected a decade ago!), and ongoing research to make training and usage more efficient.
The technology is here. I think your assessment of its relevance is not informed by actual usage, that your framing of its origins is black-and-white (rather than reflecting the actual landscape of different model approaches), and that your lack of interest in using it does nothing to change the absolutely massive shift that is happening in the nature of work. I'm a Product Manager, and the Senior Engineer I work with has been reviewing my PRs before they get merged - 60%+ were merged without much comment, and his bar is high. I did half of our last release while also doing my day job. Safe to say, his opinion has changed based on that.
Were they massive changes? No. But they absolutely factor into the decision calculus of what it takes to build and maintain software.
The premise of my argument is that what you see as "fatal flaws" is an illusion created by bias (which bleeds into the second-hand perspectives you cite just as readily as it does the media), rather than something you have directly and personally validated.
My suggestion is to be an objective scientist -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible, and then ask yourself, if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology and your willingness to adopt it.
I believe that it's coming, not because the hype machine tells me so (and it's WAY hyped) - but because I've used it, seen its flaws and strengths, and can forecast how quickly it will change the work I've been doing for over a decade, even if it stopped getting better (and it hasn't stopped yet).
Among the fatal flaws I see, some are ethical/philosophical regardless of how the thing actually performs. I care a lot about this. It's actually my main motivation for not even trying. I don't want to use a tool that has "blood" on it, and I don't need experience using the tool to assess this (I don't need to kill someone to assess that it's bad to kill someone).
On the technical side, I do believe LLMs are fundamentally limited by their design and are going to plateau, but we shall see. I can imagine they can already be useful in certain cases despite their limitations. I'm willing to accept that my lack of experience doesn't make my opinion very relevant here.
> My suggestion is to be an objective scientist
Sure, but I also want to be a reasonable Earth citizen.
> -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible
Yeah… but no, I won't. I don't think it will have much practical impact. I don't feel like I need this anecdotal experience; I wouldn't use it either way. Reading studies will be far more relevant anyway.
> and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology
I doubt it, but I'm open to changing my mind on this.
> and your willingness to adopt it.
Yeah, if the thing is actually responsible (I very much doubt that's possible), then indeed, I won't limit myself. I'd try it and might use it for some stuff. Note: I'll still avoid any dependency on any cloud for programming - this is not debatable - and in 6-12 months, I won't have the hardware to run a model like this locally unless something incredible happens (including not having to depend on proprietary nvidia drivers).
What's more, an objective scientist doesn't use anecdotal data points like their own personal experience; they run well-designed studies. I will not conduct such studies. I'll read them.
> I think that it also seems like we disagree on the foundations/premise of the technology.
Yeah, we have widely different perspectives on this stuff. It's an enriching discussion. I believe we've said most of what there is to say.