Even "vibe coding" was coined as a term describing extremely experienced engineers (the person who coined it among them) using LLMs to rapidly experiment with refactoring and prototyping, as if pair programming. The negative connotations it has since picked up are an interesting semantic shift.
I completely forgot about prompt engineers. I now remember startups offering them insane salaries a few years ago. IMO the next iteration will be planning engineers, since the new thing seems to be the plan/execution split in these vibe coding tools.
> I know that AI will make it possible for others to do so, even if they drop the ball.
It seems you completely missed the point of Miyazaki's work. You can watch a video of Miyazaki watching an AI-generated animation and see what he thinks about generative "art" [1].
That video you linked was hilarious. Oh how I wish more people would have Miyazaki’s way of thinking.
I don’t mean that they should necessarily have his exact same opinions on things. I mean that they should think through things and approach them in the same process and manner that Miyazaki does.
I hope fewer people evaluate animation techniques the way Miyazaki did in that presentation. Likening a fantasy zombie character's movement to a disabled person and calling it an affront to life itself validates the exact look and feel the animation team was going for, yet he frames it as a negative without offering any logical reason.
This is dramatic of me to say, but I can sincerely claim that anyone in my division who pulled something like this would be demoted or let go, if for no other reason than evaluating a technical product using only emotional language.
In the world of art, people who care more about the technical product than about emotion tend to make disposable art that does not resonate across generations.
I like what Miyazaki did, but I don't think his way of thinking is the one true way. Sure, generative zombies have no place in his idealized past, but they have places in other film and media. I think Miyazaki was wrong in his judgement.
I get being optimistic, but there are a lot of ethical considerations that we're choosing to ignore. The result is techno-feudalism.
Sure, AI can help me with small things, but it's weird to be the guy preaching the gospel. In the end this is a product, sold by people who have more power than a single person has ever had. They can do the marketing and hype; my interest lies in staying skeptical, especially with the incoming storm of AI-generated misinformation and the wave of students getting through university by cheating with AI.
I don't think that is really true. There are many open-weights models you can run yourself, including state-of-the-art models like DeepSeek. Right now it's still expensive to run them at a reasonable speed, but, for instance, a $9,500 Mac Studio can run DeepSeek at a reasonable, if not spectacular, speed.
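To put rough numbers on that claim (back-of-the-envelope only; the 671B parameter count is DeepSeek-V3's published size, and real usage adds KV-cache and activation overhead on top of the weights):

```python
# Rough estimate of the memory needed just to hold a model's weights:
#   bytes = parameter_count * bits_per_parameter / 8

def weights_size_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billions * bits_per_param / 8

# DeepSeek-V3 has roughly 671B parameters.
fp16 = weights_size_gb(671, 16)  # 1342.0 GB: out of reach for one workstation
q4 = weights_size_gb(671, 4)     # 335.5 GB: fits in a 512 GB unified-memory
                                 # Mac Studio, with room left for the KV cache
print(fp16, q4)
```

The 4-bit quantization is also part of why "reasonable, if not spectacular" is the right expectation: you trade some quality, and you still run well below datacenter speeds.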
You can, just like you can theoretically use something else than Windows or MS Office. Until it's an entire ecosystem after a decade or two, and a workforce trained for that ecosystem, instead of right now easily substitutable AI providers.
The current huge investments into the AI providers are not made because the investors are looking forward to everlasting fierce competition far beyond the initial stage.
> You can, just like you can theoretically use something else than Windows or MS Office. Until it's an entire ecosystem after a decade or two, and a workforce trained for that ecosystem, instead of right now easily substitutable AI providers.
Author here: I had that thought for a while but I don't currently think that AI will unfold the same way (and I mentioned that in the post linked). I believe that at the speed this is going, and the innovation happening everywhere this will be a market with many models and players.
Nobody says it is going to happen next year, or even within five years.
But eventually, as the technology matures and gets more and more integrated into businesses' IT, they will have ever more dependencies around the AI: glue code on several levels (from dev-ops to user-level code), third parties, training. There is no technical reason for React to dominate JavaScript GUI development either, and there are plenty of alternatives. And that is a far milder amount of external dependency than you get with fundamental tech like AI, since React sits far down toward the leaf nodes of business dependencies.
If people start adding a lot of code around a solution, plus training, and include third-party apps, libs, or tools, they have to choose one.
Unless you think that the interfaces and APIs for all the basic AIs will be standardized, and you will be able to exchange them while keeping everything you yourself added around that AI?
Or do you think those AIs will be used as-is, with no additional dependencies added on the business side? And that being trained on one (as an average person, not one of the leading-edge people interested enough to teach themselves, as they do now) means you can use them all?
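Concretely, the standardization in question does partially exist today: many providers mimic the OpenAI chat-completions request shape, so the model call itself is a config change. A minimal sketch (base URLs and model names below are illustrative assumptions); note that everything a business builds around this call is exactly the glue that doesn't transfer:

```python
# Sketch: if providers expose the same chat-completions request shape,
# swapping one for another is a config change. The open question is
# whether everything *around* the model swaps as easily.

# Base URLs and model names are illustrative, not authoritative.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",   "model": "gpt-4o"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
    "local":    {"base_url": "http://localhost:8080/v1",    "model": "local-model"},
}

def chat_request(provider: str, prompt: str) -> dict:
    """Build the shared request body plus the provider-specific endpoint."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "body": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The messages payload is identical across providers; only url/model differ.
a = chat_request("openai", "hi")
b = chat_request("deepseek", "hi")
print(a["body"]["messages"] == b["body"]["messages"])  # True
```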
Well, if you don't want that future a decade from now, then vote now by choosing which AI provider to use.
But if you say "we have to use the same AI provider that everyone else uses, because that's where the ecosystem will be generated", well, you're contributing to there being only a few that make it far enough to generate ecosystems.
Exactly. It seems like, for him, all the controversy about AI boils down to "will AI take my very own (white, privileged programmer) job".
and he doesn't even think about those:
- will it feed disinformation and disrupt democracies (like it has already proven to)
- will it be used to kill people (cf war in Gaza)
- will it require underpaid work from data labelers in Africa and Asia
- will it emit CO2 and consume energy resources that would be better allocated elsewhere (he doesn't care that he's now using much more energy, not to code faster but to "be able to read a book meanwhile"; well, nice, one more privilege for the white western guy, and one more thing to suck up for the people living in climate-vulnerable locations)
etc
the fact that those guys are so naive and disconnected is really tiring
Author here. I am absolutely thinking about those too. I just also happen to think that none of those issues are dramatically changed by AI. Disinformation was a problem prior to AI too, and social networks are a much bigger harm in comparison; income inequality and global warming were problems prior to AI as well.
I absolutely see a trend of piling all the world's problems on top of AI, and I think that this is wrong. These problems need to be dealt with in isolation from AI, because they are largely non-AI issues.
"I just also happen to think that all those issues are not dramatically changing because of AI"
-> this is your white-westerner privilege right here: "I just happen to think it's not such a bad thing" (mainly because you're not confronted with the problems it already generates for people in vulnerable positions)
that's exactly the point I'm making
Yes, you "happen to think that all that is not too bad" BECAUSE you're privileged, and lacking the empathy or will to put time into thinking about how it already dramatically affects some populations. You're mostly just proving my point.
While I agree, I feel we can't dismiss a technology just because we don't agree with the values of the people doing it, you know? The tech is amazing, let's build it in a nice way.
To the contrary, no one worried about its failure states is dismissing it.
They believe that even if it doesn’t match its hype, it’s going to destabilize society.
It’s already undermining being rewarded for art. You want to be paid while you learn how to make excellent artwork and CGI? Well, just how will that work?
Phishing scams are now profitable against types of victims they weren't before.
Education is taking such a hammering that "biblical" is a fair adjective to apply. Most courses will have to revert to pen-and-paper exams, a reversal of digitization changes made since the '90s.
Why are we doing all this to ourselves? So the rich get even richer while the rest is even worse off?
Why are smart minds enabling this? My university had mandatory computer science ethics classes. I used to think they were a waste of time. Clearly, I was wrong.
> The tech is amazing, let's build it in a nice way.
We tried that before. During the early 2000s there was huge optimism about tech and the democratisation of information: people would be better informed, with access to all the knowledge in the world.
In the end it wasn't built in a nice way. Moneyed interests took over, social media exploded, and a few companies captured a lot of different markets after getting extremely well capitalised, buying competitors to stamp them out, or buying them to integrate into their own ecosystems and control new markets (e.g. social media again).
The tech is amazing; the corporations behind it, not so much. The capital investments required are absurdly large, which gives even more power to already-capitalised entities that, generally speaking, do not behave in moral and ethical ways.
There's no opportunity to build it in a nice way; that's not where the incentives are, so inevitably that's not where it will go. Hence the pessimism about it, founded on historical facts.
I guess it depends. Podcasts are often about storytelling, and storytelling isn't about being information-dense. Podcasts are hamburgers, which might be a good vehicle for delivering a piece of vegetable into your body.
Not all podcasts are created equally. Some (not that few) are fantastic pieces of investigative journalism. There’s drivel in every medium, podcasts aren’t special.
I think the most important part is that if you're bad at articulating your ideas, AI tools will keep you in that place. You've basically given up on improving, and you've chosen to be represented by the AI's level from here on.
> Even if you do all this and still get fired, it's not about your skills. It's just how things work sometimes. But if you've built up AI skills, side projects, and a deep understanding of your industry, you're in a strong position to start something of your own—freelancing, consulting, or even launching a startup.
"just how things work sometimes" - very insightful. Fired? Then you're in a great position to launch a startup!
I don't doubt you're right about social media and smartphones rotting our attention spans. But also, peripatetic philosophy is ancient. I spend most of my day sitting. Whether it's work, entertainment, or hobbies, most of these things have me sat in front of a screen. So it's nice, and I do think it increases my retention, to be able to do something while walking or cycling instead of sitting.
So that's the cool part, I think: instead of wasting time on social media and news-cycle composting, waste time on this instead. I think this is the general direction all media is headed, regardless of whether one agrees with it or not. Feed it whatever you want and it will shuffle together a plot, just for you.
Sorry for not discussing the product itself, but...
I'm just not seeing how a machine that is "likely correct" and constantly interrupts the "operator" is that much of a win. I have seen some software influencers reflect on how much more fun coding became after dropping the LLM assistant.
All of these feel like offerings to the Productivity God. As a salaried guy I'll never get excited that I can do more during my work day; it's already easy to hit my capacity.