I'm an engineer through and through. I can ask an LLM to generate images just fine, but for a given target audience, for a certain purpose? I would have no clue. None whatsoever. Ask me to generate an image to use in an advertisement for Nuka Cola, targeting tired parents? I genuinely have no idea where to even start. I have absolutely no understanding of the advertising domain, and I don't know what tired parents find visually pleasing, or what they would "vibe" with.
My feeble attempts would be absolute trash compared to a professional artist who uses AI to express their vision. The artist would be able to prompt so much more effectively and correct the things that they know from experience will not work.
It's exactly the same with AI-assisted coding - the result will be trash unless you understand the hows and the whys.
I would never use an LLM for trying to generate a full solution to a complex problem. That's not what they're good at.
What LLMs are good at, and their main value I'd argue, is nudging you along and removing the need to implement things that "just take time".
A few days back, for example, I needed to construct a string with some information for a log entry, and the LLM we use suggested a solution that was both elegant and produced a nicer formatted string than what I had in mind. Instead of spending 10-15 minutes on it, I spent 30 seconds and got something nicer than what I would have written.
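To make that concrete, here is a hypothetical sketch of that kind of "just takes time" chore - the fields and names below are made up for illustration, not the actual code from that day: stitching an event name and a handful of fields into one readable log line.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// logField is a made-up key/value pair; using a slice keeps the output order stable.
type logField struct {
	key, value string
}

// buildLogEntry is a hypothetical example of the small formatting chore
// described above: combine a timestamp, an event name, and a few fields
// into a single readable log line, skipping empty values.
func buildLogEntry(event string, fields []logField) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%s %s", time.Now().UTC().Format(time.RFC3339), event)
	for _, f := range fields {
		if f.value == "" {
			continue // don't clutter the line with empty fields
		}
		fmt.Fprintf(&b, " %s=%q", f.key, f.value)
	}
	return b.String()
}

func main() {
	fmt.Println(buildLogEntry("device_reset", []logField{
		{"reason", "watchdog timeout"},
		{"fw", "1.4.2"},
		{"extra", ""},
	}))
}
```

Nothing difficult - just the kind of ten-minute detour that can be handed off in seconds.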
It's these little things that add up and create value, in my opinion.
> What LLMs are good at, and their main value I'd argue, is nudging you along and removing the need to implement things that "just take time".
I was still learning Java at uni (while being a Python/Lisp fanboy) when I realised this:
- Complex and wordy languages need tooling (like autocomplete, autoformatting) to handle the tedious parts.
- Simple and expressive languages can get away with coding in notepad.exe.
- Lisp, as simple and powerful as it is, needs parenthesis highlighting - you simply can't do without it.
Now, 10-20 years later, you can look back at the evolution of many of these languages; some trends I've observed:
- Java, C#, and C++ have all borrowed a lot from functional languages.
- JVM has Clojure.
- Go stubbornly insists on "if err != nil" - which isn't even the worst part; the fmt.Errorf wrapping that follows is (see the sketch after this list).
- Rust (post-1.0) cautiously moved towards "?"; Zig (still pre-1.0) also has specific syntax for errors.
- Python is slowly getting more verbose, mostly because of type annotations.
- Autoformatters are actually great - you don't even have to care about indenting code as you spit it out - but Python, being whitespace-sensitive, makes them a bit less useful.
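To illustrate the Go point above - this is a generic sketch, not code from any particular project (Config, readConfig, and the JSON file are invented for the example): every fallible call grows its own "if err != nil" block plus a fmt.Errorf wrap, where Rust's "?" covers the same propagation with a single character.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Config is a stand-in type for this sketch.
type Config struct {
	Listen string `json:"listen"`
}

// readConfig shows the ceremony: each fallible call gets its own
// "if err != nil" block, and the fmt.Errorf wrapping adds another
// line of boilerplate on top of that.
func readConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read config %s: %w", path, err)
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parse config %s: %w", path, err)
	}
	return &cfg, nil
}

func main() {
	cfg, err := readConfig("example.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("listening on", cfg.Listen)
}
```

In Rust, the body of readConfig would be roughly two lines each ending in "?", which is the contrast the list item is getting at.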
Good tooling helps you with wordy languages. Expressive languages help you write concise code. Code is read much more often than it's written. Deterministic tooling can work with the structure of the code, but LMs (being probabilistic) can help you work with its intent. Language models are an evolution of automated tooling - they will get better, just like autocomplete got better; but they will never "solve" coding.
In my opinion, there's no truth here, only opinions.
This is an interesting one. Do we own the rights to our own likeness? And if we do, what about doppelgangers - people who look eerily similar to celebrities, or other "unknown" people?
> Do we own the rights to our own likeness? And if we do, what about doppelgangers - people who look eerily similar to celebrities, or other "unknown" people?
Is this somehow a novel question? We've had the same issue since at least the invention of cameras.
AI does not and will not solve the most difficult part of programming:
Expressing how you wish to solve a problem in simple terms.
It doesn't matter if you communicate with a compiler or an LLM - you still need to express your thoughts and ideas without ambiguity for it to produce the desired behavior. What makes "vibe coding" with an LLM both easier and more challenging at the same time is that it will guess what you mean and give you results that "kind of" work even when you express yourself unclearly. For someone who can code, the "kind of work" results can be used as a starting point to evolve into something useful. For someone who can't code, it's an inevitable dead end.
I find that those who struggle with programming have the exact same type of struggles when trying to do it with LLMs - no structured plan for how to approach a problem, and difficulty understanding the context in which they are working.
It's amazing how often it happens in large companies that different people from different organizations are troubleshooting or fixing the same fault, independently of each other, without even knowing. Sometimes you don't even realize until you've implemented a fix that causes a merge conflict with the fix someone else is working on.
This is exactly why I gave up a position as a full stack / devops engineer in favor of going back to low level drivers - there were too many unknowns, and far too many unknown unknowns often paired with expectations of prompt (and cheap) solutions to complicated issues.
Technically it was interesting and challenging, but in terms of stress it was just not worth it. You could pay me twice my current salary and I still would not go back. Now I try to place myself as far away from paying customers as technically possible.
> ...far too many unknown unknowns often paired with expectations of prompt (and cheap) solutions to complicated issues.
That describes pretty much all of my "full-stack" experience.
What sort of job/background do you have where you are writing low level drivers? I'd love to get into that side of things but I don't know where to start.
How'd you manage the transition (back?) to low-level? I would love to do chip work (or really anything systems-y) but all my experience is fullstack/webdev. Every time I apply I get bounced for insufficient domain experience.
I started off my career in low level stuff and transitioned upwards to web. I've always been all over the place in terms of tech, so neither move was a particularly big step. I've usually got something low level-ish going on at home: emulator development, robotics, …
On-call is becoming popular even for low level work. My last few roles have all required it, for reasons I've been unable to figure out beyond "all developers need on-call and you're a developer". In my case, a fix often requires hardware access, and my commute is longer than the start-work SLA.
While I do agree with you, I am a bit concerned about the recent developments with "paid, but still has ads" subscriptions, and how YouTube might slip towards such practices as well once they have a large enough number of paying customers. Their Premium might suddenly not be so... premium.
With respect to that, YouTube Premium has been around for over ten years, the majority of which I've been a subscriber, because adblocking on Apple TV (my primary YouTube experience) is far too much of a fuckabout for me to willingly engage in - and they haven't done it yet. I think Google is well aware that Premium with ads is an utter non-starter as a product. What would you even be paying for then? This isn't like TV+ or Prime where you have exclusives; almost everyone who posts to YouTube would happily jump that ship given enough reason to.
And while there are still ads (sponsored segments), I personally have less of a problem with those, since they are substantial money for the creators I enjoy, and a lot of the ones I watch actually manage to make them pretty funny. And hell, I've even used a couple of their codes for shit over the years. Like, an ad is an ad and some people hate all of them, but I can personally say I've engaged with ads from creators I like at an exceptional rate compared to... virtually every other type of advertising I've ever encountered.
There is certainly a wide gap between a sponsored segment delivered in the same voice and some random ad coming in and blaring over the top. For me, I can handle the narrator delivering an ad; it's the intrusive slot-machine aspect of generic ads that irks me. Happy YouTube subscriber here - I use the music too, great deal.
YouTube is one of the platforms where I find real value, usually in making/maintaining/repairing things. Being able to skip through videos to find answers without worrying about ads definitely saves me significant time, and therefore money.
Yeah I balked for a long time at paying for YouTube, but in the end, I consume magnitudes more YouTube than any other streaming platform. It's my most expensive video subscription but like... I can't say I don't use it.
It’s the only one I pay for. I have Netflix included in my cell plan. Every so often I fire it up to see what they have, find nothing, and go back to the good stuff.
Yep. Most folks I follow will record ads at a different time from the rest of the video too, wearing different clothes and in different lighting. Sometimes I can be bothered to grab the remote, sometimes not, lol.
I totally get it. That said, YouTube premium is worth every single penny and has only gained features over time; no other subscription I have comes close in terms of value.
It seems to still be trialed in only a few regions. It's no ads during videos (there are still display ads during search, etc.), except for Shorts and music videos, for 6€/month - without all the other Premium features.
If your target audience is yourself, there's still plenty of value in writing it down.
Wait long enough and there will be situations where you did something in the past that works really well, but you can't remember anymore what you did or why you did it. An AI doesn't really help you then.
> I'm coming around to the Dark Forest theory, personally, as terrifying as it is.
If you're silent with the intent of remaining hidden, then that behavior must have been learned. Either you evolved from a prey type of species, or a non-apex predator.
It would be strange if humanity were the only apex-ish-predator, intelligent-ish life form in the universe that blasts signals into space without considering who might hear them.
We don't have telescopes capable of receiving accidental emissions from Earth (say, television) at interstellar distances. It seems plausible that you could use the gravitational lens of a star to do so, but you would have to launch a probe to roughly 800 AU for each and every star system you want to monitor.
As we transition to digital technology, our transmissions look more like broadband noise as opposed to having a strong carrier wave; cellular communications in 2025 are far less visible than, say, television broadcasts of 1975.
Deliberate attempts to communicate with other intelligent life are quite forlorn. This message was sent to a globular cluster 25,000 light years away.
There are a few hundred thousand stars there; somebody would have to be looking at our sun in particular at the right time, and then it would take at least another 25,000 years for a reply to reach us - by which time it is likely that we'll be extinct, collapsed back to hunter-gatherers, or maybe advanced but having forgotten that we sent the message, or simply not caring anymore.
> If you're silent with the intent of remaining hidden, then that behavior must have been learned. Either you evolved from a prey type of species, or a non-apex predator.
Or you learned it in a non-evolutionary way, through logical reasoning.
No, it is not necessarily learned. Even among species on Earth, a lot of behaviors are not learned but inherent. In other words, selection acts on a spectrum of behaviors, and those with some fitness advantage are the ones you're likely to see more frequently in the next generation of offspring, continuing until lineages with that behavior are potentially all there is.
A dark forest planet need not learn to be a dark forest planet, in the same way that an earth-colored beetle need not learn to perfectly mask itself against the dirt to hide from a bird; the fitter mutation, given the context of the environment, won out.