I am not talking about usability or accessibility but rather just a nice feeling of using the UI. Of course that is subjective, but if I click and the response appears in what feels like zero time, then to me that is much better than lag and/or animation.
That would indeed be pointless, because I was originally replying about a single UI interaction, where it doesn't make a huge difference whether it happened in 2 or 5 frames.
You're trying to bring in continuously changing frames here, which is obviously perceived differently.
No, it's sometimes just extremely easy to recognize people who have no idea what they're talking about when they make certain claims.
Just like I can recognize a clueless frontend developer when they say "React is basically just a newer jQuery", recognizing clueless engineers when they talk about AI can be pretty easy.
It's a sector that is both old and new: AI has been around forever, but even people who worked in it years ago are taken aback by what is suddenly possible and the workflows that are emerging... hell, I've even seen cases where it's the very people who have been following GenAI forever who have a bias toward believing it's incapable of what it can actually do.
For context, I lead an AI R&D lab in Europe (https://ingram.tech/). I've seen some shit.
Define "not trivial". Obviously, experience helps, as with any tool. But it's hardly rocket science.
It seems to me the biggest barrier is that the person driving the tool needs to be experienced enough to recognize and assist when it runs into issues. But that's little different from any sophisticated tool.
It seems to me a lot of the criticism comes from placing completely unrealistic expectations on an LLM. "It's not perfect, therefore it sucks."
As of about three months ago, one of the most important skills in effective LLM coding is coding agent environment design.
If you want to use a tool like Claude Code (or Gemini CLI, Cursor agent mode, Codex CLI, or Qwen Code) to solve complex problems, you need to give them an environment they can operate in where they can solve that problem without causing too much damage if something goes wrong.
You need to think about sandboxing, and what tools to expose to them, and what secrets (if any) they should have access to, and how to control the risk of prompt injection if they might be exposed to potentially malicious sources of tokens.
The other week I wanted to experiment with some configuration optimizations for my Fly.io-hosted containers. I used Claude Code for this by:
- Creating a new Fly organization which I called Scratchpad
- Assigning that a spending limit (in case my coding agent went rogue or made dumb expensive mistakes)
- Creating a Fly API token that could only manipulate that organization - so I could be sure my coding agent couldn't touch any of my production deployments
- Putting together some examples of how to use the Fly CLI tool to deploy an app with a configuration change - just enough information that Claude Code could start running its own deploys
- Running Claude Code such that it had access to the relevant Fly commands, authenticated with my new Scratchpad API token
With all of the above in place I could run Claude in --dangerously-skip-permissions mode and know that the absolute worst that could happen is it might burn through the spending limit I had set.
This took a while to figure out! But now... any time I want to experiment with new Fly configuration patterns I can outsource much of that work safely to Claude.
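To make that concrete, here is a rough, untested Python sketch of scripting such a setup. The org name is made up, the exact token-creation flags may differ from what's shown (check `fly tokens create org --help`), and the spending limit is configured separately in the Fly dashboard:

    # Rough sketch (untested): assumes the `fly` and `claude` CLIs are
    # installed and already logged in.
    import os
    import subprocess

    ORG = "scratchpad"  # hypothetical throwaway organization

    def run(args):
        # Run a CLI command, fail loudly, and return its stdout.
        return subprocess.run(args, check=True, capture_output=True, text=True).stdout.strip()

    run(["fly", "orgs", "create", ORG])                         # isolated org
    token = run(["fly", "tokens", "create", "org", "-o", ORG])  # org-scoped token

    # Launch Claude Code with only the scoped token added to its environment,
    # so the worst it can reach is the scratchpad org (up to its spending limit).
    subprocess.run(
        ["claude", "--dangerously-skip-permissions"],
        env={**os.environ, "FLY_API_TOKEN": token},
    )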
The statement I responded to was, "creating an effective workflow is not trivial".
There are plenty of useful LLM workflows that are possible to create pretty trivially.
The example you gave is hardly the first thing a beginning LLM user would need. Yes, more sophisticated uses of an advanced tool require more experience. There's nothing different from any other tool here. You can find similar debates about programming languages.
Again, what I said in my original comment applies: people place unrealistic expectations on LLMs.
I suspect that this is at least partly a psychological game people unconsciously play to minimize the competence of LLMs and reduce the level of threat they feel. A sort of variation on terror management theory.
For one - I’d say scoped API tokens that prevent messing with resources across logical domains (e.g. prod vs nonprod, distinct GitHub repos, etc.) are best practice in general. Blowing up a resource with a broadly scoped token isn’t a failure mode unique to LLMs.
edit: I don’t have personal experience with spending limits, but I vaguely recall them being useful for folks in startups who want to set up AWS resources and swing for the fences without thinking too deeply about the infra. Again, this isn’t a failure mode unique to LLMs, although I can appreciate it not mapping perfectly to your scenario above.
edit #2: From what I can tell, the LLM-specific parts of your scenario above are providing examples and setting up API access somehow (e.g. maybe invoking a CLI?). The rest seems like good old software engineering to me.
I don’t really see how it’s different from how you’d set up someone really junior with a playground of sorts.
It’s not exactly a groundbreaking line of reasoning that leads one to the conclusion of “I shouldn’t let this non-deterministic system access production servers.”
Now, setting up an LLM so that it can iterate without a human in the loop is a learned skill, but not a huge one.
I don’t think anyone expects perfection. Programs crash, drives die, and computers can break at any time. But we expect our tools to be reliable, not something we have to fight with every day to get working.
I don’t have to debug Emacs every day to write code. My CI workflow just runs every time a PR is created. When I type ‘make tests’, I get a report back. None of those things are perfect, but they are reliable.
I'm not a native speaker, but to me that quote doesn't necessarily imply an inability of OP to get up the curve. Maybe they just mean that the curve can look flat at the start?
I remember the time they were cracking down because I had entered 90%+ of the tickets into the ticket system (the product manager didn't write tickets), and they told me that "every ticket has to explain why it is good for the end user".
I put in a ticket to speed up the 40-minute build and was asked "How does this benefit the end user?" I said, "The end user would have had the product six months ago if the build was faster."
Maybe, but I already had a reputation as the dark wizard back then. If anything, the other students in my group went along with this because they knew I could overcome any problem... regardless of the cost to my sanity.
> How did you handle the debugging the raspberry pi on real hardware?
Painfully through serial output. I didn't have access to a JTAG probe at the time (I'm not even sure the Raspberry Pi could be debugged that way) and documentation was exceedingly poor.
After that experience, I refuse to debug anything hardware-related without at the very least a GDB stub.
This is Broadcom we're talking about, where that's par for the course. Personally I'd choose a SoC from Allwinner or Rockchip or even MediaTek over them.
> I didn't have access to a JTAG probe at the time (I'm not even sure the Raspberry Pi could be debugged that way)
The BCM2835-based ones can - I don't know about the modern ones - but you have to change the configuration on a couple of GPIOs to make it show up. (Which makes it difficult to debug early startup, unfortunately.)
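For reference, the change amounts to switching those pins to an alternate function. Below is a rough, untested Python sketch of doing that from Linux on the Pi itself, assuming /dev/gpiomem is available and that the JTAG signals sit on ALT4 of GPIOs 22-27; newer firmware can do the equivalent at boot with `enable_jtag_gpio=1` in config.txt:

    # Rough, untested sketch: switch GPIOs 22-27 to their ALT4 function,
    # where the BCM2835 exposes its ARM JTAG signals.
    import mmap
    import os
    import struct

    ALT4 = 0b011               # function-select code for "alternate function 4"
    GPFSEL2 = 0x08             # register for GPIOs 20-29, 3 bits per pin
    JTAG_PINS = range(22, 28)

    fd = os.open("/dev/gpiomem", os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, 4096)

    val = struct.unpack_from("<I", regs, GPFSEL2)[0]
    for pin in JTAG_PINS:
        shift = (pin - 20) * 3
        val &= ~(0b111 << shift)   # clear the pin's function bits
        val |= ALT4 << shift       # select ALT4
    struct.pack_into("<I", regs, GPFSEL2, val)

    regs.close()
    os.close(fd)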
Hacking isn't coding isn't programming isn't software development isn't software engineering. But in the end many people use these terms mostly interchangeably and making a point of the differences between the definitions you personally use is rarely a productive use of time.
Depends on the specific changes of course, but generally speaking.