Hacker News

> Knowing qrious exists and how to integrate it into a page: expensive.

qrious literally has it integrated already:

https://github.com/davidshimjs/qrcodejs/blob/master/index.ht...

I see many issues. The main one is that none of this is relevant to the qemu discussion. It's a whole other level of project.

I kind of regret asking the poor guy to show his stuff. None of these tutorial projects come even close to what an AI contribution to qemu would look like. It's pointless.



Person in question here.

I didn't know qrious existed. Last time I checked for frontend-only QR code generators myself, pre-AI, I couldn't find anything useful. I don't do frontend work daily; I'm not on top of the garbagefest the JS environment is.

Probably half the win of applying AI to this project was that it a) discovered qrious for me, and b) made me a working example frontend, in less time than it would have taken me to find the library myself amid the sea of noise.

'ben_w is absolutely correct when he wrote:

> The goal wasn't "write me a QR library" it was "here's my pain point, solve it".

And:

> Running `npm install qrious`: trivial.

> Knowing qrious exists and how to integrate it into a page: expensive.

This is precisely what it was. I built this in between other stuff, paying half attention to it, to solve an immediate need my wife had. The only things I cared about here were that:

1. It worked and was trivial to use

2. It was 100% under my control, to guarantee that no tracking, telemetry, ads, crypto miners, or other usual web dangers are present, and to ensure they never will be.

3. It had no build step whatsoever, and minimal dependencies that could be vendored, because again, I don't do webshit for a living and don't have time to figure out this week's flavor of building "Hello world" in Node land.

(Incidentally, I'm using Claude Code to build something bigger using a web stack, which forced me to figure out the current state of tooling, and believe me, it's not much like what I saw 6 months ago, and nothing like what I saw a year ago.)

2 and 3 basically translate to "I don't want to ever think about it again". Zero ops is my principle :).
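For the curious, the "no build step, vendored dependency" setup amounts to roughly a page like this. A hedged sketch only: the file names are assumptions, and the `QRious` constructor options follow the library's documented API (render into a canvas, update on property change).

```html
<!-- index.html: a minimal sketch, no build step; assumes qrious.min.js
     has been vendored next to this file (the file name is an assumption). -->
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>QR</title></head>
<body>
  <input id="text" type="text" placeholder="Text to encode">
  <canvas id="qr"></canvas>
  <script src="qrious.min.js"></script>
  <script>
    // QRious renders into the given canvas element.
    var qr = new QRious({ element: document.getElementById('qr'), size: 256 });
    document.getElementById('text').addEventListener('input', function (e) {
      qr.value = e.target.value; // setting value re-renders the QR code
    });
  </script>
</body>
</html>
```

Everything is static files, so "deployment" is copying the directory somewhere; nothing to rebuild, ever.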

----

> I see many issues. The main one is that none of this is relevant to the qemu discussion. It's on another whole level of project.

It was relevant to the topic discussed in this subthread, specifically this statement:

> But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.

The implicit point of larger importance is: AI contributions may not show up fully polished in OSS repos, but making it possible to build throwaway tools that address pain points directly provides advantages that compound.

And mine are just concrete examples of projects that were AI-generated with a mindset of "solve this pain point" rather than "build a product", and making them took less time and effort than my participation in this discussion already has.


Cool, makes sense.

Since you're here, I have another question relevant to the thread: do you pay for AI tools or are you using them for free?


TL;DR: I pay, I always try to use SOTA models if I can.

I pay for them; until last week, this was almost entirely[0] pay-as-you-go use of API keys via TypingMind (for chat) and Aider (for coding). The QR code project I linked was made by Aider. Total cost was around $1 IIRC.

API options were, until recently, very cheap. Most of my use was around $2 to $5 per project, sometimes under $2. I mostly worked with GPT-4, then Sonnet 3.5, briefly with Deepseek-R1; by the time I got around to testing Claude Sonnet 3.7, Google released Gemini 2.5 Pro, which was substantially cheaper, so I stuck to the latter.

Last week I got myself the Max plan from Anthropic (first the 5x, then the 20x one) specifically for Claude Code, because using pay-as-you-go pricing with top models in the new "agentic" way got stupidly expensive; $100 or $200 per month may sound like a lot, but less so when the API route would have you burning that much in a day or two.

--

[0] - I have the $20/month "Plus" subscription to ChatGPT, which I keep because of gpt-4o image generation and o3 being excellent as my default model for random questions/problems, many of them not even coding-related. I could access o3 via API, but this gets stupidly expensive for casual use; subscription is a better deal now.


> TL;DR: I pay, I always try to use SOTA models if I can.

Interesting; I'm finding myself doing the opposite — I have API access to at least OpenAI, but all the SOTA stuff becomes free so fast that I don't expect to lose much by waiting.

My OpenAI API credit expired mostly unused.


The very first part of the quotation is "Knowing qrious exists".

So the fact they've already got the example is great if you do in fact already have that knowledge, and *completely useless* if you don't.

> I kind of regret asking the poor guy to show his stuff. None of these tutorial projects come even close to what an AI contribution to qemu would look like. It's pointless.

For better and worse, I suspect it's very much the kind of thing AI would contribute.

I also use it for things, and it's… well, I have seen worse code from real humans, but I don't think highly of those humans' coding skills. The AIs I've used so far are solidly at the quality level of "decent for a junior developer", not more, not less. Ridiculously broad knowledge (which is why that quality level is even useful), but that quality level.

Use it because it's cheap or free, when that skill level is sufficient. Unless there's a legal issue, which there is for qemu, in which case don't.



