Hacker News | wkirby's comments

Looking forward to trying this out. An early comment: I would love to be able to override tool descriptions and system prompts from a config. Especially when working with local models, context management is king and the tool descriptions can be a hidden source of uncontrollable context.


I'm not trying to start a fight, but fwiw I adore both typescript and rails.


I dream of building an apple watch case that adds a T9 bluetooth keyboard. Turn the standalone Apple Watch into the dumbphone I've always wanted.


This is exactly my dream too, especially after seeing the Apple Watch prototype dumbphone cases they used to conceal them in public[0]. It would be a glorious re-purposing of the Apple Watch to serve the most diehard fans of the killed iPhone Mini.

Sadly the Apple Watch doesn't do proper external text input. You can connect a bluetooth keyboard, but it works by sending all input via the VoiceOver accessibility feature, which is slow and fidgety.

[0]https://cdn.idropnews.com/wp-content/uploads/2021/06/1509254...


It’s trying to convey that Sam Altman is a dweeb.


Like my co-founder said, we're a small company based in the US, and hiring in foreign jurisdictions is both expensive and time consuming. We're currently set up to hire in the US and Canada, and while we'd be willing to expand our footprint for the right candidate, the easiest thing for us is to look for candidates in our current operating jurisdictions.


Apsis Labs | Staff Frontend Engineer | REMOTE (US, Canada) | $132,000 | https://www.apsis.io

Seeking a skilled frontend-focused full-stack engineer who thrives on building beautiful and functional user interfaces, but still feels comfortable on the back-end.

While we’re looking for developers with strong technical skills, we don’t typically hire for experience in a particular framework or technology. We’re mostly seeking generalists who enjoy working in new technical stacks and have exceptional communication skills; because we’re a small company, everyone here takes on a lot of roles, and strong relationships with our clients are essential to our success.

We offer a 20-hour work week, retirement and health benefits, a competitive salary, and an unlimited vacation and parental leave policy. You can read more about our work philosophy here: https://www.apsis.io/mission.

If you're interested, please reach out to us with any questions or with your resume at contact@apsis.io.


I agree with the sentiment elsewhere in this thread that this represents a "hideous theft machine", but I think even if we discard that, this is still bad.

It's very clear that generative AI has abandoned the idea of creativity; image production that just replicates the training data only serves to further flatten our idea of what the world should look like.


Right, the focus is on IP theft, and that’s part of it, but let’s set that aside.

How useful is an image generator that, when asked to generate an image of an archaeologist in a hat, gives you Harrison Ford every time?

Clearly that’s not what we want from tools like this, even just as tools.


Not an expert with this stuff, but couldn't you just put "Harrison Ford" in the negative prompt?


First, that only works for potential biases you already know about and can anticipate, or can spot in the output. If the result is an egregious ripoff of an artist you’ve never heard of, or is the likeness of a model or actor you’re not aware of, how would you know?

Second, it doesn’t create the kind of originality we want. It just limits the kind of unoriginality we are getting.


Absolutely love the sound design on this.


Me too. They’re all by my friend Max Kotelchuck. He made the sounds for /lander too.


> An engineer with AI tool can now outbuild a 100-person engineering team.

What an insane statement. If the tooling improves that much the team of 100 will also improve. A worker with a shovel only outperforms the other workers if they’re still digging with sticks.

That sets aside the assumption that a few years from now we’ll see any material improvement at all. More likely we’ll see more wasted hype on some new revolution.


Yeah I really don’t get why people keep hyping AI like this. It really doesn’t make things go that much faster. At best you’re able to generate prototypes more quickly + get better autocomplete. Nothing particularly revolutionary there.

Anyone claiming a generalized 100x, 10x, or even 2x productivity gain is either delusional or trying to sell you something. Possibly both.

The companies saying they are reducing the size of their workforce because of gains they’re getting from AI are probably just telling investors what they want to hear while cutting costs for the same reason they always have.


I felt this way until Claude Code. It works much, much better in large codebases than anything else I've tried. It implements smaller features, including ones with FE + API changes and tests for each, pretty well. I'm going to try cloning our main repo multiple times to get it working on multiple branches at once.


100%. I just tried it the other day. Game changer.


I will definitely check this out, thanks.


>Anyone claiming a generalized 100x, 10x, or even 2x productivity gain is either delusional or trying to sell you something

I don't understand how anyone who spends a couple hours or more per day coding new functionality couldn't at least double their productivity with LLMs, unless their organisation prohibits LLM usage. Even just limiting the LLM to writing unit tests would still save that much time.


The thing is, tab complete using LLMs is really great. But I still read it, then press tab, then press enter, then type a few chars, then wait.

Sometimes I get 1 good line from that. Sometimes I get 30. Usually I get 10 bad lines and have to type a bit more to coax out 8 good ones.

It just looks faster but typing was NEVER the bottleneck for coding.

Where it really flies, though, is building tooling around a well known API. FML if I ever have to write AWS CDK or AWS API calls without an LLM again. You're looking at ages of reading through really bad docs to get it going.

For that, which is a 1% task of convenience for most jobs, I can use most LLM output verbatim. But that's, like I said, less than 1% of the job, and only then when the core software is done.


Did you notice how much better things are today (e.g. Claude Sonnet 3.7) than they were 1 year ago? Don’t you expect things to improve further in the next year? Even R1, a public-weights model, can add huge value when left to code in a loop.


> how things were 1 year ago

Not a substantial productivity multiplier.

> how things are today

Somewhat better than before, but still not a substantial productivity multiplier.

> Don’t you expect things to improve further in the next year?

I expect they'll be marginally better than they are now, but still not a substantial productivity multiplier.

"A huge paradigm shift is just around the corner" is a very popular narrative & it almost never bears out.


Hm, I’m a CDK pro (4y of full-time experience). I’ve used all the LLMs except the latest Claude model. All were bad in my estimation and just got in the way. I don’t use them for CDK code anymore.


Yeah! That's exactly the thing. It's passable for novices and bad for experts. But I don't need expert-level CDK; I need an instance to start up. Hate it or love it, that's all I need.


The bottleneck is not putting code on the hard drive, or turning my thoughts into code — the productivity bottleneck is thinking, and frankly no LLM is thinking better than an average developer.


Author of the post here. We’re a 4-member team running 4 products with 4,500+ paying customers. No sales team, no infrastructure team - just freemium and serverless computing (Firebase) doing the heavy lifting.

I’ve been a developer and founder for 20+ years, and I know that 15 years ago, each product would’ve required at least a 30-member team to build, maintain, and sell. This isn’t hypothetical, it already happened to us. Most of this leverage comes from internet distribution (freemium) and cloud computing (serverless), not AI. (Though we do use AI to answer support questions—since my cofounder is the only one handling them.)

Now with AI, I argue that a solo engineer could outbuild a 100-member team in a couple of years. Given how much productivity has already increased, why is that an insane claim?


How do you determine either what features to build or the next product to develop?

I understand you are focused on freemium as a go-to-market strategy with no dedicated sales or marketing team members.

What tools are you displacing (e.g. Excel)?


> How do you determine what features to build or the next product to develop?

My co-founder handles support, so he decides what to build next based on customer discussions. If a problem is big enough for a specific segment, we turn it into a separate product.

> What tools are you displacing (e.g., Excel)?

Most SMBs prefer to stay with Google Forms/Sheets rather than switching to a full-fledged CRM like HubSpot. But embedding Google Forms in their website affects their branding. We beautify and enhance Forms into a CRM, so they don’t have to migrate. I wrote about it here: https://manidoraisamy.com/developer-forever/post/can-you-use...


They clearly don't mean one person with AI now vs. 100 people with AI now. They're comparing one person with AI now to 100 people without AI before.

Regardless, one person still costs 1/100 as much as 100 people. Let's say each of the 100 adopts AI and multiplies their productivity by 100. Does their company need its total engineering productivity multiplied by 100? They might settle for let's say 3X and save 97% of the costs by firing 97 people.

(I tend to ignore most of the hype and I'm dubious about that 100X figure, but I'm taking it at face value here for illustrative purposes.)
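Taking that 100X figure at face value as the comment does, the back-of-envelope arithmetic can be sketched like this (illustrative numbers only, assuming the company only wants a 3X output increase):

```python
# Back-of-envelope math for the (hyped, unverified) 100x productivity claim.
team_size = 100
ai_multiplier = 100      # claimed per-engineer productivity gain from AI
target_output = 3        # multiple of today's total output the company wants

# Engineers needed to hit the target if each one becomes 100x as productive
engineers_needed = target_output * team_size / ai_multiplier
engineers_cut = team_size - engineers_needed

print(f"keep {engineers_needed:.0f}, cut {engineers_cut:.0f} "
      f"({engineers_cut / team_size:.0%} of the team)")
# prints: keep 3, cut 97 (97% of the team)
```

With the hyped multiplier, 3 engineers cover a 3X target and the other 97 positions become surplus, which is the "settle for 3X and save 97% of the costs" scenario.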


OP here. While this logic holds, large companies don’t move fast.

In 2018, I wrote about scaling big while staying small using serverless computing (https://cloud.google.com/blog/products/gcp/scale-big-while-s...). But by 2020, instead of leaner teams, we saw more hiring and even bigger orgs—ironically, even at companies selling serverless services.

Why? Because incentives at large companies favor empire-building (prestige from managing big teams) over efficiency. I expect the same inertia with AI: solo devs will fully embrace AI, serverless, and freemium to race ahead, while big teams will adopt AI at a crawl.


Even if that’s what they mean (and I agree, that’s plausible, though not obvious), it’s still an asinine statement in the context of their broader thesis: advancements in generative AI are going to power the rise of the solopreneur. In absolute terms, an individual developer may be more productive in 3 years than they are today, but in relative terms they will still be underpowered compared to large teams building complex software. The claim only makes sense if we also assume today’s consumer expectations and quality bar stay fixed — and I don’t think LLMs are expected to crack time travel.

There will still be successful solopreneurs, just as there are today, but the idea that tooling-based productivity gains for individual developers are going to drive a power shift towards solo development and away from team-based companies is stupid.


That's a good point, and there's probably a sweet spot somewhere between "a few" and Dunbar's number, that represents the typically-most-efficient team size going forward.


> If the tooling improves that much the team of 100 will also improve.

I'm not sure this follows - I find that as team size grows, the share of time spent on synchronization asymptotically approaches 100%.


Great. So a team of 100 with LLMs is still faster than a team of 100 without even if they spend the same amount of time on synchronization? Or did I miss something here?


Well, if they have more slop to get through, they'll spend even more time going over it to find the useful bits. Imagine 97 of them believe the machine up front and you have to convince them that the code is shit. Shudders.


Sure, but maintenance is just part of the definition of productivity: on long time scales, you have to do maintenance too to be productive.


I'm not sure your example tracks. With an excavator, one person can outperform a 100 person digging crew. AI is the next excavator.


If 1 person can use an excavator/AI, then a 100 person crew can use 100 excavators/AIs. Capabilities have increased for everyone, not just for 1 man operations.


Have you seen how big excavators are physically? There are functional limits to certain tools. I can see how software could become a hot mess if AI tools aren’t calibrated or the human element makes a mess of combining disparate parts.


In this scenario, it's like the excavators are pocket-sized and cheap. As cheap as the shovel. So there is no reason for the entire digging crew not to have one each.


It's a metaphor.


In this analogy, how many ditches do you think need to be dug? Each company only has a limited number of ditches they need done. The unnecessary human diggers will be let go.


I didn't make the original analogy. I was just explaining part of the point.

I disagree with the number 100. It is not a reasonable number with respect to current AI capabilities.

If the cost of digging ditches goes down, companies will be apt to dig more ditches. I've seen many projects worth $100k-$500k but they weren't pursued because the cost in salaries was higher.

I don't think companies are generally well run enough to let go of unnecessary people efficiently. At least one place I've worked at had, by my estimates, $100m in salary for unnecessary people. It didn't matter; the business had revenues of $100b yearly, so it was a drop in the bucket.


In my experience, team efficiency does not improve linearly with head count. A 100-person team may be only 2 or 3 times more productive than a 10-person team, because collaboration efforts (process, bureaucracy, calls, meetings, mails, chats) increase exponentially with larger teams.

AI can help with coding, but not much with this collaboration process, at least not yet. AI can make a small team much more productive because their collaboration overhead stays the same; it cannot do the same for a 100-person team, because their collective collaboration overhead cannot be solved by AI. I guess the trend will move towards smaller teams that can effectively use AI.


Depends on if you think AI is more like a shovel or an excavator. The article to me implies a shovel: a tool used by one person to increase individual productivity. An excavator is not run by a single worker — it’s run by a team. If AI assisted coding is an excavator, a solo developer won’t outperform a 100 person dev team, because they won’t be able to operate the AI tooling efficiently or effectively.


I think they meant that the digging crew would also use excavators.


But it takes months for a team of 100 to do anything, let alone make the decision to do so. Right now there is a sort of time-arbitrage opportunity for single developers.


Yeah. As we saw with the invention of excavators, we ran out of things to build and didn't need construction crews anymore.


Yeah, but AI is trivially cheap, unlike excavators.

If a solopreneur can buy an excavator, the corporate developers will get them too.


You need to retool the corporation, the way the company is organized, and retrain the workforce for the new way of building. In addition, you have to pull that off while still keeping the lights on for what you already have. It will be much easier for someone starting from scratch.


I do think many corporations will fail for this reason. But some will successfully adapt, and others will start fresh and have no baggage.

I do think it will be easier for very small organizations to compete than it was previously - which is great - but let's not fool ourselves that big companies won't maintain some competitive advantages.


Yeah, but the 100 person digging crew is going to get 100 excavators.


Have you ever been on a construction site? How many excavators do you think a company needs?


This is fun, but there's an amusing oxymoron embedded in needing an app on my phone to verify that I'm... not using my phone.

