Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I am fiddling with tools like Cursor, Aider, Augment Code, Roo Code and LLMs like GPT, Sonnet, Grok, Deepseek to try to decide whether I can use AI for what I need, and if yes, identify some good workflows. I've read experiences of other people and tried my own ideas. I've burnt countless tokens, fast searches and US dollars.

Working with AI for writing code is painful. It can break the code in ways you've never imagined and introduce bugs you never thought are possible. Unit testing and integration testing doesn't help much, because AI can break those, too.

You can ask AI to run in loop, fixing compile errors, fixing tests, do builds, run the app and do API calls, to have the project building and tests passing. AI will be happy to do that, burning lots of dollars while at it.

And after AI "fixes" the problem it introduced, you will still have to read every goddam line of the code to make sure it does what is supposed to.

For greenfield projects, some people recommended crafting a very detailed plan with very detailed description and very detailed specs and feed that into the AI tool.

AI can help with that, it asks questions I would never ask for an MVP and suggests stuff I would never implement for an MVP. Hurray, we have a very, very detailed plan, ready to feed into Cursor & Friends.

Based on the very detailed plan, implementation takes few hours. Than, fixing compile errors and fixing failing tests takes a few more days. Then I manually test the app, see it has issues, look in the code to see where the issues can be. Make a list. Ask Cursor & Friends to fix issues one by one. They happily do it and they happily introduce compilation errors again and break tests again. So the fixing phase that last days begins again.

Rinse and repeat until hopefully we spend a few weeks together (AI and I) instead on me building the MVP myself in half time.

One tactic which seems a bit faster, is to just make a hierarchical tree of features, ask Cursor & Friends to implement a simple skeleton, then ask them to implement each feature, verifying myself the implementation after each step. For example, if I need to log in users, just ask to add logging in code, the ask to add an email sender service, then ask to add email verification code.

Structuring the project using Vertical Slice Architecture and opening each feature folder in Cursor & Friends seems to improve the situation as the AI will have just enough context to modify or add something but can't break other parts of the code.

I dislike that AI can introduce inconsistencies in code. I had some endpoint which used timestamps and AI used three different types for that DateTime, DateTimeOffset and long (UNIX time). It also introduced code to convert between the types and lots of bugs. The AI uses some folder structure for a part of the solution and other structure for other parts. It uses some naming conventions in some parts and other naming conventions in other parts. It uses multiple libraries for the same thing, like multiple JSON serializing libraries. It does things in a particular way in some parts of the application and in another way in other parts. It seems like tens of people are working in the same solution without anyone reading the code of the others.

While asking AI to modify something, it will be very happy to modify things that you didn't ask to.

I still need to figure out a good workflow, to reduce time and money spent, to reduce or eliminate inconsistency, to reduce bugs and compile errors.

As an upside using AI to help with planning seems to be good, if I want to write the code myself, because the plan can be very thorough and I usually lack time and patience to make a very detailed plan.




> AI to run in loop, fixing compile errors, fixing tests, do builds, run the app and do API calls...

Ah I really wanna trust AI won't "fix" the tests by commenting out the assert statements or changing the comparison inputs willy-nilly. I guess that's something terrible human engineers also do. I review changes to tests even more critically than the actual code.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: