It's interesting that this is a similar criticism to what was levelled at Ruby on Rails back in the day. I think generating a bunch of code - whether through AI or a "framework" - always has the effect of obscuring the mental model of what's going on. Though at least with Rails there's a consistent output for a given input that can eventually be grokked.


A framework provides axioms that your theory-building can rest on. That works until bugs or inconsistencies in the framework mean you can no longer trust those axioms.


