
Text puts the problem squarely into the sights of generative models.



Maybe one day, but I’ve asked the various models basic EE interview questions, and it’s painfully obvious that while they can produce a paragraph with the right words, they’re incapable of reasoning spatially or describing complex connections. My job is secure (for this year).


I have noticed a huge improvement in ChatGPT’s coding abilities now that it has an interpreter to check its work. I expect we will be able to bring a similar feedback loop to hardware once we can integrate things like simulation and equation solving. It would also help to have modules that are hard to configure incorrectly.


The performance of ChatGPT (4) changes frequently. Lately it has been crazy good. I fed it a non-trivial but naïvely written (and thus slow) numerical algorithm written in Python yesterday. I then told it to vectorize, which worked and was both correct and several orders of magnitude faster. Then parallelize. Faster yet, and still correct. Pretty amazing.

Just an anecdote.
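
The algorithm itself isn’t shown, so as a rough, hypothetical illustration of the kind of loop-to-NumPy rewrite being described (the RMS computation and names here are invented, not from the comment):

    import numpy as np

    # Naive pure-Python loop, the "slow" style described above.
    def naive_rms(xs):
        total = 0.0
        for x in xs:
            total += x * x
        return (total / len(xs)) ** 0.5

    # Vectorized equivalent: the whole computation as array operations.
    def vectorized_rms(xs):
        a = np.asarray(xs, dtype=np.float64)
        return float(np.sqrt(np.mean(a * a)))

    data = np.random.default_rng(0).normal(size=1_000_000)
    assert abs(naive_rms(data) - vectorized_rms(data)) < 1e-9

Parallelizing on top of that usually means chunking the arrays across processes or reaching for something like Numba, but the right approach depends on the algorithm in question.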


I’ve noticed the opposite with ChatGPT.

To be fair, I don’t know how much Elixir code is in the training set.


Yeah, I have only really used it for simple Python things; it’s pretty good at that.


Absolutely! With only a few dozen lines of context to pick up the syntax, Copilot is already able to make useful contributions like configuring regulators and filters.
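
The comment doesn’t show what such a contribution looks like; as a purely hypothetical Python sketch (the helper, resistor values, and E12 list are made up for illustration, not from the thread), this is the flavor of boilerplate involved in picking feedback-divider values for an adjustable regulator:

    # Hypothetical sketch: choosing feedback resistors for an adjustable
    # regulator, using the standard formula Vout = Vref * (1 + R2/R1).
    VREF = 1.25  # volts, typical reference for an LM317-style regulator

    E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

    def pick_divider(v_out, r1=240.0):
        """Return the (r1, r2, vout) combination closest to v_out."""
        best = None
        for decade in (100, 1_000, 10_000):
            for base in E12:
                r2 = base * decade
                v = VREF * (1 + r2 / r1)
                if best is None or abs(v - v_out) < abs(best[2] - v_out):
                    best = (r1, r2, v)
        return best

    print(pick_divider(3.3))  # e.g. (240.0, 390.0, ~3.28 V)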



