Maybe one day, but I’ve asked the various models basic EE interview questions, and it’s painfully obvious that while they can produce a paragraph with the right words, they’re incapable of reasoning spatially or describing complex connections. My job is secure (for this year).
I have noticed a huge improvement in ChatGPT's coding abilities now that it has an interpreter to check its work. I expect we will be able to bring a similar feedback loop to hardware once we can integrate things like simulation and equation solving. It would also help to have modules that are hard to configure incorrectly.
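A minimal sketch of what that feedback loop might look like. The three helpers (generate_design, run_simulation, meets_spec) are hypothetical placeholders standing in for an LLM call and a circuit simulator; only the loop structure is the point, and the actual integration is the hard part.

```python
def generate_design(spec, feedback):
    ...  # hypothetical: ask the model for a candidate design, including prior errors

def run_simulation(design):
    ...  # hypothetical: run the simulator and return measured results

def meets_spec(result, spec):
    ...  # hypothetical: compare simulated results against the spec

def design_loop(spec, max_iters=5):
    feedback = None
    for _ in range(max_iters):
        candidate = generate_design(spec, feedback)  # model proposes a design
        result = run_simulation(candidate)           # simulator checks it, like an interpreter checks code
        if meets_spec(result, spec):
            return candidate                         # converged on a passing design
        feedback = result                            # feed the failures back into the next attempt
    return None                                      # give up after max_iters
```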
The performance of ChatGPT (4) changes frequently, and lately it has been crazy good. Yesterday I fed it a non-trivial but naïvely written (and thus slow) numerical algorithm in Python. I then told it to vectorize, which worked: the result was both correct and several orders of magnitude faster. Then parallelize. Faster yet, and still correct. Pretty amazing.
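Not the commenter's actual algorithm, but a small illustration of the kind of rewrite involved, assuming a NumPy-style workflow: a naive double loop over pairwise distances versus a broadcast-based version that pushes the work into NumPy's C internals.

```python
import numpy as np

def pairwise_dist_naive(points):
    # Explicit Python loops: easy to write, painfully slow for large inputs.
    n = len(points)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sqrt(np.sum((points[i] - points[j]) ** 2))
    return out

def pairwise_dist_vectorized(points):
    # Broadcasting replaces the double loop entirely.
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

if __name__ == "__main__":
    pts = np.random.rand(500, 3)
    # Both versions agree; the vectorized one is orders of magnitude faster.
    assert np.allclose(pairwise_dist_naive(pts), pairwise_dist_vectorized(pts))
```

Parallelizing on top of this (e.g. chunking the rows across processes) is a further step, but correctness checks like the assert above are what make the iterate-and-verify loop work.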
Absolutely! With only a few dozen lines of context to pick up the syntax, Copilot is already able to make useful contributions, like configuring regulators and filters.