
Not answering for the medical industry, but for the similar realm of aerospace systems:

One big question is, does the proposed software tool assist a human engineer, or does it replace a human engineer?

If a tool replaces a human -- the phrase used often is "takes a human out of the loop" -- then that tool is subject to intense scrutiny.

For example, it would be useful to have a tool that evaluates the output of an avionics box, compares that output to expected results, and automatically prepares a test passed/failed log. Well, this amounts to replacing a human who would otherwise have been monitoring the avionics box and recording the test results. So the tool itself has to be verified to work correctly in its specific operating environment (including things like operating system version, computer hardware type, etc.).
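
To make that concrete, here's roughly the kind of comparison such a tool performs (just a sketch; the file names, column names, and tolerance are made up):

    # Minimal sketch: compare recorded avionics outputs against expected
    # values and write a pass/fail log. Column names and tolerance are
    # hypothetical.
    import csv

    TOLERANCE = 0.01  # assumed engineering-unit tolerance

    def check_outputs(actual_csv, expected_csv, log_path):
        with open(actual_csv) as fa, open(expected_csv) as fe, open(log_path, "w") as log:
            for actual_row, expected_row in zip(csv.DictReader(fa), csv.DictReader(fe)):
                name = expected_row["signal"]
                actual = float(actual_row["value"])
                expected = float(expected_row["value"])
                verdict = "PASS" if abs(actual - expected) <= TOLERANCE else "FAIL"
                log.write(f"{name}: {verdict} (actual={actual}, expected={expected})\n")

    check_outputs("avionics_output.csv", "expected_results.csv", "test_log.txt")

Simple enough, but because it replaces the human observer, that comparison logic itself has to be verified in the qualified environment before its log entries count as test evidence.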

So what about ChatGPT? One big hurdle is that, given the same input, ChatGPT will not necessarily provide the same output. There's really no way to verify its accuracy in a repeatable way. Thus I doubt that it would ever become a tool that replaces a human in aerospace engineering.
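
For what it's worth, the repeatability problem is easy to demonstrate (a sketch using the OpenAI API as a stand-in for ChatGPT; the model name and prompt are placeholders):

    # Sketch only: ask the same question twice and compare the answers.
    # Even at temperature=0, byte-identical outputs are not guaranteed.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment
    prompt = "Describe the pre-flight check for the pitot-static system."  # placeholder

    answers = [
        client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        ).choices[0].message.content
        for _ in range(2)
    ]

    print("identical" if answers[0] == answers[1] else "outputs differ")

Even with the temperature pinned, there's no deterministic input/output mapping to audit, which is the crux of the verification problem.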

How about using it, then, to assist an aerospace engineer? Depending on the kind of assistance, that isn't necessarily materially different from getting help from StackOverflow.



