
I put in a Python script I wrote to automate some things around AWS. It described the purpose of the script. Then I asked it to make some changes and it did. I asked it why I would use it, and it gave me a plausible explanation. I asked it to add comments and the comments were pretty good.

I even asked it how the script could be improved, and it made suggestions around adding error handling and turning some hard-coded names into command-line parameters.

I asked it to give me code to implement the suggestions and it gave me working code.
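To give a flavor, the changes were along these lines (a rough sketch from memory; the bucket name and the S3 call here are just stand-ins, not what my script actually does):

    import argparse
    import boto3
    from botocore.exceptions import ClientError

    def main():
        # previously hard-coded names, now command-line parameters
        parser = argparse.ArgumentParser()
        parser.add_argument("--bucket", required=True, help="S3 bucket to operate on")
        parser.add_argument("--region", default="us-east-1")
        args = parser.parse_args()

        s3 = boto3.client("s3", region_name=args.region)
        try:
            # the error handling it suggested wrapping around the AWS calls
            s3.head_bucket(Bucket=args.bucket)
        except ClientError as e:
            print(f"Cannot access bucket {args.bucket}: {e}")
            raise SystemExit(1)

    if __name__ == "__main__":
        main()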

It’s much better than you give it credit for.




Sure, depending on what you ask and how that aligns with the content it was trained on and the word statistics it has learned, it can give correct answers.

OTOH I've also asked it what day of the week a given date was and received two different wrong answers depending on the exact phrasing of the question. I've also seen it confidently "explain" why taking 90% of a number and adding 10% of that back will get you to the original number...
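For the record, a quick check shows why that explanation can't be right:

    x = 100
    y = 0.9 * x        # 90.0 after taking 90%
    y = y + 0.1 * y    # 99.0 -- adding 10% of that back does not restore 100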

The trouble is the output is a mix of truth and lies, and GPT has no way to distinguish between the two.


I’ll give you that.

I once asked it to write a Python script that lists all of the accounts in an AWS organization with a given tag key and value.

It confidently initialized the SDK (boto3) and the correct client on the SDK (Organizations), and then called a nonexistent function, “get_accounts_by_tag”.

The next day I asked it the same question and it got it right, using a technique I would never have thought of.
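For anyone curious, a working version looks roughly like this (not necessarily the exact technique it used; the tag key and value are placeholders):

    import boto3

    org = boto3.client("organizations")
    matches = []

    # list_accounts is paginated, so walk every page
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            # tags come from a separate call, keyed by the account id
            tags = org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
            if any(t["Key"] == "Team" and t["Value"] == "data" for t in tags):
                matches.append(account)

    for account in matches:
        print(account["Id"], account["Name"])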

On the other hand, I asked it “given the following XML file and a DynamoDB table with the following fields, write a Python script that replaces the value node in the file where a corresponding key is found in the table with the value in the value field”.

The code was perfect.
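The shape of the answer was roughly this (the table name and the XML element names below are placeholders, not the real ones):

    import boto3
    import xml.etree.ElementTree as ET

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("lookup-table")       # placeholder table name

    tree = ET.parse("input.xml")
    for item in tree.getroot().iter("item"):     # placeholder element name
        key = item.findtext("key")
        value_node = item.find("value")
        if key is None or value_node is None:
            continue
        # look the key up in DynamoDB; replace the value node if a row exists
        resp = table.get_item(Key={"key": key})
        if "Item" in resp:
            value_node.text = str(resp["Item"]["value"])

    tree.write("output.xml")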



