A: A CLI tool to generate code snippets from GPT3 written in Rust (github.com/ddddddeon)
53 points by ddddddeon on March 2, 2023 | 27 comments


It's interesting, this is where we're at:

1. English -> 2. programming language -> 3. machine language

We need step 2 right now because a human has to manually push text around to actually make it to step 3.

But programming languages are for humans only; computers don't need them. So why should step 2 even exist in the future? Especially if a computer becomes capable of verifying its output better than a human could.


Because the computer in this case is relying on a vast repository of human knowledge encoded in these languages to go from 1 to 3. Machine code was probably not in the training set.

Would be interesting if you could train a GPT-N model to be a compiler somehow.


True of GPT and other LLMs, but an AI trained on machine code could simply output machine code. The difficulty would be that it wouldn’t then understand natural language prompts.


Why not have it be human-readable as well? Good to be redundant. They can automate the compilation, and we can't always rely on the machine.


It seems like, in 15-20 years, we might look back at writing code the way we now look back at something like punch card programming.


If we eliminated step 2 and achieved equivalency with or even slightly improved upon the state of the art, we would still be at a significant disadvantage.

The problem is that the current best (publicly known) results are nowhere near good enough not to be verified by a human prior to deployment. So if we get to _that_ point having skipped step 2, the verification will have to be done on machine language.

And since there's no human-abstracted code intermediate between the spoken language and the assembly, the algorithmically-searched-out analogies of language to purpose are fairly likely to baffle everyone.


Because we don't have a system that understands language and generates code. We have a token predictor trained on a huge corpus of code mixed with explanatory language. And the existing huge corpus of code mixed with explanatory language consists, necessarily, almost entirely of code written in human-oriented languages.


For auditing the generated instructions, as auditing machine language itself is unrealistic.


Interesting idea. Verifying correctness seems like the most important part though, and it's entirely non-trivial. Test-driven development, maybe?
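
A rough illustration of what that could look like (the function and names here are hypothetical, not from the tool): the human writes the tests, the model supplies the body, and `cargo test` acts as the verifier.

  // Illustrative sketch only: a human-written test that any model-generated
  // implementation of `is_leap_year` would have to pass before being accepted.
  fn is_leap_year(year: u32) -> bool {
      // pretend this body came back from the model
      (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
  }

  #[cfg(test)]
  mod tests {
      use super::*;

      #[test]
      fn generated_code_passes_human_written_cases() {
          assert!(is_leap_year(2000));
          assert!(!is_leap_year(1900));
          assert!(is_leap_year(2024));
          assert!(!is_leap_year(2023));
      }
  }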


Step 2 is more token-efficient; I suspect we'll not get rid of it.


Super cool idea, but I really really hate it when I take about 15 minutes out of my day to install something and then have it not work at all.

  ~ a python script that makes a request
  thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: NotPresent', /Users/some_user/.cargo/registry/src/github.com-1ecc6299db9ec823/a-gpt-0.1.9/src/main.rs:15:51
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Edit: I'm on a Mac x86. Curious if others have the same experience.


Looks like you need an env var `OPENAI_API_KEY` for this to work. Strange that it is not documented at all.

Also they could have provided an actual error rather than letting it panic!
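
A minimal sketch of what friendlier handling might look like (assuming the key is read via std::env::var, which is what the `NotPresent` in the panic suggests):

  use std::env;
  use std::process;

  // Read the key up front and fail with a readable message instead of
  // unwrap()-ing the Err(NotPresent) returned by std::env::var.
  fn api_key() -> String {
      match env::var("OPENAI_API_KEY") {
          Ok(key) => key,
          Err(_) => {
              eprintln!("error: the OPENAI_API_KEY environment variable is not set");
              process::exit(1);
          }
      }
  }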


With a valid key I still get errors.

Reading the returned message, it says I exceeded my daily quota. I checked my account; my free API trial period ended a few months back.

Paying for ChatGPT does not mean you get API access. In short, you need to pay for API access separately after the short trial period to use this tool:

"No, the ChatGPT API and ChatGPT Plus subscription are billed separately. The API has its own pricing, which can be found at https://openai.com/pricing. The ChatGPT Plus subscription covers usage on chat.openai.com only and costs $20/month."


They could use `a` to have ChatGPT generate a function for that.


Same here - also on macOS. Thinking about opening an issue on the GitHub repo, but first I'll try to see if I can figure out from the source code what this is about.


What this shows is that "if it compiles, it just works" doesn't always hold.


Same here on x86 Ubuntu 22.04.


“Written in Rust” but it’s just a raw call to ChatGPT.


Someone knows how to get voted to the front page


The source is really in the title; that's the prompt given to ChatGPT to output the code /s


Someone needs to go back to remedial naming things class.


Reminiscent of A and AAAA records in DNS.


I love that the name makes this feel like typing a sentence to your terminal


This is the wet dream of "bropreneurs" who already write such descriptions on the various freelance portals...


Interesting that the top 2 comments (when reading this) are at opposite ends of the sentiment spectrum over the name. Personally I think it's one of those names that sucks to use in conversation but great to use on the command line. Of course aliasing or abbreviations can also get you there.


Unfortunately, this approach fails for basic algorithms:

try to compute X to the power of Y without a library ...
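
For reference, the kind of answer one might hope for, written by hand (exponentiation by squaring, no library calls; overflow handling omitted for brevity):

  // Computes base^exp for a non-negative integer exponent by squaring.
  fn power(mut base: i64, mut exp: u32) -> i64 {
      let mut result: i64 = 1;
      while exp > 0 {
          if exp & 1 == 1 {
              result *= base;
          }
          base *= base;
          exp >>= 1;
      }
      result
  }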


It’s using the text-davinci-003 model, not the ChatGPT model (gpt-3.5-turbo) released yesterday.
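
For anyone curious what switching would involve: the chat endpoint takes a messages array instead of a plain prompt string. A rough sketch, not taken from the repo, assuming reqwest (with the blocking and json features) and serde_json:

  use serde_json::json;

  // Call the chat completions endpoint (gpt-3.5-turbo) instead of the older
  // completions endpoint used by text-davinci-003.
  fn ask(prompt: &str, key: &str) -> reqwest::Result<String> {
      let body = json!({
          "model": "gpt-3.5-turbo",
          "messages": [{ "role": "user", "content": prompt }]
      });
      let resp: serde_json::Value = reqwest::blocking::Client::new()
          .post("https://api.openai.com/v1/chat/completions")
          .bearer_auth(key)
          .json(&body)
          .send()?
          .json()?;
      // The reply lives under choices[0].message.content rather than choices[0].text.
      Ok(resp["choices"][0]["message"]["content"]
          .as_str()
          .unwrap_or_default()
          .to_string())
  }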



