Hacker News

Quick, someone use AI to scan the codebase and explain the decision tree of Copilot Chat with regard to how it handles prompts and responses.


I very much need to know this also. First, tools [0] and prompts [1]. I'll get back to you in a minute while I trace back the calling path. One thing to note is that they use .tsx for rendering the prompts and tool responses.

1. User selects Ask or Edit, and AskAgentIntent.handleRequest or EditAgentIntent.handleRequest is called when the user presses Enter.

2. DefaultIntentRequestHandler.getResult() -> createInstance(AskAgentIntentInvocation) -> getResult -> intent.invoke -> runWithToolCalling(intentInvocation) -> createInstance(DefaultToolCallingLoop) -> loop.onDidReceiveResponse -> emit _onDidReceiveResponse -> loop.run(this.stream, pauseCtrl) -> runOne() -> getAvailableTools -> createPromptContext -> buildPrompt2 -> buildPrompt -> [somewhere in here the correct tool gets called] -> responseProcessor.processResponse -> doProcessResponse -> applyDelta -> ...

[0] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...

[1] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...

[2] src/extension/intents/node/toolCallingLoop.ts
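The shape of that trace is a fairly standard tool-calling loop: the model either answers or requests a tool, tool results get fed back, and the loop runs again. Here's a minimal sketch of that pattern; every name below (runToolCallingLoop, Model, Tools, etc.) is an illustrative stand-in, not Copilot's actual API.

```typescript
// Minimal sketch of a tool-calling loop: the model either answers or
// requests a tool call; tool results are fed back until it answers.
// All names here are illustrative stand-ins, not Copilot internals.

interface ToolCall { name: string; args: string }
interface ModelResponse { text?: string; toolCall?: ToolCall }

type Model = (messages: string[]) => ModelResponse;
type Tools = Record<string, (args: string) => string>;

function runToolCallingLoop(model: Model, tools: Tools, prompt: string, maxTurns = 5): string {
  const messages = [prompt];
  for (let turn = 0; turn < maxTurns; turn++) {
    const response = model(messages);            // roughly: buildPrompt + send request
    if (response.toolCall) {
      const { name, args } = response.toolCall;
      const result = tools[name](args);          // dispatch to the selected tool
      messages.push(`tool(${name}): ${result}`); // feed the tool result back in
    } else {
      return response.text ?? "";                // final answer: exit the loop
    }
  }
  return "max turns reached";
}

// Toy model: asks for the 'read_file' tool once, then answers.
const model: Model = (msgs) =>
  msgs.some((m) => m.startsWith("tool("))
    ? { text: "done" }
    : { toolCall: { name: "read_file", args: "a.ts" } };

const answer = runToolCallingLoop(model, { read_file: (p) => `contents of ${p}` }, "explain a.ts");
console.log(answer); // "done"
```

The `[somewhere in here the correct tool gets called]` step in the trace corresponds to the dispatch on the tool name inside the loop.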


Something I’ve wanted to hack together for a while is a custom react-renderer and react-reconciler for prompt templating so that you can write prompts with JSX.

I haven’t really thought about it beyond “JSX is a templating language and templating helps with prompt building and declarative is better than spaghetti code like LangChain.” But there’s probably some kernel of coolness there.
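You don't necessarily need react-reconciler for the basic version of this. With TypeScript's classic JSX transform (`jsxFactory` in tsconfig), `<system>Be terse.</system>` compiles to `h("system", null, "Be terse.")`, so a tiny factory plus a renderer gets you JSX-as-prompt-template. Everything below is a hypothetical toy, not an existing library:

```typescript
// A toy jsx factory that renders "prompt elements" to plain text.
// With jsxFactory: "h" configured, <system>Be terse.</system> compiles
// to h("system", null, "Be terse."). All names are illustrative.

type Child = string | PromptNode;
interface PromptNode { tag: string; children: Child[] }

function h(tag: string, _props: unknown, ...children: Child[]): PromptNode {
  return { tag, children };
}

function render(node: Child): string {
  if (typeof node === "string") return node;
  const body = node.children.map(render).join("");
  return `<|${node.tag}|>${body}<|end|>\n`;
}

// Equivalent of: <prompt><system>Be terse.</system><user>Hi</user></prompt>
const tree = h("prompt", null, h("system", null, "Be terse."), h("user", null, "Hi"));
console.log(render(tree));
```

The reconciler only starts earning its keep once you want stateful components, re-rendering, or async children; for plain templating the factory alone is enough.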


VSCode uses @vscode/prompt-tsx [0]

They also provide documentation for all this. [1]

VSCode also provides examples. [2]

[0] https://github.com/microsoft/vscode-prompt-tsx

[1] https://code.visualstudio.com/api/extension-guides/chat

[2] https://github.com/microsoft/vscode-extension-samples/blob/m...
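As I understand it, the key idea in prompt-tsx is that elements carry priorities, and when the rendered prompt exceeds the token budget the renderer prunes the lowest-priority pieces first. A rough, library-free sketch of that idea (whitespace-separated word count stands in for real tokenization, and all names are made up):

```typescript
// Sketch of priority-based pruning: drop the lowest-priority pieces
// until the prompt fits a budget. Word count stands in for tokens.

interface Piece { priority: number; text: string }

const countTokens = (s: string) => s.split(/\s+/).filter(Boolean).length;

function fitToBudget(pieces: Piece[], budget: number): string {
  const kept = [...pieces];
  const total = () => kept.reduce((n, p) => n + countTokens(p.text), 0);
  while (total() > budget && kept.length > 0) {
    // Find and drop the least important remaining piece.
    const lowest = kept.reduce((a, b) => (a.priority < b.priority ? a : b));
    kept.splice(kept.indexOf(lowest), 1);
  }
  return kept.map((p) => p.text).join("\n");
}

const prompt = fitToBudget(
  [
    { priority: 100, text: "You are a coding assistant." },           // always keep
    { priority: 10, text: "Older chat turn about something else entirely." },
    { priority: 90, text: "User: explain this function." },
  ],
  12 // token budget: too small to keep all three pieces
);
console.log(prompt); // the priority-10 piece gets pruned
```

The real library does considerably more (flexible growth, async elements, real tokenizers), but this is the budget-fitting core.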


Priompt might be what you're looking for: https://github.com/anysphere/priompt.


>> Quick

> in a minute

Honestly. Why the hurry?


Care to also check if they do prompt decomposition into multiple prompts?


You're asking if they break the user prompt into multiple chunks?

All I can find is logic that counts tokens and trims so the current turn's conversation fits. I cannot find any chunking logic that makes multiple requests. This logic lives in the classes that extend IIntentInvocation, which has a buildPrompt() method.
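That trim-to-fit behavior (as opposed to chunking into multiple requests) can be sketched like this; `approxTokens` and `trimToFit` are stand-in names for illustration, not the actual code:

```typescript
// Sketch of fit-by-trimming, not chunking: drop the oldest turns until
// the current conversation fits the budget. The newest turn always survives.

interface Turn { role: "user" | "assistant"; text: string }

const approxTokens = (s: string) => Math.ceil(s.length / 4); // crude stand-in tokenizer

function trimToFit(turns: Turn[], budget: number): Turn[] {
  const trimmed = [...turns];
  const total = () => trimmed.reduce((n, t) => n + approxTokens(t.text), 0);
  // Trim from the front (oldest) but always keep the latest turn.
  while (trimmed.length > 1 && total() > budget) trimmed.shift();
  return trimmed;
}

const history: Turn[] = [
  { role: "user", text: "x".repeat(400) },      // ~100 tokens, oldest
  { role: "assistant", text: "y".repeat(200) }, // ~50 tokens
  { role: "user", text: "current question" },   // ~4 tokens, newest
];
console.log(trimToFit(history, 60).length); // 2: the oldest turn was dropped
```

Chunking would instead split one oversized prompt across several requests and merge the responses; nothing like that second pattern shows up here.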


I believe it's this paper, but... not certain: https://arxiv.org/abs/2210.02406

Will update when I find more info.



