This looks quite interesting. So I have a question for these AI-powered editors: what advantage would a dedicated editor like this have over just using an AI plugin for VS Code? How do you fundamentally build the editor differently if you are thinking about AI from the ground up?
Our editor isn't a regular coding editor. You don't actually write code with e2b. You write technical specs and then collaborate with an AI agent. Imagine it more like having a virtual developer at your disposal 24/7.
It's built on completely new paradigms enabled by LLMs. This unlocks a lot of new use cases and features, but at the same time there's a lot that still needs to be built.
Writing technical specs is a fancy way to say coding. This reads to me like you're writing a new programming language, tightly integrated with an IDE, that targets a new AI-based compiler.
Yes, as always the essential complexity of software is understanding and describing the problem you are trying to solve. Once that is done well, the code falls out fairly easily.
That’s like saying a painting, once the painter understands what they are trying to paint, just falls out of them. It is true but not useful for non-painters.
I think the difference here is that code is effectively a description, so there is an extremely tight coupling between describing the task and the task itself.
You could tell me, in the most painstaking detail, what you want me to paint, and I still couldn't paint it. You can take any random person on the street and tell them exactly what to type and they'd be able to "program".
That's just picking nits with the metaphor. Change it to a poet or a novelist and it works the same. If you tell a person exactly what to write they are just a fancy typewriter, not a poet or novelist. Same with code.
Hmm... Where is the new language in this? The specs are just human language and some JSON for defining structures. It's more that human language is becoming a programming language with the help of AI.
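To make that concrete, here's a hypothetical sketch of what "human language plus some JSON for defining structures" might look like as a spec. Everything here (the field names, the shape of the spec) is invented for illustration; it's not e2b's actual format.

```python
# A spec is mostly prose describing behavior, plus a JSON-like
# definition of the data structures involved. The AI agent, not
# the human, would turn this into running code.
spec = {
    "description": (
        "When a user signs up, store their profile and send a "
        "welcome email. Reject duplicate email addresses."
    ),
    "structures": {
        "UserProfile": {
            "email": "string",
            "name": "string",
            "created_at": "timestamp",
        }
    },
}

# The "programming" here is editing the description and the
# structure definitions, not writing the code that implements them.
print(spec["structures"]["UserProfile"])
```

The point of the analogy: the JSON part is already a formal language, and the prose part is where the "human language as programming language" claim lives.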
And over time, people will discover some basic phrases and keywords they can use to get certain results from the AI, and find out what sentence structures work best for getting a desired outcome. These will become standard practices and get passed along to other prompt engineers. And eventually they will give the standard a name, so they don’t have to explain it every time. Maybe even version schemes, so humans can collaborate amongst themselves more effectively. And then some entrepreneurs will sell you courses and boot camps on how to speak to AI effectively. And jobs will start asking for applicants to have knowledge of this skill set, with years of experience.
Until one day a new LLM gets released, GPT5, that doesn't recognize any of those special words. Mastering prompt-speak is essentially mastering the undefined behaviors of C compilers.
gpt4 won't know anything about gpt5; you would have to make a sophisticated prompt for gpt4 that converts its quirks into gpt5's quirks. But if you know so much about both LLMs, why not use gpt5 directly?
The idea is someone would first make a prompt for GPT4 that outputs GPT5-compatible prompts. You would initialize GPT4 with it, then speak to GPT4 to compile prompts into GPT5 context, which then gets fed to GPT5.
Although you may know about LLMs, you might specialize in speaking to specific models and know how to get optimal results based on their nuances.
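The two-step "prompt compiler" being described can be sketched as a tiny pipeline. This is purely illustrative: `complete` is a stub standing in for a real LLM API call, and the preamble text is invented; nothing here reflects an actual GPT4 or GPT5 interface.

```python
# Instruction given to the old model: rewrite a prompt into the
# dialect that gets good results from the newer model.
COMPILER_PREAMBLE = (
    "Rewrite the following prompt into phrasing that works well "
    "for the newer model, preserving the original intent:\n\n"
)

def complete(model: str, prompt: str) -> str:
    # Stub: a real implementation would call the model's API here.
    # We just tag the text so the data flow is visible.
    return f"[{model} output for: {prompt!r}]"

def compile_prompt(user_prompt: str) -> str:
    # Step 1: the old model translates its quirks away.
    return complete("gpt4", COMPILER_PREAMBLE + user_prompt)

def run(user_prompt: str) -> str:
    # Step 2: feed the compiled prompt to the new model.
    return complete("gpt5", compile_prompt(user_prompt))

print(run("summarize this spec"))
```

Which of course just restates the objection upthread: whoever writes `COMPILER_PREAMBLE` already has to understand both models' quirks, so they could prompt the new model directly.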
>And over time, people will discover some basic phrases and keywords they can use to get certain results from the AI, and find out what sentence structures work best for getting a desired outcome.
This just sounds like a language that is hard to learn, undocumented, and hard to debug.
I'm not sure I follow this answer. What are the entirely new paradigms? Writing is still the initial step. If text editing remains a core part of the workflow, why restrict the user's ability to edit code?
> You don't actually write code with e2b. You write technical specs and then collaborate with an AI agent.
If I want to change 1 character of a generated source file, can I just go do that or will I have to figure out how to prompt the change in natural language?
> How do you fundamentally build the editor differently if you are thinking about AI from the ground up?
Great question. I would love to hear the devs' thoughts here. This is one of those questions where my intuition tells me there may be a really great "first principles" type of answer, but I don't know it myself.
If you could use it without submitting data to some ai company, or if it came with a non-disgusting terms of service, that would be a killer feature for me.
For example, the last AI company installer I clicked "decline" on (a few minutes ago) says that you give it permission to download malware, including viruses and trojans, onto your computer, and that you agree to pay the company if you infect other people and tarnish its reputation because of it. Literally. It was a very popular service too. I didn't even get to the IP section.
edit: those terms aren't on their website, so I can't link to them. They are hidden in that tiny, impossible to read box during setup for the desktop installer
I built this https://github.com/campbel/aieditor to test the idea of programming directly with the AI in control. Long story short, VS Code plugin is better IMO.
In essence, when working with code stops being the major thing you do (you abstract that away) and you start managing the agents working on the code and writing the spec, you need new tools and a new working environment to support that.
> and start managing the agents working on code ... you need new tools
Jira?
Only slightly joking. It really sounds like we're moving in the direction of engineers being a more precise technical version of a PM, but then engineers could just learn to speak business and we don't need PMs.