I have used Artifacts a couple of times and found them useful.
But now I am even more confused. They make an LLM that can generate code. They make a sandbox to run generated code. They will even host public(!) apps that run generated code.
But what they will not do is run code in the chatbot? Unless the chatbot context decides the code is worthy of going into an Artifact? This is kind of what I mean by the offering being jumbled.
BTW saw your writeup on the LLM pricing calculator -- very cool!
Yeah, I can't imagine Claude will be without a server-side code execution platform forever. Both OpenAI (Code Interpreter) and Gemini (https://ai.google.dev/gemini-api/docs/code-execution) have had that for a while now, and it's spectacularly useful. It fills a major gap in a chatbot's skill set too, since it lets the model reliably run calculations.
Sandboxing is a hard problem, but it's not like Anthropic are short on money or engineering talent these days.
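For reference, the Gemini version is roughly this simple to use from the docs linked above (a minimal sketch; the model name, API key, and prompt are just placeholders):

```python
# Rough sketch of Gemini's code execution tool (per the docs linked above).
# Model name and API key are placeholders, not recommendations.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What is the sum of the first 50 prime numbers? "
             "Generate and run Python code to check.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# The response parts include the generated code and its execution output
# alongside the model's text answer.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    if part.executable_code:
        print(part.executable_code.code)
    if part.code_execution_result:
        print(part.code_execution_result.output)
```

The nice part is that the model's arithmetic comes back from an actual interpreter run rather than from token prediction, which is exactly the reliability gap I mean.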