I spent a couple hours playing with Rosebud this morning and I have a bit of feedback.

- I thought this would be a useful tool for prototyping, but as a programmer I would probably never use this to release a finished product. Maybe it would be better to target this kind of tool at prototyping rather than at making a full game that people will want to pay money for.

- I tried making a simple turn-based (American) football game. It's pretty good for getting some boilerplate set up, but sometimes it felt like it would have been easier to modify the code myself than to explain to the AI what I wanted changed.

- It seems like whenever I did try to modify the code by hand, my changes would get wiped out by the next iteration generated by the AI. Maybe it was referencing the old code?

- I found getting assets into the game a bit tricky. Sometimes it would try updating the code to reference assets that we hadn't created yet. After that I couldn't get it to generate assets for me.

- Sometimes the AI would get stuck on something (e.g. generating an asset), or it would make a mistake. In that case I couldn't figure out a way to "go back" to a previous iteration of the game.

- At one point I tried to get the AI to make a change, which it did, but the code it generated had a large portion of the original code base removed and replaced with something like `// The rest of your code here.`, which obviously broke the game.

Thanks for summarizing these and spending a few hours trying out Rosebud!! Some detailed responses below:

"I found getting assets into the game a bit tricky. Sometimes it would try updating the code to reference assets that we hadn't created yet. After that I couldn't get it to generate assets for me."

This is a known issue we are trying to solve. But to give more color on why it's been tricky: since we rely on a chat interface, our agent has to determine whether your ask includes a request for asset generation; if so, it has to check whether the asset already exists; if it doesn't, it has to decide whether to generate one; and after generating, it needs to determine where to insert the reference and whether to modify other parts of the code when inserting it. For instance, in the character templates the AI can successfully modify an asset and a character's description in one prompt, but sometimes it misinterprets the instruction and pulls existing assets from the Phaser JS library instead. In short, it's quite finicky to get agents in prod to behave reliably 100% of the time. But we are working on it! We have many ideas for improvements and are experimenting.
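
To make that pipeline a bit more concrete, here's a rough sketch of the kind of decision flow involved. All of the names and types below are illustrative, not our actual code:

```typescript
// Hypothetical sketch only -- names, types, and the classify step are
// illustrative, not Rosebud's actual implementation.
type AssetRequest = { name: string; description: string };

interface AssetStore {
  has(name: string): boolean;
  generate(req: AssetRequest): Promise<string>; // returns a URL for the new asset
}

async function handleUserPrompt(
  prompt: string,
  assets: AssetStore,
  classify: (p: string) => AssetRequest | null, // LLM call: does the ask include asset generation?
): Promise<string | null> {
  const req = classify(prompt);          // step 1: is an asset being asked for at all?
  if (req === null) return null;         // no asset work; go straight to code editing
  if (assets.has(req.name)) return null; // step 2: asset already exists, just reference it
  return assets.generate(req);           // step 3: generate; the code-editing step then
                                         // decides where to insert the reference
}
```

A miss at any one of those steps (classification, lookup, insertion) produces exactly the kind of behavior you ran into.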

"I thought this would be a useful tool for prototyping, but as a programmer I would probably never use this to release a finished product. Maybe it would be better to target this kind of tool for prototyping instead of being able to make a full game that people will want to pay money for." & "I tried making a simple turn-based (american) football game. It's pretty good for getting some boilerplate set up, but sometimes it felt like it would have been easier to modify the code myself at times than try to explain what I wanted modified to the AI."

For users who already know how to develop, we've definitely heard similar feedback that this is currently better for prototyping than for final games. I think this stems from two things: 1) we don't have multi-file support or more advanced features like multiplayer yet, which makes it hard to compare with what an experienced game dev can achieve on their own; and 2) the chat-based approach is not always reliable, so it can sometimes be more frustrating to get what you want through chat than to code it up yourself. This is why we are also targeting less technical users, who don't have many options available to them that are as flexible as Rosebud. We have been delighted to see what these users create; some of it gets very creative and addictive, even with our current limitations.

"It seems like whenever I did try to modify the code by hand, my changes would get wiped out by the next iteration generated by the AI. Maybe it was referencing the old code?"

This is a bug. If you don't mind sharing specific cases in the feedback channel when it does this, that would be super helpful. We've been fixing aspects of this bug over the last few weeks.

"Sometimes the AI would get stuck on something (e.g. generating an asset), or it would make a mistake. In that case I couldn't figure out a way to "go back" to a previous iteration of the game."

We have a history feature we are shipping in stages. Right now you can only regenerate, but we want to let users go back to earlier states of their game. Many users have asked for this; we just have to make some choices about what gets saved in the history (all LLM changes, manual edits as well, etc.).
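
For the curious, the open design question looks roughly like this. This is a hypothetical sketch of a snapshot history, not our actual data model:

```typescript
// One possible shape for the history feature -- purely illustrative,
// not Rosebud's actual design.
type ChangeSource = "llm" | "manual";

interface Snapshot {
  id: number;
  source: ChangeSource; // the open question: save manual edits too, or only LLM changes?
  code: string;         // full game code at this point in time
  createdAt: Date;
}

class GameHistory {
  private snapshots: Snapshot[] = [];
  private nextId = 0;

  // Record a snapshot after every accepted change.
  record(code: string, source: ChangeSource): void {
    this.snapshots.push({ id: this.nextId++, source, code, createdAt: new Date() });
  }

  // "Go back": restore any earlier state, not just regenerate the latest one.
  restore(id: number): string | undefined {
    return this.snapshots.find((s) => s.id === id)?.code;
  }
}
```

Recording only LLM changes keeps the history small, but then a restore would silently discard manual edits, which is essentially the choice mentioned above.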

"At one point I tried to get the AI to make a change, which it did, but the code it generated had a large portion of the original code based removed and replaced with something like `// The rest of your code here.` which obviously broke the game."

This stems from a context-length issue and is definitely annoying. We also have plans to implement diffs and other solutions that should make users run into this issue much less often.
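
To illustrate what we mean by diffs (again, a hypothetical format, not our shipped design): instead of the model re-emitting the whole file, it returns targeted search/replace edits that get applied locally.

```typescript
// Sketch of a diff-based approach -- hypothetical format, not Rosebud's
// actual plan. Long files re-emitted in full get truncated into
// placeholders like "// The rest of your code here."; targeted edits
// avoid sending the whole file back through the model.
interface Edit {
  search: string;  // exact snippet of existing code to change
  replace: string; // what it should become
}

function applyEdits(code: string, edits: Edit[]): string {
  for (const edit of edits) {
    if (!code.includes(edit.search)) {
      // Fail loudly rather than silently dropping code.
      throw new Error(`Edit target not found:\n${edit.search}`);
    }
    code = code.replace(edit.search, edit.replace);
  }
  // Everything the model didn't explicitly touch is preserved verbatim.
  return code;
}
```

Since untouched code never round-trips through the model, there's nothing for it to truncate into a placeholder comment.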
