Hacker News | eddieweng's comments

Yes, it's depth


yes :)


I think you executed really well. Congrats on the launch!!


Yes, certainly. The image generation uses a variant of the Stable Diffusion text-to-image model. The frontend collects requests and sends them to a two-layer backend: the first layer determines the appropriate prompt to send to the worker, and the second layer calls the image generation API. Once the image is generated, it is sent back to the first layer and then returned to the client.
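
A minimal sketch of that two-layer handoff, assuming the worker wraps the diffusers StableDiffusionPipeline (the real service may call a different API; the model name, prompt template, and function names are illustrative only):

    import torch
    from diffusers import StableDiffusionPipeline

    # Worker-side model; loaded once and reused across requests.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def handle_request(room_type: str, style: str):
        # Layer 1: turn the client's request into a concrete prompt.
        prompt = f"a {style} {room_type}, interior design, photorealistic"
        image = run_worker(prompt)   # hand off to layer 2
        return image                 # back through layer 1 to the client

    def run_worker(prompt: str):
        # Layer 2: the only layer that touches the image generation API.
        return pipe(prompt).images[0]

    handle_request("living room", "mid-century modern").save("out.png")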


Glad you like it! I understand your point; I'll work on object segmentation and let users decide what to change. Thanks for the feedback!


Please do! Adding in a Segment Anything Model UI to the app would be great. You can then click on the items you want removed and use the segments as inpainting masks for the generative model.
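
Roughly, the click-to-mask flow could look like this (the checkpoint path, clicked pixel, and inpainting prompt are placeholders; a real UI would supply the click coordinates):

    import numpy as np
    import torch
    from PIL import Image
    from segment_anything import SamPredictor, sam_model_registry
    from diffusers import StableDiffusionInpaintPipeline

    room = Image.open("room.jpg").convert("RGB")

    # 1. Segment the clicked object (e.g. a sofa the user wants removed).
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(np.array(room))
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[420, 310]]),   # pixel the user clicked
        point_labels=np.array([1]),            # 1 = foreground point
    )
    best = masks[np.argmax(scores)]
    mask = Image.fromarray((best * 255).astype(np.uint8))

    # 2. Use the segment as an inpainting mask for the generative model.
    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    result = inpaint(prompt="empty room, bare wooden floor",
                     image=room, mask_image=mask).images[0]
    result.save("sofa_removed.png")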


Leveraging multiple models here (Segment Anything is a good candidate) seems like a must to build a defensible product. Using just SD with a basic ControlNet and maybe some additional finetuning makes it very cloneable. If you leverage other models you can do things like identify which elements of the room must be preserved (walls, windows) and which can be safely removed or edited. This could allow you to generate a simulated image of the room without furniture as a base. Derive a 3D projection to allow the user to place basic 3D furniture stubs within the space. Then use those stubs to create subject masks over the furniture-less base to render a final room.
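
For the preserve-vs-edit split, a rough sketch, assuming an upstream segmentation model has already produced per-segment labels and binary masks (the label names and inputs here are hypothetical):

    import numpy as np

    # Structural elements that should never be inpainted over.
    PRESERVE = {"wall", "window", "door", "floor", "ceiling"}

    def removable_mask(segments, shape):
        """Union of every segment that is safe to remove or restyle."""
        mask = np.zeros(shape, dtype=bool)
        for label, seg in segments:
            if label not in PRESERVE:
                mask |= seg
        return mask

    # Example with two fake 4x4 segments: one structural, one removable.
    wall = np.zeros((4, 4), dtype=bool)
    wall[:, :2] = True
    sofa = np.zeros((4, 4), dtype=bool)
    sofa[2:, 2:] = True
    mask = removable_mask([("wall", wall), ("sofa", sofa)], (4, 4))
    # `mask` can then drive the inpainting step that renders the
    # furniture-free base image.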


Hi, thanks for the feedback! An NSFW image filter will be added to the pipeline, and the upcoming version will allow you to modify results using prompts.


Hi, thanks for the feedback. The upcoming version will allow you to modify results using prompts.


But part of the value of a designer is that they'd know what prompts to even give. They'd understand your needs and price point, and walk you through a space of directions.

I wonder if this is better suited as a tool for designers themselves to quickly create drafts to demonstrate their direction. But relying only on Stable Diffusion makes me worry that the kinds of products they'd want to leverage will be limited.


Hi, thank you for your feedback. Currently, the product is intended for inspiration purposes only.


lol


Thank you for the feedback! I'll implement this feature for sure.


Please do. It's really tedious to use iteratively at present.


Thank you :D

