
Curious: how many container and machine images come with uv by default these days?

Right now it looks like Oasis is only trained on Minecraft. Imagine if it were trained on thousands of hours of other games as well, across different genres and styles.

In theory, a game designer could then just "prompt" a new game concept they want to experiment with, and Oasis could dream it into a playable game.

For example, "an isometric top-down shooter, with Maniac mechanics, and Valheim graphics and worldcrafting, set in an ancient Nordic country"

And then the game studio would start building the actual game based on some final iteration of the concept prompt, a workflow similar to how concept art is already "seeded" through Midjourney/SD/flux today.


Thanks! That’s such an ambitious endgame that it didn’t occur to me.


This is amazing! I've been meaning to do something similar for all the Show HN threads, granted it's a much bigger set, but I haven't had the chance to.


The value proposition of this, especially at the price, is very weird. How is this better than putting together a python-cookiecutter template with pocketbase, htmx, and stripe?


I would say that making a python-cookiecutter template would just be a different delivery method for the same underlying value.

The underlying value of DeploySolo is that it's a complete SaaS template integrating a combination of tech that I haven't seen offered in a complete package before.

It comes out of the box integrated with:

1. Auth cookie storage with vanilla JS (avoiding front-end frameworks)

2. Stripe webhooks set up so you only have to generate product IDs and secrets and place them in the code (a rough sketch follows this list).

3. UI elements/pages from tailwind, serving as a minimal foundation for your own tailwind styles.

4. An extremely sane and pleasant templating system using Go's html/template. You can effectively reuse HTML fragments as components while still outputting simple pages. If you need dynamic interactivity, htmx fits into this beautifully (also sketched after this list).
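
For the curious, the webhook wiring looks roughly like this. This is a minimal sketch using the official stripe-go library, not DeploySolo's actual code; the route, the event handled, and the environment variable name are all assumptions for illustration:

    package main

    import (
        "encoding/json"
        "io"
        "log"
        "net/http"
        "os"

        "github.com/stripe/stripe-go/v76"
        "github.com/stripe/stripe-go/v76/webhook"
    )

    func handleStripeWebhook(w http.ResponseWriter, r *http.Request) {
        payload, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "read error", http.StatusBadRequest)
            return
        }
        // Verify the signature using the endpoint secret generated
        // in the Stripe dashboard (env var name is illustrative).
        event, err := webhook.ConstructEvent(payload,
            r.Header.Get("Stripe-Signature"),
            os.Getenv("STRIPE_WEBHOOK_SECRET"))
        if err != nil {
            http.Error(w, "bad signature", http.StatusBadRequest)
            return
        }
        switch event.Type {
        case "checkout.session.completed":
            var sess stripe.CheckoutSession
            if err := json.Unmarshal(event.Data.Raw, &sess); err == nil {
                // e.g. mark the corresponding user record as paid
                log.Printf("checkout completed: %s", sess.ID)
            }
        }
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        http.HandleFunc("/webhook/stripe", handleStripeWebhook)
        log.Fatal(http.ListenAndServe(":8090", nil))
    }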
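
And here's a rough illustration of the fragment-as-component pattern with html/template; the template name, file layout, and data are made up for the example. htmx can hx-get the same endpoint and swap the returned fragment into the page:

    package main

    import (
        "html/template"
        "log"
        "net/http"
    )

    // Assumes templates/*.html contains a fragment such as:
    //   {{define "userCard"}}<div class="card">{{.Name}}</div>{{end}}
    var tmpl = template.Must(template.ParseGlob("templates/*.html"))

    func userCard(w http.ResponseWriter, r *http.Request) {
        // The same fragment can be included in full pages via
        // {{template "userCard" .}}, or served alone for htmx swaps.
        data := struct{ Name string }{Name: "Ada"}
        if err := tmpl.ExecuteTemplate(w, "userCard", data); err != nil {
            log.Println(err)
        }
    }

    func main() {
        http.HandleFunc("/user-card", userCard)
        log.Fatal(http.ListenAndServe(":8090", nil))
    }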

Of course it's possible to set up all these things yourself, but all in, it took me two months of early mornings.

If you're a busy adult, starting with a complete package like this could be the difference between success and never launching at all, weighed down by complexity.


This is in addition to the educational content I'm going to be creating around this topic. While building DeploySolo, I spent 50% of my time reading source code and 50% in GitHub discussions.

Being comfortable with these resources is a skill engineers should develop.

But having tutorials and cookbooks that help a user achieve a specific goal is extremely helpful for the new engineer or the time-conscious one.

I attribute a lot of Django's and Laravel's popularity to resources like these, which are currently missing from the Go/Pocketbase ecosystem.


Yes, just my one OpenAI key. I haven't hit any limits so far, though I wasn't hitting their APIs that hard.


While it is true that Speedometer is developed by Apple, it feels misguided to just throw in "of course it will be performant". Even Google Chrome scores higher in Speedometer on an M1 (184, vs. 88 when run on an Intel Mac), so it's not like Apple is underhandedly making its own software score higher. To be fair, though, you can't test other browser engines on an iPhone, since they're all basically WebKit web views.

Also, it's worth noting that on Basemark Web 3.0, the iPhone 14 Pro Max scores 1033.56 while the formidable Galaxy Z Fold 4 running the SD 8+ Gen 1 scores only 641.02, so it doesn't appear that Speedometer is just fluffing numbers for Apple.


Now you've got my point. You should have used several other benchmarks, then.


It's very subjective, for sure. I've always felt like browsing on Apple silicon (M1/A# Bionic) is "snappier" than on its counterparts, but it's hard to quantify. Speedometer has successfully put that into numbers, so that number is relevant to me.


This is amazing. Holy moly. The first sample audio clip could easily pass for a clip from an audiobook. And if I didn't know about this, you'd find it hard to convince me that the Joe Rogan clip isn't actually Joe.


I'm genuinely curious why you think the AI "steals" art from artists. That train of thought seems to imply that any material generated by any machine learning process is "stolen" from the training data. Why do you think that's the case?

Speaking of data and profits: if I were a digital artist with the ability to hand-draw images, and I decided to draw a unique composition that follows, say, the art style of Anne Stokes, do you think it would be the case that I "stole" from Anne Stokes?

It's unclear to me why you point to the need for me, the AI artist, to deliver some obscure "value" back to the artists whose work was referenced as training data. The only value I have the ability to deliver is to the AI model and its authors, other than perhaps propagating the style or work of the original artists.


The "we didn't create the specific composition" bit is the one that's most contentious, I think. Even in Midjourney, getting an image to be just right takes a lot of time and skill with all the flags, word weights, and specific word prompts you have experiment with over hours. To me, the resulting image feels very much like my composition, with every element of the image placed exactly where I want them on the canvas, with the specific style, color, and feel that I dictate.

Your analogy with an assistant is interesting, but to me it's flawed. A better analogy, in my view, would be an art director or concept artist coming up with a concept that they ask their team to execute. In that case, it's normal for the art director to take credit for the art, and in this context, where the AI is just a tool with no agency, I feel the same way.


You have a point. Can you share a prompt with that level of specificity? It's fascinating to see this new sort of science of prompt engineering.


An example prompt for a recent image I generated: "skull::0.75 of a woman with long hair, ornate::0.9, hooded::0.7 robe::0.7, highly detailed::0.8, decorated, full body, standing in a mystical::0.8 forest::0.7, style of Aleksi Briclot, realistic::-1 photoreal::-0.7, --no water --no sky, --ar 4:5"

Prompt aside, the final image took me about an hour to make, rerolling a number of times to get the composition I want with the base v3 algorithm, then upscaling and then remastering with the --test --creative --upbeta mode, and rerolling again until I got the final remaster just right.


It looks great, and there's definitely an art to prompting. But whether you created the exact composition as we were discussing still isn't solved for me. It's not full body, there's barely something that looks like a skull, and it's off to the side.

It's a very interesting question that I think we'll be debating and thinking about for some time.

