
$10 per million output tokens, wow

sqlx is my favorite way of working with databases in Rust hands down.

I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.

It's got a nice little migration system too with sqlx-cli which is solid.


I’ve used Diesel for a bit now but haven’t had issues wrangling the type system. Can you give an example of an issue you’ve encountered?


This has been exactly my experience! I've found SQLx to be a joy to work with in Rust!


Same. Never again diesel. The type system just turns it into madness. Sqlx is a much more natural fit.


Shadertoy for geometry - Geotoy

https://3d.ameo.design/geotoy

Most core functionality is finished, and it's ready to go. Still some work to do on docs, tutorials, and polish.


This has to be at least the fifth LLMpeg I've seen posted to hacker news in the past few months.

This whole repo is a single 300 LoC Python file, over half of which is the system prompt and comments. It's not even a fine-tuned model or something; it's literally just a wrapper around llama-cpp with a very basic prompt tacked on.

I'm sure it's potentially useful and maybe even works, but I'm really sick of seeing these extremely low-effort projects posted and upvoted over and over.


At this rate one could probably automate the low effort project -> HN post pipeline


Hmm … 🤔

🤑


🙄

How many Show HNs have we seen of low-effort genAI image tools that are less capable and worse in every way than the umpteengillion previous versions of a genAI tool? People use Show HN like it's their parents' refrigerator to hang their preschool art on, looking for validation or something. If anyone were looking to hire someone in AI, would you see one of these subpar projects from a Show HN and think that's someone you would be interested in working with?


Using yet another LLM to generate the project code!


I bet people very often upvote based on the title and perhaps the comments, not the actual content or its utility. Besides, this karma business can really get one hooked, like a sucker chasing the high-grade stuff…


> This whole repo is a single 300 LoC Python file, over half of which is the system prompt and comments.

You accurately described many "AI apps" of this era.


At some point, one would imagine just making a less confusing FFmpeg CLI would be a better use of everyone's time. (I've sort-of understood it now, but the learning curve is pretty steep.)


that's akin to making a "less confusing git" - the reason ffmpeg/git are so widely used is largely due to how powerful and fine-tunable they are. the learning curve is an unfortunate but necessary side effect


It's really not; things can be powerful and easy to use. I'd even say some parts of ffmpeg are well designed in this way, like the fact that `ffmpeg -i ./video.mp4 ./video.wav` just works.

I think the fact that all the commands are shorthand doesn't help. No matter how many ffmpeg commands you copy and paste in your life, unless you put the effort in you're not going to begin to remember what -an means in the sea of all the other two-letter switches, and the copywriting in the output and error messages makes it very hard to tell what's going wrong for someone who hasn't used it for long.

Not saying it should all be super wordy, just that it's difficult to pick things up through osmosis when the commands look like -ss -t -rc:v. Respect to anyone who actually learnt how this works well enough to type it without sitting there with the documentation and hitting a wall for an hour.

Will say though, the raw tech inside ffmpeg has always meant that figuring out how to get it to do the thing is worth it, because it's insanely powerful.
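
To make the shorthand concrete, here's a tiny hypothetical sketch (Python's subprocess, used purely so each flag can carry a comment; the filenames are made up, the flag meanings are standard ffmpeg):

    import subprocess

    # Hypothetical "trim a clip and drop the audio" invocation, with each
    # shorthand flag spelled out.
    cmd = [
        "ffmpeg",
        "-ss", "00:01:30",   # seek: start reading the input at 1m30s
        "-t", "10",          # duration: only process 10 seconds
        "-i", "input.mp4",   # -i marks an input file
        "-an",               # "audio none": strip the audio stream
        "output.mp4",        # anything without -i is treated as an output
    ]
    subprocess.run(cmd, check=True)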


> It's really not; things can be powerful and easy to use. I'd even say some parts of ffmpeg are well designed in this way, like the fact that `ffmpeg -i ./video.mp4 ./video.wav` just works.

I'd argue that even this command line isn't well-designed at all; it should be `ffmpeg video.mp4 -o video.wav`. How is it sane that anything without an -i in front is an _output_ filename?


It's not an output filename. The -i is the input; the .wav is the output filename in that example.


Yes? That's not in conflict with anything I said.


You're right! I misread your "without" as "with". I need better glasses, or reading comprehension.


You could make both git and ffmpeg much less confusing without sacrificing their power and tunability.


Case in point, git is already much less confusing than it was at 1.0 and nobody complains it has gotten less powerful. (“Less confusing” does not mean “not confusing”, of course.)


You could also write a wrapper for non-technical users without a rewrite or compromising core functionality.


In my experience that never helps, because the wrapper is never comprehensive or well documented enough that you can completely avoid dipping below it. And as soon as that happens it's worse, because you have to learn the original thing anyway, plus some poorly documented wrapper.


It was vibe coded too; the doc comments and pokemon try-catch are dead giveaways. It's a slop wrapper around a slop generator to farm GitHub stars. Welcome to the future.


> pokemon try-catch

I've seen LLMs do this in other languages as well but didn't realize there was a term for it. Wrapping entire function bodies in try/catch: at the very least, please just wrap the caller so you don't have to indent the entire body for no reason. Not to mention that a lot of the commands inside can't even throw.
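
For anyone who hasn't seen the term before, a rough hypothetical sketch of the pattern (not code from the repo in question), next to a version that wraps only the call that can actually fail:

    import subprocess

    def run_pokemon(args):
        # "Pokemon" style: gotta catch 'em all. The whole body sits inside
        # one try/except, even though most of these lines can't raise.
        try:
            cmd = ["ffmpeg", "-hide_banner", *args]
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)
            print("done")
        except Exception as e:  # swallows typos, bugs, everything
            print(f"error: {e}")

    def run_narrow(args):
        # Only the call that can actually fail is wrapped, and only the
        # failure we expect is handled.
        cmd = ["ffmpeg", "-hide_banner", *args]
        print("running:", " ".join(cmd))
        try:
            subprocess.run(cmd, check=True)
        except subprocess.CalledProcessError as e:
            print(f"ffmpeg exited with status {e.returncode}")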


You can already just use ChatGPT etc. to generate the ffmpeg commands. I just did this a few nights ago. There's no wrapper or any need to tune anything as far as I can tell.


Look at it as validation of user demand.


There are whole startups that are just a system prompt and a thousand lines of UI these days...


No one is stopping you from creating a Nobel Prize-winning ffmpeg wrapper yourself to show how it's done ;)


I used this at a previous company with quite good success.

With relatively minimal effort, I was able to spin up a little standalone container that wrapped around the service and exposed a basic API to parse a raw address string and return it as structured data.

Address parsing is definitely an extremely complex problem space with practically infinite edge cases, but libpostal does just about as well as I could expect it to.
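
For a sense of scale, the core call such a wrapper exposes is tiny. A minimal sketch using libpostal's Python bindings (pypostal), assuming libpostal itself is installed on the system; the address is just an example:

    # pip install postal  (requires the libpostal C library to be installed)
    from postal.parser import parse_address

    raw = "781 Franklin Ave Crown Heights Brooklyn NY 11216"

    # parse_address returns (value, label) pairs, e.g.
    # ("781", "house_number"), ("franklin ave", "road"), ...
    for value, label in parse_address(raw):
        print(f"{label}: {value}")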


Ditto - I was impressed with how well it handled the weird edge cases in our data.

They've managed to create a great working implementation of a very, very small model of a very specific subset of language.


Worth noting that libpostal requires ~2GB RAM when fully loaded due to its comprehensive data models. For containerized deployments, we reduced memory usage by ~70% by compiling with only the specific country models needed for our use case.


> The model source code and weights will also be provided upon final publication.

Page 59 of the preprint [1]

Seems like they do intend to publish the weights, actually.

[1]: https://storage.googleapis.com/deepmind-media/papers/alphage...


Thank you for this. I did not notice this at the end of the paper.


> runes basically infecting the entire codebase

Yeah, sadly the stores the author talks about here aren't the right way to do things in modern Svelte anymore; they're all-in on runes.

Stores were a big part of the reason I liked Svelte; they were so conceptually simple, extensible with plain JS, and made interop outside of Svelte trivial without the Svelte magic leaking out.

They're still in Svelte, but they mix poorly with runes and are basically unsupported in runes mode. I opened up a bug about store behavior not working like it used to when mixing with runes, and the response was pretty much "yeah we don't expect it to work like that anymore when runes mode is enabled".


I don't even mind the runes; I just don't get the impression from the docs that anyone has a clear feeling for how to actually use them. E.g. forget the toy examples: suppose an alien has given me a clump of minified business logic and we have to make it work.


Sounds similar to my own experiences trying to debug GH actions locally.

I've tried twice now to get it working, pulling down many GBs of images and installing stuff and then getting stuck in some obscure configuration or environment issue. I was even running Linux locally which I figured would be the happiest path.

I'm not eager to try again. Unless the CI is very slow, or the GH actions need to be updated often for some reason, I feel like it's better to just brute-force it: wait for CI to run remotely.


There's another alternative: debug in CI itself. There are a few ways you can pause your CI at a specific step and get a shell to do some debugging, usually via SSH. I've found that to be the most useful.



Came here to say this, and to recommend: https://github.com/appleboy/ssh-action

It’s slow and arduous work to inject at the right point and find out what went wrong. But it’s miles better than act. act is a noble idea, and I hope it can meet its target, but it falls short currently.

ssh-action gets you on the host, and lets you quickly establish what is happening and implement fixes. It’s top notch!

(I’m unaffiliated, just a big fan of ssh-action).


Extremely cool!

It's interesting and honestly encouraging that this kind of thing can be discovered and understood using just "simple linear methods" and high-level analysis of patterns in layer activations.


This was a great read, with just the right amount of detail to satisfy my curiosity about the process without getting tedious.

Huge props to the author for coming up with this whole process and providing such fascinating details.

