a5huynh's comments

Not OP, but one of my favorites is: https://www.youtube.com/@Settledrs

Very entertaining to watch, and it explains things so that even non-players can understand the sheer absurdity of some of the attempts.


Just to add context, since I was curious about "Toyota had its wheels literally falling off".

That was a recall from 2022 (https://www.cnn.com/2022/10/06/business/toyota-bz4x-wheel-fi...) for 260 vehicles (their bZ4X electric SUV).

The Cybertruck recall affects 3,878 vehicles (https://www.caranddriver.com/news/a60538687/2024-tesla-cyber...).


That's some misleading context since the article you linked says:

> Only 260 BZ4Xs had been delivered to customers before the recall was announced

If they had sold 4000 like the truck, they would have recalled 4000. Both recalls affected all the cars sold for the model.


I don't think it's too misleading, because in both cases that's how many vehicles were affected by the recall at that moment.

Toyota shipped/sold more as mentioned in the article, but those are unaffected. Likewise with the Tesla, any shipped after the defect was discovered are unaffected.


This reminds me of Floneum (https://github.com/floneum/floneum), an open-source tool for graph-based workflows using local LLMs.

It's geared more toward personal use and isn't quite as polished, but it's a decent alternative for those looking to play around with the idea locally.


They look great. https://flowiseai.com/ does something similar for building AI apps specifically. Less workflow centric but worth checking out regardless.


I've been using a combo of LLMs + live transcription to build a passive assistant that keeps track of talking points and can pull out data/tasks from a conversation you're having (https://sightglass.ai or here's a demo of me using it: https://www.loom.com/share/0220ca03bce341669d314d4254872226)
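The general pattern is roughly this; a simplified Python sketch rather than the actual pipeline (the whisper/OpenAI calls and the extraction prompt here are just placeholders):

    import openai    # any chat-completion LLM works; gpt-3.5-turbo as a placeholder
    import whisper   # open-source speech-to-text for the transcription step

    # 1. Transcribe a chunk of the conversation.
    stt = whisper.load_model("base")
    transcript = stt.transcribe("call_segment.wav")["text"]

    # 2. Ask an LLM to pull out talking points, data, and tasks from the transcript.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Extract action items and key data points as a bullet list."},
            {"role": "user", "content": transcript},
        ],
    )
    print(resp["choices"][0]["message"]["content"])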

So far this is being used for:

- Sales -> guiding new recruits during more complex client calls

- HR -> capturing responses during screening interviews

If you'd like to try this out, feel free to DM me or email me at andrew at sightglass.ai; we're looking for more testers!


It's in San Francisco. It was announced Sept. 6th on their blog: https://openai.com/blog/announcing-openai-devday


They didn't do a good job of "gathering" then. Real AI researchers don't have time to keep reading blogs. They should have announced it first and foremost on arXiv or inside the GPT API documentation if they wanted real researchers.


Are they even a real researcher if an AI agent hasn't scavenged the Internet for relevant events and planned the trip? Check; maybe your virtual AI persona is attending it.


Well, this is mostly for developers, not researchers. It's about using their system.


A problem with fine-tuning on organization data is that if the underlying data changes, you'd need to fine-tune the model again after each change. That might be okay for one-off changes (such as the name of the model in the example), but if it costs $300 each time (not to mention the time spent) and you have hundreds or thousands of changes per month, it isn't really viable: at 1,000 changes a month you'd be spending $300,000 on fine-tuning alone.


If you want to play around with Stable Diffusion XL: https://clipdrop.co


Since Clipdrop has an API, is there any way to use it with ComfyUI or Automatic1111 (or whatever that's called)?


I just tried this and the UI is very nice (better than DreamStudio), with good tool integration, and image quality is definitely going up with each new release. You can see a few results at fb.com/onlyrolydog (along with a lot of other canine nonsense).


Hey HN!

We've been building a semantic search engine for audio content, focusing on podcasts. We're releasing an early version to get feedback.

You can search or ask questions about a particular podcast episode or feed and get back an answer as well as links to the relevant podcasts.
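Under the hood it's the usual semantic-search pattern; here's a rough Python sketch of the idea rather than our exact stack (the sentence-transformers model and transcript chunks are placeholders):

    from sentence_transformers import SentenceTransformer, util

    # Transcript chunks from an episode (placeholder text).
    chunks = [
        "In this segment the hosts talk about the history of the transistor...",
        "Then an interview about semiconductor supply chains and chip shortages...",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

    # Embed the question and pull back the most similar chunks as context/links.
    query = "What did they say about supply chains?"
    query_embedding = model.encode(query, convert_to_tensor=True)
    for hit in util.semantic_search(query_embedding, chunk_embeddings, top_k=2)[0]:
        print(round(hit["score"], 3), chunks[hit["corpus_id"]])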

We also let you follow your favorite podcasts and receive summaries in your inbox whenever a new episode comes out.


If you're running the text-generation-webui (https://github.com/oobabooga/text-generation-webui) it has the ability to train LoRAs.

It'll require a beefy GPU, but I've seen some fun examples, like someone training a LoRA on Skyrim books.
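For anyone curious what that looks like outside the webui, here's a rough sketch of the same kind of LoRA training using Hugging Face transformers + peft (the base model, corpus file, and hyperparameters are placeholders, and this is a simplified version of what the webui's trainer handles for you):

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "huggyllama/llama-7b"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # llama has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(base)

    # Attach small low-rank adapters; only these weights get trained.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # Any plain-text corpus works here, e.g. a dump of in-game books.
    data = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                               num_train_epochs=3, learning_rate=2e-4, fp16=True),
    )
    trainer.train()
    model.save_pretrained("lora-out")  # writes only the small adapter weights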


I recently found this list of models that work with llama.cpp: https://rentry.org/nur779 (with download links, though given LLaMA's licensing gray area, use at your own risk).

The latest so far would be Vicuna, whose weights were just recently released.
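Once you've got one of the quantized files, the llama-cpp-python bindings make it easy to try from Python; a minimal sketch, assuming a Vicuna-style prompt format and a placeholder model path:

    from llama_cpp import Llama

    # Path and prompt format depend on which quantized model you grabbed.
    llm = Llama(model_path="./models/ggml-vicuna-13b-q4_0.bin", n_ctx=2048)
    out = llm("### Human: Explain LoRA in one sentence.\n### Assistant:",
              max_tokens=128, stop=["### Human:"])
    print(out["choices"][0]["text"])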

