I think this game is one of the best pieces of storytelling ever created, and it's so weird and arthouse that I can hardly believe there's an audience for it.
Meanwhile, the other rumor is that Nvidia stopped production on RTX 40 series in order to shift all manufacturing towards ML chips like H100. Seems like they're both in an all-out sprint to produce as many H100 and MI300X as possible. Which makes total sense to me.
True (and it's ECC to boot), but the extra RAM isn't that expensive (I'd wild-ass-guess $500 at most, extrapolating from the 4060 Ti's extra 8GB retailing for $100 more), and it's also arguably the biggest mechanism of market segmentation between the two products.
No, do not do this. Use Matrix, or...anything else really. I tried building on top of IRC a decade ago and it was a more forgivable mistake back then, but a mistake nonetheless.
Yeah, for things like realtime voice calls as Revolt offers, there's no point in trying to bolt on to IRC.
You can (ab)use IRC as a generic datastore, the same way you can Twitter, SMS, or anything else that allows for data to be stored, but it's a terrible idea, and you'll end up with something overly complicated and without any sort of compatibility with generic IRC clients.
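To make the "overly complicated" point concrete, here is a minimal sketch of what such a datastore hack might look like: chunking a binary blob into base64 payloads that fit IRC's 512-byte line limit. The key/`i/n` framing scheme is entirely hypothetical (invented for illustration, not any real protocol), and a real version would still need retransmission, ordering guarantees, and a bot to replay the lines.

```python
import base64

MAX_LINE = 512  # IRC's historical limit per message, including CRLF

def encode_blob(channel: str, key: str, data: bytes) -> list[str]:
    """Split a blob into PRIVMSG lines tagged 'key i/n <base64 chunk>'."""
    prefix = f"PRIVMSG {channel} :{key} "
    budget = MAX_LINE - len(prefix) - 10  # slack for the 'i/n ' header and CRLF
    b64 = base64.b64encode(data).decode()
    chunks = [b64[i:i + budget] for i in range(0, len(b64), budget)] or [""]
    n = len(chunks)
    return [f"{prefix}{i + 1}/{n} {c}" for i, c in enumerate(chunks)]

def decode_blob(lines: list[str], key: str) -> bytes:
    """Reassemble a blob from PRIVMSG lines carrying the given key."""
    parts = {}
    for line in lines:
        _, _, rest = line.partition(" :")   # drop the 'PRIVMSG #chan' part
        k, seq, payload = rest.split(" ", 2)
        if k != key:
            continue
        idx, _, _total = seq.partition("/")
        parts[int(idx)] = payload
    return base64.b64decode("".join(parts[i] for i in sorted(parts)))
```

Note what you give up even in this toy: the "data" is just opaque lines scrolling past in a channel, meaningless to any generic IRC client, which is exactly the incompatibility the comment above warns about.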
And honestly, complaining about Electron apps is just lazy. It may not be the choice you'd make with unlimited resources and time to write separate native applications for every platform, but it's perfectly acceptable for a first iteration. It's also a great way to make sure all your platforms have roughly the same behavior. I'd much rather have an Electron Linux app (which is easy to port and costs little to maintain) than no client at all.
I would also check out their 3B model. I tested it on launch with LoRA fine-tuning and found it to be surprisingly capable despite its size. I think a lot of people are skipping past testing it because it only has 3B params.
Horizon: Call of the Mountain is a recent example of a UX interaction that I really like with eye tracking + a peripheral. It uses a button press, so it's an intentional action, but the eye gaze determines which action to take.
I think in the world of ML, contributing that much compute should count as co-creation. There is a lot of code published by academia that just needs compute and data, but they don't receive it. Stability deserves credit for Doing the Thing. IP rights are another thing but that's a whole subtree of legal questions that society is barreling towards.
People have to remember that back in 2022-08, AI was still not a big thing. DALL-E 2 had been released only two months earlier, and was just a quickly forgotten novelty.
Emad had to have funded the training of SD even before DALLE-2, investing the cash in exchange for branding rights to a free model.
It's obviously an extremely good bet in retrospect (RunwayML is probably bleeding with regret), giving Stability herculean name recognition despite not having done any high-profile research themselves. But when the bet was made, it was quite an insane bet requiring a lot of vision.
Most of the latent and stable diffusion authors also work at Stability AI, as do many other generative AI research leaders in media.
Naming rights on the model were not part of the compute grant; we give those grants incredibly freely, along with support. The naming was suggested by the researchers in this case.