
I wonder if it would be in the government's interest to heavily subsidize streaming services. Considering virtually everything seems to be getting hopelessly more expensive, and no real progress on economic inequality seems likely outside a slim AI path, dollar for dollar free or cheap entertainment provides a lot of utility and can help keep the poor masses complacent.

$30 a month makes a hell of a lot more of a dent in entertainment affordability than it does in healthcare. No clue how accurate these estimates are, but it seems like the combined budget of most shows and movies in a given year is somewhere around the $40-50 billion range, which in the context of all the other shit in the federal budget is kind of nothing.


Do you think the entertainment value you'd get if you were required to pay $30 would be better than what you can get right now for $30?

Last thing I want is more billionaire handouts, with all due respect. As much as the rising costs suck, it is still better than the cable lock-in contracts and bundling deals. Netflix didn't lead to the cost-of-living crisis we've arrived at today.

A single machine for personal inference on models of this size isn't going to idle at a power draw high enough for electricity to become a problem, and for personal use it's not like it would be under load often. If for some reason you are able to keep it under heavy load, presumably it's doing something valuable enough to easily justify the electricity.


We've been getting increasingly fucked for years on housing prices, healthcare, food, live entertainment, etc. Consumer electronics were one of the few areas where you could at least argue you were getting more value per dollar each year. GPUs have been a mess for a while now, but now it seems like it's just going to be everything.


It feels like the past 25 years have been one slowly constricting circle, continuously chipping away at privacy and freedom, and it almost never goes in the other direction or even just reverts a policy back to baseline. People largely don't seem to care though, and I don't think there are any politicians seriously fighting against it and prioritizing it as a primary policy.


Promising-looking tool. It would be useful to add a performance section to the README with some ballpark of what to expect, even if it is just a reference point from one GPU.

I've been considering building something similar but focused on static stuff like watermarks, so just single masks. From that DiffuEraser page it seems performance is brutally slow, at less than 1 fps at 720p.

For watermarks you can use an ffmpeg blur, which will of course be super fast and looks good on certain kinds of content that are mostly uniform, like a sky, but terrible and very obvious for most backgrounds. I've gotten really good results with videos shot with static cameras by generating a single inpainted frame and then just using that as the "cover", cropped and blurred over the watermark (or any object, really). Even better results come from completely stabilizing the video and balancing the color if it is changing slightly over time. This of course only works if nothing moving intersects with the removed target; if the camera is moving, then you need every frame inpainted.
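
For the static-camera case, that cover trick can be a single ffmpeg crop+overlay. Rough sketch below; the filenames and the watermark box coordinates are made-up placeholders:

    import subprocess

    # Hypothetical watermark box: 200x80 pixels with its top-left corner at (1050, 30).
    WM_W, WM_H, WM_X, WM_Y = 200, 80, 1050, 30

    cmd = [
        "ffmpeg", "-y",
        "-i", "input.mp4",        # placeholder source video
        "-i", "clean_frame.png",  # the single inpainted reference frame
        "-filter_complex",
        # cut the watermark-sized patch out of the clean frame,
        # then paste it over the same spot on every video frame
        # (overlay repeats the still image for the whole clip by default)
        f"[1:v]crop={WM_W}:{WM_H}:{WM_X}:{WM_Y}[patch];"
        f"[0:v][patch]overlay={WM_X}:{WM_Y}",
        "-c:a", "copy",
        "output.mp4",
    ]
    subprocess.run(cmd, check=True)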

Thus far all full-video inpainting like this has been too slow to be practically useful, for example to casually remove watermarks: processing takes tens of minutes instead of seconds, where I would really want it to be close to real time. I've wondered what knobs, if any, can be turned to sacrifice quality in order to boost performance. My main ideas are to try to automate detecting and applying that single-frame technique to as much of the video as possible, then separately process all the other chunks with diffusion, scaling down to some really small size like 240p and using AI-based upscaling on those chunks afterwards, which seems to be fairly fast these days compared to diffusion.


Good point — I’ll add that to the README.

Masking is fast — more or less real-time, maybe even a bit faster.

However, infill is not real-time. It runs at about 0.8 FPS on an RTX 3090 at 860p (which is the default resolution of the underlying networks).

There are much faster models out there, but none that match the visual quality and can run on a consumer GPU as of now. The use case for VideoVanish is more geared towards professional or hobby video editing, e.g., you filmed a scene for a video or movie and don't want to spend two days doing manual inpainting.

VideoVanish does have an option to run the infill at a lower resolution, compositing only the infilled areas from the low-resolution output back into the full-resolution frames, so you can trade visual fidelity for speed. Depending on what's behind the patches, this can be a very viable approach.
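
Not VideoVanish's actual code, but the low-res infill idea roughly amounts to something like this (numpy/OpenCV sketch; the array shapes and the feathering radius are my assumptions):

    import cv2
    import numpy as np

    def composite_lowres_infill(frame, infill, mask):
        # frame:  full-resolution frame, (H, W, 3) uint8
        # infill: inpainted output at a lower resolution, (h, w, 3) uint8
        # mask:   full-resolution mask of the removed region, (H, W), 1 = inpainted area
        h, w = frame.shape[:2]
        # upscale the low-res infill back to frame size
        infill_up = cv2.resize(infill, (w, h), interpolation=cv2.INTER_LINEAR)
        # feather the mask edge a little so the seam is less visible
        soft = cv2.GaussianBlur(mask.astype(np.float32), (15, 15), 0)[..., None]
        # keep the original full-res pixels everywhere outside the masked area
        return (frame * (1 - soft) + infill_up * soft).astype(np.uint8)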


I had basically this exact idea too a few months ago, and at the time I already found a few implementations attempting it, https://robomonkey.io/ being one example, so I didn't pursue it further.

Also it turns out LLMs are already very good at just generating Violentmonkey scripts for me with minimal prompting. They also are great for quickly generating full-blown minimal extensions with something like WXT when you run into userscript limitations. These are kind of the perfect projects for coding with LLMs, given the relatively small context of even a modest extension, and certainly of a userscript.

I am a bit surprised YC would fund this as I think building a large business on the idea will be extremely difficult.

One angle I was/am considering that I think could be interesting would be truly private and personal recommendation systems using LLMs that build up personal context on your likes/dislikes and that you fully control and could own and steer. Ideally local inference and basically an algo that has zero outside business interests.


Great minds think alike :) I think it is important for users to have more control over how they browse the internet, so I'm happy to see others building in the space!

> Also it turns out LLMs are already very good at just generating Violentmonkey scripts for me with minimal prompting. They also are great for quickly generating full-blown minimal extensions with something like WXT when you run into userscript limitations.

We've thought about full-blown extensions and maybe we'll get there, but I'd wager that there is a gap between users who would install/generate a userscript vs. a full-blown extension. Also, a one-click-install userscript is much simpler to share vs. a full Chrome Web Store submission/approval (the approval time has been a pain for many developers I've talked with). With that said, this is early days and we're still figuring out what people want.

> One angle I was/am considering that I think could be interesting would be truly private and personal recommendation systems using LLMs that build up personal context on your likes/dislikes and that you fully control and could own and steer. Ideally local inference and basically an algo that has zero outside business interests.

I've definitely considered the idea of your own personal, tunable recommendation system that follows you across the web. And I have some background there (worked on recommendation systems at Pinterest), but recommendation systems are very data-hungry (unless we regress to the XGBoost days), and the task of predicting whether the user will or won't like this image (binary) is vastly easier than operating over the entire page UI. Definitely not impossible, but we aren't there yet. For now, I just want to make it super easy for you to generate your own useful page mods.


Maybe I'm going too far from the tipping point of "this is easy" when it actually isn't, but the ability to clone an open source project now, modify some part of it, and then compile it locally seems like the future. This is almost trivial to do now.

Why not do the same for the web?

Without going off on a rant about all of the user-hostile bullshit that's being shoved down our throats right now, I think one inevitable outcome of AI is that users are going to need defensive local AI agents protecting them and fact-checking data. This is the Trojan horse for the big tech companies that rely on ad revenue and dark patterns to manipulate their users: if they provide the AI agents, the agents will be obviously inferior and not super-intelligent, just like when Google's early public image-gen model was making images of ethnically and gender-diverse Nazi soldiers, etc.


Yeah, I have thought about a more user-friendly Violentmonkey before. This sort of thing just needs to be open source and non-profit; there isn't even much upkeep to it. At what point will the investors want some form of return?

This is built from the system that created enshittification in the first place; a cleaner web is definitely not going to come from a startup.


Annoying not to include price or release date.

Considering the Quest 3 came out two entire years ago, this feels too close in terms of hardware instead of feeling like a next-generation headset.

Responses do seem fairly positive though. I wonder: if this had been released as the Quest 4, would you all be reacting as positively?


$5 a TB is insanely cheap; that's someone just getting rid of stuff and not really caring about the price. Nothing has ever been that cheap at retail or manufacturer-recertified. The lowest was like $9-10 last year, and now we are sitting at more like $15.


Yeah, it was interesting because the guy knew what he was doing, unlike other great deals I've had. I guess he just really didn't want them.


... might wanna zero those drives just in case


They came uninitialized, but yeah, it could be worthwhile to actually zero and fully format them. They are decently fast at 275 MB/s, but even that's 16 hours to do one pass. Maybe let it run over the weekend.
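
Quick sanity check on that number (the ~16 TB capacity is my assumption; 275 MB/s is the quoted rate):

    # back-of-the-envelope: time for one full sequential pass
    capacity_bytes = 16e12      # assumed ~16 TB drive
    write_rate = 275e6          # 275 MB/s sustained sequential write
    hours = capacity_bytes / write_rate / 3600
    print(f"{hours:.1f} hours per pass")   # ~16.2 hours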


16 hours isn't that bad. I recently had to zero my old 2 TB Time Capsule and just let it run overnight and during work; it wasn't so bad. You'd also get to detect bad sectors and figure out how good the drive is in terms of what % of it is actually usable for large sequential writes.


Memory bandwidth is a joke. You would think by now somebody would come out with a well balanced machine for inference instead of always handicapping one of the important aspects. Feels like a conspiracy.

At least the M5 Ultra should finally balance things, given the significant improvements to prompt processing in the M5 from what we've seen. Apple has had significantly higher memory bandwidth since the M1 series, which is approaching 5 years old now. Surely an Nvidia machine like this could have at bare minimum 500 GB/s+ if they cared in the slightest about competition.
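
For a rough sense of why the bandwidth number dominates local inference, decode speed is approximately memory bandwidth divided by the bytes streamed per token. The figures below are illustrative assumptions, not specs of any particular machine:

    # crude upper bound: every generated token streams the active weights once
    def rough_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    model_gb = 35  # e.g. a ~70B-parameter model quantized to 4-bit (assumed)
    for bw in (250, 500, 800):  # hypothetical bandwidth tiers in GB/s
        print(f"{bw} GB/s -> ~{rough_tokens_per_sec(bw, model_gb):.1f} tok/s")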


It's funny that they have this marketing blog post based on competing on price, yet don't disclose any of their pricing on their site, only a "schedule a meeting" link, which is just about the biggest RED FLAG on pricing there is.


Our library is open source, the price is 0!! :-) Haha

We're actually mostly talking to people (that "schedule a meeting") to see how we can help them migrate their stuff away (from Heroku, Vercel, etc.).

But we're not sure of the pricing model yet - probably enterprise features like GitLab does, while remaining open source. It's a tough(er) balance than running a hosted service where you can "just" (over)charge people.

