The previous models were either 1. Limited in their capacity to create something that looked very cool, or 2. Gigantic models that needed clusters of GPUs and lots of infrastructure to generate a single image.

One major thing that happened recently (2ish weeks ago) was the release of an algorithm (with weights) called stable diffusion, which runs on consumer grade hardware and requires about 8GB of GPU RAM to generate something that looks cool. This has opened up usage of these models for a lot of people.
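
For those who want to try it locally, a minimal sketch (assuming the Hugging Face diffusers library and the CompVis/stable-diffusion-v1-4 weights; neither is the only way to run the released model) looks roughly like this:

    # Hypothetical sketch: Stable Diffusion via Hugging Face diffusers,
    # one of several ways to run the publicly released weights.
    import torch
    from diffusers import StableDiffusionPipeline

    # float16 keeps VRAM usage roughly within the ~8GB mentioned above.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    image = pipe("photo of a cat, studio lighting, highly detailed").images[0]
    image.save("cat.png")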

example outputs with prompts for the curious: https://lexica.art/


Is Lexica finding previously computed results, or generating them? I could only get it to work with very simple queries like "photo of a cat".


It's just a database of submitted works I think. You can try scrolling down on the opening page to see random prompts and outputs.


It's ~1.5 million entries inputted by users during the beta period on Discord.


There are a lot of prompts and results that aren't being included. Not sure what the criteria were.


The quality of the upper echelon of art may be raised, but there's still a discovery problem there. Having to sift through stuff to find the gems is already an issue, imo. The OP makes a decent (if pessimistic) point.


My opinion is that the problem of "too much content to watch in several lifetimes" will not be solved on the supply side, but on the demand side.

Not all content is created equal: we are social animals, and what people around us do interests us much more, even if it is of lower quality. So, if anyone can generate professional-looking creative projects with relatively little effort, we'll gravitate towards people creating content on niche subjects that interest us, thus creating small communities with high engagement. Even if they have low watch counts, they'll matter to those participating in them. Fanfic communities already work that way.

There will always be a place for conventional mainstream media outlets creating run-of-the-mill high-production-value works, with themes averaged to appeal to the masses; it's just that they'll have a lot more competition from communities of the first type.


Consider the extremely low-quality visuals of South Park. They used their crude imagery as joke enhancement while presenting ideas far more sophisticated than the majority of animated media.


Exactly. And it only took a tool (Flash) that allowed a small team to do what used to require a whole company of trained experts (animated cartoons).


I think you missed my point: they put their energy into the writing, not the animation.


Maybe, but I think you missed mine: they were able to put all their energy into the writing because the animation had been made trivial by a new tool.

Had they had to draw the episodes the old-fashioned way, they would have had to put a lot more energy into the animation even to get the same result.


Yeah, agreed.


I might be wrong, but I could imagine that AI will push the absolute maximum out of human creativity. To beat AI, you'll truly need to make something outstanding, and I think there will be people achieving that and truly pushing the boundaries of creativity and art forward, in ways we haven't seen before. And those people will be rewarded. Everyone will have to step up their game.


Probably, artists will use AI not to "beat" it, but as a base tool for exploring the space of possibility and expanding it into new territories. People will see AI as just one more tool in the toolbox.

People using Dall-E or Midjourney naively will be like those unremarkable late-19th-century classicist painters doing realistic yet conventional paintings, the kind you can nowadays create as studio photographs.

Meanwhile, brilliant artists will train new AI models on data collections that have never been used as training input before, to generate completely new styles - just like the -ism movements threw all academic conventions away in pursuit of new art styles, bringing us modern and postmodern art.


I think AI will be pretty good at doing recommendations. Show you a bit of random, get your likes and dislikes, exploit what the algorithm learned. TikTok does this well already and, I expect, will continue to do well when the content is AI generated.


I work in the area of recommendations, and this is not a solved problem at all. You can only recommend what has been shown (without doing coldstart). One major issue is that forms of content other than 30-second clips can't easily use TikTok's way of bootstrapping engagement while an item is fresh. Not everyone will understand or appreciate a "new Shakespeare", and it may fall by the wayside.

I too hope it gets better, but it's hard to replace a panel of experts that have sifted through their subject when it comes to quality recommendations in some fields.


TikTok optimises for what people spend a long time looking at, but I don't think anyone would claim that the metric it uses is what we would want to define as quality in the broader sense.


I seriously wonder if Tiktok uses eye gaze tracking in their interest assessments. If people's eyes follow the same gaze pattern on a clip repeatedly, that's a damn clear indicator of interest.


My personal hypothesis (based on nothing) is that TikTok just uses the very strong signal of watch time. If you watch a clip all the way, or multiple times, that's good. If you skip early - that's bad.

When I had Netflix I remember being frustrated that Netflix would recommend me shows "based on" content I had watched for a few minutes, decided I didn't like, and backed out of. Why would you recommend me content if you have a strong signal I dislike it?


AI won't be designed to serve users the recommendations they want, but whatever is in the best interest of the person designing the AI. There was a pretty good article on HN about this, yesterday I think, that covered it well.

This is the thread: https://news.ycombinator.com/item?id=32482523

and it's well worth reading the article.

TL;DR: AI is good at what it's optimized for, but don't expect that to be aligned with what the user wants.


For the people who answer this, I'd also like to know whether they booted from an SD card or an SSD. I noticed a huge difference in responsiveness when using my Pi 4 (only via SSH and a terminal) for some machine learning jobs after I switched to booting from an external SSD. For Pi 4s made in the last couple of years, booting from SSD seems to be a built-in feature, with no need to flash anything on the device first; having to flash it is what held me back from doing this earlier.


Indeed. There are also SBCs in a similar price and spec range from other vendors, but with SATA or even M.2 PCIe interfaces, which are generally a lot more reliable than gambling on dodgy USB3 adapters.

Alternatively, CM4 (or equivalent) on an appropriate board.

(I used one of those as a workstation for a month or so; worked fine enough, the only major friction was that I had to be mindful of closing browser tabs)


For me, using an SSD with the Pi is the key to making it usable as a daily driver. SSD plus overclocking.


I was going to say this. The SD card is the weak link in the Raspberry Pi system. Even a USB flash drive is better. I have a Pi 4 that I use as a secondary system, and it boots from an external SSD. It makes a real difference in responsiveness.


One thing that struck me while reading this is the potential connection of doomscrolling to our evolution.

Humans are in large part "information foragers", in the sense that information has been vitally important to our survival as a species, and potentially a part of why we've developed larger brains than similar animals. [0] For example: A poisonous berry has a very different utility from an edible berry. Or: A monsoon season changes the climate enough to make an important difference for your tribe's survival whenever it happens. Querying your surroundings, or other humans for this information might have a large impact.

In that sense, it makes perfect sense that we "can't stop seeking novel and potentially interesting information" on these sites. Of course, the way some of them are designed to be addictive doesn't help. But it illustrates why it's hard (or impossible) to quit this activity in its entirety.

Maybe we shouldn't strive to quit searching for information, but make sure we have a satisfactory information scavenging activity as our go-to? I don't know what that would look like in practice, but the first thing that pops up in my mind is something like having a list of topics that seem interesting, and that you actively seek out information on, where you partially investigate some of the forks in the road.

Then there's the problem of being too exhausted to do something actively, which might need another solution entirely of course.

[0]: https://www.youtube.com/watch?v=F3n5qtj89QE Sadly I don't remember the timestamp of Jordan Peterson's statement saying humans are information foragers, but intuitively it holds up.


torch.jit shouldn't impact your performance positively or negatively in my experience, although I've only used it on CPU. As far as I know, it's just used for model exports.

The nice thing about it, though, is that you can embed native Python code (compiled so it can run in the C++ runtime) into the model artifact. It has allowed us to write almost all of our models' serving logic right next to the model code itself, giving a better overview than keeping the server logic in a separate repo.

The server we use on top of this can be pretty "dumb" and just funnel all inputs to the model; the embedded Python code determines what to do with them.
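
As a rough sketch of what that setup looks like (the names here are made up for illustration, not our actual code):

    import torch

    class ServingModel(torch.nn.Module):
        """Hypothetical example: model plus its serving logic in one artifact."""

        def __init__(self, backbone: torch.nn.Module):
            super().__init__()
            self.backbone = backbone

        def forward(self, raw: torch.Tensor) -> torch.Tensor:
            # "Serving logic" written in plain Python: normalization,
            # post-processing, etc., scripted together with the weights.
            x = (raw - raw.mean()) / (raw.std() + 1e-6)
            return torch.sigmoid(self.backbone(x))

    backbone = torch.nn.Linear(16, 4)             # stand-in for a real model
    scripted = torch.jit.script(ServingModel(backbone))
    scripted.save("serving_model.pt")             # the "dumb" server just loads and calls this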

As for model speedups, maybe you should look into quantization? I also find that there's usually a lot of low-hanging fruit if you go over the code and rewrite it with faster ops that are mathematically equivalent but allocate less memory or do fewer operations.
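
For what it's worth, dynamic quantization in PyTorch is often close to a one-liner; a generic sketch, not specific to any particular model:

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 8),
    )

    # Weights of the Linear layers become int8; activations are quantized
    # on the fly at inference time. This mainly helps on CPU.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )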


I'm pretty sure stars were discontinued because the best proxy for what you will spend time watching is what you have spent time watching, not your curated ratings. As in: the data point they fed into their models was "time spent on video X", not "rating on video X".

Business of course wants to keep churn low, and they think time spent on the site is the best way to do that.

That (and "people mostly don't use ratings beside 1 and 5") is at least the reasoning I've read every time I've seen this topic mentioned in blogs/talks.

I do also feel Netflix has been pushing its in-house content way more these last few years, to the detriment of their recommendations, though.


Just wanna throw this out there: Plex now has an app for music, called Plexamp [1], which handles it in a far better way than their movie/series app.

I don't see it solving discoverability yet, but it works completely fine on my Android devices. It also does some automatic tagging of your music to generate dynamic playlists.

It is limited to Plex Pass subscribers for now, though. I'd say it's worth trying a free trial at least.

1: https://plexamp.com/


This might be more fit for StackOverflow, but I have a related question.

I have a Go application that runs in Kubernetes, where memory usage steadily increases until it's at around 90% of the cgroup limit, where it seems to stabilize. As far as I can tell, the Go GC uses the container memory limit to steer its total memory usage (or this might be the OS not reclaiming what Go has already freed(?)).

However, my issue is that in this app I also call out to cgo and do manual memory allocations in C++ every 10-30 minutes. This works well, except when the container has stabilized at high memory usage and my manual allocation brings it over the limit, forcing Kubernetes to terminate it. (These allocations should, as far as I know, not be leaking: for a short while I have two large objects allocated, and 99.9% of the time it's only one.)

So, what I'd ideally want is to be able to specify a target heap size for the Go GC, and then keep a known overhead for the manual allocations. But as far as I'm aware, this isn't possible(?)

Does anyone have any experience with something like this, or see any obvious avenues to pursue to solve the termination issue?


Since Go seems to respect the memory limit, you could try using syscall.Setrlimit to set an artificially lower limit that you know will leave enough room for your other allocations. Have you tried playing with the GOGC environment variable (or debug.SetGCPercent from the runtime/debug package)? Maybe you could also manually inspect memory usage (e.g. runtime.ReadMemStats, or a profile via runtime.MemProfile) and call runtime.GC() if needed. I've never done anything like this, though; I'm just throwing out ideas I would probably try.
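
A rough, untested sketch of those ideas (the numbers are placeholders, and syscall.RLIMIT_AS assumes Linux):

    package main

    import (
        "runtime"
        "runtime/debug"
        "syscall"
    )

    func main() {
        // Make the GC more aggressive than the default GOGC=100, leaving
        // headroom for cgo allocations the Go runtime can't see.
        debug.SetGCPercent(50)

        // Or: set an artificial address-space limit below the cgroup limit
        // (placeholder value of 6 GiB).
        limit := &syscall.Rlimit{Cur: 6 << 30, Max: 6 << 30}
        _ = syscall.Setrlimit(syscall.RLIMIT_AS, limit)

        // Before a large cgo allocation, check heap usage and force a
        // collection if we're close to the budget.
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        if m.HeapAlloc > 4<<30 { // placeholder threshold
            runtime.GC()
        }

        // ... call into cgo here ...
    }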


I'm not sure I understand your stance here. It's not like Norway is continuing to drill for oil /because/ we need to subsidize these electric cars. The drilling is being done because there's lots of money to be had and lots of vested interests. I'm fairly certain the subsidies would continue even if drilling stopped this very instant.

Criticizing the oil drilling is a separate topic entirely. The Norwegian stance on it is divided as well, making it one of the larger feuds in this year's election.


So, I have clearly not been "paying attention in class".

This is some sort of board to connect several Pis and make a "cluster"?

What are the advantages over simply connecting them via my LAN, apart from cable management?


The ability to take Jetson cards is actually a really strong point in its favor IMO. Raspberry Pis by themselves aren't incredibly interesting in clusters except for pedagogical reasons, but with some Nvidia cards running whatever on CUDA, maybe with some RPis mixed in to do management/support tasks, and the built-in BMC, this could be pretty sweet for the right task.


I'm not sure these Nvidia cards are very powerful. One decent GPU in a PC may blow several clusters of these out of the water. I haven't checked, though.


In raw performance, probably. The benefit these have (at least purportedly) is they're very energy efficient, consuming little power (and generating little heat) for comparatively large throughput.

So I can imagine someone wanting a few of these on a desk, running inference on some models or something, maybe as a small back-end for a hobby project. It may still be more power efficient to just use regular GPUs, but I suspect these win out because of the tight coupling between "CUDA cores" and the CPU.

Now, is that worth spending a bunch (many hundreds) of dollars on a carrier board and these Jetson modules? For me, no, but I at least see why it may appeal to some people.


Isn't cable management a pretty big inconvenience, though? This also includes a switch; you'd need plenty of cables to replace it.


Not trying to minimize cable management, just trying to see if I'm missing anything here or not :)


I guess it is. But I suspect the price (as is usually the case with "cool" RPi things) isn't going to look like that of a cable management solution. INB4: $200


The entire purpose of a PCB is cable management ;)


Ha, that's a good one... also somewhat true! By extension, ASICs really are just about cleaner PCB layout.

(Yes, yes, there are non-aesthetic physical, electrical, designed, and parasitic effects of cables vs PCBs and vice-versa. Spoil-sport.)


> ASICs really are just about cleaner PCB layout

ASICs are just very small PCBs with all the discrete components etched onto the same material ;-)


Yes, exactly, thus tidying ('mother') PCB layout in the same way that a PCB tidies all the cables into a small arrangement with all discrete components fixed in the same plane.


These connect Pi Compute Modules, which are distinct from a regular Pi in a few ways (e.g. they don't function without a host board of some sort, so they lack certain things like network connectors, GPIO, etc.). But putting that aside, you'd get to a reasonably similar place if you hooked up some regular Pis; it's simply more wires with the regular ones.


I think the biggest differentiator here is direct access to the PCIe bus and SATA that doesn't go via USB - that's something you can't get on a normal Rpi.


It's not just the networking, which would be awkward enough. It's also power and IO/storage.

Just take a look at the Pi clusters people have built; the volume is a few times that of the boards alone.

Also, the CM4 is a bit cheaper than a comparable "complete" SBC, though I don't know if you'd come out ahead with the price of the board.

