
This is very similar to work done by Curious AI a couple of years ago, although that work didn't handle high-res videos.

Tagger: Deep Unsupervised Perceptual Grouping

https://arxiv.org/abs/1606.06724


Indeed, the only difference is that Tagger works in RGB space, and the dataset is a bit toy-ish (no offence): the networks only need to separate objects by color or by a regular texture pattern.

What this motion grouping paper proposes is more of an idea-level contribution, built on the observation that objects in natural videos and images have very complicated textures, so there is no reason a network could group those pixels together without any supervision.

In motion space, however, pixels that move together form a homogeneous field, and luckily, from psychology, we know that the parts of an object tend to move together.
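
For intuition, here's a minimal sketch (classical tools, not this paper's network) of grouping pixels by motion: compute dense optical flow with OpenCV's Farneback method, then k-means the per-pixel flow vectors. The frame filenames are made up.

    # Group pixels by motion: dense optical flow + k-means (illustrative only).
    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # One (dx, dy) flow vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Pixels on the same object tend to share a flow vector, so clustering
    # the vectors already gives a coarse two-way motion segmentation.
    h, w = flow.shape[:2]
    vecs = flow.reshape(-1, 2).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(vecs, 2, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    segmentation = labels.reshape(h, w)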


The difference is that the new work is based entirely on optical flow (i.e. motion) input, while Tagger works on raw RGB input.


This is rich coming from a state that introduced a bill like ACA-5.



What surprised me in the first place is why TensorFlow, an "open source" initiative by Google, chose proprietary CUDA over open-source OpenCL.

Check TensorFlow issue #22 for more info.

Just sayin'.


CUDA is superior to OpenCL. NVIDIA also provides cuDNN, a proprietary library with very efficient implementations of deep learning primitives. If you want to train models faster, you have to use them.
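
As a quick illustration of how tightly TensorFlow is coupled to CUDA, the stock GPU build only reports CUDA devices (real TF 2.x APIs, nothing vendor-neutral here):

    import tensorflow as tf

    print(tf.test.is_built_with_cuda())            # True for the official GPU wheels
    print(tf.config.list_physical_devices("GPU"))  # only CUDA devices are listed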



290 image-heavy pages. smh


When are you planning to release the Mask R-CNN code?

I'm trying to implement the RoIAlign layer in TensorFlow, and I have a few doubts; having the authors' code would definitely help.
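
In the meantime, a rough approximation I've seen suggested is tf.image.crop_and_resize, which does bilinear sampling over normalized [y1, x1, y2, x2] boxes. Note that the paper's RoIAlign samples four points per bin and averages, so this isn't bit-identical:

    import tensorflow as tf

    def roi_align_approx(features, boxes, box_indices, output_size=(7, 7)):
        # features: [batch, H, W, C]; boxes: [num_rois, 4], normalized coords;
        # box_indices: [num_rois] mapping each RoI to its batch element.
        return tf.image.crop_and_resize(features, boxes, box_indices,
                                        crop_size=output_size, method="bilinear")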


No shit, Sherlock!


Specifically in the CS context, I think some version of double-blind peer code review should be made mandatory for publication.

I've seen authors skip quite a few details that are essential to the replication process.

In short, if research is not replicable by the peer community, it's just useless; that's what it is.


CS is the craziest of them all. Those results should be the easiest to replicate: "Here is the code, here is a manifest of the environment/container/disk image/etc." You should be able to take that, run it, and get the same results.

Or are you saying that the code itself is the problem, and that they've done the equivalent of "return True" to get the result they want?
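
To make "the same results" checkable, here's a minimal sketch of that manifest idea: pin the seeds, then record the environment and a hash of the output next to the result (numpy stands in for the real dependencies):

    import hashlib, json, platform, random, sys
    import numpy as np

    random.seed(0)
    np.random.seed(0)

    result = np.random.rand(10).tobytes()   # stand-in for the paper's output
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        "numpy": np.__version__,
        "result_sha256": hashlib.sha256(result).hexdigest(),
    }
    print(json.dumps(manifest, indent=2))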


In my other comment I mentioned that the CS results I've struggled to reproduce tend to include enough detail for you to get the gist of how the method works, but not enough to avoid going down some rabbit holes. Also, not all publications include code; many venues don't require it.

