
So post:

1) a high quality original video (30-40 Mbit/s).

2) a 2-Mbit/s version compressed with your algorithm.

3) a 2-Mbit/s version compressed with x264 with reasonable settings.

Until you present evidence, that article's criticisms remain valid, and you have not addressed them. Instead you went full PR astroturfing, telling us how great your tech is.

PROVE YOUR CLAIMS.
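
Item 3, at least, is trivial to produce with stock tools. Something like this rough sketch would do (Python calling ffmpeg built with libx264; filenames are placeholders):

    import subprocess

    # Plain bitrate-driven x264 encode at ~2 Mbit/s with "reasonable settings".
    subprocess.run([
        "ffmpeg", "-y", "-i", "original.mp4",
        "-c:v", "libx264", "-preset", "slow", "-b:v", "2M",
        "-c:a", "copy", "x264_2mbit.mp4",
    ], check=True)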



I think what Dror is saying is that if you take #3 and run it through their tool you'll get a smaller file at the same quality. So it's some sort of H.264 post-processor... I agree evidence would be nice.


That's not what I think he's saying. I think he's saying that the whole point of Beamr is to not have to choose a fixed 2 Mbps bitrate. Thus the test should be:

So the post should contain:

1) a random selection of high quality original videos (30-40 Mbit/s).

2) those videos compressed with Beamr.

3) those videos compressed with x264 - ALL WITH THE SAME OPTIONS - those options tuned to match the average bitrate of #2.

Notice the difference between #3 and what Diaz has done: he has tuned the parameters manually for each and every video file. IMHO that's a rather harsh comparison: "Hey scumbag! Your software is worthless because a really smart and dedicated human can do it just as well/better!!!"


bjornsing, I think you might not understand what Diaz is doing. He is not manually tuning the parameters for each video file. He is actually letting x264 automatically choose the bitrate and encoding parameters, to achieve a desired level of quality.
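
That sounds like x264's CRF (constant rate factor) mode. A minimal sketch of what I mean, assuming ffmpeg with libx264 and a placeholder input file:

    import subprocess

    # Quality-driven encode: CRF mode lets x264 spend whatever bitrate each
    # scene needs to hold roughly constant quality (lower CRF = higher quality).
    subprocess.run([
        "ffmpeg", "-y", "-i", "original.mp4",
        "-c:v", "libx264", "-preset", "slow", "-crf", "21",
        "-c:a", "copy", "x264_crf21.mp4",
    ], check=True)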


The way I understood it is that they optimize bitrate for a given perceptual quality, and their secret sauce is that they measure perceptual quality in a way that more closely approximates what people actually see. They "aim" for the same quality as the video you feed them and try to optimize bitrate. An example would be finding coefficients in a macroblock that could be quantized further without sacrificing perceived quality.

So to me it makes some sense, though I can't put a number on what you could gain with this sort of secret sauce, that is, how much there is to squeeze out of already well-optimized videos that use other quality measures. I would imagine applying this secret sauce to the original video would be a better idea, since a quality measure inherently depends on a reference. Almost by definition, any change to a compressed video reduces its quality (but perhaps they "reduce" things you can't see).

Since this is about perception, it's really hard to measure. I looked at some of the comparisons people have done of the demo images, and if you look carefully at some background areas you will see a noticeable difference in quality between two images that people claim are equal.

As your attention is naturally drawn to the foreground, most people wouldn't notice that.
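
If you want something less subjective than eyeballing screenshots, ffmpeg can at least compute SSIM against the source, though even that is only a rough proxy for perception (a sketch with placeholder filenames):

    import subprocess

    # Average SSIM of the compressed clip vs. the original is printed in
    # ffmpeg's log output; it only approximates perceived quality.
    subprocess.run([
        "ffmpeg", "-i", "compressed.mp4", "-i", "original.mp4",
        "-lavfi", "ssim", "-f", "null", "-",
    ], check=True)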


Beamr Video is not a bitrate-driven encoder - you cannot specify the bitrate of the output clip, so we cannot provide 2). We can only provide an output clip with the same quality as the input and a lower bitrate, but we can't guarantee in advance what that bitrate will be. That is the basic difference between Beamr Video, which is a quality-driven video optimization technology, and a regular video encoder, which is typically bitrate-driven.


You could compress something with Beamr first, see what bitrate it spits out, then compress with a standard encoder at the same bitrate. This isn't perfect, since it's clearly favoring Beamr, but it would be somewhat useful.
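
Reading the bitrate back out is easy enough; a rough sketch (the filename is a placeholder, and the Beamr step itself is hypothetical since I don't have their tool):

    import subprocess

    # Ask ffprobe for the overall bitrate (bits/s) of whatever Beamr produced,
    # then feed that number to a normal x264 encode of the original.
    bitrate = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", "beamr_output.mp4"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()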


How would this be "favoring Beamr"? I'd see it as piggybacking on a potentially useful Beamr innovation. Someone slightly more fanatic about IPR might even call it "theft".

As I understand it, the whole point of Beamr is that you don't have to manually tune the parameters for each video file.


It took me a few reads to see what you meant. I had taken Beamr to be about optimizing things like what block sizes to use, how often to insert I-frames, and so on, which means better perceived quality at a given bitrate. If it's always using the same settings and the only smartness is in what bitrate to pick, then you would be correct.


Ok,

1) Compress the original with x264, target around 2 Mbit/s

2) Compress the result of #1 with your algorithm.

3) Compress the original with x264, target the bitrate of the result of #2

Compare #2 and #3
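
Stitched together, the whole test is roughly the sketch below (Python calling ffmpeg/libx264; the Beamr step is a placeholder since their tool's interface isn't public):

    import subprocess

    def ff(*args):
        subprocess.run(["ffmpeg", "-y", *args], check=True)

    # 1) Bitrate-driven x264 encode of the original at ~2 Mbit/s.
    ff("-i", "original.mp4", "-c:v", "libx264", "-preset", "slow",
       "-b:v", "2M", "-an", "step1.mp4")

    # 2) Run Beamr Video on step1.mp4 -> step2.mp4 (placeholder; I don't
    #    know their actual command line).
    # subprocess.run(["beamr-video", "step1.mp4", "step2.mp4"], check=True)

    # 3) Re-encode the original with x264 at whatever bitrate #2 ended up at.
    bitrate = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", "step2.mp4"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    ff("-i", "original.mp4", "-c:v", "libx264", "-preset", "slow",
       "-b:v", bitrate, "-an", "step3.mp4")

    # Compare #2 and #3 against the original (SSIM as a crude stand-in
    # for "same quality").
    for clip in ("step2.mp4", "step3.mp4"):
        ff("-i", clip, "-i", "original.mp4", "-lavfi", "ssim", "-f", "null", "-")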


Even if you compare #2 and #3 and they are similar quality, what would that prove? You could do #3 for a specific file after Beamr Video has processed it, but how would you know the right bitrate for #3 without applying Beamr Video in #2?


Independent person from the industry here, following this thread (as I am sure many other onlookers are).

drorgill, I think now is the time to provide some hard empirical evidence on your part. Given the initial claims and the sample test done here, it should be relatively trivial to provide a counterexample. Cheers.


He could start by actually displaying an understanding of how video compression works, maybe.


I'm not on one side or the other here, but...

What would prove your claims?


If you compare #2 and #3 and they are similar quality, it would prove your product is essentially not doing better than x264.



