Don't Buy the Snake Oil of Beamr Video (gist.github.com)
245 points by Daiz on Feb 27, 2013 | 137 comments



TL;D(id)R version:

Beamr isn't claiming a superior encoder. They're claiming to provide a service for video compression that is similar to what the tools like optipng and jpegoptim do for images: recompression without (noticeable) loss.

Their big claim to fame, however, is that they determine the "minimum" bitrate for your video to still look good.

So, in theory, if I feed back in the compressed video they give me - Beamr should refuse to optimize it further because it should be as perfectly compressed as possible.

Someone please do this test.

Then, more importantly, redo the test but modify those option strings embedded in the video (as mentioned in OP) so that it can't cheat and detect a "perfect" video by noticing its own proprietary app name or a previously seen checksum of the file.

I'd love to see the results.

If it can't detect perfection, letting the user recompress video over and over again until it slides into digital blocks of moving mush, then there really is no difference between this service and using a CRF of 18.5 as mentioned elsewhere in this thread.
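For the x264 half of that, the generation-loss test is easy to script - a rough sketch, assuming ffmpeg plus the x264 CLI (filenames, iteration count and the CRF value are arbitrary):

  # Re-encode the same clip through x264 ten times and watch what happens
  # to file size (and, by eye, to quality). gen0.mkv is whatever source you start from.
  cp source.mkv gen0.mkv
  for i in $(seq 1 10); do
    ffmpeg -i "gen$((i-1)).mkv" -pix_fmt yuv420p -f yuv4mpegpipe - 2>/dev/null \
      | x264 --demuxer y4m --crf 18.5 -o "gen$i.mkv" -
    ls -l "gen$i.mkv"
  done

The Beamr half would obviously have to wait until their service is actually available to outsiders.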


> So, in theory, if I feed back in the compressed video they give me - Beamr should refuse to optimize it further because it should be as perfectly compressed as possible.

Nuh-uh, he claimed Beamr will reduce the bitrate of any video: http://news.ycombinator.com/item?id=5290308

You're giving them way too much benefit of the doubt. Yes, what you describe above is what a sane video optimization algorithm along these lines would do, but that's just what we're all guessing would make sense, and not at all what Beamr has been claiming.

What about an arithmetic encoder on raw video that refuses to compress when it can't reduce file size (including headers), with the message "cannot optimize further without loss of quality"? I couldn't market it as 100x improved compression vs. Blu-ray, but at least it does exactly what it says on the box!


It will most likely reduce the quality of the video

I think someone did this experiment, of reencoding a jpg multiple times to see what happens, and of course, after some iterations you only get a blob

Because it's lossy compression, you can only do this once or twice without noticeable loss

Now, of course, they're probably assuming you took the video, compressed it one time and then submitted it to them


> It will most likely reduce the quality of the video

Of course. Anything else would be a mathematical impossibility.

> I think someone did this experiment, of reencoding a jpg multiple times to see what happens, and of course, after some iterations you only get a blob

> Because it's lossy compression, you can only do this once or twice without noticeable loss

This is beside the point, though, and makes things a bit more confusing :)

Because of the Pigeonhole Principle you must choose... actually, there is no choice: in order to lose bits (file size), you have to dump some bits (information).

However, what you describe is generation loss, and there is nothing about lossy compression that forces you to have generation loss - it just depends on the codec. JPG does it, and most audio and video codecs also do it. Using 2x2 sampling blocks is also lossy compression (4x compression!!), but the loss only applies the first time: using 2x2 sampling blocks on something that's already pixelated won't change it anymore, and of course neither will it compress it any further.
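A quick way to see that kind of idempotence for yourself - a sketch assuming ImageMagick's -scale, which (as far as I know) averages when shrinking and replicates pixels when enlarging, and an image with even dimensions:

  convert photo.png -scale 50% -scale 200% once.png
  convert once.png  -scale 50% -scale 200% twice.png
  compare -metric AE once.png twice.png null:   # expect 0 differing pixels

The first pass throws information away; the second pass has nothing left to throw away.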

> Now, of course, they're probably assuming you took the video, compressed it one time and then submitted it to them

"Took the video", from where though? This is important, see the other post up in the thread about use-cases. Most video an end-user can get their hands on is already compressed by the time they get it.


> "Took the video", from where though?

I'm assuming that, if you're streaming the video, you either have the original or got a copy from the content producer that's minimally compressed (like DV)

I don't think there's "raw video" today except for the most specific cases. (even REDCODE is lossy for video)

But I don't expect their customers to submit them a video downloaded from youtube.


In this post: http://news.ycombinator.com/item?id=5290308 he seems to claim both that Beamr can compress any file and that it is a downfall of x264 that it would continue to compress a file when reapplied.

Weird.


Yeah, this response is partly why I suggested that testing protocol.


>Beamr isn't claiming a superior encoder.

They sure make it sound like it in their marketing material. Hell, they've even gotten articles written about them that start like this[0]:

New technology from video encoding experts Beamr claims to be able to out-perform the new H.265 format by merely encoding H.264 better. Their video optimisation apparently reduces the bitrate of video streams by up to four times, while retaining their resolution, quality, and - most importantly - their industry-standard H.264 format. It works on all frame-sizes up to and including 4K

All the writing out there pretty much implies that thanks to Beamr Video's new technology, H.264 suddenly compresses up to four times better. Which is obviously not the case. No site is talking about "how most videos have 'too much' bitrate and how Beamr Video can select a much lower bitrate that is enough for the video to still look good", because that would be much more along the lines of what they're actually trying to do.

Anyway, there's something else that came to my mind from the discussion in this thread. Check out this section from a PR piece of theirs[1]:

>Online streaming services such as Netflix, Amazon and iTunes typically encode full HD (1080p) streams at 5-6 Megabits per second (Mbps). Utilizing Beamr Video optimization can reduce the required bitrate for full HD streaming to around 3-4 Mbps, enabling smoother playback, support for customers with lower broadband connectivity, and significant delivery cost savings to the streaming service providers.

Let's stop and think about this for a moment. As we have learned today:

* Beamr Video is not going to offer any kind of quality / filesize control in their service. Not even any kind of upper or lower bounds.

* Beamr Video is intended for re-encoding existing lossy video.

* Re-encoding from lossy to lossy is always going to introduce generation loss to some degree.

Combine these facts, and you'll realize that for content providers like Netflix, Amazon and iTunes, this service is completely and utterly useless. When you're doing online streaming, bitrate matters. These services should all be encoding their video streams from very high quality sources (higher than what can be found on Blu-rays). If they ran these sources through Beamr, they'd only get a single video out of it, most likely at a bitrate far larger than what they're willing to stream (as demonstrated on Beamr's page, Blu-ray sources are enough to make for 9-30 Mbps streams on average, while, as the quoted section mentions, streaming services usually offer 1080p video around 5-6 Mbps). This rules Beamr out for this kind of straight encoding from high quality sources for online streaming use cases.

Now, they could encode all their streams like usual, in different resolutions and bitrates, and then run these streams through Beamr, but does that make any damn sense? No, it does not. As I've proven, at the same bitrate, Beamr offers nothing over x264, so if any legitimate service wanted to offer their videos at smaller bitrates, they could just encode their videos straight to those bitrates from the high quality sources. Or, in the case of something like Netflix that already encodes videos to several different bitrates, they could simply drop the higher-end streams. Not to mention that re-encoding with Beamr could potentially make the streaming experience worse due to the fact that Beamr does not seem to set any VBV options[2] for their encodes.
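(For reference, "encoding straight to those bitrates" is just an ordinary two-pass x264 job with a bitrate target and VBV constraints - something along these lines, with all numbers being placeholders:

  x264 --pass 1 --bitrate 4000 --preset slow -o /dev/null source.y4m
  x264 --pass 2 --bitrate 4000 --preset slow --vbv-maxrate 5000 --vbv-bufsize 10000 \
       -o stream_1080p.mkv source.y4m

No separate service required.)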

It's also the same situation with any digital video download services: If they wanted to offer lower-bitrate content, they could just encode the content directly to said lower bitrates. And since any legit digital video downloads are DRM'd up the bum (man, why must the legitimate digital video download market suck so much?), you, as a user, would not be able to make the video smaller through Beamr either.

So I guess the big question is: What actual use cases does this service have? Re-encoding Blu-ray sources isn't really that feasible for end users either, since what kind of user would have the patience to upload several dozen gigabytes to Beamr with a regular consumer internet connection (unless they have Google Fiber or something)?

I guess ultimately this leaves us with regular users wanting to compress their videos taken with mobile phones or something, but if they for example want to put these videos up on YouTube, then there's really no reason for them to upload the full-sized original video to Beamr, wait for them to re-encode it, download it, and then upload that to YouTube, which is going to re-encode it again, when they could simply upload the original full-sized video straight to YouTube and have it be re-encoded only once.

[0] http://www.redsharknews.com/distribution/item/455-beamr-clai...

[1] http://www.prnewswire.com/news-releases/beamr-unveils-breakt...

[2] http://mewiki.project357.com/wiki/X264_Encoding_Suggestions#...


Just for clarification - I completely agree with the testing that you have done here. I think you've proven that it's easier to get better results with CRF settings alone and that they're likely using x264 as the encoder behind the scenes.

I should have prefaced my commentary and noted that it was a response to the ("clarified") claims in this thread, from Beamr, about what the Beamr service actually provides.

In response to your tests, they claimed they were selling intelligent settings and "the lowest bitrate".

I just provided a means of testing this aspect as well.


>I just provided a means of testing this aspect as well.

I agree that it would be an interesting test, but seeing that they're not actually offering a real cloud encoding service for users yet, the only people who could run tests against their product are people working at Beamr Video. Which obviously wouldn't make for very reliable test data.

(By the way, it's "CRF", not "CFR". "CFR" generally refers to "constant framerate" in the context of videos and video encoding, whereas x264's "CRF" stands for "constant rate factor" - but as said, people generally just call the CRF mode "constant quality mode".)


Thank you, fixed


Not trying to pile on, but Daiz's comment...

> re-encoding with Beamr could potentially make the streaming experience worse due to the fact that Beamr does not seem to set any VBV options[2] for their encodes

...is crucial if you're sharing video online. As a video delivery network (VDN), we run into badly encoded streams all the time. Clients can play the files locally, but after uploading to us the files don't stream or stream badly when trying to play online. Daiz's comment mentions one of the more subtle causes of such problems.

If you are streaming video to end users, which is what most people here on HN are doing with video, keep in mind that users' bandwidths are generally constant. Someone with a 756kbps DSL line cannot magically download at 3 mbps during the explosive parts of your video.

Most x264 encoding guides you find around are written for encoding your own content for playback on your own computer, and neglect the options necessary to ensure your video doesn't use up bits faster than your pipe can refill them. Most pro encoding tools miss these settings as well.

In the old days (Windows Media) for the best streaming video quality, you could use constant bit rate (CBR) but define a long window for the CBR measurement, say, 15 seconds. That would give you variable bitrate from second to second (say, a scene change), but limit your bitrate to the CBR over any given 15 second window.

x264 has options that allow the same thing, essentially defining how many bits you need to have in reserve to encode complex parts from, and what constant rate can refill the reserve. For a good streaming experience, this is more important than "frame by frame quality". Casual users might not notice a 10% difference in picture quality, but everyone notices if the stream pauses.

Here's an explanation or two:

http://codesequoia.wordpress.com/2010/04/19/what-are-cbr-vbv...

http://aviadr1.blogspot.com/2010/12/vbv-for-dummies.html

See link [2] in parent comment for the VBV settings you need.
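In rough terms it's just a couple of extra flags on top of whatever rate control you're using; for example, to keep a CRF encode inside a ~2.5 Mbps pipe (the numbers here are placeholders - tune them to your audience's bandwidth):

  x264 --crf 20 --preset slow --vbv-maxrate 2500 --vbv-bufsize 5000 -o constrained.mkv input.y4m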

All this said, I can see a use case for Beamr for utterly non-technical users that want to archive their own video from Blu-ray or DVD or broadcast-quality digital sources. Looks like this tool can approximate a reasonable guess at perceptual quality with "zero config". As others have said here, I personally prefer to do that by having a multicore encoder and letting x264 apply as much lookahead and calculation as it can to take advantage of motion estimation and perceptual quality. Play around with those settings till you hit an encoding time that you're happy with, then, with those set, find the CRF that looks good to you, and you're pretty much set from then on. Just encode future videos to that CRF and you'll be happy with the quality -- x264 will pick unique bitrates tuned for each video that give you the PQ you want.


I've found that most claims of extreme video or image compression don't hold up under scrutiny. There's just too much research that has to go into it and so many tradeoffs to consider.

However, I know of one program that does in fact meet its claims: DLI Image Compression (https://sites.google.com/site/dlimagecomp/)

It only compresses images, and it is extremely slow at it too. But apparently it's better than anything out there. Dark Shikari (x264 developer) says [1]:

> I’ve seen that one before; iirc, last I tested it, it was competitive with x264 and even beat it significantly at times. Given that it was adapted for insanely high compression times, of course, this is not very surprising. Still rather impressive.

[1] http://x264dev.multimedia.cx/archives/541



A lot of these companies, like Beamr, with extraordinary technological claims tend to be preparations for stock pump-and-dump schemes. Usually their parent company will eventually be listed on:

http://stockpromoters.com/

Tim Sykes talks a lot about this stuff on his website.


The important underlying issue that some people take for granted, and some are completely unaware of:

Most h264 encoders suck. If you take an h264 recording from your camcorder / phone / ip camera / blu ray rip, and push it through x264 (with no special configuration), you usually get 30% bit rate reduction with no loss of quality. With tweaking, you very often get 50%-75% bit rate reduction with no loss of quality, but you have to actually tweak. (e.g., I'm getting ~65% bit rate reduction on h264 streams coming out of Axis IP cameras by running through x264 with -tune slow and no other change).

Beamr's claim to fame appears to be that they automate this tweaking. Daiz's claim seems to be that "--crf 18.5" is a tweak that can consistently deliver better bitrate reduction than Beamr.

My opinion: If you can use the x264 commandline, Beamr is probably overpriced for you. But if you're a professional photographer with no serious computer skills, Beamr might be useful for you. (Assuming they actually deliver something better than --crf 18.5; a claim about which I have no knowledge, but which I will assume for the sake of argument.)


ICVT actually developed some interesting algorithms, but it's a shame they're letting marketing hyperbole overshadow it. Another case where technical innovation doesn't sell, I guess.


Even if they had actually developed some "interesting algorithms", they sure don't seem to be using them, at least not in the actual video encoding process. There's really no other explanation for the fact that you can end up with practically identical video to Beamr's examples when using largely identical settings and bitrate with vanilla x264.


There actually is an argument that if they can do as good a job as a manual encoding but automate the process, that is valuable, even if the encoding process is the same as the manual process. If you had thousands of videos to encode, they all probably wouldn't be optimally encoded at the exact same settings, and automating that task could save a significant amount of time.


Yea, but what Daiz proves in the OP is that he personally can do a better job than this so-called "algorithm". I hear his next project will be proving the worthlessness of spell checkers by simply beating them at spelling. That paper clip thing in Word is such Snake Oil! ;)

For the record I have nothing to do with this Beamr, but I do feel sorry for those guys. The marketing speak on their website may be a little hyperbolic, but there's nothing fraudulent there that I can see. I don't think they deserve this treatment.

UPDATE: LocalPCGuy, sorry for posting this as a reply to your comment. You obviously get it.


Actually, no. What Daiz showed is that Beamr is offering nothing that actually improves either visual quality or bitrate; every setting that their method uses actually reduces quality compared to default x264 settings.


> What Diaz showed is that Beamr is offering nothing that actually improves either visual quality or bitrate

Well too bad Beamr never made that claim (AFAIK). What Beamr does claim is that their software can automatically find settings (per frame) that will compress a certain video file to its smallest size without affecting quality (too much - in some subjective measure).

I'm not saying that Beamr is great. I have no idea, I've never used it. But what I can say for sure is that Daiz has not made a fair evaluation of the technology in the OP.


From Beamr's site: "Beamr Video is capable of reducing the bitrate of H.264 video streams by up to 75% (4X), while preserving their visual quality."

That sounds like they are claiming to work wonders on your bitrate. I think Daiz just pointed out that these claims are dubious at best.


> I think Daiz just pointed out that these claims are dubious at best

He did no such thing. He just pointed out that you can get the same benefits from using x264 directly.

Most people just take the h264 stream they got from their phone / camera / BluRay rip. These are horribly compressed. x264 can consistently improve those without degrading quality by 30% without much tweaking, and by 50-70% with some tweaking.

Apparently, Beamr saves you some tweaking. Daiz's claim is that the tweaking saved is ridiculously minimal and does not warrant all the hyperbole around Beamr.


They explicitly state "The technology works in the domain of standard H.264 video, resulting in video streams that are fully compatible with any media player or consumer device."

Assuming H.264 compatibility, what techniques could be applied to compressing videos?


A video standard essentially defines a set of methods for compressing video - it's up to the encoding applications to decide how and where to apply those methods. Basically, the smarter the decisions the encoder makes, the higher-quality results it will produce. This is why not all H.264 encoders are created equal.

As an example of "techniques" that can be applied to compressing videos, one of the reasons why x264 is considered the best H.264 encoder out there is its advanced psychovisual optimizations - these are basically methods that increase the perceived visual quality for human viewers, but decrease the quality in the eyes of metrics like PSNR and SSIM.
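You can see that trade-off with x264 itself: --tune psnr switches the psy optimizations off and optimizes for the metrics instead (a quick sketch, filenames arbitrary):

  # Psy optimizations on (the default): tuned for how the picture looks to a human
  x264 --crf 20 -o psy_on.mkv input.y4m
  # Psy optimizations off: scores better on PSNR/SSIM, usually looks worse
  x264 --crf 20 --tune psnr -o psy_off.mkv input.y4m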


The techniques that could be applied in the domain of standard H.264 video are called "H.264."


> Assuming JavaScript compatibility, what techniques could be applied to minifying code?


You can minify code ensuring absolute correctness or make substitutions that are almost always correct.

For example, there are some cases where you could replace tests like `x===y` with `x==y` but this may not work in all cases. If you accept imperfect translation, then this is a valid technique, but it should be clear that the translation isn't perfect.

The reason why my question is different is because I'm asking about the constraints on the format. For example, is run-length encoding (RLE) acceptable? That wouldn't be acceptable for a .BMP bitmap.


The folks behind Beamr H.264 also have a 'patent pending', proprietary JPEG image "recompression technology": http://www.jpegmini.com/

I'm curious what an analysis of one of their JPEG's would show?


Downloading the ros-k sample and playing with it in GIMP, it looks like exactly the same kind of chicanery using a few simple steps:

1. The "original" image is saved at a very high JPEG quality setting, somewhere around 99% by GIMP's figuring

2. The "JPEGmini" version is saved with a slightly lower, but still high quality setting of about 85%.

3. The "comparison" on the website shows the images scaled down to 25% of their encoded resolution.

In other words, the JPEGmini version is nothing special. If you save a JPEG at 85% quality and look at 1/4 scale, it will look exactly the same as a JPEG saved at 99% quality at 1/4 scale. And it will look just as good as if you pass it through Beamr's software.
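This is easy to reproduce without GIMP too - with ImageMagick, for instance, the whole trick looks something like this (the quality numbers just approximate what the demo appears to use):

  convert original.png -quality 99 hi.jpg      # the "original"
  convert original.png -quality 85 lo.jpg      # the "JPEGmini-like" version
  convert hi.jpg -resize 25% hi_small.png      # view both the way the demo page does
  convert lo.jpg -resize 25% lo_small.png
  compare -metric RMSE hi_small.png lo_small.png null:   # the difference all but vanishes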


Well, yes. But I think the whole point of this technology is that you don't have to come up with the 85% number.

It can be a lot of work to find the lowest quality setting that will be perceived as (near) lossless. Think millions of files.


I see now that this is what they purport to do. However, I still maintain that the presentation is dishonest. Showing a comparison at 25% scale gives the impression that the tool is better at choosing a "nearly lossless" setting than it really is. Look at the dog image, which in their demo appears to be identical to the original.

Now look at it at 100% scale: http://imgur.com/z12mHnd . Block artifacts galore. Now sure, it may still do a better job than just choosing a constant quality setting and applying it across the board. But the demo doesn't show us that. It doesn't even make the right kind of comparison.

What we need is a comparison of choosing a single quality level, and using JPEGmini. To be useful, here's the kind of demo we'd need to see.

On the right side: five JPEGs saved with JPEGmini, having a total file size of X, shown at 100% resolution.

On the left side: five JPEGs saved with a constant quality setting, chosen so that the total file size is X, also shown at 100% resolution.

Then we'd honestly know whether the program is worth using.

My guess? Probably not. The train station image, for example, is so grainy that you can compress it to damn near 600KB (50% in GIMP) before the artifacts are really noticeable (and well beyond that if you scale it down to 25% afterward). So did JPEGmini's visual model detect this and cut the bitrate down accordingly? No, it decided that the image should be saved at the equivalent of 83%, making the file more than twice as large as necessary. And this is on an image that was presumably handpicked as a shining example of how well the product works.

My guess is that if you just chose around a 75% quality setting and compressed all of your JPEGs that way, you'd do just as well as JPEGmini.
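If someone wants to actually run that comparison, the constant-quality side is easy enough to script - here's a rough sketch with ImageMagick and GNU coreutils (the target size and filenames are placeholders):

  # Find the single quality setting whose total output size matches the
  # JPEGmini total (target_bytes = sum of the five JPEGmini file sizes).
  target_bytes=2500000
  for q in $(seq 95 -1 40); do
    total=0
    for f in img1 img2 img3 img4 img5; do
      convert "$f.jpg" -quality "$q" "/tmp/${f}_q${q}.jpg"
      total=$((total + $(stat -c%s "/tmp/${f}_q${q}.jpg")))
    done
    if [ "$total" -le "$target_bytes" ]; then echo "constant quality to use: $q"; break; fi
  done

Then put those five next to the five JPEGmini files at 100% scale and see which set looks better.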


I agree 100% on the proposed comparison. In fact I proposed the same for a fair evaluation of Beamr video above. :)

For the rest, I have no idea. I've never used anything Beamr.


That's actually a very good analogy for why Beamr's "minimal bitrate for no quality loss" isn't groundbreaking at all, especially since there is necessarily quality loss in lossy H264 -> H264 encoding.

85% is already a quality number, saying roughly how much quality you are willing to give up to save kilobytes in your output JPEG. Similarly, x264's CRF option is a quality number, saying how much quality you are willing to give up for bitrate.

Inevitably Beamr will produce some files that are inefficient, as well as some files that have noticeable banding and other artifacts. The difference is that CRF allows adjustments.


The files in the before/after preview slider are exactly the same file size when they're supposed to be showing how their compressed image looks as good as the original. They're also not the same files you get when you click the download button.


Those are just preview files used for fast page load, but they are based on resized versions of the original and the JPEGmini version. You are welcome to download the original and JPEGmini full resolution files, and compare them at "Actual Size" (100% zoom).


If you get two large images and resize them down to a small preview, they'll look the same. It's really complete bullshit showing those as "examples".


I downloaded the pic of the dog, opened the 'Original' in Paint.net, re-saved it at the same filesize as the JPEGmini version and it's indistinguishable from the other two versions. JPEGmini doesn't seem to do anything that I can't already do in any image editor.

Looks like snake oil to me.


Note my previous reply on this issue: JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality (the output size is of course different for each input file). You can take a specific file and tune a regular JPEG encoder to reach the same size as JPEGmini did on that specific file. And you can also manually tune the quality for each image using a regular JPEG encoder by viewing the photo before and after compression. But there is no other technology that can do this automatically (and adaptively) for millions of photos.


>JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality

That is indeed what the FAQ says, but that being the case, the tool does not actually work very well, and the presentation is incredibly dishonest. The fact is that the JPEGmini versions of these images do lose noticeable quality, but Beamr is hiding this by giving a demo where the images are shown at 25% scale.

Take the dog image, for example. Using the slider, you'd think that JPEGmini nailed it; no visible artifacts whatsoever. But let's look at a section of the image at 100% scale, and see if this tool is really that impressive: http://imgur.com/z12mHnd

...holy block artifacts Batman!


I hope you're aware imgur compresses images that you upload to it regardless of whether they were already compressed.


Maybe they do that for lossy images, but I just compared what's on imgur to the original on my computer, and it is pixel-for-pixel identical. They did shave about 7KB off of it though, so maybe they push PNGs through pngout or similar.

I mean, as far as I know, there aren't even any practically useful algorithms out there for doing lossy compression on PNGs (although there should be).


Fireworks, pngquant, and png-nq have quantization algorithms that will dither a 32-bit PNG with alpha down to an 8-bit palettized PNG with alpha. The palette selection algorithms the free tools use (I haven't used Fireworks) sometimes drop important colors used in only a small section of an image, resulting in a blue power LED losing its blue color.


Yeah, technically that's lossy compression, but what I meant was lossy 32-bit PNG; that is, a preprocessing step before the prediction step which makes the result more compressible by the final DEFLATE step while having a minimal impact on quality.
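One very crude stand-in for the idea - not the filter-aware approach I actually mean, just to illustrate the "small lossy step, then the usual lossless optimizer" shape of it (tools and the level count are arbitrary):

  convert input.png -posterize 128 lossy.png   # throw away a bit of per-channel precision
  optipng -o2 lossy.png                        # the now-more-redundant data deflates better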


That sounds very interesting. I wonder what kinds of transforms would improve compressibility by DEFLATE. I know a bit about PNG's predictors, but not enough about DEFLATE to confidently guess. If you ever work on this, please let me know. I'd like to collaborate.


Even if imgur did use a lossy format (which it didn't), you can see a difference between the top and bottom halves of that single image.


>But there is no other technology that can do this automatically (and adaptively) for millions of photos.

Excluding, of course, the for loop or the while loop, particularly when used in a shell/python/perl/etc. script. Matlab is also particularly well suited for this. Then there are visual macro thingamajigs: iMacros for browser-based repetitive tasks, which could then be used with an online image editor. IrfanView, I believe, has batch image processing, as do many other popular photo/image editors. And last, but not least, ImageJ.
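To make that concrete, the dumb version is about a dozen lines of shell (the tools, the quality ladder and the PSNR bar are all arbitrary):

  # For each photo, walk the quality down until a crude metric says we've
  # visibly lost something, then keep the last setting that passed.
  for f in *.jpg; do
    best=90
    for q in 90 85 80 75 70 65; do
      convert "$f" -quality "$q" /tmp/candidate.jpg
      psnr=$(compare -metric PSNR "$f" /tmp/candidate.jpg null: 2>&1)
      awk -v p="$psnr" 'BEGIN{exit !(p >= 42)}' || break
      best=$q
    done
    convert "$f" -quality "$best" "optimized_$f"
    echo "$f -> quality $best"
  done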

You may have added feedback to the loop where many would have had none, but feedback is not a new or novel concept. The only thing I can see that is possibly non-trivial is your method for assessing quality of the output. Given the many high-quality image-processing libraries and well-documented techniques available, and the subjective nature of assessing "quality" with respect to how an image "looks", I doubt there is anything original there. You've enhanced the workflow for casual users, the uninformed, and those who prefer to spend their time on something else. That arguably has value. It seems a bit of a stretch to call it "technology" to this audience, but that is what the word means.

IMO, you'd receive a "warmer" welcome from the more technically-minded folks here if you'd dispense with the marketing hype (definitely stop making impossible claims), and show some real evidence of just how much "better" your output is over some reasonable defaults, including cases where your system fails to meet your stated goals (even a random quality assessment will get it right sometimes). Nobody is ever going to believe that any system as you've described works for every case, every time (simply impossible). In other words, you aren't going to sell any ice to these Eskimos.

edit: accidentally posted comment before I was finished blathering.


I would appreciate an explanation of what you mean by "without affecting its original quality." We're talking about lossy compression, so whether that goal is achieved is purely subjective, isn't it?


JPEGmini operates on a similar principle to Beamr. The JPEGs coming out of your camera are compressed with a very high (wasteful) quality setting, so JPEGmini analyzes the image and recompresses it with a much lower, but visually equivalent, quality setting.


At first I was all, "oh man -- another small company trying to rip me off."

Then I was all, "oh man -- this actually seems like a pretty cool product"

I actually hate choosing compression settings, so for a piece of software to choose compression settings for me seems really nice.

Up to 4x -- so what if it is "over the top"? That's advertising. You present your best results. You pick the stuff that sounds the best. Why? Because people are usually too ignorant (not necessarily a bad thing) to know what is good for them.

Anyway. Props to you guys (drorgil) for making a good product and standing by it.


"I hate choosing compression settings" -- but you don't have to. Diaz has shown in this thread, that a free, open-source encoder can automatically choose settings for you which work better than Beamr's.


Daiz hello, this is Dror from Beamr. The technologies we have developed, JPEGmini and Beamr Video, are definitely not "Snake Oil", and I don't think it would be good practice to make such claims regarding any company's technology without checking the facts first and asking for the company's response before posting.

Our technologies for image and video compression have received excellent reviews in the media, and have been tested and proven by industry experts. You can find the reviews online, and I would be happy to send you the quotes we got from the industry experts as well. Thousands of users are using our free online photo optimization service at JPEGmini.com daily, and thousands more have purchased JPEGmini for Mac to optimize images locally, and they are all very happy with these tools.

The important thing regarding our technologies, and the main point you have missed in your analysis, is that they are used for automatic image optimization to the lowest bitrate or file size possible, and not for encoding an image or video to a specific bitrate or file size. Based on analysis of the image or video using a perceptual quality measure we have developed, we adaptively set the encoding parameters for each image or video. So the amount of compression for each image or video clip is different, and depends on the content and original quality of that image.

Comparing the quality of a clip encoded in Beamr Video with the quality of the same clip encoded at the same bitrate using another encoder is meaningless, since part of the uniqueness of Beamr Video is "knowing" what is the right bitrate (actually the lowest bitrate possible) for encoding each video such that it stays perceptually identical to the original clip. Beamr Video is the only technology that can adaptively reduce the bitrate of any clip to the minimum amount possible, while ensuring that quality of the output clip is perceptually identical to the quality of the input clip.

The same is true for our image technology: We reduce each JPEG file to the minimum file size possible without affecting its original quality. Of course, you can take a specific file and tune a regular JPEG encoder to reach the same size on that specific file. And you can also manually tune the quality for each image using a regular JPEG encoder by viewing the photo before and after compression. But there is no other technology that can take millions of photos and automatically adapt the size of each one to the minimum possible size while preserving quality.

The clips on our website are an initial demo of Beamr Video we are offering today, based on Blu-Ray sources. In a few weeks we will launch a cloud-based Beamr Video encoding service, where everyone will be able to process their own clips and reach their own conclusions. There are free trial versions of JPEGmini available for anyone to download and try. So again, I don't understand the basis of your "Snake Oil" claim. I would be happy to continue this discussion with you privately if you like, you can email me at dror@beamr.com.


But what does that concretely mean, "lowest bitrate or file size possible"? What do you mean by "possible"?

You almost certainly mean choosing a lower bitrate whose numerical perceptual quality is similar to the video encoded at a higher bitrate. Of course, x264 can do that—that's what the CRF value is. It already has technology to choose the best bitrate per GOP given its perceptual model.

Even if I take everything you say at face value, your semantics, at minimum, are the snake oil. You make it sound like this perceptual compression technology doesn't exist, when it actually ships on by default in Handbrake.

I think what Daiz is talking about is snake oil of the technology, not semantic, variety.

So let's talk technology snake oil. Your special perceptual model might find situations where it could reduce the bitrate in a way that x264's model would disagree. But what about situations where x264's model is better? The end-user never gets to see where x264 makes a subjectively better image than your system does at a lower bitrate.

In other words, quality arbitrage: you can always claim to reduce bitrate by adjusting your model to be a bit more permissive than the competition. Knowing there is no rigorous numerical way to compare two perception-optimizing compressors, you can get away with "arbitrage" bitrate reductions.

Put another way, a lot of people are going to demand the wrong tests, because they don't understand the inherent contradiction of comparing two perception-optimizing compressors (like allegedly Beamr and x264's CRF model).

The way I'd know if your perceptual model were in fact better is if you could show me situations where it is wrong. In other words, convert your "lowest... possible" Beamr Perceptual Model parameter to a CRF parameter. You could do it subjectively. Find me situations where x264 chooses a lower bitrate than your model. At least if your model is falsifiable, it isn't snake oil.

Incidentally, Daiz did exactly this test. And your model calculated a worse bitrate than an equivalently chosen CRF in all cases.

So your model isn't snake oil—it's just bad. Perhaps we would randomize the subjective differences, but my suspicion is the default Handbrake CRF (20.0) will work better more than 50% of the time for randomized videos against a randomized audience.


> my suspicion is the default Handbrake CRF (20.0) will work better more than 50% of the time for randomized videos against a randomized audience.

For those last few words there you actually got close to the problem, as I understand it from drorgill's explanation. So am I right to assume that your critique, in this tone, is based solely on a "suspicion" of yours? As I read the OP this is absolutely not the test Daiz did...


Looks like Daiz did a similar test to the one I described below. Naturally we would need blindly randomized videos and randomized audiences, rather than just Daiz's audience of one, knowing exactly which videos were encoded by what. We'd also need some function converting the CRF parameter to the Beamr Perceptual Model parameter (the "minimum possible" parameter), which we could obtain by knowing the Beamr parameter for a given CRF when Beamr and x264 choose the same exact bitrate.


randomized blind A/B testing really can make all the difference. they did the same on the Hydrogenaudio forums for testing lossy audio codecs, and if there was only a hint of information about which bit was which, results would be heavily biased.

crazy, but that's how human perception seems to work. it also raises the question whether, at some point, this placebo effect might not be way stronger than any actual perceptual differences left over after correcting for biases.

and if that's the case, maybe we don't need better codecs, but shinier TVs, and better people to convince us that video quality is better and more enjoyable. like snake-oil salesmen. or like that high end electronics brand that put a weight in their remote controls, just so it feels more 'solid' and high quality (brilliant idea, I forgot what brand, might've been Scandinavian).


Finding the optimum settings for a given video is a valuable product, and this makes the criticism of being just x264 irrelevant, but his quality comparison still seems valid. If Beamr is recompressing something to a lower quality at a given bitrate compared to standard techniques, this implies that you haven't actually solved the problem of finding the optimum settings. If the tool doesn't support this, it would probably be best to turn off the constant bitrate feature until it does support it well. If you make a feature available which is directly comparable to something else, you can't legitimately complain that people make the comparison.


We don't have a constant bitrate feature. And the comparison you refer to was not done on content encoded with Beamr Video, but on content encoded with "Beamr-like" settings for an x264 encoder. Since we are controlling the video encoding at the frame level, these comparisons are meaningless.


I'm going to answer you in detail, but it's going to take a while since I'm going to do some additional test encodes for it.

Also, you should note that I am only talking about your Beamr Video product in my post - I haven't used or tested your JPEG tools, so bringing them up here is largely irrelevant. Also, even if you have developed something effective for JPEG, that does not mean you could develop something equally effective for a much more complex format like H.264.


> We reduce each JPEG file to the minimum file size possible without affecting its original quality.

This is blatantly incorrect. In files encoded with JPEGmini, there is visible banding. They are smaller, yes, but there is certainly a perceptual difference in image quality. I tried it myself.


Can you give examples? I've compressed thousands of files with JPEGmini and never found any banding or other loss of quality.


I'll leave it up to you to determine which is which. The results are undeniably atrocious.

http://i.imgur.com/kadvgkyh.png

http://i.imgur.com/OGvb2vM.png

(Yes, the file name is the same in both because I imported the two and switched the layers on and off.)


When JPEGMini came out I thought it was a hoax. I've been working with various image compression tools since 1992, and was highly skeptical about their claims, but after encoding a ton of edge cases, I was convinced it was the real deal.

Have since encoded thousands of images for various clients with JPEGMini, and have never noticed ANY difference or persistent issues. Most of all, it saves me from having to experiment with individual image optimization settings. I always use it as a last step in every development project (last step because it has the annoying 'feature' that it overwrites the originals).

I dare you... Give me the original JPEG of your PNG samples (you DID start out with a JPEG, right?) and I'll repeat your experiment with my local JPEGMini install, then upload them to IMGUR for re-post here.

As to the issue at hand (Beamr/Snake-Oil): if Beamr saves me the trouble of manually having to fiddle with all kinds of optimization parameters to get very good result, much more difficult for video than JPEG, because scenes and requirements change, then that too is worth my time and money.

To be clear: I have nothing to do with Beamr or this company. I've been lurking around on HN for a long while, check my profile. The original Daiz post and claims like yours are simply a load of crock.

PS I know you can get better compression results with JPEG2000 and various more obscure formats, but the clincher for me has always been that the end result was straight ole JPEG. Works everywhere.


 > The original Diaz post and claims like yours are simply a load of crock.

 > you DID start out with a JPEG, right?

I hope you like My Little Pony.

 > then upload them to IMGUR for re-post here

I wouldn't try uploading JPEG images to Imgur, they get put through a compressor.

Original — http://d.pr/GIWO+

Lossy — http://d.pr/XIUf+


Ignoring the fact that the original image [1] had banding issues (e.g. top left), and ignoring the fact that JPEG is a format which optimizes file size by taking advantage of the limitations of the human eye as well as typical properties of REAL photographs (ie NOT cartoons, and NOT images with largely the same tone {'orangy' in this case}), I think you have to agree that my result from the standalone JPEGMini app [3] is markedly better than your "lossy" example [2].

[1] https://dl.dropbox.com/u/139377/ThreePonies/00-ORIGINAL.jpg (1.8MB)

[2] https://dl.dropbox.com/u/139377/ThreePonies/01-YOUR-LOSSY.jp... (121kB)

[3] https://dl.dropbox.com/u/139377/ThreePonies/02-MY-LOSSY%28JP... (410kB)

Your lossy sample is brighter and has major artifacts around the pointy hair bits, the sickle, etc., and in zoomed-in mode (which is not how you should compare these things) there's a distinct lack of detail.

If you were to repeat the same comparison with a realistic photo, the differences between the original and the JPEGMini version (there are some, but you have to look hard) would be even less noticeable.

Please note: JPEGMini does not allow me to set any parameters at all.

In short: a single drag/drop on my part reduced a file from 1.8MB to 410kB, and the differences are extremely minimal. Another win for JPEGMini.


 > is markedly better than your "lossy" example

I used the exact same application as you did: JPEGmini lite from the App Store. I'm not sure what could account for the discrepancies between our examples. As you mentioned, there are no parameters either of us could have changed.

 > and in zoomed in mode, which is not how you should compare these things

I see no issue with zooming to demonstrate already visible compression artefacts.

 > ie NOT cartoons, and NOT images with largely the same tone

JPEG compression actually looks better on images with flat areas of color or gradient. This can be demonstrated by exporting an image in the format with and without a very slight Gaussian blur added to the image. The results are typically a good deal smaller with the blur applied.


> I used the exact same application as you did: JPEGmini lite from the App Store. I'm not sure what could account for the discrepancies between our examples. As you mentioned, there are no parameters either of us could have changed.

I used the non-lite version (ie I paid for it). Version 1.4.2 to be exact (I see the latest version is 1.4.3). Not sure if the "full" and "lite" version are using different compression levels or algorithms. I do know that older versions had a much lower megapixel limit.

> I see no issue with zooming to demonstrate already visible compression artefacts.

I do, because that's the crux of the story. JPEGMini (and Beamr too, I guess) provides VISUALLY similar results with smaller file sizes and a minimum of effort required.

Loss of information is inherent to lossy compression. Achieving a file size reduction requires that sacrifices are made somewhere, typically by removing detail. The trick is making these sacrifices in the right places, so that before and after appear the same.

If the processed image is displayed at 800x600 on a monitor with 150DPI, one would make different choices/assumptions about what to sacrifice than if the resulting image is viewed at 300% magnification. JPEGMini's feat is making the right sacrifices (note that they do NOT change the compression method, the end result is a STANDARD jpeg file, not a special format).

> JPEG compression actually looks better on images with flat areas of color of gradient. This can be demonstrated by exporting an image in the format with and without a very slight gaussian blur added to the image. The results are typically a good deal smaller with the blur applied.

Correct, but photos are still very different from cartoons, vector graphics, text, etc (think: colors occurring in nature vs colors picked by a designer, inherent blurriness of large parts of a photo, very few hard edges in most photos, the human visual model - color, luminance vs chrominance, filling in the gaps etc etc).


To add to my point, check out these two real pictures, hope you like pink bags:

OriginalPony [1] - 451kB - https://dl.dropbox.com/u/139377/ThreePonies/ponyphoto2-origi...

JPEGMiniPony [2] - 151kB - https://dl.dropbox.com/u/139377/ThreePonies/ponyphoto2-jpegm...

Now tell me, where is the banding?

(CC, source: http://www.flickr.com/photos/dreamcicle/3552305929/sizes/l/i... )


You picked the wrong input to prove your point.

That image has too much noise. You need smooth gradients to get banding. The pony, the smoothest part of the image, has a lot of sensor noise (more chroma than luma but still) in the 3-6 pixel frequency range.


I was trying to prove my photo point ("If you were to repeat the same comparison with a realistic photo, the differences between the original and the jpegmini version, there are some, but you have to look hard, would be even less noticeable.")


There's certainly less impact on that image. The lower left section looks awfully blocky, but that's somewhat to be expected given the size of the image. The main difference in all of these examples is the loss of CCD noise, which could be considered a good or bad thing depending on your aim.


You damn shill. Stop peddling your wares on YC!


Man, you need to do your homework. Throwaway account to accuse me of something? Flagged bro. I've been here for 4+ years. Loser.


What you've described sounds a lot like x264's --crf option, which adaptively chooses a bitrate based on the input. How is Beamr different?


The decisions Beamr Video makes are "smarter" than x264's CRF mode, since they are based on a perceptual quality measure we have developed. This quality measure is similar to the one used in JPEGmini, our image optimization technology, and has been proven (in standard ITU BT.500 testing) to have higher correlation with subjective results than other quality measures such as SSIM.


Now we're getting to the nub.

I'm no expert on this matter, so apologies if this is a dumb question, but: has the perceptual quality measure used in CRF been subject to the same ITU BT.500 testing?

If it hasn't, then I'm afraid your case remains unproven.


If that is so, show examples of video files which prove it! Show us some uncompressed video files which compress both smaller and with better quality using Beamr (as compared to x264's CRF mode).


Being generous - it sounds like Beamr has a tool that can choose, for a given input video, which minimum CRF option will provide a 'transparent' transcode.
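A crude sketch of what that guess would amount to, using x264's own --ssim report as a stand-in for a fancier perceptual metric (the threshold, step size and filenames are made up):

  # Walk CRF upward (lower bitrate) and keep the last value whose SSIM
  # against the source stays above an arbitrary "transparent" bar.
  best=18
  for crf in $(seq 18 0.5 28); do
    ssim=$(x264 --crf "$crf" --ssim -o /dev/null input.y4m 2>&1 \
             | grep -o 'SSIM Mean Y:[0-9.]*' | cut -d: -f2)
    awk -v s="$ssim" 'BEGIN{exit !(s >= 0.987)}' || break
    best=$crf
  done
  echo "highest 'transparent' CRF: $best"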


> proven by industry experts.

Care to elaborate on this? As someone who has spent a considerable amount of time with video encoding, please cite your so-called "experts" - or is this just another bold, weightless claim?


Okay then, you asked for it, so here you go.

>The important thing regarding our technologies, and the main point you have missed in your analysis, is that they are used for automatic image optimization to the lowest bitrate or file size possible, and not for encoding an image or video to a specific bitrate or file size.

This is exactly what the CRF mode in x264 is for. Now, in another comment you're claiming the following:

>The decision Beamr Video makes are "smarter" than x264's CRF mode, since they are based on a perceptual quality measure we have developed.

Let's see how well that measures up to reality then, shall we? For this comparison, I used the same four clips as earlier, and ran each of them against the following command line:

  x264 --crf 18.5

That's it. Completely default settings, with the exception of setting the CRF to 18.5 for everything. This shall be our vanilla x264 equivalent of your "technology that can adaptively reduce the bitrate of any clip to the minimum amount possible, while ensuring that quality of the output clip is perceptually identical to the quality of the input clip". Since we're using the same value for everything, it involves as much choosing on the user end as your service would. Now then, let's see how your "smarter than CRF" technology actually fares against x264's CRF. Here are the results:

* Clip 1 - http://check2pic.ru/compare/26755/ (CRF 18.5 encode is ~11.7% smaller)

* Clip 2 - http://check2pic.ru/compare/26756/ (CRF 18.5 encode is ~4.4% smaller)

* Clip 3 - http://check2pic.ru/compare/26757/ (CRF 18.5 encode is ~17% smaller)

* Clip 4 - http://check2pic.ru/compare/26758/ (CRF 18.5 encode is ~19.5% smaller)

( The full videos are available here: http://blisswater.info/video/beamr/set3/ )

As we can see from the comparisons, x264, at default settings and CRF 18.5, can produce practically identical visual results (as in you wouldn't notice any quality difference in action), while producing smaller bitrates across the board. Looks like your supposedly "smarter than CRF" technology for choosing bitrates isn't so smart after all, eh?

In short, even if you have developed some sort of quality-based bitrate-choosing technology of your own, in practice it still seems to lose consistently to x264's CRF mode (and since x264 actually allows you to control the CRF value, it's much more versatile than your "no quality settings at all, we know best" offering). As such, given the substantial claims about the capabilities of your technology, the Snake Oil verdict shall remain.


Again, you are missing the point. Where did the CRF number of 18.5 come from? Did you test several parameters and find which one gives lower bitrates than Beamr Video on this specific clip set?

Would you apply the same CRF parameter of 18.5 to any video file to reduce its bitrate? And does this parameter ensure reducing the bitrate of any video file without hurting its quality? If so, you could apply it recursively on a video clip and reduce bitrate indefinitely...

The bottom line is that there is no setting of x264 that guarantees reducing the bitrate of ANY input video file while maintaining its visual quality. And this is exactly what Beamr Video guarantees.


>Did you test several parameters

Nope. I just picked a value and tested what I'll get with it. Also, it doesn't matter where the number comes from. As long as I'm not changing it for different videos, the bitrate selection is completely up to x264's CRF mode. And what do you know, here it produced smaller results than your technology while providing the same level of quality!

>Would you apply the same CRF parameter of 18.5 to any video file to reduce its bitrate?

That's what I did for all the test videos here, and it produced better results compared to your technology on all of them (basically identical quality, smaller filesizes).

>And does this parameter ensure reducing the bitrate of any video file without hurting its quality? If so, you could apply it recursively on a video clip and reduce bitrate indefinitely...

Constant lossy re-encoding is going to degrade video quality no matter what - nothing is going to change that. Not your technology, not x264's CRF. Why are you even bringing up something as silly as this?

>The bottom line is that there is no setting of x264 that guarantees reducing the bitrate of ANY input video file while maintaining its visual quality. And this is exactly what Beamr Video guarantees.

I have just demonstrated that encoding with x264 --crf 18.5 is a more effective solution than your technology on all your presented test cases. You have given us absolutely nothing to actually "guarantee" your claims, which is exactly why your product is Snake Oil.


> it produced better results compared

"better" = "smaller files"

But how did the visual quality compare between the original and two compressed versions (yours and Beamr's)?


> Would you apply the same CRF parameter of 18.5 to any video file to reduce its bitrate? And does this parameter ensure reducing the bitrate of any video file without hurting its quality? If so, you could apply it recursively on a video clip and reduce bitrate indefinitely...

> The bottom line is that there is no setting of x264 that guarantees reducing the bitrate of ANY input video file while maintaining its visual quality.

Wait wait, what? so Beamr Video can be applied recursively on a video to reduce bitrate indefinitely?

You actually claim that Beamr Video can reduce the bitrate of any input video while maintaining its visual quality?

Listen, if you're in the business of data compression, you can't just ignore the Pigeonhole Principle without making a fool of yourself sooner or later.

Also, just like talk of perpetual motion has no place in a serious discussion about energy efficiency, neither does talk of infinite data compression have any place in a serious discussion about data compression and bitrates. That holds regardless of whether you just claimed that your technology does this, or claimed that an industry-standard open-source encoding solution with default settings ought to do it in order to be compared to your technology.

> And this is exactly what Beamr Video guarantees.

I got some videos of white noise, I need them bit-identical, but smaller. Claude Shannon died 12 years ago, what's he gonna do about it? That silly Source Coding Theorem can't tell us what to do, or claim, right?!


Nice response. I'd phrase my own rebuttal like this:

If you repeatedly apply lossy compression to a video, you'll find that the video either very quickly balloons or becomes a blocky or blurry mess. The encoder ends up spending bitrate to preserve compression artifacts, and the bit bill keeps growing. This is something that x264, even in its efficiency, can't get around, and if Beamr is just tweaking x264's setting knobs, it doesn't have a snowball's chance in hell of doing better.

If Beamr is just offering a fire-and-forget solution to tweaking x264's settings, they should just say so and provide some hard counterevidence to Daiz's report. There's no shame in trying to offer a tweaked encoding service, but they'd better be able to back it up when challenged by evidence.

Making statements that are technically unlikely or impossible hurts the speaker's credibility and chances of garnering goodwill. It might have worked for Monster's mass-marketed cables, but it probably won't fly in a more technical group of consumers.


You write that "The bottom line is that there is no setting of x264 that guarantees reducing the bitrate of ANY input video file while maintaining its visual quality. And this is exactly what Beamr Video guarantees."

So, turning this sentence around, Beamr Video guarantees to reduce the bitrate of any input video file while maintaining its visual quality. (I think that's what you're saying. Please explain if not.)

So what happens if I take an input video file, run it through Beamr Video, and then take the output and run it through Beamr Video again? Is the output the same size or smaller?

If the file is the same size, the statement that "Beamr Video guarantees to _reduce_the_bitrate_ of any input video file..." can't be true.

If the file is smaller, what happens if we repeat the process again... and again... and again. We started with a file containing a finite number of bits. If each iteration reduces the number of bits, we eventually end up with an empty file. Obviously, it's not possible to maintain visual quality when there is no data, so the statement that "Beamr Video guarantees to reduce the bitrate ... _while_maintaining_its_visual_quality_" can't be true.

Either way, I can't see how your statement can be true.
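
For anyone who actually wants to run this experiment, here's a rough sketch of the loop (assuming ffmpeg with libx264 is installed; the input file name and the ten generations are arbitrary):

    # generation_loss.py - feed each encode back in as the next input and watch what happens
    # (assumes ffmpeg with libx264 on the PATH; "input.mp4" and 10 generations are arbitrary)
    import os
    import subprocess

    src = "input.mp4"
    for i in range(1, 11):
        dst = "gen%02d.mp4" % i
        subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                        "-crf", "18.5", "-an", dst], check=True)
        print("generation %2d: %d bytes" % (i, os.path.getsize(dst)))
        src = dst  # the output becomes the next generation's input

If Beamr's guarantee held literally, you could run the same loop through their service and the file would keep shrinking forever; the Pigeonhole Principle says it can't.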


The CRF values are based on perceptual quality; per the docs, 18 is considered visually lossless or nearly so:

http://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide#crf


I'll give you this, Daiz: you have now done something that begins to resemble a fair evaluation of Beamr, and that's only 2 hours after you posted your original conclusion!

For the record: I have nothing to do with this Beamr thing, but I do feel sorry for those guys. Despite your 50-minute evaluation here, I'll give them the benefit of the doubt and conclude they're probably hard-working, honest people trying their best to build something valuable in this world, and sell it. If your analysis holds up they may have failed, but that's no crime.


You seem quick to dismiss his review. While "snake oil" may be an overly strong label, it might not be incorrect: the evidence I've seen thus far (as a user admittedly unfamiliar with the subject) does seem to point to profiting off open-source work. But if we were always so quick to give the benefit of the doubt while ridiculing investigations like this, the thievery would be out of control. I will say that in this situation we could take a lesson from Mr. Musk, though, and make statements instead of accusations. I'll be interested to see how this turns out.


If you pay close attention to the lines around the woman's mouth in Clip 1, CRF 18.5 seems to have more defined lines and freckles (i.e. more signal in image processing parlance).


From the long answer above - and I'm no video expert - the tl;dr is that Beamr answers the following question and returns a video that fulfills the answer:

"Given that you are concerned with ZERO loss of quality in the output video, what is the optimal per-frame compression that minimizes output video size."


I am someone with a very low understanding of video compression who (working in a small company) had to create a "simple video compression system" for the dozens of videos we put online every day.

I'm saying that because I'm probably someone who would be interested in Beamr's technology, since my "video compression system" is mostly a bad mash-up of mencoder and HandBrake with a few tweaks, used to automatically convert all our videos for the different bitrates/devices we stream to.

The problem is that most of what's on your website or in your comments is marketing-enhanced, dumbed-down technical mumbo jumbo.

What someone with a little (very little) understanding of video compression would like to see is an example of Beamr working its "magic", so we can judge whether it's really effective.

A simple test would be to take a sizable random selection (at least 500-600) of different videos from different sources and encode them both with Beamr and with a vanilla version of x264 or other encoding tools.

Then show us a chart (geeks love charts) of the difference in compression and (some examples of) the difference in quality to let us see how amazing Beamr actually is.
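
For the x264 side of such a test, a harness along these lines would produce the raw numbers for that chart (a sketch only, assuming ffmpeg with libx264 and its ssim filter; the folder names and the CRF value are placeholders, and the Beamr column would have to come from their service):

    # benchmark.py - encode every clip in ./originals at a fixed CRF, log size and SSIM to a CSV
    # (assumes ffmpeg with libx264 and its ssim filter; folder names are placeholders)
    import csv, glob, os, re, subprocess

    os.makedirs("encoded", exist_ok=True)
    rows = [["clip", "orig_bytes", "enc_bytes", "ssim"]]
    for src in glob.glob("originals/*.mp4"):
        out = os.path.join("encoded", os.path.basename(src))
        subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                        "-crf", "18.5", "-an", out], check=True)
        # the ssim filter prints a line like "... All:0.987654 ..." on stderr
        proc = subprocess.run(["ffmpeg", "-i", out, "-i", src, "-lavfi", "ssim",
                               "-f", "null", "-"], capture_output=True, text=True)
        m = re.search(r"All:([\d.]+)", proc.stderr)
        rows.append([src, os.path.getsize(src), os.path.getsize(out),
                     m.group(1) if m else "n/a"])

    with open("results.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

Run the same originals through Beamr, add their output sizes as another column, and the chart draws itself.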


Dror, if you re-read the review, it's quite obvious that the root of the backlash is not in what you claim, but rather in not giving explicit credit to the x264 team.

To everyone else - there's a very simple resolution to this situation. In fact, it will straighten itself out on its own: if what Beamr did is trivial and works, I doubt the x264 project will have any problem replicating it. If it doesn't work, then the Beamr video service won't take off. And, lastly, if it works and is non-trivial, then their claims are true. As others have said, the only way to confirm or deny their claims is to test against a larger collection of clips and with real people. Well, duh, that's exactly the opportunity their video service is going to provide. Let's just wait and see how it plays out.


So post:

1) a high quality original video (30-40 Mbit).

2) a 2-Mbit video compressed with your algorithm.

3) a 2-Mbit video compressed with x264 with reasonable settings.

Until you present evidence, that article makes very valid criticisms which you have not addressed. Instead you went into full PR astroturfing mode, telling us how great your tech is.

PROVE YOUR CLAIMS.


I think what Dror is saying is that if you take #3 and run it through their tool, you'll get a smaller size with the same quality. So it's some sort of H.264 post-processor... I agree evidence would be nice.


That's not what I think he's saying. I think he's saying that the whole point of Beamr is not having to choose a fixed 2 Mbps bitrate. Thus the post should contain:

1) a random selection of high quality original videos (30-40 Mbit).

2) those videos compressed with Beamr.

3) those videos compressed with x264 - ALL WITH THE SAME OPTIONS - those options tuned to match the average bitrate of #2.

Notice the difference between #3 and what Daiz has done: he has tuned the parameters manually for each and every video file. IMHO that's a rather harsh comparison: "Hey scumbag! Your software is worthless because a really smart and dedicated human can do it just as well/better!!!"


bjornsing, I think you might not understand what Daiz is doing. He is not manually tuning the parameters for each video file. He is actually letting x264 automatically choose the bitrate and encoding parameters to achieve a desired level of quality.


The way I understood it, they say they optimize bitrate for a given perceptual quality, and their secret sauce is that they measure perceptual quality in a way that more closely approximates what people actually see. They "aim" for the same quality as the video you feed them and try to optimize the bitrate. An example would be finding coefficients in a macroblock that could be further quantized without sacrificing perceived quality. So to me it makes some sense, though I can't put a number on what you could gain from this sort of secret sauce - that is, how much there is to squeeze out of already well-optimized videos that use other quality measures. I would imagine that applying this secret sauce to the original video would be a better idea, since a quality measure inherently depends on a reference. Almost by definition, any change to a compressed video reduces its quality (but perhaps they "reduce" things you can't see).

Since this is about perception, it's really hard to measure. I looked at some of the comparisons people have done of the demo images, and if you look carefully at some background areas you will see a noticeable difference in quality between two images that people claim to be equal.

Since your attention is naturally drawn to the foreground, most people wouldn't notice that.
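
If that's the idea, the "quality-driven" part can at least be approximated with off-the-shelf tools: keep raising the CRF (lowering the bitrate) until a quality metric against the source drops below a threshold. A crude sketch, assuming ffmpeg with libx264 and its ssim filter; the 0.98 cutoff is an arbitrary placeholder, not whatever metric Beamr actually uses:

    # crf_search.py - walk CRF upward and keep the smallest encode whose SSIM stays above a cutoff
    # (assumes ffmpeg with libx264; the 0.98 cutoff is an arbitrary stand-in for a real perceptual metric)
    import re, subprocess

    def ssim(distorted, reference):
        p = subprocess.run(["ffmpeg", "-i", distorted, "-i", reference,
                            "-lavfi", "ssim", "-f", "null", "-"],
                           capture_output=True, text=True)
        return float(re.search(r"All:([\d.]+)", p.stderr).group(1))

    src, cutoff, best = "input.mp4", 0.98, None
    for crf in range(18, 31):              # coarse sweep; a binary search would be quicker
        out = "crf%d.mp4" % crf
        subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                        "-crf", str(crf), "-an", out], check=True)
        if ssim(out, src) < cutoff:
            break                          # quality just dropped below the target, stop here
        best = out                         # higher CRF = smaller file, so keep pushing
    print("smallest encode above the cutoff:", best)

The hard part is the metric itself: SSIM is a poor stand-in for "what people actually see", which is presumably where the claimed secret sauce would have to live.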


Beamr Video is not a bitrate-driven encoder - you cannot specify the bitrate of the output clip, so we cannot provide 2). We can only provide an output clip with the same quality as the input and lower bitrate, but we can't guarantee in advance what that bitrate would be. That is the basic difference between Beamr Video which is a quality-driven video optimization technology, and a regular video encoder that is typically bitrate-driven.


You could compress something with Beamr first, see what bitrate it spits out, then compress with a standard encoder at the same bitrate. This isn't perfect since it's clearly favoring Beamr, but it would be somewhat useful.


How would this be "favoring Beamr"? I'd see it as piggybacking on a potentially useful Beamr innovation. Someone slightly more fanatic about IPR might even call it "theft".

As I understand it, the whole point of Beamr is that you don't have to manually tune the parameters for each video file.


It took me a few reads to see what you meant. I had taken Beamr to be about optimizing things like what block sizes to use, how often to insert I-frames, and so on, which would mean better perceived quality at a given bitrate. If it's always using the same settings and the only smartness is in what bitrate to pick, then you would be correct.


Ok,

1) Compress the original with x264, targeting around 2 Mbit.

2) Compress the result of #1 with your algorithm.

3) Compress the original with x264, target the bitrate of the result of #2

Compare #2 and #3
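
A rough sketch of those three steps, treating the Beamr service as a black box you run by hand for #2 (assumes ffmpeg/ffprobe with libx264; all the file names are placeholders):

    # matched_bitrate.py - steps 1 and 3 of the procedure above; step 2 is the black box
    # (assumes ffmpeg/ffprobe with libx264; "beamr_output.mp4" is whatever the service hands back)
    import subprocess

    # 1) compress the original with x264 at roughly 2 Mbit/s
    #    (a two-pass encode would be fairer, but this keeps the sketch short)
    subprocess.run(["ffmpeg", "-y", "-i", "original.mp4", "-c:v", "libx264",
                    "-b:v", "2M", "-an", "step1.mp4"], check=True)

    # 2) run step1.mp4 through the service by hand -> beamr_output.mp4

    # 3) read back the bitrate the service produced and target it with plain x264
    probe = subprocess.run(["ffprobe", "-v", "error", "-select_streams", "v:0",
                            "-show_entries", "stream=bit_rate",
                            "-of", "default=noprint_wrappers=1:nokey=1",
                            "beamr_output.mp4"], capture_output=True, text=True)
    bitrate = probe.stdout.strip()  # may be "N/A" in some containers; fall back to format=bit_rate
    subprocess.run(["ffmpeg", "-y", "-i", "original.mp4", "-c:v", "libx264",
                    "-b:v", bitrate, "-an", "step3.mp4"], check=True)
    # compare beamr_output.mp4 (#2) and step3.mp4 (#3) by eye and/or with the ssim filter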


Even if you compare #2 and #3 and they are of similar quality, what would that prove? You could do #3 for a specific file after Beamr Video has processed it, but how would you know the right bitrate for #3 without applying Beamr Video in #2?


Independent person from the industry here following this (as I am sure there are many more onlookers to this thread).

drorgill, I think now is the time to provide some hard empirical evidence on your part. Given the initial claims and the sample test done here, it is my feeling that it should be relatively trivial to provide a counterexample. Cheers.


He could start by actually displaying an understanding of how video compression works, maybe.


I'm not on one side or the other here, but...

What would prove your claims?


If you compare #2 and #3 and they are similar quality, it would prove your product is essentially not doing better than x264.


I'm not a native speaker and thus not good with idioms. Can anybody explain "snake oil", please?


Snake oil is an expression that originally referred to fraudulent health products or unproven medicine but has come to refer to any product with questionable or unverifiable quality or benefit. By extension, a snake oil salesman is someone who knowingly sells fraudulent goods or who is himself or herself a fraud, quack, charlatan, and the like.


The irony is that the original "snake oils" actually were effective, just mislabeled (contained no actual snakes).

"Snake oil" as used today in tech is the opposite -- accurately labeled, but ineffective.


"were effective" for what purpose? And do you have a source for that?


http://www.scientificamerican.com/article.cfm?id=snake-oil-s...

Real Chinese snake oil was essentially some kind of omega-3-rich preparation for topical application.

And, from Wikipedia: http://en.wikipedia.org/wiki/Snake_oil

"snake oil" as sold in the US was essentially capsaicin, which is used today, both as a topical treatment and others, for a variety of muscle/bone/tissue ailments. http://en.wikipedia.org/wiki/Capsaicin#Medical

Whereas snake oil in cryptography tends to be worse. In practice even snake oil algorithms are probably enough protection for short-term, limited use (no one is going to devote real cryptanalysis resources to high school notes), but it can encourage people to have a false sense of security and do things they wouldn't otherwise, or it ends up getting used for purposes not originally intended.


Which pretty much describes about 90% of the American economy.


You now have the official definition of snake oil, but in my experience the term often signals a knee-jerk reaction against something that falls outside common acceptance, and then too often becomes a mere insult.

Witness here how certain posters seem determined to trash the product. No matter what the defenders say, they simply must come back with something, and they will keep going until the defenders simply give up. Notice how the goalposts subtly shift each time a defending argument is put forward.

Do I want to post examples? Absolutely not. All that would do is end up the same way: no matter what I say, there will be endless comebacks until people get annoyed and the whole thing ends up in a similar cycle. I'll leave it as one of those things a reader can decide for themselves.

So be careful, the use of the term snake oil can tell you as much about the person using the term as it can the target.


I'm surprised you didn't think of google. The first hit explains it quite well.



http://en.wikipedia.org/wiki/Snake_oil

Basically a fake remedy that actually does nothing.



If anybody is interested in reading up on the rate control algorithms that x264 provides (which is what this seems to be mostly about):

http://git.videolan.org/?p=x264.git;a=blob_plain;f=doc/ratec...

There is also an interesting thread related to CRF on the Doom9 forums:

http://forum.doom9.org/showthread.php?t=116773


A company called RayStream tried to pull something like this a while back. Maybe Beamr actually has a proprietary video compression algorithm...

http://news.ycombinator.com/item?id=3211630


It's good to see a response from Beamr staff. Perhaps rhetorical, but...

How do you feel about licensing an amazing open source tool, adding a few patches you feel are needed for perceptual quality in CRF encodes, and then calling it your own product? Even a "patent pending" product?

You've done one small bit of coding, building upon the _years_ of work that open source devs have done.

You present this minorly tweaked x264 as a revolution in online video and imply that it's all your own work. There is no reference to x264 at all on your site, which I guess is your right, having paid the license fee. Still, it's not impressive for anyone looking at your product, how it works, or wondering about the toolchain involved.

How about submitting some patches and pull requests to the tool that makes your product possible? Oh wait, that's right, you'll take that product that's 99% not your work and make as much money as you can.

Are you gonna approach all the big sites using x264 and try to convince them to switch to Beamr? No, I didn't think so. Snake oil semantics for the uninformed.

As a wonderful T-shirt I saw once put it: "My free software runs your company".

UPDATE

Actually, I've thought long and hard about this. They might not be so blameworthy or snake-oily; they might just be giving FAR too broad an explanation of what they've done. We've all jumped on them because it sounded almost like they said they had created a new encoder.

The possibility I didn't really consider is that they have coded their own proprietary solution that, as the media loosely put it in the original piece posted here: "According to the company, the compression method mimics the human eye and removes elements that would not have been processed by the human eye in the first place." [1]

Now I don't know if their solution #1 directly changes/affects the x264 source code, or #2 is something that runs before encoding and simply determines the x264 settings to be used, or is even a mixture of both.

#1 If the changes are to x264 itself, I shake my head and my fist at them and again point to the years of open, free development done in x264. (Most notable in this case are its amazing psy optimizations, which do exactly what this company is claiming to have advanced... adjusting quality internally for the human eye's perception instead of for metrics like PSNR and SSIM.) Submit a patch!

#2 If their software is completely separate and determines x264 settings via their proprietary methods, then what they've done is not so ridiculous.

They've done a confusing job explaining things, but I can imagine at least one scenario:

----- A user uploads a home video to their servers.

Their software scans it and takes careful note of scenes with high levels of movement, scenes with human shapes moving, scenes with human faces, scenes that match algorithms for water, grass, natural environs, etc.

They then parse those notes and either set x264's many advanced settings globally, or perhaps even change each scene's x264 settings accordingly. ---
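
Purely as an illustration of scenario #2 above (and with no claim that this resembles what Beamr actually does), the final step could be as mundane as translating those per-scene notes into an x264 --zones string. A toy sketch with made-up scene boundaries and multipliers:

    # zones_sketch.py - translate a (made-up) per-scene analysis into an x264 --zones string
    # (the scene list and multipliers are pure fiction standing in for whatever classifier you like;
    #  assumes the x264 CLI and a raw/y4m input)
    import subprocess

    scenes = [
        (0,    719, "talking_head"),      # (first_frame, last_frame, label)
        (720, 1549, "fast_action"),
        (1550, 2399, "static_landscape"),
    ]

    # crude label-to-tweak mapping; b= is a bitrate multiplier (q= would pin a quantizer instead)
    tweak = {"talking_head": "b=0.8", "fast_action": "b=1.3", "static_landscape": "b=0.7"}

    zones = "/".join("%d,%d,%s" % (start, end, tweak[label]) for start, end, label in scenes)
    subprocess.run(["x264", "--crf", "20", "--zones", zones,
                    "-o", "out.264", "input.y4m"], check=True)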

Who knows... It's been interesting following this nonetheless.

[1] http://nocamels.com/2013/02/beamr-can-cut-video-file-size-by...


Apparently ICVT paid for x264: http://x264licensing.com/adopters

In general, the market tends to solve the "you've done one small bit of coding" problem; if a proprietary product is only epsilon better than the open source version then people will only be willing to pay epsilon for it. Conversely, if people are willing to pay real money for that small bit of coding, then by definition it must be a valuable bit. (It's even possible to make money by selling unmodified open source with some slick marketing, which really tends to annoy hackers because it implies that the marketing is worth more than the code...)

Also, "you must provide your source code modifications back to x264 LLC" http://x264licensing.com/faq


^ "you must provide your source code modifications back to x264 LLC"

Very nice find, I'm ridiculously happy that that provision is in there.

So then it may well be that they have their own software that determines x264 settings, as I outlined above.


I'm going to check with CoreCodec (the folks helping us administer x264 LLC) and see what's going on here. If they're abusing the terms of the license, we'll make sure things get fixed. If not, we'll publish all the changes they've made -- and honestly, I would be shocked if they've done anything significant besides change the program name.


^ The man himself. My mind is now at rest, and I look forward to your report.

I too would be rather surprised if they changed anything but the program name.


> Their software scans it and takes careful note of scenes with high levels of movement, scenes with human shapes moving, scenes with human faces, scene's that match algorithms for water, grass, natural environs etc etc

The funny thing is, this is the sort of thing that H.264 encoders intrinsically do. x264 has some fairly advanced algorithms in it to optimize for human visual perception (thanks DarkShikari, akupenguin, et al).

I assume these guys are basically determining settings to feed to x264. (They can't be modifying x264 itself since they'd then need to submit their source code changes upstream, if I've read the other replies right.)

If all they're doing is turning x264's settings knobs, they'll have to have studied the effects of those knobs in depth. I can hardly see how any analysis they're doing can be usefully turned into settings for x264. I find it a bit difficult to phrase my reasoning, but I'll try:

-----

- Trying to outdo x264's analysis at optimizing for perceptual quality while still depending on it is like trying to optimize a car engine from the driver's seat. It's not likely to happen.

- x264's settings are, generally speaking, macroscopic - they apply to the whole video segment. (You can apply different settings to different segments, but in the end your control is still limited by whatever knobs x264 offers.)

- If there were a way to optimize things better than x264 has, it almost certainly requires working directly within the encoder's analysis code itself rather than carrying out a pre-encoding analysis process and then fiddling with rough-control knobs. I simply doubt the complex interplay of settings within x264 lends itself to mere knob-turning. Many of the settings are mainly for making tradeoffs among the impossible trinity of encoding speed vs output quality vs output bitrate, rather than to allow the encoder to improve the output perceptually (because that's x264's job).

- Even if they managed to build a pre-analysis model that figures out decent settings to feed into x264, it would break to some extent whenever x264's code/algos are changed. That doesn't seem like a stable base to build a business on.

- All the above reasoning is overkill, because the settings embedded in the Beamr output look outright silly to me: setting b-frames to 0 is just shooting x264 in the foot (b-frames are central to bitrate savings through discarding unnecessary visual data). Plus it turns off mb-tree and psy in 3 out of 4 samples, which basically discards two of x264's more powerful adaptive bitrate-vs-quality features. (mb-tree detects motion and saves bits on and around moving objects; psy covers various optimizations for psychological quality perception, e.g. grain level.) It's just plain regressive. (See the command-line sketch after this list.)

-----
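
To make that last point concrete, here's roughly what the comparison boils down to on the command line (a sketch using the x264 CLI; input.y4m is a placeholder, and the "crippled" flags only mirror the three settings called out above, not Beamr's full embedded option string):

    # settings_compare.py - the same CRF encode with x264's adaptive features on vs. forced off
    # (assumes the x264 CLI; input.y4m is a placeholder)
    import subprocess

    base = ["x264", "--crf", "18.5"]

    # defaults: b-frames, mb-tree and the psy optimizations all left enabled
    subprocess.run(base + ["-o", "defaults.264", "input.y4m"], check=True)

    # regressive variant: throw the adaptive bitrate/quality features away
    subprocess.run(base + ["--bframes", "0", "--no-mbtree", "--no-psy",
                           "-o", "crippled.264", "input.y4m"], check=True)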

More fundamentally, the claim that "a minimum bitrate for visually lossless encoding of a video can be found" is quite doubtful, because of the fuzziness of the claim and its assumptions. The trouble is, a pure marketing line like this is easy to sell to a non-technical crowd. And x264 is good enough that anyone re-encoding a crappy source with it will find good bitrate savings, even with dumb settings.

Anyone looking to encode video in the cloud should instead use a well-priced, well-tuned service that doesn't overstate its case. Daiz suggested Zencoder, so it's probably a good bet. People who just want to shrink videos should grab Handbrake or any other x264-using encoder package, and use one of the presets (or just stick to the defaults). The result will probably be better than this service, as things stand.

(Thanks for patrolling the video frontier, Daiz.)


> If there were a way to optimize things better than x264 has, it almost certainly requires working directly within the encoder's analysis code itself rather than carrying out a pre-encoding analysis process and then fiddling with rough-control knobs. I simply doubt the complex interplay of settings within x264 lends itself to mere knob-turning.

You're treating x264 like some box of black magic.

As far as the perceptual tunings go, x264 already provides the three most important knobs (aq, psy-rd, psy-trellis) which, due to their nature, can't be a "one size fits all" deal. x264 makes no attempt to guess which of those settings would fit your source best and only offers a conservative default and a series of tunings for marginally more fine-grained control.

A hypothetical, "better" approach would be to have an amazingly intelligent first pass be done to split the movies into scenes and calculate the perceptual weights for each scene. That way, in a movie like Kill Bill, the animated, fast-action, and talking-head scenes would all be perceptually optimized, as they all need vastly differently settings. Splitting the movies into zones also open up a whole plethora of options for fine-tuning quality. Again, you can always reach these options from the command line.

(Also, x264's psy-rd is hilariously unoptimized for high-stress bitrates. I'm not sure if the purported "service" being discussed in this thread can handle those bitrates, but it's an area needing massive tweaks. x264's main role seems to be high-quality archiving, however, so this is merely a tangent.)

> Even if they managed to build a pre-analysis model that figures out decent settings to feed into x264, it would break to some extent whenever x264's code/algos are changed. That doesn't seem like a stable base to build a business on.

One could just not update, or only update the required parts. I assume reading and modifying code is already a prerequisite here.


May I just say I love threads like this. VCs should read these discussions.


It's too bad that more SaaS apps aren't held to the same level of scrutiny. Most stuff on HN is snake oil.


Are there any particular examples that come to mind?


Whenever you go to a site and it promises something, you have to give it your email address to maybe get access to a beta later, and then it either never appears, doesn't live up to the hype, or pivots and becomes something you're not interested in. Or how about something that promises it will be around forever, and then isn't? I won't name names.


This reminds me of the heyday of porn; remember when it was powering innovation? Anyway, there were snake oil video salesmen all over the place, wrapping Windows Media and RealPlayer encoders into six-figure, custom, "super amazing" but actually nothing-special systems.

And people did buy them.


For similar benchmark fallacies (and straight up scams) discussed previously, see "A web video company that is most likely a hoax" -

http://news.ycombinator.com/item?id=3211630

http://seekingalpha.com/article/316946-raystream-remains-und...


No doubt. Still looking forward to wider acceptance of H.265, which has real gains in quality per bit.


It's highly ironic that Daiz is commenting on other encoders, when his encodes tend to be bloated. :)



