
I'm a bit out of the loop, but what was wrong with Theora?

Daala's goal "is to provide a free to implement, use and distribute digital media format and reference implementation with technical performance superior to h.265."

Wasn't that exactly what Theora was as well?




Theora was never very good. Not really the fault of its creators -- rather, all the good methods of compression are being hoarded by patent trolls (MPEG LA and friends), even though most of the methods are obvious. I believe this new attempt at a codec will also face the same issues.


>rather, all the good methods of compression are being hoarded by patent trolls

Nah, that is simply untrue. Theora was designed in the late _90s_ and targets a different computational envelope than AVC— it's pretty darn good compared to MPEG-2 (or even MPEG-4 Part 2/DivX), which are its real contemporaries.

Today there is a much larger computational budget available, plus new experience and understanding.


>> rather, all the good methods of compression are being hoarded by patent trolls

> Nah, that is simply untrue.

Could you provide something to back this up? I cannot find it now, but I seem to recall an x264 developer discussing the difficulty of implementing an open source codec equal to or better than H.264, and that the problem was mainly that all the best algorithms had been patented.

Perhaps if you are a Theora developer (or know one), you can clarify.


> Perhaps if you are a Theora developer (or know one), you can clarify.

I am a Theora and Opus developer, although I'm not exactly sure what clarification you'd like.

I can tell you that in my codec experience the patents have seldom (never?) been a major direct barrier to progress by precluding an essential technique... By their nature they tend not to absolutely foreclose anything except outright copying a technique— and even then only in a specific context, and video coding and signal processing are old enough and mathematical enough that the basics are unpatentable. (Keep in mind— the patent office does not believe it allows patenting pure mathematics; its definitions of "does not" and "pure mathematics" and mine may not agree, but its evil is finite and thus surmountable.)

The impediment from patents seems to most often take the form of having to spend time and effort on patent research and negotiations, not being sure that you can just implement some randomly discovered research, erroneously discarding useful techniques which could be used but aren't worth the effort to clear, and the cost of spending time with attorneys teaching them enough codec engineering— or, frankly, spending time correcting misconceptions on the Internet— rather than coding.

Or in short, _a patent_ is almost never a big problem for a designer, but _the patent system_ wastes a lot of effort by creating big, largely non-engineering overheads that sap engineers' time and energy. The system also complicates or discourages cooperation by creating odd business motivations and incentives to be secretive (especially about defense strategies). But the non-free codecs suffer from some of these costs and pressures too— plus additional ones, like arguing over which winners and losers get their techniques into the format and thus a share of the fees (and access to cross licensing).

This mess also exists just as much outside of media codecs. But the enforcement is less active— I suspect partially because codecs are unusually attractive to monetize due to network effects (switching is MUCH more expensive) and because they make nice attorney-understandable atomic units of infringement that map to visible features. "I own H.264, you have H.264. Pay up!" works better than "I own computing the absolute value by this series of bit operations. You may or may not do this, I can't tell because of your obfuscated binary. Pay up! Maybe?". The network effect also makes very narrow patents more useful— it's much easier to write a patent that reads on implementations of H.264 (a single format with a fairly exact specification) than one that reads on any format similar to H.264. Very narrow patents are less costly to obtain and enforce (less risk of invalidation), and the network effect says that you must implement H.264, not almost-H.264, so they are no less good at extracting royalties. But no one really cares whether their kernel uses XOR linked lists or not, and it's usually no compatibility problem to switch if someone starts making threatening noises.
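For concreteness, the "absolute value by this series of bit operations" example refers to the classic branchless idiom. Here's a minimal C sketch of that textbook trick; it's an illustration only, not a claim about what any particular patent actually covers (the name abs32 and the exact operation sequence are just for this example):

    #include <stdint.h>

    /* Branchless absolute value using only bit operations.
       Assumes two's complement and an arithmetic right shift on
       signed ints, which holds on mainstream compilers/targets. */
    static int32_t abs32(int32_t x) {
        int32_t mask = x >> 31;    /* 0 if x >= 0, all ones if x < 0 */
        return (x + mask) ^ mask;  /* identity, or two's-complement negate */
    }

And as the comment says, nothing about using this inside a kernel or an obfuscated binary is observable from the outside the way an H.264 bitstream is.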

If you were thinking of Jason Garrett-Glaser's early technical analysis of VP8, I don't think he was saying quite what you walked away with... but it's also important to note that Jason didn't (at least at the time) have substantial video coding experience outside of his work on x264 (and some related things in ffmpeg), didn't have substantial experience working with patents, and had never been involved in an RF codec effort of any kind (including the ones attempted in MPEG). He's "just" a particularly brilliant guy who came in writing assembly code like a force of nature and made x264 much better. To the extent that he gave the impression that everything in video is patented, he would have been just repeating the not-very-well-informed conventional wisdom.

Patent infringement is all about the fine details— so even a patent expert's off-the-cuff comments are going to be somewhat useless. I'd take Jason's thoughts on the latency of PAVGW as the word of God; but his thoughts on codec patents, when he'd not even looked at the involved patents? Meh. Later revisions of his analysis substantially softened some of the remarks, but few people went back to read them. (I recall that I was especially amused by some of the things he derided as being 'unnecessary', as I thought I had a reasonable guess which patents Google/On2 had been specifically dodging.)



H.264 != H.265. Theora tried to compete with the former, Daala will try to compete with the latter.


Mostly Theora lost its momentum because, during the web video "debate", there were many articles shitting on Theora for not having hardware support and for performing slightly to fairly worse in various cases.

And now they need a new attempt to get people back on the open codec wagon.


Since H.265 doesn't (?) have hardware decoders yet, maybe Daala will have a fair chance competing.


The folks working on H.265 have already been producing material (http://phenix.int-evry.fr/jct/doc_end_user/documents/10_Stoc...) to counter the belief that you must have hardware accelerators for video decoding.

And they have a point: the _screen_, radio, etc. (http://www.ietf.org/mail-archive/web/rtcweb/current/msg03785...) of most mobile devices are already using something like 90+% of the power, so even in the impossible case that hardware accelerators make decode itself use no power, they can't make that much of a battery life difference, so long as the device was fast enough to decode in the first place. And this trend will only continue as devices become faster and process shrinks increase the computation available per joule.
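To put rough numbers on that argument (the 10% figure below is illustrative, not a measurement from the linked post): if decode accounts for about 10% of total device power, then even a decoder that consumed nothing would stretch battery life by only about 11%.

    #include <stdio.h>

    int main(void) {
        /* Assumed split between decode and everything else
           (screen, radio, ...); 10% is illustrative, not measured. */
        double decode_share = 0.10;
        /* Battery-life multiplier if decode were made free. */
        double gain = 1.0 / (1.0 - decode_share);
        printf("best-case battery gain: %.1f%%\n", (gain - 1.0) * 100.0);
        return 0;
    }

That prints "best-case battery gain: 11.1%", which is the ceiling a hardware decoder could deliver under that assumed power split.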

But it's amusing to see some of the same people who argued that hardware accelerators were a _must have_ now arguing the opposite, now that it's their latest format which is suffering for the lack of them.


Remember that it wasn't just hardware decoders in end-user devices. H.264 already had a lot of buy-in from video producers. With support at both ends of the chain, trying to replace it just for the sake of patent issues (which most people didn't care about) was a fool's errand.


Theora can't compete with H.264. If anything, VP8 could.


Well, Theora wasn't, which was the problem.



