This is not true; many are moving to OFTC. But the obvious answer is that Libera is run by the ex-freenode staffers and runs on the same software (unlike OFTC).
So Libera is basically Freenode's spiritual successor with a different name.
I don't see that (some communities opting for libera.chat, some for OFTC) as a problem, though. Recent events show that some amount of IRC network diversity can only be a good thing.
While you are right that Libera is run by former freenode operators, it's not 100% the same software. When you connect to freenode the server tells you it's running ircd-seven-1.1.9, while libera tells you it runs on solanum-1.0-dev. OFTC says hybrid-7.2.2+oftc1.7.3.
AIUI the old freenode folks had been working on solanum for a while, and this was the perfect time to switch to it. So it's not the same software, but it's the spiritual successor too.
Freenode was in the process of migrating to Solanum, but this was one of the projects that Lee halted (and in a very underhanded way too). So it makes sense that Libera runs Solanum.
> XMPP & Matrix bridge together fairly well these days.
I want to test this claim. After an hour of google searching I'm none the wiser:
1. Can I bridge private messages, or just MUCs? I use XMPP only to keep in contact with friends that use XMPP. I would love to integrate them into a single client, but AFAICT that isn't possible.
2. Do I have to run my own XMPP bridge, or is there some automatic integration service I'm completely missing? For comparison, the experience of bridging a matrix room to IRC was as easy as clicking "add new bridge" in the Element UI.
Well, one question we could ask is why basically every single popular open source multimedia project (FFmpeg, x264, MPC-HC/madVR, mpv, avisynth/VapourSynth, MVTools/SVP, heck even LAME, .....) is made mostly by European authors. Some projects are almost 100% European; some are partially European with authors from Russia and other non-American countries thrown in.
The only projects I can actually think of that are based in the US and made by American authors are those by xiph.org, and they only get away with it because their entire business model is developing royalty-free alternatives to MPEG codecs.
Even if multimedia patents might not affect big corporations much, they definitely seem to strongly affect the open source community. I imagine if we had similar dystopian laws here in the EU, our best and most beloved multimedia software would plain not exist.
I wonder why compression seems to be extra sensitive to the existence of patents. What about all the other fields; aren't there ripe opportunities for European companies to build software that Americans simply can't compete against? Shouldn't there be a cottage industry that does exactly that? What are some of the most stifling non-media patents?
While software patents impact every company's long-term survivability, short-term survivability for startups strongly hinges on investors. European investors are more risk averse and investments in Europe tend to be smaller than in the US. American investors prefer to invest in US companies for obvious reasons.
So a good portion of the startups you're imagining die early due to a lack of investment or incorporate in the US at some point in order to have a better chance of finding investors. And then some of the rest simply aren't as growth-focused because they need to focus on short-term profitability to survive, which means they'll likely end up silently dominating a particular industry niche rather than making a big entrance on the global stage.
Software patents are only a problem if you can survive long enough to be sued. If you're not exceptionally unlucky, you're more likely to go bankrupt before that happens.
SMPlayer can barely even be considered an mpv front-end, and I hope nobody who uses it will ever file an mpv bug report again. It's a gigantic pile of hacks from the MPlayer age, and it “interfaces” with mpv using the most horrible method possible (embedding the mpv window and sending keystrokes to it).
Perhaps from the implementation standpoint, but from the user standpoint it's second to none. For example, it's the only one I know of that supports dual subtitles (--secondary-sid in mpv). For more advanced features, it also allows you to pass CLI flags to mpv, e.g. `--video-stereo-mode=sbs2l` (subtitles and UI for 3D videos).
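For reference, the equivalent direct mpv invocation would look something like this (illustrative only; the sid number depends on your file's track layout):

```
mpv --secondary-sid=2 --video-stereo-mode=sbs2l video.mkv
```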
Also, enable "Run mpv in its own window" under Preferences -> Advanced. This removes all the issues caused by the default mode of embedding the mpv window (such as subtitles being on the video and not in the black bars).
Nobody is trying to argue that 4:2:0 video looks objectively superior to 4:4:4 video given a free choice. Obviously, full chroma information will always be better; compare e.g. a PC monitor to a TV with subsampling.
The problem is that 4:4:4 chroma requires more bits to compress, so when you're designing a video/image codec, you have to ask yourself whether the difference in bitrate between 4:2:0 and 4:4:4 is worth the difference in quality, and the answer seems to be “no”.
This means that when you're serving, say, a 5 Mbps youtube video where the bitrate is already fixed, 4:2:0 is going to give you more bits to put into useful stuff (e.g. luma plane) instead of having to waste them on mostly-redundant chroma information.
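To put rough numbers on that, here's a back-of-the-envelope sketch of the raw sample counts per 1080p frame (before any actual compression, so purely illustrative):

```python
# Raw samples per 1920x1080 frame, one sample = one 8-bit value.
w, h = 1920, 1080
luma = w * h                          # Y plane, always full resolution

chroma_444 = 2 * w * h                # Cb + Cr at full resolution
chroma_420 = 2 * (w // 2) * (h // 2)  # Cb + Cr at quarter resolution

print(luma + chroma_444)  # 6220800 samples for 4:4:4
print(luma + chroma_420)  # 3110400 samples for 4:2:0, i.e. half the raw data
```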
> what i think of as undershooting or overshooting is relative to the range... and besides that, what is wrong with clamping? its how computer graphics has always had to deal with these things... limited range simply doesn't exist in that context, and it doesn't harm anything.
As far as I understand it, limited range was historically used so you could use efficient fixed-function integer math for your processing filters without needing to worry about overflow or underflow inside the processing chain. You can't just “clamp back” a signal after an overflow happens.
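To illustrate with a toy sketch (wrapping 8-bit arithmetic standing in for fixed-function hardware; the specific numbers are made up):

```python
def add8(a, b):
    """8-bit add the way fixed-function hardware does it: wraps on overflow."""
    return (a + b) & 0xFF

# Full range: a pixel at 250 plus a +15 filter overshoot wraps around.
wrapped = add8(250, 15)         # 265 & 0xFF == 9
print(min(wrapped, 255))        # clamping *after* the wrap prints 9, not 255:
                                # the original signal is unrecoverable

# Limited range: legal values top out at 235, so the same +15 overshoot
# (235 + 15 = 250) still fits in 8 bits, and the final clamp back to
# the legal 16..235 range behaves sensibly.
shot = add8(235, 15)            # 250, no wrap
print(max(16, min(shot, 235)))  # 235
```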
Of course, it's pretty much irrelevant in 2016, when floating point processing is the norm and TVs come with their own operating systems, so these days limited range just exists for backwards compatibility - a property that video standards have tried to preserve as much as possible since the beginnings of television.
> Chroma subsampling isn't going anywhere. You'll usually get subjectively better quality with 4:2:0 chroma compared to 4:4:4 at the same bitrate. And this means you can't have everything in RGB, so all the colorspace conversion complexity can't be ignored.
What's more, YCbCr is more efficiently compressed than RGB even if you don't subsample, for the same reason that a DCT saves bits even if you don't quantize: linearly dependent or redundant information is moved into fewer components. In this case most of the information moves into the Y channel, with Cb and Cr both being very flat in comparison. (Just look at a typical YCbCr image reinterpreted as grayscale to see what I mean.)
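You can see the decorrelation with a quick sketch (assumed BT.601 full-range coefficients, and synthetic “natural” pixels where R ≈ G ≈ B):

```python
import random

def rgb_to_ycbcr(r, g, b):
    # BT.601 full-range transform.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Fake "natural" pixels: channels strongly correlated, like real images.
pixels = []
for _ in range(10_000):
    base = random.randint(0, 255)
    r, g, b = (max(0, min(255, base + random.randint(-10, 10))) for _ in range(3))
    pixels.append((r, g, b))

ys, cbs, crs = zip(*(rgb_to_ycbcr(*p) for p in pixels))
print(variance(ys), variance(cbs), variance(crs))
# Y carries nearly all of the variance; Cb and Cr hover near 128,
# which is exactly what makes them cheap to code.
```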
Isn't it the case that the amount of data required to store the result of a lossless DCT is bounded below by the size of the input, and this is why lossless JPEG compression does not use such a scheme?
I'm not actually sure. In retrospect, I'm not sure what ‘DCT without quantizing’ really means, since the outputs of the cosines are real numbers. I guess the interpretation would be: quantized with however many steps are needed to reproduce the original exactly when inverted (and rounded).
In lossless JPEG it seems they omitted the DCT primarily for this reason: it isn't a lossless operation to begin with, if you actually want to store the result. What other lossless codecs often do instead is store a lossy version, such as that produced by a DCT, alongside a compressed residual stream coding the difference (error).
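The residual trick in miniature (coarse quantization standing in for the lossy DCT stage; the names and numbers are made up):

```python
def encode(samples, step=16):
    base = [(s // step) * step for s in samples]       # lossy approximation
    residual = [s - b for s, b in zip(samples, base)]  # small values, cheap to code
    return base, residual

def decode(base, residual):
    return [b + r for b, r in zip(base, residual)]     # bit-exact reconstruction

samples = [17, 200, 33, 140, 255, 0]
base, residual = encode(samples)
assert decode(base, residual) == samples
print(residual)  # [1, 8, 1, 12, 15, 0] -- a narrow range of values
```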
In either case, it's important to note the distinction between reordering and compressing; reordering tricks like the DCT can move information around without changing the number of bits required to store it, but the simple fact of having reordered the data can make the resulting stream much easier to predict.
For example, compare an input signal like this one:
FF 00 FF 01 FF 02 FF 03 FF 04 ...
By applying a reordering transformation to move all of the low and high bytes together, you turn it into
FF FF FF FF FF .. 00 01 02 03 04 ..
which is much more easily compressed. As for whether that's the case for (some suitable definition of) lossless DCT, I'm not sure.
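A quick way to check that intuition against a generic LZ coder (zlib here; the exact sizes will vary with the coder and the data):

```python
import zlib

# The interleaved signal from above: FF 00 FF 01 FF 02 ...
interleaved = bytes(b for i in range(256) for b in (0xFF, i))

# Reordered: all the FF bytes first, then the counter bytes.
reordered = interleaved[0::2] + interleaved[1::2]

# Same bytes overall, but the reordered stream hands the coder one long
# run plus one smooth ramp, which is much easier to model than the
# interleaved version.
print(len(zlib.compress(interleaved, 9)))
print(len(zlib.compress(reordered, 9)))
```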
This comment is pretty much what I was going for. I've reworded it to make it clearer.
The issue you can run into in practice is stuff like softsubbed signs, which can clash and look out of place with the native video if you render them at full resolution. There's also a related issue: if you're using something like motion interpolation (e.g. “smoothmotion”, “fluidmotion”, or even stuff like MVTools/SVP), softsubbed signs will not match the video during pans, making them stutter and look very out of place. The only way to fix that is to render them on top of the video before applying the relevant motion interpolation algorithms.
Personally, I've always wished for a world in which subtitles are split into two files, one for dialogue and one for signs, with an ability to distinguish between the two. (Heck, I think softsubbed signs should just be separate transparent video streams that are overlaid on top of the native picture, allowing you to essentially hardsub signs while still being able to disable them.)
Also, sometimes, rendering at full resolution is prohibitively expensive, e.g. watching heavily softsubbed 720p content on a 4K screen.