Dropping PVQ was a hard choice. We did an initial integration into libaom, but because libaom differs substantially from the way Daala was designed, the results were not outstanding [1]. Subsequent changes to the codebase made PVQ regress significantly from there, for reasons that were not entirely clear. When we sat down and detailed all of the work necessary for it to have a chance of being adopted, we concluded we would need to put the whole team on it for the entire remainder of the project. These were not straightforward engineering tasks, but open problems with no known solutions. Changes from other experiments being adopted could have complicated the picture further. So we would have had to drop everything else, and the risk that something would not work out and PVQ would still not get in was very high.
The primary benefit of PVQ is the side-information-free activity masking. That is the sort of thing that cannot be judged via PSNR and requires careful subjective testing with human viewers. Not something you want to be rushing at the last minute. After gauging the rest of AOM's enthusiasm for the work, we decided instead to improve the existing segmentation coding to make it easier for encoders to do visual tuning after standardization. That was a much simpler task with much less risk, and it was adopted relatively easily. I still think it was the right call.
[1] https://datatracker.ietf.org/doc/html/draft-cho-netvc-applyp...