A typical process might be, at a very high/crude level: 1) a job that checks out the code and runs unit tests, triggering 2) a job that creates a build, triggering 3) a job deploying to an integration/staging environment, and possibly even triggering (or, with an air gap, requiring someone to manually run) 4) a job to promote to a production environment.
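The four stages above can be sketched as a chain of jobs. This is just an illustrative Python sketch, not any real CI system's API; the job names, artifact name, and the manual-approval flag are all made up:

```python
# Illustrative sketch of the four-stage pipeline described above.
# Job names and the manual promotion gate are hypothetical.

def run_unit_tests():
    print("1) checkout & unit tests")
    return True  # pretend the suite passed

def create_build():
    print("2) create build artifact")
    return "app-1.0.tar.gz"  # hypothetical artifact name

def deploy_to_staging(artifact):
    print(f"3) deploy {artifact} to staging")
    return True

def promote_to_production(artifact, manual_approval=False):
    # With an air gap, this step requires a human to flip the switch.
    if not manual_approval:
        print("4) waiting for manual approval before production promote")
        return False
    print(f"4) promote {artifact} to production")
    return True

def pipeline(manual_approval=False):
    if not run_unit_tests():
        return "failed: unit tests"
    artifact = create_build()
    if not deploy_to_staging(artifact):
        return "failed: staging deploy"
    if promote_to_production(artifact, manual_approval):
        return "released"
    return "awaiting approval"
```

Each job only fires if the previous one succeeded, which is the whole point of chaining them.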
Testing can/should be added between various steps there, depending on the type of product you're working with. Automated functional tests can be integrated into a process like that at several points (QA ownership of portions of the unit/integration testing framework, and automated tests that smoke-test new deployments to a staging environment). Knowing those points, which tools integrate well with your CI system, and how to implement those integrations is probably something you want a QA manager to be familiar with.
Because, depending on your level of CI integration, many of the automated tests that QA writes can be run as part of the CI process. Developers are really bad at maintaining CI systems, so it usually falls on the QA team to ensure new tests are included in the CI workflow.
Ultimately, it's QA's job to certify a release. With CI, that certification is often done in an automated fashion. Thus, QA should have ultimate responsibility for the configuration and execution of the tests as part of CI.
That doesn't really work, though: the test cases need to be updated at the same time as any code checkin that modifies their assumptions, or that checkin won't pass. Really, for the most part developers have to at least be skilled at updating the CI tests, if not creating them in the first place.
Plus, good unit or component tests are generally written to validate architectural and interface assumptions, not business rules and requirements. Those are what most people really run in CI, not full-stack systems integration or user acceptance tests.
The types of tests you would use to do heavy acceptance verification often don't run that well in CI, either because they have ecosystem dependencies that can't/shouldn't be mocked, or because they simply run too slowly (most UI test frameworks fall into the latter category).
At the end of the day, everyone really needs to know how to do some level of testing, at least enough to verify their own assumptions about the work they're generating.
Edit: and you don't necessarily use CI to validate a release--you do in Continuous Deployment, by necessity, so that covers a lot of HN's web startup audience for sure.
But in my experience, the majority of other kinds of companies need a release acceptance pass to independently verify a final bundle against requirements.
The type of QA described in this document wouldn't mesh with a CI/CD-only organization anyway. In those orgs, you just write the tests and run a code coverage tool. There isn't really a process step that would let you do much with this kind of documentation or rigor, since it's build/push/results/deploy.
If you build your workflow so that each user story is developed on its own branch, you don't merge back to trunk until the tests are also written. It's actually pretty easy to manage (at least with Git).
And you're right -- this is for a continuous deployment workflow, which many large companies are moving towards as a next step from Agile. Continuous deployment is ultimately a business capability: rather than have your product people focus-test features, you can just use a hypothesis-driven approach to set up A/B tests and go with what works. This works even better if you're an old-guard company with millions of users already. It short-circuits a lot of the hand-wringing and political maneuvering around product features when you can say "Eh, let's just break off 5% of our user base and test both versions of this for a few days".
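Breaking off that 5% is typically done deterministically, so the same user always sees the same version. Here's a sketch of one common approach, hashing user IDs into buckets; the specific scheme is an assumption, real A/B frameworks vary:

```python
import hashlib

# Deterministically assign ~5% of users to the experimental version.
# The sha256-based bucketing here is illustrative, not a prescribed method.

def in_experiment(user_id, percent=5):
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99 per user
    return bucket < percent
```

Because the assignment is a pure function of the user ID, a given user gets a consistent experience for the life of the test, and you don't need to store assignments anywhere.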