That's not correct. There are a number of attacks that can be mitigated by both, but PKCE serves as a very effective defense in case an authorization code leaks to an attacker. Such a leak can be caused by a malicious script on the redirect URI, referer headers, system or firewall logs, mix-up attacks and other problems even when the redirect URIs are restricted.
There is a good reason why we mandate both redirect URI allowlisting AND PKCE in the OAuth Security BCP RFC draft. One learning from our discovery of mix-up attacks with "code injection" was that client authentication is not sufficient to prevent the misuse of authorization codes.
For starters, without restrictions on the redirect URI, I (as the attacker) can just redirect a user to the authorization endpoint with the client ID of a trustworthy client, a redirect URI pointing to my server, and a PKCE challenge that I selected, so that I know the PKCE verifier. The auth code will end up at my server and I can redeem it, giving me (instead of the trustworthy client) access to the user's resources. If the client is a confidential client, I can use an authorization code injection attack to redeem the code and work with the user's resources.
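To make the mechanics concrete, here is a minimal sketch of how an S256 challenge/verifier pair is generated per RFC 7636 (the function name is mine, not from the spec). An attacker who generates the pair themselves obviously knows the verifier, which is why PKCE alone cannot substitute for redirect URI allowlisting:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier (within the 43..128 limit).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge goes into the authorization request; the verifier is
# presented later at the token endpoint, which recomputes and compares.
```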
Okay, but why was this ever put on the standards track in the first place? This isn't an obscure error; it's the kind of glaring error that shows a security standard is being designed by people who know next to nothing about security. Recently, when I implemented authentication for a site that used Resource Owner Password Credentials, my client's CTO reported it as a bug because he (correctly) noticed the security flaw. If you're making this sort of error, you have no business writing production authentication code, let alone standards for production authentication code.
And this is only the most glaring error. Another example: there's no specification for how to generate tokens, and I've implemented OAuth for platforms that return tokens which look suspiciously like UUIDs. Not only does RFC4122[1] explicitly recommend against using UUIDs as security capabilities (because people could reverse engineer your UUID generation), but collisions have occurred in practice[2] without any reverse engineering, meaning that a user could authenticate as another user without even hacking.
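For contrast, here's a minimal sketch of what unguessable token generation can look like, using the OS CSPRNG via Python's standard library rather than a UUID (the function name is illustrative, not from any spec):

```python
import secrets
import uuid

def make_access_token() -> str:
    # 32 bytes (256 bits) from the OS CSPRNG, base64url-encoded.
    # Unlike a UUID, this leaks no timestamp or node ID (as uuid1 does)
    # and carries full 256-bit entropy, so collisions and guessing are
    # not practical concerns.
    return secrets.token_urlsafe(32)

guessable = str(uuid.uuid1())  # embeds timestamp + node ID: reconstructable
token = make_access_token()
```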
And to be clear, I'm not a security auditor and I've never done a security audit of any OAuth implementation except my own. These are problems I discovered by treating other people's OAuth implementations as black boxes, all of them compliant with the so-called "standard". Security is hard, and with so little specified, I imagine the majority of OAuth implementations in the wild are actually not secure.
The resource owner password credentials grant MUST NOT be used. This grant type insecurely exposes the credentials of the resource owner to the client. Even if the client is benign, this results in an increased attack surface (credentials can leak in more places than just the AS) and users are trained to enter their credentials in places other than the AS.

Furthermore, adapting the resource owner password credentials grant to two-factor authentication, authentication with cryptographic credentials, and authentication processes that require multiple steps can be hard or impossible (WebCrypto, WebAuthn).
The BCP recommends either sending back client_id and iss (but that draft[1] is long expired, and nobody seems to support that implementation), or asking the client to provide a separate exact-match redirect URI for each AS. The second solution is what I do when implementing multi-AS/IdP OAuth clients, but it requires clients to be aware of this vulnerability, and that's a rather tall order.
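As a sketch of that second approach: the client registers one exact-match redirect URI per AS, and rejects any authorization response arriving on a callback that doesn't match the AS the flow was started with. All names and URIs here are illustrative, not from any spec:

```python
# One exact-match redirect URI per authorization server (illustrative values).
REDIRECT_URIS = {
    "https://as1.example.com": "https://client.example.com/callback/as1",
    "https://as2.example.com": "https://client.example.com/callback/as2",
}

def redirect_uri_for(issuer: str) -> str:
    # Use the redirect URI dedicated to this AS when building the
    # authorization request, and remember the issuer in session state.
    return REDIRECT_URIS[issuer]

def validate_callback(expected_issuer: str, callback_uri: str) -> bool:
    # In a mix-up attack, the response from the honest AS arrives on a
    # callback path bound to a different AS; exact matching rejects it
    # before the client ever redeems the code.
    return callback_uri == REDIRECT_URIS[expected_issuer]
```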