Not sure if that's making things "fair". Grep & find are insanely powerful when you're a CLI power user.
Nonetheless, I'm particularly curious in which cases the AI tool can find things that are not easy to find via find & grep (e.g. URLs that are created via string concatenation, i.e. ones that never appear as a string literal in the source code).
Perhaps the larger question there: what's the overall false-negative rate of a tool like this? Are there areas where it's particularly good and/or particularly poor?
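Something like this, for example (an illustrative sketch, not from any real codebase):

    # a literal search catches hard-coded URLs...
    rg -n 'https?://[A-Za-z0-9./_-]+' src/
    # ...but has no way to see an address assembled at runtime, e.g.
    #   endpoint = scheme + "://" + api_host + base_path
    # where no recognizable part of the final URL exists as a literal to match.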
I evaluate a lot of code, ten to twenty applications per year currently, and terminal tooling is my go-to. Mostly the basic stuff: tree, ripgrep, find, wc, jq, things like that. I also use them on top of output from static analysis tooling.
It's not as slick as SQL on an RDBMS, but very close, and it integrates well into e.g. vim, so I can pull in output from the tools directly and add notes as I build up my reports. Finding partial URLs, suspicious strings like API keys, SQL query concatenation and the like is usually trivial.
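To be concrete, the kind of one-liners I mean look roughly like this (illustrative patterns, nowhere near the full set of checks):

    tree -L 2 -d                                   # quick structural overview
    rg -n -i 'https?://|api[_-]?key|secret|password' --glob '!vendor/**' .
    rg -n '"\s*SELECT .*"\s*\+' .                  # naive heuristic for SQL built via concatenation
    jq '.dependencies | keys' package.json         # inventory third-party dependencies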
For me to switch to another toolset, there would have to be very strong guarantees that the output is correct, deterministic, and complete, since that is the core basis for correctness in my risk assessments and value estimations.
edits: brevity & clarity