Hacker News

I agree with what you wrote, but I've worked with people who balk at even automatic gates. "Pass PEP 8, as enforced by pycodestyle" was practically too much for one coworker I had, who, when it was pointed out in review that "the automated CI step is failing, can you please make it green?", would fix like one violation, then just ping their reviewer again. And around the mulberry bush we went. Various odd arguments like, "well, my IDE has pycodestyle built in, and it's fine" (it seemed like their IDE had its own, separate, more lenient config? … but like, nobody cares about your IDE…?) etc.

At $different_employer, there was lots of gnashing of teeth pertaining to `yarn audit` running against the codebase and failing people's builds. Some people fought tooth and nail to get it to not fail the build, because they didn't like that their new feature, which didn't introduce any vulns. per se, was failed. Later, other people did much wailing over being "confused": the yarn audit step showed a red X, and they couldn't work out why it was breaking on their feature branch. It wasn't: it was failing, but marked to be ignored, so the build continued. Other, required steps were failing (and were also marked with red Xs). This was "all too confusing", and people petitioned for yarn audit's CI step to just "fail" with a green checkmark. You can imagine, I'm sure, how many vulns. were addressed after that point.

I've also worked with devs who are just amazing: aside from being brilliant in how they architect their code/systems in the first place, they have a definite tendency to view lint errors as just "oh, oops, silly me, fixed" and to just Get It Done.

QC is hard.

(Regarding our audit step, too: we also had an ignore list. You could just add the vuln. to the ignore list, and that would technically placate CI. Obviously not the best … but CI didn't demand that you fix whatever the issue was.)
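Mechanically, that kind of ignore list can be as simple as filtering the advisories in the audit's JSON report against a set of accepted IDs before deciding whether to fail the build. A minimal sketch, assuming `yarn audit --json`-style line-delimited output; the exact field names (`auditAdvisory`, `github_advisory_id`) vary by yarn version and are illustrative here, not a guaranteed schema:

```python
import json

# Hypothetical ignore list: advisory IDs the team has chosen to accept.
IGNORED_ADVISORIES = {"GHSA-aaaa-bbbb-cccc"}

def unignored_advisories(report_lines):
    """Return advisory IDs from the audit report that are NOT ignored.

    Assumes one JSON object per line, with advisory entries shaped
    roughly like:
      {"type": "auditAdvisory",
       "data": {"advisory": {"github_advisory_id": "GHSA-..."}}}
    """
    found = set()
    for line in report_lines:
        entry = json.loads(line)
        if entry.get("type") == "auditAdvisory":
            advisory_id = entry["data"]["advisory"]["github_advisory_id"]
            if advisory_id not in IGNORED_ADVISORIES:
                found.add(advisory_id)
    return found

if __name__ == "__main__":
    sample = [
        '{"type": "auditAdvisory", "data": {"advisory": {"github_advisory_id": "GHSA-aaaa-bbbb-cccc"}}}',
        '{"type": "auditAdvisory", "data": {"advisory": {"github_advisory_id": "GHSA-dddd-eeee-ffff"}}}',
    ]
    # Fail CI only when something survives the ignore list.
    print(sorted(unignored_advisories(sample)))
```

A wrapper like this is what "technically placate CI" amounts to: the gate still runs, but the team, not the tool, decides which known findings are acceptable.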




> Some people fought tooth and nail to get it to not fail the build, because they didn't like that their new feature, which didn't introduce any vulns. per se, was failed.

Well, here is the problem this introduces. Who is supposed to fix this and which PM's bucket of time does it come out of? This implementation just makes it seem like the answer is "whichever team is unlucky enough to make a PR when it is discovered." The perverse incentives of that are enormous.

> You can imagine, I'm sure, how many vulns. were addressed after that point.

This tells me that the responsibility isn't specifically assigned. That seems to be the broader issue, with the audit in the pipeline being a managerial attempt to get vulns somehow fixed without doing the work of figuring out how they will be fixed and who will do it.

Someone actually needs to be responsible for it and be given time to make the changes. It seems like that was not done here.


> Who is supposed to fix this

The team of engineers responsible for the code against which a vuln. has been detected would be my answer.

> This tells me that the responsibility isn't specifically assigned.

Well, no, for the very reason you've highlighted: PMs will be damned if that time comes out of their bucket. That is a problem, and I rather doubt this subthread'll solve the idiocy that is PMs.

Nonetheless, if we accept that vulns. are something the industry should pay attention to, i.e., that "keep stuff on secure versions of dependencies" is actually a best practice the industry should adhere to, then someone has to triage the audit report, and someone else still has to do the deed of correcting it.

Yet both your comment and a sibling have advocated the position of "no, drown out the test result, don't block builds". If builds aren't blocked, there is zero impetus toward ever fixing vulns., so they don't get fixed, ever. You cannot, IMO, ignore human nature when setting up these kinds of QC checks in CI: if you make them optional, they're as good as dead, because people are lazy.

You're missing the larger point of my comment, too: there is no sensible setting for that CI step. Say I don't care what the CI step does, and that I don't attempt to take a position in this idiotic debate. I can't: literally any configuration boils down to one of two cases: (a) we flag the problem, and devs don't like that, and whine and complain, or (b) we ignore the problem, and security people don't like that, and whine and complain.

I think you're also forgetting how software is actually developed in the wild by many companies: this example, like so many, uses a "monorepo". I don't like this (I'd argue for single-purpose repos, where it'd be obvious who owns a vuln., since the whole repo is owned by one team), but nonetheless I'm pretty consistently outvoted on this point, and the majority of firms out there seem to go the monorepo route. So, in a monorepo, who is responsible for a vuln. against a third-party dependency? (It can be figured out, but the answer is certainly not as straightforward anymore!)

(Single-purpose repos aren't without issues here either: often I find that establishing these sorts of "common" CI steps is hugely problematic. You sort of have to copy-paste a lot of code between repos, keeping that step's code updated is then a PITA, and many CI systems lack good support for "use this common step, perhaps not even located in this repo".)
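For what it's worth, some CI systems have grown support for exactly that sharing pattern. A hedged sketch, assuming GitHub Actions, where a "reusable workflow" lets each single-purpose repo call one shared audit step instead of copy-pasting it (the org, repo, and file names here are hypothetical):

```yaml
# In each single-purpose repo: call the shared step rather than duplicating it.
name: audit
on: [pull_request]
jobs:
  audit:
    # Hypothetical central repo holding the common CI step;
    # the called workflow must declare `on: workflow_call`.
    uses: example-org/ci-common/.github/workflows/yarn-audit.yml@main
```

The shared step's code then lives and gets updated in one place, at the cost of a new cross-repo dependency.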


To me it seems like the better way to run such an audit is against the main branch, periodically, with it being someone's job to triage failures at high priority. Attaching it to unrelated changes is half the problem. But that doesn't fit the common CI workflow, which assumes breakage comes from code changes, not from updates to an external vulnerability database.
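As a concrete sketch of that approach, assuming GitHub Actions as the CI system (the schedule and checkout steps are real features; how failures get routed to the owning team is left open):

```yaml
# Hypothetical scheduled audit: runs nightly against main,
# decoupled from feature-branch builds.
name: nightly-audit
on:
  schedule:
    - cron: "0 6 * * *"   # every day at 06:00 UTC
  workflow_dispatch: {}    # allow manual runs during triage
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
      - run: yarn install --frozen-lockfile
      # A failure here should page or ticket the owning team,
      # rather than blocking an unrelated PR.
      - run: yarn audit
```

This keeps the gate alive (something red exists somewhere when a new advisory lands) without failing whichever feature branch happened to build that day.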


> Some people fought tooth and nail to get it to not fail the build, because they didn't like that their new feature, which didn't introduce any vulns. per se, was failed.

Well... yeah? They're working on a feature and get arbitrarily told that they're not allowed to merge the feature until they go fix some completely unrelated vulnerability problem. How is that reasonable?



