When you take stock of everything, a lot of these changes are obviously bad from a "big picture" perspective, but it's hard to maintain that perspective, especially when the changes creep in incrementally. The biggest red flag I'd highlight is bureaucracy in all its forms. I don't mean things like code reviews or fixed processes for doing things (although in some cases those can be bad). The real warning sign is when important decisions stop being made by individuals and start being made by rules based on metrics.
Over-reliance on metrics is often a huge red flag. If your continued employment, your bonus, or your promotion prospects rest on how many bugs or tickets you close, how many test cases you automate, etc., that's a problem, because metrics can always be gamed. If people are judged against one another on gameable metrics, then only the people who game the metrics get ahead and everyone else is left by the wayside, to the detriment of morale and of actually getting the right work done. If you tell someone they have to close lots of bugs to show they're doing their job well, they're incentivized to wiggle out of responsibility for a bug or to "fix" bugs with the most expedient hack possible, instead of taking the time to investigate thoroughly as far as their expertise and context allow, doing a root cause analysis, stepping back to examine the meta-context that made the bug possible in the first place, and maybe designing a thorough fix for those problems, beyond just making the one bug go away in the short term.
Good development work can often defy every metric that attempts to measure it. It can look like a net negative number of lines of code written. It can look like spending a month on a seemingly inconsequential bug that turns out to have a very interesting cause, one that reveals a fundamental flaw in the design of the system which takes several months to fix but leads to a huge increase in overall reliability. It can look like days, weeks, or months spent not writing code or fixing bugs at all: maybe that time goes into documentation, design work, research, or picking apart an existing system to learn exactly how it's put together, all of which might end up being hugely valuable. The thing is, there's no metric for "prevented a thousand bugs from being filed over the next year". And that gets at how difficult it is to objectively measure the work of coders.
The other major red flag is when people are treated as interchangeable resources. Does the company seem to value people's time? Does it treat people like human beings? Does the company/employee relationship seem cooperative rather than exploitative? Is the company flexible about working from home, working non-standard hours, and the like? Or does it treat knowledge work like a factory job, caring about hours worked, "butts in seats", mandatory "crunch time", etc.?