I agree with the others here that classic assert statements are in a weird spot. You enable them when testing, in other words when the data flowing through your program is likely to be constrained and well-behaved. And then you turn them off when your program is about to meet real data (production).
There's a wrinkle here: you might want to log certain things in production instead of aborting with an assertion failure. But let's assume for now that failing hard and fast is preferred.
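For concreteness, this is the classic global switch (here in Python, where `-O` strips every `assert` in the process at once):

```python
def apply_discount(price: float, discount: float) -> float:
    # Holds during tests, where inputs are constrained and well-behaved
    assert 0.0 <= discount <= 1.0, f"discount out of range: {discount}"
    return price * (1.0 - discount)

print(apply_discount(100.0, 1.5))
# python script.py     -> AssertionError: discount out of range: 1.5
# python -O script.py  -> prints -50.0; the check vanished in "production" mode
```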
Basically there are three cases:
1. The performance hit of the assertion checks is fine in production
2. The checks have a noticeable cost, but one you can live with
3. The cost cannot be tolerated
For number two there is some space to play with.
What I've wanted is something more fine-grained. Not a global on/off switch but a way to turn on asserts for certain modules/classes/subsections, certain kinds of checks, and so on. It would also be nice to get some coverage data for assertions which have lived in a deployed program for months. You could then use that data to change the program (hopefully you can toggle these dynamically); maybe there is some often-hit assertion with a noticeable impact which checks a code path that has been stable for months. Turning it off would not mean losing that experience wholesale if you have some way to store the data. Imagine a history tool that you can load the data about this code path into: you see what data travels through it and what range it covers. Then you have empirical evidence, across however many runs (in various places), that this assertion is never triggered. Which you can use to make a case about the likelihood of a regression if you turn it off.

That assumes that the code path is stable. You might need to re-enable the assertion if the code path changes.
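A minimal sketch of what I mean, assuming nothing beyond the standard library (the registry and its names are hypothetical, not a real facility): assertions keyed per module, toggled at runtime, with hit counts you can later use as evidence.

```python
from collections import Counter

class AssertRegistry:
    def __init__(self):
        self.enabled: dict[str, bool] = {}   # per-module switch
        self.hits = Counter()                # how often each check ran
        self.failures = Counter()            # how often each check failed

    def check(self, module: str, cond: bool, msg: str = "") -> None:
        if not self.enabled.get(module, False):
            return   # disabled (though callers still pay to evaluate cond;
                     # a real version might take a lambda instead)
        self.hits[module] += 1
        if not cond:
            self.failures[module] += 1
            raise AssertionError(f"{module}: {msg}")

registry = AssertRegistry()

# Toggle dynamically, per module, while the program runs:
registry.enabled["billing.invoices"] = True
registry.check("billing.invoices", True, "total must be non-negative")

# Months later: hits high, failures zero -> evidence for turning it off.
print(registry.hits["billing.invoices"], registry.failures["billing.invoices"])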
As a concrete example: you are working with generated code, in a language which doesn't support macros or some other, better alternative to code generation. You need to assert that the generated code is true to whatever it is modelling and doesn't fall out of sync. You could enable some kind of reflection code that checks this every time the generated code interacts with the rest of the system. Eventually you end up with twenty classes like that and performance degrades. Well, you could run these assertions until you have enough data to argue that the generated code is correct with a high degree of certainty.

Then every time you need to regenerate and edit the code, you would turn the assertions back on.
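Something along these lines, say, in Python (the schema format and the UserRecord/assert_in_sync names are made up for illustration; pretend the class was emitted by a generator):

```python
import dataclasses

SCHEMA = {"id": int, "name": str, "active": bool}   # the source of truth

@dataclasses.dataclass
class UserRecord:          # pretend this file is generated from SCHEMA
    id: int
    name: str
    active: bool

def assert_in_sync(cls, schema) -> None:
    # Reflect over the generated class and compare it to its model
    fields = {f.name: f.type for f in dataclasses.fields(cls)}
    assert fields == schema, (
        f"{cls.__name__} drifted from its schema: {fields} != {schema}"
    )

assert_in_sync(UserRecord, SCHEMA)   # cheap once, costly with twenty classes
```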
Logging systems often have the specificity you're describing, with selectors for modules and so on. You could extend a logging system to handle asserting: "Log the current state, which should never be Running."
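Something like this, where the logger's per-module level decides whether the check even runs (the log_assert helper is just a sketch, not a real API; the rest is stock Python logging):

```python
import logging

def log_assert(logger: logging.Logger, cond, msg: str) -> None:
    if not logger.isEnabledFor(logging.DEBUG):
        return                      # this module's asserts are switched off
    if not cond():                  # lambda, so the check is free when off
        logger.error("assertion failed: %s", msg)
        raise AssertionError(msg)

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scheduler.worker")
log.setLevel(logging.DEBUG)         # turn asserts on for this module only

state = "Running"
log_assert(log, lambda: state != "Running",
           f"current state should never be Running, got: {state}")  # fires
```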
Thanks. Yeah, I've been having similar thoughts about logging systems. Right now we (at my job) configure the log level through compile-time switches. It would be nicer to be able to toggle things dynamically; yes, including per module instead of just "debug" or "info".
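In Python, for instance, the stock logging module can already do that kind of per-module, runtime toggling, no recompile needed:

```python
import logging

logging.basicConfig(level=logging.INFO)

logging.getLogger("payments").setLevel(logging.DEBUG)     # chatty where needed
logging.getLogger("payments.gateway").debug("visible")    # inherits from "payments"
logging.getLogger("ui").debug("suppressed")               # still at INFO

# Later, at runtime (e.g. from an admin endpoint), dial it back down:
logging.getLogger("payments").setLevel(logging.WARNING)
```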