Regulations are like lines of code in a software project. They're good if well written, bad if not, and what matters even more is how well they fit into the entire solution.
A major difference with regulations is that there's no guaranteed executor of those metaphorical lines of code. If the law gets enforced, the metaphor holds; if nobody enforces it, it loses meaning.
There's a reason we call them judges. Selective enforcement exists for a reason: lawmakers can't anticipate everything. Just look at how badly zero-tolerance policies in schools have turned out, with things like students getting expelled for biting a sandwich into the shape of a gun.
The world isn't black and white. Flexibility, including selective enforcement, is necessary in a just system.
The reason that selective enforcement exists is that it is very hard to avoid having rules selectively enforced.
But the history of selective enforcement strongly suggests that it does not usually lead to just results. More often, it is something that unaccountable officials find easy to exploit for questionable purposes.
For a notable example, witness how selective enforcement during the War on Drugs was used to justify mass incarceration of blacks, even though actual rates of drug usage were similar in black and white communities.
Yes, I would argue it would have been better for more people to have been incarcerated, because that would have brought greater focus to the injustice, and the law would have been changed. Selective enforcement interferes with the feedback mechanism that would otherwise make the law work better.
Any instance of selective enforcement being necessary is ipso facto evidence of a bad law. This is completely orthogonal to the matter of the world not being black and white - you're right, it's not, but a good law recognizes that fact, and laws can also be amended as needed.
> Any instance of selective enforcement being necessary is ipso facto evidence of a bad law.
All laws are in some degree bad; perfect laws do not exist.
Some laws are useful and produce more good than harm in the concrete situation in which they exist.
Should laws be improved where possible? Yes. Does the need for selective enforcement indicate a problem? Yes. Does it provide sufficient information to determine the precise form of a better law to replace the one it shows a problem with? Very rarely.
Legislation is much worse than organically derived common law: the common law comprises decisions that apply to particular conditions with all their details, while legislation consists of mere idealizations.
> Any instance of selective enforcement being necessary is ipso facto evidence of a bad law.
Yep, and while we fix that bad law we need judges to be able to say "I won't apply that" or "I won't sentence you to jail for this". That's kinda the point.
That's what jury nullification is for, in principle.
Allowing judges to not enforce bad laws turns them into unelected legislators. It's also worse from a corruption perspective because a single bought judge in the right place is much more cost effective than having to buy a new randomly selected jury at every trial.
Not only in the executive/enforcement, but in the actual impact of the regulation in practice as applied by millions in a distributed system. Regulations influence decision paths as opposed to encoding deterministic code paths.
The problem with laws that both the enforcer and the subject (enforcee?) agree are bad is that enforcement becomes variable. And that leads to corruption. Every damn time.
The fix for corruption is vote the bums out of office. It is not to go whole hog into blind application of the law.
Think about how hard it is to write code that has no bugs. Now imagine you're using English and working with a system with so many parameters and side effects that you can't possibly anticipate all eventualities.
And now you want to rigidly apply your operators to this parameter space?
Selective enforcement is necessary for justice, because no law is perfectly just, and selective enforcement helps move toward justice.
It unfortunately also opens the door to corruption eventually. So you just have to stay vigilant. Because a rigid system with no selective enforcement has no fix for injustice other than "live with it."
> The fix for corruption is vote the bums out of office.
That doesn’t seem to be working.
I argue there’s an acceptable level of corruption, only the particular flavours change from time to time.
Come out of government better off than when you went in? Fine, good on ya. No need to tell us how you're going about it while you're going about it.
Learn to be at least a little bit discreet, and at least do something occasionally that comes across as good for the average person.
You could also optimize everything for future updates that optimize things even further for even more updates...
Hmm... that was supposed to be a joke, but our law-making dev team isn't all that productive, to put it mildly. Perhaps some of that bloat would be a good thing until we're brave enough to do the full rewrite.
Ah, but "simplicity" is not necessarily "fewest lines of code".
Code is first and foremost for human consumption. The compiler's job is to worry about appeasing the machine.
(Of course, that's the normative ideal. In practice, the limits of compilers sometimes require us to appease the architectural peculiarities of the machine, but this should be seen as an unfortunate deviation and should be documented for human readers when it occurs.)
This is just a belief about code, and one of many. Another belief is that code and computer systems are inseparable, and that the most straightforward and simple code is code that leverages and makes sense for its hardware.
As in, you can pretend hardware doesn't exist, but that doesn't actually change anything about the hardware. So you are then forced to design around the hardware without necessarily knowing that's what you're doing.
Exhibit A: distributed systems. Why do people keep building distributed systems? Monoliths running on one big machine are much simpler to handle.
People keep building distributed systems because they don't understand, and don't want to understand, hardware. They want to abstract everything, have everything in its own little world. A nice goal.
But in actuality, abstracting everything is very hard. And the hardware doesn't just poof disappear. You still need network calls. And now everything is a network call. And now you're coordinating 101 dalmatians. And coordination is hard. And caching is hard. And source of truth is hard. And recovery is hard. All these problems are hard, and you're choosing to do them, because computer hardware is scary and we'd rather program for some container somewhere and string, like, 50 containers together.
> code and computer systems are inseparable, and the most straightforward and simple code is code that leverages and makes sense for its hardware
You're missing the point. Code is separable from hardware per se, even if in practice the two typically co-occur and concerns about the latter leak into the former. The hardware is in the service of our code, not our code in service of the hardware. Targeting hardware is not, in fact, the most straightforward option, because you destroy portability and obscure the code's meaning with distracting, tangential architectural minutiae.
> you can pretend hardware doesn't exist but that doesn't actually change anything about the hardware
You're mischaracterizing my claim. I didn't say hardware doesn't matter. Tools matter - and their particular limitations are sometimes felt by devs acutely - but they're not the primary focus.
My claim was that code is PRIMARILY for human consumption, and it is. It is written to be read by a person first and foremost. Unreadable but functioning code is worthless. Otherwise, why have programming languages at all? Even C is preposterously high-level if code isn't for human consumption. Heck, even assembly semantics is full of concepts that have no objective reality in the hardware, or no direct counterpart there. Hardware concerns only enter the picture secondarily, because the code must be run on it. They are a practical concession to the instrument.
So, in practice, you may need to be concerned with the performance/memory characteristics of your compiled code on a particular architecture (which is actually knowledge of the compiler and how well it targets the hardware in question with respect to your implementation). Compilers generally outperform human optimizations, of course, and at best, you will only be using a general knowledge of your architecture when deciding how to structure your implementation. And you will be doing this indirectly via the operational semantics of the language you're using, as that is as much control as you will have over how the hardware is used in that language.
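To make that concrete, here's a minimal sketch (function names are my own, purely illustrative). The only "architecture knowledge" involved is the general fact that sequential memory access is cache-friendly, and it is expressed entirely through loop structure in standard C, via the language's semantics, not through anything hardware-specific:

```c
#include <stddef.h>

/* Row-major traversal: rows are contiguous in memory, so this walks
   memory sequentially. The cache-friendliness is implied by loop order
   alone; there is no hardware-specific code here. */
double sum_row_order(const double *m, size_t rows, size_t cols) {
    double total = 0.0;
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            total += m[r * cols + c];
    return total;
}

/* Same result, but strides across rows: for large `cols`, most machines
   will touch a fresh cache line on nearly every access. */
double sum_column_order(const double *m, size_t rows, size_t cols) {
    double total = 0.0;
    for (size_t c = 0; c < cols; c++)
        for (size_t r = 0; r < rows; r++)
            total += m[r * cols + c];
    return total;
}
```

Both functions mean the same thing to a reader; the structural choice between them is the "general knowledge of your architecture" at work.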
> Exhibit A: distributed systems. Why do people keep building distributed systems? Monoliths running on one big machine are much simpler to handle.
In principle, you can write your code as a monolith, and your language's compiler can handle the details of distributing computation. This is up to the language's semantics. Think of Erlang for inspiration.
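I can't show Erlang-style distribution in C, but OpenMP gives the same flavor in miniature: annotate monolithic code and let the compiler and runtime deal with the parallel plumbing. A minimal sketch (shared-memory parallelism, not distribution proper, but the analogous idea):

```c
#include <stdio.h>

/* The source stays a sequential-looking monolith; the toolchain/runtime
   handles splitting the work across threads and combining the results.
   Build with: cc -fopenmp example.c */
int main(void) {
    double total = 0.0;
    #pragma omp parallel for reduction(+ : total)
    for (int i = 1; i <= 1000000; i++)
        total += 1.0 / i;   /* harmonic sum, just to have some work */
    printf("total = %f\n", total);
    return 0;
}
```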
> People keep building distributed systems because they don't understand, and don't want to understand, hardware.
Unless you're talking about people who misuse "Big Data" tech when all they need is a reasonably fast bash script, that's not why good developers build distributed systems. And even then, it's not some special ignorance of hardware that leads to distributed systems being used where they're not necessary, but ignorance of their complexity, and ignorance of whether the domain the dev is operating in actually benefits from a distributed design.
> But in actuality, abstracting everything is very hard. And the hardware doesn't just poof disappear. You still need network calls. And now everything is a network call. And now you're coordinating 101 dalmatians. And coordination is hard. And caching is hard. And source of truth is hard. And recovery is hard. All these problems are hard, and you're choosing to do them, because computer hardware is scary and we'd rather program for some container somewhere and string, like, 50 containers together.
This is neither here nor there. Not only are "network calls" and "caching" and so on abstractions, they're not hardware concerns. Hardware allows us to simulate these abstractions, but whatever limits the hardware imposes are - you guessed it - reflected in the abstractions of your language and your libraries. And more importantly, none of this has any relevance to my claim.
> Code is first and foremost for human consumption. The compiler's job is to worry about appeasing the machine.
Tangentially, it continues to frustrate me that C code organization directly impacts performance. Want to factor out that code? Pay the cost of a new stack frame and a potentially non-local jump (bye, ICache!). Want it not to do that? Add more keywords (`inline`) and hope the compiler honors them.
(I kind of understand the reason for this. Code bloat is a thing, and if everything were inlined, the resulting binary would be 100x bigger.)
`inline` in C has very little to do with inlining these days. You most certainly don't need it to have functions in the same translation unit inlined, and LTO will inline across units as well. The heuristics generally don't care whether the function is marked `inline`, only how complex it is. If you actually want to reliably control inlining, you use things like `__forceinline` or `[[gnu::always_inline]]`.
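For anyone who hasn't seen these knobs, a minimal C sketch (GCC/Clang attribute spellings assumed; MSVC spells the forced form `__forceinline`; function names are made up):

```c
/* None of these attributes change what the functions compute;
   they only bias or override the compiler's inlining decision. */

/* Plain `inline`: mostly a linkage matter in modern C; the optimizer's
   own heuristics decide whether the body is actually inlined. */
static inline int add_hint(int a, int b) { return a + b; }

/* Forces inlining at call sites, heuristics notwithstanding. */
static inline __attribute__((always_inline)) int add_forced(int a, int b) {
    return a + b;
}

/* The opposite hint: keep a cold helper out of hot code. */
static __attribute__((noinline)) int add_cold(int a, int b) {
    return a + b;
}

int demo(int x, int y) {
    return add_hint(x, y) + add_forced(x, y) + add_cold(x, y);
}
```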
Regarding code size, it's not just that the binary becomes larger; overly aggressive inlining can actually have a detrimental effect on performance for a number of reasons.