Programming languages use redundancy extensively to reduce coding errors. If there were no redundancy at all in a language, then any random sequence of characters would be a valid program.
This is completely opposite to the philosophy of the APL-family languages, and those who have learned them seem to also argue strongly that the conciseness reduces errors and makes for higher productivity.
Probably the best example of redundancy in a programming language is unit tests.
Some of my worst experiences have been fighting unit tests that were "overspecified", i.e. testing for inputs which would otherwise be clearly impossible in the context of the whole system, with the result that any trivial change to the code can cause test failures that consume time to diagnose, despite zero change to the ultimate functionality of the system.
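To make that concrete, here is a minimal made-up sketch of the pattern (the function, fields, and error message are all hypothetical, not from any real system):

```python
# Hypothetical example of an "overspecified" test: it pins behaviour for an
# input that the surrounding system already makes impossible.

def order_total(order):
    # Defensive branch that exists mainly to satisfy the test below; upstream
    # validation means quantity can never be negative by the time we get here.
    if order["quantity"] < 0:
        raise ValueError("quantity must be non-negative")
    return order["quantity"] * order["unit_price"]

def test_order_total_rejects_negative_quantity():
    # The test pins the exact exception type and message for an impossible input.
    # Any refactor that changes the internal validation (say, delegating to a
    # shared validator with a different message) fails this test even though the
    # system's real behaviour is unchanged.
    try:
        order_total({"quantity": -1, "unit_price": 10})
    except ValueError as exc:
        assert str(exc) == "quantity must be non-negative"
    else:
        assert False, "expected ValueError"
```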
I have yet to see any language feature that cannot be used to create a steaming pile of parrot droppings by a well-meaning programmer ticking every box of "best practices".
Useful redundancy is a continuum. At one end is adding "unnecessary" null checks; at the other is re-implementing complex business logic as checks. I like to strike a balance.
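As a rough illustration of that continuum (all names here are invented):

```python
from dataclasses import dataclass

@dataclass
class Order:
    total: float
    items: list

def ship(order: Order) -> None:
    # Cheap redundancy: callers and type hints should already guarantee these,
    # but the checks turn a confusing downstream failure into an immediate one.
    if order is None:
        raise ValueError("order must not be None")
    if not order.items:
        raise ValueError("cannot ship an empty order")

    # The far end of the continuum would be re-deriving order.total here from
    # the full pricing rules; that duplicates business logic that already lives
    # elsewhere and has to be kept in sync with every pricing change.
    print(f"shipping {len(order.items)} items, total {order.total}")
```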
"Overspeced" unit tests? They should only be testing the API contracts and all edge cases. And refactoring your code should not result in changes to the unit tests as the purpose of the unit test is to tell you if your refactoring broke something. If you have to change your unit tests when you refactor, you're doing something wrong.
Fighting unit tests is a code smell for testing implementation details or for a poorly designed API that needs to keep changing.
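A contract-level test, by contrast, looks something like this (a made-up example): it pins only observable behaviour, so swapping the implementation underneath cannot break it.

```python
def word_counts(text: str) -> dict:
    # Current implementation; could be replaced by collections.Counter or a
    # regex-based tokenizer without touching the test below.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_word_counts_contract():
    # Only the contract is pinned: empty input, counting, case sensitivity.
    # Nothing about how the counting is done internally.
    assert word_counts("") == {}
    assert word_counts("a b a") == {"a": 2, "b": 1}
    assert word_counts("A a") == {"A": 1, "a": 1}
```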
I've been bitten by "clearly impossible in the context of the whole system" way too many times. That said, I'm not going to do horribly complex validation of implementation details. But sanity checks are worth their weight in gold.
> And refactoring your code should not result in changes to the unit tests as the purpose of the unit test is to tell you if your refactoring broke something. If you have to change your unit tests when you refactor, you're doing something wrong
I disagree a bit here. Without getting pedantic about units and refactors, in practice it is common to want to modify methods that have unit tests for them. Maybe you want to break the method out into two, or break the class into two classes, or you want to improve the method's interface, change the types it takes, its parameters, the structure it returns, etc.
All these changes will most likely break your unit tests. If they didn't break the unit tests for the method, I'd actually be curious what those tests were even testing.
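For instance (a contrived sketch, not anyone's real code): suppose parse_name used to return a (first, last) tuple and the refactor improves the interface to return a small dataclass.

```python
from dataclasses import dataclass

@dataclass
class ParsedName:
    first: str
    last: str

def parse_name(full_name: str) -> ParsedName:
    # After the refactor: returns a structured result instead of a tuple.
    first, _, last = full_name.partition(" ")
    return ParsedName(first=first, last=last)

def test_parse_name():
    # Written against the old tuple-returning interface. After the refactor this
    # fails with a TypeError (ParsedName is not iterable) and has to be updated
    # along with every other caller, even though nothing "broke" functionally.
    first, last = parse_name("Ada Lovelace")
    assert (first, last) == ("Ada", "Lovelace")
```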
On top of this, in practice people often unit test private methods too, so you've often got to deal with those types of tests as well, even if you're only touching the inside of a class and leaving the public methods intact.
And beyond refactoring, sometimes you are simply adding a new feature or fixing a bug, and this will result in you maybe choosing to modify the behavior of some existing class and its methods. These changes will also break your unit tests.
What all the use cases I just listed have in common is that they are good reasons to break the unit tests, and they will result in people simply going and changing the tests to reflect the new behavior.
> testing for inputs which would otherwise be clearly impossible in the context of the whole system
Off the top of my head...
1. When reusing code, unit tests that specify unexpected inputs are useful because you don't know the future use case. That's a rather weak reason to write the unit tests now, but a reason.
2. There is no way to enforce what counts as an expected input to a function once a system grows large enough. Tracing every change in the upstream dataflow through the system by investigating every unit test is impractical. Being defensive up front is reasonable (see the sketch after this list).
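A small sketch of what "defensive up front" might look like (the function and its rules are invented for illustration):

```python
def archive_users(user_ids: list[int]) -> int:
    # Callers are supposed to pass a non-empty, deduplicated list, but once the
    # call sites multiply there is no practical way to audit them all, so the
    # function restates its own assumptions as cheap checks.
    if not user_ids:
        raise ValueError("archive_users called with no user ids")
    if len(set(user_ids)) != len(user_ids):
        raise ValueError("archive_users called with duplicate user ids")

    archived = 0
    for user_id in user_ids:
        # ... actual archiving elided ...
        archived += 1
    return archived
```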
...and thus you shouldn't care about it. I've seen enough problems caused by attempts at "futureproofing" to know that YAGNI is always a good choice. When/if it does matter, then you can do the work; but requirements inevitably seem to change in such a way that your attempts to anticipate them result in code that's structured to be flexible in exactly the wrong way, adding unnecessary complexity or even hindering the desired changes.
Having types isn't a binary choice, e.g. generics, plain objects, functions, pointers. There are inevitably gaps in how far the type system can ensure what your outputs will be (even whether the program halts).
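Roughly what that looks like in practice (illustrative Python, not from the thread):

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def dedupe(items: Sequence[T]) -> list[T]:
    # The signature rules out whole classes of mistakes: you get a list back,
    # with the same element type you put in. It cannot express that the result
    # preserves order, contains no duplicates, that items must be hashable, or
    # that the call terminates; those guarantees have to live in tests or
    # runtime checks instead.
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```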
APL is a wonderfully imaginative language. But it is difficult to decipher, and I would imagine rather difficult to debug. (Not to mention requiring a specialty keyboard...)