I don't think this is a naming issue at all. In the provided example, 'Accuracy' is the correct name for the parameter, as that's what the parameter represents: accuracy. The fact that accuracy should be given as a value in the interval [0, 1] should be a property of the parameter's type. In other words, the parameter should not be a float, but a more constrained type that allows floats only in [0, 1].
EDIT: Some of you asked what about languages that don't support such more constrained types, so to answer all of you here: different languages have different capabilities, of course, so while some may make what I proposed trivial, in others it would be almost or literally impossible. However, I believe most of the more popular languages support creation of custom data types?
So the idea (for those languages at least) is quite simple - hold the value as a float, but wrap it in a custom data type that makes sure the value stays within bounds through accessor methods.
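For example, a minimal sketch of that wrapper in C++ (the UnitInterval name and the exception choice are just illustrative, not from the original question):
#include <stdexcept>
// A float constrained to [0, 1]; the only way in is the validating constructor.
class UnitInterval {
public:
    explicit UnitInterval(float v) : value_(v) {
        if (v < 0.0f || v > 1.0f)
            throw std::out_of_range("UnitInterval must be within [0, 1]");
    }
    float get() const { return value_; }
private:
    float value_;
};
// A function can then ask for the constrained type instead of a raw float:
// void SetAccuracy(UnitInterval accuracy);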
velocity() simply returns the first argument, after doing validity checking based on the second argument.
You probably couldn't reasonably use this everywhere that you would use actual constrained types in a language that has them, but you could probably catch a lot of errors just using them in initializers.
This got me thinking: what about a situation where the accuracy is given in a real-life unit? For example, the accuracy of a GPS measurement, given in meters. I've sometimes used names like 'accuracyInMeters' to represent this, but it felt a bit cumbersome.
Edit: Thinking more about it, I guess you could typealias Float to Meters, or something like that, but that also feels weird to me.
More complex type systems absolutely support asserting the units of a value in the type system. For example, here's an implementation of SI types in C++: https://github.com/bernedom/SI
I've used "fraction" for this purpose .. but that isn't general enough. In fact a convention I've used for nearly 2 decades has been varName_unit .. where the part after the underscore (with the preceding part being camel case) indicates the unit of the value. So (x_frac, y_frac) are normalized screen coordinates whereas (x_px, y_px) would be pixel unit coordinates. Others are like freq_hz, duration_secs and so on.
Another thing you can do is define a "METER" constant equal to 1. You can then call your function like this: func(1.5 * METER), and when you need a number of meters, you can do "accuracy / METER". The multiplication and division should be optimized away.
The good thing about that is that you can specify the units you want. For example you can set FOOT to 0.3048 and do "5.0 * FOOT", and get your result back in centimeters by doing "accuracy / CENTIMETER". The last conversion is not free if the internal representation is in meters, but at least you can do it and it is readable.
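A rough sketch of that constants trick (C++ here; the SetAccuracy function is hypothetical):
// Unit constants, with the meter as the internal unit.
constexpr double METER      = 1.0;
constexpr double CENTIMETER = 0.01;
constexpr double FOOT       = 0.3048;

// Hypothetical function that internally stores meters.
void SetAccuracy(double accuracy);

// Call sites spell out the unit; the multiplications are constant folds,
// so they cost nothing:
//   SetAccuracy(1.5 * METER);
//   SetAccuracy(5.0 * FOOT);
// Reading the value back in another unit is a single division:
//   double cm = accuracy / CENTIMETER;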
If you are going to use such distances a lot, at least in C++, you can get a bit of help from the type system. Define a "distance" class with operator overloads, constants, and convenience functions to enforce consistent units. Again, the optimizer should make it no more costly than using raw floats, if that's what you decide to use as an internal representation.
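A bare-bones illustration of such a class (just a sketch, not the SI library linked above):
// Distance stores meters internally; raw doubles only get in and out
// through named constructors and accessors, so the unit stays consistent.
class Distance {
public:
    static Distance Meters(double m) { return Distance(m); }
    static Distance Feet(double ft)  { return Distance(ft * 0.3048); }
    double InMeters() const      { return m_; }
    double InCentimeters() const { return m_ * 100.0; }
    Distance operator+(Distance other) const { return Distance(m_ + other.m_); }
    Distance operator*(double scalar) const  { return Distance(m_ * scalar); }
private:
    explicit Distance(double m) : m_(m) {}
    double m_;
};
// Usage: Distance d = Distance::Feet(5.0); double cm = d.InCentimeters();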
Some languages provide more than just an alias. Eg Haskell lets you wrap your Float in a 'newtype' like 'GpsInMeters'.
The newtype wrapper doesn't show up at runtime, only at compile time. It can be set up in such a way that the compiler complains about adding GpsInMeters to GpsInMiles naively.
> While true, if your language doesn’t support such a type
You'd be surprised where the support is. In C#, you would declare a struct type with one read-only field of type double, and range validation (0 <= x <= 1) in the constructor.
Yes there's a bit of boilerplate - especially since you might want to override equality, cast operators etc. But there is support. And with a struct, not much overhead to it.
I don't think runtime validation is that special. You can bend most languages into this pattern one way or another. The real deal is having an actual compile-time-checked type that resembles a primitive.
Actually what I want is just the reading experience of seeing
"public Customer GetById(CustomerId id)" instead of "public Customer GetById(string id)" when only some strings (e.g. 64 chars A-Z and 1-9) are valid customer ids.
Compile-time validation would be ideal, but validation at the edges, well before that method, is good enough.
The main issue with techniques such as this, which are certainly easy to do, is that if it’s not in the type system and therefore not checked at compile time, you pay a run time cost for these abstractions.
Great that you like typed languages, and ones that allow for such constrained/dependent typing as well.
It seems disingenuous to me to suggest that anyone using other languages doesn't have this problem. And really, there are quite a few languages that do not have this form of typing, and even some reasons for a language not to want this form of typing.
So please, don't answer a question by saying "your question is wrong"; it is condescending and unhelpful.
Equally condescending is saying "It's great that you like X, but I don't so I'm going to ignore the broader point of your argument."
The point remains that the fact that a given parameter's valid values are [0,1] is not a function of its name. You can check the values within the method and enter various error states depending on the exact business rules.
"your question is wrong" is indeed unhelpful, especially as a direct response to someone asking a question.
"here is what seems like a better question" is helpful, especially in a discussion forum separate from the original Q/A.
But if "here is what seems like a better question" is the _only_ response or drowns out direct responses, then thats still frustrating.
> condescending
As a man who sometimes lacks knowledge about things, when I ask a question, please please please err on the side of condescending to me rather than staying silent. (No, I don't know how you should remember my preferences separately from the preferences of any other human)
I'm genuinely sorry if I came across as condescending, that was not my intention at all.
I merely wanted to point out that, in my opinion, this property should be reflected in the parameter's type, rather than the name. Just as, if we wanted a parameter that should only be a whole number, we wouldn't declare it as a float, name it "MyVariableInteger", and hope that the callers would only send integers.
You mentioned that there are quite a few languages that do not permit what I proposed, would you mind specifying which ones exactly? The only one that comes to my mind is assembly?
So, then the user calling the library with foo(3.5) will get a runtime error (or, ok, maybe even a compile time error).
To avoid that, you need to document that the value should be between 0 and 1, and you could do that with a comment line (which the OP wanted to avoid), or by naming the variable or type appropriately: And that takes us back to the original question. (Whether the concept is expressed in the parameter name or parameter type (and its name) is secondary.)
> So, then the user calling the library with foo(3.5) will get a runtime error (or, ok, maybe even a compile time error).
I'm not sure I understand this. See below, but the larger point here is that the type can never lie -- names can and often do because there's no checking on names.
I think what is being proposed is something similar to
newtype Accuracy = Accuracy Float
and then to have the only(!) way to construct such a value be a function
mkAccuracy :: Float -> Maybe Accuracy
which does the range checking, failing if outside the allowable range.
Any function which needs this Accuracy parameter then just takes a parameter of that type.
That way you a) only have to do the check at the 'edges' of your program (e.g. when reading config files or user input), and b) ensure that functions that take an Accuracy parameter never fail because of out-of-range values.
It's still a runtime check, sure, but by having a strong type instead of just Float, you can ensure that you only need that checking at the I/O edges of your program and get absolute assurance that any Accuracy handed to a function will always be in range.
You can do a similar thing in e.g. C with a struct, but unfortunately I don't think you can hide the definition such that it's impossible to build an accuracy_t without going through a "blessed" constructor function. I guess you could do something with a struct containing a void ptr where only the implementation translation unit knows the true type, but for such a "trivial" case it's a lot of overhead, both code-wise and because it would require heap allocations.
Your solution is the ideal and safest one, although in the interest of maximum flexibility, since the goal here seems more documentative than prescriptive, it could also be as simple as creating a type alias. In C for example a simple `#define UnitInterval float`, and then actual usage would be `function FuncName(UnitInterval accuracy)`. That accomplishes conveying both the meaning of the value (it represents accuracy) and the valid value range (assuming of course that UnitInterval is understood to be a float in the range of 0 to 1).
Having proper compile time (or runtime if compile time isn't feasible) checks is of course the better solution, but not always practical either because of lack of support in the desired language, or rarely because of performance considerations.
That's fair, but I do personally have a stance that compiler-checked documentation is the ideal documentation because it can never drift from the code. (EDIT: I should add: It should never be the ONLY documentation! Examples, etc. matter a lot!)
There's a place for type aliases, but IMO that place is shrinking in most languages that support them, e.g. Haskell. With DerivingVia, newtypes are extremely low-cost. Type aliases can be useful for abbreviation, but for adding 'semantics' for the reader/programmer... not so much. Again, IMO. I realize this is not objective truth or anything.
Of course, if you don't have newtypes or similarly low-cost abstractions, then the valuation shifts a lot.
EDIT: Another example: Scala supports type aliases, but it's very rare to see any usage outside of the 'abbreviation' use case where you have abstract types and just want to make a few of the type parameters concrete.
Sure, such other languages have the problem too, it's just that they are missing the best solution. It's possible for a solution to be simultaneously bad and the best available.
In languages with operator overloading you can make NormalizedFloat a proper class with asserts in the debug build and change it to an alias of float in the release build.
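In C++ that could look roughly like this (a sketch; NDEBUG is the standard release-build flag that also disables assert):
#include <cassert>

#ifdef NDEBUG
// Release build: NormalizedFloat is literally just a float, zero overhead.
using NormalizedFloat = float;
#else
// Debug build: a thin wrapper that asserts the value stays in [0, 1].
class NormalizedFloat {
public:
    NormalizedFloat(float v) : v_(v) { assert(v_ >= 0.0f && v_ <= 1.0f); }
    operator float() const { return v_; }
private:
    float v_;
};
#endif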
Similarly, I wonder why geometry libraries don't define separate Point and Vector classes; they almost always use a Vector class for both vectors and points.
I understand the math checks out, and sometimes you want to add or multiply points, for example:
Pmid = (P0 + P1) / 2
But you could cast in such instances:
Pmid = (P0 + (Vector)P1)/ 2
And the distinction would surely catch some errors.
Point - Point = Vector
Point + Point = ERROR
Vector +/- Vector = Vector
Point +/- Vector = Point
Point * scalar = ERROR
Vector * scalar = Vector
Point */x Point = ERROR
Vector * Vector = scalar
Vector x Vector = Vector
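Those rules map directly onto operator overloads; here is a compressed 2D sketch (the 3D cross product is left out):
struct Vector { float x, y; };
struct Point  { float x, y; };

// Point - Point = Vector
Vector operator-(Point a, Point b)  { return {a.x - b.x, a.y - b.y}; }
// Point +/- Vector = Point
Point  operator+(Point p, Vector v) { return {p.x + v.x, p.y + v.y}; }
Point  operator-(Point p, Vector v) { return {p.x - v.x, p.y - v.y}; }
// Vector +/- Vector = Vector, Vector * scalar = Vector
Vector operator+(Vector a, Vector b) { return {a.x + b.x, a.y + b.y}; }
Vector operator-(Vector a, Vector b) { return {a.x - b.x, a.y - b.y}; }
Vector operator*(Vector v, float s)  { return {v.x * s, v.y * s}; }
// Vector * Vector = scalar (dot product)
float  operator*(Vector a, Vector b) { return a.x * b.x + a.y * b.y; }
// Point + Point and Point * scalar simply have no overload, so they
// fail to compile, which is exactly the error-catching being asked for.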
I work in games where these values are extremely common and 'accuracy' wouldn't be very descriptive in a lot of circumstances: explosion radius falloff damage, water flow strength, positional/rotational lerps or easing, and more.
I wish I were commenting here with an answer, but I don't have one. "brightness01" is a common naming convention for values of this type in computer graphics programming, but niche enough that it got raised in review comments by another gameplay programmer.
That's a very good observation. We could still use a (new) term for this common type. Maybe floatbit, softbit, qubit(sic), pot, unitfloat, unit01 or just unitinterval as suggested?
This raises an interesting tangential question: which programming languages allow such restricted intervals as types?
type percentage:=int[0,100]
type hexdigit:=int[0,15]
…
Since this might be overkill, sane programming languages might encourage assert statements inside the functions.
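For instance, a hedged C++ sketch of the assert approach for the percentage case above (the function name is made up):
#include <cassert>

// The assert documents and enforces the contract at runtime (in debug builds).
void SetPercentage(int percentage) {
    assert(0 <= percentage && percentage <= 100);
    // ...
}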
I think this is right, but it's still IMO basically a natural language semantics issue. For instance in haskell (which has a pretty advanced static type system), I would still probably be satisfied with:
-- A float between 0 and 1, inclusive.
type UnitInterval = Float
foo :: UnitInterval -> SomeResultPresumably
foo accuracy = ...
i.e. I think the essential problem in the SO question is solved, even though we have no additional type safety.
A language without type synonyms could do just as well with CPP defines
Looks like it’s actually possible to string something like this together in Python; custom types are of course supported, and you can write a generic validation function that looks for your function’s type signature and then asserts that every UnitInterval variable is within the specified bounds.
You’d have to decorate/call manually in your functions so it’s not watertight, but at least it’s DRY.
Nothing wrong with the name ZeroToOneInclusive. Seems like a great type to have around, and a great name for it. UnitFloat or UnitIntervalFloat or other ideas ITT are cuter but not much clearer.
It's a succinct way to say "No, not types that will automatically work as primitive types (which normally the variable passed for 0 to 1 would be), and that will work with numeric operators".
Or in other words, a succinct way to say "Technically yes, but practically useless, so no".
In Elm (and many other languages, I assume, I'm just most familiar with Elm) there's a pattern called "opaque data type". [0] You make a file that contains the type and its constructor but you don't export the constructor. You only export the getter and setter methods. This ensures that if you properly police the methods of that one short file, everywhere else in your program that the type is used is guaranteed by the type system to have a number between zero and one.
-- BetweenZeroAndOne.elm
module BetweenZeroAndOne exposing (BetweenZeroAndOne, get, set)
type BetweenZeroAndOne
= BetweenZeroAndOne Float
set : Float -> BetweenZeroAndOne
set value = BetweenZeroAndOne (Basics.clamp 0.0 1.0 value)
get : BetweenZeroAndOne -> Float
get (BetweenZeroAndOne value) = value
You would just make the constructor return a possible error if it's not in range, or maybe some specialty constructors that may clamp it into range for you so they always succeed.
It's the same question of, how can you convert a string to a Regexp type if not all strings are valid Regexps?