That is nice and simple, but it makes ten comparisons for each character in s, where only two are needed. Of course, it would be a good approach if the set of characters you were testing against were not contiguous, unlike 0..9.
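To make the comparison-count point concrete: assuming the check under discussion is a membership test against the digit characters, the two approaches might look like this in Python (function names are hypothetical, for illustration only):

```python
def all_digits_membership(s: str) -> bool:
    # Membership test: worst case ten character comparisons per character of s.
    return all(c in "0123456789" for c in s)

def all_digits_range(s: str) -> bool:
    # Range test: exactly two comparisons per character,
    # exploiting that '0'..'9' are contiguous code points.
    return all("0" <= c <= "9" for c in s)
```

Both return the same answer on every input; the difference is only in how much work is done per character.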
Correct, but we're talking about the "idiomatic" solution, not the "most efficient" one. Input validation is usually not time-critical, and in those cases code that clearly shows its intent is much better than code that is merely optimal.
Good news, then: it's also linear time, just like the marginally faster but enormously grotesque for loop provided previously. I would take that clarity of intent a hundred times over squeezing out a couple of comparisons.
You raise an interesting point. It got me thinking about how big-O notation has failed us in some ways: it teaches us to ignore constant factors.
In big-O terms, an algorithm that makes 1000 comparisons per element is no different from one that makes a single comparison per element: both are linear time. But you can't deny that one of them will likely take about 1000 times as long as the other.
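A contrived sketch of that point (Python, hypothetical functions): both of these are O(n), yet one does roughly a thousand times the per-element work, and a benchmark would show wall-clock times apart by a similar factor.

```python
def one_comparison(xs):
    # One comparison per element: O(n) with a small constant factor.
    return sum(1 for x in xs if x > 0)

def many_comparisons(xs):
    # ~1000 comparisons per element: still O(n), same result,
    # but with a far larger constant factor.
    return sum(1 for x in xs if all(x > -k for k in range(1000)))
```

Timing them with, say, the `timeit` module makes the constant factor visible even though big-O calls them identical.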
Of course, like you, I favor simple and readable code over grotesque code that is hard to understand and mentally verify.
Sure, asymptotic time complexity only describes asymptotic behavior. On the other hand, I have had far more cases of things blowing up due to cubic or even quadratic time complexities than linear ones. Linear is what you expect: if it takes a long time for a toy input, it'll take twice as long for a real input at double the size. Not so with the superlinear ones: your toy problem might work fine, but cubed? Ain't happening this millennium. Combined with the constant factor also tending to grow as the problem size grows, this really is a potent foot-gun.
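The doubling argument above can be written down as a one-liner (Python, hypothetical name): when the input size grows by some factor, the cost of an algorithm that scales as n**exponent grows by that factor raised to the exponent.

```python
def scaling_factor(exponent: float, growth: float = 2.0) -> float:
    # How much longer a run takes when the input grows by `growth`
    # (default: doubles), for cost proportional to n ** exponent.
    return growth ** exponent

# Doubling the input doubles a linear algorithm's cost,
# quadruples a quadratic one's, and multiplies a cubic one's by eight.
```

So a cubic algorithm that handles a toy input in a second needs a thousand seconds once the input is ten times larger, which is exactly the foot-gun described above.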