What about something like casting a bignum to float? It's easy to imagine a simple implementation optimization where any number that fits in the machine's native integer width is stored as a native value, so e.g. float(100) would be constant time but float(2^64 + 1) might not.
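To make that concrete, here is a minimal C++ sketch of such a representation (the BigNum type and its layout are hypothetical, not taken from any particular library): the small path is a single hardware conversion, while the big path has to walk every limb, so the cost grows with the size of the number.

    #include <cstdint>
    #include <vector>

    // Hypothetical representation: values that fit in a machine word are
    // kept inline; anything larger spills into a vector of 64-bit limbs.
    struct BigNum {
        bool is_small;
        std::int64_t small;                // valid when is_small is true
        std::vector<std::uint64_t> limbs;  // little-endian limbs otherwise

        double to_double() const {
            if (is_small) {
                // Native value: one hardware int-to-float conversion, O(1).
                return static_cast<double>(small);
            }
            // Big path: walk every limb (Horner's method in base 2^64),
            // so the conversion is linear in the number of limbs.
            double result = 0.0;
            for (auto it = limbs.rbegin(); it != limbs.rend(); ++it) {
                result = result * 18446744073709551616.0  // 2^64
                       + static_cast<double>(*it);
            }
            return result;
        }
    };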
What does it mean to cast in non-constant time? Instead of O(1) performance, it's O(N). But in that case, performance is linear with respect to what? With respect to how big a number is being cast?
I think that might be possible in dynamic languages like Lisp that can compile to assembly. Maybe Franz or Allegro Lisp compilers do something like that. For the JVM or .NET, I'm not sure that can happen. For C or C++, every typecast is constant time.
I can see non-constant conversion times, especially with floats. Floats have lots of weird edge cases, especially at the very big and very small scales. If your number is too big, you have to check whether to return infinity or negative infinity. If it's too small, positive or negative zero. Finally, if you're hanging out with tiny numbers near zero, a good floating-point library has to worry about denormal numbers, where the significand gets padded with leading zeroes, sacrificing precision to denote that the number isn't quite zero. So yeah, it would be O(N) with respect to the size of the number.
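Those special values are easy to poke at directly. The snippet below just demonstrates the cases mentioned above (infinity, signed zero, denormals) using standard C++ numeric limits; it isn't a conversion routine.

    #include <cstdio>
    #include <limits>

    int main() {
        // Too big: overflowing the largest finite double gives +/- infinity.
        double biggest = std::numeric_limits<double>::max();
        std::printf("%g %g\n", biggest * 2.0, biggest * -2.0);   // inf -inf

        // Too small: underflow keeps the sign, so you get +0 or -0.
        double smallest = std::numeric_limits<double>::denorm_min();
        std::printf("%g %g\n", smallest / 2.0, -smallest / 2.0); // 0 -0

        // Between zero and the smallest normal value live the denormals,
        // which give up leading precision rather than snapping to zero.
        std::printf("%g %g\n",
                    std::numeric_limits<double>::min(),          // ~2.22507e-308
                    std::numeric_limits<double>::denorm_min());  // ~4.94066e-324
        return 0;
    }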
A conversion from string to int is O(n) in the length of the string (equivalently, O(log N) in the magnitude N of the number). Such a "cast" would be syntactically indistinguishable from built-in casts, e.g. in C++, which allows user-defined conversions.
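As a rough illustration of that point (the Numeric wrapper below is hypothetical, just a sketch of the idea): the user-defined conversion does linear work over the digit string, yet the call site reads exactly like casting a built-in type.

    #include <cstdint>
    #include <string>

    // Hypothetical wrapper: the parse is hidden behind a user-defined
    // conversion operator, so the cast looks just like a built-in one.
    struct Numeric {
        std::string digits;  // decimal digits, most significant first

        explicit Numeric(std::string s) : digits(std::move(s)) {}

        // O(n) in the length of the digit string, despite the cast syntax.
        explicit operator std::int64_t() const {
            std::int64_t value = 0;
            for (char c : digits) {
                value = value * 10 + (c - '0');  // one pass over every digit
            }
            return value;
        }
    };

    int main() {
        Numeric n("9007199254740993");
        // Syntactically indistinguishable from casting a built-in type,
        // but the conversion above does linear work.
        auto v = static_cast<std::int64_t>(n);
        return v == 9007199254740993 ? 0 : 1;
    }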