I don't see any functions in the OP's library that would require dedicated UTF-8 handling. The string length is given in bytes, not characters or codepoints, and there's no functionality to give you the character at the n-th position, etc. You can easily implement all Unicode-specific functionality in a separate library and use it together with the OP's library. IMHO that's even preferable.
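As a rough illustration of that layering (a sketch in plain C, not the OP's actual API): codepoint-level operations can sit on top of any byte-oriented string just by decoding UTF-8 lead bytes. The `u8_count` helper below is hypothetical and assumes valid UTF-8 input.

```c
#include <stddef.h>

/* Hypothetical helper layered on top of a byte-oriented string:
 * counts UTF-8 codepoints by skipping continuation bytes (10xxxxxx).
 * Assumes valid UTF-8; a real library would also validate. */
static size_t u8_count(const char *s, size_t nbytes) {
    size_t n = 0;
    for (size_t i = 0; i < nbytes; i++) {
        if (((unsigned char)s[i] & 0xC0) != 0x80) /* not a continuation byte */
            n++;
    }
    return n;
}
```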
Yes, but don't call it a string library then. Strings should handle text, and text is Unicode now. Unicode needs to be normalized and needs support for case-insensitive comparison.
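To make the normalization point concrete, here is a minimal sketch (plain C, no library assumed) of why bytewise comparison isn't enough: the same user-visible "é" can be encoded precomposed (NFC) or decomposed (NFD), and `strcmp` sees two different strings.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* The same user-visible "é" in two canonical forms: */
    const char *nfc = "\xC3\xA9";  /* U+00E9, precomposed              */
    const char *nfd = "e\xCC\x81"; /* U+0065 + U+0301 combining accent */

    /* Bytewise comparison sees two different strings of different length. */
    printf("strcmp:  %d\n", strcmp(nfc, nfd) != 0);              /* 1      */
    printf("lengths: %zu vs %zu\n", strlen(nfc), strlen(nfd));   /* 2 vs 3 */
    return 0;
}
```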
And it's not easy. I implemented the third library of its kind. First there was ICU, which is overly bloated: you don't need 30 MB for a simple string library. Then there is libunistring, which has overly slow iterators, making it unusable for coreutils. And then there's my safelibc, which is small and fast, but only handles wide chars, not UTF-8.
I fixed and updated the musl case-mapping, making it 2x faster, but it hasn't been merged yet. And there's not even a properly specified wcscmp/wcsicmp for finding strings. glibc is an overall mess; I won't touch that. wcsicmp/wcsfc/wcsnorm aren't even in POSIX.
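To illustrate the gap (a naive sketch, not the musl or safelibc implementation): the obvious `towlower`-based comparison only does simple per-character case mapping, which can't express full Unicode case folding.

```c
#include <wchar.h>
#include <wctype.h>

/* Naive case-insensitive wide-string compare using per-character
 * simple case mapping. This is roughly what wcscasecmp() does, and
 * it is NOT Unicode-correct: full case folding can change string
 * length (e.g. U+00DF 'ß' folds to "ss"), which towlower() cannot
 * express, so L"straße" and L"STRASSE" compare as different here. */
static int naive_wcsicmp(const wchar_t *a, const wchar_t *b) {
    while (*a && towlower((wint_t)*a) == towlower((wint_t)*b)) {
        a++;
        b++;
    }
    return (int)(towlower((wint_t)*a) - towlower((wint_t)*b));
}
```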
Hey, even C contains a locale-dependent string comparison, namely `strcoll` (since 1990!).
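For reference, a minimal `strcoll` sketch; the locale name is an assumption and may not be installed on every system:

```c
#include <locale.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Bytewise: 'B' (0x42) sorts before 'a' (0x61). */
    printf("strcmp:  \"a\" %s \"B\"\n", strcmp("a", "B") < 0 ? "<" : ">");

    /* Locale collation typically orders case-insensitively first,
     * so "a" < "B". "en_US.UTF-8" is an assumed locale name. */
    if (setlocale(LC_COLLATE, "en_US.UTF-8"))
        printf("strcoll: \"a\" %s \"B\"\n", strcoll("a", "B") < 0 ? "<" : ">");
    return 0;
}
```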
I admit the two words "string" and "text" are now interchangeable. But that doesn't mean strings have fewer requirements; people are just expecting more out of strings.