Unfortunately, many variable-length integer encodings aren't well thought out. The varint encoding used in Protocol Buffers, for instance, allows multiple valid encodings of any particular integer (there's no one true canonical form)... which makes the format awkward to use with things like hashing and digital signatures.
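To make that concrete, here's a minimal sketch of a protobuf-style little-endian base-128 varint decoder (`decode_varint` is a made-up name, but the loop is the standard LEB128 logic). Because a zero continuation byte contributes nothing to the result, you can pad any encoding indefinitely and still decode to the same integer:

```python
def decode_varint(data: bytes) -> int:
    """Decode a protobuf-style little-endian base-128 varint."""
    result = 0
    shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift  # low 7 bits are payload
        shift += 7
        if not (byte & 0x80):             # high bit clear -> last byte
            return result
    raise ValueError("truncated varint")

# Three different byte sequences, one integer -- no canonical form:
assert decode_varint(b"\x01") == 1           # minimal encoding
assert decode_varint(b"\x81\x00") == 1       # padded with a zero continuation byte
assert decode_varint(b"\x81\x80\x00") == 1   # padded further still
```

So two byte-for-byte different messages can carry identical data, which is exactly what breaks naive "hash the wire bytes" schemes.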
FWIW, UTF-8's structure also allows multiple encodings of any particular code point. Decoders are supposed to reject these non-minimal ("overlong") encodings, since they're a security risk: they make it possible to smuggle an ASCII payload in non-ASCII form, which blindsides security systems that only scan the raw bytes. But decoders don't always do so. And of course if you never decode the UTF-8 data and never validate it either, you're still fucked.
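A quick illustration of the smuggling risk (`naive_utf8_decode_2byte` is a hypothetical helper that mimics a decoder missing the minimality check; Python's real codec rejects overlongs, shown at the end):

```python
def naive_utf8_decode_2byte(b: bytes) -> str:
    """Naively decode a 2-byte UTF-8 sequence WITHOUT checking minimality."""
    assert len(b) == 2 and b[0] & 0xE0 == 0xC0 and b[1] & 0xC0 == 0x80
    cp = ((b[0] & 0x1F) << 6) | (b[1] & 0x3F)  # splice the payload bits together
    return chr(cp)

# '/' is U+002F; its only valid UTF-8 encoding is the single byte 0x2F.
overlong_slash = b"\xC0\xAF"  # forbidden 2-byte ("overlong") form of '/'

print(naive_utf8_decode_2byte(overlong_slash))  # '/' -- a path-traversal filter
                                                # scanning raw bytes for b"/"
                                                # never sees it

overlong_slash.decode("utf-8")  # raises UnicodeDecodeError: a conforming
                                # decoder rejects the overlong form
```

This is the classic pattern behind real directory-traversal bugs: the filter checks the raw bytes, a lenient decoder later turns `0xC0 0xAF` back into `/`, and the "sanitized" input wasn't.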