Language (i.e. character sets) is not a good explanation for differences in software outcomes. It is possible to write Japanese entirely in kana (~50 characters), which fit in an 8-bit charset alongside ASCII; early Japanese computers like the PC-8001 worked this way, using half-width katakana. The JIS standard for double-byte representation of kanji characters was also created in the 1970s. So that doesn't explain why Japan fell way behind in software in the 80s, 90s, and 2000s.
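To make the byte-count point concrete, here's a minimal Python sketch using the Shift_JIS codec (a later encoding that combines the single-byte JIS X 0201 range with the double-byte kanji ranges; the exact encodings on 1970s-80s machines varied, so this is illustrative): ASCII and half-width katakana take one byte each, while hiragana and kanji take two.

```python
# Illustrative byte counts under Shift_JIS (Python's built-in codec).
# ASCII and JIS X 0201 half-width katakana fit in a single byte;
# hiragana and kanji fall in the double-byte ranges.
for ch in ["A", "ｱ", "ひ", "漢"]:
    print(ch, len(ch.encode("shift_jis")), "byte(s)")
# A 1 byte(s)
# ｱ 1 byte(s)
# ひ 2 byte(s)
# 漢 2 byte(s)
```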
Language is more than character sets. Japanese is often read in vertical columns ordered right to left. The space of UIs that feels natural in each language is just different.
Not true. Nearly all text in Japanese print media, ads, brochures, websites, video games, etc. is left-to-right horizontal. Only books and some magazines are written in vertical columns. Anyway, this again is not an insurmountable technical hurdle that held back Japan's otherwise burgeoning software industry.
Well, yes, to some degree this does affect localization in the US of software originally built in Japan. However, given that Japanese character sets are more demanding than US character sets, software made in JP is usually easier to port to the US, not the other way around. Emoji, for example, came from Japanese mobile phones ("e", pronounced "ay", means picture, and "moji" means letter).
You misunderstood: the fact that the language does both means the design space is different. Things that seem natural in Japanese would seem very odd in English.
Similarly, there are far more people fluent in English, so small dev teams can reach a much larger audience without considering translation issues.
My impression is that the technical mess from Unicode (Han unification, the nonlinear relationship between byte count and letter count) and differences in the kinds of words used (parts of speech: whether a label is a noun, a verb, or a short sentence) are bigger problems for Japanese translations than character sets and reading directions.
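The nonlinear byte count ↔ letter count relationship is easy to see in Python (a sketch using UTF-8 and UTF-16 as examples; legacy encodings like Shift_JIS behave differently again):

```python
s = "日本語テキスト"                # 7 characters (code points)
print(len(s))                      # 7
print(len(s.encode("utf-8")))      # 21 bytes: 3 bytes per character here
print(len(s.encode("utf-16-le")))  # 14 bytes: 2 bytes per character here
# ASCII text, by contrast, is 1 byte per character in UTF-8, so byte
# count cannot be derived from letter count without knowing both the
# encoding and the actual characters. Code that assumes "1 char = 1 byte"
# (fixed-width columns, buffer sizes, truncation) breaks on Japanese text.
```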