I think it's fair to say that font issues were a significant reason for APL's decline in the late 80s and 90s (well before good Unicode support). Other major factors were spreadsheets, which did many of the things APL was best at with an intuitive graphical interface, and OOP. APL didn't have OOP, so it was for dinosaurs. Structured programming could have had the same effect—mainstream APLs picked up ifs and while loops 10-20 years later than the rest of the world—but I think the usability gap between APL and anything else for arrays was just too large at the time for that to hurt APL much.
The article is wrong about mixing strings and numbers though. The requirement for homogeneous data, and the use of boxes, is a feature of the SHARP APL lineage, which includes J (although these languages could allow mixed arrays if they wanted to). But the APL2 and NARS families simply allow anything as an element of an array. These are much more popular, to the point that every recent APL I'm aware of uses this style of array. Possibly a reason why J wasn't very successful; probably not a reason why APL usage disappeared.
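As a loose analogy in Python (not APL itself): NumPy arrays are homogeneous like SHARP-family arrays, so mixing types forces an explicit "box" (an object dtype), whereas an APL2/NARS nested array simply allows any value as an element, more like a plain Python list.

```python
# Analogy only: NumPy's homogeneous arrays vs. object-dtype "boxes".
import numpy as np

# Homogeneous array: one numeric dtype for every element,
# like an array in the SHARP APL / J lineage.
homogeneous = np.array([1, 2, 3])
print(homogeneous.dtype.kind)  # "i" (integer kind; exact width is platform-dependent)

# Mixing types requires boxing each cell as an arbitrary Python object,
# roughly the role J's boxes play: a uniform "pointer" type whose cells
# can each hold anything.
boxed = np.array([1, "two", 3.0], dtype=object)
print([type(x).__name__ for x in boxed])  # ['int', 'str', 'float']
```

In the APL2/NARS model there is no visible boxing step: `1 'two' 3.0` is just a three-element array, which is arguably the more approachable design for newcomers.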
It's interesting to wonder whether, now that Unicode is much more ubiquitous and people have grown used to (and fond of) complex IMEs such as "emoji keyboards" and fonts with intricate ligatures for digraphs and trigraphs (Fira Code, Cascadia Code), there will be an APL resurgence, or perhaps a modern Unicode-"native" successor.