One of the problems is that people treat experts on AI technology as if they were also experts on AI philosophy, which leads to poorly edited thought salads being published in a respectable context.
Just because someone understands variational autoencoders doesn't mean they have a clue what the field of AI will look like 5 years from now, and it certainly doesn't mean they can anticipate the societal and political impacts of those technologies any better than the average (intelligent) Joe.
I don't know what 'AI philosophy' is, if it even exists, but if it does, then poor-quality articles might reflect the state of AI philosophy, or they might just be crappy articles.
I'm not comfortable with the idea that philosophy is intrinsically nebulous and poor quality, any more than watching amateur footballers play badly would lead you to conclude that there can be no such thing as a good footballer; of course there are, but they are a damn sight rarer than your local kids having a kick around for fun.
AI philosophy is its own academic discipline, and has been since around the 1970s. A lot of early academic experiments involving AI, e.g. ELIZA, most ALife research, Prolog-based expert systems, etc., can best be categorized (retroactively) as research into AI philosophy. No novel Computer Science principles were being explored; rather, what was being explored was the impact that certain novel applications of existing CS principles would have upon the world.
AI philosophy wasn't a very popular or widely researched field, though, until recently, when AI ethics (ethics being considered part of philosophy), and a specific subfield of it called "alignment research", became a serious concern for a good number of philosophers.
Now there are many AI philosophers, employed not just in academia but also in the think-tank-like arms of AI technology companies like OpenAI (mostly because the AI tech companies know they're perceived as irresponsible in how quickly they're iterating toward more-powerful AI, and so employ AI philosophers as something like carbon credits to offset that perception).
There is a lot of very good work being done in AI philosophy, with many of the insights from alignment research specifically being incorporated into the work that the AI tech companies are doing.
But none of this really "surfaces" in the articles about AI that you might see floating about, because alignment research is high-context: it's not something you can write a fluff piece about, since it all requires knowledge of both philosophical concepts (e.g. linguistics, decision theory) and computer science concepts to understand. Instead, all the normal articles about AI philosophy are written by people doing amateur AI philosophy, usually resulting in them badly retreading the same ground that was already thoroughly explored in the 1970s.