Or perhaps one step further (albeit verging on conspiracy-theory territory): they intentionally push ahead with known-flawed approaches, projects, and engineering practices because it's profitable, and because there's generally a net benefit to them in being more aware of, and more in control of, the vulnerabilities within that ecosystem than anyone else could be.
(instead of taking the time to wait for research results, best practices, security reviews and privacy concerns up-front at design-time, and even -- shock -- perhaps deciding not to build some societally risky products in the first place)
Apple spends more on security than all but 2 other industry firms (they may spend more than those 2 as well), and has a comparable computing footprint to those firms. This is a facile complaint.
My comment may have been facile and poorly argued, sure, but if consumer devices are being sold that can be remotely exploited without user interaction during something as commonplace as rendering images... surely it's worth considering the potential for structural improvements in the industry?
Perhaps the associated billions of dollars of spending is indeed the answer, and will translate into measurable improvements. If so, very well.
Perhaps there are Conway-style architectural issues at hand here as well, though. Can disparate teams working on (a large number of) proprietary interconnected products and features reliably produce secure results?
It seems wasteful that similarly functioning tools -- like messaging apps -- are continuously built and rebuilt, and yet the same old issues (generally exacerbated by increasing web scale) mysteriously reappear time and again.