I too have a background in Mechanical Engineering, and while many software products are complex, I wouldn’t categorize all of them as engineering projects in the historical sense of the word. That’s not to say there aren’t quality software products that satisfy real business requirements. But it is to say that a lot of software projects would be WAY too expensive if they were engineered the way a passenger jet or a skyscraper is engineered.
The software development field is quite new compared to the other engineering disciplines, and many, many decisions are made on gut feel, intuition, or outright personal preference. Alan Kay has some very good talks on this specific subject, referring to the current state of our field as a Cargo Cult.
However, I would also say firmware is the least expensive to engineer, because the requirements for that type of software are better known and more rigid.
I believe that a part of the problem with software engineering is the "we can always fix this later" mindset.
Even during development, the only cost of iterating over errors until you get it right is time.
But HW engineers just don't have the luxury of making 100 iterations of a product until it works, nor the safety net of "we'll update it over the internet". They must put a lot of effort into testing and verification until they say "ok, this is good, let's ship it."
Also, failure modes of mechanical products are often known and intuitive.
I am guessing that before the advent of the Internet, the quality of shipped software was higher on average. Nobody would dare ship a hot mess like Battlefield 2042 if they knew it was the last version they'd ever ship.
Automotive and other mixed-criticality systems are where these two worlds butt up against each other and have a lot to learn from each other.
Mech eng processes on one side, ASIL-style safety requirements in the middle, and someone wanting to pour a bucketload of Android apps into the same computer from the other end.
Are they ever really "the same computer"? I don't think that's true even in entirely software-mediated-control vehicles like Teslas.
The discipline of robotics (which is really what you're talking about here — cars are just very manually-micromanaged robots these days) is all about subsumptive distributed architectures: e.g. the wheels in an electric car don't need a control signal to tell them to brake if they're skidding; they have a local connection to a skid sensor that allows them to brake by themselves, and they instead need a control signal to stop braking in such a situation.
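To make that concrete, here's a minimal sketch of the idea (hypothetical names, not any real ECU's API): the wheel's local loop brakes on skid by default, and only releases when the higher-level controller explicitly asks it to:

    /* Minimal sketch of subsumption-style local control; names are made up,
     * not any real ECU API. The wheel's default local behaviour is to brake
     * on skid; the higher layer has to actively request release. */
    #include <stdbool.h>

    typedef struct {
        bool skid_detected;      /* local skid sensor wired directly to this wheel */
        bool release_requested;  /* command from the higher-level controller */
        bool brake_applied;      /* actuator state */
    } wheel_ctrl_t;

    /* Runs in the wheel's own control loop, independent of the central computer. */
    void wheel_ctrl_step(wheel_ctrl_t *w)
    {
        if (w->skid_detected && !w->release_requested) {
            /* default (subsumed) behaviour: brake locally, no central command needed */
            w->brake_applied = true;
        } else {
            /* not skidding, or the higher layer has explicitly asked for release */
            w->brake_applied = false;
        }
    }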
This is why, in anything from planes to trains to cars, you see the words "auxiliary" or "accessory" used to describe infotainment displays et al — the larger systems are architected such that even an electrical fault (e.g. dead short) in the "accessory" (non-critical) systems can't impact QoS for the "main" (critical) systems.
I really can't imagine a world where they've got engineers building the car who understand that, but who are willing to let Android apps run on the same CPU that's operating the car. They'd very clearly insist on separate chips; ideally, separate logic boards, connected only by opto-isolated signals and clear fault-tolerant wire protocols.
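For a rough flavour of what such a fault-tolerant wire protocol could look like (purely illustrative framing, not CAN or any real standard), each frame from the accessory side might carry an alive counter and a CRC, so the critical side can detect and ignore corrupted, dropped, or stuck messages:

    /* Purely illustrative frame layout for a point-to-point link between an
     * "accessory" board and a critical one; not CAN, not any real standard,
     * just the general shape: an alive counter plus a CRC over the payload. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint8_t counter;     /* alive/sequence counter: catches lost or stuck frames */
        uint8_t payload[4];
        uint8_t crc;         /* checksum over counter + payload: catches corruption */
    } frame_t;

    static uint8_t crc8(const uint8_t *data, int len)
    {
        uint8_t crc = 0xFF;
        for (int i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x1D) : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* Receiver side: reject anything corrupted or out of sequence, so a
     * misbehaving accessory board can at worst be ignored, never obeyed. */
    bool frame_valid(const frame_t *f, uint8_t expected_counter)
    {
        uint8_t buf[5] = { f->counter, f->payload[0], f->payload[1],
                           f->payload[2], f->payload[3] };
        return crc8(buf, 5) == f->crc && f->counter == expected_counter;
    }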
The point you're making is valid in general and you provide valuable context. A modern car does have many different computers, and there is a lot of intentional partitioning (and even some redundancy) into different CPUs, as well as guests under hypervisors.
For example, a typical headunit computer (the "infotainment computer") tends to contain two to three SoCs performing different duties, and one or two of them will run hypervisors with multiple guest operating systems. And that is just one of multiple computers of that weight class in the overall car architecture.
That said, there's an overall drive to integrate/consolidate the electrical architecture into fewer, beefier systems, and you do now encounter systems where you have mixed criticality within a single computational partition, e.g. a single Linux kernel running workloads that contribute both to entertainment and safety use cases. One specific driver is that they sometimes share the same camera hardware (e.g. a mixed-mode IR/RGB camera doing both seat occupancy monitoring tasks and selfies).
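As a rough sketch of what sharing a single kernel across criticality levels can involve at the OS level (the APIs are standard Linux, but the policy shown is illustrative, not how any particular OEM does it), the more critical workload might at least get a reserved core and a real-time scheduling class so the entertainment side can't starve it:

    /* Sketch: give the (more) safety-relevant process a reserved core and a
     * real-time scheduling class so best-effort entertainment workloads on
     * the same kernel can't starve it. The APIs are standard Linux; the
     * policy (core 3, priority 80) is illustrative, not any OEM's. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pin to CPU 3, assumed to be set aside for this workload (e.g. via isolcpus). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        /* Run under SCHED_FIFO so ordinary SCHED_OTHER tasks can't preempt it. */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... occupancy-monitoring loop would run here ... */
        return 0;
    }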
Safety-vs-not-safety aside, you also simply have different styles of development methodology (i.e. how you govern a system) run into each other within the same partition. AUTOSAR Adaptive runs AUTOSAR-style apps right next to your POSIX-free-for-all workloads on the same kernel, for example.
What is typically not the case in that scenario, however, is that the safety workload in a partition is the only contributor to its safety use case; typically, another partition (or computer) will also contribute to assure an overall safe result.
In more auto terms, you might now have ASIL B stuff running alongside those Android apps on the same kernel, but you will still have an ASIL D system somewhere.
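As a hedged illustration of that pattern (made-up names and thresholds, not a real ASIL-rated design), the higher-integrity side doesn't blindly trust the value coming out of the mixed partition; it cross-checks it against an independently computed one and degrades to a safe default on disagreement:

    /* Illustrative cross-check on the higher-integrity side: only use the
     * mixed partition's value if it agrees with an independently computed
     * one; otherwise fall back to a safe default. Names and the tolerance
     * are made up for the example, not a real ASIL-rated design. */
    #include <math.h>
    #include <stdbool.h>

    #define MAX_DEVIATION 0.05   /* illustrative tolerance */

    double monitored_output(double mixed_partition_value,
                            double independent_value,
                            double safe_default,
                            bool *degraded)
    {
        if (fabs(mixed_partition_value - independent_value) <= MAX_DEVIATION) {
            *degraded = false;
            return mixed_partition_value;
        }
        /* Disagreement: don't trust the mixed partition; degrade gracefully. */
        *degraded = true;
        return safe_default;
    }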
In general, you will start to see more of both in cars: more aviation- and telco-style redundancy and fault tolerance, and more mixed criticality. The trends are heading in both directions simultaneously.
> I don't think that's true even in entirely software-mediated-control vehicles like Teslas.
Tesla has been in the media for bugs where flipping tracks on your Bluetooth-tethered phone or opening the wrong website in the headunit web browser could reboot the Instrument Cluster display. This is an example of mixed criticality done wrong. Many other cars are not architected quite as poorly. However, the IC and the HU/central displays sharing the same computer (not necessarily the same computational partition/guest OS) is increasingly common.