Disclaimer up front: I work for GM. I don't work on chips or components for my day job. What follows is solely my own opinion.
Some things to emphasize:
OEMs (GM, Ford, Toyota, VW, etc) do not design components, and they do not want to. They design specifications for components, and then get suppliers to bid. This is great for efficiency in established ecosystems, not great for agility.
To my knowledge, GM did not cancel any chip orders, because GM itself had no chip orders (this is oversimplified). The suppliers cancelled chip orders.
For a 1st tier supplier to move to a different process/chip would also be difficult, because they do the same thing the OEM does - supply a specification, and get 2nd/3rd tier suppliers to bid on it.
A wholesale migration to a modern architecture is risky and costly.
Smaller process/feature size on a wafer is believed to be less resilient, for example to heat and vibration.
The risk to large automakers is that something goes wrong and they have to do a recall. The risk to up and comers (Tesla) is failure to grow. Also, Tesla has a small product line and absolute loads of cash.
If you were going to design a new car electrical architecture from scratch today, you would have something like a 40-60V system with a centralized controller (or pair of controllers in a safety redundant configuration).
Even with a largely cleansheet design, Tesla uses 12V because of the sheer ubiquity of 12V components.
The suppliers are running on pretty thin margins, aren't they? Did they really have any choice but to cancel those orders or potentially go bankrupt?
If you sit on someone’s shoulders and forget that they’re doing most of the work, you can be shocked all you want when they collapse out from under you but nobody on the ground is going to have any sympathy at all for you.
I’ve worked for software subcontractors and it’s always frustrating when the customer is so excited about all the things we’re going to accomplish together, all the plans that hinge on our mutual success, and isn’t it so wonderful that they’re getting such a great deal on it too?
Meanwhile we’re slowly going broke because we can’t make money on that sweetheart deal loaded with scope creep not spelled out properly in the contracts. Good luck with those plans when we go under.
> If you sit on someone’s shoulders and forget that they’re doing most of the work, you can be shocked all you want when they collapse out from under you but nobody on the ground is going to have any sympathy at all for you.
Ajax Fasteners were a small manufacturer in the name of Division of Labour, and when they went into receivership, it took down Australia's entire car industry. Let me repeat - Australia no longer has a car industry.
> Ajax Fasteners were a small manufacturer in the name of Division of Labour, and when they went into receivership, it took down Australia's entire car industry. Let me repeat - Australia no longer has a car industry.
You'd think the car companies could have just bought the manufacturing assets of the supplier if it was this critical (that's assuming they weren't looking for a back door excuse to shut down their own Australian production, that is).
It did die a slow death (about 7 years after Ajax), but IMHO it was the beginning of the end. There were a tonne of factors, including iron ore prices, trade tariffs, the car industry crash in Detroit, etc, but for us, I think Ajax shutting down should have raised alarm bells that the end was nigh (rather than throwing hundreds of millions more at propping up foreign nations in the name of local jobs).
A good rule of thumb I have is if a politician is there at a ribbon cutting, and talks about jobs, then it's nothing more than vote buying in disguise.
Do you think there was a moment when the utility stopped, or was it a process? I'm only really familiar with the last days of the Australian car industry.
I never thought of Australia's car industry as a utility. When something is not economically viable, then unless the government wants to run it at a loss for the social benefit rather than as a for-profit business, it's never going to be able to sustain itself over the long run.
I looked up Ajax Fasteners to see if they made more than fasteners, and it seems like fasteners really were the bulk of their business. How can it possibly be true that there were no other suitable fastener suppliers in a country with massive mining, steel/aluminium production, and other heavy industries? It sounds like there's a story here.
Automotive is notorious in manufacturing for being one of the WORST industries to be a supplier in (at least for US-based suppliers): low margins and unrealistic demands by the big brands.
If you don't allow for scope creep you won't get any contracts. What I've seen in the industry is most companies will under bid hoping to make it up in change request fees. The successful companies are the ones who manage the OEM such that they keep them happy while maximizing change requests. It's a fine line to walk.
I don't think money is a problem in this specific case. Because of car shortage conditions they can afford to add a few hundred dollars to the MSRP and pay more for non-cutting edge chips and still come out fine.
>If you were going to design a new car electrical architecture from scratch today, you would have something like a 40-60V system with a centralized controller (or pair of controllers in a safety redundant configuration).
Actually this point is something I'm curious about - wouldn't it make sense to have a 3x redundant processor, probably with the copies placed in different locations on the car body for safety?
Would that not be superior to the current architecture, while being able to tap into the most modern chip architecture of the day?
I'm not a car engineer and don't understand CAN bus, etc. But I'm just wondering if this is indeed possible.
Do not take any of this as a particular endorsement of a safety system.
Airplanes need 3x redundancy on safety critical components, because they carry a lot of people and are not fail safe when they are in the air. Cars can generally stop safely.
As far as changing architectures - think about safety critical loops and real time computing. Some processes should never be pre-empted.
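To make the "never pre-empted" point concrete, here's a minimal POSIX-style sketch (illustrative only, not how a real ECU is coded - those typically run an AUTOSAR-class OS): a periodic control loop pinned at a fixed real-time priority so ordinary tasks can't preempt it. The function name and the 1 ms period are made up.

    /* Minimal POSIX sketch (not production ECU code): pin a control loop to a
     * fixed real-time priority so ordinary tasks cannot preempt it. */
    #include <sched.h>
    #include <time.h>
    #include <stdio.h>

    static void run_brake_loop_iteration(void) {
        /* read pedal sensor, compute command, drive actuator (placeholder) */
    }

    int main(void) {
        struct sched_param sp = { .sched_priority = 80 };  /* high RT priority */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler (needs root/CAP_SYS_NICE)");
            return 1;
        }

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            run_brake_loop_iteration();
            next.tv_nsec += 1000000;            /* 1 ms period */
            if (next.tv_nsec >= 1000000000) { next.tv_nsec -= 1000000000; next.tv_sec++; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }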
"Brake-by-wire is used in all common hybrid and electric vehicles produced since 1998 including all Toyota, Ford, and General Motors Electric and hybrid models."
"Three main types of redundancy usually exist in a brake-by-wire system:
Redundant sensors in safety critical components such as the brake pedal.
Redundant copies of some signals that are of particular safety importance such as displacement and force measurements of the brake pedal copied by multiple processors in the pedal interface unit.
Redundant hardware to perform important processing tasks such as multiple processors for the ECU"
"The highest potential risk for brake system failure has proven to be the Brake Control System software. Recurring failures have occurred in over 200 cases documented in NTSB documents. Because each manufacturer guards the confidentiality of their system design and software, there is no independent validation of the systems.
As of 2016 the NTSB has not directly investigated passenger car and light truck brake-by-wire vehicle accidents, and the manufacturers have taken the position that their vehicles are completely safe, and that all reported accidents are the result of "driver error"."
Generally speaking those are not purely brake by wire. The master cylinder is still mechanically connected to the pedal. However, in normal conditions the brake actuation will be performed over wire.
This is a question of semantics, but I don't understand your usage of "purely".
How can it not be "purely" brake by wire, if, when depressing the brake pedal, the friction brakes are not always triggered?
If the electronics can decide not to apply hydraulic force when everything is going fine, then there must be a potential failure mode where they ignore the pedal inappropriately.
In a brake-by-wire car, if you stomp on the brake pedal with all you've got you end up engaging a cylinder that can directly exert force on the front brakes, even if the electronics are fully dead. (Maybe there are some systems where the brake pedal is truly just a "dimmer switch", but I've not been able to find them).
You probably know this, but in case others are curious: In most fossil-powered cars, you can test the powered braking during start-up by pumping the brake and keeping pressure on it. When you turn on the ignition, the brake will depress further if the system works.
Yeah, brakes that don't function when power is lost would just be too brittle. That'd be unacceptable in any market regulated by one or more working brains.
It's a matter of semantics, but I object to saying a braking system isn't purely brake by wire when electronics alone can cause it to totally fail.
There's a difference between a fail-safe that usually works, and a mechanical connection that's always there.
Electronics can fail in many interesting ways other than simply turning off. And they do! Ever looked at nhtsa.gov?
"Brakes Failed problem of the 2017 Honda Accord Hybrid
Failure Date: 10/31/2018
There have been three instances of total brake failure. Initial instance occurred on vehicle startup; a number of errors were given, including "adaptive cruise control problem", "collision mitigation system problem", "road departure mitigation system problem" and "brake system problem". When I put the car in drive it immediately began moving forward. Car was towed to dealer, where Honda blamed the issue on the undercoating causing components to overheat. This was October 2018 in Philadelphia. Replaced brake system and removed undercoat on affected areas. Second instance occurred in February 2019; it had stopped raining but the road was still wet and had puddles. Driving downtown, at a yellow light the brakes completely failed, I ran the light and was able to stop with the emergency brake. Car was giving the same set of errors. This time restarting the car made the errors go away. Brought to the dealer, they said they did a software update but could not replicate the problem so did nothing else. Third instance was February 2nd, while driving about 30-40 mph, again either after rain had stopped or in a slight drizzle. This time I saw the errors while driving before I needed to brake and recognized the issue. Managed to use the emergency brake to pull over. Again restarting the car cleared the issue; I have not brought it back to the dealer yet."
We also have an older Civic (2005 FK) plagued with electrical issues regarding steering and VSA. The mechanic said it occurred during peak current draw, but I'm a bit skeptical since all the lights went on once in the summer while travelling at highway speeds.
SpaceX is definitely pushing the use of off the shelf components, and good on them, but things will probably be different when they are building rockets that they expect to keep in service for ten years.
Aren't most rockets, outside of SpaceX's ones that can land after launching, single use? Then keeping the same rocket going for 10 years would be a problem unique to SpaceX, unless we count the Space Shuttle?
I'm not in the car industry. But I spec mixed analog/digital parts and so see a lot of ads and press releases for automotive parts. I'm seeing increasing offerings of two pair Power over Ethernet controllers, cables, and connectors.
That makes me think that as companies retire legacy gas/diesel products they'll probably switch to an automotive PoE standard. It allows you to power the window lift regulator and door locks, and read the switches, off a single 20 gauge cable. Weld a fully assembled door to the body and then plug in a single cable. What's not to like?
Siblings covered why 12 V is annoying, let's talk about why 12 V was chosen in the first place.
6 and 12 V electric systems in vehicles came about because lead-acid batteries made sense in this application, and ceteris paribus a higher voltage lead-acid battery is overall costlier to manufacture, because it contains more individual cells. Another big thing is that there are many switches and relays in a car, and those often switch significant power. E.g. the light switch for the headlights has to deal with around 200 watts total for incandescent/halogen headlights, indicators are like 15 watts each front and back, and the starter motor requires a tremendous amount of power and has a very robust power switch built into it.
All of those switches become much more expensive when you increase the voltage. DC at 12 V and sizable currents is something you can reliably switch with mechanical contacts without costs going through the roof.
Being able to use 48 V for everything in a car is more or less dependent on using silicon switches for everything, not something that was possible in the past. The reason why legacy ICE cars (all ICE cars are now legacy) stick with 12 V is because everything is 12 V, and everything would have to change for the new voltage.
I seem to recall, several years ago (around when stop-start systems, which stop the engine at traffic lights and restart it as soon as the driver releases the brake, were becoming common), at least one company was exploring moving to 24V. However, the effort failed precisely because the lifetime of 24V switches was significantly shorter.
Trucks and agriculture equipment already use 24V systems, so the point is moot. Vehicle usage during the lifetime of a truck greatly exceeds the lifetime of a personal passenger vehicle, and agriculture equipment is also exposed to dust and grime, yet both continue to operate normally. So I guess it's more of a cost problem than a component issue.
FWIW, almost no switches in a modern car (turn signals, wipers, door switches) carry a significant amount of current, as they are controlled via confuzers.
Just built a car without a single relay. I used PMUs (basically boxes of mosfets and current sensors AFAIK).
Above 30V - 50V DC, contact arcing becomes a huge problem. (It's a problem below that, but less huge.)
With AC, the arc will self-extinguish a half-cycle after the switch opens. With DC there is no cycle, and contacts can be completely vaporised in tens of milliseconds.
Commodity switching components are usually rated "30V DC, 250V AC" for this reason.
It is possible to design switches so that even DC arcs self-extinguish, but the result is expensive and not as reliable as one could wish.
If cars had been invented after 1980, they would probably use solid-state switches except for the very high current circuits such as the starter solenoid and headlights. (Transistors were around a lot earlier than that, but engineers are sensibly cautious about new technologies.)
Edit: To answer your question directly: no. Cost is prime.
if we are going to 48V then pretty sure all power switching would use solid state relays/MOSFETs, Any mechanical switch would just be the input to a micro at 5V or lower.
You could also just convert the HV (400V+) bus directly down to 48/60V, as all the high current devices (motor/AC/power steering etc) run off the HV bus, and you would just be left with lights, entertainment and servos on the low voltage side. The 12V battery is probably a bit silly when you have a 50 kWh main battery and 97% efficient switching regulators, and I am guessing it may disappear in the next 10 years.
Yes, my Model Y has a "traditional" 12V lead acid battery. Granted, it's a small battery as it doesn't have to run any heavy loads (starter motor). Furthermore, it's my understanding that newer Tesla vehicles are being delivered with 12V gel batteries instead of the lead acid design.
>To answer your question directly: no. Cost is prime
I meant to the customer or owner, over the normal lifetime of the vehicle, not to corporate accountants that translate pennies into millions of dollars.
"More expensive" presumably is a small amount per car; the average new car in the US is over $40K.
Power is volts x amps. The more amps, the thicker the wires need to be (volts are irrelevant).
So to minimise weight/wire size, use the highest voltage possible (and thus the lowest current). Losses are purely related to current (I^2*R), so the incentive is to squeeze the current down (which means increasing the voltage).
There's a reason the high-voltage overhead lines are 432 kilovolts (or more); 10 amps at 432 kV = 4.3 MW (megawatts), while 10 amps from a stock AC outlet is 1.2 kW (kilowatts). The wire thickness required for both is the same (though the HV wires need to be better insulated, kept out of the way of crazy fools, etc).
So a 60V system for a car carries 1/5 the current of a 12V system, and the wires can have 1/5 the cross-sectional area.
(yes, I know there are caveats when it comes to AC).
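If you want to play with the numbers, here's a throwaway C snippet (load value and voltages are made up) showing how the current, and roughly the copper, scale with bus voltage for the same load. For a fixed allowed current density (thermal limit), conductor cross-section scales roughly linearly with current, which is where the 1/5 figure above comes from.

    #include <stdio.h>

    int main(void) {
        const double load_w = 600.0;                /* hypothetical accessory load */
        const double volts[] = { 12.0, 48.0, 60.0 };

        for (int i = 0; i < 3; i++) {
            double amps = load_w / volts[i];            /* I = P / V */
            double rel_area = amps / (load_w / 12.0);   /* copper area vs. the 12 V case */
            printf("%5.0f V bus: %5.1f A, ~%.0f%% of the 12 V conductor area\n",
                   volts[i], amps, rel_area * 100.0);
        }
        return 0;
    }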
> The more amps, the thicker the wires need to be (volts are irrelvant).
Nitpick: the wire as a whole includes insulation, which technically does need to be thicker at high voltage (though at 40-60V, and maybe even at 40-60kV, it's probably dominated by tolerances for erosion and abrasion and such).
High voltage power lines use air, which is actually a fairly good insulator per se, but has the problem that wires tend to pass through it on their way to a short circuit if not well-restrained.
Another factor I haven't seen mentioned in the responses yet is safety to human bodies. You (or your kid) can stick a wet finger in a 12V DC charger socket in your car and not get an electric shock, including any of the exposed contacts under the hood, including the battery terminals. But once you're up to 40-60V, the risk of electric shock to humans is actually something that needs to be factored in.
You'll get a shock when wet at slightly below 9V; it just stays on your skin. Voltage penetrates dry skin at around 50V, and this is the legal definition of high voltage.
Amperage is set by the resistance of the human body. That's in the order of 2kOhm, so at 50V, you get a current of 25mA. That's in the zone where it becomes quite intense for you.
At 12V, we are just talking about 6mA, where you can feel the electricity but it should not be hurting too much.
I'm not the OP nor an electrical engineer, but I believe that you can get higher DC power with less resistance losses using higher voltage and lower amperage. Additionally, this allows the use of thinner wire (lighter, cheaper, easier to package in tight locations).
Other comments pointed out why higher is better. The cap at 60v is somewhat arbitrary, but as others said, higher voltage is harder to switch, and above 60v DC it is easy to kill people.
Voltage levels sometimes arise from the geometry of semiconductors, capacitors, etc. 60V is a common cutoff, usually to give some slack to a 48V design voltage.
Less current draw, thinner cables mainly. 40-60V or higher is also great for electrical motors and power electronics, but anything above 50V is high voltage, so 48V is still safe enough. Cables in a passenger vehicle amount to >4 km and ~30 kg.
I'm an electrical engineer working in an automotive related field.
One of the big issues with 40-60V systems, which have been right around the corner for 20 years now, maybe more, has been that typical relay contacts arc over at about 28V DC. Yes, there are ways to address this. No, none of them are as practical as relays yet.
This isn't really a fundamental problem. It just means using relays with contacts rated for higher voltages. It's probably the easiest part of the BOM on a typical vehicle to move to a higher voltage range. I use 125VDC contact/coil rated relays all the time.
It is a fundamental problem (it's physics). The current rating for relays drops off sharply as the DC voltage increases; you might see a 125 V DC rating, but the current will be a fraction of the nominal current at the rated AC voltage. If you're using a relay rated for up to 250 VAC/DC, then the breaking capacity with DC might be as low as 1 % compared to AC. Bad relay datasheets don't mention this at all, or maybe only for 30 V. Good datasheets have plots of capacity vs. voltage and current vs. life (rated capacity of relays is generally for around 100k cycles).
I am an engineer in a field that has been using >100VDC relays for a hundred years. There is a sizable supply chain out there for these things. You can buy them in whatever amperage you need, they just use different construction from typical low frequency AC relays. Wipers are often high quality metallurgy to extend life and they make use of simple passive magnetic snubbers to 'blow out' the arc that is self-extinguishing in AC circuits. They aren't substantially more expensive than regular high quality AC relays and use standard form factors.
Speaking of good relay datasheets, some will also show contact resistance vs current. I recall one which noted that for small signals the resistance was very low, but if higher current was applied, the contact coating would burn off and the resistance rises permanently.
Yep, those are gold-over-silver(-over-nickel-on-contact-alloy) plated contacts. Good for both signal and power switching, as long as you use the same relay only for one of those things.
> A wholesale migration to a modern architecture is risky and costly.
I have no doubt that your company and all the other big automakers are running the numbers and trying to weigh those risks and costs with the risks and costs of not updating.
Given that the chipmakers and supply chain experts are not making rosy predictions for things to drastically improve in the next 12 months or longer, I wonder if the balance is shifting towards taking action.
> "I wonder if the balance is shifting towards taking action."
I can't find a public source, but some actions are being taken. Bear in mind that any major shifts would happen in a new product, that is, something 5-7 years out.
Fundamentally, what is it that prevents the established auto makers from making changes mid lifecycle? Seems like Tesla keeps pulling it off; why can’t GM?
Is it that the component specification+acquisition cycle you describe is optimized to take as long as the development cycle of a new car?
Small company risk is failure to grow. Big company risk is every other failure.
Tesla is valued as a growth/tech company. As long as people believe in their growth, they get more cash.
I don't personally understand how Tesla hasn't had more problems with their apparently uncontrolled engineering changes. At least part of it is customer enthusiasm for the product papering over any drawbacks.
It's not that GM can't make changes like this, it's that GM WON'T make changes like this.
> I don't personally understand how Tesla hasn't had more problems
It's that nobody cares and the media doesn't cover it. A friend of mine has had his Tesla break down a dozen times, and he still happily pre-ordered the Cybertruck.
My 10-year-old Honda has never had any work done to it other than maintenance-- but that doesn't make the news either.
"On July 16, 2021, we issued a notice of redemption to the holders of the 2025 Notes informing the holders that we will redeem the notes in full in August 2021 at a redemption price equal to 102.65% of outstanding principal amount, plus accrued and unpaid interest, if any."
It appears to me that they are steadily issuing stock at the rate of about 20% of their market cap per year, or roughly at the rate of $10 billion per month.
It looks like the number of (diluted) shares outstanding increased by over 20% between 2019 and 2020.
What did that consist of? Their annual report says mainly:
- Issuance of common stock for equity incentive awards
- Issuance of common stock in public offerings
e.g. "On February 19, 2020, we completed a public offering of our common stock and issued a total of 15.2 million shares (as adjusted to give effect to the Stock Split, as described in the paragraph below), for total cash proceeds of $2.31 billion, net of underwriting discounts and offering costs of $28 million."
"On September 1, 2020, we entered into an Equity Distribution Agreement with certain sales agents to sell $5.00 billion in shares of our common stock from time to time through an “at-the-market” offering program. Such sales were completed by September 4, 2020 and settled by September 9, 2020, with the sale of 11,141,562 shares of common stock resulting in gross proceeds of $5.00 billion and net proceeds of $4.97 billion, net of sales agents’ commissions of $25 million and other offering costs of $1 million."
"On December 8, 2020, we entered into a separate Equity Distribution Agreement with certain sales agents to sell $5.00 billion in shares of our common stock from time to time through an “at-the-market” offering program. Such sales were completed by December 9, 2020 and settled by December 11, 2020, with the sale of 7,915,589 shares of common stock resulting in gross proceeds of $5.00 billion and net proceeds of $4.99 billion, net of sales agents’ commissions of $13 million and other offering costs of $1 million."
Also, as of their 2020 annual report, roughly a billion shares were authorized to issue, which is on the order of another 100%, or double the current outstanding.
You might dismiss this as "way back in 2020", but it does seem to be their latest annual report.
Their latest quarterly report is as of June 30, 2021, and guess what? Shares outstanding are up about 4.6%. Compounded over a year, that's going to be just about 20% again.
As you mentioned, TSLA's last public offering was in 2020.
2020 was a significant shift in the company as it became profitable ex-ZEV credits. Future public offerings are unlikely absent major changes in the company's fundamentals.
Stock based comp is a standard cost of business in most large tech companies; given how badly TSLA has been beating competitors (esp in the context of the chip shortage), it seems to be paying off.
>As you mentioned, TSLA's last public offering was in 2020.
>Stock based comp is a standard cost of business in most large tech companies
The public offerings come at intervals, but the total raised seems to be pretty steady. It all contributes to keeping the business running. Money is fungible. "Everybody does it" is not an argument for anything.
Your opinion could be perfectly correct, in terms of predicting the future, and I am not an expert on Tesla. But your comment doesn't convey to me even the tiniest hint of why you hold your opinion.
Go look at how much of the raised money is simply cash reserve. There is a difference between raising money because you need to and wanting to raise to build up cash reserves because you think the stock price is favorable.
Have you considered that Tesla actually has a process and good engineers and good models to understand their changes?
Maybe they are simply more dynamic, have better on the fly testing, more flexible software to manage their production, are more vertically integrated and have thus more control over their production.
The argument that Tesla has significantly more quality problems than others really doesn't hold much water today. The fact is Tesla has had far fewer problems with batteries and drive trains while delivering far more EVs. People love to complain about minor panel gaps (that most people don't ever notice anyway) and ignore that Tesla has a very good track record in terms of drive train and battery.
Compare the Tesla Model 3 to the Leaf or the Bolt, which came out at the same time and had far more problems. The Leaf had to have all its early batteries replaced, and the Bolt is being recalled right now.
Tesla should get credit for this, rather than just being called 'lucky' or it being claimed that Tesla customers are just willing to buy broken products (another myth that doesn't hold up).
In terms of drive train and battery, Tesla's worst failure by far was that they had to under-power a number of Model S cars produced in 2017. There has never been a large recall of the Model 3 or Model Y.
>"Have you considered that Tesla actually has a process and good engineers and good models to understand their changes?
Maybe they are simply more dynamic, have better on the fly testing, more flexible software to manage their production, are more vertically integrated and have thus more control over their production."
Yes. That's cost of entry. It works when the customer takes delivery.
They can also do smart things like have consistent APIs, electrical bus systems, etc (I don't know if they do, but they could)
Even with all of that, technical debt on physical products in the real world will drag you down to the depths of the ocean.
You see this with the customer experience on repairs. But like I said, Tesla is a growth company. As long as people believe Tesla is a growth company it will continue to be a growth company.
> You see this with the customer experience on repairs.
Most complaints about experience with repairs are more about how long it takes because of limited service centers, not actually bad service or overly broken cars.
On the other hand, many people have a really great experience as well. The mobile service is absurdly popular and makes many repairs 10x better than what basically anybody else has.
In terms of 'check this individual horror story', you can find those for every car maker.
I have not yet seen any systematic real analysis of customer experience for repairs. The evidence seems to be based on individual stories.
Partly I think because Tesla is so integrated, all repair problems are more directly associated with the company, while for other companies you just had a bad dealer experience.
I honestly don't know how to evaluate this in any systematic way, 'my friend had X problem with Tesla Service' is not good enough.
Alternatively, maybe just like SpaceX has faster development cycle AND greater safety and reliability of rockets, Tesla has faster development cycle AND greater safety and reliability of cars?
Judging by the number of recalls, Tesla doesn't seem to be doing worse than GM, Porsche or Ford.
Their eMMC issue on the S was pretty egregious; that's something you get right in even basic consumer electronics. They also had numerous issues with door handles (I had two fail once the car was outside of warranty).
There are things they do well, but I don't know if I'd qualify them as having better reliability.
Disclaimer: I work and worked for subsidiaries of big automakers but this opinion is of my own.
I would guess it's a mixture of culture, cost, and scale. Culture wise, Tesla is extremely vertically integrated, so that gives them a lot more breathing room. Cost wise, Tesla pretty much still sells cars at a loss and relies heavily on carbon offset subsidies for income. When your revenue is established, trying to change margin or anything is probably much more difficult. Lastly, it's scale: while Tesla is trying to catch up, the throughput of the major automakers is absolutely mind-boggling. The whole system is a well-oiled machine where any sort of downtime is detrimental and difficult once the line is established. To put it into perspective, Tesla's "monumental" Q2 shipped 200K cars or so. GM sold 200K per month in the US alone.
> Cost wise, Tesla pretty much still sells cars at a loss and relies heavily on carbon offset subsidies for income.
I'm sorry but that is just straight up complete nonsense. Like seriously, you are directly disagreeing with public financial statements. We know exactly how much margin Tesla has, with and without carbon offset.
The simple fact is, Tesla has leading automotive margins even when you exclude any carbon credits.
If you look at the 10K (https://www.sec.gov/Archives/edgar/data/1318605/000156459021...), Tesla received $27,236M in automotive revenues (which includes sales of regulatory credits, thanks Elon). The corresponding cost of sales is $20,259M giving gross profit of $6,977M and gross margin of 26%. But after that, you have operating expenses ($4,636M) (blah, blah, interest, taxes, other, blah) and a final net income of $721M.
What matters in terms of what we are discussing now is automotive margin.
Their 'Automotive gross margin' is 28.4%. If you exclude regulatory credit that is still '25.8%'.
Those are flat out great margin numbers in the automotive industry.
These facts literally disprove this phrase:
> Tesla pretty much still sells cars at a loss
Unless you simply interpret that phrase differently than everybody else.
If you want to make a larger statement about Tesla on a company level, that is a whole different thing.
If you want to ignore unit economics: in Q2 they made $354 million in credits. Total GAAP gross margin was 24.1%; ignoring credits it's still 21.8%.
I'm too lazy to calculate the operational margin, but it's still good excluding regulatory credits. This regulatory credit storyline is literally on its last legs.
Tesla is still a growth company and their margin and profitability are already good despite them not even having manufacturing in all large markets. In Q2 they didn't even build their high margin vehicles.
My main point being, again: claiming that Tesla sells vehicles at a loss is literally nonsense.
Tesla has to deduct the “cost” of issuing new shares for stock compensation like for their CEO. It’s not a cost as in they spent cash, but is deducted because it lowers shareholder value. A better sense of their actual profit/loss is their free cashflow which is positive even after your deduct ZEV credit sales.
Cash flow != profit. You can be cash positive and profitable (best case), cash positive and not profitable (at least you won't go under automatically), cash negative and profitable (you face the risk of going bankrupt by running out of cash), or cash negative and not profitable (usually dead).
What the other poster did was dive into Tesla's SEC filings. Those show:
Tesla is offering stock -> they still raise money
Tesla is selling emissions certificates -> that explains most, if not all, of their profits.
Not sure how new funding factors into their cash flow, and I am too lazy to look it up. It does seem, though, that Tesla's car business isn't enough to stand on by itself for now.
The little changes are going to end up as technical debt they will pay for later. And notice that they haven’t come out with any meaningful refresh of any of their cars, some of which have been on the market quite a while. It’s definitely too soon to suggest they have a good, effective update strategy.
Tesla has yet to do even a face lift on its Model S. In the meantime, most incumbent OEMs have either come up with EVs based on existing platforms or with completely new EV platforms.
> OEMs (GM, Ford, Toyota, VW, etc) do not design components, and they do not want to. They design specifications for components, and then get suppliers to bid. This is great for efficiency in established ecosystems, not great for agility.
There are some great stories about SpaceX trying to source components this way and then finally ending up having to DIY.
What they found was that modern CAD and rapid prototyping made it easier than it used to be. Car companies would probably find the same thing, and have the luxury of doing it piecemeal at their own pace by gradually insourcing components in the order of necessity or benefit.
The “do not want to” part probably points to these companies being run by Ivy League MBAs educated in the 1990s and 2000s when this was conventional wisdom. The world is changing.
I worked in aerospace for six years before changing into automotive. The difference could not be bigger.
In aerospace cost didn't matter a lot and many things are custom built, even down to custom alloys and materials. Also the amount of planning, numerical analysis, simulation, preparation and especially testing is insane. Project timelines are sometimes more than a decade.
In automotive 'cost down' is the mantra and is often enough measured in fractions of cents. Almost nothing is customized and parts reused for different product lines whenever possible.
I wish company leadership knew that conventional wisdom is time-bound and has an expiration date. Of course, actually trying to figure out if the conventional wisdom still holds involves risking time and money on R&D. And, the larger the institution the more risk averse the leadership will be. It’s a real shame that the companies you would think have the most ability to absorb risk are the most averse towards taking them.
>>>To my knowledge, GM did not cancel any chip orders, because GM itself had no chip orders (this is oversimplified).
Come on, really? So GM cancelled the orders for all the things that the chips go in, and you want to spin that as "well, technically GM did not cancel the chips"?
That is bullshit. The supply chain does not work like that: if GM cancels an order for 100,000 ECMs, the supplier for the ECM is going to cancel their order for the 100,000 chips they needed to make the ECMs for GM. It is unrealistic to believe anything else would happen.
> If you were going to design a new car electrical architecture from scratch today, you would have something like a 40-60V system with a centralized controller (or pair of controllers in a safety redundant configuration).
Aircraft have historically used high-frequency AC, on both sides of the Iron Curtain (as both sides were copying each other).
Old analog instruments actually benefited from AC availability.
And back when SMPS were big and heavy, transformers were much more reliable, and smaller (and they still are, depending on the frequency).
Three-phase power also gave some interesting cost-cutting benefits at the time.
> “We were able to substitute alternative chips, and then write the firmware in a matter of weeks,” Musk said. “It’s not just a matter of swapping out a chip; you also have to rewrite the software.”
As far as I can tell, Tesla plays fast and loose, treating their product like a manufacturer of consumer electronics and not a manufacturer of a dangerous and durable good. That obviously allows them to out-compete other auto manufacturers who are more aware of things like product liability.
> However, on October 24, 2013, a jury ruled against Toyota and found that unintended acceleration could have been caused due to deficiencies in the drive-by-wire throttle system or Electronic Throttle Control System (ETCS). Michael Barr of the Barr Group testified[30] that NASA had not been able to complete its examination of Toyota's ETCS and that Toyota did not follow best practices for real time life critical software, and that a single bit flip which can be caused by cosmic rays could cause unintended acceleration. As well, the run-time stack of the real-time operating system was not large enough and that it was possible for the stack to grow large enough to overwrite data that could cause unintended acceleration.[31][32] As a result, Toyota has entered into settlement talks with its plaintiffs.
Not following software best practices(i.e. ISO 26262) can expose your customers to unnecessary risk and leave your company vulnerable to lawsuits. Tesla may learn a very expensive lesson one day, or they may get lucky. Time will tell.
---
Depending on the design of your system and software stack, switching SoCs can be anywhere from a massive effort to a simple recompilation. Even switching architectures could be trivial for some components(i.e. a UI written in HTML5 making REST API calls) or a nightmare for others(ECU, ABS or other real-time algorithm written to use bare metal modules on the SoC without an OS providing abstraction).
Toyota accelerator-gate is largely a farce. Almost all of the people involved with Toyota unintended acceleration were over the age of 65 and had the poorly designed loose floor mats in their cars.
Even if they had perfectly readable, John Carmack-tier code, they still would have lost.
Sure but that's not related to the point I'm making. The point is that not following software design best practices can leave you open to liability, or at least put you in such a weak legal position that you are compelled to settle.
If/when Tesla gets hauled into court over its software flaws, any lack of adherence to best practices will make the company appear negligent.
It's certainly a better argument than the GP's claim of "the safest cars out there" with zero evidence given to substantiate that. I think it's fair to refute such an unsubstantiated claim with a reported instance where safety was not maintained as can reasonably be expected.
Tesla cars have the highest safety ratings in the world, they have the best active safety features, and their whole fleet complies with the new standard of active safety features (literally the only company that has that).
This is literally just the basic fact of the situation. If you want to disagree with literally every state safety agency in the world, then you are the one who needs to come up with real evidence.
How do you square that with them removing sensors from their cars and going all-in on doing self-driving exclusively using vision? If they were so passionate about spending money to make the safest cars, they'd be doing sensor fusion.
> If they were so passionate about spending money to make the safest cars, they'd be doing sensor fusion.
That's your opinion. They have shown data where the information from radar was of a lower quality than what their vision stack was providing (e.g. lack of vertical resolution). Watch their AI day webcast for examples. Their current vision-only stack has been validated with LIDAR ground-truth data.
If the radar data conflicts with what vision is saying, which one do they trust? They have shown that their vision stack has surpassed what they can do with radar. So "fusing" that data in only makes it worse when it gives conflicting information.
Sensors are obviously important, but it doesn't really matter if two sensors conflict; the important thing is which one is consistent with their running model.
So far their biggest problem is that they allow flicker. That's why sometimes the software picks the wrong lines. (Of course this is a very hard problem. Our brain conveniently smooths over sensory changes for us, because that's how our everyday reality is. Things don't flicker in and out of existence, nor does a car suddenly appear as a different thing, then switches back.)
And if it turns out the sensor(s) failed, it has to be able to handle that too.
I worked for Ford many years ago and I remember a really funny conversation between young engineers and MBA types. The issue was that the Lincoln Town Car was designed to have extremely low pedal pressure because that is what the old buyers liked. Combine old people with limited hearing, no pedal pressure, and a quiet cabin, and it was a perfect recipe for old people to shift and send the car through the garage wall. Lol. The classic issue: when customers want something stupid, do you build it?
The PCM (engine controller) that a university audit got access to had 12,000 global variables. Their conclusion was it was not possible to prove either way.
There was definitely a real issue with floor mats getting the pedal stuck. I don't know enough background on it past the audit that was done.
Easy to do. I stepped on the accelerator instead of the brake once. First impulse is to step harder, until you realize what’s going on. If your realization is slow, bad things can happen.
If bad things happen, the tendency is to blame someone else.
I followed the accelerator-gate thing fairly closely, and I came out pretty convinced it was a computer failure. If it wasn't, then Toyota's floor mats were many times more likely to jam the accelerator than those of any other make or model.
There was a plausible root cause - a bit flip in the CPU. It was proved that flipping the right bit would cause the uncontrolled acceleration that characterised each event.
I don't agree Toyota was guilty of shoddy work, or cost cutting. At worst, they were guilty of shipping some fairly shitty, hard-to-support code. (Someone below mentions 10,000 global variables.) If anything, that shitty code just proves how dedicated Toyota was to testing something until all the safety related bugs are gone, because despite some horrid kludges, like relying on watchdogs resetting the thing when it got stuck, with a reset so fast most people wouldn't notice it, and despite the intense scrutiny of the code, no one ever found a safety issue with it. Toyota was well aware that an electrically noisy area like an engine bay could cause bit flips, and defended against it. Every variable was stored in two places in RAM; on use they were always read and compared, and if they differed it was reset.
Sadly for Toyota, they used a proprietary ECU and OS provided by NEC. NEC didn't let their customers look at their proprietary stuff. NEC didn't defend against bit flips. And it turned out a bit flip in a task-ready bit could cause that task to never be scheduled again. If that task was supposed to turn off the cruise control acceleration, then bingo.
I have no idea what changes Toyota made in response to this mess, but if I was to hazard a guess, it would be that when it comes to software, they now demand their suppliers open their source to them.
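For anyone curious what that "store it twice and compare on use" defence looks like in practice, here's a rough generic sketch in C (a common pattern, not Toyota's or NEC's actual code; names and the fallback behaviour are illustrative):

    #include <stdint.h>

    typedef struct {
        uint16_t value;
        uint16_t inverted;   /* always stored as ~value */
    } redundant_u16;

    static void redundant_write(redundant_u16 *r, uint16_t v) {
        r->value = v;
        r->inverted = (uint16_t)~v;
    }

    /* Returns the stored value, or safe_default (and flags a fault) if a
     * bit flip has made the two copies inconsistent. */
    static uint16_t redundant_read(redundant_u16 *r, uint16_t safe_default, int *fault) {
        if ((uint16_t)~r->inverted != r->value) {
            *fault = 1;
            redundant_write(r, safe_default);   /* re-arm with the safe value */
            return safe_default;
        }
        *fault = 0;
        return r->value;
    }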
The pedals are close together in my car, and a couple of times I have hit the accelerator rather than the brake. But I knew immediately what was wrong and adjusted.
It literally took me years to understand it - how can you accelerate by accident? Until someone mentioned that all of these cars were automatics.
But every automatic I've ever driven (not many - I prefer a clutch) moves forward unless you're pushing the brake in. In the rest state, it's in motion.
The problem isn't Toyota, the problem is a broken system.
That's a mechanical side effect of the torque converter used in automatics, which even at engine idle with no acceleration input is able to transfer power to the drivetrain.
The Toyota case was an instance of the car's engine control software unexpectedly commanding acceleration not requested by the user.
In principle it could happen on a manual car as well, but most of Toyota's vehicles in the U.S. are automatics.
People panic and jam the wrong (or both) pedals.
(There are quite a few people who use two feet to drive automatics, one foot for the brake, the other for the gas.)
> There are quite a few people who use two feet to drive automatics
They are very irritating to drive behind. Some seem to keep pressure on the brake pedal, and the brake lights stay on as they accelerate. When they slow it takes longer to realise they are braking as the lights have been solidly on.
Are you sure they're not just driving a manual transmission car? I can keep my foot lightly on the brake and apply the clutch to accelerate, without touching the accelerator pedal (diesel engine has a lot of low-end torque). Not that I do this in practice, except in drive-through lines just for the fun of it.
In first gear I can reach 8 km/h, but it's possible to get all the way to fifth gear while idling (takes a long time though).
Perhaps, but left-foot braking is a useful technique, and common practice in some forms of racing.
In street driving, the main application is being ready to brake while driving normally. It is much faster to brake with the left foot hovering over the brake pedal than by moving the right foot from the throttle to the brake. Of course covering the brake pedal with the right foot precludes driving normally on a flat road or uphill.
This should of course be practiced in low-risk situations before being attempted in high-risk situations.
Not sure how this is possible. Automatic cars almost always require you to keep pressing the brake if you're not driving. You can't turn the car on without pressing the brake, and you can't shift from park into drive without pressing the brake. There must have been something very broken in the Toyota case.
From earlier discussions here (https://news.ycombinator.com/item?id=9643551 "10,000 global variables"), Toyota's conservative, arguably antiquated approach to software architecture wouldn't necessarily prove reliable as that software evolves.
Trying to assess whether thousands of global variables are still playing nice during a major rewrite to accommodate a new chip would definitely be difficult and time consuming!
Not that a fast-and-loose approach is ideal, either.
Writing reliable software in the context of an organization is hard.
10K global variables sounds insane, until I realize "oh, huh, we have that too, we just call it _config_".
Almost every place we would have a constant (tuning parameters, etc) we instead have a configurable value with the likely default declared in code, but all overridable in config. Managing config can be a hassle, but the number of times we've merely had to tweak a value rather than roll a new build pays for itself every day.
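Roughly, the pattern looks like this (a hedged C-flavoured sketch with hypothetical names; real config systems parse a file or receive values over a link rather than hard-coding the override):

    #include <stdio.h>
    #include <string.h>

    typedef struct { const char *name; double value; } tunable;

    static tunable tunables[] = {
        { "throttle_filter_hz", 25.0 },   /* compiled-in defaults */
        { "idle_target_rpm",   750.0 },
    };

    static void apply_override(const char *name, double value) {
        for (size_t i = 0; i < sizeof tunables / sizeof tunables[0]; i++)
            if (strcmp(tunables[i].name, name) == 0)
                tunables[i].value = value;    /* config wins over the default */
    }

    int main(void) {
        apply_override("idle_target_rpm", 800.0);   /* e.g. parsed from a config file */
        for (size_t i = 0; i < sizeof tunables / sizeof tunables[0]; i++)
            printf("%s = %g\n", tunables[i].name, tunables[i].value);
        return 0;
    }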
We have thousands of such "global variables"... of course they are read-only, so aren't used to share state.
If Toyota is doing that: nbd. If they are using them to share state ... god have mercy on their souls.
I always interpreted the "10K global variables" to mean 10K directly accessible, heap-allocated symbols. It's a code reek. If you use a configuration object or other abstraction, the number of accessible variables may not change, but their scope and access method does.
It's a matter of interpretation I guess, but I would not classify a wrapper object (like .Net's ConfigurationManager singleton) as "10K global variables" even if the accompanying config contained 10K items, or if the ConfigurationManager backing store was preallocated in the data section of the binary.
>I always interpreted the "10K global variables" to mean 10K directly accessible, heap-allocated symbols. It's a code reek.
Which to me, on an old _accelerator_ design, is suspect, because that's a lot of heap for the micros that would have been used back in the day. This isn't a Linux system. It was at best a micro with something like 8KB of RAM. (I'm not actually sure what it is, but there's no way it was impressive.)
> Other egregious deviations from standard practice were the number of global variables in the system. (A variable is a location in memory that has a number in it. A global variable is any piece of software anywhere in the system can get to that number and read it or write it.)
However, that was from the article author explaining the testimony, and not the testimony itself. It's not totally clear whether the variables were writable.
I would have guessed they were statically allocated rather than heap allocated, but what really matters is whether they were `const` — and they probably weren't `const` because otherwise this testimony wouldn't have rung true:
> "And in practice, five, ten, okay, fine. 10,000, no, we’re done. It is not safe, and I don’t need to see all 10,000 global variables to know that that is a problem," Koopman testified.
I would suggest you watch the Munro Live videos where they disassemble and analyse cars. This company has been doing this for decades and recently started making some videos.
The changes in Tesla vehicles come absurdly fast. They disassembled the original Model 3 and then the China-built Model 3, and since then the refreshed 3 and the Y.
Recently they showed the new Model 3 LFP (lithium iron phosphate), and the electronics and how they are integrated are already different again.
How does that work out for maintainability? Is Tesla going to maintain parts inventory long enough to support owners for the lifetime of the vehicle(15-20 years?)?
> That obviously allows them to out-compete other auto manufacturers who are more aware of things like product liability
A lot of this era comes down to this. You have a legacy industry with tons of regulations, then a new guy comes up, steps on all the lines, and convinces people they're smarter for doing more for less. (up until regulations come back in the equation).
It's very easy to claim that everybody else is not following regulation and that the only reason you are losing is because you do. It's much harder to prove.
If anything, GM had more problems and failures of their EVs so far. So the claim that Tesla is ignoring liability and doing much higher risk development is literally just an accusation based on nothing.
In actual fact, GM is spending tens of billions buying back every single Bolt, and they have been forced to shut down the line. At the same time Tesla had no such issues.
So, how about these people actually prove or substantiate in some way that the newcomer ignores regulation or quality? Especially when they themselves have a worse record.
Let's start by assuming that the engineers are at least basically competent, shall we?
I mean, I could implement a PID controller without any 74xxx logic at all: a basic op-amp, a handful of resistors and capacitors and boom I'm done!
But there's a reason that engineering doesn't do that these days and it's not that engineers don't know how. My PID control built from an opamp? In 2021 that's almost always a stupid idea. We use computers because doing it digitally has many advantages. We don't use 74xx (or any other logic family) for this because microcontrollers are far more flexible, allow for behavior changes without reworking hardware, allow inventories to scale (the same component can be used in hundreds of products) and more reasons I won't detail.
The reality is that in 2021, often the simplest, most robust and cost-effective way to do something trivial is by using a computer to do it.
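To illustrate the point: a whole discrete PID fits in a handful of lines of C (gains, timestep, and the toy "plant" below are made up), and retuning it is a software change rather than a rework of the board:

    #include <stdio.h>

    typedef struct { double kp, ki, kd, integral, prev_err; } pid;

    static double pid_step(pid *c, double setpoint, double measured, double dt) {
        double err = setpoint - measured;
        c->integral += err * dt;
        double deriv = (err - c->prev_err) / dt;
        c->prev_err = err;
        return c->kp * err + c->ki * c->integral + c->kd * deriv;
    }

    int main(void) {
        pid c = { .kp = 0.8, .ki = 0.2, .kd = 0.05 };
        double plant = 0.0;                       /* toy first-order "plant" */
        for (int i = 0; i < 50; i++) {
            double u = pid_step(&c, 1.0, plant, 0.01);   /* 10 ms timestep */
            plant += 0.01 * (u - plant);          /* crude plant response */
        }
        printf("final output: %.3f\n", plant);
        return 0;
    }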
Ok, let's assume for a moment that the throttle can be modelled by a PID plant (it can't) - is the circuit temperature stable? You don't want the characteristics changing as the engine heats up. Now you need a compensator circuit. What about power supply stability and noise? Car power busses are hella noisy, due to the alternator and spark coils. Now you need some serious conditioning and filtering. What about knocking? Modern ECUs adjust parameters on knock detection to reduce the damage from pre-ignition. Gotta put in a control system for that.
Repeat for manifold air temp, exhaust temp, emissions, rev limiter, etc., and, as you mention, make that 3x for redundancy. Now you have hundreds to thousands of components, in a high noise, vibration, and temperature environment, and you have something which is about as efficient as an '80s car. And you can't as easily simulate it because it's analog.
Or just use some digital chips and simulate all the tuning.
It's really easy to overreact because of the silicon shortage, but in many cars these chips are replacing physically manufactured parts or significantly increasing efficiency in a way that isn't possible with physical components.
Electronic fuel injection is a _great_ example. Much of modern fuel efficiency stems from the fact that an engine can accurately control fuel in the cylinder based on a bunch of factors. Not only does this system require physically fewer materials, it uses less gas (in turn saving production effort).
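To give a flavour of what "control fuel based on a bunch of factors" boils down to, here's a hedged sketch of a simplified speed-density fuel calculation in C (all constants are illustrative; real ECUs layer closed-loop O2 trim, knock, warm-up enrichment, and many more corrections on top of something like this):

    #include <stdio.h>

    int main(void) {
        const double cyl_vol_m3   = 0.0005;   /* 500 cc cylinder               */
        const double map_pa       = 95000.0;  /* manifold absolute pressure    */
        const double iat_k        = 310.0;    /* intake air temperature        */
        const double ve           = 0.85;     /* volumetric efficiency (table) */
        const double r_air        = 287.0;    /* J/(kg*K) for air              */
        const double afr_target   = 14.7;     /* stoichiometric for gasoline   */
        const double inj_rate_gps = 2.5;      /* injector flow, grams/second   */

        /* ideal gas law for air mass per intake event, then fuel to hit AFR */
        double air_kg   = (map_pa * cyl_vol_m3 * ve) / (r_air * iat_k);
        double fuel_g   = air_kg * 1000.0 / afr_target;
        double pulse_ms = fuel_g / inj_rate_gps * 1000.0;

        printf("air %.3f g, fuel %.3f g, injector pulse %.2f ms\n",
               air_kg * 1000.0, fuel_g, pulse_ms);
        return 0;
    }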
Very few applications strictly need a Turing-complete machine. However, microcontrollers are ubiquitous because they are so flexible. One chip does the job of many, without needing to be specialised, allowing all applications to take advantage of the economies of scale in semiconductor manufacturing. (Also, as pointed out here, you drastically underestimate the complexity of a modern ECU.)
By that explanation, I'm actually 100% confident I'm understanding you correctly.
Most of the functionality of these chips can be represented in some form of hardware. However, that hardware is often much more expensive, significantly less flexible, and likely significantly bulkier.
There’s nothing trivial about a modern internal combustion engine. The throttle control system is doing quite a bit more work than just opening and closing a physical throttle valve.
Okay, so you've designed it. Now, make sure it works with the dirty electrical signal of a typical car's power bus. Then make sure it fits in the space requirements of under the car's hood, or in the door, or any of the other tight places that these embedded systems go. Then make sure that it's verified to act over several years of vehicle life. Then, make sure your suppliers will guarantee you that they won't discontinue the part for at least 10-15 years because of legal requirements for spare parts. Just throwing discrete logic at the problem doesn't always help.
ECUs are simpler to build than the mechanical components they replaced and do things impossible to do mechanically (given the space and weight limits of a car).
Jason Cammisa has a great series on this stuff. Here's just one episode on one topic, drive-by-wire throttle control: https://youtu.be/gKsCHx5NOMM
Have you looked at any vehicle from the past 3-5 years? I own one with drive by wire and you can feel and hear it reacting to variables when you push it to the limits (either in inclement terrain or on a closed course). The linked video has specific video examples from production vehicles as well.
Deciding how much fuel will be injected each cycle is not just a simple PID. The throttle body control may just be a basic servo (although it probably isn't) but the system controlling it is more complicated.
So much speculation here. Let me set the record straight(er). (I worked in Tesla FW, left 5 years ago.) Tesla does use industry best practices. All code must pass MISRA checkers, code is modular, and reused when possible. We used all sorts of chips (that were auto qualified) of many different architectures. We had extensive tools for e.g. stack monitoring, diagnostic readouts, and much more. If you move from, say, one ST ARM chip to another one in a similar family, the peripherals may be a little different, but they generally work the same way (with maybe a few more or fewer features). So reworking the I/O drivers is work, but given the excellent layering of Tesla "body control" FW, it's really quite straightforward to pick a different processor from a given family. It's for sure true that if you went from ST ARM to Freescale/NXP ARM the peripherals are different, so yes, more time would be needed to write the I/O drivers. But the effort ongoing when I left was exactly to make the appropriate SW generalizations so it would be possible to do exactly this.
To elaborate on this, the feature sets on these low-level microcontrollers are not standardized. There is no common API for them to implement. Even chips coming from the same manufacturer will have different hardware capabilities, and although low-level drivers can abstract that away to a certain extent, there will always be differences.
The biggest challenge is when you need to update your firmware to use a microcontroller from a different manufacturer. Oftentimes these chips are specifically chosen for the set of hardware functionality they offer, and the firmware is written to take advantage of that from the start. The two are coupled.
Now you are forced to use a different chip, and the firmware that was written for a specific set of hardware has to be modified for a new chip that may have a different feature set. The fine print on how things like analog-to-digital converters behave becomes extremely important.
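To illustrate what "abstract that away to a certain extent" tends to look like in practice (all names here are hypothetical, not any vendor's real API): the application is written once against a small driver interface, and the per-chip backend is the part that has to be rewritten, and re-verified, when the part changes.

    /* Hypothetical ADC driver interface. Application code only sees
     * adc_driver; each MCU family gets its own backend, and the backend
     * is what gets rewritten (and re-qualified) when the chip changes. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        int      (*init)(void);
        uint16_t (*read_mv)(uint8_t channel);  /* result in millivolts */
    } adc_driver;

    /* --- backend for "vendor A" (stubbed out here) --------------------- */
    static int      a_init(void)          { return 0; }           /* clocks, calibration */
    static uint16_t a_read_mv(uint8_t ch) { (void)ch; return 0; }  /* 12-bit, right-aligned */

    /* --- backend for "vendor B": same interface, different fine print -- */
    static int      b_init(void)          { return 0; }           /* different calibration dance */
    static uint16_t b_read_mv(uint8_t ch) { (void)ch; return 0; }  /* 10-bit, left-aligned, needs a shift */

    static const adc_driver adc_vendor_a = { a_init, a_read_mv };
    static const adc_driver adc_vendor_b = { b_init, b_read_mv };

    /* Application code written once against the interface. */
    static uint16_t read_coolant_sensor(const adc_driver *adc)
    {
        return adc->read_mv(3);  /* channel assignment is board-specific */
    }

    int main(void)
    {
        const adc_driver *adc = &adc_vendor_a;  /* picked per board variant at build time */
        (void)adc_vendor_b;
        adc->init();
        printf("coolant: %u mV\n", (unsigned)read_coolant_sensor(adc));
        return 0;
    }

The catch, as described above, is that the interface only covers features both chips share; anything relying on one chip's particular ADC quirks (resolution, alignment, trigger modes) leaks out of the abstraction.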
Even a single hardware iteration intended to be a drop-in replacement in some cases won't be due to relying on implementation-defined behavior that is not guaranteed by the datasheet and wasn't considered a constraint by the people designing the silicon. Sometimes it's a bug, sometimes it's just "didn't think that was important".
Either way, you're left with something where the saturation behavior changed, or it's no longer possible to read out a value without risk of corruption since they assumed it can be treated as write-only, or some other hard to find and debug problem.
Swapping out chips is an exercise in testing and risk management, and never should be done without care, even before we start talking about safety critical applications.
I see this as a failing of microprocessor design. I/O should be standardized and there should be high-level APIs that do not change across chips or companies. MCUs are intentionally fragile in this respect to try to lock buyers into one MCU product family. It is absolutely stupid that I/O ports work differently for different MCUs and don't use a common API. It means every designer always has to reinvent the wheel when moving to a new MCU. This must have had a huge negative effect on product reliability and innovation. It's also why many products have moved to using Arduinos.
There is also a huge lack of improvement in MCU designs. Why doesn't my SPI bus autonegotiate everything? Why can't the serial port do the same? This should all be available in hardware as a standard feature of the ports. What a waste of decades of engineering talent to have to repeatedly troubleshoot everything at the bit level.
Thank goodness they don't all use a standardized design. What if the standard design was the wrong one? You couldn't switch to a competitor with a better design!
Tesla is much better at writing software than most of their competitors. That's how they can adapt so quickly.
They also do a lot more in house than their competitors. That means they can optimize their development processes and align with chip suppliers across different components. If you have to work with a multitude of suppliers that each ship their own hardware and software, life is a lot more complicated. Changes take years in such an environment. Even a simple thing such as over-the-air software updates is still science fiction in a large part of the industry. VW famously struggled with doing that for the ID.3, only managing their first updates fairly recently.
Of course, Tesla is still affected by supply issues as well. They can't switch suppliers every quarter.
I think you underestimate how poorly software development is done in traditional industries. Practice makes perfect and they mostly practice writing reports not software.
It isn’t that their engineers are inherently better at writing software. The parent comment explained exactly why their organizational and supply chain structure allows them to be more agile and better at writing software and delivering it.
You cannot replace a Xeon with an Epyc without redesigning the motherboard. In reality it's even worse: you replace a chip from ST with one from Renesas, for example.
I feel like GM and Ford are at US Government and NASA size where getting any change through takes so long and so much middle management and red tape that it stretches to infinity.
I’m sure Tesla is more agile than that. For better or worse sometimes…
The bigger the company, the more effort that’s needed to coordinate the parts. At some point one person can’t keep the whole in the head and has to lean on process. Tesla is small enough, and has a CEO that can keep everything in his head.
Tesla has over 70,000 employees. I know people think Elon is god, but come on, he can't have "everything in his head". Tesla isn't some mom and pop shop.
If this is true, stay away from anything Tesla does. I work in the industry. Two examples: 1) You have to change a transistor to one from a different supplier, same characteristics. You need to do some HW engineering tests. If the transistor is part of a safety-relevant function, you also need to do system, SW, and safety tests, and then environmental and EMC tests. Of course you can tailor, but you need a very good justification. 2) You change the microcontroller. You need new SW, new tools (emulators - which take some time to be delivered), new sourcing, new HW design, etc. From a development perspective it is a new product, so you need testing (HW, SW, system, safety) and design validation (environmental tests, EMC), maybe more than one loop. And when everything is passed and sourcing is done (i.e. you can buy the component) you can start production.
I get that what Musk quoted is not a trivial task, and if accurate, is probably much quicker than OEMs and Big Auto could do since Tesla is presumably much better at software.
The article quotes Intel at 16nm, which is, what, 10 years from the cutting-edge process node? At this point mature OEMs and Big Auto should have had a refresh process in place to migrate their chips forward. The semiconductor industry is 40-50 years old now, and new generations produce better chips (cost, power use, performance).
As other comments say, it smacks of laziness and lack of forecasting.
Tesla has other advantages, since they are so much more vertically integrated, they can probably manage migrations a lot more easily and centrally.
Big Auto is more like "Big assemble OEM parts". OEMs make all the components, Big Auto just wants to screw them into place and wire them together. It's part of why they suck at software that integrates things: They don't make any of the components, and a dozen different departments order them from a hundred suppliers, so getting interfaces/protocols/specs is a lot more time consuming than for Tesla.
Wouldn't really make any difference. The last company I worked for did pretty much everything with MCUs based on ARM Cortex-M.
All that shit's still not in stock anywhere. And remember that ARM, x86, etc. only defines the processor architecture. The I/O is far more important in a control application and I/O is anything but standard across platforms.
For one product we ended up buying a vendor's entire supply and redesigning the product to use it, crossing our fingers that the supply chain would have unwound itself by the time we ran out of inventory.
Tier 1 suppliers have an incredible amount of influence in that ecosystem, and Tier 1 suppliers have an incentive to not be easily replaceable. That'd be my guess.
I mean, it's an entire MASSIVE industry that's comprised almost entirely of truly commodity technology, and so you get all the perverse outcomes that happen when trying to "differentiate" despite being commodity by inventing ways to be "special" to drive lock-in. Interoperability only really benefits the OEM, and the OEM isn't the only part of the chain.
As others have mentioned, the peripherals/etc related to a specific model of microcontroller and how you work with them play a big factor.
But I'd also like to throw in:
- 16 years or so ago, the US side of the auto industry was trying to steer towards PPC. Motorola/Freescale's presence in the market probably played a part in this, as well as ARM's then-status of 'not quite powerful enough to be future proof.' A StackOverflow post from 2012 [0] seems to indicate they did indeed standardize; whether they moved on from there is another question.
- Most carmakers probably -don't- want to change designs outside of a refresh. There's a few reasons for this, both external (updating documentation for the repair network) and internal (when I worked at a place that did software/services for a US automaker, we had to submit pages of documentation/paperwork to change the placement/wording of a couple of items in a dialogue box).
Correct, but you seem to have disregarded half my comment. ECUs are not failure-proof and fail all the time.
I've never heard of anyone dying due to ECU failure either... I'm not sure how that would even happen, given all the critical systems on a car are mechanical first with electronic assist. So you can't lose steering, braking, etc. The worst that happens is you lose power, which is about the same risk as a subpar standard-transmission driver stalling out (and can also happen in a number of different ways). Can you expand on how an ECU failing might kill you?
I would love it if someone could elaborate on why I'm wrong instead of drive-by downvoting. This isn't reddit.
It's easier and more realistic to kill the company via a recall.
Take for example the Boeing 737 MAX. Eh, it's bigger, needs a little more elevator movement to simulate the older model, just flex the software so it can wiggle a bit more, what could possibly go wrong?
Likewise, remember the VW Diesel emissions "scandal". You can nearly kill a company without actually killing anyone.
So, the specified ECU chip (which is no longer in stock) could output 40 mA to the gas tank vacuum solenoid, so we spec'd the solenoid to draw 30 mA on the coldest day of the year; usually it draws much less. Got a substitute chip only rated to 20 mA; usually it'll work fine, what could possibly go wrong? Until it burns out and the vacuum solenoid fails open, and nationwide millions of gallons of "excess" gas evaporate per year from sitting cars. Harmless on an individual scale, but on a nationwide scale it's a lot of ozone... Insert yet ANOTHER $35B recall to replace all the ECUs for willful emissions violations...
I'm just saying it's not binary, where either people die and it kills the company or people aren't even harmed and nothing bad happens at all. Plenty of "company killer" situations where "what could possibly go wrong" got the F-around and find out treatment.
Because the architecture is not so important. What is important is what peripherals are available. And even if you have the same architecture you still need to test if you change something. And things change rapidly in automotive too: 10 years ago you had a 100-pin micro at a 32 MHz clock with 64 KB of RAM, now you have a 384-pin micro running at a couple of hundred MHz with 256 MB of RAM.
There's an incentive for chip vendors to not standardize since it makes migration hard and drives vendor lock-in. The CPU cores are one part but equally important is the peripherals and external featuresets.
That incentive might exist, but what really drives the lack of standardization is that every application has different needs, so you end up with a common CPU and wildly diverging configurations tailored to various applications.
I'd guess adding an abstraction layer (between a general-purpose microcontroller and the custom interfaces it needs to support) is likely not worth the added complexity (and bugs) for the benefit of simpler software development and the risk reduction of being able to swap out the microcontroller if need be, since cars are on development cycles of many years anyway.
They actually, theoretically, have a thing for this already. It's called AUTOSAR (now AUTOSAR Classic). It's supposed to be a component architecture for writing the software against a framework that can be backed by different MCUs. Of course, in practice much of the underlying esoterica of any given MCU/board configuration bled up through the abstractions and/or produced custom extensions.
Indeed, it's a mess and the surrounding tooling ecosystem is a pile of weird Windows GUI tools of about the quality you'd expect for very expensive tools that do very little provided to a captive customer base. It's all very circa 1990s RAD-tool mania-esque still in many ways.
That said, we're working with some groups in some automotive companies who are breaking this mold rapidly.
I can't upset the NDA Gods on this one, unfortunately. I can say one is a traditional OEM that you wouldn't necessarily expect to be the sort to take vanguard steps like this and the other is a newer OEM that's not as encumbered by tradition (though due to the supply-chain also can't rid themselves of it).
Shameless plug WARNING: we (https://auxon.io) are hiring for engineering, marketing, and BD if industrial & operational technology is your thing.
I never signed the NDA, although I follow the space distantly, and they went public on their own website a couple years ago with a list:
BMW, Bosch, Continental, Daimler AG, Ford, General Motors, PSA Peugeot Citroën, Toyota, and Volkswagen.
I always rooted for an older competitor GENIVI which (very handwavy in a general sense) boiled down to "make dbus great again". I just always have a soft spot for anything that replaces CORBA and DCOP. More or less GENIVI had the same players as AUTOSAR.
I believe AUTOSAR suffers from trying to do too much; GENIVI's "all you need is a compatible bus" is the minimum you need so simplicate and add lightness and that's what should win; doesn't really matter if that guy runs QNX and that other guy runs FreeRTOS as long as the busses talk; whereas AUTOSAR tries to own and control everything top to bottom which just ends up stifling any productivity.
If you look through the "Adaptive AUTOSAR" literature and documentation that's out there you can find that enormous chunks of it are lifted straight from GENIVI's stack/specs.
However, GENIVI/Adaptive-AUTOSAR tends to serve a different function in the vehicle architecture. It's mostly cockpit and less control/platform. AUTOSAR Classic is still the champ on the MCU side of the house.
Except that in some cases, the automakers are not using a CPU with external interface hardware, they are using microcontrollers similar to the PIC series, example here: https://ww1.microchip.com/downloads/en/DeviceDoc/40300C.pdf (Note, I'm not suggesting they use these exact chips, this is just an example of the level of onboard integration that is available for micro-controller chips).
Single-chip system, with built-in analog comparators, timers, signal capture and PWM generation module, serial UART, etc. I.e., a whole "system on a chip" with the external-world interfaces already built into the chip. Changing one of these for an ARM CPU with external analog comparators, timers, PWM modules, serial UARTs, etc., is a redesign effort, not just a "change the chip" effort. If the clock speed possible via the internal clock oscillator is sufficient, then this chip needs only +5V and ground (plus programming) to be able to interface to and control some external analog or digital equipment.
Looks like the Cortex-R52 and Cortex-R5 (a CPU and an MCU, respectively) have both been ASIL-D (the highest rating) qualified, which is fantastic frankly. There's a bunch of legacy PPC, MIPS, and TriCore tech that's just miserable to work with because of their ancient and/or bespoke & proprietary toolchains. So really, the automotive industry lacks excuses that I would buy for why they're not moving forward onto new platforms, other than the institutional inertia and supply-chain entanglements that it's prioritizing instead.
I think the Cortex-R chips are more targeted here - I think the v8-R chips are designed specifically for automotive use, so I assume they have appropriate ratings.
There are very lazy car companies which simply specify "automotive certified" chips for everything, including MCUs running window switches. So don't be surprised to see $20 ARM-R based MCUs being used to do something trivial like driving a window motor.
I've seen a few dashboards where there is an STM32 connected to a single button, whose only duty is to register a key press and burp something onto the CAN bus in response.
A) I doubt you need a $20 component to register a key press and do some CAN bus IO.
B) If you did have a $20 microcontroller for CAN bus control of a window motor driver, but it saved $19 in extra wiring/labor (driver controls usually route to all windows), provides reliable debounce, automated one-push-to-open, and allows things like holding keyless lock to close all windows, it's totally worth it.
Every component (resistors, transistors, microcontrollers, etc.) must be automotive qualified (AEC-Q). That's how you have a minimum standard. A button cannot communicate on the CAN bus; that's why you need the microcontroller.
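For what it's worth, the whole "STM32 that burps something on the CAN bus" job really is tiny. Here is a hedged sketch of such a node with the hardware access stubbed out; the function names and the CAN ID are made up, not any real vendor HAL:

    /* Hypothetical window-switch node: debounce one button, report state
     * changes on the CAN bus. Hardware access is stubbed so the sketch is
     * self-contained; a real module would map these to the MCU's GPIO and
     * CAN peripherals. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CAN_ID_WINDOW_CMD 0x245u  /* made-up CAN identifier */

    static bool gpio_read_switch_up(void) { return false; }  /* stub */
    static void sleep_ms(uint32_t ms)     { (void)ms; }       /* stub */
    static void can_send(uint16_t id, const uint8_t *data, uint8_t len)
    {
        printf("CAN 0x%03X: %u byte(s), first = %u\n",
               (unsigned)id, (unsigned)len, (unsigned)data[0]);
    }

    int main(void)
    {
        bool    reported     = false;  /* last state sent on the bus        */
        uint8_t differ_count = 0;      /* consecutive readings that differ  */

        for (int i = 0; i < 100; i++) {          /* forever, on real hardware */
            bool pressed = gpio_read_switch_up();

            /* crude debounce: accept a new state after ~20 ms of stability */
            differ_count = (pressed == reported) ? 0 : (uint8_t)(differ_count + 1);

            if (differ_count >= 4) {             /* 4 polls x 5 ms */
                reported     = pressed;
                differ_count = 0;
                uint8_t payload[1] = { pressed ? 0x01u : 0x00u };
                can_send(CAN_ID_WINDOW_CMD, payload, 1);
            }
            sleep_ms(5);
        }
        return 0;
    }

The part cost isn't paying for that loop; it's paying for the AEC-Q qualification, the on-chip CAN controller, and the temperature range that the parent comment describes.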
A big chunk of this crisis was caused by car manufacturers being insanely focused on cost cutting. Doing obviously dumb things with the BOM isn't exactly a calling card of the automotive industry.
Vehicle electronics have some real environmental challenges as well.
They have to function correctly:
* after they've been parked outdoors in Death Valley all day.
* after they've been parked at -40 degrees for a long time.
* when doused with slush containing road salt.
* when their wiring harnesses deteriorate after a couple of decades of hard use.
* at least for a few seconds after a catastrophic crash.
It takes the kind of risk-taking guts that Tesla exhibits to push new, better, electronic parts into test and production. Most car companies' executives, designers, and test engineers just don't want to take those risks.
> risk-taking guts that Tesla exhibits to push new, better, electronic parts into test and production.
After their delaminating non-automotive-grade screens I'd be wary.
Their solution to it was to make the air conditioner always run in the sun when parked and promote it as a feature, wasting untold megawatt hours of electricity.
That sounds like a possible death trap. Drive a Tesla to some remote place without chargers, stay out there for a week, then come back and notice the car is dead and you can't reach the next charging station.
I looked into this a few weeks ago while exploring purchasing an electric car, and from what I could find "tow charging" was experimented with but has basically been given up on. The expectation is that if you lose power, you will need a flatbed truck to cart the car to a charging station.
> Charging efficiency isn't 100%, so you'd have to tow it further than the range you need
That's not how it works. You basically get 5 mpg or less on the truck, while the car charges at significantly higher rates than the speed of the truck (in mph), because the car is more efficient at using the energy.
Ah I think I was tripped up by the same thing as the top commenter. This sounds like it violates the law of conservation of energy.
The key here is that when towing (as opposed to going downhill) you can put much more energy into the motor-generators per mile than is required to drive a mile. Due to charge/discharge losses, for your example this would have to be > 4x the amount. (For instance, if the car could regen at, say, 50 kW while being towed at 25 mph, that's 2 kWh put into the pack per towed mile, versus roughly 0.3 kWh consumed per driven mile.) This obviously assumes the EV can regen at high sustained rates and you have a powerful tow vehicle.
So yes, you're right. The energy/distance argument should be one and the same.
I did a short stint in the automotive industry ~10 yrs ago (on the software/safety side) at one of GM’s brands.
The lack of forward-thinking was absolutely horrible, as was the constant insistence that anything new would never work (remember the constant doomsaying about Tesla?).
I got out as fast as I could, I seriously think that being in that environment might cause serious mental impairment…
So, not in the least bit surprised about this. Just nice to hear that the world is moving on whether the big auto brands want it to or not.
Admittedly it was a long time ago, but I worked in the semi industry and this isn't how things were done. We manufactured batches of totally obsolete EOLed devices for various customers when they had sufficient volume. Devices that are in volume production (e.g. for cars) are carefully managed by people who do nothing but ensure that they're available in the right place at the right time in the right volume. In order to not have product available to meet demand, either a factory has to catch fire, or the customer has to screw up their forecast.
The customers screwed up the forecast. That's a huge part of the problem. They assumed the pandemic would kill demand and cancelled orders, but it didn't, and the cancelled capacity was already sold to someone else.
As I understand it, there's also the case that you can't just move from one fab to another of the same process size at will. There's different setup/software/procedures, so the capacity isn't instantly interchangeable. There may even be fabs with excess capacity, but no "ready" customers.
I know of dealerships that had sales drop by 95% in the initial months of the pandemic. What they got wrong was the duration of the drop and that sales would bounce back, rather than simply return to normal after a bit.
Now some dealerships are selling new cars above MSRP, just because supply is so low.
There are a lot of people who have benefitted enormously from the economy/stock market crashing, then being resuscitated by this unprecedented money injection:
Politicians and bureaucrats should have planned with industry ahead of time and notified them that the COVID impact would be cushioned by huge financial stimulus. But why do that if you as an individual bureaucrat or politician can make millions by trading on that insider knowledge?
It's unfair to blame suppliers when there has been a government conspiracy against them.
Due to covid I’m back to driving trucks and they have the same supply problems as everyone else — the dude I’m working for can’t find a used truck to put me in because new trucks are taking 6-8 months to get.
Used trucks sell as fast as they go up on the websites (with the lease returns from the company we’re running for selling even before that) and the truck dealers apparently think this is the time to charge excessive finance fees (basically doubling the cost of the truck) because they can.
Quite a stressful time…for him. I don’t really care since I’m making money and team driving isn’t as bad as I remember it from the last time I did it 23 years ago.
Come on, let's say they get the STM32 for a quarter at really high volume. There are tens if not hundreds of them in a car. A $25k car would give you 100k STM32s ($25,000 / $0.25). 1k-10k cars is hardly decades.
I always thought chips were dominated by the fixed capital costs of fabs. If they were dominated by the variable material cost of wafers, as the article seems to imply, it wouldn't make sense that we see 90nm microcontrollers selling for $1 and a high-end 16nm PC CPU selling for $1000.
So the question is, what's the reason that 90nm microcontroller sells for $1?
I'm trying, and failing, to figure out an economic model that explains the market dynamics we actually observe.
If building a new 90nm fab costs $billions, almost as much as building a new 16nm fab, why does the 90nm microcontroller sell for 0.1% of the price of the 16nm Xeon?
If building a new 90nm fab costs 0.1% as much as building a new 16nm fab, why can't existing chip companies, some startup or GM themselves spend $10's of millions building a fab that can unblock $100's of millions of product, and alleviate the shortage?
Chips made on modern processes are dominated by capital costs. Old chips are made on old foundries, which are fully depreciated and therefore have no capital costs. If you made a 90nm foundry today, then it probably couldn't sell those microcontrollers for less than $100 each, just like a modern CPU.
>why can't existing chip companies, some startup or GM themselves spend $10's of millions building a fab that can unblock $100's of millions of product, and alleviate the shortage?
They can't do it fast enough. Standing up a new foundry takes years under the best circumstances, and today you couldn't do it all, since all the tooling is sold out and deeply backordered. If GM could snap their fingers today and magic a cleanroom into existence, they'd still be waiting a hell of a long time to put tools in it.
Secondly on the price question, microcontrollers just have way fewer gates than a desktop CPU. A dozen registers, a thousand bytes of RAM, a few kilobytes of flash. This makes them physically smaller, which lets you put more on a wafer, and makes the unit price cheaper. The total die area of the ATmega8 is just 7.9 square millimeters... at the 500nm node! https://zeptobars.com/en/read/atmel-atmega8 This gets you 8,100 dies from a single 300mm wafer. (If there are any 300mm foundries at 500nm, which there probably aren't)
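A back-of-the-envelope check of that wafer math, using the common gross-die-per-wafer approximation (it ignores scribe lanes, edge exclusion, and yield, so it lands a little above the ~8,100 figure quoted above):

    /* Rough gross dies per wafer: pi*(d/2)^2/S - pi*d/sqrt(2*S).
     * Ignores scribe lanes, edge exclusion, and yield. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pi       = 3.14159265358979;
        const double wafer_d  = 300.0;  /* wafer diameter, mm                 */
        const double die_area = 7.9;    /* ATmega8 die area from above, mm^2  */

        double dies = pi * (wafer_d / 2.0) * (wafer_d / 2.0) / die_area
                    - pi * wafer_d / sqrt(2.0 * die_area);

        printf("~%.0f gross dies per 300 mm wafer\n", dies);  /* roughly 8,700 */
        return 0;
    }

Compare that to a big desktop CPU die of a few hundred square millimeters, which yields only a few hundred candidates per wafer before yield losses; die area alone explains a big chunk of the price gap.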
Thirdly, what makes you think anyone could build a 90nm foundry at all?
Which makes sense. Fabs don't make all their tools in-house, that's done by vendors. As an example of one tool, by one vendor, the NXE:3400B EUV stepper by ASML. Unit cost, $175 million: https://www.tomshardware.com/news/tsmc-euv-tools-order
These are ultra-bespoke, super-low-volume machine tools. ASML makes a couple dozen or a hundred steppers of a given model, then shuts down production and starts upgrading to the next node. Is it even possible to buy a new 90nm stepper today?
I'm sure they have all the documentation and could roll back to the previous generation easily enough, if they had to. (Which they don't! They are maxed out just supplying current-gen tools) But given all the voodoo in semiconductor lithography, people retiring etc, I bet that rolling back two decades would pretty much require starting from scratch.
Any discussion about why you can't transition to newer chips that doesn't mention the software is incomplete.
Chips are not fungible in part because of software compatibility.
OK, so you got a newer chip, and have redesigned the board and everything to fit. Now you have to get all the old firmware running on it and validate it.
If the new chip isn't a 100% backwards compatible version of the old one, including all the peripherals, that could be a considerable effort, fraught with risk.
In the long run we'll only use FPGAs and soft cores.
In 2050 if your old 2035 Ford needs a new ECU you'll just stick a somewhat newer and larger FPGA on it, and if the 2035 ECU needs three hardware I2C buses with clock stretching, one of which has to do 10-bit I2C addresses, and it also needs three hardware PWM pins with 11-bit resolution, two 8051 cores running at 5.25 MHz each, and two CAN buses, you (or more likely Ford) will just compile the brand new FPGA to have exactly and precisely all that stuff and it'll work fine.
> In the long run we'll only use FPGAs and soft cores.
People have been saying this for decades and it is no truer today than it was then.
FPGAs and soft cores lose on cost and power--which are both ferociously tracked engineering goals when you have real volumes.
To a first and second order, if all FPGAs suddenly disappeared, nobody would care. Networking and test equipment would get somewhat more expensive and double their lead times. Nobody else would notice.
Would a soft core which is 10 times smaller than the original chip being emulated always lose on power? What if it's clocked down so that the emulated chip doesn't run much faster than the original?
In automotive, is power so ferociously tracked? Most of the energy in a car is to get it moving, and even running the cabin fan requires more wattage than a reasonable control system.
> Would a soft core which is 10 times smaller than the original chip being emulated always lose on power?
If that were even possible (which it probably isn't--the fundamental "building block" of an FPGA is easily 10x the size of a similar fully custom gate), the soft core would still likely lose. Power in CMOS is proportional to C Vdd^2 f. Vdd hasn't moved in a while--so you've lost the biggest hammer in terms of power consumption. And C goes up when you change nodes--not down. And we're holding f constant according to your assumptions.
Furthermore, FPGAs are general purpose so they need things like clock spines that are optimized to work at their max frequency rather than at whatever frequency you chose. You burn a remarkable amount of power and area so that the chips can operate at say 400MHz even if you never run them faster than 8Mhz.
> In automotive, is power so ferociously tracked?
Not as such--automotive is generally more interested in whether your chip can survive power spikes. However, automotive is super sensitive to cost which is all about chip area.
> Most of the energy in a car is to get it moving
While true of ICE cars, that is less true than you might think with electric cars and regenerative braking. I can go for a very long time on 20mph roads between charges, but the moment I get on a 65mph freeway, my battery gets sucked dry. I think the power lost to atmospheric drag goes as something like the third power of velocity - it's a really huge increase when you go even slightly faster.
However, as you point out, climate control is the real killer of energy--running AC or heating is power hungry.
In addition, the self-driving car folks discovered that running all that vision and radar and then processing it sucked down a remarkable amount of juice. Computes and sensors aren't exactly free.
I think you are probably right, economies of scale mean that using a ridiculously powerful modern process to emulate an old processor often makes sense.
The C64 Mini runs a modern(ish) ARM processor and an emulator to pretend to be a C64 - that's a machine that is literally multiple orders of magnitude more powerful than the original system.
When a PowerPC is perfectly capable for a Mars rover mission, I don't see any reason why they must run Snapdragons.
Unless some evil company starts ditching physical controls or starts shoving ads into the speedometer.
And everyone follows.
I'm thankful that automobile tech hasn't discovered Electron yet.
Imagine your speedometer consuming 200 MB of RAM.
Speaking from experience, I can tell you automakers are already shipping vehicles where some infotainment apps are web technologies. Not Electron, but a similar system to run an encapsulated web page.
I'm doubtful of the cluster being run by a browser engine though.
I look forward to the cyber-punkish future (real soon now?) where there's an old beige computer case with a Pentium MMX sticker on it on someone's passenger seat with wires snaking into the engine bay. "Yeah the ECU died, so I had to hack together this replacement...".
There are cars where the enthusiast scene burns their own ECU EEPROMS to compensate air:fuel calculations after installing larger injectors, better flowing intakes, etc.
Edit: There's also some "universal ECUs" from companies like Haltech. They sell a single ECU plus some adapter cables to make that single ECU run on a broad variety of cars from the 90's.
Possible since the 90's cars all have the same rough types/numbers of sensors. Switchable software does the rest. I imagine this doesn't work past some year of make as the cars got more complex.
Well I never wrote "just a few", but I did write "mostly".
Also, disclosure: I'm intimately familiar with the ECU software of a major player in the industry, though primarily the part that governs the air system. It's surprisingly sophisticated and extremely configurable just by fiddling with those calibration values.
There are two problems, long duration supplies of a chip on the same process, and the cost of making chips.
If the semiconductor companies can figure out an ASIC/FPGA type strategy that would allow any of the current chips in a car be produced "dynamically", then the semiconductor company could achieve their economies by just fabbing one design, and car companies could achieve longevity by getting commitments for production of that one design.
Xilinx recently made some steps in a new direction by producing what they called an "RF" SoC, where RF stands for radio frequency. Basically it had all of the analog-to-digital and digital-to-analog bits on the chip, in addition to some FPGA fabric and some ARM AARCH64 cores. It's expensive and small-quantity, but I think it will turn out to be important in the long run as the vanguard of chips that are not 'type specific' at the time of manufacture.
Imagine a company that has the same basic FPGA architecture, packaged in a variety of automotive spec packages, with one-time programmability. That would reduce the number of SKUs considerably (basically by package type).
This feels like lack of planning. The semiconductor industry was a known entity for many years before this level of integration came to vehicles. They should have known this would happen.
Ultimately the industry will have to maintain their own production for consistent silicon.
Texas power outages took down one of the few major US plants and a random fire took down another, at the same time as a global pandemic. We're almost at "imports banned from China" levels of implausibility here.
And you have to witness the mess without these people, or if they aren't allowed to do their job, in order to believe it. Better even, live the mess, I can assure you it is quite an experience.
Why can't they just make pin-compatible versions of the chips using new processes? That would take time, but surely easier than investing in new fabs for old processes?
At the root is the question: "Do we make it more complex?" or "Do we try it 3 times?"
The carmakers have been making the wrong call for the past decades.
One of their suppliers has eclipsed their impact on society, marking one of the first times in something like ~70 years that things didn't bend over to meet their needs.
----
And before you ask: yes, I would rather have a pacemaker with triple redundancy and thrice redesigned.
But for the cost, couldn't (for instance) a 16nm fab produce 90nm chips?
If not, how complex would it be to "port" an existing 90nm chip, to produce a 16nm revision? Impossible, or quite easy but not cost effective? From the POV of the car companies, could such revisions be used directly, or would they need to be qualified in the same fashion as new parts?
If you're bored you can go to mycmp.fr and check their process catalog and compare a typical 55nm run to a 160nm run. It's not quite as standardized as ordering PCBs over the internet.
Different metallization (you usually don't get to choose Al or Cu) will have different resistances. Generally the smaller processes will be faster, so a design with no race conditions or metastability problems on 160 might not run reliably, or at all, on 55. Some processes are analog oriented, so they'll guarantee up to 60 volts or more if you're trying to design power devices; other processes are logic oriented and might have a standard voltage of 2.5 V, 1.1 V, etc.
You can design something that'll run on a 55nm process and something that'll give similar performance on a 160nm process BUT they'll be different designs. I think the closest analogy would be changing processes is like changing manufacturing material. You can make a car piston out of steel or aluminum but you can almost never just swap materials in an existing assembly line.
No and about as expensive as having designed it in the first place. Given design costs are the largest portion of per-unit costs it's not a good investment.
Could you explain why not? Naively, having a much more dense process should allow automatic conversion. Going from 90nm to 16nm is 10x the density, that's a lot of margin for an automatic tool to use. Why doesn't that work?
It's more than just a size change and there are features besides transistors. Say you have a 100 fF capacitor; is that because that's the right value based on an external constraint (say, interacting with a crystal) or because it's matching the inductance of a long internal path? Because you adjust them differently based on circumstance. And your transistors with a specific load must still supply the same current as before so they can't shrink as much as ones for internal logic.
Also, because the materials have changed, the dielectric constant probably has too, and now the relative sizes of components need to adjust. And circuits are designed to minimize switching loss based on the old switching time, so now they'll be wrong.
Oh, and the breakdown voltage of the new process can't handle the voltage many of these old circuits use IIRC.
Sure, I understand that making a new car in combination with making a new ECU or brake controller on a new process is going to be stupidly dangerous and troublesome. So just don't.
Rather than doing it as a part of a single project, there should be a department or separate company making the ECU/controller. This way when the semiconductor company moves forwards, ECU Group starts a project targeting the new process, and releases the results when it's ready. And meanwhile what goes into the cars is the ECU built on the previous, well tested process.
Also, are modern cars really in much need of custom silicon? Micro-controllers are stupidly powerful now. There has to be off the shelf hardware capable of handling what a car needs. There are long standing architectures like ARM that don't require starting from scratch every time somebody makes a better chip.
And usually this is what happens. Sort of. Big brands have new cars in the pipeline, and their product teams and thus the whole supply chain has newish stuff in development. [0] But since the market is rather fragmented/niche/specialized (even if the gross volume is large, the number of market participants is small), it simply makes no sense to launch a new component without already having orders for it.
The problem is that when COVID struck almost everything was put on hold, scrapped, and ... it simply takes too much time to get things back on track. (As there is basically a full half year of shortage.)
> are modern cars really in much need of custom silicon
Well, again, because the market is special, there's a lot of custom stuff, even if it's almost the same chip internally. Lack of openness and standardized interchangeable components lead to this. (Yes, there are a lot of aftermarket parts, but they are probably even worse.)
Also, probably the big dilemma for automakers is that they are simply not accustomed to this level of transparency in their supply chains. Now they point the blame at chipmakers, but they don't buy chips, they buy components from (the cheapest) vendors. (Who are cheap because they also never had any stored inventory, nor rainy day funds, nor any elegance and care in their designs that would allow this kind of retooling to other chips.)
[0] But only very-very-very tiny incremental newishness. Large major version upgrades are seen as something that is too risky. (See this part from the article: "Quigley added that trying to design new chips and vehicles that will use them in parallel often introduces yet more headaches.") And even if the whole industry is a decade behind, there's no time to catch up, because there's just no market need for upgrading that subcomponent to have a better software/CPU platform. Instead automakers launch wholly redesigned lines. Or source completely new components for the new set of functionalities envisioned. (But if some vendor can meet the new specs with again a bit of incremental upgrade, then that'll happen.)
Start reading IEEE, SAE, or other trade magazines; get an EE degree in the Midwest; get a job in the automobile supplier industry; go to industry conferences or just read their proceedings...
Really well written investigative piece. It's so often forgotten that safety-critical hardware like a car ECU dances to a different rhythm than consumer electronics that just has to not explode in your pocket...
Is a part of the problem that the chips they are depending on are so trivial and small that you would end up losing more when cutting it up on a smaller process? I guess there is a limit to how many chips you could put on a single wafer even if the chip was only a single gate?
Maybe the old nodes are efficient enough for the chips they need. Then again it would make sense to update designs frequently enough to stay at nodes that have plenty of capacity, perhaps every 5 or 10 years or so?
Capital costs of a fab are the single highest cost of producing chips. So long as the fab is running, it's cheap to keep running, so there's little reason to spend money putting cheap things on other processes (which is an expensive design cost) when you can later switch to a more complex chip (which should allow you to reduce supporting components too) that takes advantage of the new process size.
Can't is a really strong word. One potential solution that might open some paths to innovation would be to split up the product a bit more. Ship vehicles with extremely basic features only and some of those features intentionally designed to be swapped out in the short term. Use third party suppliers to swap in instrument clusters, engine controllers, and such and allow those to use all the latest chips.
My old car runs fine controlled with a single atmega128 for the injectors, and that is the only chip in there (outside of the radio that I never use (I already carry a phone for all sort of entertainment))
The amount of safety features you require alone would not allow this. To be compliant you need to ship a number of safety features across all your models.
In addition, you need to actually meet fuel standards, and that alone requires a whole lot of sensors.
And people actually, shockingly, want to have advanced features and the ability to link the car with their phone.
The simple reality is people wouldn't buy the low-tech car you describe. Seriously, people are not spending 30-50k on a car that can't play music from their phone.
If you completely redesigned the product, maybe you could reduce the number, but that is effort that is actually far larger than the problems with supply. And if you started to redesign the product now, you would almost certainly use more chips anyway, because the requirements are going up, not down.
Can we manufacture designs for larger/older process nodes on newer ones? I understand it's a little bit like the DPI on a printer. I imagine there are many other changes, and expense, but would it work?
Early electronic fuel injection was pretty shit. https://en.wikipedia.org/wiki/Jetronic No throttle control, cold start compensation with a bimetal strip + heater coil controlling extra air flap. Sensors were super primitive or omitted altogether (no temp sensors), air was measured with turbines restricting flow, no lambda, etc etc.
Crazy Honda fact: all their ECUs from 1992 (P05) onward had the full facility to drive individual coil-on-plug, but officially Honda switched away from the mechanical distributor in 1999 (S2000?). You can trivially convert a '92 car with a small adapter board; the ECUs shipped with all the needed firmware and hardware in place, sitting unused for 7 years.
> "I’ll make them as many 16 nanometer chips as they want"
However,
> Carmakers have bombarded him with requests to invest in brand-new production capacity for semiconductors
> featuring designs that, at best, were state of the art when the first Apple iPhone launched.
He then says of that:
> "It just makes no economic or strategic sense"
So if we look at this from a capitalistic standpoint, automakers are not offering enough money to convince Intel to continue supplying their "old" types of chips. Either the automakers need to supply a convincing amount of money, or they need to adapt as their suppliers change priorities.
Blame the EPA and California for more aggressive pollution and economy standards. You could still build a 1970s car today without chips, you just couldn't sell it.
Highly inefficient death traps with no active safety feature and no ability to play music or tell you where on the map you are. Great solution there buddy.
Sounds like the automakers play down the talent they have and prefer to push the work outside, probably fearing they are not able to execute.
This reminds me of being a 3rd party selling service to some company and they constantly ask for us to implement features that they could easily add - but can't. They don't say that flat out but we all understand what's happening..
Give me a microcontroller with between 32K and 2MB of onboard flash, some EEPROM, multiple CAN peripherals, ADC, SPI, good timers, and some specialized peripherals - synched PWMs with dead time for driving power electronics, and some other special things. Dual-core lockstep for safety-critical things (like the frunk latch). Some combination of all that at a price starting under a dollar.
Oh, and a temperature range of -40 to +125C ambient.
Can Intel do that on their obsolete 16nm multipatterning process for anywhere near the price target? Didn't think so.
The ESP32 satisfies each of your requirements. They are available for $1 (without BT) in bulk pricing, have 4MB flash, have EEPROM, have CAN, ADC, SPI, good timers, good PWM, a good DAC, are realtime, dual-core, and have an ambient temperature range of -40 to +125 C.
> The aspect of the problem not talked about is that a lot of automotive chips are really, really old.
> And they are old, because of many certification requirements set partially by the industry, and a few odd governments.
> In reality, most of them are both less reliable, and harder to work with in comparison to open market parts.
> The only two places I've seen chips with micron-scale nodes used in my life were: air conditioner boards, and car parts.
For example, who in the world today makes PMOS TTL chips? I bet the few foundries who can still do that will bill car parts makers an arm and a leg for keeping making something this old.
Want to migrate? Find somebody who can translate a hand-drawn TTL chip to moderately modern CMOS less than 60 years old.
At times, the cases I heard of car makers having trouble with were also nothing less than a direct consequence of conscious overengineering: some BMWs have one STM32 per window switch just to blink the LED, and do ADC despite the switch having just 5 positions. And now they can't ship because of this single LED blinker.
Window controls are safety-regulated modules in modern vehicles. They detect the closing force and reverse direction if the required motor torque exceeds a threshold, so that you can't accidentally or intentionally choke someone with the windows. This feature is required by standards bodies and some regulators.
Why not put those smarts in one big module, say the Body Control Module, which is always huge and already gets redesigned and uses more expensive, smaller-feature chips anyway, you ask? Well, we need to run a bunch of wires to the door: several for the switch, power and ground, and a Hall effect sensor for the motor, and we need to account for the noise and power loss on the long wire run, because as a safety feature the sensing has to be ASIL compliant to high fault tolerance. Turns out that that length of wire is several dollars more than putting the module in the door… These things are all done for a reason and the automotive engineers aren't solely stuck in the old ways. They're frequently penny-pinched to death, but if you look at the constraints they're doing their best.
> Window controls are safety-regulated modules in modern vehicles. They detect the closing force and reverse direction if the required motor torque exceeds a threshold, so that you can't accidentally or intentionally choke someone with the windows. This feature is required by standards bodies and some regulators.
All of this is achieved with a single <$1 part, a fixed-torque clutch... But who in the world wants to intentionally choke somebody with a power window? This adds to the long list of absurd product-liability regulations, in the spirit of one demanding "do not operate the appliance with your genitals" be printed on every stove.
> A 1997 government study by the National Center for Statistics and Analysis estimated power windows sent nearly 500 people to emergency rooms in one year, and that half the victims were small children.
I remember watching A Faster Horse, a documentary on the Mustang. There was an interview with an accountant type at Ford who pointed out a three cent difference in part cost per car multiplied to millions of dollars at their scale. A $1 part might have given him a heart attack.
Seriously, people need to watch Sandy Munro on YT. He literally went through some calculations on the money you can save by removing a single fastener from the design.
He was a former chief engineer at Ford and has had his own company for decades now. He also worked in Ford's global financial headquarters.
When he sees some small board screwed down with too many screws he starts ranting.