Toyota's Unintended Acceleration and the Big Bowl of “Spaghetti” Code (2013) (safetyresearch.net)
152 points by UberMouse on June 2, 2015 | 145 comments



This is appalling:

    Toyota had more than 10,000 global variables.

    “And in practice, five, ten, okay, fine. 10,000, no, we're done. 
    It is not safe, and I don't need to see all 10,000 global 
    variables to know that that is a problem,” Koopman testified.
and:

    Toyota’s failure to check the source code of its second CPU, 
    supplied by Denso —even as executives assured Congress and 
    NHTSA that the cause of UA couldn’t be in the engine software
and:

    He was critical of Toyota watchdog supervisor – software to 
    detect the death of a task -- design. He testified that Toyota’s
    watchdog supervisor “is incapable of ever detecting the death 
    of a major task. That's its whole job. It doesn't do it.

    Instead, Toyota designed it to monitor CPU overload, and, 
    Barr testified: “it doesn't even do that right.
and:

    Barr also testified that Toyota’s software threw away error codes
    from the operating system, ignoring codes identifying a problem with 
    a task.
When the news first broke a few years ago, given Toyota's reputation for quality and process, I thought this was an American industry-led witch-hunt of a Japanese competitor. But if this testimony is correct, what Toyota engineers have done is unforgivable.
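For anyone who hasn't worked on embedded systems, here is roughly what a task-liveness watchdog is supposed to look like. This is a minimal sketch of my own (names like task_checkin and NUM_TASKS are made up, not Toyota's): each periodic task proves it is still making progress, and the supervisor refuses to service the hardware watchdog if any task has gone silent, forcing a reset to a safe state.

    /* Minimal sketch of a task-liveness watchdog (hypothetical names such as
     * task_checkin() and NUM_TASKS are mine, not Toyota's).                  */
    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_TASKS 8U

    static volatile uint32_t task_alive[NUM_TASKS]; /* each task bumps its own counter */
    static uint32_t last_seen[NUM_TASKS];           /* supervisor's previous snapshot  */

    void task_checkin(uint32_t task_id)             /* called once per task cycle */
    {
        if (task_id < NUM_TASKS) {
            task_alive[task_id]++;
        }
    }

    bool supervisor_all_tasks_alive(void)           /* called from a periodic timer */
    {
        bool all_alive = true;
        for (uint32_t i = 0U; i < NUM_TASKS; i++) {
            if (task_alive[i] == last_seen[i]) {
                all_alive = false;                  /* no progress: the "death of a task" */
            }
            last_seen[i] = task_alive[i];
        }
        return all_alive;                           /* if false, stop kicking the hardware watchdog */
    }

A supervisor that only watches aggregate CPU load, as Barr describes Toyota's doing, cannot make this per-task distinction, which is why it can never detect "the death of a major task".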


> “And in practice, five, ten, okay, fine. 10,000, no, we're done. It is not safe, and I don't need to see all 10,000 global variables to know that that is a problem,”

At an old job, my boss asked me to take a look at a business critical application to see if it could be improved upon. It had some deficiencies that were really hampering things.

I got the source from a coworker (this is when I worked on rockets, so none of us were or are professional software people, but even my dumb engineer self knows this is not how it should be). I opened up the folder. Lots and lots of files with little apparent structure. Ah, there was a file called "MAIN". I opened it. Visual Basic 6. Over 29,000 lines for global variable declarations alone. The actual program logic heavily used a commercial library designed to (shudder) make VB6 development more like making macros in spreadsheets. The original programmer then implemented the main program logic in this hellish abomination. The logic took up another 50,000 lines or so.

I told the boss that it would be easier to write a new program from scratch than attempt to understand it.

Has anyone seen a program with more globals than this?


I once worked for a billion dollar retail company. Their core back office applications (sales tracking, logistics, orders and fulfillment, staff rosters, etc) were written in COBOL and the inputs and outputs were read from / written to temporary files; kind of like pipes, but via temp files in case of job failure. The real problem was that there were about 3,500 COBOL programs and 2,500 shell scripts all in one directory. All the executables, COBOL and shell alike, were given a 4 digit filename; there was no up to date documentation on what program did what (apart from the source), or in which order they should be run, or the dependencies therein. To top it off, end of day processing would take between 12 and 16 hours to run; any major problems and it would start affecting the next day's EOD. There was only one guy in the office (who had been there 15 years) who understood how it all fit together; when he died things got much worse (no, he wasn't that old; how he died is another whole war story altogether).

I only lasted there 6 months before I left for a start-up; in many ways it was too much to bear. A while after I left, taxation for retail changed substantially, there was a massive IT stuff-up, and the billion dollar business wound up being carved up and sold off; several thousand jobs were affected, and many lost their jobs either directly or indirectly as a result.

Sure, this is probably at the worst end of the scale (short of causing deaths) and is rare, but it does happen: a billion dollar company succumbs largely because of IT failures. It got to the point that the cost of the technical debt was higher than the carve-up losses...


To the grandparent: you're not supposed to look at generated code; if you have changes you're supposed to re-generate it. So at some level, that's why that happens.

In COBOL 85, all variables are global to the program, and statically allocated at program start-up. It's not uncommon for a COBOL program to import tens or hundreds of 'copybooks' and have hundreds to thousands of variables in the global scope. They are, however, namespaced after a fashion.

My experience with this came from a similar behemoth mainframe app; 35 MLoc, 15,000 programs, thousands of jobs, running a batch and interactive green screen app that a company hosted. Was the source of about $500M a year in revenue.


>To the grand parent: you're not supposed to look at generated code, if you have changes you're supposed to re-generate it.

Generated code? What do you mean? (I have no familiarity with VB6 at all. I have written quite a bit of code in C, Python, and MATLAB). Besides, all this was handwritten (I eventually had a conversation with the code's original author while at a party). The guy was crazy enough to write all of that by hand. I genuinely think the effort made him a little unhinged.

And of course he could have been lying...


"Generated code? What do you mean? (I have no familiarity with VB6 at all."

I think the parent is referring to generated COBOL code.


Sounds like you need to submit this to DailyWTF...


Nowhere near as many, but I had to deal with this very recently and this little WSJ gem showed up:

https://github.com/ewfelten/Tracking-Report-Card/blob/master...

I'll let the code speak for itself.


That looks like autogenerated code. That's a different beast altogether.


It is. Mozmill is a testing framework. What we're looking at there is a file containing automated tests (which I think should be fairly obvious to someone looking at that file...)


I'm well aware of what it is. Like I said, I had to deal with it recently - I didn't just pull it out of thin air.

What you're looking at is a horror beyond comprehension. There's much better, more maintainable ways to write automated tests.


I guess we'll have to disagree, especially about this part:

> horror beyond comprehension

"Horror"? No, but I guess that's debateable (apparently). But "beyond comprehension"? Absolutely no. Even if we say the repetition offends our aesthetics, I can't even imagine what sort of issue you had to deal with where this file posed a problem. Anything you could possibly want to do with it should be serviceable inside of 5 minutes, except in the dumbest of Notepad-quality (i.e., little more than a text control) editors. That includes rewriting the whole thing to be less offensive, if that's your slant.

This file is nothing compared to the types of things mentioned upthread, things which, again, exist in a context of actual production code.


If that is horror beyond comprehension then you are a very, very lucky person. The auto-generator even indents the code nicely!


That's not bad at all - it's a lot, it's not DRY and it could be reduced to about 25 lines of code (or 25 + the number of URLs to check), but it's legible, clear, and easy to understand.


It's a strange feeling when some small webdevs are using clean, modular, testable code while real-life businesses are abusing VB in creative ways. It really makes me feel that paradigms do matter. My last job involved a huge pile of VB (VBA to be exact) code and, again, it was an unstructured mess.


Yes, just about every damn PHP project I inherited that wasn't bound to a framework, or wasn't an existing OSS project.


After having lived and worked in Japan, this doesn't surprise me. Software engineering education is really weak there, managers who became managers because of their seniority tend to impose their own antiquated decisions and do not listen to best practices. The other problem is that status wise, it's considered lower level to be a software engineer than to be a mechanical or electrical engineer, so the smartest kids do not study computer science (this might have changed since).


This is also an interesting contrast to the (I assume higher quality) kind of work being put out by their other engineering departments: how can they make other sound engineering decisions on the rest of the car and its manufacture, but fail at the software?

Is it something specific to software vs. traditional engineering in Japanese culture, or is it something inherent to software in general? Given the parent's complaints about workplace culture in general, shouldn't the entire car be badly designed?


I worked in new auto development at Toyota (Japan) for four years in the mid 2000s, right before this happened. The unintended acceleration problem has also perplexed me, because in my experience any safety issue was always given top priority. But my exposure to software/firmware was limited. Usually that was looked at as part of the component, for example the engine control module in its physical plastic housing. Remember the production auto will have Denso (or similar) delivering that ECU to the factory as an "assembly", and hundreds or thousands of these assemblies are put into the car and function together. At the end of the assembly line, various tests are run on the system and vehicle, but these appear to be of the "input/output" type that someone else described. So why isn't software done better?

I'd say one problem is that the testing process is not as robust for software/firmware as it is for mechanicals. The tests revolve around driving the car, and variations on driving, and have evolved over years and decades. Heavy software control is newer, and in my experience was often not fully understood. Chief Engineers tend to come from mechanical. And there are always hundreds of problems to solve and limited time, and the black-box nature of firmware makes it hard to develop tests for.

A development review meeting tends to include various mechanical departments, like Drivetrain or Engine or Chassis; many times they bring physical prototypes to the meeting and the problems and proposed solutions are comprehensible. The build testing procedures subject the car/parts to various mechanical tests, and long-lead tests are performed which stress parts until breaking. It's something you develop experience with and can get your head around.

Firmware bugs are not like that -- having just gone through this with my current consumer electronics startup, I experienced the black-box component first hand. I remember one software-based problem we reviewed at Toyota, where something happened like: when the car has been sitting for >3 hours in <50F weather, and the ignition is started while the car is at a 15 degree angle (side of road) and driven onto a flat plane within 15 seconds, the "check engine" light turns on. Even though we had no test for that, we heard about it when customer complaints came through. But the fix is to have the supplier (in this case, but it could be the Toyota engineer) fix that problem, with most of the discussion centered on timing and certainty of the fix. I don't remember any discussions about global variables or lines of code.


my perspective from some friends that bought a Japanese software startup and moved over there to help run it a few years ago:

promotion is based on age and length of employment. employment is essentially for life. you can't really be fired, and if you quit you may never again be hired (and i can't imagine whistle blowing would bring you any better). seniority reigns supreme and unquestioned. there is a huge focus on hard work and long hours but also on loyalty and face.

contrast this to where and how you're working and think about the differences it would make. my friends were constantly frustrated that, try as they may, they could not elicit feed/push-back from anyone that worked under them. part of this is them being stupid foreigners that can't "read the air" properly, but part of that is people being extremely hesitant to touch anything that could be considered a break in seniority or disloyal. and this was a start-up. these were the f'in cowboys in comparison.

from what i gather, this system worked out pretty good with the long, patient and deliberate development cycles in hardware. when a product cycle takes ten years it might not be that bad to have people running it 'cause they've been there 10-20 years. but it doesn't work very well in software.


Exactly why I'm commenting so much on this. I always tell my team "Pay attention to detail, quality, method. Be more like the Japanese", to now see that even the Japanese (and their quality poster-boy no less) have trouble applying their quality philosophy to software. Not sure where that leaves my advice.


It doesn't change your advice. Toyota has a well-deserved reputation for quality in a lot of areas: customer service (they've been replacing frames on trucks for a few years at no cost to the consumer), construction, reliability, maintainability. That they have flaws in one area just means that they need to apply their practices to that area too.


There was an article about this (in the New Yorker, I think...) a few years back, about how Japan has this strange relationship with software.

How it is a hardware-first country. Many reasons were given -- cultural, how "real men build real things", infatuation with appliances, which are seen as being "hardware" in a way, perhaps the language barrier, i.e. existing software tools using English for menus and inputs, etc.

And it is kind of interesting: looking back at a country that many in the 80's feared would surpass and leave the Western world in the dust technology-wise, we haven't seen that much software produced there. Can you think of large, well known software from Japan (excluding console games)? Vine Linux, SoftEther VPN, TrendMicro, anything else?

One wonders if they see Western software companies and if there is any desire to catch up or try to keep up.

Maybe anyone from Japan can provide a better perspective?


I'm not from Japan, but I work in the US for a large Japanese company. I know that they have a large software engineering team in Japan, but if it isn't hardware related they seem to have a hard time producing quality software. They seem to have realized this, and now focus on producing hardware and have started up 3 different software teams across the US to build software for said hardware.

They occasionally send some of their engineers to our offices (for 6-12 months usually) to learn about our culture. There is a clear disconnect when they send these people. They work hard, no problem with long hours, but they have a very hard time getting into the collaborative nature of our teams, and I don't think it is just the language barrier. I guess I take it for granted usually, but it made me realize how much time we spend discussing things between developers, questioning management, writing furiously on the whiteboard, arguing, trying things out, failing, trying something different, finding problems with other people's code, fixing said problems, etc. It can take a bit of thick skin to absorb such criticism and engage in this process, but I think it is crucial to good software development. Someone who just sits quietly in meetings, then waits for a task to be assigned, and works at that task until it is complete is somewhat missing the process, because sometimes it turns out the task your manager assigned doesn't make any sense, or can't be done with this framework, or that you would have to use 100 global variables to accomplish the task this way.

I am merely speculating, but it's possible that a very rigid social and management hierarchy, where it is more important to work hard and respect your position in the company than to solve the problem the right way if that involves bucking authority, could negatively impact the ability of developers to innovate and/or produce a quality product.


> Can you think of large well known software from Japan

Ruby.


That's certainly not a shining example of software quality. I also don't think it's fair to generalize about software quality for an entire country based on 1 or 2 examples.


I've written some C extensions for Ruby, and its internal APIs seem decent. What makes Ruby a bad example of software quality?


The number of bugs, mostly referring to the older MRI versions though.


Why exclude console games?


I would say games are probably where Japan has most excelled in software, at least in the 80s and 90s. But the code is probably very bad practice by modern standards (hand coded assembly hacks, all game state in global variables). When you only have a few KB of RAM/ROM there is a lot less scope to mess up (although I have come across bugs in cartridge games myself).


I guess I see it as part of the appliance. Not something you download and install or use an API to access.


The console frameworks and APIs are all produced in an SDK that other developers use, so it's just as valid an example of a successful Japanese software product.


> I guess I see it as part of the appliance. Not something you download and install or use an API to access.

Aren't car ECUs "appliances" without API access, just like games?


let me know if you can find that New Yorker article; I'd be interested in reading it.


I think I found it: http://www.economist.com/node/18958643 (it looks like it was in the Economist, not the New Yorker).


thanks!


> it's considered lower level to be a software engineer than to be a mechanical or electrical engineer, so the smartest kids do not study computer science (this might have changed since).

Which is very interesting, because in the US it is completely the opposite (and frankly that is not good either). Is it because the US deindustrialized itself over the last 40 years?


It was not the opposite in the US 30 years ago, when I chose EE over CS, because EE was clearly more respected as being the more intellectually demanding. This had a lot to do with the capabilities of hardware at the time and the locus of value. IBM sold its hardware for millions. It gave away the software. The hardware was the differentiator; software was generic. EEs who could improve the expensive hardware were a lot more valuable than programmers who could improve the free software. In fact, since doubling a hardware resource such as RAM far more than doubles what you can then build with software, the hardware guys were more important than the software guys--even to the software.

As hardware limits were relaxed over the years, what you could accomplish in software, even with cheap hardware, grew so enormously that the primary value was increasingly in the software. This was such a rapid transition in the computer industry that it nearly destroyed IBM, it propelled Microsoft to power, and I ended up with a career in software.

Toyota and other Asian "hardware" vendors are in some sense where IBM was. They sell prize-winning hardware and whatever software that comes with it is essentially free. (Apple is a lot like this, which is why their hardware keeps getting better and their software--well, did I mention the hardware is thinner?)

The Toyota "platform" still can't do much in software, but it's growing exponentially. Eventually the software in a car might matter more than the (commodity?) hardware, but that's not the world in which Toyota managers were formed. They are obsessive about hardware, but software is an afterthought. It's for guys who couldn't cut it as "real engineers", as was the case in the American "computer industry" until the 1980s.

I also lived and worked in Japan. I was a strategy consultant, and I was told by some big Japanese and Korean companies (whom everyone has heard of) to emphasize my hardware background and not be seen spending too much time hanging out with the software guys if I valued my reputation.


Apple is not like that at all. Their software is excellent. My guess is that a large percentage of their customers keep buying their hardware because of iOS or OS X not because it's thinner.


Did you mean "Deindustrialization or deindustrialisation is a process of social and economic change caused by the removal or reduction of industrial capacity or activity in a country or region, especially heavy industry or manufacturing industry."

?

That doesn't sound applicable to the USA, which currently produces more goods than ever before, and more than any other nation except China.


I am talking about industry as share of GDP. Check the numbers: http://en.wikipedia.org/wiki/Economy_of_the_United_States_by.... Manufacturing kept going down.


It does so with far fewer people than before, though. In that sense, there has been a "deindustrialization".


I can state that this matched my experience though I was in the US - this was in the 90s, and in the automotive industry. I used to work at Motorola Automotive and Industrial and our team built the Computer-Aided Manufacturing systems that managed the production lines. It always felt like a tier-2 job compared to process engineering.


This is a genuine question: why can't they outsource their software development?


Given how they had the researcher effectively locked up in a hotel room with strict security measures, I'd say it's because they're terrified of corporate espionage and suchlike.


I wouldn't be so quick to blame the engineers. They might be working within a managerial structure which doesn't give them the latitude they need to do their jobs. Yes engineers have a duty, but in software it's often hard to know exactly what the impact of cruft will be, and often all you can do is quit.


I agree, but I also believe that as an engineer I have a duty to refuse to build something under conditions that, once built, would cause harm to others. Or at least to relay my concerns to management in writing before proceeding.

And sometimes, quitting is the ethical thing to do.


Quitting certainly is the ethical thing to do. I wonder if I'd quit if I was in their position.

Handing your concerns to management in writing isn't really a done thing in this culture. I sometimes get laughed at for even suggesting it. And people who quit often take a massive pay cut, are forced into contract work with no safety, or even don't ever get hired in software again - especially in Nagoya, where everyone seems to know everyone.

And when you have a family and a mortgage to pay (selling a house here is also a guaranteed massive loss), it can realistically be a choice between you and your family on the street or doing what you're told.

While the engineers aren't blameless, the system over here is really broken.


    people who quit often take a massive pay cut, 
    are forced into contract work with no safety
Is this due to the salaryman culture in Japan? I'd probably have a heart attack if I had to work under that kind of work culture.


I don't know about heart attacks, but Japan has pretty high suicide rate.


But what if you have a life. And dependents. And bills to pay.

As much as I would like to agree with you, people have different value systems. And money talks. So quitting isn't always an option, even though it is the most ethical. Can't pay the bills with ethics.


> So quitting isn't always an option, even though it is the most ethical.

If you think quitting a dangerous project (to be replaced by another dime-a-dozen engineer by a large corporation which employs thousands) is the ethically correct choice, your moral compass is broken.

We're talking about a device sold in the millions of units that is capable of dealing great bodily and structural harm to us and our world around us.

Whistle blowing (preferably before real world harm) is the only ethical choice for someone that is made aware of such a situation.


Then you probably should not be taking on work that human lives depend on. Placing my bills over someone else's bills is a perfectly human reaction. But placing my bills over someone else's life -- a typical human should not feel this way.


This is a systemic problem though, not a problem with the ethics of the engineers. Systems that incentivize bad behavior are to blame.


Unless you're living without luxuries then you are prioritizing your access to luxuries over the lives of others. That's unconscionable, even if you word it as "bills to pay and mouths to feed".

If you're working as an engineer for Toyota's firmware it's unlikely that you won't be able to find sufficiently well paying work to keep the kids in clothes and food and keep a roof over your head.


And, at large Japanese companies, people are shuffled around every two years. I had heard this and didn't believe it until I saw it. A distributor of my company would replace people with novices once the previous person had just gotten really good at the job.

If Toyota is doing this, it would explain a lot.


Blow the whistle? Contact regulatory agencies?


I was a victim of this. Thankfully only minor damage resulted. I owned a Toyota minivan in Thailand and was in stop-and-go traffic when suddenly the engine started revving up. I slammed on the brake and the engine downshifted to first to compensate, so I ended up bumping the car in front of me. After the bump (total stop?) the engine recovered its sanity, I guess, because the revving stopped at that point.

Naysayers will point out that the engine wouldn't rev up if I was pressing the brake. That seems to be part of the symptoms of this bug. Oh, and I didn't accidentally push the accelerator instead of the brake; otherwise I wouldn't have just "bumped" the car in front.

Edit: I took the vehicle into the dealership and they said "user error".... whatever, no consumer protection in Thailand anyway, so I just moved on.


The last company I worked for outsourced a large firmware project to Denso based on their relationship with Toyota.

Turns out they spent 18 months writing unmaintainable code that barely worked, and the codebase had to be totally scrapped with product version 2.0. The code was chock full of cut and paste, globals, and hacked mutexes; they did not even use the mutexes built into the RTOS.

Perhaps Denso stuck their B-team on the project, who knows.


> When the news first broke a few years ago, given Toyota's reputation for quality and process, I thought this was an American industry-led witch-hunt of a Japanese competitor. But if this testimony is correct, what Toyota engineers have done is unforgivable.

These two possibilities aren't mutually exclusive.

Software engineering practices aside, was it ever demonstrated that unintended acceleration could actually arise from the behavior of this software?


Yes, another part of Barr's testimony was a series of proof-of-concept demonstrations on the ECU to demonstrate the ability to kill tasks, leaving the hardware those tasks were controlling in unintended states.


>was it ever demonstrated that unintended acceleration could actually arise from the behavior of this software?

That's entirely the wrong question to ask. The correct question is "has the system design been proven to make it impossible for unintended acceleration to occur?"


> was it ever demonstrated that unintended acceleration could actually arise from the behavior of this software?

No, but the software engineering practices were so bad that it was a moot point.


Toyota has a reputation for mechanical engineering quality and process which it probably still deserves.

Software engineering is apparently the opposite.


> When the news first broke a few years ago, given Toyota's reputation for quality and process, I thought this was an American industry-led witch-hunt of a Japanese competitor.

You. You were one of those many people, and I had no clue where you were all coming from. When I first saw the reports they seemed legitimate and Toyota seemed to be hiding something, but so many people just didn't believe it and the news just sort of went under the radar.


There's no need to be emotional. Some people jumped to biased conclusions. Others based it on NASA's software review (which we have now found to be flawed). See the report: http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUB...


In C, there may be static variables with infinite lifetime which are locally scoped (module scope or function scope).

This is done to reduce stack usage; it does not mean that these 10000 variables can be accessed from everywhere else.
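To make the distinction concrete, here is a minimal C sketch (the variable names are made up, not from Toyota's code). All three variables have static storage duration, i.e. they live for the whole program, but only the first is a global in the sense that any other module can touch it:

    #include <stdint.h>

    uint16_t g_throttle_angle;        /* true global: any file can declare it extern and write it */

    static uint16_t s_filtered_angle; /* file scope: static lifetime, visible only in this .c file */

    uint16_t smooth(uint16_t raw)
    {
        static uint16_t s_prev = 0U;  /* function scope: persists across calls (no stack usage),
                                         yet is invisible outside smooth()                       */
        uint16_t out = (uint16_t)((raw + s_prev) / 2U);
        s_prev = raw;
        s_filtered_angle = out;
        return out;
    }

Whether the 10,000 figure counts only the first kind, or also the latter two, is exactly the question.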

But I don't want to ruin the "hurr-durr, stupid C programmers" party that the rockstar full-stack webdevs here like to celebrate. Obviously, they know more about embedded software development than people who have worked in this field for several years.


The criticism of global variable count comes straight from Philip Koopman, "a Carnegie Mellon University professor in computer engineering, a safety critical embedded systems specialist, authored a textbook, Better Embedded System Software".

So this isn't a web developer providing criticism, but someone with extensive experience in embedded software development. Perhaps reading the article linked before jumping to conclusions might be useful!


Locally-scoped (i.e. static) variables are, by definition, not global variables. The article is pretty clear that the 10,000 figure refers to truly global variables, and I would hope that the original expert witness has not misled us by referring to static variables as 'global'. (Of course, without being able to see the source, there's no way to make our own judgement.)

Sadly, you would be surprised at the number of embedded systems that store and pass around their state using global variables, despite the obvious stupidity of that approach.

(For the record, I am not a web dev and I did indeed spend several years working in the field of embedded development.)


There are a bunch of neat comments from past threads about this if you search HN for "Michael Barr":

"On a cyclomatic-complexity scale, a rating of 10 is considered workable code, with 15 being the upper limit for some exceptional cases. Toyota’s code had dozens upon dozens of functions that rated higher than 50. Tellingly, the throttle-angle sensor function scored more than 100, making it completely and utterly untestable." https://news.ycombinator.com/item?id=7711771

"For example, http://www.edn.com/design/automotive/4423428/Toyota-s-killer... quotes Barr's claims: 'Toyota’s electronic throttle control system (ETCS) source code is of unreasonable quality.' 'Toyota’s source code is defective and contains bugs, including bugs that can cause unintended acceleration (UA).'" https://news.ycombinator.com/item?id=8906513 (and the linked article has a link to slides which are enlightening)

A number of comments at https://news.ycombinator.com/item?id=6636811, "Toyota's firmware: Bad design and its consequences"
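For anyone unfamiliar with the cyclomatic-complexity figures quoted above: the metric roughly counts the number of independent paths through a function, starting at 1 and adding 1 for every decision point (if, case, loop, &&, ||). A made-up C illustration (not Toyota's code):

    int classify(int rpm, int temp, int fault)
    {
        if (fault)                      /* +1 -> 2 */
            return -1;
        if (rpm > 6000 || temp > 110)   /* +1 for the if, +1 for || -> 4 */
            return 2;
        if (rpm > 3000)                 /* +1 -> 5 */
            return 1;
        return 0;                       /* cyclomatic complexity of classify() = 5 */
    }

A function scoring over 100, like the throttle-angle function mentioned, has on the order of a hundred such branch points, which is why it gets called untestable.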


So, take this into account, and add in self-driving cars. The complexity involved in these, and the coordination required, is nontrivial, and it's clear from the article they do NOT have a good handle on things.

I mean, most cars do seem to get around OK, so I'm partly amazed that under the hood, things could really be so scary. But we can't be lazy about adding complexity and systems, especially as more faith is placed on them.

That this is not NASA grade code (or anything close) does not give me any warm fuzzies. It reeks of unprofessionalism, greed, and laziness.

There needs to be a sane-software assessment. People rely on products with an ever increasing amount of source code - it would be interesting if a third party could come along and certify that a given codebase is not a big scary unmaintainable mess; not to say there won't be bugs or issues, but that these guys are at least trying to make a well designed system, and aren't doing a bunch of crazy stupid things.


The companies currently most actively involved in automated automobiles (hah) are pretty good at what they do. Volvo has been producing big trucks for several years now that have remarkable safety features, like automatically detecting stopped traffic ahead and bringing the fully loaded truck to a complete stop in what seems like an impossibly short distance (https://www.youtube.com/watch?v=HoCknasKdRU).

There are plenty of things I dislike about Google but I can't say they're bad at software. If anybody can build a safe automated vehicle, they can.

Tesla, too, who have a lot riding on their hard-earned reputation for building next-generation vehicles.

I'm an old-school car guy, I haven't been a big fan of the complexity added to cars ever since the really bad emissions control systems in 80s cars, but even I have to admit that all that complexity has probably saved a lot more lives than it has cost -- antilock brakes, airbags, better pollution control, better fuel economy, way better structural safety features.


If you've ever driven a vehicle with them you'd know that air brakes are very, very powerful.


There are standards, Toyota simply chose to ignore all of them. Standards such as MISRA [1] and DO-178C [2] exist for the purpose of ensuring software quality in safety-critical situations. Most embedded software development environments even include tooling to help verify that you're not doing the things that are unsafe. The problem lies more in that automakers aren't required to use any such standard, unlike what the FAA has required for decades.

[1] http://en.wikipedia.org/wiki/Motor_Industry_Software_Reliabi... [2] http://en.wikipedia.org/wiki/DO-178C


Toyota couldn't ignore these (voluntary) standards, given that neither MISRA-C nor the earlier DO-178 existed at the time the code was developed.


MISRA-C dates from 1998, and DO-178B, which was the previous revision of DO-178, dates from 1992.


Yeah, I mistyped (should read "...later DO-178..."). You're quite correct that DO-178B was around then, but the C revision was not.

According to testimony, Toyota's coding standard was in place in 1997, before the first MISRA-C publication.


First, you're ignoring the words "such as".

Second, DO-178 was originally published in 1992.


Very awesome comment! I didn't know there were standards, but it's great to see there are. However, Toyota should have followed them. I mean, did they think they knew better? I think with self-driving cars the standards will become requirements, though I doubt that will happen before a series of deaths results from software failure.


Given that a lot of computer vision work is based on randomised algorithms, do you think that these standards would be enough? You could demonstrate 100% MC/DC coverage through a neural net implementation, but the weights are where the faults probably exist, for example.


That actually leads to a very interesting point; would self driving cars be vulnerable to adversarial imagery?

Neural nets are well known for being easily fooled [1] ... I wonder if you could create similar situations for self-driving cars.

[1] http://www.evolvingai.org/fooling


[deleted]


Shouldn't Toyota be closer to NASA than Google? I mean, we're talking about systems that people's lives depend on. I would be OK if my website was built by some IT department, if it meant that some level of NASA engineering is going into the control systems I rely on in my day to day life. That this is not the case is a little surprising to me, to be honest.


Self-driving cars seem less scary to me. You can take unreliable components (like, say, a car with crappy firmware) and make a reliable system out of it if the software running on top is fault-tolerant. The "autopilot", unlike a human, will pull over the moment the car's brakes begin to fail, etc.


This issue returns to Hacker News and to public consciousness every few months, and each time there is something missing: namely, some kind of direct proof that an actual failure of this software resulted in unintended acceleration, anywhere, ever. Let alone a case of runaway acceleration which the driver is unable to stop with the brakes (requiring a simultaneous brake failure). Since the accelerator pedals in these Toyota vehicles have been pressed no fewer than tens of trillions of times in recent years, that's not an appalling safety record at all. In fact, stuck pedals (whether due to bad lubricants or rolled-up floor mats) are many orders of magnitude more likely to cause stuck throttle than software failures, while runaway vehicles (ones with brakes not working either) are generally caused by "pedal misapplication", i.e. stomping on the gas instead of the brakes.

The fact that software has an execution pathway leading to something bad does not mean that this pathway can ever be entered, since in a closed realtime system like this, it is not possible to receive every combination of inputs, unlike in a system loading user data from a file. This is not to say that Toyota shouldn't clean up and verify its code, but the moral panics over what this code says about programmers, the human condition, Japan, etc. etc. are unwarranted.

Quote from http://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle_...:

On February 8, 2011, NASA and the NHTSA announced the findings of a ten-month study concerning the causes of the Toyota malfunctions of 2009. According to their findings, there were no electronic faults in the cars that could have caused the sudden-acceleration problems.


Every time this article comes up on Hacker News there are a few comments that show up talking about how no one was able to prove that the bugs definitely caused the crashes, and thus the criticisms of Toyota are overblown. I'm left wondering how on earth to reconcile those viewpoints with general software engineering (which is a huge part of what this community is about), where we regularly see articles about new concurrency frameworks, fault-tolerant programming, formal methods to prove correctness of code, etc. If it's important enough for Amazon to use TLA+ to prove correctness of DynamoDB, then shouldn't it be considered outrageous that Toyota doesn't do the same thing for something that is potentially an accelerating death machine?


The HN community at large isn't experienced with software engineering. It seems like there are a handful of commenters who are, but they are a very small minority.

Personally, I'm surprised that the majority of the HN community overlooks or ignores the reports from NASA and Exponent (Toyota's outside expert witness) in favor of the testimony from Barr, the witness for the plaintiffs in the Oklahoma case. (I'm honestly amazed that Barr was allowed as an expert witness.) Barr did show that the code "smelled" and that it was possible to inject specific faults to induce an uncommanded acceleration, but he did not show that his proposed failure mode occurred nor that it was probable to occur. In fact, part of his failure mode was that it left no record of a DTC. It's the perfect kind of failure mode for a sympathetic plaintiff to wield against an arrogant defendant with deep pockets: possible that it occurred, possible to demonstrate, understandable for a jury (who hasn't had a BSOD?), and impossible to disprove.

You bring up the practice of software engineering, but I think you need to place it in context of how software engineering was practiced around the time that the electronic throttle system was developed. MISRA, which HN readers are becoming aware of, didn't exist. Proof assistants, which today remain challenging to use and integrate into an SDLC, were even more arcane. In fact, your example of TLA+ wouldn't fly even today, since TLA+ doesn't do code generation, and that would be a necessary component of the verification process of a regulated SDLC. I don't think TLA+ existed at the time either. Things certainly have changed!


> You bring up the practice of software engineering, but I think you need to place it in context of how software engineering was practiced around the time that the electronic throttle system was developed. MISRA, which HN readers are becoming aware of, didn't exist.

DO-178B was created in 1989, Ada was created in the 70's, standards for mission and safety critical systems have been around since the 70's, I was reading about Ada and safety critical programming as a kid in the 80's (I was a weird kid).

http://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_s... has a very good overview.

Whether you accept that multiple independent experts in safety-critical programming are right or not is up to you; however, the stuff that came out of the trial painted a very clear picture to me of just how bad this software is.

A cynical person might consider that if Toyota is this poor how many other manufacturers are also as bad.

EDIT: I'm not saying that the UA was caused by this; without the source code and some proof I can't say that. However, the general setting is pretty terrifying.


> DO-178B was created in 1989, Ada was created in the 70's, standards for mission and safety critical systems have been around since the 70's, I was reading about Ada and safety critical programming as a kid in the 80's (I was a weird kid).

I'll hazard a guess that the HN readership consists of many formerly weird kids. :)

DO-178B was around then, but I'm not aware of any automotive groups using it then or now—I'm not in the automotive sector, so I could easily be mistaken. I would argue that IEC 61508 would have been the best model to follow in lieu of something more specific (especially since IEC 26262 is an adaptation of it), but I don't think even 61508 existed at the time outside of a draft. Ada was around, but it was a different beast than it is now and the tooling wasn't very good at the time (my opinion). I can't think of many that were using Ada outside of the government mandate.

For general consumer products (i.e., outside of aviation and military), I'm having trouble thinking of industry standards for mission- and safety-critical electronic systems that existed at the time that Toyota's electronic throttle system was developed. Maybe SAE had something at the time? A cursory Google search didn't pull up anything.

> Whether you accept that multiple independent experts in safety-critical programming are right or not is up to you; however, the stuff that came out of the trial painted a very clear picture to me of just how bad this software is.

To clarify, I'm not saying that the software isn't poorly written by modern standards, but I am questioning that it was uniquely poor relative to the rest of the industry at the time it was developed. That said, Toyota was/is arrogant, and ignoring their own processes and requirements at the time is unquestionably the wrong thing to do. I remain skeptical that the software component was responsible for the unintended acceleration issues.

> A cynical person might consider that if Toyota is this poor how many other manufacturers are also as bad.

Isn't the null hypothesis that they were all equally bad?


I think the relevant standard for comparison, from a safety point of view, is with mechanical throttle linkages that were used before throttle-by-wire. And it's pretty obvious that the mechanical linkages are less reliable: they involve a long cable snaking around the engine bay. A sticky throttle used to be a common complaint. With these systems, it's not. And of course, it's the multi-channel redundant brakes that are the critical backup safety system here.

BTW, I read through Barr's attempted reengineering of Toyota's ECU (his slides are linked in the comments). It's just making me angry. After going on and on about how bad Toyota's code is (spaghetti, blah blah blah), he starts presenting his failure modes: suppose that a random hardware memory error flips some bits in the CPU's task table. Then the task monitoring the pedal angle is going to die. Now to get actual unintended acceleration as described (when the driver is pressing the brake), you also have to suppose that the throttle position variable is corrupted at the same time. Other than the general unlikelihood of this, consider: suppose that Toyota's code did not contain any "spaghetti" or global variables. Suppose in fact that it was beautiful enough to make angels weep tears of joy. Would that make the slightest fucking difference, pardon my Japanese, when you start flipping bits in the task table? Of course fucking not.

His complaint amounts to amateur backseat engineering: you protected variables A and B from corruption by having multiple copies, so why not C? Your watchdog will restart tasks X and Y when they die, so why not Z? And so on. Which is an OK suggestion for the future, but how are they liable for not making an already extremely safe system slightly safer, when it's much safer than previous systems and has a fantastically reliable backup?
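For context, the "multiple copies" technique is a common defence against exactly this kind of random memory corruption: keep a critical value together with its bitwise complement and refuse to trust it if the two disagree. A rough sketch of my own (hypothetical names, not Toyota's implementation):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint16_t value;
        uint16_t value_inv;              /* always kept equal to ~value */
    } mirrored_u16;

    static void mirrored_set(mirrored_u16 *m, uint16_t v)
    {
        m->value = v;
        m->value_inv = (uint16_t)~v;
    }

    static bool mirrored_get(const mirrored_u16 *m, uint16_t *out)
    {
        if ((uint16_t)~m->value_inv != m->value) {
            return false;                /* bit flip detected: caller falls back to a safe default */
        }
        *out = m->value;
        return true;
    }

Of course, none of this helps once you allow arbitrary bit flips in the OS task table itself, which is the point above.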


Among other things, what I found interesting was that Barr was giving expert testimony on engineering and "engineering certainty", when as far as I can tell he doesn't have a PE license (if he does, he's the first engineer I've ever heard of that doesn't conspicuously advertise it). I was under the impression that wasn't permitted in any state.

edit: Toyota did plenty wrong. Chiefly, I'd say that ignoring their own documented processes should be at the top. Having a system utilization of >70% would be another, as is using recursion. All are things that, at the time, were no-nos. I don't however think that one can fairly argue that it's practical or sensible to implement emerging standards during the multi-year car development process, which many seem to be arguing. I also remain skeptical that the unintended acceleration events are software related. It seems that for one to buy the bit-flip argument from the trial, one also has to assume that the drivers depleted the service brake vacuum, and that combination doesn't seem probable to me.


Sure, they should improve or clean up the code on general principles. However, there are 2 issues here:

1) A lot of these articles imply that some expert has demonstrated how a coding error will cause unintended acceleration. Then when you look at the actual source, it becomes clear that the "demonstration" involves changing the internal state of the controller in all sorts of arbitrary ways, and sometimes rewiring its sensors in an invalid way as well. In other words, this is not a recipe along the lines of "blip the throttle while changing from N to D, press the brake within 0.2 seconds, the throttle will now be wide open". The fact that no such recipe has been found, despite many millions of dollars spent on expert analysis, suggests that it doesn't exist in the wild. Implying that this code is killing people somewhere out there is misleading.

2) Yes, it has been overblown. We know that throttles can stick open, usually due to jammed or sticking linkages and pedals. That's not a huge problem, since brakes are powerful enough to stop cars in this state. I don't think it makes a huge amount of sense to rewrite software to stop software-induced unintended acceleration, which probably doesn't even happen, instead of writing software for lane departure/collision warning/assisted emergency braking and other safety systems, which we know can help drivers avoid accidents.

I guess a third issue here is the American litigation system, which can turn companies into villains without the slightest indication that their products cause any more issues than anybody else's - all it takes is a non-zero probability of failure (true for most products) and a media + legal frenzy.


> The fact that no such recipe has been found, despite many millions of dollars spent on expert analysis, suggests that it doesn't exist in the wild.

I don't agree with this. Analysis of the source code, especially with global variables reducing the value of analyzing a unit, is not going to imply that it will discover anything about the (presumably millions of) users who exercise the code in live situations daily. Just the inability to CLEAR the source code from blame is a failing in responsibility on behalf of the brand.

The american litigation system has its issues, but I would argue that class action lawsuits are not among them. It allows social change through clear legal decisions when people are otherwise disagreeing on matters much like this.


The reason generally given is that in at least standard cars the brakes can apply far more torque than the engine can. It's telling how naive people are: they think the problem is 'unintended acceleration'. A mechanical engineer thinks the problem described is 'unintended brake failure'.


Absolutely right. After reading any number of stories about these "runaway death machines", what becomes clear is that the only realistic way for a car to accelerate at full power against the wishes of the driver is for the driver to be flooring the gas while thinking he is on the brakes.

We should be adding smarter automatic systems to the cars to assist drivers, rather than e.g. throwing out drive-by-wire linkages and replacing them with mechanics, which are generally less reliable.

There are already cars which will slam on the brakes in any collision which is strong enough to deploy airbags - it stops the car from careening around too much. It should be possible for cars to detect an impending collision and slam the brakes 1/2 second early to prevent or reduce it.


And if the brakes won't stop the car, try the key switch! An engine that is not running will not cause acceleration. (Sure there are probably some oddball push-button ignitions that can't be turned off while in gear, but most vehicles can be.)


Turning off the ignition may deactivate your airbags*, which could make a bad situation worse. You would be much better off attempting to shift your car into Neutral first.

* This was what made the GM ignition switch problem so deadly. The engine shut off (taking power steering and vacuum-assist braking with it) and the airbags deactivated, making any subsequent crash less survivable.


You're right that shifting is better. That is a pretty terrible failure mode for airbags, however. How much would it cost to put a big reliable capacitor in the airbag circuit, to keep the power on for 30 seconds? A dollar?


Oh please. Your arguments are exactly the same as creationists' arguments against evolution. There exists no definitive proof for a chaotic system.

The whole point of software engineering is to manage and avoid complexity, not to obfuscate/complexify your software beyond any control and then weasel your way out of it by saying that your software can't be proven wrong. The fact that no proof can be given either way is a failure of the design process, not of the claimant.

"The fact that software has an execution pathway leading to something bad does not mean that this pathway can ever be entered" On the contrary, that's exactly what the term "execution pathway" implies.

"there were no electronic faults in the cars that could have caused the sudden-acceleration problems." No electronic faults? Sounds to me like they tested for wire insulation, not runaway code.


> There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

-Tony Hoare


> Un bon mot ne prouve rien. (A witty saying proves nothing.)

—Voltaire


In the 1990s an engineer for one of the U.S. automakers told me that the code in their cars was a black box. Nobody knew how it functioned; all they could do was watch the input and output, and add patches as needed to modify the output.

Maybe the engineer was talking about code for a specific component, but perhaps Toyota's software isn't unique.


Exactly my thoughts. I bet any company when put under the spotlight would show some really bad practices/anti-patterns.

Is that however, a problem with the industry? I can't help think of Fight Club's "recall" equation here...


"In 1998 that standard had a Rule No. 70 called -- I don't remember the exact language. But function should call themselves. And the rules basically are the same in but they changed the numbering system, so in the standard this rule, same rule is No. 16.2. So this violation of the MISRA C rule."

"What was NASA's view about this recursion? A So NASA's view, NASA was concerned about stack -- possible stack overflow. They had a couple of pages devoted to it, about five pages. I pulled some quotes here. Recursion could exhaust the stack space leading to memory corruption and run time failures that may be Yes. In what way? The stack can overflow due to this recursion in the Camry. And create memory corruption? And that would create memory corruption, that's difficult to test -- detect in testing."

Man, some elementary stuff here. They changed the spec's numbering and did not update their manuals, as well as forgetting that recursion is very costly in memory and could overflow the stack... Gee, these are things a C programmer learns from day one, even in CS50...
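To make the stack point concrete, here is a toy C example (mine, not Toyota's code): the recursive form uses stack proportional to its runtime input, which on a small ECU with a few kilobytes of stack is exactly how you get silent memory corruption, while the iterative form uses constant stack.

    #include <stdint.h>

    uint32_t checksum_recursive(const uint8_t *buf, uint32_t n)
    {
        if (n == 0U)
            return 0U;
        /* every level pushes another stack frame; a large enough n overflows the stack */
        return buf[n - 1U] + checksum_recursive(buf, n - 1U);
    }

    uint32_t checksum_iterative(const uint8_t *buf, uint32_t n)
    {
        uint32_t sum = 0U;
        for (uint32_t i = 0U; i < n; i++) {  /* bounded, constant stack usage */
            sum += buf[i];
        }
        return sum;
    }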


There is, to me, a more interesting question:

If not Toyota, then who? Which auto manufacturers do it right? And is there any public evidence to support that? If not, then I'm probably just as "safe" or "unsafe" in any other brand of car as in a Toyota.


This is mere speculation, but the software in Tesla cars is probably at least decent, since they dared reuse some of the technology on the SpaceX Dragon V2 capsule.

I think this is one field in desperate need of open sourcing. Anyone who has the capability and the desire should be able to audit the software. Modifications should be prevented by signing, unless the user signs that he assumes full legal responsibility for the effects of the modifications and has the vehicle recertified as roadworthy.


Open sourcing it is nice and all from an idealist standpoint, but given the complexity of the software (millions of LOC), the fact that all the software would be used in commercial companies, and simply the area itself (cars), do you really think the community would pick up on it? I doubt it. Plus, the researcher did audit it - 18-20 months' work, and that was just analyzing it, not actually improving or fixing it.

Plus there is no car manufacturer ever that would allow anyone to change the firmware on their own.


Here are the slides Barr used in his presentation: http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUB...


Please add (2013) to the title.

People may have forgotten this because it was in the news a few years back. But Toyotas randomly experienced "unintended acceleration" due to software bugs.

Many reported incidents were undoubtedly due to the problem that people in a panic can step on the wrong pedal and will misremember what they did. Others were due to a floor mat that could jam the pedal. But some were Toyota's fault.


Was that ever proven? Did Toyota ever come out with a software patch that had to be installed in all cars made by them?

I remember them replacing carpets that might get stuck, and this http://www.thetruthaboutcars.com/2010/03/the-best-of-ttac-th...


It was never proven because it never happened. The Wikipedia article has much more detail: http://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle_...

It is not merely coincidental that all unintended acceleration events occurred in automatic cars. Also, a widespread unintended acceleration problem has not been observed in Toyota cars outside the USA.

It's a shame that so many on HN are so swift to condemn Toyota.


> It is not merely coincidental that all unintended acceleration events occurred in automatic cars. Also, a widespread unintended acceleration problem has not been observed in Toyota cars outside the USA.

If my car went nuts on acceleration, the first thing I'd do would be to jump on the clutch pedal and apply the brakes (and possibly set the stick to neutral), but that's because most cars here are manual, thus that's what we learn to operate†. Pursuing that line: once at a stop, cut the ignition and disaster is averted. Start again and you're probably fine due to the reboot. Then drive to the nearest Toyota dealer to complain, but since "nothing" (i.e. no damage, no injury) really happened, you'll get a shrug and a reflash.

† In a country where people learn only on automatic cars, setting back to neutral is most probably not reflex (if mechanically possible at all)


I, too, am left wondering about the paradox of the brakes supposedly being powerful enough to hold back any amount of torque the engine and transmission can apply - unless, in the name of anti-skid braking or traction control, the system has the ability to override the driver and release or limit the braking torque.

Furthermore, those explanations of the floor-mat recall that I have checked all say that the mats were suspected of having interfered with the accelerator, not the brake. Toyota subsequently performed a second recall, to correct problems with the accelerator pedal. These both imply that unintended acceleration was at least a significant causative factor in the accidents - unless Toyota and the NHTSA were simply acting in order to be seen doing something.

With regard to proof, the article covers the issue in some detail: after a far-from-exhaustive series of investigations, so many ways have already been found for the software to get into unintended states that only a vanishingly small fraction of the possible causes has been examined. Asking for proof of an error is the wrong question: the burden of proof clearly rests with the manufacturer to show that its systems are safe - and Toyota cannot do that with this software.

That Toyota replaced floor-mats is certainly not evidence for the correctness of their software, and nor is the non-appearance of a software patch. The article quotes a concerned Toyota engineer whose statements indicate a culture of denial within the company.


Doh, all the Reddit links were from today; I didn't realize it was from 2013. Updated the title.


Things were so much simpler when the only problems that could cause unintended acceleration were mechanical in nature. A broken return spring, seized linkage, etc. My daily driver has a mechanical throttle (although it is electronically fuel-injected), and honestly I can't see much point in "drive-by-wire" for cars.

But, on the other hand, despite so many possible bugs, AFAIK no one has been able to demonstrate a single instance of unintended acceleration, even with extensive testing.


Drive-by-wire is good for a lot of efficiency technologies because it enables the elimination of the throttle plate, and therefore of many kinds of throttling losses (see MultiAir, ValveTronic, ValveMatic, etc.). It also enables simpler cruise control and traction/stability control systems. The addition of computers to cars is mostly a plus (IMO) but the hidden complexity is frightening to say the least.


A 150 foot long skid mark seems like pretty compelling evidence of unintended acceleration.


Have you never encountered such a bug? It isn't difficult to understand how hard a bug can be to track down when it only affects a small subset of your users and only under specific circumstances. It's the dreaded Heisenbug.


There are no mechanical throttles for electric cars.


Similar to the aerospace and medical industries, maybe there should be strict regulations on the development processes to be followed in the automobile industry as well. In medical, failing an audit can bar the producer from selling the product for a year. Maybe something similar could be enforced in the automobile industry. Of course it might increase the cost of cars, but at least they would be a lot safer.


These are safety-critical systems. It is still mind-numbing that the quality of the code is so bad. After this came out, I've assumed all modern fuel-injected Toyotas suffer from the same lack of safety. I don't think other automakers are any better.

I own a Toyota, but it has a mechanical throttle body. :)


So many questions remain...

How do I tell which Toyotas are affected? Do they all have these problems? What, if anything, did Toyota do to fix their software engineering processes?

I ask because my parents have a 2005 or 2006 Toyota Sienna, and I don't feel very comfortable with them driving it now.


If you're concerned about safety, look into the stats. With the number of accidents happening overall, I wouldn't be surprised if even a known flaw doesn't increase risk a whole lot. There are probably dominating factors, such as location (weather, etc.), distance driven at which times, or which model of vehicle.

Maybe I'm wrong and this is a significant risk, but the approach should be the same. Even if they found and fixed one problem, the stats would still tell you which cars are safest. It could very well be that other vendors are worse and just haven't been investigated.


Toyota cars are some of the safest on the road in the US according to the safety statistics -- even if (big if) some of them had problems with "unintended acceleration".

This tells us that even if (big if) the issue was real, it was actually never important. Also funny that it never was an issue outside of the US.


Which safety statistics? IIHS crash test reports or actual road statistics?


The latter.


Do you have a link? I'm genuinely interested.


"Defects in the car’s electronic throttle control system (ETCS) were directly responsible for the Camry’s sudden acceleration and resulting crash." [0]

I couldn't find an exact description of how the driver crashed. Was the driver using the on-board cruise control or the normal throttle? Never use the cruise control. I must ask when I get my car serviced whether it has an electronic or a mechanical throttle body, cf. @bliti.

[0] http://www.usatoday.com/story/money/cars/2013/10/25/toyota-s...


It was a symptom of neither cruise control nor normal throttle use. The way the issue manifested is actually pretty terrifying:

"Koua insists that his 1996 Toyota Camry sped up to between 70 and 90 mph despite heavy braking."

http://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle_...

Out of context, but "we're in trouble, there's no brake":

https://www.youtube.com/watch?v=03m7fmnhO0I


thx @catshirt, I note this about my Camry: "Toyota Australia announced that its accelerator pedals are made by a different supplier and that there is no need for a recall of Australian made vehicles".

Reading through the notes, I see that trying to turn the engine off while moving affects various electrical sub-systems. Me, I'd try putting the car in neutral (auto).

A couple of years ago the alternator blew in my car. The battery had enough charge to let me drive 30+ km home after a re-start by the local RAC. When I drove the car the three kilometres to the shop, the charge started to drop and the battery failure light (alternator failure) clicked on.

First the car started losing systems: seat-belts, overdrive, ABS... etc. This continued until I bunny-hopped the car into the garage, where everything stopped working. No doubt a sticky accelerator is scarier than the lurching I had in busy traffic, but it kept going till the end. Power steering was the last to go, then the engine itself.


Does Google (or Tesla or Uber) use Ada for their self-driving car projects? I am afraid the answer is likely C++.

I noticed that this article mentions MISRA-C but not Ada. I thought I had read that Toyota used SPARK Ada. Michael Barr's slides have more technical details about Toyota's C code:

http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUB...


Software that can modify any aspect of a vehicle's handling should be subjected to a higher level of testing and scrutiny than things like airbags and safety belts. A faulty cruise-control component is demonstrably more dangerous than a faulty safety belt.


Please edit the title to read "Toyota's", not "Toyotas".


10,000 god damn global variables. It would be very difficult to do any kind of unit testing with that many potential race conditions bouncing around. Wow.
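To illustrate the kind of hazard (a made-up minimal example, not anything from the actual firmware): a non-atomic read-modify-write on a shared global between an interrupt handler and the main loop can silently lose updates, and a unit test will almost never exercise that interleaving.

    /* Hypothetical example of a race on a shared global. */
    #include <stdint.h>

    static volatile uint16_t g_fault_count;  /* written by ISR and main loop */

    void fault_isr(void)          /* fires on a sensor fault */
    {
        g_fault_count++;          /* load, add, store -- not atomic */
    }

    void main_loop_step(void)
    {
        uint16_t snapshot = g_fault_count;   /* load */
        /* If the ISR fires here, its increment is wiped out below. */
        g_fault_count = 0u;                  /* store: the new fault is lost */
        if (snapshot > 0u) {
            /* ... handle faults ... */
        }
    }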


> Toyota had more than 10,000 global variables. “And in practice, five, ten, okay, fine. 10,000, no, we're done. It is not safe, and I don't need to see all 10,000 global variables to know that that is a problem,” Koopman testified.

I wonder why people want to believe 5 or 10 are okay.


Check engine light? Passenger without a seatbelt?

I mean, it's an embedded system; resources are scarce. There are more elegant solutions, yes. But a pattern of setting an error condition from a bunch of different sensors, checked from a bunch of locations, seems reasonable. At least, not insane.
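Something like the following is presumably what's meant (a minimal sketch with made-up names, assuming each flag bit has a single writer): sensor code sets a bit in one shared global, and other code only reads it. It only stays sane if the number of such globals is small and ownership of each bit is clear, which is the point about 5-10 versus 10,000.

    /* Hypothetical sketch of a shared fault-flag global. */
    #include <stdbool.h>
    #include <stdint.h>

    #define FAULT_SEATBELT  (1u << 0)
    #define FAULT_ENGINE    (1u << 1)

    static volatile uint8_t g_fault_flags;    /* the one shared global */

    void seatbelt_sensor_poll(bool occupied, bool buckled)
    {
        if (occupied && !buckled) {
            g_fault_flags |= FAULT_SEATBELT;  /* writer only sets its own bit */
        }
    }

    void dashboard_update(void)
    {
        if (g_fault_flags & FAULT_SEATBELT) { /* readers only test bits */
            /* light the seatbelt warning lamp */
        }
    }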


He's not sanctioning 5-10 globals, only saying he could deal with that number of them without freaking out. Which is certainly true.


For simple configuration that has global effects, global variables might be easier than other methods of keeping track of settings. It's a bit more common in embedded software than in other software, though.
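For example (a hypothetical sketch, names and values invented), build-time calibration data is often just a read-only global struct, which is harmless because nothing ever writes to it at run time:

    /* Hypothetical read-only configuration global. */
    #include <stdint.h>

    typedef struct {
        uint16_t throttle_max_raw;    /* ADC count at wide-open pedal */
        uint16_t throttle_idle_raw;   /* ADC count at released pedal */
        uint8_t  debounce_ms;         /* sensor debounce interval */
    } throttle_config_t;

    static const throttle_config_t g_throttle_cfg = {
        .throttle_max_raw  = 4000u,
        .throttle_idle_raw = 120u,
        .debounce_ms       = 5u,
    };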


Any ideas if other Japanese car makers have the same problem with their software?


Counting global variables is just silly. I think any number of global variables is perfectly fine.


You're using a throwaway account in what appears to be a troll. For the newbie programmers out there: the reason we don't use global variables willy-nilly is that it is very easy to lose track of all the places where one can be accessed. The result is that your code becomes very fragile.


Lots of downvotes, no rebuttals.


You might as well claim that you think that dogshit makes for a fine breakfast. It doesn't warrant a rebuttal.


That's a consequence of the use of global variables having been recognized as an anti-pattern for a very long time now, and people downvoting the obvious instead of writing a rebuttal.


Maybe you should cite something to back your claim made upthread.


hi



