How Airbus is debugging the A350 (businessweek.com)
122 points by hencq on Feb 17, 2014 | hide | past | favorite | 48 comments



I am impressed they have a distributed CAD/CAM system which lets them share the schematics of the plane's construction with all the partners. "Source Code Control" in the 3D CAD space was abysmal, got better in the 3D digital feature space as studios created systems for asset management, and it seems to be solidly implemented by Airbus here: (video link: http://videos.airbus.com/video/dc6bd25e7f3s.html)


> "Source Code Control" in the 3D CAD space was abysmal, got better in the 3D digital feature space as studios created systems for asset management, and it seems to be solidly implemented by Airbus here

I doubt that, given that technical drawing "versioning" predates CAD by decades, in the form of drawing release/review and configuration management (CM) groups that were highly organized by WW2. This process was later implemented in packages like ENOVIA and SolidWorks PLM, which make the review/signoff process paperless, but it is basically the same. This is completely different from the entertainment industry, which doesn't care about part compatibility or analyst reviews (stress, aerodynamics, weights, etc.), not to mention that the nature of CAD data (rife with engineering metadata on assembly hierarchies, dimensions with tolerances, and materials) is very different from the "looks good" graphics of entertainment/art.


My take on it has always been that 'drawings' (in the draftsperson sense) were essentially the archive format of designs. So while you could look through a list of change orders in a drawing, seeing what each change actually was, or more commonly not seeing it, has been the challenge. Boeing made a big deal about this in the '80s when, as a Sun customer, they had Sun help them put basic drawings online as living models/schematics.

I would agree it's less impressive if everyone is forced to use the same CAD package. The video did not state whether or not that was the case.


EADS/Airbus are using CATIA by Dassault Systemes/IBM, though they had some issues during development of the A380 (different software versions, change management, etc.):

  Initial production of the A380 was troubled by delays 
  attributed to the 530 km (330 mi) of wiring in each 
  aircraft. Airbus cited as underlying causes the 
  complexity of the cabin wiring (98,000 wires and 40,000 
  connectors), its concurrent design and production, the 
  high degree of customisation for each airline, and 
  failures of configuration management and change control.
  The German and Spanish Airbus facilities continued to use 
  CATIA version 4, while British and French sites migrated 
  to version 5. This caused overall configuration 
  management problems, at least in part because wiring 
  harnesses manufactured using aluminium rather than copper 
  conductors necessitated special design rules including 
  non-standard dimensions and bend radii; these were not 
  easily transferred between versions of the software
http://en.wikipedia.org/wiki/A380#Production_and_delivery_de...

Nevertheless, CATIA is top-notch and is used by many car, ship, aircraft, and spacecraft manufacturers.


Thanks for that. So the cost of doing business with EADS/Airbus is that you need to buy a CATIA license? In the past this was a challenge when working with small machine shops and the like, since they had their own design flow. So I'm curious if IBM/Dassault made allowances for that, or if there is a $50K "membership" fee which you have to pay to get into the EADS club :-)


It's not implemented by Airbus; the revision control is built into CATIA, the commercial software they use (made by Dassault Systemes).


I'm currently working with some 3D CAD software like Inventor and Creo Parametric. They still save versioned files (gear.prt.1, gear.prt.2).

Not sure what to think about that.



Airbus has been one of the success stories commonly told by the static analysis community:

http://www.astree.ens.fr/

(Here I mean https://en.wikipedia.org/wiki/Static_program_analysis , not https://en.wikipedia.org/wiki/Static_analysis )
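For flavor, here is the kind of property an abstract-interpretation tool in the Astrée family is built to prove: the absence of runtime errors such as overflow or out-of-bounds access. This is only an illustrative sketch; the function names, ranges, and values are invented, not from any real Airbus code.

```c
#include <assert.h>

/* Interval analysis can propagate the documented input range and
 * prove the multiplication never overflows a 16-bit short. */
short scale_sensor(short raw)         /* caller guarantees 0 <= raw <= 1000 */
{
    assert(raw >= 0 && raw <= 1000);  /* range an analyzer would track */
    return (short)(raw * 32);         /* at most 32000, fits in a short */
}

static const int lut[8] = {1, 2, 4, 8, 16, 32, 64, 128};

/* The mask makes the index provably in [0,7] for any input,
 * so an analyzer can rule out an out-of-bounds read. */
int lookup(unsigned i)
{
    return lut[i & 7u];
}
```

The point is that the code is written so the bounds are statically evident, which is exactly what makes the analysis tractable.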


I'm fairly ignorant of the details of static analysis, but why is it being done on programs written in C?

Shouldn't they use languages specially suited for this kind of analysis?

I remember learning that stateless programming (i.e. functional programming) makes this kind of analysis several orders of magnitude easier since it eliminates coupling and control-flow dependence. Yet I've never heard of critical software being written in Haskell or whatever.


When you're writing safety-critical code, what you want above all else is lack of surprises. Sure, C has pitfalls, what language doesn't? But we know what the pitfalls are. We have decades of experience in avoiding them. The toolchains are mature and very well tested. The source code maps fairly directly to the hardware. You don't have to put your trust in esoterica like trying to find a garbage collector that claims to be able to meet real-time constraints and then trying to understand the edge cases in the analysis on which that claim is based.

It's okay to have bleeding edge technology in the ancillary tools like the static analyzer. But for safety-critical work, you don't want bleeding edge technology in the language in which you're writing the actual code.


Also, a straightforward mapping from source code to machine code is important for auditing generated code.


C99 with some restrictions isn't actually that big a language; it's quite possible to put together a formal semantics for it, especially if you disallow heap allocation.

There's at least one fairly mature implementation of a certified compiler out there (CompCert) with only minor restrictions to the language.
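A minimal sketch of that restricted, heap-free style (no malloc, no recursion, statically bounded loops, in the spirit of MISRA-like rule sets). All names and values here are invented for illustration:

```c
#include <stdint.h>

#define N_CHANNELS 4u

/* All storage is statically allocated; nothing is freed or resized. */
static int32_t filtered[N_CHANNELS];

/* The loop bound is a compile-time constant, so worst-case execution
 * time is straightforward to bound, and there is no heap to analyze. */
void filter_step(const int32_t sample[N_CHANNELS])
{
    for (uint32_t i = 0u; i < N_CHANNELS; i++) {
        filtered[i] = (filtered[i] * 3 + sample[i]) / 4;  /* IIR smoothing */
    }
}
```

Disallowing the heap does not just simplify the formal semantics; it also removes whole classes of runtime failures (allocation failure, fragmentation, use-after-free) from consideration.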


I suspect the arrow of causality goes the other way: the control software was written in C first. Later, Airbus wanted to gain confidence in its correctness.

In other words, the static analysis works on C programs because there are more extant (and mission-critical) C programs than Haskell ones, and the authors of the static analysis software wanted their tool to be as useful as possible, so they chose to analyze C.


If I had to guess, it's because C lets them model their software closely to how the hardware is designed.


There's not much, if any, real-time software written in Haskell, on account of the runtime not being amenable to real-time constraints. And anyway, I suspect it's an industry where "let's rewrite it from scratch" is not something you hear very often.


"on account of the runtime not being amenable to real-time constraints"

What are you basing that on?

A stateless, side-effect-free language would be significantly more amenable to real-time constraints b/c you can guarantee run-times for your functions.


Yes, you could, but chances are that the provable upper bounds on memory usage or execution time are orders of magnitude above what you think they should be. Anything that produces a new value where you cannot prove that another value becomes unreachable (in which case the compiler could translate it to a destructive update) could trigger a garbage collection that writes half a GB of memory and takes 0.1 of a second (those numbers may or may not be realistic; if they are, it's pure luck).


Sure, a garbage collector would mess you up, but garbage collection isn't an intrinsic property of stateless languages.

EDIT: Seems I'm wrong http://www.haskell.org/haskellwiki/GHC/Memory_Management


It is not intrinsic, but hard to avoid. Alternatives include:

- just allocate, never collect (not infeasible with 64-bit memory spaces, if you have lots of swap and can reboot fairly frequently, but bad for cache locality)

- garbage collect at provably idle times. Question is: when are those?

- concurrent garbage collection, with a proof that it can keep up with allocations

Finally, you could try and design a language where one can (often) prove that bar can be mutated in place in expressions such as

    bar = foo(bar,baz)
(That's possible if you can prove there's only one reference to bar at the time of the call)

(Rust's memory model may help here)

I am not aware of any claims that it is possible to write meaningful systems based on this model that do not have to allocate new objects regularly. The problem is that, to guarantee the 'one reference' property, you have to make fresh copies of objects all the time, and that defeats the reason why you want that 'one reference' rule.
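The 'one reference implies destructive update' idea can be sketched in C with a toy reference-counted value: if the value is uniquely referenced, foo(bar, baz) may overwrite bar's storage in place; if it is shared, it must copy first. Everything here (the Vec type, vec_add) is invented for illustration:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    int refs;        /* how many references point at this value */
    int data[4];
} Vec;

/* Elementwise add: mutates bar in place only when it is uniquely owned. */
Vec *vec_add(Vec *bar, const Vec *baz)
{
    Vec *out = bar;
    if (bar->refs > 1) {              /* shared: must copy before writing */
        out = malloc(sizeof *out);
        memcpy(out, bar, sizeof *out);
        out->refs = 1;
        bar->refs--;                  /* the original keeps its other refs */
    }
    for (int i = 0; i < 4; i++)       /* unique: safe destructive update */
        out->data[i] += baz->data[i];
    return out;
}
```

This is exactly the copying cost the comment describes: every time uniqueness cannot be proven, you pay for a fresh allocation and copy.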


Thank you for the explanation. That's a lot to think about.


Um, no you can't. Garbage collection and laziness both completely destroy the ability to guarantee runtimes for functions.


Given how long we have been developing airplanes, even planes of almost the same size as the A350, the lack of a somewhat standardized development process astounds me. Did newly developed planes use to be less safe, with more problems worked out during actual use? Or did they just not have as many problems to begin with, due to less automation and sturdier but heavier materials?


Having worked in the flight test industry in a 'prior work life', nothing that this article describes sounds especially interesting or novel. I think what happened is that when the planes started getting technologically intense and at the same time the development team became highly distributed (geographically and contracted), there was a period of time where things 'got out of control' in that a) the design/simulation tools didn't have good capabilities for dealing with this level of distributed/revisioned work b) it was more important to 'get the job done' than making sure that everyone used the same exact toolchain and was working on the same version of the model etc. Eventually this caught up with them and they experienced some significant issues (like the wiring snafu the article mentioned, and I also recall another issue where fuselage parts wouldn't mate up), which finally made the industry pull back and get serious about fixing these design tool/practice issues.


The release cycles for aerospace companies are a lot longer than in software. The previous A330 that they are talking about was originally released back in the early 90's. I assume Airbus didn't think it needed a full development-system overhaul for incremental upgrades, but believed it did when it came to designing a brand-new airplane.


So if I'm following this correctly, Airbus's breakthrough design philosophy is to use distributed version control to facilitate iterative construction with a heavy emphasis on integration testing?


The megastructures documentary provides a pretty captivating look at construction:

http://www.youtube.com/user/megadocumentary1


Link is to the doc on the A380, not the A350.


Imagine the difficulty in debugging modern CPUs. Remember the floating-point problems Intel had? There are far too many possible edge cases to be confident that testing alone will reveal them. Consequently, both Intel and AMD use formal proof methodologies to verify the correctness of their processors. I know that AMD uses (or used to use) the work of Boyer and Moore for validation of their designs. Intel uses its own prover. [1]

[1] Fifteen Years of Formal Property Verification in Intel by L Fix, 2008 [http://www.cs.ucc.ie/~herbert/CS6320/EXS/LimorFix%20Intel%20...]
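The scale problem can be made concrete: a tiny operation can be checked exhaustively against a reference model, but a real 64-bit floating-point unit has on the order of 2^128 input pairs, which is why formal proof replaces testing. A toy sketch (the saturating-add "hardware" here is invented, not a real design):

```c
#include <stdint.h>

/* The "hardware" under test: 8-bit saturating add. */
static uint8_t sat_add(uint8_t a, uint8_t b)
{
    uint16_t s = (uint16_t)a + b;
    return s > 255u ? 255u : (uint8_t)s;
}

/* Exhaustive check against a reference model: only 65,536 cases at
 * 8 bits. The same approach is hopeless for 64-bit operands, hence
 * theorem provers and formal property verification. */
int verify_exhaustively(void)
{
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++) {
            unsigned ref = (a + b > 255) ? 255 : a + b;
            if (sat_add((uint8_t)a, (uint8_t)b) != ref)
                return 0;   /* counterexample found */
        }
    return 1;               /* all 65,536 cases pass */
}
```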


This is nothing new and isn't different from Boeing in anything mentioned in the article. Yes, the 787 had issues, but the same types of testing occurred. The 787 was fundamentally different from previous Boeing aircraft, with lots of primary components made by subcontractors. There was a lack of rigor and a belief that things would just work (too optimistic), from what I have heard from the outside.

Iron birds, flight tests, etc. are requirements from the certification authorities. In other words, this is a fluff piece acting as journalism, where the title and conclusions don't match the data.


To me, not being an expert, the article mentions a lot of rigor and thoroughness aimed at not running into the A380 problems again, so this

"This is nothing new and isn't different from Boeing in anything they mentioned in the article."

contradicts this

"Lack of rigor and believing things would just work (too optimistic) from what I have heard on the outside."


This is an article with no attempt to understand the state of the art outside of what Airbus wanted written, and is PR spin. Most of the techniques here were used in the 737NG program in the late 90s.


So your answer in this discussion is "PR spin. PR spin."


Just look at what the production ramp-up looks like (hint: Airbus has a public plan; there was no such thing for the 787).

The guys over at airliners.net are tracking the state of the frames being built. They are ~1100 hours into a 2400-hour test flight program, and they have only 4 planes flying (or almost there) and 2 in various states of assembly. Compare that to the 787, which had to fix so many incomplete frames after finding issues in test flights.


A good fiction book for those interested in aircraft engineering, testing, maintenance, root cause analysis after problems, etc. is "Airframe" by Michael Crichton.


Anyone know how they built their 3D graphic page?

http://images.businessweek.com/graphics/airbus-a350-3d-graph...

How did they go from the Trimble/Sketchup A350 model to showing the model in the browser in "3D"?


No idea, but it put this in the Javascript Console...

   Recommended listening: 
    http://youtu.be/AjzcdvF3gDc?t=3m48s 
    http://youtu.be/mGF_0AcHaGs 
    http://youtu.be/kn6-c223DUU 
    http://youtu.be/eF-4Cr9Iy_8

edit: on further investigation, it looks like they're using http://threejs.org loading a COLLADA-format file (that can even be QuickLooked on my Mac somehow) http://images.businessweek.com/graphics/airbus-a350-3d-graph...


I love easter eggs like this. I'm going to start checking the JS console on every website from now on.


OS X has had OpenCOLLADA built into QuickLook (as well as Preview) for a few years now.

That said, it can still be fussy with many scenes.


Here is the script of the 3D scene (not minified) http://images.businessweek.com/graphics/airbus-a350-3d-graph...


Hi, I made this. (The page, not the model.) Y'all basically figured it out. Three.js atop WebGL, and here's the Collada loader: https://github.com/mrdoob/three.js/wiki/Using-SketchUp-Model...

A small thing, but hugely gratifying that kalleboo found the recommended listening.


To answer my own question, looks like they are using three.js as described here to load the Collada/DAE file.

http://tech.vg.no/2013/07/08/webgl-dae-model-viewer/


Re: "Derisking"

Can someone explain to me whether/how Agile methodologies could be applied to an Airbus project? I'm asking because I can't always explain how to do Agile when people claim there are a lot of requirements, so an industrial project would be a good example to try it on.


Not realistic. Agile has many very good practices, but the ones that are missing are exactly the ones needed when you have zero bug tolerance. You mention derisking: creating a proof of concept (a short sprint that proves you can do something you're unsure about) is one way of eliminating risk. Agile, however, includes no practices for identifying, analysing and managing risk. Similarly, absolute reliability requires upfront design and documentation. See https://www.wittenburg.co.uk/Entry.aspx?id=99bb5987-e08d-4e8... .


I posted this story over the weekend and it didn't get traction. So my question is: what is the lag threshold before it counts as a new submission?


"The Best Time to Post on Hacker News": http://nathanael.hevenet.com/the-best-time-to-post-on-hacker...

The short answer is 9:00–10:00 AM EST on a weekday.


This story has an extra "#p1" at the end of the link, which defeats HN's duplicate detection. The submitter probably did it accidentally, by going to a different page of the story and then back to page 1.


Well I'm glad it did slip by the detection; it meant more people got to see the article.



