
Yup. That's exactly what experts said of American Airlines flight 191 which was basically the same engine mount, same failure. Engine flipping over the wing.


The failure of the pylon appears to be different. On AA 191, the pylon rear bulkhead cracked and came apart. In the case of UPS flight 2976, the pylon rear bulkhead looks to be in one piece, but the mounting lugs at the top of the rear bulkhead cracked.

Admiral Cloudberg has a great article on AA 191 that covers exactly what happened: https://admiralcloudberg.medium.com/rain-of-fire-falling-the...


American 191's engine mount failed because of improper maintenance. It remains to be seen whether this failure had the same cause or if it was something else, such as metal fatigue.


A failure due to metal fatigue would still be a failure to properly maintain the aircraft, right? I know by "improper maintenance," you're referring to actual improper things being done during maintenance, and not simply a lack of maintenance. But I'm reading things like "the next check would've occurred at X miles," and, well... it seems like the schedule for that might need to be adjusted, since this happened.


Yes, when I said "improper" I meant the American 191 maintenance crew took shortcuts. The manual basically said "When removing the engine, first remove the engine from the pylon, then remove the pylon from the wing. When reattaching, do those things in reverse order." But the crew (more likely their management) wanted to save time so they just removed the pylon while the engine was still attached to it. They used a forklift to reattach the engine/pylon assembly, and its lack of precision damaged the pylon structure.[0]

Fatigue cracking would be a maintenance issue too but that's more like passive negligence while the 191 situation was actively disregarding the manual to cut corners. The crew chief of the 191 maintenance incident died by suicide before he could testify.

[0] https://en.wikipedia.org/wiki/American_Airlines_Flight_191#E...


> The crew chief of the 191 maintenance incident died by suicide before he could testify.

To be clear, a crew chief (Earl Russell Marshall) did. But he wasn't directly involved in maintenance of the specific DC-10 that crashed. Or at least, I haven't found a source saying he was, and some sources say he wasn't. https://www.upi.com/Archives/1981/03/26/The-wife-of-an-airli...


If the (FAA-approved) maintenance schedule says "the next check should occur at X miles" and X miles hasn't happened yet, then it's not going to be classified as improper maintenance -- it's going to be classified as an incomplete/faulty manual.

Now, of course, if that maintenance schedule was not FAA-approved or if the check was not performed at X miles, that's going to be classified as improper maintenance.


A more likely metric for this particular inspection would be hours or cycles, in other words starts and landings, not 'miles'.


According to various comments the plane was nowhere near the cycle count for a special detailed inspection of the aft pylon mount lugs: the SDI is due at 29200 cycles and the plane had 21043.

There was a lubrication task in October, but according to tech comments that would just involve greasing the zero fittings, not taking anything apart.


Those pictures of that torn-up part are pretty telling: that's a clean break, with no stretching as far as I can see; it just tore the material in half, and you can see the grain. There does not seem to be any torsion either, so most likely that was the first part to go. If the problem had been in the engine I would expect this part to be mangled, not pulled apart. What stress damage there is occurred shortly after that first break. A valid question is whether that crack was already there before take-off.

I'm very curious what the metallurgical analysis of the mirror part on the other wing will come up with, especially whether there are any signs of stress fractures in there. If there are, that will have substantial consequences for the rest of the still-flying MD-11s; about 50 or so are still in service.


The preliminary report mentions fatigue cracks on both sides of the aft lug, and one side of the forward lug, with the other showing no trace of fatigue, only overstress.

From this it seems like the aft lug was way fucked, and the forward lug was hanging on for dear life, until it could not.


Yes, that's exactly how I read it. The aft lug was the first to go, the forward lug shows signs of stress so it held on longer.

I don't think they're going to be skimpy on the metallurgy report, so I'm looking forward to the analysis of the mirror parts on the other wing. Those will tell without a doubt whether it was maintenance-related or age-related fatigue. Right now I would bet on the latter, but the former could also still be a factor; for instance, that bearing might not have had enough lubricant.


Zerk fittings, not zero (stupid autocorrect), the grease fittings.


Grease nipples. They had a perfectly fine name already, which autocarrot was perfectly happy to spell.


It depends. This aircraft was made near the beginning of the MD-11 production and if the original analysis for the fatigue life of this location was wrong, then you would expect to see that appear in older aircraft first. If that ends up being the case then it's not an inspection or maintenance issue, it's an engineering failure. Given aerospace accident history I would say that is less likely than some maintenance issue but we won't know for sure for a bit.


Even if it was an inspection or maintenance issue (which, given the kind of failure and available data, looks increasingly doubtful, though it cannot yet be ruled out), this part failed in a catastrophic way when it should have had ample engineering reserve over and beyond the load to which it was subjected. It just snapped clean in half; those breaks are indicative of a material that has become brittle, rather than a part that deformed first and then broke due to excess stress.

In other words, a slow motion video of a camera aimed at that part during the accident would have shown one of the four connections giving way due to fatigue cracks and then the other three got overstressed and let go as well, in the process damaging the housing of the spherical bearing.

The part at the bottom of page 9 is the key bit. Now I very much want to see the state of the mirror part on the other wing, that will show beyond a doubt whether it was maintenance or an over-estimation of the design life of that part.

It would also be interesting to have a couple of these pulled from the fleet and tested to destruction to determine how much reserve they still have compared to the originally engineered reserve.


According to the preliminary report, 3 of the 4 showed fatigue cracks, and the 4th overstressed. So yes, agree a random sample of these parts should be pulled from the fleet and tested - but something pretty crazy was happening here re: fatigue.

That it was so far from the maintenance schedule to be inspected AND that the fatigue cracks seem to have formed in areas that would be hard to visually inspect anyway points to either an engineering problem (especially bad, since the DC-10 problem of a similar nature happened in roughly the same parts, albeit due to different abuse; you'd think the engineers would overdo it there, if nothing else), or some specific type of repeated abuse that particular pylon received, which points more toward a design problem.


Re-reading the 1979 report might be helpful here. This isn't my field, but it seems that the engine is attached "hard" to the pylon, then the pylon is attached via a bearing mount system to the wing frame. The bearings wear out, and hence have to be replaced (not sure how often, but they were doing it on the entire fleet prior to the 1979 crash). The 1979 investigators thought that the fatigue cracks were caused by removal of the entire pylon/engine assembly as one unit (because that put excess stress on the aft bearing, they suspected due to support being provided from below by a forklift). After the 1979 accident engines had to be removed first, then the pylon, supposedly removing that cause for mount cracking. Perhaps there was another cause.


Before the 1979 accident, engines also had to be removed first.

Airlines have to follow the approved maintenance manual procedures; that manual called for engine removal and installation from a pylon that was on the wing. American was improvising a maintenance procedure without the legal authority to do so, resulting in 191.


> you’d think the engineers would overdo it there, if nothing else

No kidding, especially given the lack of redundancy in the design.


There is nothing here to say it being a maintenance issue is doubtful. It could quite literally be a similar issue to Flight 191, we don't know yet.

It did have ample engineering reserve beyond the load it was subjected to... before fatigue damage initiated a crack, which then grew until there was no reserve left. The question is why the fatigue crack initiated prematurely. Maintenance damage? Analysis mistake? We don't know yet.


If you read the original 1979 report in full, I think you'll begin to realize that this "improper maintenance" thing was a cover-up. Actually quite similar to the 737MAX -- find someone or something to blame other than the design of the aircraft, then move on.


The picture of that part that is torn into two pieces certainly seems to suggest so, that's a clean break, not an overstressed part deforming and then breaking.


Ironically, AA flight 191 could have been salvageable, because the engine detaching didn't start a fire. However, it led to loss of hydraulic pressure on that wing, which led to the flaps/slats retracting on just the left wing, which led to the plane becoming uncontrollable. After that accident, the DC-10 was retrofitted with hydraulic fuses to prevent something like this happening again. Unfortunately, that didn't help the UPS crew, because in their case, the detachment caused more damage to the wing...


Unpopular opinion: the real source of the problem is not scrapers, but your unoptimized web software. Gitea and Fail2ban are resource hogs in your case, either unoptimized or poorly configured.

My tiny personal web servers can withstand thousands of requests per second, barely breaking a sweat. As a result, none of the bots or scrapers are causing any issue.

"The only thing that had immediate effect was sudo iptables -I INPUT -s 47.79.0.0/16 -j DROP" Well, by blocking an entire /16 range, it is this type of overzealous action that contributes to making the internet experience a bit more mediocre. This is the same thinking that lead me to, for example, not being able to browse homedepot.com from Europe. I am long-term traveling in Europe and like to frequent DIY websites with people posting links to homedepot, but no someone at HD decided that European IPs couldn't access their site, so I and millions of others are locked out. The /16 is an Alibaba AS, and you make the assumption that most of it is malicious, but in reality you don't know. Fix your software, don't blindly block.


A funny coincidence is that the solar system was formed 4.6 billion years ago, which is exactly when the universe's rate of expansion peaked according to figure 3.

If you want to believe in an intelligent creator—not that I do—it's as if they were accelerating the expansion until the solar system was formed, then turned the control knob down.


Turns out the universe is one giant PID controller.


Interesting observation.


Another thing my dad demonstrated to me a few weeks ago: you can grab a nettle by the base, move your hand upward, and as the nettle is sliding through your closed hand, it won't sting at all. This is because the sting cells are oriented perpendicular to the surface of the plant (or pointed slightly upward), so their pointy ends don't come in contact with the skin at an angle where they would penetrate it.


Queen Anne’s lace is sort of the same way. When I grab it to pull it, I do it fingertips first, then roll my fingers and palms down onto the stalk which flattens the hairs due to the angle.

Works on a few types of thistle with small thorns but a stick works better.


What an extremely confusing blog post! I don't understand why it seemingly presents profiles as a new feature? Firefox has had profiles for years. What the heck is new?

I had to click the other link https://support.mozilla.org/en-US/kb/profile-management to understand that this is all about simple UI improvements to make it easier to work with existing profiles.

The blog post is more than confusing, it is misleading. HN should link to the support page instead of the blog post.


100% agree. It feels like Firefox has had an identical announcement of this as a new feature every second major release for the last ~2 years.

They are seriously dropping the ball here in terms of communication and it just makes Firefox seem stale.


> Firefox has had profiles for years. What the heck is new?

Even further back — Netscape had profiles! https://web.archive.org/web/20000816175642/http://help.netsc...


It looks like AI slop to me. "Profiles in Firefox aren’t just a way to clean up your tabs. They’re a way to set boundaries, protect your information and make the internet a little calmer." - classic meaningless comparison.


If it's buried in about:config, it's not a feature, it's just a dev tool. (FWIW, I read that config settings may reset during updates, so at best it's just a temporary patch.)

It's like saying you have a responsive website, but only if I edit the layout in the DOM.


Nit: It's not a parameter in about:config, it's been available for a while under about:profiles, no tweak needed


And now they're rolling out a new UI for it: https://support.mozilla.org/en-US/kb/profile-management


I've been using profiles for 15 years and I didn't even know it was in about:config.

Just add -p in your shortcut to start on the profile manager.


No, you use it in the ProfileManager, which is a GUI that pops up on start.


For some reason neither my Dark Mode add-on nor the built in reader mode (which also makes pages dark as I prefer) work on that page. Very annoying; will skip reading that to preserve my eyes.


Oh woah, very insightful discussion thread you found there.

So the tl;dr is: the leading, very preliminary theory is that the MD-11's left engine fell off the wing just like https://en.wikipedia.org/wiki/American_Airlines_Flight_191 (a DC-10, the immediate predecessor of the MD-11), which was caused by maintenance errors weakening the pylon structure holding the engine.


The parallels with AA Flight 191 are striking. In THAT accident it was found [1]:

1) Improper maintenance—American Airlines had used a forklift shortcut to remove the engine and pylon together, rather than following McDonnell Douglas’s prescribed method

2) The detachment tore away part of the wing’s leading edge, rupturing hydraulic lines and severing electrical power to key systems, including the slat-position indicator and stall warning (stick shaker).

3) The pilots followed the standard engine-out procedure and reduced airspeed to V₂, which caused the aircraft to stall and roll uncontrollably to the left. This procedure was later found to be incorrect.

Defective maintenance practices, inadequate oversight, vulnerabilities in DC-10 design, and unsafe training procedures combined to cause the crash, killing all 273 people on board and leading to sweeping reforms in airline maintenance and certification standards.

[1] https://www.youtube.com/watch?v=F6iU7Mmf330


And just to add for those that aren't pilots: when they say "reduced airspeed to V2" that doesn't mean reducing engine power, it means pointing the nose higher while thrust remains at the maximum permissible setting. You're losing speed but climbing faster.

This can happen if you accelerated past V2 (V2+20 is normal) before the engine failure and then after the failure you slow down to V2 to get the best climb angle on a single engine plus some safety margins above stall etc.


(asked earnestly out of lack of familiarity with this field) Are maintenance/certification standards distinct between passenger and cargo carriers?

It's hard for me to tell if this suggests a step backwards in application of the reforms instigated after AA191 or that those reforms were never copied over to cargo aviation.


Yes, but mostly related to purpose-specific things: passenger carriers have additional safety checks for cabin items like seats, oxygen and evacuation systems. Cargo carriers have additional safety checks for things like cargo restraint and decompression systems.

Furthermore (and I don't know if this is related to the cause of this crash), cargo jets tend to be older/refurbished passenger planes that have outlived their useful lives flying passengers.


> cargo jets tend to be older/refurbished passenger planes that have outlived their useful lives flying passengers.

Exactly what happened in this case; the airplane was built in 1991 to carry passengers, and then converted in 2006 for freight.

https://www.planespotters.net/airframe/mcdonnell-douglas-md-...


worth noting about AA191:

  With a total of 273 fatalities, the disaster is the deadliest aviation accident to have occurred in the United States.


To expand on #2, the loss of hydraulic pressure also caused the uncommanded retraction of the leading edge slats on the left wing, which was found by the NTSB to be part of the probable cause. Full report is here (PDF): https://www.ntsb.gov/investigations/AccidentReports/Reports/...

(I do not mean to imply that this exact slat retraction is necessarily relevant in the Louisville crash, however - I believe aircraft since AA191 are designed to maintain their wing configuration after loss of hydraulic pressure.)


The MD-11 has a physical slat lock that keeps the slats from retracting after a loss of hydraulic pressure, yeah


This video from an aviation youtuber contains a picture of the engine: https://www.youtube.com/watch?v=U4q2ORhIQQc&t=526s (the video itself is also worth watching in full IMHO).

What strikes me as odd is that this looks like the "naked" engine, without the cowling/nacelle that usually surrounds it? Anyway, if an engine departs the aircraft shortly after (last-minute) maintenance was performed on it, that's indeed suspicious...


The cowling was probably easily torn off when the engine went full speed like a missile for a few seconds after detaching.


The fan cowl and thrust reverser cowl are structurally fastened to the pylon/strut at the top; they only wrap around the engine and are fastened to themselves at the bottom using latches. The strut is considered part of the airframe structure. The inlet cowl is bolted directly to the engine; I saw in a picture that it was found approximately mid-field on the airport property.


The cowling isn't particularly structural so if your engine falls off on takeoff it's not so surprising that the cover didn't land with it.


From all the annals of aviation disaster, flight 191 is possibly the one that haunts my nightmares the worst. Perhaps because the scenario feels plausible for every single take-off. Perhaps just because of the famous photo.


They should match the acronym and call it No Evil Systems Tolerated, or No Evil, Sane Tech firmware (N.E.S.T)


It doesn't take $100 to transfer. Fees are currently around $1. You are off by a factor of 100!

Also, Bitcoin can process far more than 7 tps through the Lightning network.

I wonder where your misconceptions come from?
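
For reference, the oft-quoted ~7 tps base-layer ceiling comes from simple arithmetic. A back-of-envelope sketch (assuming a 1 MB pre-segwit block and a ~250-byte average transaction; the real numbers vary):

  block_bytes = 1_000_000   # pre-segwit base block size
  avg_tx_bytes = 250        # rough historical average transaction size
  block_interval_s = 600    # ten-minute block target

  tps = block_bytes / avg_tx_bytes / block_interval_s
  print(f"{tps:.1f} tps")   # ~6.7 tps on the base layer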


>Fees are currently around $1.

Fees are CURRENTLY around $1 because no one is using the L1 network. There is no demand now because of all of the times when the transaction fees were $30.

>Also, Bitcoin can process far more than 7 tps through the Lighting network.

The lightning network is insecure during periods of high demand because you aren't able to safely close channels. Also, you still need to fund channels on L1 in the first place!


Agreed, "currently" is the key word there. I was referring to when a run starts. It's been at $100 before and we haven't even seen a serious run yet. $100/tx is the "every few years" rate.


I can't think of a scenario where this is useful. They claim "Full-throttle, wire-speed hardware implementation of Wireguard VPN" but then go on implementing this on a board with a puny set of four 1 Gbps ports... The standard software implementation of Wireguard (Linux kernel) can already saturate Gbps links (wirespeed, check) and can even approach 10 Gbps on a mid-range CPU: https://news.ycombinator.com/item?id=42172082

If they had produced a platform with four 10 Gbps ports, then it would become interesting. But the whole hardware and bitstream would have to be redeveloped almost from scratch.
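
If you want to sanity-check the single-core crypto ceiling yourself, here's a rough benchmark of WireGuard's AEAD (ChaCha20-Poly1305). It assumes the third-party cryptography package is installed; the fixed nonce is acceptable only because this is a throughput benchmark, never in real use:

  import os, time
  from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

  aead = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())
  nonce = os.urandom(12)      # reused here purely for benchmarking
  packet = os.urandom(1420)   # typical WireGuard payload size

  n = 100_000
  start = time.perf_counter()
  for _ in range(n):
      aead.encrypt(nonce, packet, None)
  elapsed = time.perf_counter() - start
  print(f"{n * len(packet) * 8 / elapsed / 1e9:.1f} Gb/s on one core")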


It's an educational project. No need to put it on blast over that. CE/EE students can buy a board for a couple hundred bucks and play around with this to learn.

A hypothetical ASIC implementation would beat a CPU rather soundly on a per watt and per dollar basis, which is why we have hardware acceleration for other protocols on high end network adaptors.

Personally, if I could buy a Wireguard appliance that was decent for the cost, I'd be interested in that. I ran a FreeBSD server in my closet to do similar things back in the day and don't feel the need to futz around with that again.


I agree that if the goal is to be educational, it's an excellent and interesting project. But there is no need to make dishonest claims on their web page like "the software performance is far below the speed of wire".


There’s a strong air of grantware to it. The notion that it could be end-to-end auditable from the RTL up is interesting, though, and generally Wireguard performance will tank with a large routing table and small MTUs like you might suffer on a VPN endpoint server while this project seems to target line speed even at the absolute worst case routing x packets scenario.


what do you mean by grantware?


The project got a grant from NLnet. I think they do a great job, they gave grants to many nice projects (and also some projects that are going nowhere, but I guess that is all in the game). NLnet really deserves praise for what they are doing!! https://nlnet.nl/thema/NGI0CommonsFund.html


Academic projects which receive grant money to produce papers and slides. This still can advance the state of the art, to be clear, and I like the papers and slides coming out of this project. But I wouldn’t cross my fingers for a working solution anytime soon.


Amusingly, a lot of people have always been convinced that doing 10 Gbps over VPN is impossible. I recall a two-year-old post on /r/mikrotik where everyone was telling OP it was impossible, with citations and sources as to why, but then it worked

https://old.reddit.com/r/mikrotik/comments/112mo4v/is_there_...


Mikrotik's hardware often can't even do linespeed beyond basic switching, not to mention VPN, so yeah.


I meant the comments. Sadly I've linked the wrong permalink and confused everyone.

> > > I see. I'll terminate at the Ryzen 7950 box behind the router and see what I get.

> > That will still be a no. Outside of very specialized solutions this level of the performance is not available. It is rarely needed in real life anyways. Only small amount of traffic needs to be protected this way; for everything else point to point protection with ssh or tls is adequate. I studied different router devices and most (ipsec is dominant) have low encryption throughput compared to routing capabilities. I guess that matches market requirements.

> It looks like I can get 8 Gbps with low CPU utilization using one of my x86 machines as terminal. This is pretty good. Don't need 10 G precisely. 8G is enough.

I've done precisely this so easily. I just terminate the WG at a gateway node and switch in Linux. It's trivial and throughput can easily max the 10G. I had a 40G network behind that on obsolete hardware providing storage and lots of machines reading from that.

Reading that thread was eye-opening, since they should have just told him to terminate on the first machine behind the router. Which he eventually did, and it predictably worked.


You are right. It's amusing how often this pattern emerges: an unoptimized tech stack gives mild performance results. This is "good enough" for most people. Over the years everyone comes to assume that's just the way it is and always will be, because the tech is inherently "complex". Then a competitor shows up, their performance blows everyone out of the water, and everyone realizes the tech could have been optimized all along if anyone had just tried.


Yeah, this is especially true with multi-gigabit networking. It's actually really depressing how hard it is to find performant solutions, be it for file sharing or just HTTP.


They're discussing mikrotik hardware specifically? Enterprise stuff or a powerful server can easily do it.


It's going to depend heavily on the hardware in use.


Why would you even need dedicated hardware for just 40 Gb/s? That is within single-core decryption performance which should be the bottleneck for any halfway decent transport protocol. Are we talking 40 Gb/s at minimum packet size so you need to handle ~120 M packets/s?


Because the entire stack is auditable here. There's no Cisco backdoor, no Intel ME, no hidden malware from a zombie NPM package. It's all your hardware.


Except FPGA chips/boards aren't free from malware either: https://www.iacr.org/archive/ches2012/74280019/74280019.pdf

Nor will you be immune from AMD Vitis/Vivado sideloading crap into the bitstream.

Sadly, you have to fab your own chips using sovereign facilities if you want security. Individuals simply cannot access genuinely high assurance product and there's no major government in the world with the slightest interest in changing their stance on this policy. There are simply too many governments long on SIGINT to go down such a route.


I can see this as a hardened VPN in a mission-critical deployment, which could not be as easily compromised as a software stack.


If a PC can do 10Gbps, are there any cycles left for other stuff?


bps are easy; packets per second is the crunch. Say you've got 64 bytes per packet, which would be a worst-case scenario: at 10 Gbps you're down to roughly 15 Mpackets/sec. Sending one byte after another is the easy bit; the decisions are made per-packet.
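
Rough numbers (assuming minimum-size 64-byte frames plus the 20 bytes of Ethernet preamble and inter-frame gap on the wire):

  link_bps = 10e9                  # a 10 Gbps port
  wire_bytes = 64 + 20             # minimum frame + preamble/IFG overhead
  pps = link_bps / (wire_bytes * 8)
  print(f"{pps / 1e6:.2f} Mpps")   # ~14.88 Mpps at 10 Gbps line rate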


IMO it would be cool if they added Wireguard to Corundum but it would be expensive enough that they wouldn't get any hobbyist cred.


My dude: As far as I know, it's the first implementation of Wireguard in an FPGA.

It does not have to be all things for all people today. It can be improved. (And it appears to be open-source under a BSD license; anyone can begin making improvements immediately if they wish.)

Concepts like "This proof-of-concept wasn't explored with multiple 10Gbps ports! It is therefore imperfect and thus uninteresting!" are... dismaying, to say the least.

It would be an interesting effort if it only worked with two 10Mbps ports, just because of the new way in which it accomplishes the task.

I don't want to live in a world where the worth of all ideas is reduced to a binary concept, where all things are either perfect or useless.

(Fortunately for me, I do not live in such a world that is as binary as that.)


Nitpick:

"resolution of 3 arc-seconds (~100m²)"

This resolution is equivalent to tiles of 8500 m², not 100 m². I think the author confused tile edge length (about 92 meters) with tile area.
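
A quick sanity check of that arithmetic (assuming ~30.9 m per arc-second of latitude; the exact ground distance varies slightly):

  arcsec_m = 30.87            # one arc-second of latitude in metres
  edge_m = 3 * arcsec_m       # ~92.6 m tile edge at 3 arc-seconds
  print(round(edge_m ** 2))   # ~8577 m^2 per tile, nowhere near 100 m^2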


Damn, that's the second time I've confused those 2 concepts. The distinction is significant. I updated the article, thanks.

