I read elsewhere, from a reliable source that I can't find right now, that the mean time between reboots on 'mature' fighter planes is 8 hours, and the F-35 is at around 4 hours so far, with plans to improve it.
The toughest problem the program is having is matching the timing of the aircraft’s fusion software with its sensors’ software. “As we add different radar modes and as we add different and capabilities to the DAS system and to the EOTS system, the timing is misaligned,” and then you have to reboot it. Bogdan said he’s aiming for eight to nine hours between such software failures when a radar or DAS or EOTS needs to be rebooted, which is what legacy aircraft boast. Right now they are at four to five hours between such events. “That’s not a good metric.”
I don't understand a lot of that, but I think the fusion software refers to a unified UI for the pilot that merges information from the plane's very many sensors and presents it in a comprehensible way on one screen. Apparently pilots in older planes suffer from data overload and from watching many different screens at once - all while trying to fly a plane in combat. The UI is reported to be a highly consequential improvement for the new plane.
> The toughest problem the program is having is matching the timing of the aircraft’s fusion software with its sensors’ software
I don't know if I'm interpreting that correctly but what I'm getting out of it is clock skew between sensors and the fusion system, which I assume to be referring to sensor fusion.
That makes sense, analogous to a typical distributed systems issue. I would imagine the skew needed for the type of sensing they do is quite small. I doubt they can just toss an atomic clock on the plane :)
> They totally could. Some atomic clocks are quite small.
Are they mil spec? In other words do they operate correctly over a very large temperature range, with large input voltage variance, under high G-forces, in high vibration environments, etc?
I don't know anything about atomic clocks beyond what I can read on Wikipedia, but I can say there is generally a large difference between mil-spec ICs and commercial- or industrial-grade ICs. I would assume this extends to atomic-clock-scale devices as well, but that's just a guess.
If you're traveling at high speeds and relying on this timing to adjust flight control surfaces, or trying to use radar to get position of combatants and/or the ground, a few microseconds clock skew would be a major issue.
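To put numbers on that: radar ranging is a time-of-flight measurement, so a timestamp error translates directly into a range error at the speed of light. A rough back-of-envelope sketch (illustrative only, not any real avionics computation):

```python
C = 299_792_458.0  # speed of light, m/s

def range_error_m(skew_s):
    """Apparent range error when a time-of-flight measurement
    carries a timestamp error of skew_s seconds."""
    return C * skew_s

# 3 microseconds of skew mislocates a radar return by roughly 900 m.
print(round(range_error_m(3e-6)))  # -> 899
```

So even single-digit microseconds of disagreement between subsystems is enormous at radar scales.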
I didn't say clock skew. I said drift. As in, you lose GPS timing signals so your own internal master clock freewheels. That has nothing to do with clock skew (different components within the jet 'seeing' different time due to delays or other issues). The jets are designed to function with degraded or no GPS functionality. But, if it's there, they will take full advantage of it.
I would imagine the skew needed for the type of sensing they do is quite small. I doubt they can just toss an atomic clock on the plane
I don't think a single atomic clock would do it -- sounds like they'd need a high accuracy clock for each separate subsystem.
But unlike distributed systems, it seems like an aircraft is small enough to have one clock that everyone uses - any idea why they don't do that? Single (or too much of a) point of failure? Disparate vendors can't agree on a standard? Too much extra wiring or other complexity?
The standard way to do timing in a system like this is for each subsystem to run its own binary clock, with the main system sending periodic time-of-day messages to keep everything externally synchronized. In that sense, all the systems do have one clock. Many do not need to worry about wall-clock time or synchronization, though.
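A minimal sketch of that scheme - a free-running local counter rebased by periodic time-of-day messages. All names and the tick period are illustrative, not any real avionics API:

```python
class SubsystemClock:
    """Free-running local counter, rebased by periodic time-of-day
    messages from the master system."""

    def __init__(self, tick_period_s=0.001):
        self.tick_period_s = tick_period_s
        self.local_ticks = 0   # free-running binary counter
        self.tod_base = 0.0    # wall-clock time from last TOD message
        self.ticks_at_tod = 0  # counter value when that message arrived

    def tick(self, n=1):
        self.local_ticks += n

    def on_tod_message(self, wall_time):
        # Master broadcasts wall-clock time; remember the local counter
        # value at receipt so later reads can extrapolate from it.
        self.tod_base = wall_time
        self.ticks_at_tod = self.local_ticks

    def now(self):
        # Wall time = last TOD base + local ticks elapsed since then.
        return self.tod_base + (self.local_ticks - self.ticks_at_tod) * self.tick_period_s

clk = SubsystemClock()
clk.tick(500)
clk.on_tod_message(1000.0)  # master says: it is t = 1000.0 s
clk.tick(250)               # 250 ms of local ticks later...
print(clk.now())            # -> 1000.25
```

Between TOD messages each subsystem free-runs on its own oscillator, so the quality of that oscillator bounds how far it can drift before the next resync.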
That is one explanation. I've worked in avionics telemetry & remote sensor fusion and there is actually a push to go from synchronous archs (using PCM on I2C buses) towards modular and networked archs (using some variant of IP/etc. protocols).
Problem is you need a reference time clock. On a monolithic arch it's easy: the backbone bus timestamps the packets it sees. On a networked arch you need a time server a la NTP, otherwise you get time skew and loopbacks.
Even with an accurate time clock, you may also need to correct for timing issues due to cable length.
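For a sense of scale: signals propagate through cable at only a fraction of c, so every meter of cable adds a few nanoseconds. A rough sketch (the velocity factor is an assumed typical value; it varies by cable type):

```python
C = 299_792_458.0       # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66  # assumed typical for coax/twisted pair

def cable_delay_ns(length_m):
    """One-way propagation delay through a cable of the given length."""
    return length_m / (C * VELOCITY_FACTOR) * 1e9

# 10 m of cable already costs ~50 ns, which matters if you're
# chasing nanosecond-level synchronization between subsystems.
print(round(cable_delay_ns(10.0)))  # -> 51
```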
I'm curious what you mean by 'synchronous' here, for example an I2C bus is clocked by the master, ignoring clock stretching and that clock need not be in any way synchronous with the processor clock on the slave.
And to clarify, we're talking about PHY level protocol clocks here, not time clocks.
Check out PTP, and then the WhiteRabbit research project. It's not exactly easy to implement, but you can get down to ~10ns sync on an ethernet network using it.
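The heart of PTP is a four-timestamp exchange that solves for clock offset and path delay simultaneously, assuming a symmetric link. A toy sketch of the arithmetic (the numbers are made up for illustration):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Slave clock runs 5 units fast, true one-way delay is 3 units:
# t2 = 100 + 3 + 5 = 108, t4 = 110 - 5 + 3 = 108.
print(ptp_offset_delay(100, 108, 110, 108))  # -> (5.0, 3.0)
```

White Rabbit layers hardware timestamping and phase tracking on top of this to reach the sub-10ns range; the asymmetry of the actual link is what ultimately limits accuracy.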
I doubt they can just toss an atomic clock on the plane :)
As I understand it, an early variant of the TCAS system for collision avoidance did just that. Every aircraft would have a Cs or Rb standard that was synchronized to a master clock, and they'd broadcast their position and altitude at an assigned time slot.
This would have been very expensive at the time (late 60s-early 70s?) they were proposing it, and it would only have worked if everyone had the necessary equipment. So it didn't fly. But in later decades it was no big deal to put a high-quality atomic time/frequency standard onboard an aircraft, and I wouldn't be surprised if most large airplanes actually do carry a Rb standard for one reason or another.
I think the image a lot of us have is that an atomic clock is a PC-to-dishwasher-sized box. Rubidium standards are smaller than that but less accurate than the lab grade equipment.
You don't really need to toss an atomic clock on the plane. Just make the plane use GPS.
One of the tricky parts is distributing this clock signal to all portions of the avionics system that need it. But that's beside the point.
It doesn't really sound like a timing skew problem (as in, a problem with making all systems recognize the same time simultaneously) but rather with making sure that all relevant information needed to construct a coherent 'picture' from all sensors reaches the core processors in a timely manner.
You can't provide a real time display if, say, your radar data is delayed for a significant period of time due to problems or instabilities with the radar's onboard processor.
> You don't really need to toss an atomic clock on the plane. Just make the plane use GPS.
These are warplanes - you don't want critical systems reliant on satellites that could be jammed, or downed, during wartime, which I suspect is why they don't do this already.
> But fighters do use GPS. Daily. They use it for accurate navigation. But they do not absolutely depend on it.
But...
>>> Just make the plane use GPS.
...implies you would make them absolutely depend on it. Military aircraft use GPS daily, because it's cheaper and better. However, they also have accurate inertial navigation systems as a backup, so they can still operate effectively if the enemy makes GPS unavailable.
It is relatively uncommon for a single-seat (& single-pilot) fighter jet to fly more than 8-9 hours. If they do, then AFI & OPNAV duty rules restrict how these flights are conducted (including combat ops) and how soon the crew can fly their next mission. Aircraft like the B-2 have two pilots and planned sleep periods.
According to references, 13.5 hours [0] is the record for the F-16 combat mission, and this was actually a SEAD (Suppression of Enemy Air Defenses) mission in a single-seat F-16CJ. Similar missions in the EA-18G extended to 13+ hours with a single pilot, along with a WSO/EWO in the back seat.
Various unofficial reports indicate F-16s do fly 14 hour ferry flights, but that is highly unusual.
What I'm wondering is how these fighter jet pilots deal with needing to go to the bathroom. 14 hours stuck in a tiny cockpit kinda makes it hard to relieve yourself...
Not sure why you are being downvoted. It is a serious question, if a bit gross.
At the Evergreen Air & Space museum (home of the Spruce Goose), they have an old SR-71 on display. Part of the exhibit has a TV playing a video loop with pilots and crew talking about their experiences. One of the pilots recalls how they'd have a "high-protein, low-residue" breakfast like steak & eggs before a long mission. The idea was to provide long lasting energy without having to go "number two" in the suit. He says that some of the crew who would help them out of the flight suit at the end of a mission were female, so it'd be especially embarrassing if you'd pooped yourself. I got the impression that, while rare, it did happen sometimes. :-\
This is what happens when you have an entire industry (defense/general gov't services) that treats software development as a cost center instead of a profit center.
This article should clarify what kind of radar reset they are referring to. Integer, origin, and timing resets are a function of radar tech, and are not specifically related to something like a system reboot. This would be something like restarting a process because bad data from a sensor got into it, potentially leading to a bad range or trajectory result.
I have no experience with aviation, why do they need to be rebooted at all in less time than they can fly without refueling? It would seem with such a critical system the tolerance for rebooting would be exceptionally low. The F35 is not a dogfighter; does it have any meaningful capabilities while radar-blind?
> why do they need to be rebooted at all in less time than they can fly without refueling?
I don't know much about Navy / Air Force ops, but it's possible that a sortie may consist of taking off and returning to rearm / refuel without shutting down, which could easily span more than four hours. In addition, lots of fighter planes refuel in mid-air, so there is still a strong requirement for the subsystem to remain functional.
Most military jets can be hot refueled on the ground, with engine(s) running. Some fighter jets (F/A-18s etc) can hot swap crew on the ground, with one engine running.
With the F-35 and other jets, it's possible to swap crew and perform minor maintenance with the jet electrically powered up and cooled by an auxiliary air supply, although the engine is shut down and then restarted. This actually saves significant time on the pre-takeoff procedures. I would think the Lockheed Martin engineers could also perform diagnostics after a failure without interrupting power or restarting the subsystem.
I'm actually curious what possible technical reason would prohibit them from running for 48+ hours (or indefinitely), e.g. some kind of garbage collection issue. I'm unaware of any technical reason they should ever have to be rebooted aside from software upgrades, and I'm curious why it might be necessary. Unfortunately, from the article: "But the Air Force did not go into detail about when the instability in the radar system occurred..."
It is certainly possible, but not normal. Normally the engines are shut off during servicing and the aircraft is hooked up to a ground cart to provide limited systems functionality.
I am getting a little tired of the "F-35 is bad" headlines. There should be more context telling us how common such problems are in general. I remember listening to a podcast where they said that SR-71 pilots had to restart their engines every so often. Same for the Concorde. Sounds really bad, but neither problem hurt the airplanes' usability.
I firmly believe that military procurement is a mess but the discussion reminds me of the political discussion where Republicans jump on anything that goes wrong under Democrats and vice versa.
The SR-71 was a bit more exciting than just restarting the engine. The problem there was actually an inlet unstart. Those big spikes at the front of the engines could move, and had to be adjusted to just the right location to direct the shock waves properly so the engine proper got nice airflow.
Naturally, this was all done automatically, but the electronics weren't perfect. Sometimes they'd get too far out of adjustment, and the shock wave would blow out. This was apparently a pretty violent event, with a loud bang from the engine, often accompanied by another loud bang as the crew's helmets were slammed against the side of the cockpit.
I believe that the SR-71's control systems were eventually upgraded to where this didn't happen anymore, or was very rare, but it was apparently somewhat common for a while.
Some discussion here, along with bonus discussion of the F-35's inlet:
None of this affects your overall point, I just thought it was a fun thing to elaborate on a bit.
I think you're on to something with this. It sure seems like the F-35 is not a particularly good airplane, but that's mainly because of cost and capabilities. Stuff like this does sound like piling on, rather than any substantive criticism.
To be fair, the SR-71 was a highly specialized, and ultimately, highly experimental aircraft at its time.
The Concorde was similar. Only a few were ever made, and they were never expected to be the ubiquitous aircraft of their time or even their type (reconnaissance and passenger transport).
The F-35 is intended to replace several aircraft models which currently fill more specialized roles. It was intended to be the ubiquitous (in various configurations) fighter aircraft for the US military (and some allies). This problem may not be the worst of its issues; it may not even be a major issue, or really (from the pilots' perspective) an issue at all, since similar reboots (though less frequent) are the norm for them in other aircraft.
But many other issues, along with the reduced production numbers and increased production and development costs, demonstrate that there are numerous problems with this project in particular. It shows failings in the US political structure (which has essentially forced this project to continue), in DoD procurement (which allowed many of these issues to occur in the first place), and in defense contractor procedures (LM, IIRC, doubling or tripling the number of engineers on the software side to "speed things up").
Really, it's going to be a classic for future engineers, computer scientists, MBAs and others to study. So I guess that's a good thing.
I totally agree that the F-35 project is a mess. But I would like to see a real analysis of the problems and not just jumping on any problem that sounds bad on the surface. Maybe a hard reboot is a problem, maybe not. I don't know.
This particular issue is not really worthy of being reported in the media. Yes, there are radar problems that require it to be rebooted. But the issues have been identified and a fix is in work and will be rolled out soon. This story is a non-starter.
This is correct. A radar or mission avionics system reset isn't a critical emergency. It may force a mission abort, but the jet is still flyable.
A reset event would be more worrying if it was a FCS (Flight Control System) computer reset in flight. Air Asia 8501 did something similar and the jet crashed into the sea. An F-22 pilot failed to properly reset his FCS before takeoff, and destroyed a $150m+ aircraft at Nellis AFB.
I assume that bugs requiring reboots to resolve is part of the design of the failure model. When a software fault is detected, its consequences are unknowable. Those consequences could include killing the pilot and destroying his $100M aircraft. Therefore, the default response is to reset everything and get to a known-good state ASAP.
This is not a bad general-purpose failure strategy: It's been believed for decades that most software faults encountered in production are Heisenbugs[1], and disappear rather than reproduce.
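A toy sketch of that strategy - on a detected fault, don't diagnose, just reset to a known-good state and retry, falling back to a degraded mode if the fault persists. The function names and retry count are illustrative:

```python
import random

def flaky_subsystem():
    """Stand-in for a subsystem with a transient ('Heisen') fault."""
    if random.random() < 0.3:
        raise RuntimeError("transient fault")
    return "ok"

def run_with_reset(max_resets=3):
    # On a detected fault, don't try to diagnose in flight:
    # reset to a known-good state and retry.
    for _ in range(max_resets + 1):
        try:
            return flaky_subsystem()
        except RuntimeError:
            pass  # in a real system: log, reinitialize state, restart
    return "degraded"  # fault persisted: fall back or abort the mission

print(run_with_reset())
```

Because Heisenbugs usually vanish on retry, a plain reset-and-retry recovers almost every time; only a genuinely persistent fault escalates to the fallback.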
Though to be fair, the software isn't due to reach 1.0 until 2050 (assuming the Pentagon can scrape up another billion dollars to train Ada programmers). No surprise that alpha software is bug-ridden.
You are correct. But the Ariane crash was the result of improper testing when reusing (valid and correct) code in a new situation.
Better testing and development procedures would have, potentially, prevented this. And it could have occurred regardless of the languages involved.
What Ada does bring to the table over C, C++, and Fortran (the other three primary languages used in the embedded avionics world) is a much better type system and concurrency model, which gives greater confidence when developing the system. Much as the ML world's type system reduces or eliminates certain categories of errors, or moves them from runtime to compile time. Where errors are still detected at runtime, Ada also offers, particularly during test and development, a much greater ability to pinpoint the earliest location of the error, provided the type system is being exercised properly.
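For those who haven't used Ada: a constrained subtype like `subtype Altitude is Integer range 0 .. 60_000;` rejects out-of-range values at the point of assignment. A rough Python analogue of the idea (the helper is purely illustrative and lacks Ada's compile-time checking):

```python
def ranged(lo, hi):
    """Rough analogue of an Ada constrained subtype: any value
    outside [lo, hi] fails immediately, pinpointing the fault site."""
    def check(value):
        if not (lo <= value <= hi):
            raise ValueError(f"{value} outside {lo}..{hi}")
        return value
    return check

# An altitude in feet, constrained like Ada's
#   subtype Altitude is Integer range 0 .. 60_000;
altitude = ranged(0, 60_000)
print(altitude(35_000))  # -> 35000
# altitude(-100) raises ValueError here, rather than letting a bad
# value propagate deeper into the system as an unchecked int would.
```

The payoff is exactly the "earliest location of the error" property: the failure surfaces where the bad value is produced, not three subsystems downstream.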
Here's a timeline of the plane so far [0] and a page that includes some details from 2014 including a master schedule and a production status [1]. The 2050 comment was presumably an exaggeration referencing the frequent problems and delays with the development.
Developing avionics software is quite different from what almost everyone on this site does. Check out JPL's 10 rules to get a taste of how it works, then consider coding within a bureaucratic process where not a single character of the source or supporting documents changes without multiple approval and confirmation steps.
It's a painful and expensive process, but it can work, as demonstrated by the handful of teams that have reached a defect rate of ~1 bug per 1e6 opportunities, _all_ using variations of the above (though some of them use Ada).
Let's check ourselves a bit and realize developing flight software is a different world from web services and mobile apps.
Smug condescension aside, you didn't actually say why using a less secure/quantifiable language is a benefit to avionics software. I read JPL's 10 rules; they're pretty common-sense stuff that I'd hope any C/C++ project would take advantage of, but they don't address why C/C++ would be better than Ada for this specific use case. Or a functional language, for that matter.
> Let's check ourselves a bit and realize developing flight software is a different world from web services and mobile apps.
I agree, it is far more important in avionics to be provably correct and to eliminate as many bugs as possible. It is exactly BECAUSE these aren't web services or mobile apps that C/C++ shouldn't even be a candidate.
You've made exactly zero arguments to support C/C++ here. Just acted smug and condescending to all of us lowly other developers. Do you yourself work in avionics?
Your original statement read to me as an assertion that it's obvious that this radar system defect (and others) exists because the system was developed using C/C++.
I think it's a gross oversimplification to state that the avionics software is developed in C/C++: the avionics software is using a subset of C and/or C++, along with support tools, system engineering models, code reviews, V&V, etc. As I see it, there isn't that much difference on this sort of project if one were to use Ada or Java instead of C or C++: the support tools may be somewhat simpler, but the overwhelming majority of the SDLC activity remains unchanged.
I'm not clear what causes this radar system defect, I didn't see it described in the source article, but a defect in design (for example) would seem likely to occur regardless of the programming language selected.
Also, I'm curious to hear what your proposed alternative would be and why?
This (and a ditto to gaius). At my first real job out of college, the company was doing work for Boeing. They were given a choice between Ada and C for their work. They chose C. I later asked why we didn't use Ada (I was working as a tester; most of our issues outside timing would have been detected at compile time if we had properly exercised the type system in Ada, such as by constraining values to certain ranges). The answer was, "C programmers are easier to find." Never mind the excessive amount of money spent on testing (time and personnel) and tooling (time learning it). Getting C programmers was easier, and often cheaper, so they went with that.
Since then, in other jobs in the same general field (avionics or related), the answer has been the same at most offices.
Agree with you, adrianN, and gaius. One of many troubling trends we are seeing WRT quantity/quality/cost/safety/security/time to market as engineering orgs try to reboot their business models for a rapidly-changing, cheap hardware, service and FOSS-centric world.
Given how well the other parts of the project are going, I think it was likely to be a disaster regardless. Based on various news reports I've read about its delays and cost overruns, the F-35 seems to be defined by difficult and frequently changing requirements given to sprawling teams who cannot communicate well with each other due to the massive complexity and number of people involved.
I think you're assuming that the issues and bugs are simple language errors or the proverbial segfault - which in my experience is quite rare in high reliability software regardless of the language - issues tend to appear at a system or design level.
It doesn't matter much what language you're using if the HUD blurs under high G-forces, clocks run out of sync, you forgot to design a feature, or you designed the prioritization of events wrongly.
I was genuinely curious. You can use any language you want for an embedded system as long as it has enough resources to run the runtime environment or the interpreter. In many (most?) cases, it isn't worth the overhead to do that.
There isn't just one CPU in this aircraft. There are probably dozens or hundreds. Not all of them can be powerful without undesirable tradeoffs.
That being the case, I wish I wasn't too late to edit my comment to be a tad less snarky.
That said, I understand a bit of the trade-offs involved (though far from an expert). I think there's room for discussion somewhere in the middle. The parent took one extreme ("you had it coming using C/C++"), and at first reading your comment stated the opposite ("C/C++ is just what you use on embedded").
Knowing very little on the subject at hand (embedded avionics SW), I'd still argue that maybe there should at least be a discussion around using something like Ada for critical systems? (Yeah, I know what a big hit Ada is with the embedded folks, at least the ones I hang around.)
It is technical bias based on technical fact. Am I or am I not correct in the technical claim that languages exist, like Ada, which decrease certain error classes and eliminate others entirely?
Go ahead and actually support why C/C++ produces more verifiable and secure code than Ada?
It may be true that Ada reduces or removes some types of software engineering errors, but that is not the only goal when choosing a language. Why did the military move away from Ada if it's a silver bullet?
> Why did the military move away from Ada if it's a silver bullet?
The US military doesn't develop software. They also have very little choice over who does. The entire system is a convoluted mess of lowest bidders and inept contractors.
So it is ultimately up to the lowest bidder what they want to develop in. There just isn't enough incentive to develop high quality, low in bugs, software for them.
If downvotes only affected the specific comment, that would make complete sense, but they also affect karma. I guess it's not irrational to punish people who consistently misunderstand things, but is that a purposeful decision or an unfortunate byproduct of the karma system?
The idea isn't to punish people, IMHO, but to improve content for others reading the conversation. That comment doesn't contribute to the conversation, and in fact it confuses it, so it gets downvoted to where few will read it.
Voting is about the readers, not the commenters. Personally, other than for determining specific capabilities such as downvoting, I think tracking karma per user does little good. Karma per comment - a bit of feedback on your comment - is useful.
It does punish people who consistently post opinions based on wrong information and do not edit themselves, but that's a side effect.
If the poster didn't want to be downvoted, they should read and think about the topic before posting.
In general, HN is pretty good at not downvoting for dissenting opinions, but the group does downvote heavily for wrong info or posts that are inflammatory based on the wrong facts.
Unfortunately this happens to create lots of frustration on the part of newcomers to HN. Karma is king, so you can be downvoted by the elite; you should check the HN guidelines.
I have no idea. I posted my comment because the title baited me into clicking it because I was under the impression that those planes were so bad they required a full reboot. His answer seems valid to me.
Is it? English is not my first language. I must not be the only one who read it this way, because my initial comment was upvoted a lot.
If the radar's bug required rebooting the entire plane, how would you write it without being redundant? "F-35 radar system has bug that requires hard reboot of the F-35 in flight"?
The comprehension error in question is not that it requires a reboot, but that the reboot is a "4-hour reboot". The article indicated a reboot every 4 hours, not a reboot that lasts 4 hours. That's the only error in question - and that's why it was downvoted.
There's no question that the radar needs a reboot, but to say it's a 4-hour reboot is factually wrong and inflammatory based on an incorrect assumption.
Had a bug. Everything I've read says they already delivered a fix for this, but I've noticed some news outlets not mentioning it.
Though I gotta say this damn jet has been a nightmare. Since they'll never stop working towards newer and newer jets I'm curious if lessons learned here will apply to the next one. After doing government contracting for many years where we don't get the "lessons learned" from the previous contractors or much of anything (I've had to FIGHT to get source code for a product we were supposed to update; ended up having to rewrite the damn thing) I'm curious how it works when they contract out to have a jet built.
You joke, but for so many things in aviation, this is the answer. There are also a lot of problems where the answer is "re-poll all your interrupts, that problem you (the computer) think is a problem shouldn't be anymore".
Judging by reddit comments this type of thing is pretty common with fighter jets which is kinda surprising to me. I would have thought that this is one area where reliability is highly prized. Along with space shuttle software & other life/death stuff etc.
Instead "bash it with a hammer and maybe reset it" seems widely accepted. Weird.
So, how fast does it reboot, and how much does this affect the other software components and operating the plane?
Not that I expect it, but this could be a minor issue, if this 'radar system' is as good as stateless (and not, for example, a component that tracks friends and foes over time) and if it reboots in milliseconds.
The SR-71 flew faster than any plane in history and used cutting edge engine design whipped up by a very small team at Skunkworks in a short period of time. The F-35 doesn't outclass any fighter, is being developed with a huge budget by an enormous team, and they're using software tools and techniques that generally feel like regressions.
"During some Blackbird flights, however, the harmonious working of the spike and the forward and aft bypass doors broke down, and all too quickly the inlet was filled with more air than it could handle. When the air pressure inside the inlet became too great, the normal shock wave was suddenly belched out of the inlet in an unstart, accompanied by an instantaneous loss of air flow to the engine, an enormous increase in drag, and a significant yaw to the side with the affected inlet. Unstarts occurred 'when you least expected them—all relaxed and taking in the magnificent view from 75,000 feet,' wrote Graham in SR-71 Revealed. If the crew’s attempts to restart the inlet’s supersonic flow failed, they would have to slow their aircraft to subsonic speeds."
It wasn't really a "bug," but more that the computers could not keep up with changes in the flight conditions. If the engine spikes were not in the correct position for the speed and altitude, it could cause a blowback that would extinguish the engines, requiring the pilots to restart them.
It also looks like they later mitigated it as technology improved:
> Lockheed later installed an electronic control to detect unstart conditions and perform this reset action without pilot intervention. Beginning in 1980, the analog inlet control system was replaced by a digital system, which reduced unstart instances.
> Lockheed Martin has discovered the cause of the problem and has diverted developers who were working on the next increment of the F-35's code to fix it. A patch is expected by the end of March.
Seriously, isn't this to be expected? I know a lot of people have a massive hate boner for the f-35 because they read a lot of shitty war blogs but for a bunch of coders to act incredulous towards this information is absolutely ridiculous.
How many bugs have any of you introduced and fixed in the past week?
Most of us are working on things that are several orders of magnitude less likely to get someone killed than the F-35 radar code. The goal for safety-critical embedded flight code is 0 hard reboots mid-flight. Is that controversial, in your opinion?
Edit: "standard for safety-critical embedded flight" was a bad word choice and wasn't what I meant. s/standard/goal
As per hackuser's comment above, the mean time between reboots on most mature fighters is 8 hours, while the F-35 is 4 hours, so the standard is clearly not 0 hard reboots.
I realize this. But bugs are sometimes worth talking about. Many people feel that a bug requiring a hard reboot mid-flight in a 5th generation fighter, a problem apparently not able to be overcome through multiple generations of fighters, is worth talking about. If you, or the original complainer, disagree, you're free to not participate in the conversation, but complaining about the rest of us talking about this bug is annoying.
Quoting from Boeing:
The F-35 has the most robust communications suite of any fighter aircraft built, to date. Components include the AESA radar, EOTS targeting system, Distributed Aperture System (DAS), Helmet Mounted Display (HMD), and the Communications, Navigation and Identification (CNI) Avionics.
If by robust they mean "you can go 8 hours instead of 4 hours between hard reboots", I think that's worth talking about.
It's not the war blogs, it's the general anti-US sentiment that liberal sites tend to lean towards. But when Russia or China launch some shitshack new weapons system, suddenly everyone is applauding. Thankfully, this stuff is easy to ignore. I grew up hearing how the F-16 was a nightmare, and then later the F-22. The loudmouth knee-jerk anti-US types have long been discredited. Making weapons systems is difficult, and having an open-ish process in a democracy where milestones and delays are publicised should be a net positive, but it just gives the nutters more ammunition to whine on the internet. Autocratic states don't have this problem, as everything is secret and dissent leads to being served polonium tea or sent to a "re-education" camp. Frankly, this forum isn't mature enough to discuss US weapons systems, because the upvoted commentary will be overly critical hit pieces bordering on propaganda or "whataboutism." No one actually wants to discuss the tech here, how difficult this kind of tech is, the long tail of military technology, etc.
This is going to be a very impressive plane and one China, Iran, and Russia absolutely do not want to see in the air.
While Chinese defence tech is still quite far behind, Russian tech has always outperformed western expectations. Most often when we see Russian systems in a theatre of war, it's 30-40-year-old, poorly maintained systems in the hands of poorly trained pilots/soldiers. I don't recall US pilots ever even going up against '80s-generation planes like the MiG-29 or Su-27...
Every time Russian technology, in the hands of competent people, is put on display, it does very well (witness the performance of Indian Su-30s in various wargames, and modern Russian planes in Georgia/Syria).
And of course, being in tech, I'd have thought you'd have come across a few Russian programmers/engineers - for the most part they're all pretty damn awesome.
Anything more modern would probably have led to WWIII.
I think the rest of your comment is confirmation bias: this ethnic group/nationality is good/bad depending on where you sit on the old racism spectrum.
It's also worth mentioning that a lot of the advances of the SU were due to subjugating 15 independent states, usually via military means, and suddenly having a lot more talent on tap. Often, non-Russian Slavs didn't get credit for Soviet innovations due to the horrific politics of communism and the Soviet system in general. So crediting "Russians" for many of these innovations is ahistorical and propagandist.
The constant applause surrounding the SU is fairly amusing. If they were so great, why aren't they around today? If Russia is so chock full of greatness, why is it running a falling-apart economy, and why has it degenerated into a kleptocratic petrostate that makes most Middle East dictatorships look progressive and economically sound? I get it, you don't like the US and the EU, but you'd have to be crazy to appreciate the SU and Russia if your complaints are focused on the US not being "Bernie Sanders" enough for you.
>Every time Russian technology, in the hands of competent people, is put on display, it does very well
Well, when your targets are Ukrainian women and children, yes, the casualty rate is impressive, but that's far from a real military engagement against another nation state. We have yet to see these arms tested against a proper western foe, and for good reason.
There's nothing racist about recognizing nations for their strengths. The reason some nationalities are better at some things is because their culture emphasizes it. US culture has traditionally been anti-intellectual and emphasizes sports, making money, etc. Really smart people in the US tend to avoid government work (because the pay is not as good and the bureaucracy stifling) and go to work in Silicon Valley or in a different field altogether like medicine or finance. In the USSR, mathematics and physics were strongly promoted by the government, so of course they did really well there. Smart engineers went to work making weapons and such because it was probably prestigious, and since all the industry was state-owned, there was no advantage in staying out of it like there is in the US.
Why did the SU fail? Simple: the things that make a society good at producing leading physicists aren't necessarily the same things that result in a robust economy. Worse, spending most of your GNP on your military tends not to result in a strong economy either; just look at North Korea. The US spends a lot on its military too, but nothing like what the Soviet Union did, as a percentage of national output.
> Well, we certainly have seen an F-14 go against a MiG-23.
Did the Libyan MiGs fire a single shot? Of course I know the answer, so I'm not sure how this can be considered any sort of indicator of the gross inferiority of Russian tech. In fact, based on the engagement itself and the subsequent aftermath, a reasonable assumption would be that they goaded the US into the confrontation, as an attempt to gain international support...
> I think the rest of your comment is confirmation bias: this ethnic group/nationality is good/bad depending on where you sit on the old racism spectrum.
Interesting that you equate my pointing out that Russians are generally well educated and that they produce good engineers with racism. Next if I say Russia produces plenty of good chess players you'll also call me racist? Fact is, their culture encourages people to pursue these activities, and they have fairly high educational standards.
The same can be said for many countries that have a strong focus on education and produce good engineers; witness the recent Indian space program, which, on a relatively modest budget, is definitely 'punching above its weight class'.
> The constant applause surrounding the SU is fairly amusing
What does my post have to do with the SU, and any economic or political elements of the SU? It's purely about technology.
> If Russia is so chock full of greatness, why is it running a falling-apart economy, and why has it degenerated into a kleptocratic petrostate that makes most Middle East dictatorships look progressive and economically sound? I get it, you don't like the US and the EU, but you'd have to be crazy to appreciate the SU and Russia if your complaints are focused on the US not being "Bernie Sanders" enough for you.
Again, where are you getting this? Pointing out that they produce relatively advanced technology for their current economic state? And it's the truth, from Sputnik to Mir to their current space program, their defence programs, etc... You're right, Russia's economic output is behind the US, and they somehow still have the ability to put people into space.
The rest of it is off-topic, is not related to anything I said, and it's obvious you're trying to offend.
> Well, when your targets are Ukrainian women and children, yes, the casualty rate is impressive, but that's far from a real military engagement against another nation state. We have yet to see these arms tested against a proper western foe, and for good reason.
First of all, the verdict is still out on the exact amount of Russian involvement in Ukraine (especially since the next Ukrainian Prime Minister used to work for the US State Dept, reinforcing the view that the US was involved in the overthrow of the previous government). Second, from a purely objective standpoint, they didn't use air power vs. Ukraine.
Anyhow, Russia hasn't gone up against the west, nor vice versa. Russia did rout the Georgian military in a week. And they are the only effective power in Syria right now. Take that for what it is. US analysts have admitted Russian tech is more advanced than they thought, especially in the realm of electronic warfare.
BTW, forgive me for having a nuanced view of the world. For not bringing political beliefs into a discussion about TECHNOLOGY. I have Russian friends too, and I'm of Ukrainian background. My wife is black (and an immigrant from a region where the inhabitants were once slaves from both Africa and India), I live in the west, and speak multiple languages. I know people from every corner of the world. Shoot me for not believing in a world where everything is black and white, where it's us vs. them.
Natalie Jaresko is likely going to be the next Ukrainian Prime Minister when Yatseniuk resigns. Mikheil Saakashvili is in their government. The government of Canada has already admitted its embassy was used for logistics purposes during the Kyiv 'protests'. And the Dutch are likely to vote against the EU association agreement with Ukraine coming up.
I don't know how much you know concerning Ukrainian politics, or how closely you followed the whole 'revolution'; let's just say it's all far from clear-cut.
And maybe my wording is too nuanced, but I said "the exact amount of Russian involvement". Obviously there weren't Russian Sukhoi bombers wreaking havoc on Kyiv...
You're right, I missed a few encounters. Most were in highly asymmetrical conflicts, and it seems all reports of MiGs shooting down western targets were downplayed as 'mechanical failures' or 'friendly fire'.
Anybody know how long the reboot takes? That seems like important information. Especially if e.g. your AAMs need directions from the master system (?) in order to take over tracking by themselves.
This doesn't surprise me one bit. I interviewed at one of the major defense contractors for a programming job last year, and the technical interviewer had never heard of C#.
If someone only works in embedded systems that are meant to last 20+ years, should they also know about the latest Js/Bootstrap/Ember/whatever front end framework?
C# is a mostly Windows-centric, managed language which, frankly, will likely never enter a discussion about embedded, real-time systems. I'd expect them to know about Rust before they know about C#.
Also, how many 'competent' engineers know about the newest Fortran and Ada standards? Or have even written a single line of either?
There is so much more to software engineering than choice of language. I think it's entirely possible that defense contractors have more knowledge of the software development lifecycle than you are giving them credit for.
I believe parent's point is that one should know that C# is a choice, not that they have to use it. How can one make an informed choice if one doesn't know what the choices are? If all you have is a hammer, and all that.
One of the best programmers I ever worked with would probably not know of C# beyond that it exists, maybe that MS makes it.
But I'd hire him over nearly anyone else for embedded work. He did absolutely no programming or engineering work after hours. He played his guitar, went to music festivals, and ran a club (a couple different times in his life, don't know what he's doing these days or if he's still alive, he'd be past 70 now). C# doesn't enter into the discussion when developing embedded software, and his lack of knowledge of it indicates nothing about his character or ability.
For someone to have never even heard of C# and to be working full time in a senior-level programming job, you have to be completely and utterly isolated from any kind of programming community.
Just because I don't use Node.js, Ruby, or Perl in my work doesn't mean I have an excuse to not know that they even exist. How can you choose the best tool for a job if you only know C++ and Ada?
> for someone to have never even heard of C# and to be working full time in a senior level programming job, you have to be completely and utterly isolated from any kind of programming community.
The majority of defense contractors I know don't spend time in the "programming community" in their off hours, both because it's contractually too cumbersome to do so, and because they're doing other things in their spare time. They may spend time in the "defense community". It's very different culturally from SV.
> Just because I don't use Node.js, Ruby, or Perl in my work doesn't mean I have an excuse to not know that they even exist. How can you choose the best tool for a job if you only know C++ and Ada?
Node.js, Ruby, and Perl are not acceptable for use on real-time or safety-critical systems.
> how can you choose the best tool for a job if you only know C++ and Ada?
Having worked in that world for almost 10 years, I can tell you that this choice is not up to you. You are told what tools you will be using, typically driven by government customer "experts" and inter-company politics.
Languages that have no use in my office outside a handful of projects (not intended for end users) include everything that's not: Ada, C, C++, Fortran, Jovial (being replaced), assembler (various forms), and Delphi (one project, too costly to rewrite, too lucrative to drop). C#, Java, Python, Ruby, JS, Smalltalk, Rebol, Perl, Bash, C--, Lisp, Forth, Factor, Go, Rust, SML, OCaml, Haskell, Erlang, Racket, Scheme, F#, Visual Basic, BASIC (in other forms), etc. offer no value to the majority of our projects. When they are used, Perl and a couple of other scripting languages in particular, it's for data-analysis tools (scanning logs, turning hex dumps into useful printouts, etc.), in which case almost any language will suffice.
But on our target hardware platforms, and given our target performance constraints, the initial listed languages are pretty much the only options. This is the industry trend.
You've done a very good job at illustrating the way I'm interpreting the parent and grandparent comment. Of the list of languages you just wrote out, I've never heard of "C--". I've written code in 22/31 of those languages. Depending on how you want to measure the duration of my career, I've got somewhere between 10 and 20 years of experience.
If leaders in an organization haven't at least heard of a language that's in the top 5 of TIOBE (http://www.tiobe.com/tiobe_index), I'd have some serious concerns when considering whether or not to work with them. These days I'm mostly hanging out in embedded land, but at a minimum I spend enough time keeping in touch with what's happening in other industries that I can at least understand what other people are talking about and what they're up to.
It was the first number that shocked me the most.
EDIT: I found it:
http://breakingdefense.com/2016/02/bogdan-predicts-f-35s-for...
> The toughest problem the program is having is matching the timing of the aircraft’s fusion software with its sensors’ software. “As we add different radar modes and as we add different and capabilities to the DAS system and to the EOTS system, the timing is misaligned,” and then you have to reboot it. Bogdan said he’s aiming for eight to nine hours between such software failures when a radar or DAS or EOTS needs to be rebooted, which is what legacy aircraft boast. Right now they are at four to five hours between such events. “That’s not a good metric.”
I don't understand a lot of that, but I think the fusion software refers to a unified UI for the pilot that merges information from the plane's many sensors and presents it in a comprehensible way on one screen. Apparently pilots in older planes suffer from data overload and from watching many different screens at once, all while trying to fly a plane in combat. The UI is reported to be a highly consequential improvement for the new plane.
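The "timing is misaligned" complaint in the quote reads like the classic timestamp-alignment problem in sensor fusion: each sensor reports on its own clock and sample rate, and the fusion stage has to bring everything onto a common timebase before combining it. Here's a toy sketch of that idea; the sensor streams, rates, and skew value are all invented for illustration and have nothing to do with the actual avionics:

```python
# Two simulated sensor streams on different clocks. Sensor B's clock
# runs 0.03 s ahead of sensor A's (a known, fixed skew). Before fusing,
# B's timestamps are corrected and its values interpolated at A's
# sample times. All names and numbers are made up.

def interpolate(samples, t):
    """Linearly interpolate a list of (timestamp, value) pairs at time t."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")

sensor_a = [(0.00, 1.0), (0.10, 2.0), (0.20, 3.0)]     # A's own timebase
sensor_b = [(0.03, 10.0), (0.13, 20.0), (0.23, 30.0)]  # skewed by +0.03 s
skew_b = 0.03

# Step 1: shift B's timestamps onto A's timebase.
b_aligned = [(t - skew_b, v) for t, v in sensor_b]

# Step 2: fuse by sampling both streams at A's timestamps.
fused = [(t, va, interpolate(b_aligned, t)) for t, va in sensor_a]
print(fused)
```

If the skew estimate is wrong, or drifts as modes and loads change, the fusion step ends up pairing readings that were not taken at the same instant, which is one plausible reading of why adding radar modes and DAS/EOTS capabilities knocks the timing out of alignment.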