
I spent a lot of time (6+ years) at the FAA writing software. There is a huge culture of process and policy, and not a whole lot of thinking, analysis, or actual understanding of the problems they are working on. It is maddening.

Imagine a spreadsheet with 700 lines in it telling you that you need to do A, B, C, D, E, F, G. Each of those lines instructs you to write a document detailing a procedure, with chain-of-custody forms and keys and password rotations, etc. Follow this process, fill out this presentation template, wait 3 months, and then present it to a board that doesn't give a shit about, or have any understanding of, your project.

It's all an insane amount of work, and it gets us the opposite of the goals these processes are intended to achieve.

I have zero doubt that the only thing that will happen as a result of this catastrophic incident is another 50 lines in a spreadsheet somewhere, telling you to do stuff that nobody will comprehend, implement correctly, or even verify until a similar incident triggers another investigation.




Agencies like the FAA are notoriously risk averse. Basically, the motivation of most employees seems to be: "if I mess up once, I get fired; if I don't produce any movement, I can't get fired". Naturally, the output is glacial progress and the introduction of tons of safety-theatre procedures (that get the implementers promotions for "increasing a culture of safety").

We all know this from the subject the FAA regulates: flights. New unleaded gas takes forever to approve. Simple changes to instruments take years of certification, leaving 1960s technology in place when clear improvements have happened in the last 60 years.

I always wondered what would happen if this culture were carried over to another space. We know what it looks like in medicine because the FDA has similar priorities. Rarely do we see it in tech, which is known to "move fast and break stuff". But here we get a glimpse of the dystopian crossover between FAA-procedure-culture and software engineering.


We know what it's like in software because we have, e.g., the example of how the Space Shuttle's flight control software was developed and audited. It's weeks of meetings, tests, and change processes, obsessing over changing a single instruction in assembly.

Which also suggests to me that the problem isn't per se the glacial progress & approval processes, but that they're using the wrong glacial processes.

If this FAA software were developed with something like the Space Shuttle's process, it would still take forever to change something, but at least you'd end up with a function that you could mathematically prove could handle any conceivable input.

I think you're never going to convince the government of a "move fast" culture. Even if the overall cost of grounding planes would be less than the cost of more reliability they'd never go for it.

Too many people's asses are on the line, and they're not having to spend their own money. The only thing they have to "pay" for is possible loss of face, or loss of political capital, both of which can be insured against effectively for free with taxpayer dollars.

But you might just be able to convince them that they're using the wrong sort of bureaucracy. You'd still spend a billion or two on something that should cost a million, but at least you'd get actual reliability as a result.


I would settle for the government moving bureaucracy from safety-theatre to actual safety.


The NASA Space Shuttle flight software was among the best ever developed. There was never a defect that impacted safety.

But that team had kind of an "unfair" advantage in that they were able to program in assembly code on bare metal with no real software stack. Whereas the rest of us are forced to build on a foundation of sand, using multiple layers of low-quality third-party software, in order to deliver any useful functionality.


> But that team had kind of an "unfair" advantage in that they were able to program in assembly code on bare metal with no real software stack

The Space Shuttle's avionics software was not written in assembly but in HAL/S, a high-level language invented for the project. Assembly was mainly used for the custom real-time OS kernel. They also maintained their own HAL/S toolchain, which was written in XPL, a PL/I dialect popular for compiler development in the 1970s. The development environment ran on IBM mainframes, and the main CPUs on the Shuttles were aerospace derivatives of the IBM S/360 mainframe architecture: the System/4 Pi, model AP-101. The same CPUs were used by the USAF (e.g. in the B-1 bomber), but the USAF mainly used JOVIAL to program theirs. Another big user of JOVIAL was the FAA, who used it to write a lot of their original mainframe-based air traffic control software (FAA HOST).

The Space Shuttle team inventing their own programming language was a byproduct of when the project started (the 1970s). If they'd started a decade later, they probably would have used Ada instead. But Ada didn't exist yet, and they thought inventing their own language was a better choice than JOVIAL.


It sounds less like risk aversion and more like cover-your-ass, though: go through long checklists and committees and people so that no one person can be held responsible for any problems.

The closest thing you'll see in tech will be in back-end software at banks, insurance companies, pension funds, investment companies, embedded engineering / SCADA, ERP systems, etc. The "move fast and break things" mindset seems to mainly be a thing in Silicon Valley internet companies / start-ups, largely because they're driven to pump up their own value instead of providing reliable software. If Twitter is down, it's an inconvenience, but if airplane software fails, lives are on the line.


> Agencies like the FAA are notoriously risk averse.

There is a difference between risk averse ("we require heavy testing before deployment") and dysfunctional ("we are terrified of breaking anything but are unwilling to invest in maintenance").

Take the Air Force. Their risk decision is: can we accomplish the specific mission at hand with, ideally, minimal loss of warfighter life? If they don't maintain their planes, they can't minimize that loss.

It's similar with IT generally: kicking the can down the road just grows the problem.


Yeah, working for government is strange. I'm in a position where the product owners don't want to make any enhancements/changes to a production system out of budget concerns. However, the actual invoice for my team's time is the same whether they make changes or not. They don't want to "spend the money on enhancements to a functioning system", but the invoice amount is the same every month regardless.

Oh, and in my experience, government FTEs can screw up an infinite number of times with no risk to their jobs.


The early aviation industry had a very substantial "move fast and break stuff" mentality. YouTube has plenty of videos of those folks. However, "break stuff" usually meant smearing aircraft and bodies all over the landscape. Much of the FAA's regulation is an attempt to prevent killing "too many" people. If the engine on your car stops, you can usually pull over to the side of the road. If the engine on your aircraft stops, you're going to be landing soon, hopefully not on a building full of people. And if you're really lucky when the engine stops, all the people sitting in the back can walk away.

The FAA is frequently called a "tombstone agency": one that acts only after fatal accidents, when the bodies have already been buried.


When that risk-averse culture is carried over to another space, it just fails and never gets noticed. Risk-averse organizations get outcompeted by more efficient, risk-tolerant ones every time such competition is possible.

Risk aversion only works for an entity that has a forcible monopoly in its space. The FAA and FDA do. Another is the Nuclear Regulatory Commission, which exhibits the same behavior: their job is to prevent accidents, and the surest way to do that is to never approve anything at all.


SpaceX seems like a counterpoint to this type of operation. They are much more efficient and have been rather reliable.


> Agencies like the FAA are notoriously risk averse. Basically, the motivation of most employees seems to be: "if I mess up once, I get fired

Weird, my impression was that federal employment is the opposite: mess up as much as you want and you'll be retrained and reassigned.

Or is that only in contexts like harassment or drug use?


You're right. By "mess up once, get fired" I really mean that if a flight crashes and the FAA could have done something, they'll get political flak. High-level political appointees may need to resign. Whereas if an agency does nothing for a term, the political leaders get to stay.

For routine FTEs, as a sibling post mentioned, you can probably mess up in a lot of ways (short of murder or being really non-PC) and still keep your job.


This is common for government agencies. I worked at Labor, and your second paragraph hit very close to home. The only plus side was the insane amount of free time you spent waiting. I tried to go back on the civilian side of the contracts, and I nope'd out of it as soon as I hit red tape.


This is by design; it meets the primary goal of the government, which is funding patronage jobs that make few, if any, demands on the worker.


All the federal employees I have met are EXCELLENT. It is very difficult to get hired into one of the dwindling jobs at the agencies; a lot of the government has been contracted out. Having worked at a federal IT contractor, I can say that in my experience most of those workers are very good and dedicated to their work. HOWEVER, they don't always understand the mission at hand well. Some of this is to be expected, given that contractors come and go more frequently than the federal employees charged with implementing government programs.


Ironically extreme selection pressure is exactly what leads to extreme risk aversion.

"We only hire the absolute best" does not lead to "move fast and break things" (which sounds awful in a FAA context anyway) it leads to people who devoted their lives to being the absolute best at coloring inside the lines and never straying off the path, to being the best follower out there, to the ultimate authoritarians desiring to grow into being the authority.

The heaviest selection pressure usually does not lead to the most efficient system; it generally leads to a system able to endure heavy selection pressure.

There's a sociologist who wrote a famous book about bureaucracy; it's in my library at home, and the name of the sociologist and his book are on the tip of my tongue, but he wasn't near the top of a quick Google search. The above is a paraphrase of his book. No, it's not Douglas Adams or even Scott Adams, although those two are correct about the problem in general, LOL.


David Graeber?


People who want a job like that should just be on UBI instead. Then at least we'd have systems that could change to meet the needs of their users in a timely way.


UBI will never pay what a government job pays in purchasing power; that part is just math I think.


I don't think it's unreasonable to expect that government pay, in a world where the government is a welfare program with a governing hobby, might have less purchasing power than UBI in a world where we prioritize effective governance over bureaucracy. It's not a zero-sum game.


There are just under 3M federal (civilian) employees. I think it's entirely unreasonable to think that we would pay a UBI to ~210M adult citizens (a 70x multiple) at levels that would represent greater purchasing power than what the people nominally working for the federal government get.

If you're firm in your view that that's reasonable, I'd like to learn more about the proposal as to how the math would work.


OK.

I'm not going to code up a simulation because the research hasn't been done to confirm my choice of constants, but I can sketch it. Each workday is a function of the macroeconomic climate and some set of cultural norms during which we exhibit some blend of the following personae. As we'll see, introducing UBI reduces the prevalence of the bureaucrat persona which has knock-on effects leading to surplus.

---

The Missionary - has a mission and is working towards it. Cares more about the mission than prestige.

The Worker - doesn't have a plan, but likes to be a part of something meaningful. Will gamble with prestige in order to ensure that the work stays meaningful.

The Bureaucrat - willing to tolerate or create waste in favor of preserving prestige. Sometimes manages to trick a worker into believing they're a missionary.

---

Obviously people are more complex than this. Also, I'll use dollars to indicate productive output, even though I think that most of the time collapsing such things to a single dimension is a slippery slope to somewhere awful. All this to say: gimme a break, it's a model.

Here are my totally made-up constants; note that X is a parameter which will depend on UBI:

---

Missionary creates $100 of output always, plus a 1% daily chance to inspire a worker to become a missionary, a 1% chance to inspire a bureaucrat to become a worker, and a 1% chance to burn out and become a worker.

Worker creates $80 of output if they're following a missionary and -$20 if they're following a bureaucrat, because it's likely that they're causing more harm than good. They have an X% chance of burning out and becoming a bureaucrat.

A Bureaucrat creates -$20 of output, because they're definitely doing more harm than good.

Now let's say that everybody consumes $5 each day to stay alive.

---

So X is our worker burn-out rate.

As with most systems of this kind, it's very sensitive to initial conditions. If you start with a high enough concentration of workers and missionaries, your bureaucrat rate will be very low and you'll have a surplus. With too many bureaucrats, most of your workers are doing more harm than good, and the system is carried (if it survives at all) by the missionaries and the minority of workers following them.

Critically, X is a function of risk tolerance. The worker becomes a bureaucrat because they cannot tolerate the risk of pointing out the wastefulness of the bureaucrat above them.

Introducing UBI does two things. It makes standing up to your Bureaucrat less risky, reducing X, and it creates a fourth type, the Video Gamer, who consumes $5 to stay alive but doesn't sabotage the output of any workers like the bureaucrat does.

Some percentage of the Bureaucrats will become Video Gamers if UBI is implemented. That percent depends on the size of the surplus. If the surplus gets big enough, UBI can be so comfortable that there's no reason to be a bureaucrat, because it doesn't afford a significant quality of life increase.
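
If anyone wants to poke at the dynamics, here's roughly what the sketch above looks like as toy code. The constants are the made-up ones from earlier; treating the populations as continuous fractions and splitting workers between leaders in proportion to the current populations are extra assumptions of mine:

    # Toy model only: made-up constants, continuous populations.
    MISSIONARY_OUT = 100   # $/day
    WORKER_GOOD = 80       # $/day when following a missionary
    WORKER_BAD = -20       # $/day when following a bureaucrat
    BUREAUCRAT_OUT = -20   # $/day
    UPKEEP = 5             # $/day everyone consumes to stay alive

    def simulate(m, w, b, x, days=1000):
        """Cumulative surplus; x is the worker burn-out rate (the UBI knob)."""
        surplus = 0.0
        for _ in range(days):
            # Assumption: workers follow missionaries or bureaucrats in
            # proportion to the current populations.
            leaders = (m + b) or 1
            w_good, w_bad = w * m / leaders, w * b / leaders
            surplus += (m * MISSIONARY_OUT + w_good * WORKER_GOOD
                        + w_bad * WORKER_BAD + b * BUREAUCRAT_OUT
                        - (m + w + b) * UPKEEP)
            # Daily transitions from the sketch above (1% chances, plus x).
            w_to_m = min(0.01 * m, w)   # missionaries convert workers
            b_to_w = min(0.01 * m, b)   # missionaries convert bureaucrats
            m_to_w = 0.01 * m           # missionaries burn out
            w_to_b = x * w              # workers burn out; UBI lowers this
            m += w_to_m - m_to_w
            w += m_to_w + b_to_w - w_to_m - w_to_b
            b += w_to_b - b_to_w
        return surplus

    # Compare the same starting mix at two burn-out rates.
    print(simulate(m=5, w=65, b=30, x=0.02))
    print(simulate(m=5, w=65, b=30, x=0.01))

Vary the starting mix and x to see the sensitivity to initial conditions mentioned above.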

---

So to answer your question about the 3M and the 210M, I'd guess that today we've got 213M people living on the positive output of maybe 50M; the rest are bureaucrats or are following bureaucrats. They're busy fighting over their slice of the pie instead of baking it. Bureaucrats sort of expand to consume available resources, so as automation improves worker output, that ratio will get worse unless we find a place to put them.

We'd have to do research to come up with better constants and run that model for real to be sure, but I don't think it's unreasonable to assume that reducing both the bureaucrat concentration and the worker burnout rate by 50% would triple the system's output once you let the personae conventions find a new equilibrium. I'm not sure how much more federal employees will get paid above UBI, but I think there's room for the end result to be that future UBI is cushier than today's government work.


Where does the UBI money come from in this system, particularly if the surplus gets so big that there's no reason to take a government job?


We issue it to ourselves, more or less like CirclesUBI is doing it in Berlin.

They're just letting it be inflationary and setting the payout to increase over time to adjust for inflation. So maybe you get $5 per week this year and $8 per week next year... This can be balanced so that it amounts to a more or less constant purchasing power.

Personally, I prefer the demurrage approach, where account balances just have a decay rate; that way you've got a better shot at $5 written down today having the same meaning to people who read it next year, but the economics are the same (more on the theory here: http://en.trm.creationmonetaire.info/ ).
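
A toy calculation of why the two schemes come out the same (the 5% rate and the amounts are arbitrary, purely for illustration):

    r = 0.05       # arbitrary per-period rate
    payout = 5.0   # this period's UBI payment
    hoard = 100.0  # a balance someone is sitting on

    for t in range(4):
        # Inflationary scheme: payout grows, old balances stay nominally fixed.
        inflated_payout = payout * (1 + r) ** t
        # Demurrage scheme: payout stays fixed, old balances decay.
        decayed_hoard = hoard / (1 + r) ** t
        # Priced in "periods of UBI", the hoard is worth the same either way.
        print(t, hoard / inflated_payout, decayed_hoard / payout)

Either way, a hoarded balance loses the same share of purchasing power per period relative to fresh UBI income.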

It's gotta be decoupled from the government so that, as discussed in my model, it can act as a safety net while you're ridding yourself of wasteful bureaucracy. It doesn't really work if the bureaucrat you're deposing can threaten to take away your UBI.


The people who need a pyramid structure to strive for, and office politics to fight, will never settle for "from each according to their ability and to each according to their need", and those are the people we've selected for.


So we give them a pyramid to fight over. We should just stop letting it be the whole world.


[flagged]


Hear, hear. Also, if there weren't red tape, then there would be more corruption and lack of accountability. The staff of agencies are damned if they do, damned if they don't. If one wants to critique government agencies, criticize the political appointees who are in thrall to the industries they are supposed to be regulating. The rank and file generally work hard and in good faith. They are just trying to be good stewards of public resources. I've seen this at the federal and state levels (primarily in North Carolina and Louisiana).


It sounds like it would be very simple to implement correctly if it were literally spelled out like what you're describing?


I just spent a month doing an E-Business Suite platform migration and it was very similar: follow the step-by-step instructions to apply patches and run commands. Each patch has a README file listing dependent patches or commands which need to be completed first. It works mind-numbingly great until you run into the first of many circular dependencies.

That's one problem with treating the implementer as a machine to run code. The whole procedure can't be tested, so when parts are changed they can break the whole. It relies on the human in the loop to resolve the conflicts, which is not repeatable.
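
You can see why mechanically: each README effectively defines a dependency graph, and "follow the instructions" amounts to a topological sort, so a cycle is precisely the input the procedure has no answer for. A minimal sketch of that failure mode, with hypothetical patch names (Python's standard graphlib):

    from graphlib import TopologicalSorter, CycleError

    # Hypothetical patches: each maps to the prerequisites its README lists.
    readmes = {
        "patch_A": {"patch_C"},   # A's README says apply C first
        "patch_B": {"patch_A"},
        "patch_C": {"patch_B"},   # ...and the loop closes
    }

    try:
        print("Apply in order:", list(TopologicalSorter(readmes).static_order()))
    except CycleError as err:
        # Here the step-by-step procedure runs out; only the human in the
        # loop can decide which prerequisite to violate first.
        print("Stuck on cycle:", err.args[1])

Everything up to that point is trivially automatable; the cycle is the moment the "implementer as machine" model breaks.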

The other problem is the "mind-numbing" part. No one can maintain 100% perfection all of the time. And in the context of presenting to people who don't know what it all means, I can see why mistakes would be made.


There can be multiple folks doing it in parallel and checking each other’s work, unless I’ve misunderstood?

If problems of circularity arise they can try to clarify the issue, or go back to the drawing board.


The problem is that spelling out complicated things is hard. Take the law, for example: in theory we have a coherent code that specifies exactly what things are crimes and the appropriate methods of dealing with them. In practice, it takes teams of highly trained professionals and an elaborate system of courts to clarify what these laws mean in all but the most trivial cases.

Generally you need some flexibility to handle slight variations in circumstance whenever making a decision, and at times things come down to judgement calls that cannot be turned into an algorithm. But bureaucracies don't like empowering their workers to make decisions, and so you get ever more convoluted instructions that shift the decision-making process higher up the ladder.


Which legal theory postulates a ‘coherent code that specifies exactly what things are crimes and the appropriate methods of dealing with them’?


That is a different use of the word theory, and serves as an excellent example of why the problem of communicating complex ideas so unambiguously as to eliminate the need for interpretation is so intractable.


More concretely, I have never read or encountered anyone educated in or practicing law who supposed this could even exist.

Where did you acquire this notion?


What produces this sort of environment? Is it basically a way for everyone to shed responsibility?

I guess even that wouldn't answer it, since this isn't really a problem in private industry.


In my experience, this arises as an unintended consequence of the quest to lower costs and reduce bureaucracy.

About ten years ago, a new manager was brought in to make us act less like a moribund government department and behave more efficiently. As an example of government waste, he pointed to the money we were spending on storage for data back-ups. We wouldn't need back-ups if we stopped making mistakes.

You might think that this no-back-ups policy would be an instant disaster, but it lasted years without issue. When there was a failure, the manager would hand the sys-admin a soldering iron, the admin would fix the hard drive, and we would be back on track. Finally, the sys-admin retired and a new one replaced him. Not long after, a critical system failed and data that we were required by law to maintain was lost. The manager handed the admin a soldering iron and told him to fix the hard drive. The admin said it was impossible and the manager fired him (yes, you can get fired from a government job). Other candidates were interviewed, but no one applying for a $30k job was confident that they could repair a broken hard drive.

Finally, there was talk of hiring the old admin to come out of retirement and fix the drive. Except he explained that it had always been impossible. During his tenure, he'd spent 5% of his salary (gross, not net) paying for back-ups and replacement drives. When the manager gave him a soldering iron, he'd just chuck out the old drive, buy a replacement off Newegg with his personal credit card, and load it with the data he'd backed up to his personal S3 storage. His back-up script was still running on the server, but he'd stopped paying for the storage space the moment he retired.

Eventually, the manager was forced to spend a whole year's budget on an expensive data-retrieval firm to recover the data (which still cost an order of magnitude less than the fine the department would have had to pay if we'd lost the data). He was fired and a new manager was brought on board. Because of the money that had been lost on the data retrieval, new measures were put in place to prevent this from ever happening again. This included a new back-up system and audits to ensure that other employees weren't using personal funds to pay for departmental resources. Of course, this meant rigorously documenting exactly what resources each employee was using...

Six years after the manager was brought in to decrease costs and increase agility, we were more over budget and more tightly controlled than we'd ever been.


And you were a part of that culture?



