In BigCorps, if there's a stupid requirement, there's usually a reason for the stupid requirement to be there in the first place, but getting to the reason might require peeling back a few org layers, since the people enforcing the policy will not be the people who wrote it. A more productive use of time would be to understand the reason for the policy, document why it doesn't apply to your case, and then attempt to get approval.
For instance, there's a restriction at my workplace (not a software company, a regular old industry fortune 500) which prevents git installs from pushing to any non corporate GitHub repo from our work machines.
The obvious reason it exists is to prevent people (a lot of whom are analysts or data scientists, not professional programmers) from shooting themselves in the foot by pushing code to their personal repos in error.
It's annoying to work around if you need to, say, push a contribution to an open source project, and you could rage at infosec for enforcing it - but it obviously exists because stupid errors must have happened.
About five years ago, a place I was working at was selecting a new laptop for all employees. Mine was up for replacement, so I was interested in what they were picking.
They'd arrived at some god-awful Lenovo gamer model. It fit the performance and price point they'd decided they needed.
I said I'd prefer to have something smaller with fewer Christmas lights.
"Policy is everyone has to have the same laptop, and some people need more graphics power than the one you're asking for."
Turns out there'd been some big thing back and forth already about size and performance and this was the compromise that nobody liked.
"But why does everyone have to have the same laptop?" I asked.
"Well, it's just policy. Every desk has a docking station, and we don't want to have to buy different ones."
So I flipped the selected monstrosity over to reveal the extremely lacking docking connector at the bottom. There were some awkward laughs and I got my Thinkpad.
This is literally my job. Amazon calls it Dive Deep/Earn Trust. I get called in constantly to "Remove Blockers". Part of it is just understanding why the policy exists and then getting a policy modification. While it's definitely an art, it's not as hard as a lot of people think, but you can't be afraid to escalate.
It is also _much_ easier when you're called in for that purpose. Someone who didn't get that mandate on hire will be considered troublesome and overstepping, while someone roped in for that reason will be innovative and an enabler.
It's hard for people to scope an unknown task - especially dealing with bureaucracy. Is it going to be easy peasy, or will the rabbit hole just keep getting deeper?
Additionally, the ROI for an employee to investigate things like this is not very high, or could be negative. Imagine a poor H1B person - whose life and future depend on a company - trying to chase little things like this down. ("it's ok, I don't need an SSD", "I don't need an ergonomic chair", "a 17-inch monitor is fine", etc...)
It also helps that Amazon isn’t afraid to entertain ideas from lower in the ranks. You just need a strong idea and be willing to write a memo that explains it, then work your way up the chain until you get to a decision maker.
I did almost exactly the same thing at my last job. I'm a game dev and they provided us all with woefully underpowered MacBooks that didn't even have a dedicated GPU. The game engine was constantly crashing while we worked, making it very difficult to get anything done. I recorded the amount of time wasted on these crashes over the course of a week.
I then proved to management that there was about 5 hours a week being completely wasted for me waiting for my machine to open applications after they crash. So after that we all got new MacBooks that were properly spec'd out for game development. I got to pick the specs :)
Yeah, this part is something that annoys me: lots of the Unity team works on MacBooks, even though macOS is only a relevant platform for iOS as far as the end user is concerned.
I had the opposite experience once: at my previous company we were all issued mid-tier ThinkPads. Completely out of the blue we all received an email saying we'd be given a set budget and would be able to choose any machine we wanted. We went from 100% Windows to about 80% OS X within a week. They ended up giving the old ThinkPads away. (I put Ubuntu on one and gave it to my mom.)
This was in 2012 or so, and because of that I was able to learn Objective-C and go on to work on the first version of our native app. It made me a stronger engineer, and allowed the company to have developers who were already familiar with the product and backend services work on native. It was win-win.
Yes it is really individual: I have an ex-colleague who was a linux super-user and awesome sysadmin but moved to Mac for his development laptop. He loved Mac, and couldn't stand the hassle of Linux anymore as far as I understood him.
This confused me a lot as even though I wasn't even close to his level at that time I had no problems living with my Linux machines while my Mac laptop used to drive me crazy every day with some nonsense.
Turns out it is extremely individual. Maybe it is because I'm a keyboard person. When I work efficiently I feel I rarely touch the touchpad. I use fullscreen a lot, I sometimes use virtual desktops a lot etc.
You may be on to something with the keyboard idea.
I've always used i3/Sway/Pop!_Shell, or just dealt with Windows' subpar tiling. I'm picky about terminals as well, so that may have something to do with it. Although the Mac's shell is nicer than Windows' (obviously), I can't stand the built-in Terminal app.
The only good news would be... if apple decides to "reverse course" questing for non-discerning-consumers, all the "crazy ones" will breathe a sigh of relief and buy a pile of new hardware.
I remember when apple gave up on its "apple-inward-looking" powerpc hardware and shipped intel machines. All of a sudden apple machines could run windows too, people could see more possibilities buying apple and the market for apple hardware became much larger very quickly.
OS X is a great platform, less frustrating to use than Windows, with lots of well-designed GUI apps, a true MS Office, and a whole UNIX under the hood.
Yes, while WSL2 has some filesystem quirks, it works pretty darn well for what I do (ruby development). I picked up a 5 Y/O 2 CPU E5 v4 Xeon workstation for 300 bucks. I've sunk in an additional 400 for an SSD, RAM and GPU, and when paired with Microsoft Remote Desktop, I've got a solid setup that allows the best of all worlds.
If ruby development is your gig, and especially if Linux is your target runtime environment, I don't understand why you'd choose anything other than Linux, unless the choice was made for you.
Speaking as a web dev on a company-issued Mac, running my local dev environment inside a Parallels Linux VM. I asked to switch to Linux but was turned down.
(Now if Microsoft released LSW, i.e. an Ubuntu-based distro with Windows syscall support via a KVM-hosted NT kernel, I very well might opt for that instead.)
Obviously back in the day Windows was an absolute dog for any dev work outside of Windows, and a complete no go. I've only been using it for development for a few months now, but so far haven't encountered any major issues. Maybe I'll run into some interoperability problems with specific syscalls at the compatibility layer, but so far it largely just works for my needs (VS Code with intellisense and debugging, docker, K8S, Zsh and a few other tools). If my tests pass locally, I can safely assume they pass on the CI server, and haven't been proven wrong so far. If I ever need a linux box, there's an old laptop that's just for that. At this point it's turtles all the way down, so what's the harm in adding another turtle on top.
It's not all roses; there is a noticeable performance hit with the filesystem in both WSL and the Windows layer. I wouldn't use it for production - and why would you - but for development it's fine and largely not a problem. If you had to regularly deal with a large number of files, say log parsing or compiling, it's probably not going to work.
Besides the recent OS X SSL library issues, I haven't ever run into problems developing on a Mac, and I've used one for the better part of 10 years now. Can't say the same about developing on Linux (almost exclusively on the graphics side of things, but that's a problem when it's your primary machine).
> but getting to the reason might require peeling back a few org layers, since the people enforcing the policy will not be the people who wrote it.
The issue is that all the policy documents often only contain the One True Way to achieve their goals, while the goals remain unstated. The documents should always come with a rationale. And appending "exceptions may be granted for equivalent or better processes" to requirements would also help. I was faced with a password "security policy" that pretty much only whitelisted SHA2 + salting for password hashing. We were using a time-hard (not memory-hard though) password hashing function instead and several levels of auditors were effectively asking us to downgrade to comply with their policy without even understanding the difference.
It's terribly demotivating to have to argue with people enforcing policy they do not understand and erodes any willingness to engage with those policies instead of seeing them as theater and working around them.
> The issue is that all the policy documents often only contain the One True Way to achieve their goals, while the goals remain unstated. The documents should always come with a rationale.
A reason can be argued against while a policy must just be followed. It's probably by design, because as soon as you put a reason that becomes a target and people start to get ideas about why it doesn't apply to them. Much like when web companies disable your account and won't say exactly why. Not gonna take the risk you prove them wrong, are they?
It would be nice though if all laws had a purpose included.
Or policy writers copy-and-pasted something off the web.
In my Mega Tech corp, we have a security team that has created a hodgepodge of conflicting policies. For example, they ask for some applications to be internal-network only while others are accessible via the web. But they can never provide their reasoning, and they get annoyed when we ask for any details.
The worst part is that some of their policies directly contradict each other. For example, the policy is to never run as root inside a docker container, but then there are some applications provided by them that run as root in docker. And every time you use one, you have to remind them that these are their apps, or else they will ask you to stop running it as root.
Finally, I had the pleasure of talking with one of their Security and Compliance experts. They didn't even know how to list file permissions in Linux, along with a lot of other basic things. This person had clearly memorized interview questions but had no understanding of the concepts.
The only way to hide their incompetence is to act elite and not explain their reasoning.
> Or policy writers copy-and-pasted something off the web.
The fact that policy sponsors are more than anything looking to a single answer to end debate and establish uniformity is why policy writers can copy and paste stuff from the web and get it adopted as policy without a clear rationale connected to the organization’s particular circumstances. They aren't conflicting, alternative explanations.
> It's probably by design, because as soon as you put a reason that becomes a target and people start to get ideas about why it doesn't apply to them.
There's more to it than that. There are two separate groups of reasons:
1. The reasons a policy was put in place. ("Why did we do this?")
2. The reasons a policy succeeds. ("Why is this a good idea?")
You can know the reasons in group 1. But nobody cares about those. What matters are the group 2 reasons, and they were probably never known in the first place.
I was working in a throughput computing problem domain. About the time I got there, some asshole in HR had everyone's (current and former employees') tax information on their work laptop and lost it. Tens of thousands of people, because there was no policy against putting such information on portable equipment.
And as far as I'm aware there still isn't. Instead, within the year (might have been a lawsuit, I can't recall) the new policy became that everyone's machine gets whole-disk encryption. Including engineering. So my software was instantly 30% slower, and half a day was added to setting up any new machine. I don't know how many times I had to explain that it's not our code that got slower, the whole company got slower. It took me months of work to get us back to zero.
That person should have been run off and anyone in their dept caught doing the same should have been put on probation. I don’t know how you figure that it’s “fair” for everyone to suffer for the mistakes of one member. In a small team setting? Perhaps. But not for something like this.
Thinking about the policy in terms of punishments and fairness isn’t the right approach. Something happened that cost the company a great deal of resources, so the company is trying to prevent such a thing from happening again.
It may be doing a poor job at that, but it isn’t punishing you and it isn’t about fairness. It’s trying to prevent a future mistake. And because the company is a single entity, it is the company choosing to bear the cost.
It may make your job harder, and the company may not adjust its expectations properly after imposing this cost on you, but that isn't punishment or fairness. It is poorly thought out systems.
I don't know if encryption was the right policy response here (it seems to be a very good idea regardless due to theft/hacking possibilities), but I'm OK with this sort of policy sometimes.
Full disk encryption is a fair response because it won't be feasible to enumerate every type of situation that results in sensitive data being put on the laptop (such as temporary files or source code). If someone was going to just add tax numbers to a list it leaves a lot out; if they say "sensitive data" it leaves a lot open to interpretation; if they list everything they can think of it'll be impossible to properly comply while still getting work done.
So perhaps it was a heavy handed approach handed down mindlessly, but it could also have been someone looking at the bigger picture. Knowing the intent as others said would help.
You act like this policy was intended as punishment instead of as risk mitigation.
Mandating full disk encryption is easy for IT to enforce. A policy of not putting sensitive information on laptops is valuable, but difficult to enforce. Encryption is a sound way to reduce the risk of harm when that policy is inevitably broken.
Full disk encryption has also been the company-wide policy everywhere I've worked in recent memory, fwiw.
A policy might be put into place because of a single event. The expected outcome can drastically differ depending on who writes the policy. E.g. it could be "this type of attack won't succeed again" vs. "even if this or a similar kind of attack succeeds, it won't expose all of our data".
That's my exact point. A policy isn't random but created for an effect.
As an example: if an HN disclosure leaks passwords and email addresses into the public domain, imposing a policy control is _meant_ to have the effect that such a disclosure will not happen again, or if it does, to have a limited scope.
> On the other hand, a reason can convince people, while a policy can be avoided, worked around or ignored while creating zero feelings of guilt.
Though, I'd imagine that, in many cases, adding reasons would turn a policy pamphlet into a textbook.
This seems like one of those areas where there are unavoidable tradeoffs:
* clear, concise, and rigid policies are easy to communicate and enforce, but frustrate people with better ideas.
* clear, concise, and flexible policies invite a lot of noise from people who think they know more than they do (e.g. can I use my own custom password hash I invented? It has more bits so it must be more secure). Enforcement is also harder, since now you have to track exceptions.
* policy reasoning is harder to communicate, may never get read, and may actually discourage reading the policy.
You could have one pamphlet with the policy, and a separate book with the reasoning, and instruct people to check the book when proposing a change or exception. Then all of the lazy compliant people could have their brevity, and the crackpots and geniuses would both get the explanatory text they needed.
> You could have one pamphlet with the policy, and a separate book with the reasoning, and instruct people to check the book when proposing a change or exception
Very often part of the purpose of the policy is to end debate about how to handle an issue, and to get the “crackpots and geniuses” to STFU.
That's not a motivation that policy sponsors usually want in print, though.
> You could have one pamphlet with the policy, and a separate book with the reasoning, and instruct people to check the book when proposing a change or exception.
That's still a significant trade-off: putting together a book with the reasoning would be a lot more expensive than just creating the pamphlet. Your organization might not be able to afford the cost, period, or your boss may not agree to spend all that money just to make a minority of engineers happy in a few narrow cases. Then there's the question of how many people would actually consult the book of rationales; maybe it'd only be a handful.
I'm guessing most organizations tend to pursue a "minimum viable policy" tradeoff: spend as little effort as possible to create a policy that addresses a particular set of high-priority problems (since that is easily justifiable), and ignore more theoretical concerns like worker education and the personality preferences of certain kinds of individuals.
> On the other hand, a reason can convince people,
Policies very often have official reasons for just this propaganda purpose, but it's worth noting that the reasons cited to motivate compliance are often chosen for how well they are anticipated to motivate compliance, not for how well they reflect the actual reasons the policy was adopted. Thus, in practice, they are often not a good tool for understanding how to challenge the policy effectively.
I think it depends on the person. For myself (and probably many people with the hacker mindset), I don't like useless rules and want a justification or I tend to think I "know better" which is of course sometimes true and sometimes not.
I used to think this was a rebellious streak but perhaps it's just from that desire to know how things work. Absent a reason, I'll find my own.
A lot of people don't care either way, rules make life simple so there's no need to complicate it further.
> A reason can be argued against while a policy must just be followed.
And if you talk to people directing policy efforts (who are often not the ones writing, or even actively choosing, policy but generally are the ones formally signing off), you often don't have to dig very far to find that settling discussions is as important of a rationale for the policy as any of the others. This obviously conflicts with ideals like continuous improvement, but it's true lots of places including those with nominal commitments to continuous improvement.
There is also the concern of letting perfect be the enemy of good. For every developer that correctly integrates a better password storage scheme, there will be dozens that use something worse. Consistently standardizing on something that is good enough might be a net win and avoid the overhead and risk of verifying that other approaches are as good or better. Easier to evaluate/amend the policy periodically than to evaluate every single password storage instance.
This is a common issue with policies. If you try to accommodate every possible situation, the policy becomes incredibly complicated and difficult to understand, let alone follow. If you have a simple policy that is easy to understand, there will inevitably be exceptions where the policy doesn't make sense.
> several levels of auditors were effectively asking us to downgrade to comply with their policy without even understanding the difference
The auditors' policy? Are you sure? An auditor's job is to check if you're doing what you say you should be doing. If you're arguing with an auditor then you're essentially arguing with your own organisation, without any hope of winning the argument.
An auditor's job is often to check if you're doing what an external standard says you should be doing (SOC 2 => AICPA trust principles; FedRAMP => NIST 800-53, etc.).
Unfortunately, these external standards may be written vaguely, and while you may have policies that define X as Y, the auditor doesn't have to accept your answers. For example, when PCI requirement 5 says "Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers)", your policy may say "antivirus is not required inside containers that run on platforms like GKE, as these are not commonly affected by malicious software." It's very likely you'll have a discussion about your interpretation of that requirement.
PCI also suggests a hardened system image, for example CIS, and consistency checking like AIDE. I'm getting tired of explaining that CIS (and other) "hardened" images just flip a few options and install lots of crap that can actually increase risk. E.g. you don't need cron? Haha, it's scored in the CIS benchmark, so now you're running it.
I don't mean to just single out CIS as bad, but recently I learned that the Ubuntu CIS docker images contain AIDE, cron, and sysctl configuration. Yes, you pay for that. I'll be making fun of that for a long time.
Not responding to this, just using the example. It can be easily argued that adding AV software to (some) servers increases their attack surface and reduces their security (leaving aside performance and other AV issues).
Sounds like you've mostly worked in perfectly-sized and structured corporations, where the auditors and policy-writers were perfectly connected to changing product, business and technical needs; well-staffed with policy writers and architectural governance committees who have the time, skill and background to have regular, even-handed tradeoff conversations when these issues occur, and where the engineering teams are prepped and able to engage in those conversations well.
Actually, you are both half right half wrong. If GP was talking about internal audit, they are correct. If you are talking about external audit, then you are correct. If you are lumping both audit functions into one, you are both wrong. Big difference between the two.
No, they check whether an external standard that you purport to follow is actually being followed.
You don't have to claim that you follow some standard that actually makes you less secure; you're just not going to be able to sell to clients who are also sheep in this manner (read: everyone).
Multi-national corporation. The auditing department and the ones writing the policy aren't even on the same continent. So it is all internal and "our own policy" in a sense, but so far removed that I might as well be talking to an external bureaucracy. By coincidence we later learned about the contact person within our own sub-org who would be responsible for forwarding change requests on those policies. The auditors themselves didn't see that as their responsibility, even when you explicitly asked them to do so.
It's definitely not the responsibility of auditors even if you explicitly ask them to do so.
In essence, the auditors are there to evaluate whether certain assertions are true. E.g. the company is asserting that they are doing X because they have a policy that X must be done. If it turns out that the internal communication within the company is screwed up and some teams are following the policy and some are not, then the claim is false, and the core job of the auditors is to detect and note this. Detecting such discrepancies is the main reason why such an audit (often external, and if internal, then mostly independent, not reporting to the audited departments) is requested. It's not their job to fix this or decide who's in the wrong - the initial claims were found to be false, and that's not okay regardless of whether the policy should be or will be changed. How (and whether) the company negotiates changes to the policy does not excuse the misleading claim that "X is being done" if the company is actually doing Y instead of following the stated policy. When management has fixed this (or claims to have fixed it) one way or another, then the auditors should re-evaluate whether the claims are true now.
Compliance serves a purpose and is not an end unto itself. So if you enforce the rules even when they achieve the opposite of the underlying goal, then you are Goodhart's law personified.
So even assuming for the moment that it's not the auditor's job to evaluate the rules themselves, at the very least they should escalate all the way back to those who write the rules if an inconsistency between a rule and its underlying goal is found. There's no point in having a separate channel for such communication, because a particular issue for which it is relevant cannot be resolved (without missing the underlying goal) until the rule is clarified.
Goal and value statements are a hallmark of effective leadership, and even people who don’t know this seem to know it on an unconscious level. It can make some people who don’t have “it” very defensive if you try to address the elephant in the room.
I’m not going to tell you how to accomplish the goal, but I’m going to tell you what we are trying to do and the manner in which we want to get there.
I wonder if there's value in simplicity of policy. Just like with coding: sometimes you decide not to add more features or make something more efficient because it would make the code harder to maintain. If you have one blanket policy at your company, it cuts down on conversations. Case in point: you said that auditors don't understand the benefit of better hashing functions. If the policy allowed for "equivalent or better", maybe it would cause confusion in different departments with different auditors many times over. If this is the case, though, I agree that this should be listed as the rationale, because I can imagine it's demotivating when people are required to do a worse thing for no apparent reason.
> I was faced with a password "security policy" that pretty much only whitelisted SHA2 + salting for password hashing. We were using a time-hard (not memory-hard though) password hashing function instead and several levels of auditors were effectively asking us to downgrade to comply with their policy without even understanding the difference.
You can comply without downgrading by just applying the SHA2 hash after your time-hard hash.
That's possible but it would mean unnecessarily changing security-critical code. I'd again have to make sure to use constant-time comparisons of the different output, migrate the existing data or require users to reset their passwords and so on. All to tick a box.
Ultimately we pretty much did that: switched to scrypt, which is based on SHA2, and pointed to the standard saying it's SHA2-based. But that does not change the fact that the policy imposes incorrect requirements on teams. SHA2 is not appropriate for password hashing.
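For what it's worth, the "SHA2 on top of a time-hard function" layering can be sketched in a few lines. This is a minimal illustration, not a recommendation: scrypt stands in for the time-hard function, and the cost parameters and salt handling are placeholders:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Time-hard KDF first: this is what actually slows down brute force.
    inner = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # SHA-256 over the KDF output, purely to tick the "SHA2 + salt" policy box.
    return hashlib.sha256(inner).digest()

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password("correct horse", salt)
assert verify_password("correct horse", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

The auditor sees SHA2 in the pipeline; the actual security still comes from the inner KDF. Of course, as the parent comment notes, touching security-critical code just to tick a box carries its own risk.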
> I'd again have to make sure to use constant-time comparisons of the different output
Why? What would this mean? You have a flow like this:
1. User sends "password".
2. Time-hard hash processes "password" into some hex value, HASHT. No comparison is done.
3. SHA2 processes HASHT into some other hex value, SHAHASH. You compare SHAHASH to the user's stored hashed password.
But even in step 3, you don't need constant-time comparisons. What purpose would they serve?
Assume the attacker knows your full hashing algorithm -- they can compute SHAHASH for any input password. (Maybe your salt is equal to the username, say.) That's enough to use a standard timing attack against early-abort comparison to learn the first byte (or other comparison unit) of SHAHASH.
But to learn the second byte, you need to provide a large number of hashes which all have the correct first byte. This can't be done. (It can, but you'll have to generate around 256 times as many hashes as you can use.) To learn the third byte, you need to provide a large number of hashes which all have the correct first two bytes. This is even more impossible. (In fact, you may recognize this problem as being the proof-of-work that Bitcoin requires.) The whole point of using a cryptographic hash in the first place is that we can't predict the outcome of a hash! Constant-time comparisons are something you need when you're comparing an attacker-controlled value to a secret value, but that never occurs here.
Perhaps I was thinking of verifying API tokens. But see, I made a mistake while reasoning about security code. Don't make me do that more often than necessary, one day it'll go wrong.
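For the API-token case mentioned above, constant-time comparison genuinely matters, because an attacker-supplied value is compared directly against a stored secret. The Python stdlib already ships a suitable primitive; a minimal sketch (the token value is purely illustrative):

```python
import hmac

SECRET_TOKEN = b"example-token-do-not-hardcode"  # illustrative only

def check_token(supplied: bytes) -> bool:
    # A plain `supplied == SECRET_TOKEN` may return at the first differing
    # byte, leaking timing information about the prefix; compare_digest
    # takes the same time regardless of where the inputs differ.
    return hmac.compare_digest(supplied, SECRET_TOKEN)

assert check_token(b"example-token-do-not-hardcode")
assert not check_token(b"example-token-do-not-hardcodf")
```

Using a well-tested stdlib function instead of hand-rolling the comparison is exactly the "don't make me reason about security code more often than necessary" point.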
Are the people "enforcing" the policy going to actually look at the code? Probably not. Just tell them what they want to hear so they can check their box. If it comes up again, just tell them it was a mistake. Sorry, I thought we were using SHA2... we're really using more-secure-thing.
ISO 26262 includes the requirement to prevent “implausible values, execution errors, division by zero, and errors in data flow and control flow” (8.4.4). I'm confident the division by zero aims at integer division because the result is undefined. For floating point, division-by-zero is perfectly fine and well defined by IEEE 754.
Nevertheless, our safety folks require us to enable the traps and shut down the device in case a floating division by zero occurs. This is stupid because a division by a very small number has the same effect as division by zero. However, since it is explicitly mentioned in ISO 26262 nobody has the balls to allow it officially.
I don't even want to estimate how many man-months have been burned to fix such "defects".
Somewhat recent experience doing robotics work really highlighted the dangers of dividing by small numbers. As a robot arm approaches a singular configuration, the matrix used to calculate motor torques starts to get very bad and, left unaddressed, will likely result in some pretty wild behaviour.
I would argue that dividing by very tiny numbers is also something that you should probably be catching, since the resulting very large numbers are likely not going to do what you're expecting...
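To illustrate why trapping only exact zero misses the point: in IEEE 754 doubles, dividing by a very small but nonzero number produces a huge finite result, and only slightly further along it silently overflows to infinity with no divide-by-zero trap firing. A quick check:

```python
import math

tiny = 1e-300
big = 1.0 / tiny
assert math.isfinite(big) and big > 1e299  # huge, but still finite: no trap fires

# One more factor of ~1e300 and the result overflows straight to +inf,
# again without any division-by-zero condition being raised.
assert 1e300 / tiny == math.inf
```

So a divide-by-zero trap catches exactly one point on a continuum of equally dangerous values, which is the commenter's complaint about the ISO 26262 interpretation.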
Mandatory additional detailed info: division by zero in floating point equals Infinity because the floating-point format has room to represent such a value. For integers there is no such representation, and therefore an exception should be thrown.
For a 32-bit float, the following are true:
Positive infinity: 011111111(and then 23 zeroes)
Negative infinity: 111111111(and then 23 zeroes)
NaN: x11111111(and then anything except all zeroes)
... x being either 0 or 1.
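The bit patterns above can be checked directly by reinterpreting a 32-bit float's bytes as an integer:

```python
import math
import struct

def float_bits(x: float) -> str:
    # Pack as a big-endian IEEE 754 single, then show the 32 raw bits:
    # 1 sign bit, 8 exponent bits, 23 mantissa bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return format(bits, "032b")

assert float_bits(math.inf) == "0" + "1" * 8 + "0" * 23    # +inf
assert float_bits(-math.inf) == "1" + "1" * 8 + "0" * 23   # -inf

nan_bits = float_bits(math.nan)
assert nan_bits[1:9] == "1" * 8      # exponent all ones
assert "1" in nan_bits[9:]           # mantissa nonzero => NaN, not infinity
```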
That's true for a lot of IEEE-754. 1/0 is defined in it in a useful way. Just as 1/3 is defined in a useful way that is not equivalent to its mathematical property.
It's not the correct answer for division of the reals. For division of IEEE 754 floats, it is the correct answer, because division in that context is defined that way.
Yes, but that's only a small part of the story: your limit only works if the numerator is constant and positive. Actually, the correct answer for lim_{x->0} a/x is really "anything" and strongly depends on a and da/dx...
So in general, assuming anything like \infty for a division by zero is wrong wrong wrong.
Sorry if I wasn’t clear. I was not implying that any a/b where b approaches 0 is always +infinity. I was simply saying that a/0 isn’t defined, but that in the case of 1/x (positive 0), it is somewhat defined as positive infinity.
Of course ideally evaluating this particular formula for x == 0 would give 0/0 which IEEE FP defines as NaN, but you could also get a non-zero finite number divided by zero due to rounding for x ≅ 1 and then the default result of +Infinity would be very far from the correct answer. Any situation where you might come close to a 0/0 or ∞/∞ term requires very careful consideration since there is very little separation between a NaN result and ±Infinity.
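The classic illustration of that 0/0 trap is sin(x)/x, whose limit at 0 is exactly 1, while the IEEE default result at x == 0 would be NaN. A small sketch (an illustrative function, not the formula from the comment above; Python's `/` would raise rather than return NaN, so the guard spells out what the IEEE default would have produced):

```python
import math

def sinc(x: float) -> float:
    """sin(x)/x with the removable singularity at 0 handled explicitly."""
    if x == 0.0:
        # IEEE 754 default semantics would give 0.0/0.0 -> NaN here,
        # even though the mathematical limit is exactly 1.0.
        return 1.0
    return math.sin(x) / x

print(sinc(1e-8))  # ~1.0: fine arbitrarily close to the singularity
print(sinc(0.0))   # 1.0 only because of the guard; NaN under the IEEE default
```

Without the guard, a silently propagating NaN (or a rounding-induced +Infinity) is exactly the "very far from the correct answer" case described above.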
I worked briefly in the ISO 26262 space. While I don't have a copy of it in front of me, my understanding was not so much that you must prevent division by zero as that you must handle it appropriately. Appropriately means you can show that if you get a divide by zero, safety is not compromised. If I get an Inf or a NaN in a calculation involving cruise control, and I do nothing to handle it, do you think it will be OK?
More formally, if you have a potential division by zero and get an Inf (or NaN in case it's 0/0), you have to show what impact this will have. Can it result in a safety violation? If yes, what ASIL level would this result in? Based on this, you define your mitigation. It seems the safety folks have defined the mitigation to be "shut down the device". If you can show that there is a simpler mitigation, or that a mitigation is not needed (i.e. has no safety impact), then they should back off. This mitigation does need to be noted and documented, though.
Of course, this could be an organizational issue: Perhaps the safety folks are being the safety gatekeepers as a small portion of their job, so they are not willing to spend time to deal with nuances. I was the guy responsible for safety for my team, and it sucked - but I was doing it full time so I could analyze scenarios. The challenge was that the developers were not well versed with the ISO standard (had never read it), and it was a small portion of their job, and so they kept arguing.
> This is stupid because a division by a very small number has the same effect as division by zero.
As others have pointed out, this is not true. There is a big difference between a very large result and Inf. And of course, if a very large result could lead to a safety violation, then you need to handle it.
> For floating point, division-by-zero is perfectly fine and well defined by IEEE 754.
Keep in mind that conformance to the IEEE 754 standard is often not complete. Don't assume your language (or your libraries) are fully conformant, unless they are advertised as such.
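A few runtime spot-checks can at least rule out gross deviations from IEEE 754 — to be clear, this is nowhere near a conformance test, just a sketch of the sort of sanity checking meant here:

```python
import math

# Quick sanity checks of IEEE 754 double behaviour. Passing these does
# NOT prove conformance; it only rules out gross deviations.
checks = {
    "inf is absorbing": math.inf + 1.0 == math.inf,
    "nan != nan": math.nan != math.nan,
    "signed zero": math.copysign(1.0, -0.0) == -1.0,
    "binary rounding": 0.1 + 0.2 != 0.3,  # expected artefact of binary doubles
}
print(all(checks.values()))
```

On a conforming double implementation every check holds; a failing check would be a red flag that your platform (or a library shadowing these operations) deviates from the standard.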
Even if the ISO 26262 standard seems to have definitive language, I seem to recall there is a portion of the standard that allows you to waive requirements if the requirement makes no sense in your use case. There is a procedure to do it (documenting, etc).
FWIW, we did not handle Inf or NaN or divide by zero anywhere. We were writing a general purpose library for auto manufacturers, and it was up to them to decide where/how it would be used (called "Safety Element Out of Context" in the standard). Since we did not know the context, we argued that we can't assess if a division by zero would be hazardous, and merely noted in the documentation where it could occur. It's up to the folks who buy our libraries to then do that analysis. We only took care of things like showing there were no known crashes or hangs.
What I meant by "the same effect" is that division by zero or by a small number can both result in Inf.
You are correct that it is possible in general. However, in this specific project, the decision was to shut down the device for safety reasons if the CPU detects a floating point division by zero. In my opinion this reduces the availability of the functions unnecessarily.
Just because division by zero produces a value doesn't mean the value means anything or should ever have been obtained. Division by a very small number and division by zero should NOT have the same effect. That is the error which you're supposed to catch by not dividing by zero in the first place.
The other principle is separation of duties. Policy is made by people who aren't the practitioners so that the interests of the organization are ahead of the whim or convenience of the practitioner. People who served in the military usually get this concept, as it's more obvious that there is a long line of stupid actions on random soldiers' part that leads to a longer line of preventative rules.
I'd also say that when I worked on a program for a big company that interfaced with a lot of small companies and startups a few years ago, the startups in particular had really awful practices. The "bullshit" compliance checklists for security revealed awful practices that present real risks. The big company was more vulnerable to systemic risk due to inflexibility, the small companies are more vulnerable to individual risks due to employee action or lack of controls.
> The big company was more vulnerable to systemic risk due to inflexibility, the small companies are more vulnerable to individual risks due to employee action or lack of controls.
This is incredibly well said and worth highlighting.
Management works by creating processes and systems. There are always inaccuracies and imperfections in how these are targeted, and inefficiencies resulting.
Yet creating processes and systems is the expected task of management -- it's a key way to scale.
I'm fairly much averse to process myself, but have been occasionally involved.
The critical bit is to continuously evaluate whether or not the processes and systems are achieving the mission of the organization.
In my, academic, world, the core problem is that there is no overarching mission against which the process/system can be evaluated. We have 5-10 important goals of the institution/system and multiple sub-organizations with wildly different emphasis on each goal.
Is there a "reverse" version of this where you do understand the reasoning, explain clearly why it doesn't apply anymore, but the other side just refuses to think for themselves and just sticks with the status quo because it must've been done with good reason and they don't want to second-guess it?
I believe the phrase is "jobsworth", as in "it's more than my job's worth".
The point of the article is that's kind of correct - the cost (risk) of deregulating is concentrated, but the benefit is distributed. There needs to be some way to capture the value-add of removing a rule.
But if you do that the easiest way - turning the global metric of productivity into a target - then managers will asset strip, fire security (ahem, Mozilla), and walk off with a fat bonus (short-term productivity is up!) before people realise what's left is the empty shell of a company.
Quite often the other side is perfectly capable of thinking for themselves. They just don't want to lose their job on account of that one common-sense approval in case things go south. Self-preservation etc.
> In BigCorps, if there's a stupid requirement, there's usually a reason for the stupid requirement to be there in the first place
Yes, but that doesn't mean it was ever a good reason, or that the reason still applies.
In a very large number of cases—like the one mentioned in the article—the primary reason is to let the company's management feel more "in control" of their low-level employees.
In a lot of (closely related) cases, it's straight-up racism, sexism, or classism. (Frankly, one can easily make a case that the working-from-home prohibitions fall into that category—they derive from an assumption that low-level employees are lazy and want to avoid work as much as possible, because they're analogous to the assembly-line employees of yesteryear, and thus assumed to be lower-class...which means lesser beings.)
"Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems." [1]
I suspect we all have. The question is rather: how much would it have cost to prevent that mistake?
The flip side is the insurance argument: given a big enough organisation, a mistake like that will happen, even if the probability of each individual mistake is very low. So assuming one of these will happen, what can you do to reduce the impact or to reduce the latency of fixing such a mistake?
Indeed, but did those 100 engineers increase the company bottom line by 10% more because they weren’t hindered by too much red tape, and is that value a multiple of their salary?
Also, do you expect mistakes of that magnitude to keep being made?
So the company I work for made the front page of HN last year for a security breach. I won't go into details, but I will say one factor was a particular instance where security was overlooked for many years due to convenience.
In response, the company has now sort of swung the other way: lots of things that have a perfectly valid reason and are reasonably safe have been forbidden or suddenly have needed to be justified in the face of extreme skepticism.
That wasn't a case of a single engineer causing 100x engineers worth of damage, but it was the case where something made hundreds of people's jobs a tiny bit easier for two decades, then caused a massive public loss of face and who knows how much reputational damage.
And I'd be willing to bet that a fair number of HN readers work at a company which has at least one system with similar properties (a bit more convenient, a bit less secure) to the one implicated in our security breach; but failure is a once-a-decade event, and their company isn't nearly that old yet. Would those engineers defend that setup using the argument from this article?
Anyone who has spent a bit of time trying to calculate the lost productivity caused by just the meetings needed to discuss breaches (internal, accidental) knows it's great ROI.
Everyone greatly overestimates ROI in their own area of expertise and greatly underestimates costs in the rest.
You might be right in some special instance, but generally you are not. One example: Sending files via Email is a security problem. So you prohibit this via settings, virus scanners, appliances, etc. Now you've solved the emailed worm problem. However, you've just created a "how do we move files/data/screenshots/..." problem for the whole company. Now everyone will need their own special solution for everything, support will need a screenshot upload tool because you cannot just email them screenshots anymore, marketing won't be able to do images in mails anymore, external people won't have access to internal file shares, etc.
Please tell this to my company's IT department. They've finally moved to Azure, and OneDrive with it, but are still forbidding any files being downloaded to devices or email attachments being sent to external addresses, because otherwise it's insecure.
So everyone is still using WeTransfer/Dropbox/whatever like they were in the years before.
We keep a low profile. Separate network, very limited interaction with "corporate IT" (which is outsourced). We keep infosec onside by making our liaison feel we are across things. Allows us to sidestep a lot of problems, because ultimately in a large company people don't have to
Took me 15 years to realise that incompetence is at worst ignored, and often leads to promotions.
I've spent 3 years telling corporate IT they have a problem with a router config that limits throughput: we got 3 Mbit rather than 1 Gbit. They spent 3 years insisting it wasn't them and that it was the upstream ISP. I even managed to get read-only SNMP access, and generated cacti graphs of their router showing 400M (iperf2 in UDP mode, so 400 kbit/ms) going in on port channel 1, but not emerging on the ISP interface.
My shadow-IT department spent 3 years paying for a completely separate network connection to bypass the corporate IT one and meet our requirements (easy to do when it's a remote branch office in another country), other departments just suffered it. It was the backup link so only used about 1 day in 10.
Eventually a senior member of (non-tech) staff resigned over the issue and it started being taken more seriously.
The way this system is designed (giving the front door to corporate IT) was done over my objections, and the objections of many others, but on paper it was good. Corporate IT provide shiny SLAs (which mean squat).
Last week, 12 months after the resignation and the beginning of taking it seriously, it had been escalated through 4 different layers of corporate IT, and eventually they came back and said "we've found an errant access list and removed it, and it's now fixed".
That's it. 3 years telling them what the problem was, 3 years of being ignored, and what happens? Certainly no blame for the idiots that made the decision to use this, no comeback on corporate IT provider, but if people find out about shadow IT they kick up a fuss (so the trick is to keep quiet and keep good personal relations with potential pain points).
Oh yes, we outsourced our corporate IT. Obviously there's no money coming back, I suspect we'll get a bill.
I appreciate you're giving an example, but this is pretty much solved at most big companies currently with OneDrive (or any other flavor of cloud storage). For external sharing, you just select if the link can be seen external to the org.
SharePoint is fairly good at sharing files: it allows upload by drag and drop, it supports versioning and approval of documents with a decent user interface, it offers a reasonable organization of users and permissions. Running SharePoint is anything but trivial, but with some discipline it can actually solve problems for end users.
For external people, likely not; usually SharePoint would be behind some VPN and firewall. Some cloud service like Dropbox might work if allowed (which it often isn't). And then there will often be the resulting sprawl of various services: this for secret internal stuff, that for external people, yet another thing for marketing, etc.
This just tells us that we shouldn't discuss breaches so much? My one experience with a breach was that the whole process around it was a colossal waste of everyone's time.
Depends on the impact of a single erroneous action. I’ve seen cases where folks would push passwords to GitHub (passwords that shouldn't be kept in git to start with).
To mitigate this issue, on paper, a (usually HR) policy says "do not do non-work-related stuff with your work computer". And since many people ignore this rule, because "why have a second laptop?", the next best thing is to route all traffic from your laptop through the company and weed out the GitHubs and GitLabs of this world (in the same manner that they block all the sex-drugs-rocknroll stuff).
You can't blame a company for wanting to protect itself against a disgruntled employee who wants to push the (example - not applicable to your company) ebanking software code out into the open.
That said.. why would a person use the corporate computer for their (allow me the use of the word) 'hobby'? Unless it's for charity work, most companies don't appreciate spending resources (time, money, equipment) on other things.
Re: Chesterton's Fence - didn't know it had a name, thank you for this!
> You can't blame a company for wanting to protect itself against a disgruntled employee who wants to push the (example - not applicable to your company) ebanking software code out into the open.
The thing is, they can't, not if my PC is still usable for day-to-day work. For a legitimate user, the ways to exfiltrate data are endless (e.g. tunnel out via DNS, embed it into video streams (for customer training or something), hell, even simple stuff like embedding data into the monstrous modern MS Office files should fly under the radar). The real value comes from making certain that a) only those who need access get access (e.g. why do I have read/write access to the sales file folder as a developer?) and b) minimizing the impact of a breach ahead of time, which often coincides with doing good work in general. For instance, don't lie to your customers on a regular basis, treat your employees respectfully, and – regarding your example – write closed-source software as if it were open source the whole time. It's not as if attackers need the source code to fuzz your software for vulnerabilities.
On a principled basis, I really dislike the world view where the employees have to be constantly prevented from getting the better of the company. If you don't trust me enough to surf on news sites during my time off at work, you should not trust me with software development, where laziness has often far worse consequences than not doing any work at all.
These protections never really work against people who really want to get data out. Worst case you could just take pictures of your screen and read text back via OCR.
But most employees would never know how to do this and even if, the threshold is high to go to such lengths. Most companies primarily want to prevent users from sending out data by mistake or via malware, since these are probably >99% of the reasons for data loss.
I also dislike companies restricting employees. But I also know people from our IT department and the incidents they have to fight on a daily basis. If you don't restrict your network and company computers, you'll very quickly end up with malware, ransomware, leaked data etc.
The original scenario HenryBemis painted involved source code being leaked, so I think it's fair to either assume the employee is technically competent, or should not have access to it in the first place. Also, their scenario involved disgruntled employees, so on the other end of the spectrum, if you have, say, sales representatives which want to take out their customer database, then it's well in their motivation spectrum to snap a few hundred smartphone pictures of Excel or Outlook with a pdf "scanning" app to get a nicely printable address book. Sure, it's not perfect, but it can still be damaging as hell. Basically: Don't rely on data exfiltration to fail.
But the reason I've bothered to write the first comment, is that it's such a huge productivity drain to develop software on a locked down machine. I'll think twice or thrice before taking on a position where I don't have root access to my computer.
I concur most non-technical employees don't need (or should have) more than the equivalent of a Chromebook.
The problem is that not all warning signs rise to a fireable offense, and in today's sue-happy climate even at-will employment has restrictions that protect workers' rights.
Assuming both are laptops and you don't have some insane personal computer, you should seriously examine why your workplace is skimping on a few hundred dollars extra on a machine that might save you, an employee with a salary in the tens of thousands (or higher), a few days a year of waiting for things to load, render or compile.
There certainly is a logic around not wasting money where it'd do no good - but companies that are tight with employee capital expenditures make me really nervous. It's a thing I've avoided consciously in employers since my thirties - if you don't value me enough for a decent keyboard, a chair, and a sufficiently performing computer - then you don't have anyone who is sane at evaluating RoI at your upper echelons.
My work computer and personal computer may be roughly on par as far as the spec sheet goes, but my personal computer does not have McAfee, so it's at least 100 times faster.
No, both are desktops; and I work at the computer engineering department of a (not so research-focussed) state university as an academic, with my specialisation being low-level software engineering; not at a software company. Even the lab computers are not much better than my personal computer, except in the supercomputing lab.
Even if the work machine has comparable specs to personal computer, it's much slower. Every machine is fast, but put antivirus software on it, backup solution, 10 "corporate security" products which monitor and log every move and even the strongest machine slows to crawl.
My work computer would be substantially faster than my personal computer if it wasn’t bogged down by so much IT crap. In practice my personal computer substantially faster to get anything done because I have to fight it so much less.
I really wish I could use my personal laptop at work, but now it is hard, because GDPR laws mandate that I would need to take additional security measures and could be held responsible for any data leak. Using company issued laptop avoids the problem - all responsibility rests on my employer's shoulders.
And having two computers is just annoying. I dislike when companies prevent me from using my computer for personal stuff (esp web surfing). But if I was in charge, I also think I wouldn't take that risk and prohibit use outside work. The risk just doesn't justify the benefit for the employee.
Then buy a faster personal computer? The resources I have freely available at work are infinitely more than I have at home, but that doesn’t mean I can set up my own Netflix competitor on it.
I love these requirements at work! Yes, I need to buy my own computer, but now I can leave my work laptop in the office.
Which helps you avoid working late, as your partner expects you to be on time for dinner, and you don't need the work laptop at home since it isn't doubling as a personal laptop :)
I swear that's the main reason employers allow you to use your work laptop for personal use too: you'll have your work laptop with you at home, so they can more easily get you to work weekends etc.
>You can't blame a company for wanting to protect itself against a disgruntled employee who wants to push the (example - not applicable to your company) ebanking software code out into the open.
It's pretty dumb to block GitLab when your employee can just... copy the codebase to an external drive?
Side issue, but I have cause to mention Chesterton's Fence so often that it's become a real frustration that there is no Wikipedia article about it — only this essay about why Wikipedia contributors should honour it. I wish there was an actual article I could point to when I want people to know about it.
(Why don't I just make one? I have basically given up on making substantial contributions to Wikipedia since the Deletion Police took over the asylum. There's a 90% probability that if I did spend a couple of hours writing a good article, someone would come along within a week and spend a couple of seconds deleting it. It's out of control.)
I really like Chesterton's Fence as a counter-story to it, not to stop meaningful change from actually happening but to neutralize decision-by-best-folksy-story from over-ruling other options unchallenged, and allowing case-by-case thinking to prevail.
> For instance, there's a restriction at my workplace (not a software company, a regular old industry fortune 500) which prevents git installs from pushing to any non corporate GitHub repo from our work machines.
Someone has to say it.
I’ll admit to accidentally pushing some of my employer’s code to GitHub.
The code was a fork of one of my GitHub repositories and I pushed to the wrong remote. I deleted the resulting commits afterwards and no PII was leaked but it’s a valid concern for employers.
> For instance, there's a restriction at my workplace (not a software company, a regular old industry fortune 500) which prevents git installs from pushing to any non corporate GitHub repo from our work machines.
Oh my! As I have struggled with security incidents caused by people either accidentally publishing GitHub repos that are "public" rather than "private", or pushing corporate code to their own repos (that are... you guessed it, "public"), I have long looked for another IT department that has grappled with this problem. GitHub Support hasn't helped, appealing to public forums hasn't helped. But here, a sign that my problem isn't unique!
I can report in on the container version of this. Docker would not accept patches from Red Hat about Docker Hub being the default push location. They wouldn't even agree to make it configurable. Our enterprise customers didn't even want it trying, regardless of whether it was blocked at the network level.
Quay.io makes all pushes private by default, to protect from this type of behavior. If you accidentally typo, you are protected.
> A more productive use of time would be to understand the reason for the policy, document out why it doesn't apply to your case and then attempt to get approval.
My experience with big organizations is that the above is a complete, absolute waste of time, unless you are in some kind of high-level position already.
True, there is probably a reason for some stupid requirement, but that reason might still be stupid.
Remember that upper management is people, and as people they have their own biases, fallacies, ignorance, etc. They are just as prone to make wrong decisions as you are. In fact in an area of your expertise, they are indeed more prone to make a wrong decision out of simple ignorance or bias. If you think a requirement from your area of expertise is stupid, it probably is a stupid requirement. If you can’t see a reason for that requirement, there probably isn’t a good one.
The fact that the decision to make this stupid requirement comes from people in power gives you even more reason to question its validity. People who hold high positions and a great amount of power often face less scrutiny, and questionable decisions made by them are far less criticized than similar decisions made by people with less power.
> In BigCorps, if there's a stupid requirement, there's usually a reason for the stupid requirement to be there in the first place
I disagree. I've worked in a BigCorp and many requirements were in place not due to reason but due to feelings and inertia. The "butts in seats" requirement was one of the main ones. Even after showing my boss that I could work at least as effectively from home, he still felt it wasn't OK for me to work from home more than once a week. And never underestimate the power of inertia: things are this way because they've always been this way.
I just left a company that was trying to rush us back into an open office post Covid. As if it weren’t bad and noisy enough pre-Covid.
It kind of threw that whole “we are in this together. Accept your 10% pay cut.” bullshit out of the window. Especially when all of the management had separate offices.
> In BigCorps, if there's a stupid requirement, there's usually a reason for the stupid requirement to be there in the first place
Sometimes, this is to be able to hire exactly the one person they want to hire, when HR policies or some law requires an "open bid" system. So they tailor the requirements such that only that one person can qualify, and they can justify the hiring within the letter of the law.
I wish I could find the article, but I think Joel on Software (?) once had something about bigger companies adding on things, like how some people stand by the door turning around and checking their wallet and keys carefully before going out. Or some similar analogy.
This shouldn’t need to be enforced by software. You really don’t have any business accessing a personal git repo from a corporate machine; you’re spending time twice (stealing) and it’s a good way to lose ownership of IP.
Furthermore this probably makes officially allowed open source contributions much more difficult.
"I want to do X" / "I am too lazy to do Y instead" becomes "We can't do Y, it's a mandatory requirement to do X"
Very high on your response agenda should always be to ask for details. Who mandated this and how? If there's a regulation, cite the exact text of the regulation. There's a good chance the person telling you it's mandatory either knows it isn't or has never actually wondered why, and you can divert them from insisting on doing X / not doing Y to an exciting journey through their bureaucracy to find out that indeed there is no such mandate and never has been.
Back when TLS 1.3 was being finalised EDCO and other groups trying to preserve RSA key exchange tried really hard to pretend that what they were doing was mandatory (it isn't and wasn't) and that their use cases were legitimate (most of them don't even appear to have a sound security rationale, let alone a legitimate purpose).
When we get to September and companies that were asleep at the wheel suddenly notice the 398 day rule (Apple unilaterally changed the maximum lifetime of new certificates in the Web PKI to 398 days starting in September) you can guarantee that some of them will insist that having longer-lived certificates is somehow mandatory.
[ Edited: It's the Enterprise Data Center Operators thus EDCO not ECDO ]
Or just in IT in general. E.g.: "We have to have a D:\ drive for applications!"
That's a rule that last made sense some time in the 90s, when it was feasible to format the system drive, reinstall windows, and then applications on a secondary drive letter would work as-is without any further steps.
In the era of 10 GB application installs that deploy several hundred system-shared components, this is a hilariously out-dated mandatory process.
Yet.
Everywhere I go. There it is: The D:\ drive with "Apps" as the volume name.
In 1605 there was an attempt to blow up the British Parliament during the state opening by placing explosives in the cellars. 415 years later they still search the cellars for barrels of explosives, using oil lanterns and armed with swords. I feel that so many organisations are doing the same thing, maintaining an old solution for a problem that no longer exists.
So I think it's just as, or more, important to apply this thinking to the business requirements which drive development. Half of solution architecture is asking why something is the way it is, and asking again and again until you're sure there's a real reason. A lot of the time the answer is because that's how it's always been and traces back to a technical constraint from long ago. I see this a lot with companies moving from decades old legacy systems to modern applications.
Same with processes, authorisations, security rules, etc... They tend to start when somebody made a mistake, and they stick around regardless of whether they're still relevant, effective, or blocking progress.
To be clear, they don't continue to search the cellars as an actual security measure. It's purely ceremonial and done as part of the traditions around opening Parliament for a new session. Hence the oil lanterns and swords.
They also (ceremonially) hold a member of the Commons "hostage" while the Queen is in Parliament to ensure her safe return.
I think this is more typical in public administration. Employees just follow the existing rules/laws because it's not their job to challenge them, and decision makers are mostly concerned with shifting blame away from themselves. Nobody wants to be the guy who ordered the bomb searches stopped on the day a new bomb is placed. The only reason to stop doing it is to gain efficiency (= be more productive by doing fewer useless things), but the efficiency metric is non-existent in public administrations.
I don't see what's different between public administrations and large corporations here. You say that the efficiency metric is non-existent in public administration, but it's mostly the same for many organizations within large corporations as well. Efficiency metrics may not be entirely non-existent, but they're incredibly opaque or difficult to measure. (For instance, R&D efficiency is usually difficult to measure because of the inevitable time delay between successful R&D efforts and market success.)
> Employees just follow the existing rules/laws because it’s not their job challenging them, and decision makers are mostly concerned on shifting away blame from them
>but the efficiency metric is non existent in public administrations.
And both of these have very insidious downstream consequences.
The nature of government employment selects over time for people who are dedicated to process for process's sake, so of course there's nobody around who asks "why?" and nobody around to say "the tradeoff is worth it, let's do X instead".
Politicians and high-level bureaucrats control the resource streams. The less efficient things are, the larger the resource streams and the more power they have. Not only is there no incentive for efficiency at a high level because "screw it, it's other people's money", but there is an incentive for inefficiency, because it lets the people calling the shots preserve/grow their power.
The only time you get a push for efficiency is deep in the bowels of a silo/fiefdom where the teams are small and the organizational unit's resource inputs are static but their performance metrics are pegged to outputs. Of course this sentiment dies on the way up the org chart because resources are mostly a "use it or lose it" type thing.
One effect is that this drives away exactly the kind of results-driven people who should be working in public service.
Another effect is that it cements the status quo such that it is basically impossible for a government or government unit to pivot from "being inefficient and generally sucking" to "decent use of taxpayer's money" without an existential crisis for the organization. Basically nobody says "hey, WTF are we doing" while the organization is becoming an ineffective graft ridden mess. Everyone just keeps their head down until a shock forces them to actually solve the problems.
This is why you never see government organizations in rich places like NYC, Boston or DC clean themselves up, but you occasionally see government organizations in poorer places like Buffalo or Newark clean house: as painful as making do with less and cleaning your own house is to those kinds of organizations, the alternative of having politicians do it for you to win brownie points is worse. The rich places can just paper over their organizational problems with money. The poorer places have to stay on task, or clean house much more frequently if they don't, because there's less money to float along on.
BigCos have some similarly bad feedback loops, but it's much more common to see them lop off an org that's dead weight, because when the metric the people at the top use is profit, you get much more incentive for value-driven behavior.
I think searching government property for explosives and terroristic threats is a reasonable precaution to take. The fact that they were able to discover the explosives in the first place back then, without Parliament exploding, shows the value of preventative measures.
> maintaining an old solution for a problem that no longer exists.
- new people
- systemic change
all this means to me is that experience is not passed along, and that the new team will regularly face a somehow-new problem armed with whatever reflex solution they can come up with, together with whoever happens to be in the lobby at the time. It has to fit somewhere between five minutes of small talk and one meeting at the end of the day.
This is why managers need a high level of emotional intelligence, the ability to empathize and quickly place themselves in the position of others. Without those three qualities management is ill equipped to handle these kinds of things sensibly.
Sadly, the above traits are deemed leadership qualities rather than management qualities, and so are often missing in management.
Leadership skills are far harder to teach than management skills, and are also rarer in the population than the number of management roles there are to fill.
To add to that, if you have a senior IC with leadership skills and a manager without, the senior IC won't last long, either through rage quitting or slow-walking out the door by the manager who's threatened by the senior IC.
Not really; during my military training there was a focus on management vs leadership and how the two intertwine.
Great leaders can be terrible managers if they don’t want to do the associated admin, fortunately management skills are much easier to teach than leadership skills. Generally the latter (such as officer training) is cultivated in people that show requisite qualities (hence the UK cultivating potential military officers from 16 like the program I was in).
Neither is a given for success, but the most successful people in a management structure have a good mix of both to navigate with. Also, I think being empathetic doesn't always mean being nice; it just means seeing other viewpoints and making decisions with that extra insight.
yes, the leader/manager dichotomy is covered in mba school too, but the core curriculum usually doesn't cover it at a level of detail to be truly actionable. you have to take elective courses for that, and most students don't. the knowledge is offered, but not emphasized, unfortunately.
Even the best manager is faced with a bandwidth problem. It's impossible to accurately collect everything that the engineers are doing. And each layer of management applies its own compression to the information it gets from the levels below.
What is frustrating, when you have your feet on the ground (or in the code), is that we clearly see the issues at our level. But conveying that in an understandable and nuanced manner would require too much context.
IMO a good solution is to leave breathing room in the agenda and trust that the engineers will work on fixing those issues. By making them somewhat autonomous, they can skip the politics and get right down to making their lives and the product a better place.
A complementary phrasing is to apply the subsidiarity principle ("the principle that a central authority should have a subsidiary function, performing only those tasks which cannot be performed at a more local level" – OED) to companies, which would be the antithesis of micro-management. When your boss is an enabler first and a decision maker second, they are already following a variation of that principle.
I wish there was a standard way to work against the institutionalized trust issues that creep up in larger companies. E.g. when managers can only take actions if they can show an issue exists using their (arbitrary) metrics, and not when their reports directly tell them about it.
I have also personally witnessed a corporation insisting that I learn exclusively management practices in a multi-session Leadership Education and 'selling' it to me as "leadership".
Ideally yes but you don’t need to be a good leader to fulfil the roles of management. Good leaders are in the main better managers but the percentage of the population with that leadership instinct is rare enough that it can’t really be a prerequisite for management roles.
There is an opinion that the more a manager resembles a high-functioning sociopath, the more efficient he is. He may even look friendly and provide benefits for the employees, but that would be not because of empathy but because of a calculated decision. Personally I think there is some truth to this opinion, and I don't believe in "benevolent" corporations, especially big ones.
I don't believe in "benevolent" corporations either, but that doesn't excuse managers from being cold-fish personalities that border on sociopathic or machiavellian.
In fact, even the meanest largest corporations have excellent managers peppered throughout the organization that watch out for their people and genuinely care for them even though the corp as a whole might be a greedy SOB-- I'm thinking places like Comcast and Oracle here. ymmv.
> Every time someone in the management chain has axed the proposal with some variation of "this policy can not be changed because this is our policy and thus can not be changed", possibly with a "due to security reasons" thrown in there somewhere.
That's circular, no doubt there. We've decided this rule must remain in place, because it's a rule. This decision-making process doesn't make any sense, but I still don't see the sense in calling the rule 'imaginary'.
Is the real point here that sometimes bad rules remain in place until some external event forces a change? Doesn't sound so profound.
No. In short, the real point seems to be that while the actual workers (non-management) would benefit the most from the rule change, managers (and those further up the chain) mostly see the risks and not the benefits, so they are the most likely to avoid changing the rules. Since the chance of getting blamed for changing a rule and impacting something negatively is bigger than the chance of getting blamed for _not_ changing it, and they are the ones actually allowed to make changes, you don't see a lot of rules changing.
That is usually true, hidden agendas play a big role here. In Germany similar rules about contractors are often in place to prevent outsourcing. Contractors shouldn't be too comfortable and cheap, so the workers council and union enforce such rules so their established clientele still has some advantages over the external people.
Rules are agreements between people. They're binding in the sense that we value our reputations, and we value not having to pay the consequences - often formally recorded - of not abiding by them.
Did the people making the agreements imagine doing so, or did it really happen?
Are the consequences real, and enforced, or are they just words?
If the rule is enforced, it is not imaginary at all.
> We've decided this rule must remain in place, because it's a rule.
This is the sane default position. If you don't understand a rule enough to be convinced it is problematic, it's safer to leave it in place than allow someone to erode it. Certainly the people in charge of rules should be able to understand them, but I'll take people who don't understand the rules and yet faithfully follow them over people who don't understand them and throw them away.
No. The Laws of Football and the Laws of Australia are imaginary in the sense you mean, but the Law of Gravity isn't.
Mother Nature's rules are different than ours, stubborn far beyond all reason and not recorded anywhere for our inspection.
Video games are interesting because we make them, yet for the most part some of their rules are like Ma's rules not ours. Watching Mario Maker 2 troll levels, the fact is that even if the player, and the viewer, and even the people at Nintendo who made the game think that Skipsqueak can't jump through that platform, if the game was coded such that it does, it will anyway.
Anyway, I think part of what you're missing is that the people in this story often don't know the rule is imaginary. Sometimes it's an excuse, but sometimes they honestly just assumed there was a reason for things being as they are and never stopped to go find out why. Not everybody has the sort of inquisitive nature that would afford that, some lines of business crush that right out of you.
> No. The Laws of Football and the Laws of Australia are imaginary in the sense you mean, but the Law of Gravity isn't.
I'm not a scientist, but shouldn't the law of gravity be seen as a human understanding of what gravity is, rather than the handed-down-from-God definition of gravity?
The laws of physics are the human interpretation of God's definition of the world. And the corporate policies are the human interpretation of the CEO's definition of the world. It all makes sense! \o/
>Is the real point here that sometimes bad rules remain in place until some external event forces a change? Doesn't sound so profound.
The author's example shows the very glaring case the article focuses on: a rule totally disconnected from security. He was told "no remote work for contractors." One would assume any reasons for this rule (valid or not) would also be reflected in reality. In this case, maybe only regular employees had filled out the legal paperwork and passed whatever security checks.
But the rule disappeared in less than a day. So it only existed on paper and in punishment. I doubt the security concerns disappeared or were fixed in less than a day.
These kind of rules, when exposed, can have a big impact on how well people follow other rules. I should know: I started ignoring a bunch (and sometimes later learned the risks I pooh-poohed).
I've found this type of circular reasoning is also somewhat common with regard to laws. Doing X is wrong, because it is illegal, because it is wrong (and other variants).
Murdering people is wrong (at a surface level that requires no moral understanding), because it is illegal (and therefore you'll face consequences from other people), because it is wrong (because it is immoral).
There's no circular logic there at all. The second "because it is wrong" is shorthand for more complex moral reasoning that most people don't need to know - who cares if they know it, when the goal is simply to cut down on the murdering?
Exact same story happened to me. I am an Indian contractor working for a US client in the US. I asked to work from home in India. The customer manager said the security council didn't approve it. He said something like their policy is 'don't take anything to China that you are not ok with losing, India is one level below it, safeguard everything'. Now Corona happened, and the entire team in India has been working from home for the last 5 months.
Another one is AWS Workspaces. I think it's the same reasoning - that it is more secure and avoids having to deal with hardware. But that thing is horrible to use. Constantly getting "Windows is low on memory" warnings, and disconnections which can only be recovered from by a reboot that takes 40 minutes. Even on a good day, everything is slow: you can see windows maximize/minimize in slow motion, and running an IDE and trying to debug pegs the CPU at 100%. Join a conference call, and you can't even open any other window without frustration. Once I am no longer working for this client, I want to give a piece of my mind to whoever is in charge of this.
> He said something like their policy is 'don't take anything to China that you are not ok with losing, India is one level below it, safeguard everything'.
I can't imagine someone hiring a person they have such disdain for. That's a pretty sad attitude on the manager's part.
I believe he doesn't have any disdain for me (except maybe for my skills). He had a meeting with the company's security council and told me the decision afterwards. I think he just heard this line regarding trust from the council meeting. Not sure why he chose to tell me that. I would like to think he was just trying to convey the mindset of the council people.
Ah that makes sense. I didn't necessarily think he held personal disdain, just I've noticed beyond this situation the way some people seem to treat outsourced workers worse or in a disrespectful way.
> ..you can see the window maximize/minimize in slow motion
Oof, that's painful to imagine the daily frustration. There must be a better technical solution - but, as long as the current setup is working (and generating value), probably no one with decision-making power has an incentive to improve anything. They can ignore the cost, the wasted time, effort, stress and loss of morale.
It's incredibly stressful on some days, especially when I am dealing with a production issue or a deadline. This is one of the main reasons I will soon be looking for a new job. One of the questions I would ask the interviewer would be to find out whether I will be working on an AWS Workspace/Citrix VDI/something similar, or whether I will get dedicated hardware.
I tried to bring this issue up a couple of times, but got turned down: no budget for laptops (though they must have paid AWS more within a year or two; we have all been on Citrix VDI or AWS for 6 years), or it's easier for IT this way. Recently my manager just told me to reboot every day when I leave (which seems to have solved the low-memory issues, but the others still occur).
Two weeks ago, the workspace wasn't registering key presses (it does this sometimes and I have to reconnect). I got so angry that the next thing I knew, I had punched the laptop keyboard. The screen froze with a lot of lines on it, and there was a loud sound (probably from something getting into the fan). Fortunately, everything was fine after a reboot. I never knew I would respond like this. I probably need to see a psychologist soon before this becomes a major issue. My dad also had such angry outbursts, and I used to hate him for that.
Yes, I picture it like having to wear shoes with little stones in them. Wishing you good luck in finding a better situation!
A way to stay optimistic may be, to see it like Kung Fu training, where they wear heavy weights around the feet for building strength and endurance. When the weights are taken off, they can jump high and run fast - as I imagine you would be when you finally transition to a workplace that provides an actual work machine.
I would try keeping a log of when it happens and how long it takes. Presenting a logbook of wasted time makes it their responsibility, not yours.
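To make the logbook idea concrete, here is a minimal sketch of a downtime logger (the file names, CSV layout, and start/stop command-line interface are just placeholders I made up, not anything the workspace provides):

```python
# Minimal downtime logger: run "python downtime.py start" when the
# workspace hangs and "python downtime.py stop" when it recovers.
# Appends one CSV row per incident: start time, end time, minutes lost.
import csv
import sys
import time
from pathlib import Path

LOG = Path("downtime_log.csv")          # the accumulated logbook
PENDING = Path(".downtime_start")       # marker for an incident in progress

def start():
    # Record when the hang began.
    PENDING.write_text(str(time.time()))

def stop():
    # Close out the incident and append it to the CSV logbook.
    t0 = float(PENDING.read_text())
    t1 = time.time()
    PENDING.unlink()
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["start", "end", "minutes_lost"])
        writer.writerow([time.ctime(t0), time.ctime(t1),
                         round((t1 - t0) / 60, 1)])

if __name__ == "__main__" and len(sys.argv) > 1:
    start() if sys.argv[1] == "start" else stop()
```

A month of rows like these, summed into hours lost, is the kind of evidence that turns "the VDI is slow" into a budget conversation.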
And try not to worry about it. One trick that I use is trying to think if such frustrations will be important 2 hours, 2 days, 2 weeks or 2 months from now. Usually they won't.
It's interesting that in tech there is this expectation that every available thing should be done to make the employee as happy and relaxed as possible. If an employer refuses to do these things it's met with "does not compute", as in this example.
Granted, it does seem to make sense that your employees are as happy and relaxed as possible, but this attitude doesn't seem to exist in other industries to the same degree.
For example, I have a friend who works in finance. He was allowed to work from home for the bare minimum amount of time during the pandemic, his company requested him to return to the office almost as soon as the government permitted it (despite his job being possible to do 100% remotely). He also wears formal clothes every day (another thing that is frequently viewed as ridiculous in tech). But, he makes probably 3-4 times what I do per year!
It seems that in tech people expect to be "looked after" by their employer more than in other industries, yet simultaneously undervalue themselves.
Caveat: this is a perspective from the UK where finance salaries are high, and tech salaries are (as far as I can tell) much lower than the USA.
I think it's the hacker mindset of not being afraid to change things. Every job I ever had I always questioned the rules and pushed for change to make things less redundant and more efficient. This was even in retail as a teenager, and being a waiter for church bingo when I was 10.
Some of us just have that inherent urge to say "wait, wouldn't it be better if we did it this way" instead of quietly following the rules. My guess is that people with this mentality are more likely to want to be a programmer in the first place.
I am 100% sure that if we all decided formal clothes every day were the way to go, our salaries would NOT go up 3-4 times. Those two things are not directly related.
The salary is a market thing, to a large extent.
The clothes and home office have more to do with the cultural background of the people who run and join both kinds of institutions. It also has to do with value systems - a higher preference for hierarchy, a higher expectation of conformity - because people judge each other by different signals in financial institutions. Financial institutions are among the most conservative institutions there are. The need for a tie is all about that.
I'm pretty sure that if you, a software engineer, were to pick up and conform to executive social and cultural norms, and did the work required to be able to manage people, raise money, and so on, your salary COULD go up 3-4x. Clothing is but one tiny piece of that. You'd also need to learn a new communication style, learn to code-switch ethics/values, and so on.
People who can bridge technical, business, marketing, and sales are extremely highly valued.
I've learned to adapt, and I have a choice of jobs spanning about a 10x salary range as a result.
The choice I made: I chose the job which was the most fun and fulfilling, while still paying enough to where I don't need to stress about money (so long as I live a modest life). At the time, I had an offer at 2x that salary, and was being recruited into jobs at 2x that in turn. I also had more academic job offers where I could financially scrape by, but /with/ financial stress, which didn't seem worthwhile.
Also: If financial institutions were so conservative, we wouldn't have regular meltdowns like the subprime mortgage crisis.
I think that comment was not about people who manage people or raise money. It was not about those who can bridge technical, business, marketing either.
It was about finance people who could easily do their jobs from home.
Yes there's a current trend of thinking that work should be as near play as possible.
There are entire classes of work, like digging ditches and driving equipment, that can be sweaty and rote. Not designed to be fun, but to get something done. For pay. This is becoming regarded as mentally or physically abusive.
It helps to consider the pay as recompense for whatever hardship you endure to deliver value to the employer. Which seems obvious but apparently is not.
> This is becoming regarded as mentally or physically abusive.
An acquaintance of mine working in academia told me that her boss was bullying and abusive. I was really concerned and asked what was happening. She told me that her boss and department were making her come into the office (pre-COVID) at 9am, despite her assurances that she could do all her research remotely. When she would not come in, or came in late, her boss would verbally reprimand her.
This scenario is certainly bureaucratic and not the funnest, but it’s funny to me that bullying is considered making people come into work at an established time.
Requiring strict adherence to arbitrary bureaucratic norms, and particularly berating someone for daring to question them, is absolutely abusive.
An awful lot of corporate culture is abusive, and designed primarily to give managers a warm fuzzy feeling of absolute control over their subordinates. Much of it derives directly from the assembly-line era, and much of the overall philosophy behind it is just thinly-veiled feudalism.
"But everyone does it this way" is not a valid defense against "this behaviour is abusive and designed to squeeze the agency and will to be more than a drone out of me."
And there we differ, I guess. The job is the job. Wear this; show up then; answer the phone with that phrase - it's the job. Do it, and get paid. Not abuse.
Because the boss makes the rules and you should blindly follow them. The other team's boss is relaxed with these rules and the team is productive and happy but, you should still not question your boss, always put the head down and say I am sorry, always stay in late without overtime because the boss hasn't left the office yet and so on. And it doesn't matter that the boss doesn't produce anything and that you drive all the work, respect the boss because he might feel offended otherwise.
I don’t think bosses should be followed blindly. But showing up and doing the job as required isn’t blindly following, it’s obeying.
I think the difference is between what is reasonable and what is not. If I question a reasonable start time like "9am", that makes me someone few want to work with. Whereas questioning forced, unpaid overtime is an entirely different, and I think reasonable, question.
Reasonable rules like “be here by 9am” are not emotionally abusive to enforce. Perhaps you feel that way, but I think that’s unreasonable and will result in tons of emotional abuse as you go through the world.
Is a stop light emotionally abusive? Is not paying me $100k emotionally abusive?
I’m glad that we’re not actually under feudalism, but I think employment is not feudalism.
Stop lights (which maintain a safe flow of traffic) and not paying arbitrary people arbitrarily high sums of money, are not meaningfully comparable to enforcing a single, inflexible schedule on all employees regardless of their work or personal circumstances.
If there is no actual business purpose for it—beyond "I'm the boss and I want it that way"—then why shouldn't the boss be compassionate, and allow flexibility? Meetings with coworkers, as we've seen in the past few months, can work just fine being scheduled at times that actually work for everyone, rather than expecting everyone to be constantly available (and thus never able to truly focus on their actual work).
Comparing manual labor, which actually accomplishes things, to wearing formal clothing, which benefits no one but your boss and accomplishes nothing of value, is not really fair. The latter is usually not abuse, but it is usually stupid. Pay as recompense for suffering is less compelling when you're supposed to be paid for your output and there's no reason for the suffering at all.
Unions are a horrible solution and should be avoided at all costs because they're a competitive disadvantage and will sink the company long term. The better solution is to pour money into creating more jobs and uplifting people out of a shitty job into one where there is more leverage.
I guess it's a seller's vs a buyer's market, though maybe it's also cultural to some degree. It's generally hard to know which of the two came first, though in this instance I think it's clearer that the market influences the culture.
Tech workers live and breathe the space between mechanism and intent. Policies that are inconsistent with their stated goals offend exactly the same mental hardware that finds bugs.
There's nothing inherently wrong with heavyweight bureaucracy. Roles, ceremonies, witnesses, signatures, tamper-evident seals, 2-man rules, split-knowledge safe combinations... the community loves this stuff. But we don't respect it because it's policy. We respect it because we were following along, trying to break it, trying to do better, and we couldn't.
Yeah, finance workers are rarely ever hired right after college. They are also way more oppressed demographics despite having significantly larger salaries.
Our time on this Earth is limited and we can't do everything. We have to make decisions.
"X is required" is shorthand for "we do not want to deal with the consequences of the lack of X". It's a decision. Sure, they could, but they wouldn't have resources to deal with other things.
That's basically all there is to understand here.
All the rest are corollaries:
- Sometimes changed circumstances require revisiting some decisions. Like COVID-19.
- You can try to convince people to change their decisions. Your time to do that is limited too. Choose your battles.
- Decision fatigue exists. People may not want to make new decisions and will prefer sticking to old ones or push the burden onto other people.
Saying it's "imaginary" is childish. Sure, policies may be arbitrary, but the constraint of only having 24 hours in a day is not.
Related: after telling us for years how dangerous it would be if people flew with more than 100 ml of liquid, it's now allowed (i.e. no longer dangerous?) to carry a larger quantity of hand sanitizer because of coronavirus.
That is a topic in itself - everything the TSA does is pure security theater. There was never any danger from liquids, nor any effort to prevent groups of people from coordinating to bring in a larger quantity. There is no explanation why 8oz of milk is dangerous and 8oz of cheese is not, nor why ice remains dangerous even though it's not a liquid, nor why the bbq sauce on the pulled pork sandwich I'm bringing on board is magically not dangerous even though there is more than 100ml of it. I'm convinced it's more or less a plot to sell $7 sodas in the terminals.
Batteries were dangerous and had to be removed - until Apple started making devices without removable batteries, and then magically they were not dangerous anymore. Next you had to start turning your devices on, as if it were somehow impossible to make something dangerous that didn't turn on. Security theater.
I used to have to take my belt and shoes off every time I passed through security, but because I paid a $120 fee I'm now not dangerous anymore and can walk through. I get to skip to the head of the line at the security checkpoints, and coming back from international flights I get to breeze through the diplomatic lane at JFK (you can too! https://www.cbp.gov/travel/trusted-traveler-programs/global-...). Security theater.
TTP is capitalism at its finest. Let's make the normal process so terrible so that we can charge extra for people to be treated normally.
And then you have services like Clear which is basically legalizing corruption. Instead of slipping the agent $20 to go to the front of line, it's legitimized so that you can do it without any stigma.
When circumstances change, risk profiles and trade-offs change. News at 11. I mean, you can rage about the unlikeliness of people concocting explosives from various liquids, but that is entirely beside the point here.
I, personally, would prefer to rage at the fact that the TSA has managed to export that liquid rule to everyone else in the world. I don't see why, when flying from Ontario to Quebec, I need to follow the TSA's security theatre.
True, but I think it isn't beside the point - the argument is that the (global) risk profile hasn't changed all that much. The risk profile to the rule maker specifically has changed, in favour of them not simply maintaining the established process.
Ideally, we wouldn't have to wait for a pandemic or other crisis to overcome this managerial hysteresis. Of course, if the rule is reinstated after a few months, or if a plane is brought down with 125 mL of liquid in a few years, then you're right - it's a rational rebalancing.
In Europe (did a lot of flying lately) the 100ml still stands for sanitizers. You can go over the 100ml only if you got a baby with you and they allow the water/milk for the baby.
Funny thing is: once you travel with a baby, basically anything goes. 1L water bottle? no questions asked. Even though the baby won't drink 1L of water on a 2-hours flight.
The whole restriction on liquids is fairly ridiculous, and even more of a security theater than most of the other checks. And I think the security guards know it, and use any excuse to look the other way.
To be fair, traveling with a baby makes you much less likely to attack a plane.
This exception makes the TSA's policy more pragmatic and reasonable.
The harm of denying a baby milk or formula is worse than the risk of terrorism in this case. Not to mention that it lowers the incidence of crying babies on flights.
The same logic could be applied to hand sanitizer restrictions in a global pandemic. Allowing people to carry a larger size could save more lives than the risk it poses. This is especially true when shortages can make it difficult to buy a travel size.
Statistically speaking, most people who travel with babies travel with their own babies, and strong parental instincts prevent most humans from putting their babies into mortal danger. So, traveling with a baby and attacking the plane you're on is very low probability.
Most attackers aren't that smart. Rules like this prevent many from trying an attack as it's much harder to perform an attack if you have to take your baby with you. No one ever said the 100ml rule was good, it was just the best alternative they could come up with.
As an aside, it sounds like they'll be doing away with the whole bin show. This is great if it stays, though:
>Tip 5: Place items from your pockets into your carry-on bag. Prior to going through the security checkpoint, take the items from your pockets and place them into your carry-on bag so that you don’t have to place them in a bin. Remove the keys, tissues, lip balm, loose change, breath mints, mobile phone and anything else from your pockets and place them right into your carry-on bag.
To state a corollary of this: most "mandatory experience" on tech job requirements are imaginary.
Therefore, if you're a job seeker, and especially if you're a woman or a minority, you shouldn't let not having any of these imaginary requirements stop you from applying for whatever job you want, especially if it's a junior role.
I got my first junior role by applying for a senior role. Granted, I was definitely lucky they had one, but I got my foot through the senior screening tests and then realised I was in over my head.
HR was kind enough to redirect me to a newly opened junior position in another part of the company. Been with that company (directly or indirectly) for a total of 5 years now.
I wonder how often that happens. I'm guessing it happens more often than HR thinks it does. I've heard quite a few stories like yours. Usually it's from somebody who can pass all the screens but doesn't have enough years of experience to make the hiring committee feel comfortable, so they get offered a junior role that technically wasn't open and is created specifically for that person to grow in. Anyway, it's a smart move for companies that can recognise talent.
This is one thing to try to find out when one applies for a job: what kind of stupid requirements are there, and is this company prone to senseless prescriptions? Even if one thinks that a particular senseless prescription is not very burdensome, it still indicates an inclination toward senseless prescriptions. One should avoid such places when looking for a job. And that is actually a risk that management is perhaps not very aware of: the people with the most choice of places to work can simply decide not to work for you if you like to push your employees around.
The dynamics feel a bit like good cop / bad cop on an organisational level.
"I'd personally love to let X occur .. but it's not allowed due to company policy. This is mandatory."
I have always believed everything is negotiable in business. As people, we're subject to the rule of law, and obviously our businesses need to operate accordingly.
However, the policies that a company uses are really just a snapshot of current thinking. Nothing more than that; and thinking obviously changes.
Sometimes it doesn't - sometimes people vehemently insist on application of policies even though what they produce is ridiculous.
At a former employer I was quoted £70K (yes seventy thousand) for internal hosting of a not particularly business critical single static HTML page that would be accessed by maybe 100 people. I actually got a breakdown and it all made sense if you applied the policies the infrastructure team had invented - it didn't matter that the result was nonsense.
> it all made sense if you applied the policies the infrastructure team had invented
Was it basically "we charge more for stuff we don't want to do"?
Makes sense, like:
> When a customer asked if he could have chips with his lunch, White hand-cut and personally cooked the chips, but charged the customer £25 for his time.
Well, apparently it counted as a separate application - so that required redundant (virtual) servers plus back-end databases. Add a DR environment and live replication of VMs and databases, database backup retention planning, off-site storage costs. Ultra-high-performance SAN storage for all of that. Add pre-prod and test environments as well and it soon adds up.... apparently.
Turns out the internal infrastructure recharging model didn't scale down and as it was a "process" it couldn't be changed.
Let's see: A risk and impact analysis, a requirement check, a capacity audit, a full blown security audit, a data retention policy, a storage size study, a ton of policies to fill in so you get a dns name, a financial audit, setup of a new bookkeeping post so all these activities have somewhere to book their time, a project for that post, a language and style audit, a translation study to decide nothing needs a translation, ... I easily see more than 100 people working on this.
I feel like I can deal with most things but “mandatory training” drives me nuts. At one company after another these are always over-produced things that impose way more distraction and time than they need to (all video/audio with little to no text transcript, stupid things like the inability to even click “next” until the unnecessary narrator finishes reading you some slide, etc.). And then they have serious technical shortcomings, like only working in certain browsers or (my favorite) having assessment tests so broken that you literally have to be careful how you type something to avoid correct answers being deemed “incorrect”. It’s always a waste of money and time; at each job I sent survey feedback telling them this, and no company has ever changed how they do it.
Having only ever worked at a small company, I've enjoyed the absence of fear of blame. When the hierarchy is so minimal, risk-vs-benefit seems easy to communicate and digest between both ends of the spectrum, and to accept with shared responsibility. I understand this doesn't automatically scale, but it feels unsatisfying to assume blame is an inevitable part of growth.
Can anyone think of a way to avoid it? Is it a necessary side effect of larger layered hierarchy or is it something else?
I once worked for a ~10 person consulting firm. We had a contract to do some IT work for a large firm's HR department. We'd joke that the HR department needed its own HR department, to work out the politics among them. And, we'd joke that we needed clients so we could borrow their office politics so we could have some.
My pop-science theory for this (I learned about Dunbar's number from Malcolm Gladwell's 'The Tipping Point') is that we're wired to think of some small set of people (~150) as 'our group'/'my people', and that within that group we're more likely to forgive, and outside that group, we're more likely to blame.
A story colleagues once told me: the requirement for a car's rear windshield was that it had to withstand the air pressure of 120km/h. But the car couldn't go that fast in reverse, so the requirement seemed to make no sense and was loosened.
On delivery of the produced cars, many rear windshields were broken. It turned out they had been transported from the factory on a train, facing backwards so more cars could fit on the train.
(no idea if this really happened, but it's a nice little story)
I feel like the point of the story is supposed to be "all requirements have a reason behind them", but to me it's more of a case of "even a broken clock is right twice a day"
The reason contractors and consultants have silly requirements like this placed on them is because they’re service providers, not employees. The managers are the customers in this arrangement, and customers often want silly things, like your physical presence in the office, because perhaps that just makes them feel better, and it’s fine if some silly thing makes your customer feel better.
The broader issue he’s talking about is simply the inefficiency of large organisations. There are many things that start to become much less efficient the more you scale up the size of an organisation, including risk management. This is because the feedback loops between decision and outcome start to get much longer, the causal relationship between decision and outcome starts to become much blurrier, the distance between decision maker and impacted user gets bigger, and management starts to be made up of fewer leaders and more bureaucrats.
Not only is this unavoidable, but it’s a good thing. Large businesses benefit from the economies of scale, but smaller organisations get to compete with them, because smaller organisations have the potential to be orders of magnitude more efficient than large ones. It also creates market opportunities for other B2B companies to come along and address some of their efficiency issues. I’ve done lots of consulting and contractor work, and one of the primary factors that drives demand for my services is enterprise inefficiency. Even if you put aside any consideration for how slow they are to adapt to new technology, a lot of demand for my contracts has been driven by strict corporate salary bands. The board sets the maximum rate that an engineer is allowed to be paid, and any team in the company that requires skills that the market has priced above that rate can only access them through external contractors/consultants. I’d bet the same is true for the OP, even if they don’t know it, so perhaps they shouldn’t be so quick to deride enterprise inefficiency.
In one of my previous jobs at a startup (~15 people), the reason for the color scheme we were forced to use on our ecommerce product wasn't the result of some marketing study or human interface recommendation, but rather the personal preference of another employee who had a "special" relationship with the company's president.
The best way to deal with stupid corporate policies is to ignore them. Working remote when you're not "allowed to" is a big one to ignore, and if you don't have the access credentials you need, it might not even be possible. There are other policies that are much easier to ignore, though.
Don't like the dress code? Dress however you want. Don't think the status meeting requires your attendance? Don't go. Mandatory office hours are 9-5 but you have better things to do with your morning? Show up at 11.
What are they gonna do, fire you? Not likely. It's hard to hire people, and it's risky to fire people who perform well but flout pointless rules.
Maybe some of your coworkers will resent you for thinking you're special and the rules don't apply to you, but don't let their jealousy stop you. If your company isn't going to treat you like an adult, you don't owe them anything.
Protip: don't follow this advice. Your coworkers will resent you and you'll have a bad time.
Corporate rules can be broken, but it should always be done in a savvy way instead of a brazen way. In most orgs, your performance is less important than people's perception and opinion of you. Find a way to show them the respect and deference they crave, even if you break their rules anyway.
Being annoying and arrogant is bad performance in itself: you are producing irritation and resentment (e.g. giving the impression that the rules apply to everyone but you).
I really like this approach (and I tend to do this myself), but lots of people get really upset.
PG wrote an essay recently in which he talks about “aggressive conformists” and “aggressive non-conformists”. You’re right in that being an aggressive non-conformist at a big company likely (though not definitely) won’t get you fired, but it certainly will rankle the aggressive conformists and not make you many friends from that group. And that may or may not be really bad for your career.
This particular requirement that contractors cannot work from home had a purpose; so that full time employees have at least one advantage over contractors so that employees don't start asking to become contractors. Of course contractors get paid more so it makes no sense to be an employee... That's why companies need these odd rules; to artificially sweeten the worse deal... Even though there is no productivity benefit at all. It's all about exploiting psychology to keep costs down.
It's all imaginary. It's just that people are in different stages of enlightenment:
1. These rules/laws are Real
2. Wait, these rules/laws are made up/flawed
3. The whole thing is made up
4. Nothing is real
5. Let's make up our own rules/laws
6. These rules/laws have a functional place
7. We should build process around these rules
Every generation rinses and repeats, forever. It doesn't matter if it's a corporate "requirement," government involvement or a scientific law; it's the same pattern.
In some countries (I am from Brazil, where this is the case), consultants would be allowed to work remotely, because, if they came too often to the office and took direct orders from a manager, they'd be considered employees of the company, not of the consultancy. If you walk like an employee and quack like an employee, you have the legal rights of an employee.
Once you have people in your organization, you can manipulate the internal economy of favors and perks in order to increase your ability to control their behavior.
I just learned about this apparently popular interpretation and after going out and reading about it enough, I had to come back and find this thread just to say:
"I'm in this photo and I don't like it".jpg
I'm afraid it might be true! This explains a lot for me.
I work in medical devices and we have a lot of that. We have tons of tedious processes that take a lot of time and don’t improve the product at all. Often they prevent the fixing of problems. When you ask why things are that way, the answer usually is “the FDA requires it”. But often when you read the latest regulations, it’s easy to see ways to improve our processes and still be compliant.
I think the problem is that the company has found something that works in an acceptable manner and there is a big risk that any change will have unforeseen consequences so people stick to what works, no matter how badly.
There is another set of internal rules that clearly make life of one function easier at the expense of others. Again, these are very hard to challenge.
I feel that as organization size increases, individual responsibility decreases.
At some point there’s a limit where individuals become responsible for nothing and everything is a policy or process. It would be interesting to try to study or quantify this with different orgs and roles.
Funnily enough, I thought so as well, but discovered quite quickly that without this stuff, a lot of people are doing shit.
Do I think someone needs to tell me not to put every shitty tool on my work laptop, which has a corp certificate? Access to VPN and the corp network? With access to hyperscalers?
No.
What do my colleagues do? Everything. Oh, there's a nice new shiny tool and it sends metrics to an external service, let's try it out...
I'm not in favour of the "BigCorp controls my entire machine" approach, but this is silly. It's not a lot of people doing shit, it's a problem of scale.
Have you verified that every single application installed on your machines sends no telemetry, no crash reporting, and has no random web servers running that run arbitrary code? It's likely that you've missed one application. Now multiply that by 10,000 employees - all it takes is one application per person, and you have a massive amount of data being leaked.
This is allegedly what happened to much of the German population during the Holocaust. They even made up a word for it, something like "office talk." People displaced their personal responsibility with lies like "I HAVE to, it's company policy. I just do my job. I don't make the rules."
I agree with the author that the risks are more obvious to higher-ups than the benefits, and that this is why rules generally stay the same. However, one thing that I learned as a new parent is how important it is to have consistency! I think a lot of computer engineers (i.e. Hacker News readers) would enjoy an organization which tinkers with policies to find the best way to do things... But I think with a large org things need to be pretty consistent.
Now that said, I personally have bristled being in a company which wasn't flexible at all! So the key is to find a balance between consistency and flexibility, and give all levels of management and employees empowerment to find the best way to do things.
Revenge culture permeates every pixel of society. Generally, the bigger or more entrenched the entity (corp, agency), the more hardened is the revenge process.
Even questioning whether revenge should be shaping our decisions is likely to be met with a measure of it.
My reason for this rule would be: if you allow consultants to work from home, how do you make sure they are not double-selling their work hours? Like, they bill you 8h but only work 4h on your project.
I think that is the way to find pricing for any development activity. There is an industry standard range on price per hour, but no standard on price per feature (which is impossible).
Monitoring how much they actually use their mouse, keyboard and phone is an obvious one, along with snapshotting their display. I'm not saying I agree with this, but I know it happens.
Indeed, what amazes me is the implicit distrust so many managers have. Either they have a pretty low opinion of people, or they are lazy themselves and projecting. I've had managers openly say to me 'well, we have no one else to check your estimate against'. Are there really that many devs out there who estimate 10 days, finish in 1 and take 9 off?
If you are 10 times faster than average but can only bill average rates, then it might look fair to you to do the work in 1 day and take 9 off. If your mortgage then needs payments, you might think: why take 9 days off and not do the same for somebody else? Of course I don't endorse this behaviour, but I can see how somebody could see this as not stealing, even though it is.
This article is talking specifically about corporate policy. It is good to have solid corporate policy, but not so good if it was written without proper forethought, and even worse if they do not have a mechanism to change it.
I work in this arena, in the public sector, and COVID brought massive fast changes to policy. But the public sector also already has mechanisms in place to regularly change it - regular board and city council meetings, specifically. And most organizations have a specific cadence on which they review and update their policies.
But the corporate world varies - sometimes the board sets policy, sometimes the execs, sometimes a compliance officer. Whoever it is, they need to not just write policy once and forget it - they need to treat it as a living body of documents, responsive to changes in their environment. Some companies are good at this, some are not.
I don't buy the conclusion of the article, though, that the leaders don't know enough to make decisions and policy becomes a way to entrench arbitrary rules and escape blame. Risk management and compliance are not about making life easy for the individual contributors. They are about looking at big picture risks such as litigation, regulatory compliance, and business continuity. They then set policy to be sure that well-meaning people who don't have visibility into those high-level concerns don't just make up their own rules. Yes, it puts some pain on us workers. But reduces risk. It is their job to choose those trade-offs.
That being said, not everyone is good at it, and there does need to be solid communication in the organization to let them know what problems a policy causes, so they can decide whether or not to adjust it. There also should be a communication path for people to ask why a policy exists, and start a dialogue about it.
I once worked with a very talented senior artist. After a few years, he told me that he had never once submitted a time report. He felt it was needless busywork and that if he ignored managers and HR nagging about it, sooner or later they just stopped. I was horrified at the audacity and later astounded that he got away with it.
A friend told me about his company, where the IT decision makers were largely centralised, physically near their cloud servers, and for years they discounted the measurable and serious difficulties that internal tech consumers suffered in other locations, even though their architecture would have made it comparatively easy to rebalance things for a more global approach.
Early in my career we were told of a major reorganization of my then company's network shares, required "Because of Compliance" - only once it was underway and some key points still needed to be clarified did it become clear that Compliance was not even aware of the project! Nothing happened to the individual who had lied and diverted substantial resources needlessly. The individual was not even a member of the department, so saw zero benefit either.
A fun one I experienced was awful VOIP call quality due to a policy requirement that all company network traffic route through the headquarters office, and all the latency that introduced for anyone who worked from one of the (many) other offices.
Then one day the director of sales had a rather important call with a major prospective client while they happened to be visiting one of the regional offices. We were enjoying absolutely stellar-sounding phone calls within 48 hours.
Founders should keep this in mind anytime they are negotiating contract terms with a corporation also. There is no such thing as a "standard" contract and almost all "mandatory" clauses are not actually mandatory. This also applies to real estate transactions and almost any other high stakes negotiating. Standard and mandatory are terms used to trick inexperienced or less powerful people/entities into agreeing to terms more favorable to an opponent.
None of this means you can necessarily axe all the requirements though. It entirely depends on your negotiating position. If a company wants to work with you badly enough they'll pay their lawyer the $500 hourly to modify the contract or approve your changes.
Everyone should keep this in mind for every type of contract.
In the beginning of my adult life, I was under the impression that contracts were static and immutable. As I got more experience, I became more demanding (and my skills were also more in demand) when setting up contracts, and I have at multiple points managed to change what was supposed to be "mandatory" and "unchangeable", from rental contracts to employment contracts and many other things.
> If a company wants to work with you badly enough
This is usually the problem for many founders early on. If you already have great product-market fit, then everything will bend to your demands and you can even turn down customers if they are not flexible enough. Most products do not have that very strong PMF early on.
The fact that requirements change when circumstances change DRASTICALLY doesn't mean that they were imaginary. It just means that they were based on a situation that no longer exists.
The company reacted to changing circumstances. This is how things are supposed to work.
> The big question on managers' minds (either consciously or unconsciously) when approving a policy change is "if I do this and anything goes wrong, will I get blamed?". This should not be the basis of choice but in practice it sadly is. This is where things go wrong. The people who would most benefit from the change (and thus have the biggest incentive to get it fixed) do not get to make the call. Instead it goes to people who will see no personal benefit, only risk.
This comes very close to hitting the nail on the head, but not quite. The crux of the matter is:
/Corporate risk aversion culture is only aware of the risk of change, and ignores the risk of stasis./
This and other related issues to "decision makers being so very far removed from any actual work" is why I'm very happy I have never had to work for a company of more than a few hundred people. And hope I never will.
Sometimes they are not just there out of fear, but also not for your benefit. For instance at one corporation I worked for they made a policy against communicating with other departments outside of jira tickets, in order to clamp down on any union organizing that might take place. And also to prevent us from spreading rumors about an imminent wave of layoffs. Of course they said it was for “security reasons”.
So I would assume malice rather than just fear of risk for any seemingly strange policy measures. The management is not your friend.
A lot of these get baked into boilerplate legal agreements companies end up signing when partnering with other companies or dealing with compliance/certifications/auditors. It can cost a lot in lawyer time to go back and forth renegotiating and redlining contracts to get all these addressed (and it may not even be an option if you're a smaller business). And this is even if people impacted or implementing the requirements are even involved and suitably motivated to pushback.
No rule appears by itself out of thin air; there is always a cause. Most of the time, rules are created to reduce cognitive load. Thus, changing a rule should be based on a cause greater than or equal to the one that produced the rule.
No one will change a remote-working rule just because of one or two outsiders' caprice (calling a rule "stupid" just shows a lack of understanding). A world pandemic is obviously cause enough to change some rules.
At least part of OP's problem is that she was a consultant. She didn't work there. Note that in her story, employees of BigCorp did WFH. There's even less upside for BigCorp to change policy because it would positively impact some consultant's work/life balance. In BigCorp's mind, the consultant is exorbitantly expensive and doesn't deserve work/life balance anyway.
To be fair, they probably had to choose between taking risks they previously weren't willing to take (trust in consultants should always be lower than in your own employees in my opinion) but had no choice with Covid. The last months were a matter of life or death for many companies.
And to make it clear, I love home working but I also know colleagues who work without screen protection in public places or leave their laptop unlocked when they are at home and have guests. That's potentially horrible for the company but absolutely unenforceable without physical presence.
Playing devil’s advocate: Is it possible that, yes, of course you can work from home, and you must do that during lockdown, but is it as efficient as working in the office?
I think that’s why many don’t like remote: They can’t see you working and they’re not as comfortable video-chatting.
I’m a remote-only developer, but if I could, at any time, turn to my coworker and ask about something, I’d be more efficient.
>if I could, at any time, turn to my coworker and ask about something, I'd be more efficient
Haha, I remember when I was in the office before this all started, I had multiple coworkers who would do exactly that. I can't tell you how jarring it is to have headphones in and be in a focus state implementing something, just to have someone imagine they are able to trample on that time for their own efficiency and ask me questions. Doubly worse was that they wouldn't even think their questions through properly.
Now, we're all remote, and I just hit back at their impromptu questions with "can you give me more context?" and half the time they solve it themselves. Imagine that
Would your coworker be as efficient if you could, at any time, interrupt them? I agree that there are trade-offs between remote work and colocated work. The ability to interrupt and distract more easily is not something I consider a strength of colocation.
Corporate policy evolution through time: 1. given x units of time passing, the core information that caused the assumption is forgotten or transmuted into something else. 2. over the same x units of time, the context has also shifted, but no tools were created or developed to rethink the assumption.
Corporate knowledge management requires effort to be done well.
This is predicated on the idea that the current situation is just as good as office work. That's debatable, and even if you think WFH is strictly better, there was a cost in the growing pains.
Just because you think differently about a trade off it doesn't mean decisions you don't like are "imaginary."
CYA has been the driving force that has allowed the species to survive this long. It's a fundamental human instinct that was selected for and reinforced through our long rough evolution on the savanna, just like pattern recognition and dominance games.
Do you really want to be the one responsible for wiping us all out?
We have some imperfect system. Everybody knows about the problems it has. Somebody writes an article about how stupid and wrong those problems are. How does that help anybody?
The real question is how you fix those problems without introducing others. Past a certain age and amount of work experience, you start to notice that the bureaucracy that stops you from doing quick smart fixes also stops a lot of people from doing serious damage to the company.
And yes, in some cases external factors will force a quick reevaluation of that bureaucracy. And you can say "I told you so all along!". But again, that's zero value to the business. The real value is in the work done to plan and execute a change in a way that assures no collateral damage and also satisfies human needs/desires in the hierarchy.
People want to cover their asses, but guess what: when you put the responsibility for a change on the smart-ass who always annoys coworkers with his brilliant ideas, he tends to back away from it, because his ass is precious to him too.
If only people were willing to place all responsibility with me. I’d happily take it if we don’t have to deal with all the senseless policies/teams any more.
The bureaucracy stops idiots from being idiots. The logical solution is to not hire (or retain) idiots, not to add more bureaucracy.
From your post I think you are quite young and inexperienced. There are not enough non-idiots to run companies. Having to deal with them is a fact of business; you can't simply isolate yourself from the world. The fact that nobody is willing to place responsibility with you should tell you the same thing.
To sum it up: try to talk about what you have actually done, i.e. from experience, not what you would do in your ideal world.
Considering I have worked with extremely talented and extremely stupid people I believe I know what I’m talking about (within the limits of my experience).
There seems to be absolutely zero correlation between organisational role/level and ability; organisations instead use years of service as a proxy.
Maybe it could stimulate reflection? Granted, my hopes are dampened too.
> Past a certain age and work experience you start to notice that the bureaucracy that stops you from [...] doing serious damage to the company.
If you get even older, people might even stop you from running around without supervision to stop you from doing damage to yourself. Doesn't mean it should always be the case.
There will hopefully always be forces that evaluate practices. Bureaucracy can be good and it can be bad. It behaves just like french fries.
But I think the general trend that large corps seem to get a bit slow on innovation holds true. Doesn't mean it is not working well with lots of accomplishments.
> There will hopefully always be forces that evaluate practices.
And there are. The article itself contains such an example. On the other hand, they ask why it was necessary for the coronavirus to do so. The answer is in the question: there was no need to evaluate the practice until there was.
> you start to notice that the bureaucracy that stops you from doing quick smart fixes also stops a lot of people from doing serious damage to the company.
That is the problem: we treat everyone as equals, trying to make rules that work for everyone, so the rules end up written for the lowest common denominator. So someone who could leverage their knowledge for the good of the company gets duct-taped with red tape because someone else is prone to fucking up.
>Answer: the higher on the hierarchy you are the less the rules apply to you.
Yes, and the more you break/ignore the rules successfully, the less they apply to you.
But who gets to decide and on what basis is an organisation-specific question. Trying to answer it for "everyone" also gets you into lowest common denominator bucket where you can only give an answer that covers all cases. That is why a huge chunk of organisational/managerial advice is just a "how not to shoot yourself with that gun in your hand for idiots" guide.
I had this experience with some companies I was interviewing with. They would force me to submit my most recent payslip from my previous employer. I gave up after some protest, because they would not proceed with my interviews without it.
Well, you could … not have proceeded with the interviews? I'm sorry to state this to you without knowing your personal situation, but in general such shitty employer behavior is only possible because we employees bend to company pressure. Every clause in your work contract you just accept because it is a "standard clause we write in every contract" is a loss for our combined bargaining power.
Walk away from these idiots secure in the knowledge that you dodged a bullet. They're doing you a favor by demonstrating how broken their organization is.
OP is probably from India; this is the norm here. Most companies will ask you for your last few months of payslips. The reason? Nobody knows.
In general, companies treat you like a criminal trying to con them: submit previous payslips, your last company's relieving letter, contacting the previous company, stupid non-compete clauses, bond clauses, and the list goes on.
This is unlikely to change, as most employees don't really have the bargaining power to question the rules. The workforce is abundant; companies can just tell you to fuck off and hire someone else.
The author explains this situation as a downside of hierarchical organizations, but what other kinds of organizations can they claim real experience with?
What the author is describing here is poorly designed management incentives.
Many policies are tangled up in initial agreements made between the corp and the local community. For example, there's dollar leverage when you have large groups commuting rather than working remotely. The commerce wheel keeps rolling.
Many stupid policies held by corporations generally benefit another corp or uphold other community agreements, which were often made at the beginning of operations.
Another thing is consensus. If you can get buy-ins from a group of colleagues, preferably higher ranks, it'll be easier, as not a single person is to blame.
If I pay an external $1000 per day and that price doesn't change whether they are in my office or at home, I would put them in my office because of the obvious security risks.
If corona means I can't get my work done, I have a new risk.
Security risk of external consultants having full access from home vs. no one being able to work -> I might choose to give the externals access.
In my experience (I am a contractor) the real reason is control. Companies want to ensure you (who are a separate legal entity and therefore have certain freedoms) have your nose to the grindstone. It's understandable, but paranoid.
No, because keeping external consultants happy is not the job of the company. They only have to keep their own employees happy; external consultants are seen as temporary, cheap, disloyal, and as taking away normal employees' jobs. Most parts of the hierarchy (except the very top) will try to inconvenience them if possible.
That doesn’t seem very rational. The goal of all managers should be to maximize productivity per dollar spent. Motivated contractors who are well integrated in teams, don’t need to be replaced often, and generally kept happy are far more productive, hence the bottom line of the company benefits by it.
It isn't "big picture" rational, but it is rational in a myopic way. Most managers in a company are only responsible for their own little kingdom and will care little about the rest or any overall whole-company picture. Only the highest-ups are responsible, accountable and incentivized to care about the whole.
Pretty sure my home is more secure than your office. Literally no one wanders in here and tries to steal corporate secrets / install malware. It happens all the time in actual offices, and it's surprisingly easy to gain access to the most "secure" offices.
The company has its office under control, but not yours. It doesn't matter if you think your office is more secure; mine has access control.
Also, we are not talking about the person who is really smart about it; we're talking about the average person. There are plenty of stories of laptops being stolen in coffee shops or on trains.
And it's also a network thing. Do I want you to be able to pull/push something into my corp network from your home network?
Your example forgets to acknowledge that while there may have been good reasons not to allow people to work remotely, COVID is a very pressing reason to allow it, and the benefit of letting people work remotely vs. getting absolutely nothing done, or being fined by the government for forcing people to come in, is extremely obvious. So much so that the cons have to be put aside or remediated differently (which in some cases can even require investment).
It's a risk assessment that suddenly tilted the other way.
I don't agree with "everybody should work from the office" sentiments, myself, but I have seen this happen very clearly in many organizations.
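A push restriction like that can be enforced on the client side. As a rough sketch (assuming a hypothetical corporate GitHub org named `bigcorp`, and that IT points `core.hooksPath` at a managed hooks directory), a global `pre-push` hook might look like:

```shell
#!/bin/sh
# Sketch of a managed global pre-push hook. Git invokes pre-push with
# the remote name ($1) and the remote URL ($2).

is_corporate_remote() {
  # "bigcorp" is a hypothetical corporate org name; matches both
  # SSH (github.com:bigcorp/...) and HTTPS (github.com/bigcorp/...) URLs.
  case "$1" in
    *github.com[:/]bigcorp/*) return 0 ;;
    *) return 1 ;;
  esac
}

# Refuse the push unless the remote URL points at the corporate org.
# (The -n guard lets the file be sourced without arguments.)
if [ -n "$2" ] && ! is_corporate_remote "$2"; then
  echo "push blocked: $2 is not a corporate repository" >&2
  exit 1
fi
```

A real deployment would be stricter (e.g. also rewriting remote URLs or blocking at the network proxy), but the hook illustrates why the policy is cheap to enforce globally and annoying to carve exceptions out of.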
This principle is also called Chesterton's fence.
https://en.m.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fen...