Hacker News | chazu's comments

Absolutely - anyone who focuses this much on personal life during an interview is almost assuredly clueless as to how to manage people.


I've got one of each and they are unquestionably the best things I've made, done or invested in.


I'm consistently shocked by the number of people who have never heard of this principle. Introducing arbitrary numerical limits (emphasis on _arbitrary_, as performance limitations or other actual requirements obviously trump this rule) is a design decision that I find myself having to clean up after frequently.

I see a lot of people here questioning the wisdom of the rule; however, like every other principle used in SWE, it shouldn't be applied blindly. Ask yourself "why am I specifying that a maximum of five wangervanes can be specified in the turboencabulator settings?" _IF_ you have a good reason, fine. Most of the time you will not.


Limits are good. Limits mean that you can test your software under both min and max conditions.

If there’s no hard limit, then the limit exists merely in the developer’s mind as what they consider sensible or not.

Inevitably there will be some user who takes your software past what the developer considered sensible. And unknowingly and silently, this user becomes a tester in production.

The real problem is not DRYing your limits. There should always be one central point of truth, one constant that determines what the limit is.
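When a limit really is required, that "one constant" idea can be sketched like this (a hypothetical Go example; the names are mine, not from the thread):

```go
package main

import (
	"errors"
	"fmt"
)

// MaxItems is the single source of truth for the limit.
// Everything that enforces, documents, or tests the limit reads it here,
// so min/max test conditions and runtime checks can never drift apart.
const MaxItems = 5

var ErrTooManyItems = errors.New("too many items")

// addItem enforces the limit at the one place items are added.
func addItem(items []string, name string) ([]string, error) {
	if len(items) >= MaxItems {
		return items, ErrTooManyItems
	}
	return append(items, name), nil
}

func main() {
	items := []string{}
	var err error
	// Deliberately go one past the limit to exercise the max condition.
	for i := 0; i < MaxItems+1; i++ {
		items, err = addItem(items, fmt.Sprintf("item-%d", i))
	}
	fmt.Println(len(items), err) // prints: 5 too many items
}
```

A test suite can then drive the system at exactly `MaxItems` and `MaxItems+1` without hard-coding the number a second time.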


Examples of issues I've seen in the wild because people violated this rule include payroll systems with an arbitrary maximum number of pay codes, and review-app systems with a static number of review apps.

Just like every other heuristic in software engineering, it's not a silver bullet, but generally speaking this principle will serve you well.


So you're confirming what I said. No theoretical basis. You just reaffirmed my position that this rule of thumb is anecdotal with your own anecdotal experiences.

If you look through this very thread there are people talking about anecdotal experiences verifying the opposite effect.


Interestingly this sort of thing falls into a special category where reasoning from first principles is less rigorous than pattern matching to experience. That's because we don't have a good theoretical model.

We're at the point in history where the doctor who noticed fewer patients dying after he washed his hands argues with the doctor who has a very erudite explanation of why that's impossible. Maybe someday we'll discover germ theory, but until then we're left to argue anecdotes down in the mud.


You recall mercury? Mercury was used by doctors for years as medicine, and it was validated by anecdotal experiences. But it really had the complete opposite effect: it killed people. That is the reliability of anecdotes, and an illustration of how delusional humans and "experts" can be. Nowadays there's still no complete "theory" for medical science. We have partial theories like biochemistry, but no complete theory, in the sense that we can't derive from a formal model which chemicals cure which diseases.

In fact, given the complexity of reality, we may never have a formal model for medicine, and as such we will most likely have to rely forever on the asymmetrical nature of science.

Computing is bounded in a simulated universe of axioms and logic. Computers are in actuality a part of reality, but we try to use them as if they were separate universes of pure logic and math games. To say that something has "some theoretical basis" is a very precise statement in computing, because unlike in medicine it is VERY possible for entities in such a bounded system to have a complete formal theory.

The problem is that the zero-one-infinity rule is not such a thing. To say it has "some theoretical basis" is therefore completely false, especially given that this thread contains a bunch of counterexamples and that I myself don't fully agree with the rule. How do you know this zero-one-infinity rule is not just some form of mercury in disguise?

My disagreement, however, is NOT the point. The point is that this rule currently has ZERO theoretical basis. Additionally, given the existence of counterexamples, it will likely ALWAYS have zero theoretical basis. I don't completely deny the validity of anecdotal evidence, but, again, the claim made initially by the GP is false.


Cue is one of the most exciting developments that's impacted me professionally in recent years. I can't advocate for it enough.


Cue is interesting, but why is it only available as a command-line tool rather than a library? I'd want to integrate such a configuration language in my programs, so I could use its evaluation and validation capabilities rather than writing a custom parser/validator.


If you're writing in Go, you can use it as a library. This is poorly documented, unfortunately.


Also their website is terrible. One of those projects that assumes you've already decided (or been forced) to use it and have cleared out a week of your schedule to learn how to use it.


I learned cue from it in one weekend, with plenty of time left over to play with my kids. I've been using it in production since 2020 and it's been absolutely great: zero problems, very terse configs, an intuitive formalism.


I took another look and eventually found the bit of the website that they should put front and centre in the Tutorial page. Still difficult to navigate (why doesn't the page tree show up on the left?) but it is at least well written and to the point.

The "learn more" button on the front page should link to that, perhaps with a single paragraph giving motivation.

And the main page breaks the fundamental rule of programming languages/formats. Put examples on the front page!

I assumed they hadn't done that because the examples would be too complex, or maybe the concepts were too difficult to demonstrate with small examples, but having gone through the tutorial, that isn't the case at all.


Are you talking about https://cuelang.org/docs/tutorials/tour/intro/ ? Even that is a bit light on detail for real usage, while https://github.com/cue-lang/cue/blob/v0.4.3/doc/tutorial/kub... is kinda rambling. I don't think there's "one true way" to introduce these concepts; how you teach cue to a config-generation novice is very different from teaching someone who's used to using an IDE to generate Kubernetes YAML.


Spec [0] is very good + practice with small files.

[0] https://cuelang.org/docs/references/spec


I find specs nearly unreadable when first trying to digest a language. While invaluable for advanced usage and implementation, I can't read a BNF-style spec and make heads or tails of what's going on unless I also have an annotated example next to it.


I agree. The website is heavy on theory and very light on practical usage.
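For what it's worth, the kind of small, practical example the site could lead with looks something like this (a made-up schema; the field names are illustrative, not from any official tutorial). A definition constrains concrete values, and `cue vet` reports any mismatch:

```cue
// #Server is a schema; the fields below it are data unified against it.
#Server: {
	host:     string
	port:     int & >0 & <65536
	replicas: int & >=1 | *1 // defaults to 1 if unspecified
}

server: #Server & {
	host: "example.com"
	port: 8080
}
```

A snippet like this demonstrates the core idea (types, constraints, defaults, and unification) in a dozen lines, which is exactly what's missing from the front page.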


I want the author of htmx to get together with the guy from pandastrike and rant about the misuse of REST for an hour a week. It would be my new favorite podcast.



An API is also the interface used by humans to create programs. When you use a library, you're using its API. This sense of the term API is often lost.


True, but in this sense all code, every library, and every API would be "for humans", which renders the distinction rather useless.


Not sure what you mean. Sometimes the word API is used in one sense, sometimes in another. It's a useful distinction insofar as it allows you to talk about APIs as things used by programmers. I find many developers have a hard time understanding this sense of the word API and as a result fail to apply good API design principles such as SOLID. In fact I think this is often what separates mediocre programmers from good ones.
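To illustrate that sense of the word with my own hypothetical sketch (not from the thread): a library's exported types and functions are an API for programmers, and principles like interface segregation (the I in SOLID) apply directly to it:

```go
package main

import "fmt"

// Narrow, caller-focused interfaces: consumers that only read
// shouldn't be forced to depend on write methods.
type Reader interface {
	Read() (string, error)
}

type Writer interface {
	Write(s string) error
}

// memStore satisfies both interfaces, but each caller depends
// only on the capability it actually uses.
type memStore struct{ data string }

func (m *memStore) Read() (string, error) { return m.data, nil }

func (m *memStore) Write(s string) error { m.data = s; return nil }

// report needs only to read, so its API asks only for a Reader.
// That is an API design decision aimed at the human caller.
func report(r Reader) string {
	s, _ := r.Read()
	return "report: " + s
}

func main() {
	s := &memStore{}
	if err := s.Write("hello"); err != nil {
		panic(err)
	}
	fmt.Println(report(s)) // prints: report: hello
}
```

The "user" of this API is another programmer, which is exactly the sense of API that the comment above says often gets lost.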


Very cool - thank you for sharing. I like the idea of task-based benchmarking for UI toolkits, and I'm also happy to see more tview code for me to study.


90% of SREs and SRE managers haven't read the SRE book(s).

99.9% of folks hiring SREs or starting SRE teams haven't read the SRE book.

The SRE book (and its sequels) say quite plainly what SRE is and isn't. They also say that not every org is going to be exactly like google so no, "we're not google" isn't an excuse.

The E in SRE is for engineering, as in software engineering. SREs are software engineers, or should be. If your SREs don't know basic SWE principles, they're not SREs. If your org isn't applying software engineering principles to minimizing operational complexity at scale, your org isn't doing SRE.

I'm constantly shocked by how hard these things are to grasp, even for most SREs. If the problems I (occasionally) get to solve weren't more interesting than most regular product work, I'd get out of "SRE" entirely.


I think this myth exists because Google was (is?) famously obsessed with SWE. But if you actually read the SRE books and look at the actual discipline of SRE ("what's the difference between SWE and SRE?"), SRE is quite blatantly just operations management. The website is a power plant, and the SRE runs the power plant. You don't build parts to run a power plant, you use software (as in manipulate/control/operate) to run it. You act quickly when the numbers go out of line, you write reports and control how much power is going in and out, respond to surges and dips, etc.

For whatever reason, Google decided to tell people that the same person who's building the klaxon and the concrete wall and the pipes for the power plant, and the person who's operating the power plant, are one and the same. But that's clearly bunk. Building a part and running a system are completely different disciplines, and anyone who does both will only be half good at both. Humans are shit at multitasking and there are few true polymaths out there. Show me a master programmer and I'll show you an amateur woodworker.

I also don't believe software engineering principles will help you reduce operational complexity. If anything, software engineering tends to make things either inefficient or subtly complicated. Reducing operational complexity comes from the discipline of operations, which isn't engineering. Non-tech companies have known about these distinctions for like a hundred years. Deming applied scientific rigor and analysis to come up with better practices, but he didn't have to design any widgets to do it.


> For whatever reason, Google decided to tell people that the same person who's building the klaxon and the concrete wall and the pipes for the power plant, and the person who's operating the power plant, are one and the same. But that's clearly bunk. Building a part and running a system are completely different disciplines, and anyone who does both will only be half good at both

Depending on the team, SREs can absolutely be involved with "building the system", especially the klaxon ;) Examples include designing and implementing metrics used to make decisions in business logic and/or exposed to customers/users, writing routing components like mixers and proxies, developing data pipelines, etc. At Google, many SRE teams build and run entire multi-tenant systems with no pure SWEs involved at all.

Healthy SRE teams should be spending 20% of their time on operations. On my team it's actually the devs who do most of the operations work: they take the pager during business hours, and we route most maintenance tickets to them.


“[…] and we route most maintenance tickets to them.”

My difficulty is that mandated separation of responsibilities within our org is preventing us from embedding ops in dev.

Anyone successfully fought against this and have tips?


One company I worked for opened a position for an ops person on the team.

They shadow-IT’d their way to launch and were hugely successful; now the business is largely re-orging to better fit the paradigm.

Was a big gamble. The wrong person could have left a mountain of tech-debt.


The website is NOT a power plant; it's just code. In software, "operations management" is basically infrastructure automation, incident response, and build-and-release. All of these require some software development, or at least code literacy and familiarity with software development practices. If there's large overlap in technical skill between the operators and the builders, then it makes more sense to see them as the same people focusing on different problems.


It's probably useful to talk about what Operations Management is first. It's a business discipline that touches on many parts of a business. It is defined as "the management of an organization's productive resources or its production system, which converts inputs into the organization's products and services". You can get a PhD in Operations Management.

In tech, software and data are the "productive resources", and the "production system" is the actual system you build out of those resources: the website, API, etc. You don't have to write any software to build and manage that production system. Maybe that's unusual to people in tech today, but it's a fact that you don't have to write a single line of code to build and operate such a system. Heroku, PagerDuty, DataDog, Splunk, Octopus, AWS, etc, are all products built with the sole purpose of enabling operations without the need to write code. You can assemble logging, alerting, monitoring, web serving, networking, databases, deployment, etc, without ever writing a single line of code, and have it be highly available and highly reliable.

The title will vary (Systems Engineer, Operations Engineer, DevOps Engineer, Site Reliability Engineer, Systems Administrator) but the job is the same: to use Operations Management techniques to ensure the products and services are productive. You can use software development practices for all of this, sure! But they are absolutely not a requirement to accomplish the goal. And many other roles in the company are involved in Operations (QA, PM, DM, etc) and may or may not use code. The business doesn't care about code, it cares that its resources are being used properly and the production line is operating nominally.

In terms of the distinction between builders and operators: you could say that a construction worker and a custodial worker are part of the same occupation because a lot of their skills overlap. They both need to understand how the building works and may need to build/repair parts of it at times. But they're still two different disciplines that require different training, experience, and day to day responsibilities, and as such we don't lump them into the same category.


There's at least one big issue here, which is that you're pretending a website is like a building or a dam. If that were the case, a company like Google would have a (relatively) small team of SWEs who "built" things, and a much larger team of SREs who maintained them over their operational lifecycle once the SWEs were done building. But that isn't the case.

Software systems (at least in competitive consumer markets) are constantly changing and evolving. To use the dam analogy, there's constantly a team of people making the dam taller or wider or deeper, even while the dam is running and producing power.

All the SRE teams I've worked with have done a bunch of things that go beyond "operations". They are usually consulted at the design stage, to make sure that the thing is going to be built reliably. They're also responsible for ensuring ongoing reliability as all new features are added. That means that the features themselves don't impact reliability, and that the process of adding new features doesn't impact reliability. None of this work has a reasonable analogue in your dam analogy, except perhaps as some combination of consultant and regulatory body.


"You don't have to write any software to build and manage that production system."

It depends on the scale and complexity of your application. At some scale/complexity, it absolutely requires writing software, because your IaaS provider doesn't provide automation that covers 100% of your operational needs, and even they recommend using infrastructure-as-code tools to manage your infra.

If your production system is a CRUD service with 3 application nodes and a managed PostgreSQL instance then you do not need to write software to manage it. But if your application is that simple, then I'd suggest you probably don't need a software developer to build it (Wordpress, Wix).

Construction worker vs. custodian is not a fair analogy, because their training and evaluation don't really overlap. The training and evaluation for "dev" and "systems" engineers are very similar: most have CS degrees and have to do some leet coding to get the job. Devs generally need to be better at algorithms; systems engineers need a better understanding of networking, operating systems, and system design.


> I also don't believe software engineering principles will help you reduce operational complexity.

This isn't a goal of SRE, in my opinion, nor in anything I can recall reading. The goal of applying software engineering principles is to accept increased complexity in exchange for a reduced operational burden.

There are layers to that effect, and the right one depends largely on your operational burden. Sysadmins shun complexity, so systems are simple, but doing mass updates requires a lot of manpower. DevOps embraces some complexity, like Ansible or manually orchestrated containers, making mass updates easier, though still a burden. SRE embraces complexity in exchange for a dramatic reduction in manual effort on many tasks.

The idea is that at certain scales (or reliability requirements), it becomes cheaper to hire a small number of expensive people that can manage complex systems than it is to hire a large number of people each managing a simple system.

Software engineering wins out because it can effectively trade complexity for reduced operational burden in exactly the areas you want. You don't have to migrate to a new infrastructure orchestration tool; you can just write an orchestration tool on top of what's there (which I've actually seen done). Was it perfect? No. Was it cheaper than migrating half a million containers to Kubernetes? Yes.

Operations management tends to be very inflexible. They have a set of tools, and anything outside those tools is either a no go or will require replacing an old tool at the cost of months of effort.


It's not that this is hard to grasp; it's not. In fact, many of the people I've consulted on SRE have read all or parts of these books. IMO, it mostly comes down to selective interpretation.

it's like telling a homeowner that they need to spend $1000/year on an annual maintenance item to prevent a _possible_ $15k repair bill every five years.

For some, $1000/yr is simply too expensive, so they take their chances or skimp. (People who think you can do SRE without being SWEs because they "can't code".)

For others, $1000/yr is affordable, but because the $15k bill is "unlikely", they skimp. (People who think you can do SRE without being SWEs because even though they can code, they'd rather separate those jobs.)


Well said.

> 90% of SREs and SRE managers haven't read the SRE book(s).

> 99.9% of folks hiring SREs or starting SRE teams haven't read the SRE book.

These are free to read online, for those that are wondering:

https://sre.google/books/


> From the productivity standpoint, it is not acceptable that a Machine Learning engineer or a Full Stack Developer are expected to know Kubernetes. Or that they need to interact with a Kubernetes person/team.

I agree - these things should be abstracted from the developer; that's the goal of SRE/platform engineering. DevOps is [supposed to be], as you said, a philosophical and cultural stance around early productionization. While not mutually exclusive, they're not the same thing.

But back to your point re: orchestration-level concerns being foisted upon devs - at a shop of any size, there will be devs who feel they _need_ to touch Kubernetes to get their job done (wrongly, IMHO), as well as devs who want nothing to do with it. So without engineering leadership throwing their support heavily behind a specific approach, it's hard for a small team to deliver value.

