
Somewhere there is an architect saying "I told you so!" I can almost guarantee the requirement was to handle several hundred requests per day. The architect pointed out that if they ever got deluged they wouldn't be able to handle it, and maybe managed to get the requirement bumped to one or two thousand requests per day.

Now of course we don't know what the architecture of this system is, or what the cost delta would have been to let it scale out further - but I do know that all too often the more robust solution, the one giving you much greater protection and lower cost down the road, gets discarded if it costs even just 5%-10% more. Then the day comes when the people making these decisions get caught flat-footed, and they try to blame everyone but themselves. It doesn't always happen like this - but it happens a lot.




This reminds me of an old story about an engineer who took initiative and automated the accounts receivable process at his company; now they get paid 25% faster! He shows his boss and gets a promotion.

He decides to do it again, this time with accounts payable, and is promptly fired.


I think that is small-think. The technical solution is only part of the problem, and scaling up all systems to meet the 0.1% case seldom makes sense. They were smart to save 5-10%.


Eh.... On the flip side, processing and storing some simple text forms should be able to handle 1000s of simultaneous users on one box.

So, as with most software of this nature, it's probably not scaling simply because the people who made it weren't the greatest engineers on the block.
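
To put a rough number behind that claim, here's a minimal sketch (stdlib Python only; the endpoint, payload fields, and log file are hypothetical, not the actual state system) of a bare-bones form intake: each request is a small JSON parse plus one append to a durable log, which is why a single modest box can plausibly absorb thousands of concurrent submissions of simple text forms.

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    LOG_PATH = "claims.jsonl"        # hypothetical append-only intake log
    _write_lock = threading.Lock()   # serialize appends from handler threads

    class IntakeHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the submitted form (a few hundred bytes of JSON at most).
            length = int(self.headers.get("Content-Length", 0))
            try:
                claim = json.loads(self.rfile.read(length))
            except json.JSONDecodeError:
                self.send_error(400, "malformed JSON")
                return
            # Persist with a single small append; heavy verification work
            # (identity, employer, etc.) can happen asynchronously later.
            with _write_lock, open(LOG_PATH, "a") as f:
                f.write(json.dumps(claim) + "\n")
            self.send_response(202)  # accepted for later processing
            self.end_headers()

    if __name__ == "__main__":
        # One thread per connection; for tiny requests like this, a single
        # box can keep thousands of submissions in flight.
        ThreadingHTTPServer(("0.0.0.0", 8080), IntakeHandler).serve_forever()

A real system adds auth, validation, and a durable queue, but none of that changes the order of magnitude of the intake path itself.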


These are the same kinds of assumptions that lead engineers to think they can build a [any product] clone in a weekend. It's unlikely that the problem or constraints are nearly as simple as one may think.

Consider: single auth across all the state's services, external APIs, identity verification, address verification, employer ID verification, federal/military ID verification, income/tax verification, phone verification, bank account information, translation into multiple languages, accessibility features, etc. Also, there's probably a lot of legacy infrastructure and process.

Also, if "ability to burst to 10x normal filings per week that might happen once every 40 years" wasn't in the spec, I think they were right not to engineer for it.


Admittedly it's a value call. My general thought is that if a small incremental cost greatly increases robustness, you should go for it. But sometimes the money or time just isn't there. I'm bothered more by people not even wanting to have the discussion than by those who do a summary analysis and decide it's not worth it.


That's a fair point. My comment comes from being in too many meetings where people want Twitter scale for conference-room-sized user bases.

It sometimes borders on sealioning.


The 0.1% case happens. And if it’s going to seriously wreck lives when it happens then you should solve for it. Does Instagram need to handle the 0.1% case? No. But the unemployment website should.


Unemployment forms being delayed by a day or two to deal with poor queuing will not "wreck lives".
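
A toy sketch of that trade-off (an in-memory Python queue standing in for a durable one, made-up processing rate and claim IDs): intake accepts and acknowledges every filing immediately, and a slower worker drains the backlog at its own pace, so a surge shows up as delay rather than as errors or lost claims.

    import queue
    import threading
    import time

    backlog = queue.Queue()   # a real system would use a durable queue

    def intake(claim_id: int) -> None:
        backlog.put(claim_id)   # cheap: accept and acknowledge right away

    def drain(rate_per_sec: float) -> None:
        while True:
            claim_id = backlog.get()          # blocks until a claim is waiting
            time.sleep(1.0 / rate_per_sec)    # stand-in for slow back-end processing
            print(f"processed claim {claim_id}; {backlog.qsize()} still queued")
            backlog.task_done()

    if __name__ == "__main__":
        threading.Thread(target=drain, args=(5.0,), daemon=True).start()
        for i in range(100):   # simulated burst of filings
            intake(i)
        backlog.join()         # everything gets processed, just later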


Yes, but for every architect there's an anti-architect saying YAGNI!!1


You spelled pragmatist wrong.



