
In the past, when someone asked for something impossible, I offered up the closest possible workarounds and let them choose. I gave them all the detail they needed (but no more) to understand the limitations, if they asked why it was impossible.

They have jobs to do that were possible before things got automated; those jobs should remain possible afterwards.




I do the same, but sometimes you end up in never-ending scope creep trying to satisfy the original request.

Example -

At one of my former jobs, management became concerned about developer productivity. I was tasked with creating a UI + backend that would scrape our Bitbucket repositories and output each coder's commit history, LOC, and some other stuff.

I quickly ran into an issue. This company had over 700 repositories - and by my estimate, added about 20 new ones every quarter (it was a large company with an absolutely massive codebase). Even a single scan of ALL the repositories would quickly eat up my API rate limit, so I came back to the stakeholders and told them that what they were asking for was pretty much impossible given the API's current limitations.
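For scale, here's a minimal sketch (not the actual tool; the workspace and credentials are placeholders) of what even the most basic full scan costs against the Bitbucket Cloud 2.0 API, where every repo needs at least one paginated /commits walk:

    import requests
    from collections import Counter

    API = "https://api.bitbucket.org/2.0"
    WORKSPACE = "example-workspace"      # placeholder
    AUTH = ("someuser", "app-password")  # placeholder credentials

    def paged(url, params=None):
        # Bitbucket 2.0 paginates via a 'next' URL in each response;
        # every page consumed here is one request against the rate limit.
        while url:
            resp = requests.get(url, params=params, auth=AUTH)
            resp.raise_for_status()
            data = resp.json()
            yield from data.get("values", [])
            url = data.get("next")
            params = None  # 'next' already embeds the query string

    commits_by_author = Counter()
    for repo in paged(f"{API}/repositories/{WORKSPACE}", {"pagelen": 100}):
        # One paginated /commits walk per repo: with 700+ repos that is
        # thousands of requests per scan, against a budget on the order
        # of 1,000 requests/hour.
        for c in paged(f"{API}/repositories/{WORKSPACE}/{repo['slug']}/commits"):
            commits_by_author[c["author"]["raw"]] += 1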

"Can't you build a cache? or a database? or scrape it yourself?"

Ok, sure. For iteration 2, I built a database and queried it for the answers the users wanted. That spun into me creating nightly jobs that would scrape repositories and build up this massive cache. Then the stakeholders began complaining about "diffs" of several hours - up to 1 business day - and I again ran into rate limiting. So that led me to try to make the caching system "smarter" by only archiving "hot" repos, and so on and so forth.
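Roughly, the nightly job had this shape (a simplified sketch; the schema and the budget number are my assumptions here, not the original code): walk the repos in priority order, spend the request budget on the hottest ones, and let the rest go stale until the next run - which is exactly where the multi-hour diffs came from.

    import sqlite3, time

    db = sqlite3.connect("commits.db")
    db.execute("""CREATE TABLE IF NOT EXISTS commits (
        repo TEXT, hash TEXT PRIMARY KEY, author TEXT, date TEXT)""")
    db.execute("""CREATE TABLE IF NOT EXISTS repos (
        slug TEXT PRIMARY KEY, last_seen_activity REAL, last_scraped REAL)""")

    REQUEST_BUDGET = 900  # stay under the (assumed) hourly API limit

    def nightly_scrape(fetch_commits):
        # fetch_commits(slug) hits the API and returns
        # (new_commits, api_requests_used); hottest repos go first.
        budget = REQUEST_BUDGET
        rows = db.execute(
            "SELECT slug FROM repos ORDER BY last_seen_activity DESC").fetchall()
        for (slug,) in rows:
            if budget <= 0:
                break  # everything below this point goes stale tonight
            new_commits, used = fetch_commits(slug)
            budget -= used
            for c in new_commits:
                db.execute("INSERT OR IGNORE INTO commits VALUES (?, ?, ?, ?)",
                           (slug, c["hash"], c["author"], c["date"]))
            db.execute("UPDATE repos SET last_scraped = ? WHERE slug = ?",
                       (time.time(), slug))
        db.commit()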

In the end, of course, they didn't like the look/feel of the UI and scrapped the project after several months, and I took a performance hit for it. Probably for the best: I know what they wanted to use that tool for (filtering devs by arbitrary "productivity" standards), and it should've died on the vine.


This is a constant problem when managing the expectations of product owners or stakeholders. You tell them, "X is not possible, but I can give you Y with Z drawbacks/risks." They say, "Yes, let's do that," and when you do, the first thing they say is, "Great, but I noticed the Z drawbacks happened. Can we fix those?" Arrrrgggghhhhhh....


What's the total disk-size usage of all those repos checked out locally? Sounds like you should just rely on git (or Mercurial?) itself for the whole thing. For example, an EC2 VM which persists the repos on an EBS volume, spins up, calls the API only to get the list of created/moved repos, clones and fetches, and from there you operate on everything locally.

I've done similar analysis for orgs with the same order of magnitude of repos on both GH and BB. No need to re-implement caches, diffing, or other optimizations that the VCS already handles for you.
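A minimal sketch of what I mean (paths and the repo plumbing are placeholders): after the initial mirror clone, each nightly update is one cheap fetch per repo, and every metric comes from local git with no rate limit anywhere in sight.

    import os, subprocess

    MIRROR_DIR = "/mnt/repos"  # e.g. an EBS volume that persists between runs

    def sync(clone_url, slug):
        # Mirror clone once, then incrementally fetch forever after.
        path = os.path.join(MIRROR_DIR, slug + ".git")
        if not os.path.exists(path):
            subprocess.run(["git", "clone", "--mirror", clone_url, path],
                           check=True)
        else:
            subprocess.run(["git", "-C", path, "fetch", "--prune"], check=True)

    def commit_stats(slug):
        # Hash, author email, date, and per-file added/deleted line counts,
        # straight from git itself - parse into commits/LOC per author.
        path = os.path.join(MIRROR_DIR, slug + ".git")
        out = subprocess.run(
            ["git", "-C", path, "log", "--numstat", "--format=%H|%ae|%aI"],
            check=True, capture_output=True, text=True)
        return out.stdout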

Sorry to say it, but it sounds like the requirements were completely sane and realistic.


Not really, no; they wanted certain metrics that could only be obtained through the API. But how intuitive of you to suss out their exact requirements without me even detailing them! You should do my job.


> developer productivity

> LOC

This definitely should've died on the vine.


Isn't the correct move to ask Bitbucket for a higher rate limit?





