
Reasonable argument, I like it, but I'd add a grain of salt:

I've seen several times now that the people maintaining things actually do a much better job than the ones who were granted the honor of building the new thing because, hey, they're "pros". And once they've shipped the prototype they're _considered_ even more "pro", because hey, they wrote it. The people who rethought and rewrote the thing later were the real makers, but they never got the appreciation the "pros" did.

The worst part is that "pro" status keeps working in their favor even while they deliver worse work. I wouldn't mind if the system could correct itself over time.

Kinda related: "Confession of a so-called AI expert" https://huyenchip.com/2017/07/28/confession.html




I've been on both sides of this fence. As a maintainer, you have all the time in the world to understand the system as it is and improve it. You have the benefit of hindsight.

As the producer of the MVP, a lot of the time there is 'exploratory code' where you're not actually sure yet how it will work, or how it will scale. So you bang out a few approaches, pick the best one for the constraints you have, and basically never touch it again.

In one case, I was working on the MVP and the team who would be maintaining it was the one doing code review. The code turned out way better than either of us would have come up with alone. It ended up being one of the coolest projects in the company and some internal secret sauce for making money. For example, I remember one code review where they asked to add a configurable callback. I was like 'why?' and they answered, "we're already being asked if it can do X; if you add a callback here, here and here, we can start building that feature now so you can focus on the core."
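
To give a feel for the kind of hook they asked for (just a sketch with made-up names, not the actual code from that project):

    # Sketch of a configurable callback hook; all names here are hypothetical.
    from typing import Callable, Optional

    class Pipeline:
        def __init__(self, on_record: Optional[Callable[[dict], None]] = None):
            # Callers can plug behavior in without touching the core.
            self.on_record = on_record

        def process(self, records: list) -> None:
            for record in records:
                self._transform(record)        # core logic stays focused
                if self.on_record is not None:
                    self.on_record(record)     # maintainers build "feature X" here

        def _transform(self, record: dict) -> None:
            record["processed"] = True

    # The maintaining team starts building their feature on top of the hook.
    p = Pipeline(on_record=lambda r: print("got", r))
    p.process([{"id": 1}, {"id": 2}])

A tiny extension point like that let them start on their feature without the core having to grow to absorb it.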


Yes, but when a "pro" takes too long to deliver something, that thing is then considered hard, because even the "pro" couldn't do it quickly.

Re: scaling -- I probably agree, I've just never seen it myself (I work on distributed systems). In my experience it was always more or less clear, depending on which software/databases/data types we intended to use, so before implementing an MVP I could roughly say what its cost and scalability would be. But I can see how it might matter, especially in fields like statistics/ML/new algorithms. E.g. "try using SIMD for this new sorting algorithm" -- only measurements can show whether it's any better (a rough benchmark sketch below).

But in those areas managers usually shouldn't judge by outcomes anyway, but rather by how fast people form and test hypotheses.
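
Something like this is what I mean by "only measurements can show it" -- just a sketch, where numpy's sort stands in for a hypothetical SIMD-accelerated implementation and the sizes/iteration counts are arbitrary:

    # Rough benchmark sketch; np.sort is only a stand-in for a
    # hypothetical SIMD-accelerated sort, sizes are arbitrary.
    import random
    import timeit

    import numpy as np

    data = [random.random() for _ in range(1_000_000)]
    arr = np.array(data)

    baseline = timeit.timeit(lambda: sorted(data), number=5)
    candidate = timeit.timeit(lambda: np.sort(arr, kind="quicksort"), number=5)

    print(f"baseline  sorted():  {baseline:.3f}s")
    print(f"candidate np.sort(): {candidate:.3f}s")

Whether the "clever" version wins depends on the data and the hardware, so you test the hypothesis instead of arguing about it.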


Wow, that blog post was a breath of fresh air. Can you believe it was written almost six years ago? She was really seeing into the future. Unbelievable.



