Sufficiently fast software often lets you leave out whole layers of crap and needless indirection, the most common being caching. Fixing an algorithm so you can drop a dedicated database of intermediate results can be a huge maintainability/usability win. The same principle shows up all over the place: immediate mode UIs; networking, where CSS image tiling was a workaround for HTTP/1's small-request overhead vs. QUIC just fixing it; importing giant CSV files through some elaborate ETL process vs. just having a lightning fast parser+query engine; etc.
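
To make the CSV point concrete, here's a rough Rust sketch of the "no ETL, just scan the source" idea. The sales.csv path, the column index, and the query are all made up for illustration, and a real lightning-fast parser would handle quoting and do far better than a naive split — this is just the shape of the approach:

    // Answer the query by streaming over the raw CSV every time, instead of
    // loading it into a staging database first. If the scan is fast enough,
    // the whole ETL/caching layer can simply be deleted.
    use std::fs::File;
    use std::io::{self, BufRead, BufReader};

    /// Sum one numeric column straight from the source file on every call:
    /// no intermediate store, no invalidation, nothing to keep in sync.
    fn sum_column(path: &str, col: usize) -> io::Result<f64> {
        let reader = BufReader::new(File::open(path)?);
        let mut total = 0.0;
        for line in reader.lines().skip(1) {              // skip the header row
            let line = line?;
            if let Some(field) = line.split(',').nth(col) {
                if let Ok(v) = field.trim().parse::<f64>() {
                    total += v;
                }
            }
        }
        Ok(total)
    }

    fn main() -> io::Result<()> {
        // Hypothetical file and column, purely for the example.
        println!("total = {}", sum_column("sales.csv", 2)?);
        Ok(())
    }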

Depending on how you look at it, you could view large chunks of DOM state through this lens: intermediate data that only exists to cache HTML parsing results. What's the point of allocating a hash table to represent an element's attributes if they're unchanged from the source document and reparsing them from the source is just as fast as keeping the parsed form around? These kinds of tricks tend to only present themselves after the optimization work is done, which is annoying, because it's usually so damn hard to justify optimization work in the first place.
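
A sketch of that lazy-attribute idea, again in Rust with made-up types (LazyElement is hypothetical, and it assumes attribute values contain no whitespace, which a real HTML tokenizer obviously can't): keep a slice of the original source and rescan it on lookup instead of allocating a table per element.

    /// An element that only remembers where its attributes live in the source
    /// text, instead of owning a parsed hash table of them.
    struct LazyElement<'src> {
        raw_attrs: &'src str, // e.g. `class="post" id="c42" data-n="1"`
    }

    impl<'src> LazyElement<'src> {
        /// Reparse on access: a linear scan over the original bytes.
        /// No per-element allocation, no duplicated state to keep in sync.
        fn attr(&self, name: &str) -> Option<&'src str> {
            for pair in self.raw_attrs.split_whitespace() {
                if let Some((key, value)) = pair.split_once('=') {
                    if key == name {
                        return Some(value.trim_matches('"'));
                    }
                }
            }
            None
        }
    }

    fn main() {
        let el = LazyElement { raw_attrs: r#"class="post" id="c42" data-n="1""# };
        assert_eq!(el.attr("id"), Some("c42"));
        assert_eq!(el.attr("href"), None);
    }

Whether that wins depends on access patterns, of course — reparse-on-read only pays off if lookups are rare or the scan really is as cheap as the hash, which is exactly the kind of thing you only learn after doing the optimization work.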
