
The biggest problem I see with this is that in real life clients will hard-code the URLs anyway and break when the URLs change.



Why is this the biggest problem? Every protocol has broken clients. For example, most XML parsers are broken, but that is not a problem with XML, is it?


What's the alternative? For clients to refresh their copy of the "parent" resource often [there may be a better term for this... I mean the resource used to reach the resource that is actually wanted]? For something accessed interactively all the time, fetching it on every request would add a lot of latency, so I suppose it must be cached... and then the cached parent resource would be refreshed regularly and on failure? How far up the tree should that go?

In other words, if I want to get the Twitter feed for Joe's "ABC" group and I don't want to hard-code the URL for that resource, how often should I download (and cache), parse, and traverse the resources for Twitter's users, then Joe's feed, then Joe's ABC group, etc.? Or am I misunderstanding this?
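
Concretely, the traversal I'm imagining looks something like this. A minimal sketch in Python, and nothing here is from a real API: the entry-point URL, the "links" key, the link names, and the fixed cache TTL are all made up for illustration.

    import time
    import requests

    CACHE_TTL = 300   # seconds; an arbitrary choice for this sketch
    _cache = {}       # url -> (fetched_at, parsed JSON body)

    def get_resource(url):
        """Fetch a JSON resource, reusing a cached copy while it is fresh."""
        cached = _cache.get(url)
        if cached and time.time() - cached[0] < CACHE_TTL:
            return cached[1]
        resp = requests.get(url)
        resp.raise_for_status()
        body = resp.json()
        _cache[url] = (time.time(), body)
        return body

    def follow(entry_point, *link_names):
        """Walk a chain of named links instead of hard-coding the final URL."""
        doc = get_resource(entry_point)
        for name in link_names:
            # Assumes each representation carries a hypothetical "links" map
            # from link name to URL.
            doc = get_resource(doc["links"][name])
        return doc

    # Hypothetical traversal: API root -> users index -> Joe -> his "ABC" group.
    abc_group = follow("https://api.example.com/", "users", "joe", "abc")

With something like this, only the hops whose cached copy has expired cost a round trip, so "how far up the tree" really comes down to how long each intermediate resource is allowed to stay cached.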


In this case, respecting HTTP's Expires: or Cache-Control: max-age= headers seems like the thing to do.
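
For instance, a client could compute how long each hop may be reused from those headers. A minimal sketch of that, assuming the caller already has the response headers from something like the requests library; the function name and the fallback default are my own, not from any particular library.

    import time
    from email.utils import parsedate_to_datetime

    def freshness_lifetime(headers, default=0):
        """Seconds a response may be reused without refetching, taken from
        Cache-Control: max-age= or, failing that, Expires:."""
        cache_control = headers.get("Cache-Control", "")
        for directive in cache_control.split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                try:
                    return int(directive.split("=", 1)[1])
                except ValueError:
                    break
        expires = headers.get("Expires")
        if expires:
            try:
                return max(0, parsedate_to_datetime(expires).timestamp() - time.time())
            except (TypeError, ValueError):
                pass
        return default

    # Hypothetical usage with a fetched response:
    # resp = requests.get("https://api.example.com/users/joe")
    # ttl = freshness_lifetime(resp.headers)

That way the server, not the client, decides how stale each cached "parent" resource is allowed to get before it is re-fetched.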



