Also an economist and a data scraper/consultant here -- depending on the data, sometimes all you need to figure out is correlation: frequency of updates, listings being live for X time, clusters of listings around Y days, etc.
In terms of a few real-life examples, on the one hand you have eBay, which provides you with sold data (via API through Terapeak). On the other hand you have Craigslist, which is kinda opaque and hates scraping, but you can monitor listings and their half-life. (Listings that disappear quickly presumably sold fast; listings that stick around for weeks, relisted over and over, presumably have lower liquidity and/or are priced too high.)
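The half-life monitoring can be as simple as diffing periodic scrape snapshots: record when each listing ID first and last appeared, and the gap is its observed lifetime. A minimal sketch (the listing IDs and dates are made up for illustration):

```python
import datetime

# Hypothetical snapshots: each scrape date maps to the set of listing IDs seen.
snapshots = {
    datetime.date(2024, 1, 1): {"a1", "a2", "a3"},
    datetime.date(2024, 1, 8): {"a1", "a3", "a4"},
    datetime.date(2024, 1, 15): {"a1", "a4"},
}

def listing_lifetimes(snapshots):
    """Return observed lifetime in days per listing ID (first seen -> last seen)."""
    first_seen, last_seen = {}, {}
    for day in sorted(snapshots):
        for lid in snapshots[day]:
            first_seen.setdefault(lid, day)  # only set on first appearance
            last_seen[lid] = day             # always updated to latest sighting
    return {lid: (last_seen[lid] - first_seen[lid]).days
            for lid in first_seen}

print(listing_lifetimes(snapshots))
# "a2" vanished after one snapshot (fast turnover?); "a1" is still hanging around.
```

The resolution is obviously bounded by your scrape frequency, so the lifetimes are interval-censored: a listing that "lived 0 days" really lived somewhere between 0 days and one scrape interval.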
eBay's completed listings is definitely one of the best sources of actual sales data on the Internet that I'm aware of. Besides that, in some cases there are ways to imperfectly estimate quantities sold when best-seller rankings are available (e.g. at Amazon) -- Chevalier and Goolsbee were the first to suggest this approach back in 2003.[1]
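The core of that approach is assuming sales follow a power law in sales rank, i.e. log(quantity) = a - b * log(rank), then calibrating a and b from a handful of (rank, known sales) pairs -- say, a seller's own titles. A sketch of the idea (the calibration numbers are invented, not real Amazon data):

```python
import math

def fit_power_law(points):
    """Least-squares fit of log(q) = a - b*log(rank) through (rank, sales) points."""
    xs = [math.log(r) for r, _ in points]
    ys = [math.log(q) for _, q in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of log(q) on log(rank) is -b, so negate it
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my + b * mx
    return a, b

def estimate_sales(rank, a, b):
    """Predicted units sold at a given sales rank under the fitted power law."""
    return math.exp(a - b * math.log(rank))

# Hypothetical calibration points: (sales rank, observed weekly units).
calibration = [(100, 50.0), (1_000, 10.0), (10_000, 2.0)]
a, b = fit_power_law(calibration)
print(round(estimate_sales(3_000, a, b), 1))
```

With only ranks observable, any scale calibration like this is the weak link; the exponent b tends to be more stable across categories than the intercept a.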
As you mentioned, monitoring half-life is another imperfect approach, but it is of course plagued by false positives (a listing goes away without any sale having been made). There was a Google Tech Talk many years ago where some economists took this approach[2], except they were looking at pricing power rather than measuring quantity sold.