Hacker News

you can memory-map files in chunks and keep references to the mapped buffers, evicting them once a certain limit is reached.
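A minimal sketch of chunked mapping, assuming a hypothetical 1 MiB chunk size (the class and method names here are illustrative, not from the original comment):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedMap {
    static final int CHUNK_SIZE = 1 << 20; // 1 MiB per mapped chunk (assumed)

    // Length of a given chunk: full CHUNK_SIZE except possibly the last one.
    static long chunkLength(long fileSize, long chunkIndex) {
        long offset = chunkIndex * (long) CHUNK_SIZE;
        return Math.min(CHUNK_SIZE, fileSize - offset);
    }

    // Map one chunk of the file read-only; callers cache and evict these.
    static MappedByteBuffer mapChunk(FileChannel ch, long chunkIndex) throws IOException {
        long offset = chunkIndex * (long) CHUNK_SIZE;
        return ch.map(FileChannel.MapMode.READ_ONLY, offset, chunkLength(ch.size(), chunkIndex));
    }

    public static void main(String[] args) throws IOException {
        // Demo on a temp file of 1.5 chunks: two mappings of different sizes.
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[3 * CHUNK_SIZE / 2]);
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            System.out.println(mapChunk(ch, 0).capacity() + " " + mapChunk(ch, 1).capacity());
        } finally {
            Files.delete(tmp);
        }
    }
}
```

Each `MappedByteBuffer` returned by `mapChunk` is an independent mapping, so dropping the reference (plus GC) is what eventually unmaps it.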

note that mapped buffers are not necessarily backed by RAM; they act more like swap space, except paging goes straight to the original file instead of a swap partition. the downside is that anything reading from the buffer can unpredictably incur IO overhead if there's memory pressure and the OS decides not to back the pages with memory. java doesn't have an api to check residency of buffers. a possible workaround is to queue buffers up on a separate thread and force them in before using them. if the pages are already resident the load should have low latency; if they aren't, only that prefetch thread eats the IO penalty.
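The prefetch workaround can be sketched with `MappedByteBuffer.load()`, which touches every page to pull it into memory. This is an illustrative sketch (the `Prefetcher` class is hypothetical), not the commenter's actual code:

```java
import java.nio.MappedByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Prefetcher {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Touch the buffer's pages on a background thread so that the
    // caller's later reads are (usually) served from RAM. If the OS has
    // evicted the pages, only this thread pays the IO cost.
    public Future<MappedByteBuffer> prefetch(MappedByteBuffer buf) {
        return pool.submit(() -> {
            buf.load(); // best-effort hint: reads every page of the mapping
            return buf;
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

Note that `load()` is only a hint; the OS may evict the pages again before the consumer gets to them, so this reduces rather than eliminates the latency risk.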



> java doesn't have an api to check residency of buffers.

Too late to edit now, but I misremembered that part. I think the issue was that isLoaded was not entirely reliable.


So, something like an LRU cache of MappedByteBuffers, one buffer per chunk (i.e. "piece")? I wonder what a queueing strategy would look like — just enqueue a couple of subsequent pieces when one is requested and loaded?
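An LRU cache of mapped chunks falls out of `LinkedHashMap` in access order almost for free; a minimal sketch, assuming chunks are keyed by index and the capacity is a tuning parameter:

```java
import java.nio.MappedByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache: chunk index -> mapped buffer. Dropping the eldest entry
// releases our reference; the mapping is unmapped once GC collects it.
public class BufferCache extends LinkedHashMap<Long, MappedByteBuffer> {
    private final int maxEntries;

    public BufferCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, MappedByteBuffer> eldest) {
        return size() > maxEntries; // evict least-recently-used chunk
    }
}
```

A real implementation would also want `computeIfAbsent`-style loading (map the chunk on miss) and some synchronization if multiple threads touch the cache.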


Start simple, then layer on optimizations if there are latency or throughput issues.




