Hacker News

Your article is very well-worded. I like the idea, but I am not sure it is reasonable for people "on the inside" to spend time fulfilling requests.

Could there be a JS solution to this? A volunteer inside the paywall could leave a tab open that periodically pulls requests from the main paperbay queue and tries to fulfill them (you would need some robot-like functionality, such as finding the PDF link). On success, it uploads the PDF. On failure (no subscription?), it notifies paperbay to put the request back in the queue.
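Something like this could run in that open tab. All the endpoint names (paperbay.example/...) and the findPdfLink heuristic are invented here just to sketch the idea:

```javascript
// Hypothetical volunteer tab: poll a queue, try to fetch the paper, report back.
// Endpoint paths are made up; the fetches ride on the volunteer's session cookies.

// The "find the PDF link" step: a crude scan of the article page for a .pdf href.
function findPdfLink(html) {
  const match = html.match(/href="([^"]+\.pdf)"/i);
  return match ? match[1] : null;
}

async function pollOnce() {
  const req = await (await fetch('https://paperbay.example/queue/next')).json();
  if (!req) return; // queue empty
  try {
    const page = await (await fetch(req.articleUrl)).text();
    const pdfUrl = findPdfLink(page);
    if (!pdfUrl) throw new Error('no PDF link found');
    const pdf = await (await fetch(pdfUrl)).blob();
    await fetch('https://paperbay.example/upload/' + req.id, { method: 'POST', body: pdf });
  } catch (e) {
    // No subscription, captcha, scrape failure: put the request back in the queue.
    await fetch('https://paperbay.example/requeue/' + req.id, { method: 'POST' });
  }
}

// In the tab: setInterval(pollOnce, 10 * 60 * 1000); // one request every 10 minutes
```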

With one request every 10 minutes and 1,000 volunteers, you could have an exodus rate of 6,000 papers/hour.
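The back-of-envelope math: each volunteer manages 60/10 = 6 requests an hour, so:

```javascript
// Throughput estimate: volunteers each fulfilling one request per N minutes.
function papersPerHour(volunteers, minutesPerRequest) {
  return volunteers * (60 / minutesPerRequest);
}

console.log(papersPerHour(1000, 10)); // 6000
```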

BTW, where are you hosted? How long do you think before they come for you if it gets big?

________________

PS: The protocol could be extended a bit if we need to handle captchas as well. These really piss me off because I can't get the paper via ssh to campus + elinks!

R: requestor F: friend (inside paywall)

   R-->tpb.com     doi:10000x200 PLZ
   tpb.com:        resolve doi:10000x200, prepare scrape recipe.
   tpb.com-->F     could you get jrnl.com/yr/issue/33131/
   F-->jrnl.com    GET ... 
   F<--jrnl.com    CAPTCHA.jpg  +  form el
   R<--tpb.com<--F solve plz ( CAPTCHA.jpg ,  form el )
   R-->tpb.com-->F form ans
   F-->jrnl.com    captcha form submit 
   R<--tpb.com<--F<--jrnl.com    PAPER.pdf
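From tpb.com's side, the exchange above is a small state machine per request. The state and event names below are my own invention; only the transitions mirror the diagram:

```javascript
// tpb.com's view of one request, following the diagram:
// doi request -> recipe -> friend fetches -> (captcha detour?) -> paper delivered.
function nextState(state, event) {
  const transitions = {
    'idle':              { 'doi-request':  'resolving' },
    'resolving':         { 'recipe-ready': 'friend-fetching' },
    'friend-fetching':   { 'captcha':      'awaiting-answer',   // F hit a captcha
                           'paper':        'done' },            // no captcha needed
    'awaiting-answer':   { 'form-ans':     'friend-submitting' }, // R solved it
    'friend-submitting': { 'paper':        'done' },
  };
  return (transitions[state] || {})[event] || state; // unknown event: stay put
}
```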


