Sci Hub Injector (github.com/rickwierenga)
313 points by sixtyfourbits on Jan 16, 2022 | 44 comments



Author here. Thanks for submitting this project! I made this because I thought it would be funny if publisher websites had SciHub links that look like they belong on the website [0]. I didn't know about the bookmarklet when I started this. Maybe I should have used that instead. Oh well.

I'm currently waiting for Mozilla to accept this into the add-on store. If that passes, I will submit it to Chrome as well.

[0] https://imgur.com/a/GP7rm43


It's great that this is so subtle. I like this better than a context menu entry or some other hidden access link.


Small suggestion: instead of hardcoding the .se domain, you might want to send a request to Wikidata to get the currently used domains. That's how similar Sci-Hub tools stay up-to-date.
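For anyone who wants to try this, here's a minimal sketch of such a lookup; it assumes Sci-Hub's Wikidata item is Q21980377 and that live mirrors are listed under the "official website" property (P856):

    // Minimal sketch: ask Wikidata's SPARQL endpoint for Sci-Hub's
    // listed official website(s). Q21980377 (Sci-Hub) and P856
    // ("official website") are the assumptions noted above.
    const query = "SELECT ?url WHERE { wd:Q21980377 wdt:P856 ?url }";
    fetch('https://query.wikidata.org/sparql?format=json&query=' + encodeURIComponent(query))
      .then(r => r.json())
      .then(data => {
        // one binding per listed mirror, e.g. "https://sci-hub.se/"
        const mirrors = data.results.bindings.map(b => b.url.value);
        console.log(mirrors);
      });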


I had never heard of Wikidata, but I might steal this idea for a similar app I have on F-Droid that pulls PDFs from Sci-Hub when a DOI link is clicked, using Android's intent system: https://f-droid.org/en/packages/com.sigmarelax.doitoscihub/


Cool idea. I'm gonna give this a try, thanks for sharing


Is Wikidata the proper way to get the currently functioning mirror? I was under the impression that you had to get it from Elbakyan's VK or the SciHub Telegram. I've been assuming that the subreddit would update with accurate links, so I've just been scraping it from there: https://github.com/smasher164/search/blob/53ae11b52f158d1986...


I found that source code extremely familiar and was wondering what it was.

I saw the shebang and still didn't understand what the heck raku was.

Until I searched it:

“Raku is a member of the Perl family of programming languages. Formerly known as Perl 6, it was renamed in October 2019.”


My god, why did you mention Wikidata? SPARQL is the most obscure fucking thing I've ever encountered. I've been sitting here for an hour trying to figure out how to get the data for a specific page!

I guess the REST API will do, ugh.
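In case it's useful to anyone else, here's a minimal sketch of that REST route (Special:EntityData serves any item as plain JSON; I'm assuming Q21980377 is Sci-Hub's item ID and the mirrors sit under the "official website" property, P856):

    // Minimal sketch, not any extension's actual code: fetch Sci-Hub's
    // Wikidata entity as JSON and read its "official website" (P856) claims.
    // Q21980377 and P856 are the assumptions stated above.
    fetch('https://www.wikidata.org/wiki/Special:EntityData/Q21980377.json')
      .then(r => r.json())
      .then(data => {
        const claims = data.entities.Q21980377.claims.P856 || [];
        console.log(claims.map(c => c.mainsnak.datavalue.value));
      });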

Related discussion: https://news.ycombinator.com/item?id=28277749


Ok, I'm not just here to bitch. Here's the code for my implementation of Sci-Hub mirror checking: https://observablehq.com/@iz/sci-hub

I made an iOS Shortcut based on the same code. To use it after installing, access it from the iOS Share sheet when you're on a relevant site. It looks up the preferred mirrors on Wikidata before running.

https://www.icloud.com/shortcuts/080b9f68c96a4491898b804547e...

Always review a Shortcut's actions before running it.


Thanks for the suggestion! I'll add that in a future version.


There's also this [0] userscript that does essentially the same thing but doesn't need a separate extension, assuming you already use a userscript manager. It also appears to support far more domains.

0: https://greasyfork.org/en/scripts/370246-sci-hub-button


Does this script work for you? I installed it in FF and tried it on 3 sample links, and none of them produced the extra link :/ The icon in Greasemonkey shows that the script recognized the URL and ran on each site.


Yeah, I've been using it for at least 6 months and it's been great. Running on Chrome with Violentmonkey.

As an example I get the icon in the top left of the page for this random IEEE paper: https://ieeexplore.ieee.org/document/6025669


Thanks, I was hoping there would be exactly this rather than messing with a new extension.


You can also use a one-line JS bookmarklet on the article page, such as the following:

    javascript:(function(){window.location = 'http://' + window.location.hostname + '.sci-hub.st' + window.location.pathname;})();


Another one that works:

    javascript:window.location='http://sci-hub.st/'+window.location
(scihub detects most academic websites)


Ooh, that's a lot simpler than my attempt to extract the DOI via regex (which isn't 100% reliable anyway, because of how flexible the DOI spec is...)

  javascript:location.href = 'https://sci-hub.se/' + document.getElementsByTagName('html')[0].innerHTML.match(/10\.\d{4,9}\/[-._;()\/:A-Z0-9]+/i)[0]


This post and the four (4) comments suggesting alternative ways to do this are a pretty good indicator that pirating papers is still far easier than going through official channels.

I think it would be marginally quicker for me to access a paper legally if I were on my uni's campus. But I'm WFH on the other side of the country and would need to log into the VPN. Sci-Hub with one of these solutions is much quicker!


There's also cost.

My company gives me access to a few journals, but at home I have no such thing. $20 is ridiculous for a paper, given that (a) the authors rarely see anything of this money, and (b) you often need to skim 10 papers before you find the 1 that's relevant.

Luckily, many papers in my research domain (compsci/ML) are open access: 90% are either on arXiv, or Google Scholar knows a PDF URL.

For the rest, scihub is a lifesaver.


> the authors rarely see anything of this money

IMHO it's not rarely but never; I'm not aware of any plausible scenario where an author would ever get a single cent of that payment.


> the authors rarely see anything of this money

The authors never see anything of that money. Scientific journals do not pay the authors of the research papers they publish.


Indeed, and even worse: authors pay scientific journals thousands of dollars in publishing fees per article.

You want your article to be open access? No problem, that'll be thousands more dollars.


My main problem with Sci-Hub right now is that it stopped adding new content a year or two ago, which means you can't use it to stay on top of the current state of the art. I personally use the bookmarklet; I'm far more inclined towards that than some random browser extension.


The reason they've stopped temporarily is an ongoing court case in India initiated by Elsevier. I'm not sure this is the best article on the case, but basically Sci-Hub agreed not to post any new articles for a period of time (which has since been extended) while the case is ongoing:

https://www.hindustantimes.com/india-news/no-new-articles-on...

They did release a bulk issue of 2.7 million articles a few months ago (as part of the torrent collection available from libgen), but nothing new since then.


I thought they started publishing new papers again?


I think they just added a couple million new articles as a one-off.


I feel this pain too, it is a tragedy. :(


One of the contributions being solicited on the page is extracting DOIs from a given webpage. Right now it has a few site-specific methods for grabbing DOIs.

I've had a lot of success running Zotero's translation server for my own bibliographic needs, but I would really love it if I didn't have to host it on a server somewhere (and could instead do that part in the browser engine I already depend on to download PDFs anyway). Has anyone here figured out how to wrap the translation server brains (i.e., the recipes for each URL) into a simple library?


Looks like the Zotero devs have already done that [1]. You can probably just vendor the repo in a browser extension, I'd think.

[1] https://github.com/zotero/translators


I have access to most of the good journals through my institution, but this is more convenient than the typical process, which involves logging in to a proxy and going through one or more gateway sites to find the actual PDF download.


The fact that Sci-hub is still illegal is an indictment of the legislative process.


For people who are using Zotero, this extension will be great.

https://github.com/ethanwillis/zotero-scihub


AutoHotkey and opening SciHub links on Windows: https://www.hillelwayne.com/post/ahk/


BTW did you know that Firefox supports keywords for bookmarks OOTB?

You can set up a keyword, e.g. "shb", for the code from above:

    javascript:window.location='http://sci-hub.se/'+window.location

and run it by typing "shb" in the address bar. No need for bookmarklets.

Or how do y'all use bookmarklets? E.g. on Chrome? Is your bookmark bar always visible?


Being green here means I must ask (ask me about poetry!):

What do I actually write in the address bar?


In Firefox:

      1. Create a temporary bookmark (e.g. by pressing Ctrl + D, or clicking on the star in the address bar)
      2. Open your bookmarks (e.g. open the bookmark bar, or press Ctrl + B)
      3. Right-click the bookmark and choose "Edit bookmark" (or right-click, then press "i")
      4. Fill the "URL" field with

            javascript:window.location='http://sci-hub.se/'+window.location

      5. Fill the "keyword" field with "shb" (or whatever you want)

That's it. Whenever you type "shb" in the address bar and hit Enter, it navigates you to http://sci-hub.se/ followed by the current page's URL.


A simple bookmarklet is more than enough for me...


I use something similar with Tampermonkey: it detects hundreds of websites and auto-injects the Sci-Hub logo and link into the page, including search results.

https://greasyfork.org/en/scripts/370246-sci-hub-button


If you like this, you may also enjoy PaperPanda: https://chrome.google.com/webstore/detail/paperpanda/ggjlkin...


Did this get rejected from the Chrome Store? Would've been more convenient.


Author here! It’s currently in review for Firefox and if that passes I’ll submit it to chrome as well.


Works like a charm! Thanks!


gone already


wow, nice time-saver.



