The Research Pirates of the Dark Web (theatlantic.com)
196 points by chewymouse on Feb 10, 2016 | 43 comments




You pasted an HTTPS address, but the server doesn't answer over HTTPS (HTTPS is somewhat redundant when you use onion services anyway). The correct one is http://scihub22266oqcxt.onion/


It's not redundant; if someone sets up a malicious exit node they can view all your traffic unless it is encrypted, e.g. with TLS.

Of course, for this kind of site that is not so important, but you shouldn't log in or enter sensitive information on any site over Tor without HTTPS.


Traffic to .onion URLs is terminated within the Tor network. It doesn't pass through exit nodes.

HTTPS is actually a bit redundant for .onion URLs, as the hostname is a representation of the host's public key.
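
For the curious, here is a minimal sketch (Python; "der_encoded_pubkey" is a placeholder for the service's DER-encoded RSA key) of how a v2 .onion hostname like the one above is derived, per the Tor rendezvous spec:

    # Sketch: a v2 .onion hostname is the base32 encoding of the first
    # 80 bits (10 bytes) of the SHA-1 hash of the service's DER-encoded
    # RSA public key.
    import base64
    import hashlib

    def onion_address(der_encoded_pubkey: bytes) -> str:
        digest = hashlib.sha1(der_encoded_pubkey).digest()
        return base64.b32encode(digest[:10]).decode().lower() + ".onion"

So connecting to the right 16-character hostname already authenticates the host's key, which is most of what a certificate would give you.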


It's fully redundant for .onion sites. Traffic destined for them never goes through exit nodes, and it is always encrypted end-to-end so that only the specific .onion destination can read it.


> the dark web, a part of the Internet often associated with drugs, weapons, and child porn.

Indeed, due to journalists perpetuating that meme over and over again. Thanks for strengthening the bad image, Atlantic.


With the new X-Files season, I've had an excuse to watch some over-the-air TV for the first time in a while. Before and during the most recent episode there was a recurring teaser for the local evening news that went something like "the hidden Dark Nets that cyber criminals are using and why they're so difficult to shut down!"

The accompanying graphics were things like dark hooded figures in silhouette and images of credit cards being scanned on something resembling a flatbed scanner. The whole thing was a real reminder of why I never watch network news unless something very local and "breaking" is going on in my city.


"The researchers, who have been studying and writing about encryption policy, sniffed around with a Tor browser and found 1,547 out of 5,205 total websites live on the dark web engaging in illegal activity."

It is one of the most common ways of distributing those three things, so yes, that "meme" is correct.

http://www.networkworld.com/article/3031661/internet/drugs-g...


You think the most common way to distribute drugs, weapons, and child porn is over Tor hidden services? I'm incredibly skeptical.

Doubtless a significant percentage of child porn is distributed that way, but I'm under the impression they prefer to use Tor to access encrypted blobs others host for them, like on Usenet. I've met many drug users, but only two of them were even aware of hidden services. And however many guns are sold on hidden services will never compare to the number purchased by the US military, or sold at gun conventions, etc.


I'm surprised 3/4 aren't illegal!


So far there have been no signs of large-scale arms or drug trafficking activity on these websites, though.


It's a self-fulfilling prophecy.

Every time a journalist writes that it is "often associated with" those things, it gradually becomes more associated with them.


Well, "drugs, weapons, and child porn" are better clickbait than "resisting censorship and repression", no?


Thanks, I was just about to quote the same paragraph. Totally ludicrous.

Every time a misguided journalist writes about the "underground drug-web", which is shaped like the underneath of an iceberg, and 1000x larger than the "real" Internet, Tim Berners-Lee sheds a tear. /s

Please make it stop.


To be fair, that is exactly how the Internet itself was treated by the media in the 90s.


To be fair, they do have a thriving market for PayPal accounts and money laundering.

DEAR NSA: THE FACT THAT I RESEARCHED THIS DOES NOT MEAN YOU GET TO KICK MY DOOR DOWN NOW. THANK YOU.


You are aware that the NSA doesn't kick down anyone's door, and that any SIGINT intercepts are accompanied by a message that they can't be used in criminal prosecution, right?


Well, they don't kick down doors, but they have picked plenty of locks. You probably don't need to worry about the Special Collection Service [0], though, unless you live in some kind of communications facility. The NSA's crypto programs are well known, but one area that gets very little attention is the part of the agency full of military personnel on loan from their parent services. Those people don't spend their day analyzing cryptographic functions.

[0] https://en.wikipedia.org/wiki/Special_Collection_Service


Did not know that.

Would not be surprised if some people ignore this.


So does the "clear web"...


Fair enough.


Really surprised this article makes no mention of Aaron Swartz and his battle with JSTOR.


This is the logical continuation of Aaron Swartz's work. He should be mentioned.


Not to belittle Aaron, but Library Genesis is much older than his JSTOR actions.


Aaron Swartz sure deserves a mention.


Why are academics forced to publish their papers through these paywalls? Hosting a PDF on the university website seems cheaper than going through these publishers.

Are these university imposed rules? Are they forced to go through these publishers if they want their paper peer reviewed and published in journals?


To put it short: academic "weight" works much like Google, in that the more citations a paper has, the higher it is ranked. Of course, a citation from a local university journal (probably every university has at least one) is worth much, much less than a citation from Nature. Rank is calculated from the Impact Factor, which is in turn calculated from the number of citations. Sort of a closed loop.

This would not be too much of a problem if not for funding. You see, funding, which is necessary for anything non-trivial, is usually awarded according to a researcher's reputation, which is again based on impact, citations, etc.
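
Concretely, the Impact Factor for year Y is the citations received in Y by items published in Y-1 and Y-2, divided by the number of citable items published in those two years. A toy calculation (Python, with made-up numbers):

    # Toy Impact Factor calculation; the numbers are illustrative, not real.
    # IF(2015) = citations in 2015 to items from 2013-2014,
    #            divided by citable items published in 2013-2014.
    citations_2015_to_2013_2014 = 3000
    citable_items_2013_2014 = 500
    impact_factor_2015 = citations_2015_to_2013_2014 / citable_items_2013_2014
    print(impact_factor_2015)  # 6.0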


It might even be more accurate to say that Google works like academic "weight": it's my understanding that PageRank was originally intended to rank research papers.
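
The resemblance is direct: a paper or page ranks highly when other highly-ranked papers or pages cite it. A minimal power-iteration sketch over a toy graph (Python; damping factor 0.85, as in the original PageRank paper):

    # Minimal PageRank power iteration over a toy link/citation graph.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    rank = {node: 1 / len(links) for node in links}
    for _ in range(50):
        rank = {
            node: 0.15 / len(links) + 0.85 * sum(
                rank[src] / len(out)
                for src, out in links.items() if node in out)
            for node in links
        }
    print(rank)  # C ranks highest: it is "cited" by both A and B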


yep, "Impact Factor" (IF) is the term, and (just to clarify) it encourages but does not force. there are open access journals like PLoS w/ significant IF, albeit not comparable to Nature.

Since the link below was written, PLoS has become open access not only for articles but also for the data those articles are based on. IMO, open access to data (w/ privacy measures, of course, for human subjects) is of the same order of importance as access to articles.

http://scholarlykitchen.sspnet.org/2013/06/20/the-rise-and-f...


This was my immediate question. If the "faculty do the research, write the papers, referee papers by other researchers, serve on editorial boards, all for free", and then are forced to "buy back the fruits of [their] labour at outrageous prices", why do academic institutions even bother paying money to these companies? If the refereeing is being done for free, naively it surprises me that universities haven't just set up an independent organisation to do peer reviewing.


Researchers don't pay for access to journals; universities do. There is a personal benefit for a researcher to publish in a prestigious but paywalled journal, but no personal cost.


A good analogy is a farmer shopping at a supermarket.


I use it almost every day; it's even more convenient to search for an article in Google Scholar and then download the paper through Sci-Hub than to use Scopus.com or Web of Knowledge's search engine.


Do you get good results? I just tried it out and most of the articles didn't work. I'm hoping it's just because of a traffic spike.


Do you search by article name, DOI, or link? I prefer DOI because, I guess, it's easier for the engine to find a paper. In 95% of cases I find the paper I need by DOI.


What does DOI mean?


https://en.wikipedia.org/wiki/Digital_object_identifier

A digital object identifier (DOI) is a serial code used to uniquely identify objects. The DOI system is particularly used for electronic documents such as journal articles.
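
Since DOIs resolve through doi.org, you can go from a DOI straight to the publisher's landing page. A quick sketch (Python; 10.1000/182 is the DOI Handbook's own DOI, used here as an example, and this assumes doi.org answers HEAD requests as it usually does):

    # Resolve a DOI to the publisher's landing page via doi.org.
    # 10.1000/182 is the DOI of the DOI Handbook itself.
    import urllib.request

    req = urllib.request.Request("https://doi.org/10.1000/182", method="HEAD")
    with urllib.request.urlopen(req) as resp:
        print(resp.url)  # final URL after following redirects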


Has anyone successfully found something useful which they couldn't find elsewhere using this tool? I tried it on a few things yesterday and was disappointed.


I would say easily a third of the time I need a paper, I'll have to get it through Libgen, especially if the paper is new or published through Elsevier or Nature. (If you've read through my comments and noticed me posting Dropbox links to fulltext PDFs: Libgen in action.) It's also good for older papers pre-2000, where it's unlikely the author has put up a copy or someone else has hosted it, and finally, it's good for getting finalized copies of papers as published, since preprints often change quite a bit (economics papers in particular seem to spend years hanging around going through many versions which can change conclusions dramatically).


Yes, I can read IEEE Annals of the History of Computing papers with it! Subscriptions are extremely expensive, plus I despise the IEEE enough for previous conduct that I'd rather not pay them.


I find the way Tor is used lacking. I really would like to have .onion resolution across my whole system (in my case, Linux, which I use extensively). So, here's a way to do just that:

I use a significant number of hidden services to communicate back and forth with my machines. My eventual goal was to be able to process data from different geographical areas and have it inserted into MQTT via Node-RED. Until now, it was all or nothing with regard to proxy settings.

I have figured that out. For those who want to integrate seamless .onion usage across the whole of Node-RED (and every other Linux program), follow this.

Get the following packages (Ubuntu, Debian):

    sudo apt-get install tor iptables dnsmasq dnsutils


Add the following to the /etc/tor/torrc file:

    # Map .onion names onto this otherwise-unused address range on resolve
    VirtualAddrNetworkIPv4 10.192.0.0/10
    AutomapHostsOnResolve 1
    # Accept transparently redirected TCP connections here
    TransPort 9040
    # Answer DNS queries (dnsmasq forwards .onion lookups to this address)
    DNSPort 53
    DNSListenAddress 127.0.0.2

Restart Tor:

    sudo service tor restart

Edit /etc/dnsmasq.conf and add the following:

    # Listen for DNS queries on localhost
    listen-address=127.0.0.1
    # Use these upstream resolvers for normal domains...
    resolv-file=/etc/realresolv.conf
    # ...but send .onion lookups to Tor's DNSPort instead
    server=/onion/127.0.0.2

Make a new file called /etc/realresolv.conf and add this to it:

    # Upstream resolvers for non-.onion domains (8.8.8.8 is Google Public DNS)
    nameserver 107.170.95.180
    nameserver 8.8.8.8

Restart dnsmasq:

    sudo service dnsmasq restart

Run the iptables update to redirect the mapped address range to Tor's TransPort:

    sudo iptables -t nat -A OUTPUT -p tcp -d 10.192.0.0/10 -j REDIRECT --to-ports 9040

This rule must also be applied at every boot, so add this line to /etc/rc.local, ABOVE the "exit 0":

    /sbin/iptables -t nat -A OUTPUT -p tcp -d 10.192.0.0/10 -j REDIRECT --to-ports 9040


Once you do those things, your whole Linux system will be able to resolve .onion addresses seamlessly while leaving normal address schemes alone. This means you can talk to an MQTT-out node on a .onion address, or control remote servers via an exec node and SSH. And since you don't have to poke holes through firewalls, networking between hidden nodes with Node-RED sitting on top makes IoT sensor capture from remote areas (work, home, car, hackerspace) very easy.
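
As a quick sanity check that the whole chain works (the hostname below is a placeholder; substitute one of your own hidden services):

    # With the dnsmasq/iptables setup above, .onion names resolve to
    # addresses in 10.192.0.0/10 and TCP connections to them are
    # transparently routed through Tor. The hostname is a placeholder.
    import socket

    addr = socket.gethostbyname("yourhiddenservice.onion")
    print(addr)  # should fall inside 10.192.0.0/10

    sock = socket.create_connection(("yourhiddenservice.onion", 22), timeout=60)
    print("connected over Tor to", sock.getpeername())
    sock.close()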

Of course, this does not cover how to actually add a new hidden service. You should think very hard before enabling one: make sure there is good authentication on it along with the newest updates, because there is no way to determine the origin of attacks against these kinds of services.

Cite: http://www.linuxquestions.org/questions/linux-networking-3/h... . I have confirmed the directions work flawlessly on Ubuntu 14.04, 15.04, and 15.10 (various flavors: Ubuntu, Xubuntu, Kubuntu).


Thank you, an interesting approach.


Somehow I believe that thing could have an awesome forum. (But alas, it does not have one.)



