
I can only hope they actually find a smoking gun implicating Taylor as a true conspirator in this case, because the picture painted by the article gives a whole new meaning to 'chilling effect'.

Did he pick the wrong place to advertise his code? HackForums could just as easily have been Hacker News. According to the article, Taylor actively worked to defend against malicious use of his software: deactivating accounts he found were using the software to launch attacks, and eventually removing functionality like password scraping and keylogging which ultimately proved too alluring to black hats.

The tool has obvious non-infringing use. That it can also be used maliciously cannot be a factor. If NanoCore is a criminal conspiracy, I'd hate to think what the FBI thinks of Metasploit or Tor.

The article puts forward a good theory for how the FBI might have found themselves in this position. They are used to barging into these guys' houses, crawling through all their equipment, and finding actual evidence of collusion with black hats. They are used to pressing these guys to turn state's witness, and in the past it's worked out great when a trusted malware provider landed them 100 convictions. I am shocked, shocked let me tell you, that the FBI would, upon not finding any real evidence of a conspiracy, press on with charges against a sole developer with $60,000 to their name.

Taylor Huddleston might not exactly be Aaron Swartz, but if the truth is anything like how The Daily Beast is telling it, Taylor is going to need a lot of help and a lot of support to get through this, and I hope he gets it.




You say that the tool has obvious non-infringing use, and the article claims that security experts who have examined NanoCore say there's nothing in the code to disprove Huddleston's claim that he intended it for lawful use.

I looked at a youtube video of NanoCore [1] and it's immediately obvious that all of the above is bullshit. This is just a modern version of Sub7. [2]

Some features that NanoCore offers:

* Disable webcam indicator light

* Lock computer with a password of your choosing and show a message on the computer. The youtuber says it's for ransom.

* Swap mouse button functions

* Open CD tray

* Keylogger

* Extract passwords of various applications

* Send SYN floods from all your controlled computers

What exactly is the legitimate use of disabling the webcam indicator remotely? Combine this with the fact that NanoCore was originally launched on HackForums and I'd say this is a slam dunk case of a tool being purpose built for illegal activity.

Now whether someone should be held accountable for building such tools without using them themselves is an interesting question. However, please don't try to act like this tool was built for anything other than malicious activity.

--

[1] https://www.youtube.com/watch?v=J1uzu6hzSQQ

[2] https://en.wikipedia.org/wiki/Sub7


>>Disable webcam indicator light

Schools and corporations do this all the time for theft reaction (taking a photo of the thief without them knowing); it is a feature they want. Some schools have gotten in trouble for turning it on and catching children in their rooms.

>Lock computer with a password of your choosing and show a message on the computer. The youtuber says it's for ransom.

Again, a legitimate theft reaction.

> Swap mouse button functions ..Open CD tray

Is that malicious, really? Enough for jail time?

>* Keylogger

Plenty of corporations have keyloggers on their systems; some even go as far as 24/7 keylogging and screen recording while the system is on.

>>* Send SYN floods from all your controlled computers

That one you may have a case for... the rest all have legit purposes and are used today by enterprises worldwide.


Even the SYN flood could be used for DDoS testing against an internal server for benchmarks. There have been far more capable technologies where responsibility for malicious use falls squarely on the end user.

The reference to blaming a gun manufacturer for a crime is spot on. Especially when the people abusing it are pirating the software, you can't even point to the author as their arms dealer; they stole the product and are using it with malicious intent.


It is more about the total package. nmap and ssh could be used to build a distributed SYN flood tool quite easily. But when you combine all of these features into one tool, it paints a picture of intent that gets harder and harder to argue against.

I have great concern as a person who has built and released software whose only real purpose is to perform MiTM on network traffic. On the other hand, my software isn't popular with criminals and I break software professionally. It would take a lot of effort to package most infosec and computer tools into easy to use hacking tools.

We tread a difficult line here, but at some point there is no charitable interpretation for a software package. I think at the end of the day I still lean in this guy's favor, but he makes it really hard. It was for profit software and I bet if we have the whole back story of evidence it will become even more difficult to defend the author. Intent matters, even with software.


What that sounds like to me is that you would consider dozens of individual, potentially malicious packages to be benign, but when they're brought under one umbrella the result is malicious?

Every feature I've read that is included in that software suite has a good use case with zero malicious intent, and oftentimes can be very useful to white hat hackers, system administrators, and security analysts alike. I still don't believe it is the fault of the author that black hat hackers are pirating and abusing a useful software suite, especially when it isn't being advertised exclusively towards them and the author has in many ways attempted to mitigate or limit harmful uses and users.

Like a gun manufacturer who offers weapons as a believer in home defense and the right to bear arms, only to have criminals steal merchandise and use it to rob a bank. Or the guy who invented dynamite, which has great uses such as tunneling through mountains, only to have it used for derailing and looting trains. You can kill a man with a pencil; that doesn't mean a 20-pack of pencils was produced with malicious intent. Dangerous use cases don't necessarily mean that is their purpose.

I agree it is a difficult line to tread, and in my opinion it really boils down to his involvement in the criminal activity itself.


I liken this to weaponizing dynamite. It is a step beyond a simple tool. But still just a tool. The criminal activity matters. Also, this software was marketed toward the black hat community based on other threads here and my own understanding of how this software got its popularity.


You don't need a remote access tool to test DDoS. That would be a silly use case for something better accomplished with ssh or a remote desktop tool.


Can you name a major corporation that does "24/7 keylogging"?


I don't know about major corporations but smaller businesses or government institutions (such as my old highschool) use stuff like this.


I hope high schools like that keep getting sued.

Also really glad I'm not in high school anymore.

http://www.pcmag.com/article2/0,2817,2386599,00.asp


Can you point out where it's described in NanoCore as "24/7 keylogging", as opposed to a feature that can be enabled when needed (in the case of theft or suspected misdeeds, for instance)? If not, you're building a strawman.


It's not "building a strawman," it's a direct response to a claim by the parent. It's open to some amount of interpretation how much of a bearing it has on the broader discussion, but if the claim is false it absolutely deserves push-back - especially insofar as these understandings help determine norms.


I know of a call center that used to do it; it was stored along with screen capture and audio recording data.

Basically everything the employee did while interacting with the customer was recorded.


Some of the tax prep and strict financial institutions do.


I've done work for gigantic investment banks, hedge funds, two of the world's largest retail banks, several insurance companies, and three major trading exchanges. None of them keylog. Can you give me a more specific example of "strict financial institution" that does keylog all its employees?

At every F-500 company I've ever been posted to, the logs produced by a keylogger would be considered a far greater threat than anything the keylogger itself might detect. I can't imagine the regime you'd have to come up with to protect those logs.


It would be a nightmare having to classify this flood of data, store it, manage its lifecycle, identify (or de-identify) it, understand its risk properties from legal, privacy, and insurance perspectives, manage its domicile(s)... Hard to imagine what benefit would outweigh all the cost and risk.


Bloomberg.


Comcast.


> Send SYN floods from all your controlled computers

Seems potentially useful for stress testing. Especially if you're trying to make your service more resistant to DDoS attacks.
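
For what it's worth, a SYN generator for that kind of internal test is only a few lines with off-the-shelf tooling. Here's a rough sketch in Python with scapy -- purely illustrative and nothing to do with NanoCore itself; TEST_HOST is a made-up lab address standing in for a server you own:

    # Hedged sketch: fire a burst of bare SYNs at a server you control,
    # e.g. to benchmark SYN-cookie or rate-limit settings before an attacker does.
    # Assumes scapy is installed; needs raw-socket (root) privileges.
    from scapy.all import IP, TCP, RandShort, send

    TEST_HOST = "192.0.2.10"   # hypothetical lab machine (TEST-NET-1 range)
    PORT = 80

    def syn_burst(count=1000):
        # Each packet is a lone SYN with a random source port; the handshake
        # is never completed, which is what exercises the server's backlog.
        pkt = IP(dst=TEST_HOST) / TCP(sport=RandShort(), dport=PORT, flags="S")
        send(pkt, count=count, verbose=False)

    if __name__ == "__main__":
        syn_burst()

Scapy writes raw packets, so this has to run with elevated privileges.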


This is about as credible as saying that the Michelangelo virus was really intended as a remote wipe tool for IT departments.


There's a long tradition there; wasn't BO2K non-ironically marketed as a system admin tool?


No, it was ironically marketed as a system administration tool, for exactly the subtextual reason this story suggests one might.


Thanks for posting the video, it's a nice demo of the tool. This is actually a really awesome program I could see myself using to control and monitor my own machines.

Yes, this clearly has the kitchen sink of functionality thrown in. In many ways this looks like a platform for experimentation, with the developer clearly building it as a labor of love and a learning exercise as much as, if not more than, a commercial enterprise.

> What exactly is the legitimate use of disabling the webcam indicator remotely?

In a context menu labeled 'Swiss Army Knife' there's an option to enable/disable the webcam light. There are also options like making the computer speaker beep. By the way, most video recording software provides options to toggle the webcam light in advanced menus.

Under an option called 'NanoStress' you can send several traffic patterns. Yes, it even has a bit of iPerf built in.

Frankly this is a really neat tool with tons of useful applications. It's highly distressing that this could get you raided and charged in Federal court with conspiracy.


You can see yourself installing a RAT with a keylogger, webcam light disabler, and SYN flood feature to manage your own computers?


Certainly, this program has a number of features which I needed on a daily basis in my WiFi testing lab. We had hundreds of headless machines running in isolation chambers which needed automated tools for remote controlling all aspects of the system.

Back in the day, we programmed our own agent to do things like provide remote program execution, file system access, NDIS/OLE/DCOM control, traffic generation, packet capture, system profiling, key-press and mouse-click macros for UI automation, etc. We had many of the same options for automating how the agent was installed, such as customizing the build for automatic deployment across various environments. We had automated PXE combined with a Ghost program where we could snapshot and deploy images to the machines straight from a TCL API. We had ways to throw up screens on the UI to indicate tests were in progress and lock the machine for interactive use.

About 15 years ago I actually spent several man-years building much of the functionality which is now contained within NanoCore. And while we didn't specifically provide SYN flood, we wrote wrappers for iperf to be able to load the executable onto the machine, and a TCL API around running iperf in server or client mode and capturing and parsing the output. We also wrote our own L2 traffic generator which trivially could have been used to generate SYN floods, although we were more interested in generating "pure" traffic patterns to find the synthetic maximum possible throughput, as well as ideally sized packets for WiFi range and ACI testing.
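
(As a rough modern equivalent of those wrappers, in Python rather than TCL and purely illustrative -- LAB_SERVER and run_iperf are names I'm making up here -- this assumes iperf3 is on the PATH with its --json flag available and a box you control is running "iperf3 -s":

    # Hedged sketch of an iperf wrapper: run iperf3 as a client against a
    # lab server and parse the JSON report instead of scraping text output.
    import json
    import subprocess

    LAB_SERVER = "192.0.2.20"   # hypothetical lab address

    def run_iperf(seconds=10):
        out = subprocess.run(
            ["iperf3", "-c", LAB_SERVER, "-t", str(seconds), "--json"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(out.stdout)
        # For a TCP run, end.sum_received carries the aggregate receive throughput.
        bps = report["end"]["sum_received"]["bits_per_second"]
        return bps / 1e6   # Mbit/s

    if __name__ == "__main__":
        print("throughput: %.1f Mbit/s" % run_iperf())

Same idea as the TCL wrappers described above, just with iperf3's JSON output doing the parsing work for you.)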

The Azimuth WSC -- as it was called -- met every definition of a modern-day "RAT", except that of course its official purpose wasn't malware.


FWIW, Apple's Remote Desktop tool does almost everything this guy's RAT does.

Except: You can't disable webcam lights with it. And you can't SYN flood (directly) with it. It is trivial to run a few shell commands and install tools that let you SYN flood, so why would a RAT include that by default? I think it will come down to a couple of the small, hard-to-justify features, coupled with the overall packaging and history of the software, that really spell out a story of intent from the author that lands him in jail.

Intent, and the story, that gets told in court really matters. Plus we don't have the totality of the evidence.


You should think about writing up a statement and sending it to his legal defense.


Why not? It could be used to see if you have been hacked and something/someone else is using it, no? A different way to monitor the logs of a system.


FWIW, I specifically wanted to disable the webcam light on my own computer and was both frustrated and relieved to discover it was - at least - difficult.


Are the features selectively installed, opt-in, or downloaded as configured, as with most modern software?


He can see himself claiming that to make a point, anyway.


One very simple use case that would apply to almost any sort of hack, no matter how bad, is to lawfully use it to show someone that it can be done. Say a teacher using this at school on a sample laptop as a PSA to let students know the danger of leaving actively connected webcams in their room and not trusting the light indicator on them to show when it is working. Yeah, you could just tell the students, but sometimes showing them the hack in action works far better to convince them.


Educational use is definitely interesting. That's largely how I've become an expert at security myself, because there is so much security related information & resources available under the disclaimer of "educational purposes only".

However I do think that there are some interesting corner cases. Looking at extreme cases, what about selling nuclear bombs for educational purposes? There are certainly scientific tests that could be done with the bomb and humanity would be better off for having done it. However I think that selling nuclear bombs without restrictions, or launching the sale campaign in the middle of Raqqa would lead to undesirable results.

This leads me to believe that we as a global society certainly aren't ready for every tool to be available unrestricted for educational purposes. What's more, I don't think we're even ready for every tutorial to be available unrestricted, because the cross-section of people who can follow the tutorial and also want to end civilization as we know it is still too large.

I also don't like censorship or the idea of hindering scientific progress. I would definitely like to progress towards a world where nuclear bombs can be sold at WalMart and nobody would cause problems with them.

We're not there yet and I'm not sure how exactly we can even get there. Until we do, as much as I hate to say it, even educational purposes will have to be sacrificed for the greater good. Where exactly we draw the line is a tough nut to crack and I personally don't hold a strong opinion of a specific line yet.


I think there's a lot of room to draw the line between nuclear bombs and a RAT.


Looking at Stuxnet [1], the distance between those two is less than most think, and the distance is only decreasing. More important is the takeaway that it's getting easier and easier for a misguided teenager to cause industrial-scale harm. So the classic problem of a punk kid breaking a window gets amplified.

Of course there are other options besides banning software to improve the situation. Among them is increasing awareness of the possible threats, and that software like NanoCore makes mischief easier to execute.

--

[1] https://en.wikipedia.org/wiki/Stuxnet


Stuxnet didn't kill hundreds of thousands of people. I think that while the distance has narrowed, perhaps it isn't as narrow as you think.


What a peculiar thing to argue, but in the same spirit most nukes haven't killed a single person. The ones that did had really good delivery mechanisms, which don't come prepackaged with the nuke.


Are you really arguing that nuclear weapons and stuxnet are similar because most nukes haven't been used on people? Why?

I get that you're saying Stuxnet is an example of programming having real world, physical effects, but this is a very strange argument because a lot of things that we have no moral or legal issue with anyone owning have the potential for outsized physical effect. Nuclear weapons have been used to kill hundreds of thousands of people, so the line we spoke about earlier, they belong on the 'not for everyone and ideally not for anyone' side of it, along with chemical and biological weapons.

A RAT doesn't quite seem to go that far.


> Are you really arguing that nuclear weapons and stuxnet are similar because most nukes haven't been used on people?

Definitely not arguing that they're similar. More so about the difference decreasing at a greater rate than people seem to realize.

Taking a step back and talking in more general terms. Nukes are dangerous because they allow one person to do harm to masses. The same statement is increasingly more true in the software world. I feel like this isn't understood well enough (or is ignored?) by most people.

As an example, we're putting more and more software into cars, internet connected software even. If this software follows the security practices of almost any other software, then it won't take long until malicious users will move from opening CD trays to car doors.


Usually educational-use tools leave a lot as an "exercise" for the reader. They don't come with features that make malicious use easy. And you usually frame the intent of the software around that, not as a for-pay tool. So that argument does have validity... in an open source project or white paper describing some technique.

There is a reason researchers often leave some details in an exploitable-vulnerability write-up as an "exercise" for the reader.


And you think that teacher would go buy it on a forum devoted to hacking?


I've only read the article linked here and had never heard of this tool before today so grain of salt and all that.

But the article did mention he was hoping "anxious parents" would use it to monitor their kids' activity.

As a parent of two teenagers and a twenty-something I could see myself wanting to turn on a web cam without them knowing it. I hate to say it but my kids get up to some crazy stuff on the web and have defeated a lot of my efforts to monitor/block that activity. In the end I've had better luck just sitting down and talking openly with them about it. But there was a time where I was frustrated enough that I might have sunk to plain old spying on them...

Anyway, that's just one use I could see for disabling the cam indicator.

And the article does mention youtube videos and how frustrated he was to see what people were doing with the tool. He even added a "feature" where the user's license number was displayed in the program so if he could see it in such videos he could disable their copy. Did you see the actual author in any youtube videos?


I'm also a parent, and I am of the opinion that being a teenager doesn't mean you should have your computer hacked to be monitored like this. There's all kinds of legitimate admin tools you can use to monitor their activity. If they deliberately circumvent them, physically take the computer away. Turning on a webcam surreptitiously to monitor them is not cool.


Or you could, you know, just teach them the consequences. Eg, talking to catfish, child porn charges, etc.


I think I said as much in my comment.


>As a parent of two teenagers and a twenty-something I could see myself wanting to turn on a web cam without them knowing it. I hate to say it but my kids get up to some crazy stuff on the web and have defeated a lot of my efforts to monitor/block that activity.

What the fuck. It's well past time to give real responsibility and freedom. This sort of behavior can and will cause long-term damage, pain, and resentment to your own children. And for what, to selfishly assuage your own anxiety?

IMO, riding roughshod over a basic fundamental need for privacy is child abuse, and it ought to be more widely considered as such.


How many children do you have? And how many times have they caused threatening letters from ISPs to be sent to your house for downloading copyrighted material? How many times have you intercepted drugs in your mailbox that were bought off the internet? How many times have you had to worry that one of these days their social activities were finally going to catch up with them and get them into real trouble?

For the record to all those that replied, I never actually spied on my children through a web cam hack. I said in my comment that I had been frustrated enough in my life that had I read this article then I may have considered it.

And, no, children living in my home using my internet connection are not entitled to the same level of privacy that you and I are. Just as they are not entitled to drink, drive, vote or join the military. They need boundaries and guidance. To the extent that they are doing well in school and socially and leave me little room to worry I'd guess they enjoy more freedom than most of their friends. When they abuse that trust the reins get drawn in and you can darn well bet I'm going to do everything I can to make sure they don't get into further trouble.

And lest you draw any conclusions about the twenty-something I mentioned, I was speaking about when she was a teenager and living at home. I didn't try to block or monitor her habits once she became a well-rounded adult, one who, now that she's also a parent, looks back on her own behaviour a bit sheepishly.


I'm sorry, I definitely read more into your post than is there. I've found that abusers often use more legitimate overt goals as narrative cover for covert abusive techniques. "Protecting your children from things" is often a pretext for fulfilling the parent's need for control and dominance. Or for a more graphic example, people who handle their anger or frustration through physical abuse will claim that they spank their children for disciplinary reasons.


> What exactly is the legitimate use of disabling the webcam indicator remotely?

I want to do that on a Pi camera at a remote location so animals aren't disturbed, or the people aren't aware they're being filmed.

> ...a tool being purpose built for illegal activity.

Intent isn't determined by how others use something, or what a YouTube video says the intent is.


> I want to do that on a Pi camera at a remote location so animals aren't disturbed

Put tape over it


turning off the light seems easier and more thorough to me.


Just removing the light that has no practical purpose in that case seems more thorough yet.


Is SYN flooding also to prevent disturbing animals?


No, those are to win http://ipv6tree.bitnet.be/


>> "... a tool being purpose built for illegal activity."

Attempting to discern the author's design intent ex-post-facto strikes me as very difficult.

Also, this does not admit the possibility of a tool being designed for a nefarious purpose, but later turning out to be useful for beneficial purposes.

E.g. A rifle designed to kill people later turns out to be great for deer hunting (trivial example but you get the idea).


> * Disable webcam indicator light ... What exactly is the legitimate use of disabling the webcam indicator remotely?

It's cheaper than opening the laptop frame and cutting/shorting the LED, and it allows you to take pictures of people trying to log into your computer without them noticing. My company computer does this on every invalid login and makes me review them when I log in successfully.

They also make sure when I enter my password I'm not reading it from anything (like a phone or a text file), and when I use the 2FA card that there aren't any wires sticking out from it. I also imagine it takes random pictures, and I'm glad the light is disabled because it would probably annoy me to have it blink all the time.

> * Lock computer with a password of your choosing and show a message on the computer.

My company laptop does this if someone steals it, telling them that this computer is stolen and has embedded GPS and 3G in it so the owners know where it is.

> * Swap mouse button functions

Left-handed people. Sinister, I know.

> * Open CD tray

Before DVD and Blu-Ray made it common to fit everything on a single disc, many applications would copy files from multiple CD discs and would open the tray as a prompt to swap the disc at various stages of the installation process.

> * Keylogger

One of the environments I work in has strict audit policy: Everything is logged to make sure nobody is passing notes in some other way (chat, a text file on a shared server, a drafts folder, etc). Indeed, someone was using a keystroke-stuffing USB device to transfer files into the network and the keylogger was the tool that detected it.

> * Extract passwords of various applications

I use the Apple Keychain. It has an option to let me get my password. This is useful when I log in via the website, which saves a generated password, but then I log in via the app and the idiot programmer has made their own password prompt (instead of using the iCloud-enabled password API).

Sometimes I want to go the other way, and I'm lucky that most programmers are idiots and just wang cleartext passwords into a text file or a sqlite file, or trivially "encrypt" them[1], because otherwise I might not be able to recover my access.

[1]: https://www.unix-ag.uni-kl.de/~massar/bin/cisco-decode

> * Send SYN floods from all your controlled computers

I usually use hping to do this because I want to know that I can protect my applications and services from anyone who will spend less than they can make by knocking my services offline. I wish more programmers did this, but I'm frequently disappointed by bad engineering.

Honestly, that someone is so willing to judge someone they don't even know as malicious simply because of their own ignorance and lack of imagination, is what I find most disappointing of all.

> Combine [all of] this with the fact that NanoCore was originally launched on HackForums and I'd say this is a slam dunk case of a tool being purpose built for illegal activity.

It probably is, but if that happens I hope it will be overturned like some other slam dunk cases[2].

[2]: http://legal-dictionary.thefreedictionary.com/Jim+Crow+Laws

> Now whether someone should be held accountable for building such tools without using it themselves is an interesting question.

I've never pondered it before, and I feel like spending my time arguing about it could be better spent elsewhere, so in my view, it is the exact opposite of an interesting question.

> However please don't try to act like this tool was built for anything other than malicious activity.

Only if you don't try to act like this tool was built for malicious activity, because to be completely frank: You don't know this person, or this tool, or this space, and that the US government agrees with you isn't evidence that you're smart or right, nor will I respect your prejudices for it.


> so willing to judge someone they don't even know as malicious simply because of their own ignorance and lack of imagination

> You don't know this person, or this tool, or this space

I've been involved in the security space for 17 years now. I'm very well aware how these tools and features are used in practice. That's the primary reason why these superficial excuses don't work on me. This isn't abstract theory for me, I'm talking from direct experience.


I've been programming for nearly forty years at this point, and consulting for more than the last twenty.

I've seen each and every one of these "superficial excuses" in companies with 1000 employees or more.

If you haven't seen them, then "being involved in the security space" isn't making you as experienced as you think you are.


Can you name any company with 1000 employees or more that has used NanoCore? Or are you talking about other tools? That's really the crux of the situation, the packaging & intent - not the individual features in a vacuum. I've seen plenty of people open CD trays and swap mouse buttons for left-handed use, but none of them use popular trojans like NanoCore or Sub7 to achieve these tasks remotely.


> are you talking about other tools? That's really the crux of the situation,

No, it's not the crux of the situation. You said:

> > > Now whether someone should be held accountable for building such tools without using it themselves is an interesting question.

So you're clearly talking about any such tool. If now you want to just talk about NanoCore, then you're moving the goalpost, but it's still not going to work:

> > > > I looked at a youtube video of NanoCore [1] and it's immediately obvious that all of the above is bullshit.

...because your source is a youtube clip, and not the admission of any personal or experiential knowledge of the tool, to wit you list a number of features outlined in the youtube clip as specific evidence that the software was designed for illegal purposes only, to which I argue convincingly that those features are not evidence, because I have used those features in large companies.

Don't be a troll. You can be better than this.


I've never moved the goalpost; your interpretation may have changed, though. Regarding accountability, I'm talking about any tool which contains all those features [1] packaged together in high concentration. So NanoCore, Sub7, Zeus etc. Beyond that I've talked specifically about NanoCore. You said the accountability question isn't interesting to you and commented plenty on the specific features. It increasingly felt to me that you're building a case for every feature separately, which is why I brought it explicitly back to the package. I've never argued for the features being inherently malicious in a vacuum, so arguing that with me seems like talking past each other more than anything else.

Regarding describing personal experience around software designed for illegal purposes, I don't feel like the benefits are worth it for me right now. So you'll have to live with parallel construction. [2]

Unrelated to NanoCore, just as a friendly suggestion, you should cut down on the ad hominem. In every reply you've made to me you've managed to slap on a personal attack. First you called me ignorant and having a lack of imagination. Next you call into question my experience. Now to top it off, you've moved from ad hominem to straight up name calling, accusing me of being a troll. [3] Tactics like these don't help me understand your arguments any better, and I would bet they don't help others reading either.

--

[1] I want to be even more clear in that when I say "all those features" I also mean the truckload of botnet controlling & deployment features that I didn't list in my comment but exist in NanoCore and other competing software.

[2] https://en.wikipedia.org/wiki/Parallel_construction

[3] Name calling being the lowest point in Paul Graham's excellent essay about disagreement. http://www.paulgraham.com/disagree.html


Does it matter that the CIA and FBI are building these tools all the time? It takes a few characters to blow away a drive on the Linux command line, should those characters be illegal because they're potentially dangerous?

Banning software you don't like is a slippery slope; researchers publish far worse proofs of concept all the time.


Is this any different than the tools CIA employees and contractors built which were then used to conduct espionage against lawful American companies?


"Do as we say, not as we do"


"Well, when the president does it, that means it is not illegal." --Nixon


Do you also think the Metasploit authors should have their homes raided, and criminal charges raised?


Also? I haven't been a proponent of any raids. I'm not even sure if in-the-mail charges are in order for creating these trojans without using them offensively. I'm talking about the intent and purpose of the application, not what should be done about it.


> What exactly is the legitimate use of disabling the webcam indicator remotely?

We want our webcam light to blink when there are connection issues, or transcoding issues during a conversation and we had to degrade QoS.

We use blinking to indicate degraded service, and multiple light colours to indicate level of service.


On first pass I was pretty alarmed and thought this was a scary precedent being set. Further down in the comments I believe it was likened to arresting a gun manufacturer for someone pulling an armed robbery, and that the major vendors are getting a pass for the same functionality.

However, I disagree. The functions listed above are not mainstream use cases for legitimate software. I also think of it as the same thing as arresting someone who hosts a child porn or silk road type of website. You may not be doing the crime, but you certainly are facilitating it in a big way.

What I also think, however, is that this is probably a misguided kid with aspirations that exceeded his business savvy. He probably could be mentored into using some of his skills for good, so I hope the FBI doesn't proceed to ruin his life.


> The functions listed above are not mainstream use cases for legitimate software.

Who decides what is "legitimate" software? Do you want to live in a world where you have to get the government's approval before writing code?

How could a feature be "mainstream" if it isn't included in software? Should we have arrested Steve Jobs for the Macintosh because GUI wasn't "mainstream" when it came out?


For that matter... if it's not legitimate, then why does the hardware have the ability to do it? Why does camera hardware allow the light to be disabled during recording? I mean, SYN is useful, and SYN flooding might be useful for systems testing... that said, there are other tools for that, and a RAT probably isn't the right place for it.

In any case, this is definitely a slippery slope, as there are "security" companies that provide software that does all of this and act as vendors to US federal, local, and other governments.


>Do you want to live in a world where you have to get the government's approval before writing code

I'm not saying it would be a good idea but there is a very clear comparison here to building permits.

For example, you may want to remodel some part of your house and remove/replace some walls. To do this, you must get the government's approval.


It is simple: the law decides. Whether the law is sane or sensible is a different matter.


Which law provides an unambiguous definition of "legitimate software"?


"legitimate software" sounds a bit like "legitimate ideas"


Slippery slope.


It seems that functionality is provided by plugins though? You can see in the menu that he has at least "MultiCore" and "Nanonana" installed. I think the default version of the program does not come with these, e.g. here is a forum thread about someone asking how to disable the camera light and getting pointed to a plugin:

https://webcache.googleusercontent.com/search?q=cache:RGNVD_...


Good point. The only dubious built-in feature is the "recover passwords" one; the others that are clearly black hat in nature, like "recover steam file", "open meatspin", "SYN flood", and "lock computer with password", are from plugins.


Why are indicator lights software controllable?


It varies by manufacturer/model. Some have a hardware/firmware-only solution that can't really be turned off from the OS. Other times it's controlled by the driver. Often there isn't an API for controlling it, but you can memory-patch the driver. As for why, I'd say it's just another case of security design by people who don't fully understand security, similar to why toy bears are accessible from the internet. [1]

--

[1] https://arstechnica.com/security/2017/02/creepy-iot-teddy-be...


While I'm not going to sit here and pretend that this RAT was built for lawful activity, Metasploit payloads don't provide any indication to the 'victims' that their system has been infected either.


Ergo we should immediately arrest all gunmakers because it's hard to think of another use for guns besides killing things or practicing the art of killing things, no?


Just because you don't have a legitimate use case doesn't mean someone else doesn't. Your mentality can be very dangerous with regards to eroding our cherished liberties.


Is there any legitimate use case for triggering a SYN flood?


Stress testing your servers (although you could argue this is better done with a non-remote tool).


I hope you understand those were 3rd-party external plugins, not created by Huddleston.


Some of these features remind me of a "tool" from back in the mid/late 90s.


I am inclined to agree with you, but man, it still scares me.


Please visualize and define various malicious activity.


> What exactly is the legitimate use of disabling the webcam indicator remotely?

Employer monitoring. Parental monitoring.


If you're shocked, it just means you haven't been keeping up with how the US treats its so-called citizens.



Aha good one.


It couldn't be "Hacker News"... because Hacker News has nothing to do with unethical hacking, which is a primary topic of the forum where he advertised and supported his product.

If the product was talked about there, used there, etc, that would be different. The fact that he actively engaged in that community as the author of this software demonstrates pretty clearly which of the "dual use" sides he was intending on cashing in on...


>Did he pick the wrong place to advertise his code? HackForum could just have easily been Hacker News.

Spoken like someone who hasn't spent a lot of time on HackForum. 99% of the content is super low quality and/or obviously criminal. Lots of money changes hands.


I guess this is why we have people like Satoshi, and now the MimbleWimble (https://www.youtube.com/watch?v=XiUGu48JTd0) team, all working under Potter-esque pseudonyms.


I imagine their argument would be that his claim that the tool was not for illegal use was just a subterfuge, given the forum. Isn't this kind of the same reason a head shop doesn't want you to even mention anything but tobacco?


It is a bit of a red flag that "handgun" was the analogy of choice. It says to me that there is unlikely to be a morally legitimate use, only a possibly legally legitimate use.


Most branches of moral philosophy and ethics consider self defense morally legitimate.


Sure. The main question I'm asking is, "why that particular analogy"? I mean, he could've even said hunting rifles (where legitimate use is more obvious). Why handguns?

IMO, it's a matter of explicit vs implicit functionality. The explicit function of a handgun is to do violence to other humans. Self-defense is only implicit/secondary.

So, to analogize this software to a handgun is to acknowledge that the most obvious use for the software is to hurt others, while "self-defense" is only secondary.


Because it's come up recently, and Congress voted it down and the courts dismissed it. It's established case law and congressional law that supports his case strongly.

This software is used for self-defense. If you install it on your laptop and someone steals it you can lock them out and take pictures of them, etc. (it is your property after all).



