This is an absurd idea. You can't "recover" stolen digital assets; when they're gone, they're gone. Meanwhile, virtually all attacks that would be handled under the rubric of this proposal will be done through pivot machines, which are also innocent people's compromised systems. Proposals like this are a license for companies like Raytheon and Leidos to break into random machines and violate everyone's privacy.
While I'm inclined to agree, it's not 100% correct. If the only copy of that data resides on the server you hack back into and you can delete it... you have actually unstolen the data... technically speaking.
In practice, of course, you have to assume it's already been copied a trillion times.
In practice, a company finding one copy might delete it and then claim there had effectively been no breach, so that it doesn't have to report it, or do anything to protect, or even warn, the people whose information was stolen. Uber took this position after merely paying the intruders to say that they had deleted the data.
It's not just about recovery, it's about deterrence and reclamation of command.
If you've hacked my server and locked its drive with ransomware, and I can hack you back to retrieve the private key, then I should be allowed to do so. If you're DoSing me, I should be able to take out your C&C and patch the botnet with as many security fixes as possible.
There are many circumstances in which one party breaking the law allows another party to _legally_ perform actions which are normally not allowed.
For example, it is illegal to kill another human being. However, nearly all jurisdictions recognize "killing in self-defense" as permissible, if unfortunate.
I can't say if the same argument applies in this situation, but I _could_ see some parallels. If a malicious party is willing to cause you financial harm for their own gain, and you can reasonably prevent this financial harm by quickly inflicting equal-or-less harm on them, perhaps that's justifiable.
These situations exist, but they are actually pretty rare. As any lawyer can tell you, in the modern legal system attempts at "self-help" generally go badly for the plaintiff.
Seems like these days, the expected strategy is to let things happen and delegate making you whole to insurance and the legal system.
I can see how this is better - especially for society in general - than taking things into your own hands, but then insurance sometimes will try to screw you (is there insurance for insurance?), and lawyers are prohibitively expensive for many.
This isn't an academic argument. There are many cases where company leadership knows the people and computers behind the botnet / ransomers and has to just sit there taking a beating. Not everyone can farm this shit out to Cloudflare.
Yes, and whatever others claim in theory about this not being bad, attackers will change their tactics. Like groups that fire rockets from schools, they will anticipate heavy-handed retaliation and use it to their advantage.
I don't necessarily disagree, but it does seem plausible that you could collect information about the assailants even in cases where the stolen data can't be suppressed.
Also, is it really the case that "virtually all attacks" would go through pivot machines? Certainly the sophisticated ones would, but there are plenty of script kiddies to bring down the hammer on.
Pivot machines are the main reason for "hackback" policies.
If a script kiddie does something directly from his home computer, the source of the attack is clear: you can just subpoena his home address from his ISP and get a cop to educate his parents or whatever. If pivot machines are used, though, you need to obtain data from every machine in the chain to trace the source of the attack, and doing so by a chain of subpoenas is extremely slow. Compared to that, hacking back provides a practical (though usually not legal) alternative.
Also, hackback provides an avenue for stopping distributed attacks: if you're attacked from a large number of compromised machines, you need data from them (again, either by subpoena and physical retrieval or by hacking back) to identify the command-and-control servers and possibly other victims.
That makes sense, thanks. Do you know a place I could read more about motivations of this law which might discuss the important role of pivot machines?
I feel like this is opening Pandora's box. What's to stop people from hacking a business rival or other entity and claiming a hack originated from them? Once they have access to the box, they could plant whatever incriminating evidence they need to corroborate their claim.
Who is responsible for service outages, degradations in service, damaged equipment if a pivot machine is damaged?
How do you guarantee that these vigilantes only access materials related to the hack and don't run wild on the pivot machine's network once they gain access?
As an analogy, imagine someone broke into my house and stole my computer. Should I be allowed to hire a private military contractor to raid all the safe houses or apartments my suspected assailants used to try to get it back?
The correct solution to the problem this supposedly addresses is for companies to get their security act together. Until they can do that, they cannot be considered competent enough in security to 'hack back' anyway, and instead of hiring specialists to do it for them, they should be hiring specialists to fix their security.
And exactly none of that stops them from wiring a monthly fee to a PO box in Ethiopia to pay an "offense as a defense" service that's running out of an office in Ukraine.
The possibility of having a public disclosure to the tune of "on $date $company was hacked, we were contracted to get to the bottom of it employing offensive tactics as necessary....blah blah..." followed by a write up covering attribution and a pen-test of whoever it's attributed to would dissuade a lot of actors.
Would you risk your botnet and C&C infrastructure by hacking a company knowing that they'll pay someone to try their best to figure out who did it and hack them back?
> Would you risk your botnet and C&C infrastructure by hacking a company knowing that they'll pay someone to try their best to figure out who did it and hack them back?
You are asking the wrong person, but my guess is that you are exaggerating the efficacy of this retribution, while at the same time regarding botnet operators as being uniquely responsive to the threat of deterrence, in comparison to other criminally-minded groups.
>You are asking the wrong person, but my guess is that you are exaggerating the efficacy of this retribution
Probably to some degree. Obviously you can't ensure anywhere near a 100% success rate, but you don't need one. People go the speed limit in the left lane even though the cops don't ticket everyone who speeds.
I'd love to live where you live. In my area (and in every other motorized place on the planet that I've heard of so far), people always go above the speed limit, precisely because cops don't ticket everyone.
Banks are allowed to hire armed security to shoot at potential robbers. They could require better vetting (maybe requiring the use of a debit card before the doors are unlocked, metal detectors, etc.). But instead they're allowed to skimp on less draconian security measures and go straight to guns. How is hacking back any different?
Your example is one of intrusion prevention. The appropriate analogy for hacking back would be giving banks the power to conduct their own search and seizure operations. While lenders are permitted to repossess publicly-accessible property in certain circumstances, they cannot independently authorize and perform intrusions for that purpose.
A lot of companies can't even set up a proper audit trail or system inventory; how can we trust them with managing other people's sensitive information and infrastructure if they can't handle their own properly?
It seems like an opportunity to kill two birds with one stone. Require a license to be a "qualified defender" and make the corporate license requirements include some sanity checks on infosec practices.
There are probably some nice historical analogies to be drawn between permitting hack-backs and older forms of self-help, especially in frontier settings where there was little or no criminal justice infrastructure.
(In this case, the lack of infrastructure seems to be due to lack of technical expertise on the part of the government, which may change in the future.) Bounty hunters, repossession, and citizen prosecution come to mind, and I'd love to see a serious effort to compare things like these with hack-backs.
I am having a difficult time trying to imagine the legal gymnastics that would be required for this to work without incurring liabilities on everyone involved.
I can think of hundreds of scenarios, but here is just one. I know your company stole data from me. I take over your BGP routes and force your traffic through my data-center. I underestimated the capacity I would need to take control of your data streams and now I am causing an outage for your customers. Am I immune to civil and criminal damages?
Having to sort out logs of an attacker pivoting through systems combined with logs generated by victims "hacking back" isn't going to make incident response investigations any easier.
I'm not sure. Yes, there would be more to look at, but the "qualified defender" will have already done some of the work. Assuming QDs are required to turn over info to law enforcement, all LEOs may have to do is verify the QD's analysis of events against the logs.
Currently one can obtain a private investigator licence. Getting such a licence requires a minimum amount of training, mandatory firearm training, and is denied to those with criminal records.
How about introducing a 'cyber investigator' licence? Applicants would have to prove a minimum level of proficiency with pen testing, gaining entry to systems without damaging anything or causing data loss, hacking software, etc., and would also be vetted for a criminal record.
Those who gained a licence would be expected to follow a code of conduct. Benefits would include immunity from DMCA laws and the like that get used to prevent ethical hackers from reporting vulnerabilities.
Companies that suffered a breach and wanted to find out who did it would be required to hire only a licenced cyber investigator. It might cost them a bit more but they would at least have the confidence that the hired person had the required skillset. It could be that a requirement of hiring someone for such a task would be to pay them to report on the vulnerabilities that allowed the breach to happen in the first place.
Labs that wanted to carry out vulnerability testing of software could apply for licences for staff so they would then be immune from companies informed of vulnerabilities trying to get them prosecuted.
Wow. What would be the possible reason for blocking incognito?
You still get the full ad experience? I can't support any company that is demanding that they not only serve their ads but also be allowed to track the visitor after.
> Wow. What would be the possible reason for blocking incognito?
This is what the blocking page reads:
> Visitors are allowed 3 free articles per month (without a subscription), and private browsing prevents us from counting how many stories you've read. We hope you understand, and consider subscribing for unlimited online access.
-----------
It's not about ads, it's about free access before a paywall.
Yeah, but I mean, if the mechanism they're using relies on you not being in incognito, that means you should just be able to delete their cookies and keep getting access.
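As a minimal sketch of that idea (assuming the article counter lives in a first-party cookie; the cookie names here are made up for illustration), you could expire the site's cookies from the browser console. This helper just builds the expiry strings; in a real console you'd assign each one back to `document.cookie`:

```javascript
// Sketch: build "expire" assignments for every cookie in a
// document.cookie-style string, assuming the free-article counter
// is stored client-side. Cookie names below are hypothetical.
function expireCookies(cookieString) {
  return cookieString
    .split(';')
    .map(c => c.split('=')[0].trim())
    .filter(name => name.length > 0)
    .map(name => `${name}=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/`);
}

// In a browser console you would then run, for each entry:
//   expireCookies(document.cookie).forEach(c => { document.cookie = c; });
const expired = expireCookies('articles_read=3; visitor_id=abc123');
console.log(expired);
```

Of course this only works as long as the counter is purely cookie-based and the site isn't also fingerprinting or tracking by IP.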
It says in the 'error' screen why they are blocking incognito.. so that they can accurately count how many articles you have read. Seems fair enough to me.
uBlock Origin seems unaffected (but private browsing still trips it). Overall, I just exit the page and don't even consider switching off my adblocker, so it hardly matters to me, personally. I imagine most people are in the same boat.
uMatrix here. I saw one cookie (from techreview itself), although enabling googletagservices etc. leads to more techreview cookies. I enabled everything Google and it still wasn't good enough for them. If it's not Google doing their tracking work, that's just being awkward.
The only thing that will stop a bad guy with a computer is a good guy with a computer! Legalize concealed computing!
...but seriously, we Americans live in a world where people make the serious argument that vigilante gun ownership helps cut the crime rate. I see this as more of the same. I think it's probably untrue, but if we accept one, shouldn't we accept the other?
The argument "gun ownership serves as a deterrent of crime and has been known to spoil criminal endeavours, therefore we should allow citizens to carry guns" has never been about vigilantism. Vigilante behavior is actually a quick way to get yourself locked up and/or your permit revoked.
There is a BIG difference between defending your home with a firearm, and looking up a thief and paying him a visit with a gun in hand. Likewise there is a BIG difference between keeping attackers out of your servers and collecting thorough forensic evidence, and going all out guns blazing with your 0day collection trying to outhack the hacker.
The natural analogy for guns is probably crypto, but the real problem is that we live in a political climate where computer defense is totally ignored.