Nothing. Also, nothing is stopping anti-virus companies from making their software detect the "Bundestrojaner" once they have learned to recognize it. They did this with the previous version, too:
Thanks. I wasn't actually aware they're a German company. This should make it even more reassuring, though: if even German anti-virus companies won't give it any special treatment, then from the user's POV it's no more of a concern than any other malware (the political debate aside, of course).
A hipstery part of me wants to register one for the lulz and not renew it. The key point here is that the domain will not be renewed, because it's another bill I'm hit with each year.
I wish there were some way to get a free domain from Google without resorting to things like Freenom or other unreliable services.
I've only ever associated WebRTC with its ability to expose your real IP address when surfing behind a VPN. I'm not sure whether this bug is unique to Firefox, and I hope it doesn't show up in Safari.
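For context, the leak works roughly like this: any page can ask the browser to gather ICE candidates, and the candidate strings can contain addresses outside the VPN tunnel. A minimal sketch (note that modern browsers mitigate this somewhat by obfuscating local host candidates behind mDNS names):

    // No camera/mic permission is needed; a throwaway data channel is
    // enough to force ICE candidate gathering.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    });
    pc.createDataChannel("probe");

    pc.onicecandidate = (event) => {
      if (event.candidate) {
        // Candidate strings embed IP addresses, e.g.
        // "candidate:... 1 udp 2122260223 192.168.1.23 56143 typ host ..."
        console.log(event.candidate.candidate);
      }
    };

    pc.createOffer().then((offer) => pc.setLocalDescription(offer));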
I always wondered what the web would feel like, in terms of experience, without some manner of filtering. The last time I rode bareback on the internet, without ad blockers or even rudimentary hosts-file blocking, was at an airport kiosk, and even then it felt weird, because the New York Times was still only learning about fingerprinting and grabbing what is effectively the MAC address of any machine using Flash.
be _very_ careful about replacing your hosts file with an enormous one found on the internet, especially one served over http. such a thing is ripe for phishing injection.
it only takes one malicious entry out of 10k that doesn't loop back to your own machine for me to present you with a legit-looking and secure "capitalone.com" home page.
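for illustration, a sketch of what that one bad line could look like buried in a blocklist (the IP here is an arbitrary placeholder standing in for an attacker-controlled server):

    # thousands of legitimate blocking entries...
    0.0.0.0          ads.example.com
    0.0.0.0          tracker.example.net
    # ...and one entry that does NOT loop back: it silently points the
    # real bank domain at an attacker-controlled server
    93.184.216.34    capitalone.com
    # thousands more legitimate entries below...
    0.0.0.0          analytics.example.org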
Google is only a proxy to The Internet, nothing more. There's a bucketload of untapped traffic coming from sources Google could only dream of. I found this out by accident when I left AWStats running for far longer than I should have, and it hoovered up globs of what we would now call 'big data'.
Some of the referral links are still alive today, funneling terabytes of metadata through their servers; a practice I thought was more or less frowned upon, yet it's still serving SEOs today. There's still so much to tap into; it still feels like 2012, too!
Google has been trying to put a damper on linking from your website to another one for years now. When I developed my first website in 1996, the first thing I did was go to websites with similar customers and exchange links, so our customers would find out about each other's complementary services. Now that freaks Google out, because they prefer everyone to find websites through Google search (and see Google ads along the way); otherwise they threaten to drop you from Google search.
Very much welcomed. Like all protocols, unless they are pushed by industry players, they won't take off in any meaningful way. Take IPv6, for example: outcompeted by its predecessor IPv4 because swathes of industry players chose not to support it. I hope this is not the case for HTTP/2, since turning it on is trivial these days; more often than not it's a small flag in a config file and boom, we have HTTP/2.
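As an example of how small that flag is: in nginx it's a single keyword on the listen directive (assuming TLS is already set up; exact syntax varies by server and version, and the paths below are placeholders):

    server {
        # the "http2" flag is the only change needed to serve HTTP/2
        # (on nginx 1.25.1+ this moved to a separate "http2 on;" directive)
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate     /etc/ssl/certs/example.com.pem;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }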
Just for those using this: it phones home to panthema.net when checking for updates. You might want to trigger the update check once yourself, and then create a firewall rule for it.
I inspected the traffic and it's fairly innocuous, but you never really know unless you compile from source and strip out any third-party TCP connections, which I have done.
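If you'd rather just block it, something along these lines should work (the Windows program path is a placeholder, not the real install location):

    # Linux: iptables resolves panthema.net to its current A records when
    # the rule is inserted, blocking direct connections to those IPs
    iptables -A OUTPUT -d panthema.net -j REJECT

    # Windows: block all outbound traffic for the binary itself
    netsh advfirewall firewall add rule name="block-phone-home" dir=out action=block program="C:\Tools\app.exe"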
I'm genuinely interested in their payloads...