Popular Chinese iOS apps compromised in malware attack (greatfire.org)
194 points by tomkwok on Sept 19, 2015 | hide | past | favorite | 84 comments



Remember the NSA's infamous "I hunt sysadmins" [1].

Software engineers, ops people, and sysadmins at big tech companies with interesting data are high-value targets. If you can run code on production infrastructure or a large install base, you should assume you are being actively, personally targeted by multiple advanced persistent threats, including at least one state intelligence agency, and adjust your OPSEC accordingly.

They want into your infrastructure, and the easiest way into your infrastructure may well be your SSH key.

What's paranoid for the general public is a pretty good idea for you.

[1] https://theintercept.com/2014/03/20/inside-nsa-secret-effort...


To give some practically implementable advice: you should run anything that parses untrusted input in a virtual machine or on a physically separate machine.

This means you should browse the web, read e-mail and read any Office/PDF/whatever files sent to you in a virtual machine that doesn't have access to anything more than strictly needed.

The reason is that most desktop software and OS kernels are not written with total security against NSA-like adversaries in mind, and as a result often have exploits that completely compromise the system. While they can still provide reasonable security against random attackers, they cannot provide any significant security against organizations (such as intelligence agencies) who can offer millions of dollars for exclusive access to unpatched exploits and MITM any connection.

You can install Qubes [https://www.qubes-os.org] if you want a system that makes it more convenient and secure to use virtual machines this way (and you trust its authors to not have backdoored it).

Furthermore, you should only install software from official sources, and when you install software, you should make sure to check its digital signature or hash (from a separate virtual machine, since the one you are browsing with is potentially compromised, as discussed above).

If you don't already have a trusted signing key stored on your machine, you should download the file itself, or something guaranteeing its integrity if available (signing key, HTTPS certificate, file hash), over multiple Internet connections, such as a residential ISP, a mobile ISP, and multiple instances of Tor, and make sure the downloaded data is the same. This is very likely to foil a targeted attack specifically designed to serve you a trojaned version of the file, or a replaced signing key under the attacker's control.
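The multi-connection check just described can be sketched as a small shell helper. The filenames below are hypothetical stand-ins for copies of the same installer fetched over independent connections:

```shell
# Exit successfully only if every given file has the same SHA-256 hash.
verify_same_hash() {
    first=$(sha256sum "$1" | awk '{print $1}')
    shift
    for f in "$@"; do
        h=$(sha256sum "$f" | awk '{print $1}')
        if [ "$h" != "$first" ]; then
            echo "MISMATCH: $f"
            return 1
        fi
    done
    echo "all copies match"
}

# Demo with two identical local files standing in for two downloads
# (e.g. one via your residential ISP, one via Tor):
printf 'installer-bytes' > copy_via_isp.bin
printf 'installer-bytes' > copy_via_tor.bin
verify_same_hash copy_via_isp.bin copy_via_tor.bin
# prints "all copies match"
```

A targeted attacker would have to serve the identical trojaned bytes over every path you used for the hashes to agree.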

Of course it's impossible to defend against the original authors of the software being malicious or compromised themselves, or against an attack designed to give everyone maliciously altered software when you don't have a trusted key/certificate, but those are more likely to be discovered. Even if the downloaded software is malicious, if you are separating software by virtual machine as described above, the damage will be limited to the virtual machine the software runs in.

There are a lot of other things to consider, such as defending against physically targeted attacks (e.g. someone breaking into your home and installing a keylogger) if you are not anonymous, and keeping your anonymity if you are.


Congratulations, you just described a security compliance program that's smaller than DFARS 252.204-7012, COBIT, PCI-DSS, HIPAA/HITECH, CLOUD SECURITY MATRIX, SANS CRITICAL CONTROLS, SOX, FEDRAMP/FISMA, and ISO 27001.

I keep reading (IMO nonsense) that best practices like those compliance frameworks aren't enough (mostly coming from Josh Corman). However, breaches are on the rise and only 10% of the IT industry goes through this thorough cleansing process.

Security compliance forces you to go through that cleansing. This isn't a compliance burden, it's just thorough hygiene.

Key rotation, log review, change control, and similar processes are examples of things in compliance frameworks.

Even the Chinese government is demanding that US Tech firms go through one of the above compliance programs.

http://www.nytimes.com/2015/09/17/technology/china-tries-to-...

Sysadmins continuously pooh-pooh the idea that compliance is security.

They pooh-pooh it because they're self-compliant, doing security processes right the first time around. Those self-compliant sysadmins are already security-competent and don't need to adhere to an external set of security compliance standards.


Well, for starters, those standards don't seem to be targeted at individuals, and often the goal seems to be just to allow someone to say "I'm compliant with XXX" rather than telling them how to be secure.

The practical advice they give is in the form of obvious generalities ("set up your firewalls", "update your OS"), without any concrete details on how best to do that or even a discussion of which attacks are prevented and which aren't.

Threat-model discussions, advice on choosing which threats you can realistically defend against, theoretical principles, and analysis of possible attacks also seem to be non-existent.


>The practical advice they give is in the form of obvious generalities ("set up your firewalls", "update your OS") without any concrete details on how best to do that

I hope you're not suggesting that security compliance isn't necessary because of the time-consuming external research involved. Sure, it's a usability problem, but does that mean we should throw it out and just stick with penetration testing, intrusion detection systems, and risk assessments? A security guy actually told me that just doing pentesting, risk assessments, and IDS was good enough.

Personally, I believe the rest of the world doesn't have a reasonable alternative to going through a thorough cleansing process of security compliance. That's why it's been my mission to take the guesswork out of security compliance frameworks like DFARS 252.204-7012, COBIT, PCI-DSS, HIPAA, FEDRAMP/FISMA, and ISO 27001/2.

However, I understand that these Compliance Frameworks might not be for you. It seems like you're already self-compliant with your security processes and you're so security-competent that you're doing things right the first time around.


It seems that organizations where security is a compliance-driven process are barely concerned, or not concerned at all, about security breaches; they are only concerned about regulators.

Some of those processes are a fucking joke. The HIPAA technical safeguards include nothing particularly interesting; the hard part is the paperwork and legal ass-covering. Some PCI-DSS "auditors" are nothing more than salespeople who bought Nessus or similar and charge $10k/pop to run it, slap a logo on its report, and email it to you. Security regulations that businesses at large actually seem to care about have nothing at all to do with secure software engineering, just checking boxes like "have a firewall" and "have a password policy" and "have a network security policy" as if producing an endless trail of Word documents will make you less vulnerable.


>Some of those processes are a fucking joke.

superuser2: you're telling me that having a process for firewall changes or rotating your keys is a joke? What other process is a fucking joke? System Hardening? Log review? Source code analysis? Updating your network diagrams? Physical access monitoring? These are all processes (and more) that compliance says you should do.

You bitch about Word documents when I bet you've never even gone through a thorough compliance process.


Not even that -- there are tech firms with teams that span multiple geographic boundaries and spoken languages, and they have varying degrees of opsec (from none to excellent, with your typical bell curve).

These firms receive contracts from Fortune 500 companies that have no interest in hiring/maintaining technical staff but need apps that reach their user base (which numbers from hundreds of thousands to several million across various jurisdictions).

The world of software development is growing and there are cracks appearing everywhere; a malicious individual should have no trouble accruing a healthy collection of exploitable code across various tech stacks (be it Android, iOS, server-side, or otherwise).

Proper opsec is expensive and many companies don't even bother (or are completely unaware that they could be in trouble), and that's not even touching on designing secure systems. A malicious individual could hold code for several months before deploying an exploit that reaches end-users.

Hunting sysadmins is most definitely a serious problem, but so is outsourcing.


>The world of software development is growing and there are cracks appearing everywhere; a malicious individual should have no trouble accruing a healthy collection of exploitable code across various tech stacks (be it Android, iOS, server-side, or otherwise).

Even more reason to enforce a compliance program (e.g. ISO 27001) to clean your systems and your code.

In fact, you're talking about growing cracks appearing everywhere, and when I look at your code right now, I see even you don't follow secure coding practices for software development. Not using the pull request model? Just pushing commits directly to master? These (and more) are all bad security processes that I've identified in your GitHub account.

https://github.com/ihsw/toxiproxy-php-client/commits/master

And you're the same people who talk about security compliance as if it's a burden, when you're not even doing basic hygiene with your own code.


The internet in China is so bad that developers at Tencent download Xcode from random links on the Chinese equivalent of Dropbox? Pathetic.


It's bad when crossing the border. Rumor says it's due to traffic analysis by the GFW. I cannot find any source to prove or disprove that, though.


The internet being bad is deliberate. Traffic analysis doesn't explain it, since internal traffic is subject to traffic analysis too. They keep it slow to turn China's internet into an internal, more easily controllable intranet.


I think it has more to do with lack of competition in the ISP market. In most places, there's only one fixed line broadband provider, so there's incentive to increase speeds (as people will pay to get better quality streaming video on multiple devices) but none to improve peering with overseas networks.


sad but true


I've been fortunate enough to have access to non-GFWed bandwidth to play with on several occasions, and my experience would seem to suggest this is accurate. I can easily sustain 60mbps to the US when not subject to the GFW, but when using normal non-exempt bandwidth from the same (domestic) provider, could only sustain 5-10mbps. China's network architecture in Tier-1 cities is largely fine, and it's really the GFW that throws a wrench in things internationally.

Data localisation regulations are certainly one reason for foreign companies to set up local infra, but IMHO the real reason is that it's the only way to provide an even remotely tolerable user experience to users in mainland China. Aliyun is basically AWS for China and is very easy to get up and running, and getting an ICP license wasn't nearly as hard as everyone makes it out to be (got one in ~2 weeks from application).


The cause is your exempt bandwidth. Otherwise, how do you explain that downloading other big files across the border through the GFW is not as slow as downloading Xcode?


It's NOT the GFW, it's about Apple's money. Chinese users download plenty of HD video across the border through the GFW, and it's not as slow as downloading Xcode.


Even though Apple has earned so much money from Chinese users, there are no CDN servers for distributing Xcode in China.


And they don't run a checksum.


A checksum can only tell you whether the file is identical to another. And even then: you downloaded the file from an untrusted location for a reason, so you don't have access to the original source.

Checksums are useless if you don't have access to an alternative trusted source.


Xcode.app includes a digital signature, which can be checked with `codesign`. As every OS X install comes bundled with Apple's root certificate, one can check the validity of the application oneself, without any additional trusted source.

Or if the developers never disabled Gatekeeper and read the warning, they would know that the application is not genuine.
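For reference, the checks described above can be run from Terminal on OS X. Both commands ship with the OS; the path assumes Xcode is installed in the default location:

```shell
# Verify Xcode's code signature chain back to Apple's root certificate:
codesign --verify --deep --verbose=2 /Applications/Xcode.app

# Ask Gatekeeper whether it would accept the app as genuine:
spctl --assess --verbose /Applications/Xcode.app
```

A trojaned Xcode repackaged without Apple's signing identity fails both checks.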


Of course, this being China, Gatekeeper is totally disabled and no one has valid signatures on anything unless it's absolutely required.


> you don't have access to the original source.

Not true; they have access to the original Xcode (and its checksum), but the download speed is very slow.


Tencent security team just posted a write-up on this issue on its blog. http://security.tencent.com/index.php/blog/msg/96

Here's the English translation:

---

Sep 12 - While tracking down a bug, we discovered suspicious encrypted traffic sent from certain apps to a particular domain when those apps were launched or closed. The front-end security team immediately followed up on the issue.

Sep 13 - Tencent product team released an updated version of the app(s). We notified CNCERT of the issue.

Sep 14 - CNCERT issued a pre-warning on its website. [1]

Sep 16 - We discovered that 76 of the top 5,000 apps on the App Store were infected. We notified Apple and the vendors of most of the infected apps.

Sep 17 - Palo Alto Networks also discovered the issue and published a report of their preliminary findings [2], as did the Ali mobile security team [3].

Analysis

1. Infected apps send the following information to attackers' servers: app name, app version, iOS version, locale, device type, country code, IDFV. The domain used is icloud-analysis.com. We also discovered three other domains that are not used.

2. Attackers can identify every infected iOS device and issue commands to be executed via the openURL API.

3. Attackers can invoke a customized alert box on infected iOS devices, showing whatever they want.

4. The malicious remote control module itself is vulnerable to MITM attacks.

It should be noted that multiple versions of the remote control module have been discovered, some of which do not have the capabilities described in (2) and (3).

[1] http://www.cert.org.cn/publish/main/12/2015/2015091415282115...

[2] http://researchcenter.paloaltonetworks.com/2015/09/novel-mal...

[3] http://drops.wooyun.org/news/8864 (Chinese)
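A blunt stopgap while infected app versions were still circulating was to null-route the reporting domain named in the write-up. A sketch against a scratch copy of the hosts file (on a real system the target would be /etc/hosts, which needs root):

```shell
# Work on a scratch file so the demo doesn't need root:
cp /etc/hosts hosts.demo 2>/dev/null || touch hosts.demo

# Null-route the C2 domain from the Tencent write-up so infected apps
# can't phone home from this machine/network:
echo '0.0.0.0 icloud-analysis.com' >> hosts.demo

grep 'icloud-analysis\.com' hosts.demo
```

This only blocks the known domain, of course; the three unused domains mentioned above would need the same treatment.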


The WeChat team posted a public bulletin 24 hours ago saying that there was no leaked user info, that the issue only affects WeChat 6.2.5 users on iOS, and that newer versions are not affected. An English version was published several hours afterwards and can be found here: http://blog.wechat.com/2015/09/19/fixed-security-flaw-in-wec...

Fixed Security Flaw in WeChat 6.2.5 for iOS

A security flaw, caused by an external malware, was recently discovered affecting iOS users only on WeChat version 6.2.5. This flaw has been repaired and will not affect users who install or upgrade WeChat version 6.2.6 or greater, currently available on the iOS App Store. Here are some important points about the situation.

1. The flaw, described in recent media reports, only affects WeChat v6.2.5 for iOS. Newer versions of WeChat (versions 6.2.6 or greater) are not affected.

2. A preliminary investigation into the flaw has revealed that there has been no theft and leakage of users’ information or money, but the WeChat team will continue to closely monitor the situation.

3. The WeChat tech team has extensive experience combating attempts to hack our systems. Once the security flaw was discovered, the team immediately took steps to secure against any theft of user information.

4. Users who encounter any issues can contact the team by leaving feedback in the “WeChat Team” WeChat account.

The WeChat Team September 18, 2015


The statement is contradictory.

It said WeChat 6.2.5 was affected, but also that no user info was leaked. If it was affected, how do they know nothing leaked? And if the flaw didn't steal anything, what did it do? Is this a joke?



It's amazing how widespread this is, perhaps the biggest malware infection ever in sheer number of devices affected? The Xcode attack vector is evil genius.

So all these apps are transmitting user data (basic device information) to 'icloud-analysis.com' and no one notices for how long?

I wish there were better tools to monitor outbound traffic from our own devices, but I guess it's just too overwhelming how many different places our data is sent in an average day of browsing.


The best tool I've found so far is mitmproxy [1]. It lets you snoop on all the HTTP(S) traffic from your iPhone. You just have to keep it running on your PC (or a server, if you want) and set the phone's HTTP proxy settings accordingly. Then you have to install a CA certificate (locally created and known only to you). It is pretty interesting what data your phone is sending. Of course, an app could evade this tool by not using HTTP(S) (and sending over FTP or a custom protocol instead), but it is a good start.

[1] https://mitmproxy.org/
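A typical session with the tool looks like the following. The port is a placeholder you choose; mitm.it is the address mitmproxy itself serves its CA certificate from:

```shell
# Start the non-interactive variant on your PC, saving flows to a file
# for later inspection (use `mitmproxy` instead for the interactive UI):
mitmdump --listen-port 8080 -w captured_flows

# On the iPhone: Settings > Wi-Fi > (your network) > HTTP Proxy (Manual),
# Server = your PC's LAN IP, Port = 8080.

# Then browse to http://mitm.it on the phone and install the generated
# CA certificate so HTTPS traffic can be decrypted.
```

With that in place, any app sending device info to a domain like icloud-analysis.com shows up in the flow list.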


Really?

So the developers of WeChat had their Xcode infected by something, and now an unauthorized tracker is in the iOS WeChat app?!

WeChat, with about 500 million users! Probably running on about 90% of Chinese-owned iPhones.

I would like to see some independent confirmation of that.


According to the following links, Tencent, the developer of WeChat, has made a statement that version 6.2.5 is affected and users should update. I think that's pretty credible.

http://researchcenter.paloaltonetworks.com/2015/09/malware-x...

http://mt.sohu.com/20150918/n421556810.shtml


It's true and if you read Chinese, it is already all over the Internet.

On twitter, most of them are under #XcodeGhost[0].

Besides, the (alleged) author has put the source code on github[1].

0. https://twitter.com/hashtag/XcodeGhost?src=hash

1. https://github.com/XcodeGhostSource/XcodeGhost

Update 1: Add the source code of XcodeGhost


I don't read Chinese, but this reads as very sensational. For example, it says not to download WeChat at all, while its sources seem to say that only an older version is affected.

The sources seem to be tweet-like forum postings.

Think about it. If Facebook had their releases infected by a virus - what would be needed to convince the world?

It would either require Facebook to officially acknowledge it or have someone do an explicit, reproducible analysis of a release.

I.e., this particular version 6.23 of Messenger, signed by Facebook, which you can download here, sends user information, under these circumstances, to this address, which is clearly not part of Facebook but belongs to this malicious compiler virus.


I can now see that Tencent, the developers of WeChat, have acknowledged that a release of theirs was infected:

http://security.tencent.com/index.php/blog/msg/96

With regard to this GitHub repo:

https://github.com/XcodeGhostSource/XcodeGhost

As far as I can see, what this code does is send some basic user information to an external website, and it may pop up an alert window or open Safari or other apps based on the response from that external website.

It is not a compiler virus, and there is nothing here on how you would modify an Xcode release to inject the above into other developers' apps.


>It is not a compiler virus

Nobody said it was. Modifying a compiler to inject XcodeGhost is simple assembly work. Almost anyone could figure it out in an afternoon.

Multiple independent security organizations are reporting that the modified Xcode release was shared via a file-sharing site in China. That is how so many Chinese iOS developers came across it.

The version of XcodeGhost on GitHub is a harmless version posted by the original author. The actual compiled code being found in the wild has malicious abilities not found in the code in that GitHub repo.


@XcodeGhostSource on GitHub claims to be the author of the malware, has open-sourced it, and apologizes.

"XcodeGhost" Source: https://github.com/XcodeGhostSource/XcodeGhost


What I am interested in is the authenticity of this source. Could it be a "modified" source with the personal-info-tracking code removed? Any confirmation on this?


Update: it may be able to steal iCloud passwords:

http://researchcenter.paloaltonetworks.com/2015/09/update-xc...


That's crazy!


Is there any new information on how the attack works?

Was it something that's injected at compile time (the Xcode building the app store version was compromised for all these apps?)? Was it just files added to the project if anyone on the team used the compromised Xcode?


A Chinese coworker sent me this link this morning, but HN said it had already been submitted (but I couldn't find it): http://researchcenter.paloaltonetworks.com/2015/09/novel-mal...

If you're going to target developers, I'd think their keychain contents would be valuable too. Grab their Apple account credentials and their signing credentials. Have Xcode phone home, and custom craft a payload for their app.


> If you are an iOS developer, however, a lot can be done to secure your development system:... Separate your development system with your everyday system. Development systems should be used solely for development and not for browsing random sites. If physical separation presents too much of a problem for developers, at the very least, a dedicated user account for development should be used.

I'm not an application developer, but I haven't seen this recommended before and it sounds a bit extreme. Is it a standard, recommended best practice for development environments that must sign code?


It's often good practice to have a dedicated build server responsible for producing your release builds and owning the release key. That machine should be kept as secure as possible, along with your source control server.

However, isolating machines used solely for development is not common. A leaked development key is typically not very useful. The resulting code is usually not run anywhere other than the developer's own hardware.

(A compromised dev machine in this scenario could still be used to push malicious code to source control, which would be bad. So it's still good to keep security in mind. But at least that's potentially detectable compared to a compromised compiler. This is also where mandatory code review prior to commit can be useful.)


Standard (I've mainly worked with IT in financial companies).

But it's usually implemented not as a physically separate network, but as an extremely locked-down one: no active USB data ports (mice are OK), instant reporting of unexpected devices, whitelisted website access (if you're lucky), and often no privileges to email outside the organization. This gets relaxed (apart from USB access) with seniority. I know companies where mobile phones have to be left in lockers on entering/leaving the building, and paper-free environments mean that no one below a certain seniority has printing rights.

Does this make a difference? Against a malignant party, no. Against a careless staff member, yes. Against a lucky/fluky outside actor (as in the story link), yes.


Think only Chinese iOS Apps are affected? Think again, because your favorite app might just be outsourced to Chinese developers.

Apps like Mercury, WinZip, and PDFReader are reported to be affected by this Xcode trojan (they have since been taken down by Apple).

http://researchcenter.paloaltonetworks.com/2015/09/malware-x...


iOS is now the most dangerous platform...


Not by a long shot.

First of all, it's not the platform. There wasn't some vulnerability found in iOS that made this possible.

It's caused by pirated, infected, third-party Xcode downloads. If you use a third-party Visual Studio or Eclipse/IDEA download for Android development, you can get the exact same issues.

Second, apps run in a sandbox on iOS anyway, so those infected apps can't do much besides serving you ads and collecting data about their usage.

Third, there are about 100 apps on the list, almost all made in China, and all by people with infected, non-official Xcode.

Contrast with malware in Android land, which accounts for 97% of mobile malware: http://www.forbes.com/sites/gordonkelly/2014/03/24/report-97...


Thanks for correcting me. I am wondering if iOS could add a new feature to detect bad apps.

For example, iOS could give an operation-history summary for each app. The list could be something like this:

APP1:

Photos ---- read ? times, write ? times

Contacts ---- ...

SMS ---- ...

Device ID ---- ...

APP2:

Photos ---- read ? times, write ? times

Contacts ---- ...

SMS ---- ...

Device ID ---- ...

Users could turn this feature on or off.

I believe that if iOS had this feature, it would be much easier for iOS users to find out which apps are bad.


>I am wondering if iOS could add a new feature to detect bad apps.

I think Apple could add such a step to their build process (IIRC, with the new Xcode 7 there's the option to submit a kind of bitcode that is built on Apple's servers depending on the target architecture, etc.).

Another thing they could do is add some kind of "Little Snitch"-like network connection monitor that a user can enable per app. This way the user can be informed of any "mysterious" external connections going on.


Yet days after this Trojan was disclosed together with a code signature, Apple is still relying on third parties to tell them which apps are affected. Meanwhile, we know that Amazon has been scanning their store for (at least) AWS keys for years, and Google has been running Bouncer on their store for longer.

It is well-known that due to Apple's restrictions on third-parties scanning software in their store, malware incidences in the App Store are significantly underreported.


Which popular/mainstream Android apps were taken down from Google Play due to malware?

I'll grant you that third-party Android app stores and sideloading are more dangerous than iOS - but in the case of Google Play vs. iTunes, it seems like Google Play is safer.


>Which popular/mainstream Android apps were taken down from Google Play due to malware?

Well, there have been popular apps that were infected and came pre-installed on phones:

http://thehackernews.com/2015/09/android-smartphone-malware....

As well as apps pulled from Google Play for having malware:

http://www.huffingtonpost.com/2015/02/04/mobile-malware_n_66...

http://www.coindesk.com/google-pulls-six-mobile-wallpaper-ap...

http://fortune.com/2015/07/08/google-play-fake-app/


Right, I agreed that there are some cases of malware on Google Play. But the apps you linked don't look to have affected anywhere near the number of users hit by this latest App Store scare. That's why I said that overall, Play seems safer than iTunes.


Downloading apps from the App Store is very slow in China, and sometimes you can't even open the App Store successfully. It's not Apple's fault (though Apple should do something to solve it). The real cause is the government and the GFW. Many developers use XunLei (Thunder, a P2P download tool) to accelerate downloads. I used it too when I was in Shanghai. After I moved to Hong Kong, I always download directly from the App Store, and the download speed is about 5 MB/s.


What can Apple reliably do, short of exposing the source code and attribution (with signing) of every file used to generate the app, run inside a signed environment on a locked-down computer?

Please. This is the GFW.


I think that at least Apple could put the Xcode binaries onto a CDN inside China. I've been trying to download Xcode 7.0 from the App Store today, and it's still hanging after a few hours.


And when that CDN has a hiccup, dozens of iOS developers go elsewhere.

Also, this is by no means an Xcode-only problem. It potentially affects every piece of software, and even code rehosted from GitHub. Does that copy of a library developed on GitHub but mirrored on Baidu contain a backdoor?



I'm trying to figure out whether the version of WeChat I have installed is compromised. I haven't been able to find any specifics about affected versions of any of the apps. Does anyone have more info?


According to Tencent, only 6.2.5 is compromised. If you updated to 6.2.6 you should be fine.


I find it interesting that the source code on GitHub was committed by two different users with the same user name, XcodeGhostSource: one for the code and one for the README.md. Are these two separate people? Or maybe the code commit was made before the author registered a GitHub account with that email?

https://github.com/XcodeGhostSource/XcodeGhost/commits/maste...
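The attribution quirk is easy to reproduce: GitHub maps a commit to an account by the author email, so one display name can correspond to different emails and hence different (or unclaimed) accounts. A sketch in a throwaway repo (names and emails are placeholders):

```shell
# Two commits with the same display name but different author emails:
git init -q ghost-demo && cd ghost-demo
git -c user.name='XcodeGhostSource' -c user.email='one@example.com' \
    commit -q --allow-empty -m 'source code'
git -c user.name='XcodeGhostSource' -c user.email='two@example.com' \
    commit -q --allow-empty -m 'README.md'

# Same name, two identities:
git log --format='%an <%ae>'
```

On GitHub, each of those commits would be attributed to whichever account has claimed its email, which is exactly how one repo can appear to have two "users" with the same name.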



Wow. I didn't know you could track down the email address using commit information. Now this guy's online identity is exposed.



Honestly... what's the difference between this kind of malware and the data mining performed by websites, OSes (Windows 10, Android, iOS...), cellular companies, government backdoors, et al.? It's sickening (and criminal) that it's gotten to the point it has.


Did you read the article? This isn't a company intentionally mining user data, which is usually not criminal, since the EULA would have allowed those companies to do so. This is developers downloading Xcode from third-party sources that were compromised and injected third-party code into iOS apps, sending analytics to a third-party server without the knowledge of the original developers.


You're missing the point of my comment. The whole system of data mining, surveillance, stalking, and sharing of data with untold third parties IS criminal, EULA or otherwise. The only thing these guys did that breaks with 'current accepted (criminal) practices' is that they modified someone else's code to do what everyone else is already doing.

So yes... I read the article and a bit more...


What sort of analytics? Apple has blocked most of them. Why not just create some flashlight apps?


According to the posted source (if that's the actual one), it is just basic stuff.

But being able to inject code like that, as the article describes, could present fake login windows and whatnot for phishing attacks.


Can't believe anyone would download FREE software from anywhere other than the original author. This screams for this kind of attack!


Is it possible that developers outside China downloaded the infected Xcode? If so, it would be a critical issue.


Surely you can.


Makes me wonder if any other tool chains / compilers have already been co-opted in the wild.


What about the other Chinese iOS apps?


obviously to monitor and crush dissent


oh!!!!!!!!!


What about Android? Can the same scheme happen to Eclipse and affect Android apps, too?

Now, do we all switch to Windows phones?


Windows Phone is indeed a much safer OS for the moment. Security by obscurity. It worked for OS X for quite some time in the 90s and early 2000s :)


Because Google is not available in China, most Android developers at big companies know how to get across the GFW; once across, Google's CDN makes the download very fast. So I think Android apps may be less affected in this situation.


Wrong. Most of the time Google is accessible in China now, unlike several years ago.


This scheme isn't needed with Android, since most devices are vulnerable to well-known exploits.


> The compromised version of Xcode was hosted on Baidu Pan. It is unlikely that Baidu was aware of the compromised version of Xcode.

I'm sorry, but at this point I no longer think it is unlikely that Baidu was aware. I find it too coincidental, based on the simple fact that they were also involved with the DDoS attack on GitHub earlier this year.


"Baidu Yun" is an online file locker (like Dropbox) with a generous free quota. You can create a shareable link for a file with a single click. Why would you think someone at Baidu would have knowledge of a particular file a user shared on that service?

Separately, you bring up Baidu's "involvement" in the DDoS attack on GitHub. I remember reading that this was achieved using a man-in-the-middle attack on customers of Baidu's analytics product, which would not need Baidu's cooperation: http://www.netresec.com/?page=Blog&month=2015-03&post=China%...


The source is also hosted on GitHub, so GitHub must be involved in this too? Funny logic. I bet you have never used Baidu Pan and have no idea how it is used by millions of Chinese people every day.



