I can think of two reasons:
1. It's immediately clear to users that they're seeing content that doesn't belong to your business but instead to your business's users. Maybe less relevant for GitHub, but imagine if someone uploaded something phishing-y and it was visible at a URL like google.com/uploads/asdf.
2. If a user uploaded something like an HTML file, you wouldn't want it to be able to run JavaScript on google.com (because then it can steal cookies and do bad stuff). CSP rules exist, but it's a lot easier to sandbox users' content entirely like this.
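For contrast, here's a minimal sketch of the header-based approach that point 2 alludes to, i.e. what you'd do if the uploads had to stay on your own origin (an Express-style server is assumed; the route and upload directory are invented):

```typescript
// Minimal sketch (Express assumed; the /uploads route and directory are
// hypothetical) of serving user files so the browser treats them as inert.
import express from "express";

const app = express();

app.get("/uploads/:name", (req, res) => {
  // CSP "sandbox": this response gets a unique opaque origin, so any HTML
  // in it can't run same-origin script against the parent site.
  res.setHeader("Content-Security-Policy", "sandbox");
  // Offer the file as a download rather than rendering it inline.
  res.setHeader("Content-Disposition", "attachment");
  // Stop the browser from sniffing the body into an executable type.
  res.setHeader("X-Content-Type-Options", "nosniff");
  // sendFile with a root refuses paths that escape the uploads directory.
  res.sendFile(req.params.name, { root: "/srv/uploads" });
});

app.listen(8080);
```

Getting every one of those headers right on every user-content response is exactly the kind of thing that's easy to mess up, which is the argument for a separate domain.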
> if a user uploaded something like an html file, you wouldn't want it to be able to run javascript on google.com (because then you can steal cookies and do bad stuff)
Cookies are the only problem here; as far as I know, everything else should be sequestered by origin, which includes the full domain name (and port and protocol). Cookies predate the same-origin policy, so browsers scope them using their best guess at what the topmost single-owner domain name is, using (I kid you not) a compiled-in list, the Public Suffix List[1]. (It's as terrifying as it sounds.)
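You can poke at that compiled-in list yourself, e.g. with the npm `psl` package (the package and its `get()` call are real; the exact outputs assume the current list contents, where github.io is listed but github.com is not):

```typescript
// The npm "psl" package bundles a copy of the Public Suffix List, the same
// kind of compiled-in list browsers consult when scoping cookies.
import psl from "psl";

// github.com is an ordinary registrable domain, so all of its subdomains
// share a cookie scope with it:
console.log(psl.get("gist.github.com")); // "github.com"

// github.io is ON the list, so every user site counts as its own
// "topmost single-owner domain" and gets isolated cookies:
console.log(psl.get("alice.github.io")); // "alice.github.io"
```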
3. If someone uploads something bad, it could potentially get your entire base domain blocklisted by various services, firewalls, anti-malware software, etc.
A user-controlled subdomain such as username.github.com:
- is allowed to set cookies scoped to *.github.com, interfering with cookie mechanisms on the parent domain and its other subdomains, potentially resulting in session fixation attacks (see the sketch below)
- will receive cookies scoped to *.github.com. In IE, cookies set from a site with address "github.com" will by default be scoped to *.github.com, resulting in session-stealing attacks. (Which is why it's traditionally a good idea to prefer keeping 'www.' as the canonical address from which apps run, if there might be any other subdomains at any point.)
So if there's any chance of an attacker getting scripting access into that origin, best that it not be a subdomain of anything you care about.
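A minimal sketch of the cookie-planting half of this (Express assumed; the domain comes from the bullets above, the values are invented): any page served from a subdomain can set a parent-scoped cookie that the browser will then attach to requests to github.com and every sibling.

```typescript
// Sketch (Express assumed): a page on a user-controlled subdomain,
// say evil.github.com, planting a cookie scoped to the whole domain.
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  // domain: "github.com" scopes the cookie to *.github.com, which is the
  // session-fixation risk described in the first bullet.
  res.cookie("session", "attacker-chosen-value", {
    domain: "github.com",
    path: "/",
  });
  res.send("planted");
});

app.listen(8080);
```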
A completely separate domain is more secure because it's impossible to mess up. From the browser's point of view githubusercontent.com is completely unrelated to github.com, so there's literally nothing github could accidentally do or a hacker could maliciously do with the usercontent site that would grant elevated access to the main site. Anything they could do is equally doable with their own attacker-controlled domain.
I think one reason is that a subdomain of github.com (like username.github.com) might be able to read and set cookies that are shared with the main github.com domain. There are ways to control this but using a different domain (github.io is the one I'm familiar with) creates wider separation and probably helps reduce mistakes.
I read about this a while back but I can't find the link anymore (and it's not the same one that op pointed to).
Client browsers have no "idea" of subdomains, either. If I have an example.com login saved, plus logins for one.example.com and two.example.com, a lot of my browsers and plugins get weird about wanting to save that two.example.com login as a separate entity. I run ~4 domains, so I use a lot of subdomains, and the root domain (example.com) now has dozens of passwords saved. When I stand up a new service on three.example.com, the browser will suggest some arbitrary subset of those passwords from example.com, one.example.com, and two.example.com.
Imagine if eg.com allowed user subdomains and some users added logins on their subdomains for whatever reason: there's potential for an adversarial user to stand up a subdomain and just record every login attempted, because browsers will automagically autofill into any subdomain.
If you need proof I can take a screenshot; it's ridiculous, and I blame Google. User subdomains used to be the standard way of having users on your service, and then PHP and Apache rewrite-style usage made example.com/user1 more common than user1.example.com.
Because there's stuff out there (software, and entities such as Google) that assumes the same level of trust in a subdomain as in its parent and siblings. If something bad ends up being served on one subdomain, they can distrust the whole tree, which can be very bad. So you isolate user-provided content on its own SLD to reduce the blast radius.
I've read that it's because, if a user uploads content that gets you onto a list that blocks your domain, you can switch your hosting to a fresh user-content domain after purging the bad content. If it's hosted under your primary domain, your primary domain is still going to be on that blocklist.
An example of my own: I have a domain that allows users to upload images, and some people abuse that. If Google delists the user-content domain, I haven't lost any SEO on the primary domain.
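A sketch of what makes that swap cheap (the env var name and path scheme are invented for illustration): derive every user-content URL from one configurable host, so rotating to a fresh domain after a blocklisting is a config change rather than a code change.

```typescript
// Made-up sketch: all upload URLs are built from one configurable host.
// USER_CONTENT_HOST and the /i/ path scheme are invented for illustration.
const USER_CONTENT_HOST =
  process.env.USER_CONTENT_HOST ?? "usercontent1.example";

function imageUrl(imageId: string): string {
  return `https://${USER_CONTENT_HOST}/i/${encodeURIComponent(imageId)}`;
}

console.log(imageUrl("cat.png")); // https://usercontent1.example/i/cat.png
```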
This is probably the best reason. I had a project where it went in reverse. It was a type of content that was controlled in certain countries. We launched a new feature and suddenly started getting reports from users in one country that they couldn't get into the app anymore. After going down a ton of dead ends, we realized that in this country, the ISPs blocked our public web site domain, but not the domain the app used. The new feature had been launched on a subdomain of the web site as part of a plan to consolidate domains. We switched the new feature to another domain, and the problems stopped.
CDNs can also be easier to configure: it's simpler to colocate CDN nodes into POPs when the content is segregated onto its own hostname, and you get more options for geo-aware routing and name resolution.
Also, in the case of HTTP/1.x, browsers limit the number of simultaneous connections per host or domain name, and splitting content across hostnames was a technique for doubling those parallel connections. With the rise of HTTP/2 this is becoming moot, and I'm not sure of the exact rules of modern browsers to know if it's still true anyway.
There are historical reasons regarding browsers' per-host connection limits. You would put your images, scripts, etc. each on their own subdomain for the sake of increased parallelization of content retrieval; CDNs came after that. I feel like I was taught in my support role at a webhost that this was _the_ reasoning for subdomains initially, but that may have been someone's opinion.
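For the record, the old trick ("domain sharding") looked roughly like this (hostnames invented; obsolete under HTTP/2 multiplexing):

```typescript
// Sketch of HTTP/1.x-era domain sharding: hash each asset path onto one of
// a few hostnames so the browser opens extra parallel connections.
const SHARDS = ["static1.example.com", "static2.example.com"];

function shardedUrl(path: string): string {
  // Stable hash so the same asset always lands on the same shard,
  // keeping browser caches effective.
  let h = 0;
  for (const ch of path) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return `https://${SHARDS[h % SHARDS.length]}${path}`;
}

console.log(shardedUrl("/img/logo.png"));
```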
Search engines, anti-malware software, etc. track sites' reputations. You don't want users' bad behavior affecting the reputation of your company's main domain.