MarkMonitor left 60k domains for the taking (ian.sh)
420 points by agwa on Aug 29, 2021 | 99 comments



“MarkMonitor sells themselves as the domain registrar that does not make mistakes. (…) MarkMonitor is not a cheap solution to this problem, but it is widely used (apparently by "more than half of the Fortune 100", per the page)”

And then:

“MarkMonitor does not have a way of disclosing security issues, which inhibited reporting this to them in a timely manner. They have not responded to any of our communications.”

Should anyone be surprised that a company that claims not to make mistakes, yet has no way to report vulnerabilities, ends up with its vulnerability on the front page of HN? (And this is the good scenario, as opposed to their customers being hacked, which could have happened had an attacker claimed some of these domains.)

Let’s hope they at least respond by putting a bug bounty program in place.


Oops. People pay a lot of money to MarkMonitor because their promise is to not make such mistakes.


When I was a customer, screwing up a parking page wasn't the kind of mistake my employer was paying them not to make. My employer was paying them not to let outside groups change our domain's DNS settings.

But that was for domains we were actively using. I don't even remember if we had a parking page on the domains we only had to sit on; if we did, it wouldn't have been great for them to show something attacker-controlled for a little while, but it also wouldn't have been a big deal. Parked domains weren't in active use, didn't have links going to them, and probably shouldn't seem credible to others.

I think we might have set up the parked domains to just redirect to our main website, but yeah.


> it also wouldn't have been a big deal for them to show something attacker controlled for a little while

This is understating the risk. If an attacker obtains control over a website for even a brief period of time, they can go to a certificate authority and obtain a domain validation for the domain (including all subdomains) that's valid for one year. At the end of that year, they can use the domain validation to obtain an SSL certificate that's valid for up to one year, even if they no longer control the website. This means that following a brief takeover, the domain continues to be vulnerable to attack for two years, which would be a problem if a parked domain is transitioned to active use during that time.


Nowadays a bit lessened by CT logs, so you (or MM) can at least notice and cause the cert to be revoked.


Yes, although the attacker can defer logging the certificate to CT until the moment before the certificate is used in an attack, so you would not have much time to respond.


How? The CA submits records (precertificates) to the logs before it signs the cert.


Logging is not a policy requirement of the trust stores.

It's a condition for your certificates to work in some popular browsers (notably Chrome and Safari) but not a policy requirement.

This is on purpose. Google themselves for example will get a certificate issued, say, today, for shiny-new-google-product.example and then only at the last moment do they log that certificate when spinning up the web site https://shiny-new-google-product.example/ to launch shiny-new-google-product -- you can't find out about it in the CT logs because it wasn't in there earlier.

Now, the readily available and especially free certificates most people use are logged before issuance, as you describe, using the poisoned "pre-certificate" feature from RFC 6962 (in theory this will one day be replaced by a 6962bis actual pre-certificate document rather than poisoned X.509 certs). But that is not at all mandatory; it's just convenient because it requires no workflow changes. You get a certificate, it's a bit bigger (it has SCTs baked inside it), and it just works.

If you do things the hard way, you must ensure your server application software understands what SCTs are and can send them to clients as necessary. You save some bytes talking to clients that don't want the SCTs, and you get to have this just-in-time logging behaviour if you want that.

The other reason it isn't against policy to issue without logging is that some archaic systems that are subject to the Web PKI rules but aren't actually talking to the public Internet, and especially not to web browsers, do not have logging; it's not a thing for them. This will probably go away in the next few years as these systems age out, but they still existed when I last looked.
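If you're curious which path a particular certificate took, the embedded-SCT case is easy to spot from the certificate itself. A rough sketch with Python's 'cryptography' package (cert.pem is just a placeholder filename):

    # Print the SCTs baked into a certificate, i.e. the "convenient" pre-certificate logging path.
    from cryptography import x509
    from cryptography.x509 import PrecertificateSignedCertificateTimestamps

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    try:
        ext = cert.extensions.get_extension_for_class(PrecertificateSignedCertificateTimestamps)
        for sct in ext.value:
            # Each SCT names the CT log (by its ID) and records when the pre-certificate was logged.
            print(sct.log_id.hex(), sct.timestamp)
    except x509.ExtensionNotFound:
        print("No embedded SCTs -- logging may be deferred (TLS extension / stapled OCSP).")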


thanks for the context, that pretty much answered all my follow-up questions. :)

The "weird old systems" case sounds like something were maybe it could be required for DV-certs?


Logging is only valuable if it's enforced. A mandate ("You must log everything") with no enforcement ("... but we won't check") is futile. At the very most maybe it adds one more to what is likely a long list of problems with a CA during a distrust discussion.

In effect there is already a requirement on CAs to internally track everything they issue, because there are numerous circumstances where the question, "What was issued with these criteria?" is pertinent and being unable to answer honestly is unacceptable. So, it doesn't make much sense to introduce logging as a policy requirement. I can't say it won't happen but I don't think it's useful. In contrast improvements to client software to actually examine SCTs in more clients and eventually to gossip about what SCTs they've seen are valuable because they improve the practical enforcement of logging.

What we see today is that most incidents are logged. A CA will issue something that shouldn't exist, and independent researchers will see that in the logs, but the CA's own tools didn't discover it, neither before logging nor, often, afterwards.

Under existing policy the pre-certificate itself counts as misissuance (even if the "real" certificate never existed, we can't prove that), so the CA violated policy; the problem is that some of them only realise this after it's reported by somebody else, rather than flagging it immediately and self-reporting the violation. So this is reassuring, because it matches my assumption that the CA operators are like most of us: lazy and incompetent, but not malevolent. They can't be bothered to do it properly, they don't remember how to do it properly, but they aren't intentionally doing a bad job, which means that they can improve if given better tools and procedures to avoid trouble.


Yes, it only makes sense if organizations monitor their domains for unexpected certificates and actually react to that, which probably not enough people do.


Precertificates are optional: the CA can skip creating one entirely. In that case the certificate does not contain any embedded SCTs. Instead, the certificate is logged using the add-chain endpoint[1], and the returned SCTs are delivered via an extension to the TLS handshake or an extension in a stapled OCSP response[2].

[1] https://datatracker.ietf.org/doc/html/rfc6962#section-4.1

[2] https://datatracker.ietf.org/doc/html/rfc6962#section-3.3
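For anyone curious, add-chain is just an HTTPS POST of the base64 DER chain to the log. A rough sketch in Python (the log URL and file names are placeholders; a real submission has to go to a log that accepts the certificate's expiry range):

    # RFC 6962 section 4.1: submit a certificate chain and get an SCT back.
    import json
    import requests

    LOG_URL = "https://ct-log.example/ct/v1/add-chain"  # hypothetical log endpoint

    def pem_body(path):
        # The API wants base64 DER, which is exactly the PEM body without its armor lines.
        with open(path) as f:
            return "".join(line.strip() for line in f if "-----" not in line)

    chain = [pem_body("leaf.pem"), pem_body("intermediate.pem")]
    resp = requests.post(LOG_URL, json={"chain": chain})
    print(json.dumps(resp.json(), indent=2))  # the SCT: log id, timestamp, signature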


thanks, I didn't know both steps (pre-cert and actual cert submission into logs) were optional.


> At the end of that year, they can use the domain validation to obtain an SSL certificate that's valid for up to one year, even if they no longer control the website.

Can you explain this further? I don't understand.

Are you saying they could MITM the request that the CA makes to the website when it tries to do domain validation? If that's the case, why is it limited to 2 years? Why couldn't they continue to do this indefinitely?


No, they don't have to MitM the CA's domain validation request. While they have brief control over the website, they use domain validation method 3.2.2.4.18 (Agreed-Upon Change to Website v2)[1] or 3.2.2.4.19 (Agreed-Upon Change to Website - ACME)[2] to legitimately complete domain validation by making a change to the website.

Due to domain validation reuse[3], the certificate doesn't have to be issued right away. The attacker can wait and request the certificate up to 398 days later, without having to do domain validation again.

[1] https://github.com/cabforum/servercert/blob/cda0f92ee70121fd...

[2] https://github.com/cabforum/servercert/blob/cda0f92ee70121fd...

[3] https://github.com/cabforum/servercert/blob/cda0f92ee70121fd...
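To make the "brief control is enough" point concrete: during the takeover window the attacker only has to serve one CA-supplied token from the hostname. A toy sketch (paths and token are made up; ACME's http-01 serves from /.well-known/acme-challenge/ instead, and binding port 80 normally needs privileges):

    # Serve a validation token under /.well-known/ -- whoever controls the site at that
    # moment, legitimate owner or not, can pass the "agreed-upon change to website" check.
    import http.server
    import os

    os.makedirs(".well-known/pki-validation", exist_ok=True)
    with open(".well-known/pki-validation/token.txt", "w") as f:
        f.write("ca-supplied-random-value\n")

    http.server.HTTPServer(("", 80), http.server.SimpleHTTPRequestHandler).serve_forever()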


> This issue is not entirely the fault of MarkMonitor. While they need to be careful with handling parked domains, AWS is at fault for not being more stringent with claiming S3 buckets. Google Cloud, for example, has required domain verification for years, rendering this useless.

This sounds like more of an Amazon problem than a MarkMonitor problem to me. And it makes a good case for using other cloud providers over AWS, as with GCP this attack wouldn't have been possible. Merely creating DNS records pointing at the cloud provider shouldn't be enough for anyone else to claim the name and start hosting.


I hear about subdomain takeovers with S3 all the time. It seems insane Amazon doesn't require any domain verification.


This feels like a typical "why we can't have nice things"-scenario. You're in control of DNS. You decided to create the DNS-record to point to S3. It's a little odd to then come complaining when Amazon does what you wanted it to do.

A fun exercise could be to boot up a machine on pretty much any cloud provider and set up a simple webserver that responds to all Host headers. Heck, I'm pretty sure you could make nginx respond only to requests where both the Host header and the A/AAAA record were correct with some Lua magic. In this scenario, would you blame the cloud provider or the administrator of the domain?

(Not necessarily you-you, but you get the idea.)
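For illustration, here is that stricter behaviour sketched in plain Python rather than nginx/Lua: only answer requests whose Host header actually resolves back to this machine (MY_IP is a placeholder):

    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MY_IP = "203.0.113.10"  # this server's public address (documentation-range placeholder)

    class StrictHostHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = (self.headers.get("Host") or "").split(":")[0]
            try:
                resolved = {ai[4][0] for ai in socket.getaddrinfo(host, None)}
            except socket.gaierror:
                resolved = set()
            if MY_IP in resolved:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"hello, expected host\n")
            else:
                # Unknown or dangling name pointed at us: refuse to serve it.
                self.send_response(421)
                self.end_headers()

    HTTPServer(("", 8080), StrictHostHandler).serve_forever()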


Relatedly, this brings to mind properly storing your web server's full access logs indefinitely, or at least as a baseline "until I write some exactingly specific filtering logic and get 0 unclassified results back out". That doesn't have to happen immediately, but by all means hoard 50GB of traffic logs until you do get it done. That sort of thing.

You'd naturally be capturing the requested host here, ideally along with all other request headers.

And ideally flagging unknown host headers in close to real time, so you can have fun with your visitors next time they say hello :>


S3 allowing customers to incorporate a "domain" or any part of it as part of the bucket's name is a contributing factor here, in my opinion.

The name of the bucket (which ends up being a part of the bucket's URL) should be entirely outside of the user's control. Make it a random 32 character string (and make it such that old strings can't ever be recycled).

This way people can't "register" buckets for domains they don't own.
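Something as simple as this sketch would do (any generator of valid bucket-name characters works):

    # Opaque bucket names the user never chooses, so they can't encode someone else's domain.
    import secrets

    bucket_name = secrets.token_hex(16)  # 32 lowercase hex characters, valid in an S3 bucket name
    print(bucket_name)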


What do they require? You can just claim any S3 bucket you want that's been 'abandoned'?


No. A bucket named after a hostname is what gets served when that hostname points at S3's servers.

Bob owns bob.com and sets a CNAME to bob.com.s3.us-east-1.amazonaws.com. Eve creates the bucket and can now serve any static content she wants.


That seems pretty reasonable to me. How should AWS verify domain ownership if not control over the CNAME?


Google provides several options for site verification. They give you a string that you must place on your website in one of the following ways (a sketch of the TXT-record check follows below):

- DNS TXT record

- HTML meta tag

- Upload a file with a randomized name to your site root
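A sketch of the TXT-record check (assumes the dnspython package; the token value is made up):

    import dns.resolver

    def has_verification_token(domain, expected="site-verification=abc123"):
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        for rdata in answers:
            # TXT records can be split into multiple character strings; join them back up.
            if b"".join(rdata.strings).decode() == expected:
                return True
        return False

    print(has_verification_token("example.com"))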


This is the reverse situation. What's going on here:

- Bob makes bob.com and sets the CNAME to be bobstaticsite.s3.aws

- Bob forgot to make a bucket called bobstaticsite.s3, and Alice, scanning the DNS records, creates it instead. Now Alice is serving data on Bob's domain

Am I missing something? How would AWS S3 stop you from creating a CNAME? Or how is this their responsibility? Don't make that CNAME entry pointing to a bucket you didn't create yet / don't own?

One final thing: S3 is not CloudFront. And CloudFront, AWS's CDN, does require domain name verification.


> Am I missing something? How would AWS S3 stop you from creating a CNAME? Or how is this their responsibility?

They (AWS) should not accept traffic on that hostname until ownership - tightly bound - is proven.

This also helps avoid related cases, such as deleting a bucket (which now frees it up) - other services like Heroku have similarly been vulnerable to this takeover hack because of a lack of strict verification: https://0xpatrik.com/subdomain-takeover-providers/

Every new bucket, service or account handling traffic for a domain should require re-verification. That verification should never persist beyond the lifetime of that resource.


> They (AWS) should not accept traffic on that hostname until ownership - tightly bound - is proven.

Why shouldn't they? Don't you think it can be a useful feature?


The problem is that AWS only allows hosting domains on S3 that match the hostname. i.e. If I want to host some content from S3 on 'static.mydomain.com', I MUST own the bucket of the same name.

If someone else creates the bucket on their AWS account, I am stuck. I would have to use a different hostname, or use a more complex workaround. Since creating empty buckets costs nothing, it effectively allows DoSing hostnames from an S3 hosting standpoint.

It would be one thing to allow this behavior if a domain owner explicitly wanted it, via a TXT record or something, but as it is, it's a poorly designed solution.


AWS can't stop you from creating the CNAME, but AWS could stop Alice from creating the bucket, or (IMHO better) not serve HTTP(S) traffic for bob.com from that bucket unless it is confirmed that the bucket belongs to the owner of bob.com.


Amazon can't "know", across all domains, whether one of them has a CNAME pointing at a given bucket name, though. Even if they did, that would be open to abuse by anyone.


derp, true. so only the second method works.


Could Eve then just register bob.com and point it to bob's random bucket to mess with him?

And shouldn't Alice be able to register Alice.com and point it at Bob's bucket if she feels like it?

It seems like the answer already exists: don't create CNAMEs haphazardly.

(But please explain it to me if I'm missing something here)


> Could Eve then just register bob.com and point it to bob's random bucket to mess with him?

If it blocks creation, she would have to know the bucket name Bob would want to create soon. At which point she could just create the bucket instead.

> And shouldn't Alice be able to register Alice.com and point it at Bob's bucket if she feels like it?

She can only do that with this method if Bob's bucket is called "alice.com". That doesn't seem important to support for random unrelated people; if it's a dedicated setup where Bob manages a thing for Alice, you could always have a way of granting that permission specifically, or of opting out of the verification.

> It seems like the answer already exists: don't create CNAMEs haphazardly.

"don't make mistakes". If people keep making a mistake all the time, that's not the greatest answer. There's a reason most other providers that allow you to point a domain at them do some verification.


How would it block creation? Is AWS supposed to keep a full record of CNAME entries for the Internet and keep it up to date?

Since CNAME records can point to other domains, I’m not sure how AWS is supposed to police this and allow cross-referencing from other parties. Blocking based on any CNAME presence could turn into a bucket-squatting exploit pretty quickly.

A CNAME record is an “I’m pointing my domain at some other place” record. Redirect to something out of your control, S3 or otherwise, and you’re handing over control of the returned content. That doesn’t so much strike me as a mistake but more as the way that CNAME records were designed to work.


yes, as mhio already pointed out the first variant doesn't actually work. The second does though, and is what's commonly done.

> That doesn’t so much strike me as a mistake but more as the way that CNAME records were designed to work.

"I hold something into a saw, and it gets cut. That doesn’t so much strike me as a mistake but more as the way that saws are supposed to work. Why are all these fingerless people saying our saws should have safety features?!"

We know such things happen to all kinds of people all the time. It has been part of actual security problems. We know how to fix it, because many service providers do require validation. Why not consider the validation?


You’ve tried the snark twice, but it’s not aiding your point.

What is the actual mechanism you think should be implemented, and how does that support cross-party referencing while avoiding other modes of vulnerability (e.g. denial of service)?


All of which are much weaker than the CNAME, since with a CNAME you can do all three.


It doesn't verify that the person creating the bucket controls the CNAME; that's the criticism. Instead it lets any AWS user claim it. (The usual verification method would be something like requiring a TXT record containing a secret tied to the bucket, and I guess for Route53 users they could integrate the settings from there.)


Right, but the owner of the domain needs to create a CNAME record that points to AWS for a bucket they don't control. How is that not purely on them?


If thousands of people keep making a mistake when using your product, you can go "purely on them, not our problem", true. Or you can do something to your product to make the mistake less likely to cause a problem.


The bucket can be created before they can do so, which is exactly what happened to MarkMonitor here. Now, technically you could create the bucket first before setting the CNAME, but this can quickly go wrong (especially when handling hundreds of domains) and it regularly does. There's no need to keep this footgun around.


S3 is not a CDN. S3 is a distributed file system. Use a CDN for CDN use cases (CloudFront) and you wouldn't have these issues.

There's a chance a normal AWS user may not understand the above distinction; a sophisticated actor like MarkMonitor (whose business is THIS) should. Instead they had a security incident. That's it. They used the wrong AWS service. They configured massive numbers of domains' DNS records incorrectly. They risked their customers' reputations.


If they cared, a reasonable choice would be to lean on the Web PKI. It is usual in the Web PKI for the leaf certificates to be certified both to identify a server (which is what is ordinarily done) and to identify a client. The only name usually provided for the subject is a DNS name, but in this case that's exactly the identity we want to prove.

That is, you'd prove you control www.example.com to AWS the exact same way www.example.com proves it is www.example.com to a web browser.
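As a sketch of the idea (the host and file names are placeholders, not a real AWS endpoint): the site presents its ordinary Web PKI certificate as a TLS client certificate, and the provider checks it the same way a browser would:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile="www.example.com.crt", keyfile="www.example.com.key")

    with socket.create_connection(("verification.provider.example", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="verification.provider.example") as tls:
            # The provider sees a client certificate whose SAN is www.example.com, signed by a
            # public CA -- the same proof of identity a web browser relies on.
            print(tls.version())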


If you control the CNAME it's trivial to get a cert.


Sure, but the point of this choice is to make it easy for a third party to verify your identity, the exact same problem as for web browsers visiting an HTTPS site.


Maybe Bob shouldn't have set a CNAME to bob.com.s3.us-east-1.amazonaws.com if he didn't own the bucket?!


That's easy to say when there's a single- or double-digit number involved, but three or four (or five!) digits?


Often they did have it but it was decommissioned or deleted.


What's the method of attack here? Bob at some point stops using the site and deletes the bucket but not the DNS entry, and you notice that and create a new bucket with that now-available name?


Or as happened here, bob points the site to S3 because he plans to put something there later, and Eve beats him to it.


No

This company never created the S3 buckets at all but set them in the CNAME, for domains that were purchased but never used.

They can change the CNAME at any time, so it's a pretty dumb but fairly inconsequential exploit. Potentially some purchaser didn't use the (generally free and default) whois privacy and could be associated with off-brand content.

Potentially someone observant can earn a bunch of ad dollars on a popular domain.


I don't think it's as inconsequential as you say. Coinbase.ca and google.ar were two of these domains. The author noted that TLS certs could have been minted and then used in MITM attacks, if, for example, Coinbase began operating in Canada.


So if I understand the issue correctly, MarkMonitor pointed all of those domains to S3, without first creating S3 buckets for the domains in question?


Yes, they first pointed them to S3, then created the parked domain page for each of them. There was a window where anyone could have claimed the bucket.


And ultimately they still own the domains and can always point them to a new IP address at any time. The window for exploitation seems kind of small and temporary.


> The window for exploitation seems kind of small and temporary.

That depends on whether you are attacking the clients or the server. If an attacker obtained a domain cert or wildcard cert, while in control of the domain, then the attacks can continue.


How? They now have a cert for that domain but the domain owner can point the domain to any IP address he wants. The article mentions this, but I don’t see the vuln. Can you explain?


Yes. If the attacker can interpose their own server on a network used by valid clients, they can redirect those clients' DNS to the fake server.

This is a MiTM attack that allows the fake server to be accepted by valid clients (on that network), stealing their credentials and then potentially their information from the server. There are actually many attacks that can happen here, one of which is simply to record the credentials, while passing the traffic through to the server and back from the server to the clients.

Even if the true DNS record's IP address is later changed to point at the correct server, those clients are unaware of the change, because their DNS caches have been poisoned to point at the malicious server.

PS There is a defense against this, certificate pinning, but that is not used much in practice.


DNS is very easy to spoof and redirect. There are proposals to secure it (DNSSEC, DNS-over-TLS, DNS-over-HTTPS) but none are widely used and instead the server's certificates are used to both authenticate the correct destination and encrypt traffic to it.

Redirecting traffic is much easier than generating certificates so a valid cert held by a bad actor can be a serious vulnerability.


>Redirecting traffic is much easier than generating certificates so a valid cert held by a bad actor can be a serious vulnerability.

This is bullshit. If you can redirect traffic you can almost always create a certificate unless you can only redirect a very limited set of traffic.

Redirecting traffic is literally all you need in order to be able to use certbot to generate a new cert.


MITM


I think we are going to need a little bit more. How does the attacker get traffic to route to them?


DNS is not (yet) generally secured. An attacker who e.g. controls a wifi AP can just substitute their own DNS responses and redirect users to their own addresses, and their servers have valid certificates for the domains they're supposed to be, so from the user side everything looks fine.


It's a bit silly; there's no meaningful defense defeated here. Since the BGP routing system is almost entirely trust-based, a determined attacker will be able to perform a MITM attack on your domain and get a domain-validated TLS certificate issued.

There's no real defense against this, TLS only stops dragnet attackers.


Only if they notice it among 60K+ domain names. There might be an automated system that silently proceeds to the next domain if the bucket has already been claimed.


A lot of windows are small and temporary.

In this case the attacker could have served anything out of the S3 bucket and conducted full blown phishing (google.ar, coinbase.ca), and generated valid certificates.

The order of operations here, and the risk MarkMonitor introduced for all their clients, is a big deal.


Agreed. The title is misleading...there were no domains 'taken', which would imply a change of ownership, especially in the context of a registrar.


'for the taking', ie: they were vulnerable to a domain takeover. The title is entirely accurate. They didn't say they were 'taken'.


Good point, I can see that perspective. I guess I should have used 'ambiguous' instead of misleading. My first thought, with current wording, is that they were allowing transfers out or letting domains expire. Perhaps I spent too much time in the registry business.


I also understood it as if they lost the domains.


This reminds me of a recent front page article that mentioned a developer's convenience (not those pesky ACID properties) is the major factor in choosing a database, since ACID is so 1990s. https://news.ycombinator.com/item?id=28330297


From Amazon's own documentation about this:

"An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket naming rules."

... but if you want to host a static website via S3, they explicitly say you need to create a bucket with the domain name as the bucket name :shrug:


> While many domains began responding with an S3 404, others began switching from S3 to the parked page. What is interesting is that DNS was not involved — all domains pointed to 93.191.168.52 both before and after the issue.

I don't get this. These domains weren't configured with CNAME DNS entries like one normally does. Does this mean MarkMonitor's parked domain name server was trying to load from S3, then falling back to something else if that failed?


I suspect (as the author of the post) that what happened is this: given that the IP address is Akamai's, they misconfigured Akamai to naively proxy the traffic towards S3.

S3 will just pick a bucket based on the Host header in the request, and it seems like Akamai just proxied that from the client.
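You can see that behaviour directly; roughly like this (the bucket/domain names are placeholders):

    # S3 picks the bucket purely from the Host header of the request.
    import requests

    r = requests.get(
        "http://s3.us-east-1.amazonaws.com/",
        headers={"Host": "parked-domain.example.s3.us-east-1.amazonaws.com"},
    )
    print(r.status_code)  # 404 "NoSuchBucket" if nobody has claimed that bucket name yet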


Ahh, sure, that'll do it.

That's quite a big oops. I'd love to see the panicked messages on their internal slack as they figure out what they did.


I guess they have an automatic system to do the same thing as your exploit. After they misconfigured Akamai, their system and your system were racing to claim the exposed S3 buckets.


What does the OP's "S3 detection" mean in this context?

I'm not clear how the list of vulnerable domains was collected in this instance - presumably he had to know which domains to create buckets for?


Presumably he's scanning DNS records looking for subdomains (A or CNAME records) pointing to S3 buckets. Script then checks to see if these buckets exist and outputs a list of subdomains where the bucket doesn't exist.

At least that's my understanding of his process.
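Roughly this kind of thing; a hedged sketch assuming dnspython and requests (the domain list would be whatever you're scanning):

    import dns.exception
    import dns.resolver
    import requests

    def dangling_s3(domain):
        """Return True if domain CNAMEs to S3 but the bucket behind it doesn't exist."""
        try:
            rdata = next(iter(dns.resolver.resolve(domain, "CNAME")))
        except dns.exception.DNSException:
            return False
        target = str(rdata.target).rstrip(".")
        if "amazonaws.com" not in target or "s3" not in target:
            return False
        try:
            # A missing bucket makes S3 answer the request with a NoSuchBucket error page.
            return "NoSuchBucket" in requests.get("http://" + domain, timeout=10).text
        except requests.RequestException:
            return False

    for d in ["parked1.example", "parked2.example"]:  # your own candidate list
        if dangling_s3(d):
            print("takeover candidate:", d)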


I found a similar issue in a huge, very popular site. Thousands of root domains ripe for the taking. I sent a heads up to the company in question, but never got a reply--


Have they fixed the issue?


Nope


This should be something added to an AWS Security Hub check, e.g. do you have Route53 pointed to an unregistered domain. Otherwise, I can't find fault with AWS at all. Now I have to review our domains on Monday.
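Something like this boto3 sketch could be the bones of such a check (pagination and error handling omitted; assumes credentials with Route53 and S3 read access):

    # Flag Route53 records that point at S3 but whose bucket doesn't exist anywhere (HTTP 404).
    import boto3
    from botocore.exceptions import ClientError

    r53 = boto3.client("route53")
    s3 = boto3.client("s3")

    for zone in r53.list_hosted_zones()["HostedZones"]:
        for rec in r53.list_resource_record_sets(HostedZoneId=zone["Id"])["ResourceRecordSets"]:
            targets = [rr["Value"] for rr in rec.get("ResourceRecords", [])]
            if rec.get("AliasTarget"):
                targets.append(rec["AliasTarget"]["DNSName"])
            for t in targets:
                # Heuristic match on S3 website / virtual-hosted endpoints.
                if "amazonaws.com" in t and ("s3-website" in t or ".s3." in t):
                    bucket = rec["Name"].rstrip(".")  # website hosting: bucket name == record name
                    try:
                        s3.head_bucket(Bucket=bucket)
                    except ClientError as e:
                        if e.response["Error"]["Code"] == "404":
                            print("dangling:", rec["Name"], "->", t)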


Please bug your TAM about this, if you have one. I’ve bugged ours. S3 should not serve a bucket as a website without domain verification. In the interim, we’ve built middleware so that a bucket serving content can’t be removed until the DNS record has been.


Or use CloudFront with an S3 bucket as backing for this use case, like you'd expect? CloudFront has domain name verification.


The point is that without domain verification, it won't stop someone else from registering that bucket (which is what happened with the domains in this article).


The point is that S3 isn't a CDN. If you use it as a CDN, it's on you to ensure it'll work for your use case. CloudFront, however, is a CDN, and as expected, has domain verification.


Whether it's a CDN is irrelevant. This is already a supported use-case for S3 which is why it even has this functionality.

It's one of many products that supports serving under custom hostnames and all such products should have domain verification.


I take it back. S3 outlines this exact use case: https://docs.aws.amazon.com/AmazonS3/latest/userguide/IndexD...

Because of this, I agree, they should verify domain ownership to help protect their users.


I've seen this attack with cloudfront and an S3 bucket.

You can verify your domain is pointed at a CloudFront distribution you own, but it doesn't verify that the origin is a bucket you own.


Another great post Ian!

If any researcher here needs data or help to do investigations like this, please reach out to me: chris at securitytrails.com - we're trying to hone the tools to be as useful as possible with as little effort as possible.


Can someone explain how a subdomain takeover is done in layman terms?


Subdomain points to a hosting provider. Hosting doesn’t know who the owner is (yet) and waits for someone to sign up/register. Attacker signs up before the real owner does, is lucky that the hosting provider does not verify ownership, and is able to serve whatever they want on the domain, for example a fake website or fake verification files.


Or often the other way around - a subdomain for some legacy feature points to a host. After the feature gets axed the server gets shut down (as it costs money) but the domain is left dangling. Anyone can claim the target subdomain (for CNAME) or IP (for A records) on the same hosting provider if not in use. Apart from fake websites it can also be used to bypass SOP/CORS protection in some contexts.


They should be using cloudfront to serve those pages over https.


This is satire, right?


Why so? Expecting to claim the bucket name for each of their domains is clearly not the way to go. With CloudFront they can have a single bucket and map up to 100 domains (or request a higher limit) per CloudFront distribution backed by that bucket.


I think it's an issue with MarkMonitor and not S3/AWS.

What if MarkMonitor pointed all parked domains at a bob.com.mys3.com service? Would I, as the mys3 provider, be at fault? I doubt it.

What if MarkMonitor pointed to an IP address they don't own? Still not their fault?


You appear to be shadow-banned... which means that almost no one can see your posts because of HN's algo and moderators (only people with show-dead enabled are able to see your posts).


Seems like a lot of people are blaming AWS. S3 bucket names have a grace period of 24 hours after a bucket is deleted during which they cannot be registered by anyone else. We call it "bucket sniping" internally. AWS does not have control of your DNS records. This falls under the typical cloud shared-security model.

It's purely a DNS management issue. This happens with other technologies too, like mail servers. Domains (unique identifiers, like S3 buckets) expire, someone registers one, spins up a mailserver, and begins recovering accounts via password resets. Who is to blame here: the application for allowing the password reset (provided the valid email/fetched the reset key), or the person who let the domain expire? I'd say the latter, and in the S3 situation I believe it's the same.



