We spent $20 to achieve RCE and accidentally became the admins of .mobi (watchtowr.com)
1624 points by notmine1337 3 months ago | 367 comments



Obviously there are a lot of errors by a lot of people that led to this, but here's one that would've prevented this specific exploit:

> As part of our research, we discovered that a few years ago the WHOIS server for the .MOBI TLD migrated from whois.dotmobiregistry.net to whois.nic.mobi – and the dotmobiregistry.net domain had been left to expire seemingly in December 2023.

Never ever ever ever let a domain expire. If you're a business and you're looking to pick up a new domain because it's only $10/year, consider that you're going to be paying $10/year forever, because once you associate that domain with your business, you can never get rid of that association.


This is the most obvious reason why Verisign is a monopolist and should be regulated like a utility. They make false claims about choice and not being locked in. You buy a domain, you use it, you're locked in forever. And they know it. That's why they fight tooth and nail to protect their monopoly.


It sounds even worse if you stop using the word ‘buy’ and instead use the term ‘rent’. A DNS provider could 10,000x your domain cost and there’s nothing you can do about it.


> A DNS provider could 10,000x your domain cost

DNS providers can't do this.

It's domain registries that can.


This actually happened to me, but fortunately I never actually used the domain. I registered tweed.dev intending to use robert.tweed.dev as a personal blog. It wasn't classed as a "premium" domain and the first year was £5 or something IIRC, which was half price compared to the normal renewal fee.

The next year they decided it was premium after all, and wanted to charge £492,000 for renewal. I still have a screenshot of that, although needless to say I don't own the domain anymore.


Couldn't you just transfer it to another registrar? I guess they blocked that, but I wonder whether ICANN allows them to do so. It's indeed ridiculous.


Isn’t Google the .dev registrar?


They operate the registry, but are not a registrar (bad choice of terminology) since they sold off that part of their business to Squarespace. Unclear to me who actually raised the price here since you can register a .dev domain with many registrars.

That's insane though, I assumed renewal prices were more or less locked in after you own a domain. Even the premium ones that go for thousands say they renew at the standard $12 or whatever.


No kidding. I had a one letter .tm domain name back in the 90s and they (Turkmenistan) increased the fee to $1000/year.


Tbh this seems like a win—you want to incentivize making as much use of those short domains as possible.


Is this like forcing a tenant out of a property because you wish to raise the rent?


Yea, but in this case the property is very special. I don't think anyone has a right to own a "name" for perpetuity, especially such a short one—that's just extending property rights to a nonsensical place.

Granted, I also have zero respect for people who think that trademarks, patents, and copyright are still working to promote rather than stifle the arts and sciences, so I can understand why my above sentiment might rankle.


Ok please stop posting as darby_nine. I’d like my turn with that identity. I think it fits with some objectionable conspiracy theories I’d like to promote.


So instead of fair use you’d like to reserve domains for the rich?


Countries owning their ccTLDs seems basically correct to me. If you rent a `.tm` domain, you're doing business with the nation of Turkmenistan: might want to think about whether a TLD pun is worth taking on that relationship.


How do you know the TLD was a pun and not an otherwise appropriate use of the .tm TLD? By your logic why would anyone use a ccTLD?


It's the opposite: it's just an increase in rent, because the landlord wants to collect more rent.


Can they? I thought ICANN prevented such steep increases?


There are a bunch of different domain types all commingled together: non-premium gTLD domains, ccTLD domains, 3rd-level domains, registry-premium gTLD domains and, as added complexity, aftermarket domains, which could be any of the previously listed types.

ICANN provides some protection for standard gTLD domains, but it's minimal. You're guaranteed identical pricing to all other standard domain registrants on the gTLD, so they can only raise your price by raising the price of everyone else at the same time. That hasn't stopped some registries from 10x price increases though. The only thing it does is ensure they can't single you out and massively hike your renewal fee.

However, that does not apply to registry premium gTLD domains. When you register a registry premium domain you waive those protections and the registries can technically do anything they want.

If you register a ccTLD domain, you're at the mercy of that country's registry. If you register a 3rd level domain you're at the mercy of the 2nd level domain owner and they're regulated by either ICANN or a country based registry.

It's actually somewhat complex when you get into it.


Only for some TLDs; for stuff like ccTLDs there's no limit on how much a registry can charge.


To be clear, that's because the country that represents that ccTLD has sovereignty over it. That's also why they can have arbitrary, unusual requirements on them.


We can prevent this by paying the domain registrar ahead of time for N years. It's not a real solution, but it works (as well as any patch does).


And if your domain is really worth that much, you can sell it before it expires.


See also personal phone numbers, which are now "portable" and thus "required for every single identity verification you will ever perform", without being regulated, which means your identity is one $30 bill autopayment or one dodgy MVNO customer service interaction from being lost forever.


And try sharing a phone number. Almost every service assumes that everyone in a household has their own phone. Which is of course not true.

It just makes many services such as Credit Karma unavailable to anyone but the first person to signup.


Phone number portability is required by law in the US since 2003. See 47 U.S.C. § 251(b)(2)

https://www.fcc.gov/general/wireless-local-number-portabilit...


What if you need to stop paying for a phone bill entirely though? Maybe you're living paycheck to paycheck and money is just too tight this month. That's what I think GP was talking about.

Is it possible to "park" your phone number until you can start a new plan?


It's now possible. I work for an MVNO that was recently acquired. We have a $5 pause plan. It has no data, voice, or text; it just keeps your line active.


Wow. I’d save ~$0.52 (tax included) over my current plan with unlimited voice, and texts, and 5GB data…


Which provider do you use?


https://www.sim.de/ German provider


If I compared it to a service provider in Guinea, I could also say that you're overpaying way too much.


Germany is not exactly known for cheap plans, but apparently it’s worse in the US and you can only get comparable plans if you pay yearly, which I guess might just barely make a $5 parking contract worth it.


Yes, port it to Google voice.


It's Google. They can kill any service for no reason.


This wouldn't be surprising. It's sad they've let it atrophy the way that they have. My understanding is that they purchased it to train their digital assistant on the voicemails (where we would correct the transcripts for free)


I think that costs $20.


Yes, as a one-time charge.

Though AFAIK there's no law or contract term preventing Google from starting to charge a monthly fee in the future.

And after some time — for me it was 5+ years, porting from a baby Bell land line to a postpaid T-Mobile family plan for a couple years and then to Google Voice — your number will be tarred and feathered as a "VoIP" number and rejected for identity verification by some parties until it's ported back to a paid service (again, after some time).

Even so, it's nice that Google lets me keep the number I was born with for $0/month for as long as it lasts.


Google has already killed my sister's business's Enterprise Workspace plan, because they decided to change their mind, and make "unlimited storage" not a thing. She was paying $200/month and they now wanted $1,600/month. I decided to build a NAS for her instead.

This is despite written emails from their support confirming the use case (videography) and storage needs were suitable, and a written statement that she is "permanently grandfathered" once Google stopped offering the plan to new customers.

To make matters worse, they gave her 30 days to download all data before everything would be deleted permanently. This is how Google treats "enterprise" customers.


> your number will be tarred and feathered as a "VoIP" number and rejected for identity verification by some parties until it's ported back to a paid service (again, after some time).

Where things get fun is when Google Voice IS your paid service (e.g. Google Fiber's phone service, popular with a certain demographic that used POTS for most of their lives and wants to continue having a similarly behaving service).


Whatever the cost is, it's one time. I ported a number to Google Voice in 2016 and haven't paid a dime for it since then.


You can port your number to NumberBarn and park it for $2/month. Other services probably exist, but I signed up to NumberBarn ages ago and haven't had any issues the handful of times I've used them.


Do pre-paid plans not exist in the US?


Not regulated? They're portable because they're regulated.


Lose access to your number by any category of errors on your part or your carrier's part, and see what happens.

They're not tied to your person with much more permanency than a DHCP IP address. There's no process to verify your identity or recover your number or help you regain your accounts. The actual process for migrating your number is "Sign up with this other brand you've never tried before and tell them to politely ask your former brand to release the number to them".

If I lose my phone to a trash compactor, the process to change anything in my phone carrier account with regard to SIM cards is going to forward things to my Gmail account, which at random times and for random reasons is going to begin demanding two-factor authentication for logging in on a new device via texting my phone number.

There are all sorts of crazy scenarios that can arise with double binds like this.

If we had a resilient authoritative identity verification (say, the DMV, or US Passport Office), or if we had a diverse variety of low-trust identity factors that we could check multiple aspects of ("text my mother" / "here's a bill showing my address" / "here's a video of my face saying my phone number"), there would be a way out, but all of corporate America heard "2FA is required for security now" and said "So we just text them, right?"

That makes your phone not "another thing that people can use to talk to you in circumstances when you're not accessible", which the FCC's portability plan was maybe sufficient for, but a fragile single point of failure for your entire identity.


Google allows you to set up multiple types of second factors for 2FA purposes. There's no reason you should be relying solely on SMS for gmail's 2FA.


What about any other service that only allows sms 2fa?


I'd assume regulated in the sense of identity verification and transactions. There's no legal basis for needing a north American phone number, but good luck with any US obligations if you are without one.


Thankfully you can still get them without ID, for cash.

Unlike in Germany, where you can’t get one without a passport or ID card.


I’m wondering how feasible it would be to just use a SIM card from another country (e.g. in Estonia, you can get a prepaid card for 1 € that works in EU roaming just fine, with domestic-like prices on local calls). How many services in Germany require you to use specifically a German number?


The EU roaming thing usually works for 6-12 months until you are required to connect to the home network.


I don’t think that’s a big problem though? Especially if you live in Germany and get a SIM card in e.g. Czech Republic.


It depends of course how far you are. I used to use an orange Spain SIM before the EU roaming deal because they had free roaming on sister networks. But I didn't go there so much.


Several do require it.


There is an alternative to such regulation though. In the Netherlands, all registrars are required to support automatic transfers between registrars. You can look up your "transfer code", enter it at a new registrar, and they will handle transferring your domain (with DNS intact, etc.) while your old subscription stops.


GP is referring to the registry, not the registrar. There's lots of competition between registrars, but the registries have a post-sale monopoly on all domains.

Put another way, as soon as you register a .com domain, the only registry that can sell you a renewal is Verisign. If there weren't price controls, Verisign could increase the price of a .com renewal to $100 and there's nothing anyone could do but pay it.

This whole thread back to the root is right. Verisign has a monopoly, you can never drop a domain once it's associated with your business, and all of it should be regulated like a monopoly.


Yup. Think about what happened when the Internet Society almost sold the .org TLD to Ethos Capital and they were planning on raising the registration prices by a lot.


If you really want to get upset, go look what the NTIA did with the 2018 renewal of the .com agreement. Prior to 2018, the US DoC had a significant amount of oversight and control. The 2018 renewal pretty much gave .com to Verisign. The only thing the US DoC can do now is renew the contract as-is or withdraw.


Even Google managed to (briefly) fuck that one up.

https://money.cnn.com/2016/01/29/technology/google-domain-pu...


Always use subdomains. Businesses only ever need a single $10 domain for their entire existence.


Not true. If you are hosting user content, you want their content on a completely separate domain, not a subdomain. This is why github uses githubusercontent.com.

https://github.blog/engineering/githubs-csp-journey/


interesting, why is this?


I can think of two reasons: 1. it's immediately clear to users that they're seeing content that doesn't belong to your business but instead belongs to your business's users. maybe less relevant for github, but imagine if someone uploaded something phishing-y and it was visible on a page with a url like google.com/uploads/asdf.

2. if a user uploaded something like an html file, you wouldn't want it to be able to run javascript on google.com (because then you can steal cookies and do bad stuff), csp rules exist, but it's a lot easier to sandbox users content entirely like this.


> if a user uploaded something like an html file, you wouldn't want it to be able to run javascript on google.com (because then you can steal cookies and do bad stuff)

Cookies are the only problem here, as far as I know, everything else should be sequestered by origin, which includes the full domain name (and port and protocol). Cookies predate the same-origin policy and so browsers scope them using their best guess at what the topmost single-owner domain name is, using—I kid you not—a compiled-in list[1]. (It’s as terrifying as it sounds.)

[1] https://publicsuffix.org/
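
For the curious, the list itself is plain text and publicly downloadable; a quick peek (a sketch, assuming curl is available):

    curl -s https://publicsuffix.org/list/public_suffix_list.dat | grep -v '^//' | head -n 5
    # Browsers ship a snapshot of this file and use it to decide where
    # "registrable domain" boundaries (and thus cookie scoping) fall.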


There might be reason to block your user content.


3. If someone uploads something bad, it could potentially get your entire base domain blocklisted by various services, firewalls, anti-malware software, etc.


I'm wondering, many SaaS offer companyname.mysaas.com. Is that totally secure?


If it's on the PSL it gets treated similarly to second level "TLDs" like co.uk.


PSL = Public Suffix List

https://publicsuffix.org/


Wouldn't usercontent.github.com work just as well?


Script running on usercontent.github.com:

- is allowed to set cookies scoped to *.github.com, interfering with cookie mechanisms on the parent domain and its other subdomains, potentially resulting in session fixation attacks

- will receive cookies scoped to *.github.com. In IE, cookies set from a site with address "github.com" will by default be scoped to *.github.com, resulting in session-stealing attacks. (Which is why it's traditionally a good idea to prefer keeping 'www.' as the canonical address from which apps run, if there might be any other subdomains at any point.)

So if you've any chance of giving an attacker scripting access into that origin, best it not be a subdomain of anything you care about.


A completely separate domain is more secure because it's impossible to mess up. From the browser's point of view githubusercontent.com is completely unrelated to github.com, so there's literally nothing github could accidentally do or a hacker could maliciously do with the usercontent site that would grant elevated access to the main site. Anything they could do is equally doable with their own attacker-controlled domain.


I think one reason is that a subdomain of github.com (like username.github.com) might be able to read and set cookies that are shared with the main github.com domain. There are ways to control this but using a different domain (github.io is the one I'm familiar with) creates wider separation and probably helps reduce mistakes.

I read about this a while back but I can't find the link anymore (and it's not the same one that op pointed to).


Client browsers have no "idea" of subdomains, either. If I have an example.com login saved, and also a one.example.com and a two.example.com, a lot of my browsers and plugins will get weird about wanting to save that two.example.com login as a separate entity. I run ~4 domains, so I use a lot of subdomains, and the root domain (example.com) now has dozens of passwords saved. I stand up a new service on three.example.com and it will suggest some arbitrary subset of those passwords from example.com, one.example.com, two.example.com.

Imagine if eg.com allowed user subdomains, and some users added logins to their subdomains for whatever reason: there's a potential for an adversarial user to have a subdomain and just record all attempted logins, because browsers will automagically autofill into any subdomain.

If you need proof I can take a screenshot; it's ridiculous, and I blame Google. It used to be the standard way of having users on your service, and then PHP and Apache-rewrite-style usage made example.com/user1 more common than user1.example.com.


> Client browsers have no "idea" of subdomains, either.

They do. That's why the PSL exists. It applies to all CSP rules.

> If I have an example.com login saved,

That's the password-wallet thing. It uses different rules and has no standard.


Because there's stuff out there (software, entities such as Google) that assumes the same level of trust in a subdomain as in its parent and siblings. Therefore if something bad ends up being served on one subdomain, they can distrust the whole tree. That can be very bad. So you isolate user-provided content on its own SLD to reduce the blast radius.


I've read that it's because, if a user uploads content that gets you on a list that blocks your domain, you can technically switch your hosting to a new user-content domain after purging the bad content. If it's hosted under your primary domain, your primary domain is still going to be on that blocklist.

The example I have: I run a domain that allows users to upload images. Some people abuse that. If Google delists the user-content domain, I haven't lost SEO on the primary domain.


This is probably the best reason. I had a project where it went in reverse. It was a type of content that was controlled in certain countries. We launched a new feature and suddenly started getting reports from users in one country that they couldn't get into the app anymore. After going down a ton of dead ends, we realized that in this country, the ISPs blocked our public web site domain, but not the domain the app used. The new feature had been launched on a subdomain of the web site as part of a plan to consolidate domains. We switched the new feature to another domain, and the problems stopped.


CDNs can be easier to configure; you can more easily colocate your CDN nodes in POPs if it's simpler to segregate them, and you have more options for geo-aware routing and name resolution.

Also in the case of HTTP/1 browsers will limit the number of simultaneous connections by host or domain name, and this was a technique for doubling those parallel connections. With the rise of HTTP/2 this is becoming moot, and I'm not sure of the exact rules of modern browsers to know if this is still true anyway.


There's historical reasons regarding per-host connection limitations of browsers. You would put your images, scripts, etc each on their own subdomain for the sake of increased parallelization of content retrieval. Then came CDNs after that. I feel like I was taught in my support role at a webhost that this was _the_ reasoning for subdomains initially, but that may have been someone's opinion.


Search engines, anti-malware software, etc track sites' reputations. You don't want users' bad behavior affecting the reputation of your company's main domain.


Subdomains can also set cookies on parent domains, which also causes a security problem between sibling domains.

I presume this issue has been reduced over the years by browsers as part of the third-party cookies denial fixes...?

Definitely was a bad security problem.


Another aspect are HSTS (HTTP Strict Transport Security) headers, which can extend to subdomains.

If your main web page is available at example.com, and the CMS starts sending HSTS headers, stuff on subdomain.example.com can suddenly break.
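
You can check whether a site sends that directive with a quick header fetch (a sketch; example.com is a stand-in for your own domain, and the output line is illustrative):

    curl -sI https://example.com/ | grep -i strict-transport-security
    # e.g.  strict-transport-security: max-age=31536000; includeSubDomains; preload
    # The includeSubDomains token is what makes the policy cascade to subdomain.example.com.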


I actually think they need two: usually a second domain/setup for failover, especially if the primary domain is on a novelty TLD like .io, which showed that things can happen to a TLD at random. If the website is down that's fine, but if you have systems calling back to subdomains on that domain, you're out of luck. A good failover will help mitigate/minimize these issues. I'd also keep it at a separate registrar.

Domains are really cheap; I try to just pay for 5-10 year blocks (as many years as I can) to reduce the risk of issues.


And a second for when your main domain gets banned for spam for innocuous reasons.


I felt the need to get, in addition to (shall we say) foo-bar.nl, also foobar.nl, foo-bar.com, and foobar.com, because I don't want a competitor picking those up, and customers might type the name like that.


Don't forget about infrastructure domains, static-asset domains, separation of product domains from corporate domains ... there are plenty of good reasons to use multiple domains, especially if you're doing anything with the web where domain hierarchies and the same-origin policy are so critical to the overall security model.


For whatever it's worth, subdomain takeovers are also a thing and bug bounty hunters have been exploiting it for years.


A lot of interesting and informative rebuttals to this comment but no one anticipated the obvious counter argument.

Businesses only ever need two $10 domains, usercompany.com and company.com, just in case they ever want to host user generated content.


I think it's a sane practice to keep the marketing landing page on a separate domain than the product in case of SaaS.


Why? I always get frustrated when I end up in some parallel universe of a website (like support or marketing) and I can't easily click back to the main site.


The non-technical reason is that these are usually owned by different teams in your org (after you mature beyond a 5-person startup).

The technical perspective is that things like wildcard subdomains (e.g. to support yourcustomername.example.com), or DNSSEC if your compliance requires it, etc. become an extra burden when both use cases have to share one domain.

> can't easily click

Web pages have no problem linking to example.net from within example.com, or the reverse. That seems like an unrelated problem.


One potential reason is that marketing teams often want to do things that are higher risk than you may want to do on your main application domain. For example, hosting content (possibly involving a CNAME pointing to a domain outside your control) on a third party platform. Using a framework that may be less secure and hardened than your main application (for example WordPress or drupal with a ton of plugins) using third party Javascript for analytics, etc.


Could you elaborate on why? The companies I have worked for have pretty much all used domain.com for marketing and app.domain.com for the actual application. What's wrong with this approach?


If there’s any scope for a user to inject JavaScript, then potentially this gives a vector of attack against other internal things (e.g admin.domain.com, operations.domain.com etc)


Also, if for example the SaaS you’re running sends a lot of system emails that really shouldn’t end up in spam filters, you can’t afford to let things like marketing campaigns negatively influence your domain’s spam score.

Easier and safer to have separate domains.


But if companies did that then I never would have been able to buy coolchug.com!


I like the point you are making in this post. It makes me think about the Backblaze blog posts where they discuss the likelihood of enough drive failures to lose user data. Then, they decided the calculation result hardly matters, because people are more likely to forget to pay due to an expired credit card or email spam filtering (missed renewal reminders!).

How do mega corps remember to pay their domain bills? Do they pay an (overpriced) registrar for "infinity" years of renewals? This seems like a genuinely hard business operations problem.


Megacorps have their own top-level domains. For example there are .apple, .google, .amazon, .youtube and probably some more I've forgotten.

Even when companies don't have their own top-level domain, they can have their own domain registrar. For example "facebook.com" is registered with "registrarsafe.com" as registrar. The latter registrar is a wholly owned subsidiary of Facebook. I learned this from this HN thread https://news.ycombinator.com/item?id=28751497
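
This is easy to check from the command line; a sketch (output abridged to the relevant field, and the exact format varies by WHOIS server):

    whois facebook.com | grep -i 'Registrar:'
    # Registrar: RegistrarSafe, LLC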


The megacorp that I work at requires us to surrender any domain names we own to a central authority, which takes care of paying for them in perpetuity. Any domain names we buy, we also have to tell them about. Your boss's boss's boss gets a good stern talking-to if you're not following these procedures.


Services like https://www.markmonitor.com/ sort this out. Notice that google.com is registered with them.


Not all registrars are super evil. Sometimes the domain just goes down and then your customers start barking and you have a chance to renew it.

Found this out when some of our emails started bouncing...


> If you're a business and you're looking to pick up a new domain because it's only $10/year, consider that you're going to be paying $10/year forever, because once you associate that domain with your business, you can never get rid of that association.

Please elaborate...

Also, what about personal domains? Does it apply there as well?


As per the article, the old domain expired and was picked up by a third party for $20. Said domain was hard-coded into a vast number of networking tools never to be updated again, effectively giving the new domain owner unfettered access into WHOIS internals.


My brother used to own <our uncommon family name>.com and wrote on it a bunch. Eventually he bailed out and let it expire. It turned into a porn site for a few years and now it's for sale for like $2k from some predatory reseller.


Same happened to my personal website for which I purchased the domain when I was 14 (long time ago) and at some point decided that a .com domain is ridiculous for a personal website. Chinese porn site it was thereafter …


My old domain remains unregistered... Lucky me. I guess my last name was uncommon enough!


People bookmark stuff. Random systems (including ones you don’t own) have hardcoded urls. Best to pay for it forever since it’s so low of a cost and someone taking over your past domain could lead to users getting duped.

Personal domains are up to you.


A friend of mine recently let the domain used for the documentation of Pykka, a Python actor library, expire. Someone of course registered the domain, resurrected the content, and injected ads/spam/SEO junk.

Since the documentation is Apache License 2.0 there isn't much one can do, other than complain to the hosting provider about misuse of the project name/branding. But so far we haven't heard back from the hosting provider's abuse contact point (https://github.com/jodal/pykka/issues/216 if anyone is interested).


You might have accounts associated with the email. You might be a trusted or respectable member who would never.....


I have the feeling that any day now I’m gonna wake up in the morning and I’ll find out that there just isn’t internet anymore because somebody did something from a hotel room in the middle of nowhere with a raspberry pi connected to a wifi hotspot of a nearby coffee shop.


Reminds me of the dorms in college where the internet would get messed up because someone would plug in a random router from home that would hand out junk dhcp ip addresses. It's like that but for the whole world.


Sounds like BGP…


A significant amount of stuff is indeed held up by hopes and prayers [0], but by design, the internet was built to be robust [1]. In this case the scope was limited to .mobi.

[0] https://xkcd.com/2347/

[1] https://en.wikipedia.org/wiki/ARPANET#Debate_about_design_go...


Any connection to the recent "White House asks agencies to step up internet routing security efforts" [1] is purely coincidental.

[1] https://news.ycombinator.com/item?id=41482087


Even worse, the Raspberry Pi tripped, fell, and burst into flames for no good reason.


Why are tools using hardcoded lists of WHOIS servers?

Seems there is a standard (?) way of registering this in DNS, but just from a quick test, a lot of TLDs are missing a record. Working example:

    dig _nicname._tcp.fr SRV +noall +answer

    _nicname._tcp.fr. 3588 IN SRV 0 0 43 whois.nic.fr.
Edit:

There's an expired Internet Draft for this: https://datatracker.ietf.org/doc/html/draft-sanz-whois-srv-0...


A plain

  mobi.whois.arpa. CNAME whois.nic.mobi
could've already solved the issue. But getting everyone to agree and adopt something like that is hard.

Although as fanf2 points out below, it seems you could also just start with the IANA whois server. Querying https://www.iana.org/whois for `mobi` will return `whois: whois.nic.mobi` as part of the answer.
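
The same lookup works over port 43 as well; a sketch using a whois client that supports the -h flag:

    whois -h whois.iana.org mobi | grep -i '^whois:'
    # whois:        whois.nic.mobi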


The reality of life is that there are way more hardcoded strings than you imagine or there should be.


I have a feeling whois is way older than the concept of SRV records even


The first WHOIS db was created in early 70s, according to Wikipedia. So, older than DNS itself.


Because people build these tools for a one-time need and publish them for others (or in case they need to reference them themselves). Other "engineers" copy and paste without hesitating. Then it gets into production and becomes a CVE like the one discussed here.

Developer incompetence is one thing, but AI-hallucination will make this even worse.


I’ve seen so many teams that fail to realize that once you use a domain in any significant way, you’re basically bound to renewing it until the heat death of the universe – or at least the heat death of your team.

Whether it’s this sort of thing, a stale-but-important URL hanging out somewhere, someone on your team signing up for a service with an old domain-email, or whatever, it’s just so hard to know when it’s truly okay to let an old domain go.


O.M.G. - the attack surface gained by buying a single expired domain of an old whois server is absolutely staggering.


[flagged]


Do you have any references/examples of this?


Nope, this is just someone spreading AI hype.


tons

Rapid7, for example, uses LLMs to analyze code and identify vulnerabilities such as SQL injection, XSS, and buffer overflows. Their platform can also identify vulnerabilities in third-party libraries and frameworks, from what I can see.


Can you point me to a blog or feature of them that does this? I used to work at R7 up until last year and there was none of this functionality in their products at the time and nothing on the roadmap related to this. It was all static content.


Must've been another company then; I got the name confused.


Good thing you have tons of examples.

Right?


I would rather own a WHOIS server than a "decent sized quantized LLM"...


The real solution to WHOIS is RDAP.

Unfortunately, it isn't required for ccTLDs, and there are plenty of non-ccTLDs where it isn't working.

https://en.wikipedia.org/wiki/Registration_Data_Access_Proto...

https://resolve.rs/domains/rdap-missing.html


How does it mitigate the issues outlined in the article?


The root cause for the PHP vulnerability is trying to parse unstructured text. The actual information in WHOIS has structure: emails, addresses, dates, etc. This info should be provided in a structured format, which is what RDAP defines.

IMHO, there is no reason for a registrar not to support RDAP and have its RDAP server's address registered with ICANN.
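
For a concrete feel of the difference, RDAP is just JSON over HTTPS; a sketch using the rdap.org bootstrap redirector (the domain name is a placeholder, and the IANA bootstrap registry lives at https://data.iana.org/rdap/dns.json):

    # Redirects to the authoritative RDAP server for the TLD and returns JSON
    curl -sL https://rdap.org/domain/example.mobi | head -c 300
    # Fields like events, entities, nameservers and status come back as structured
    # data instead of free-form text that each client has to parse heuristically.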


Very cool work.

>The dotmobiregistry.net domain, and whois.dotmobiregisry.net hostname, has been pointed to sinkhole systems provided by ShadowServer that now proxy the legitimate WHOIS response for .mobi domains.

If those domains were meant to be deprecated, it would be better to return a 404. Keeping them active and working as normal reduces the incentive to switch to the legitimate domain.


Whois doesn't support HTTP status codes, but the shadowserver sinkhole responds with:

   Domain not found.

   >>> Please update your code or tell your system administrator to use whois.nic.mobi, the authoritative WHOIS server for this domain. <<<


The article implies they were broken for a few years and lots of clients did not notice this.


I think this whole approach to computer security is doomed to failure. It relies on perfect security that is supposed to be achieved by SBOM checking and frequent updates.

That is never going to work. Even for log4j, 40% of all downloads are of vulnerable versions, let alone when a vendor in the chain goes out of business or stops maintaining a component.

Everything is always going to be buggy and full of holes, just like our body is always full of battlefields with microbes.


nah, slowly but surely we can write good and reliable code, use that for things to make better tools, and then use those to ... :)

It will be probably a few decades, but the road seems pretty clear. Put in the work, apply the knowledge gained from all the "lessons learned" and don't stop.


I love the overall sense of "we didn't want to do this, but things just kept escalating"; they keep getting more than they bargained for at each step.

If only the naysayers had listened and fixed their parsing, the post authors might've been spared.


>You would, at this point, be forgiven for thinking that this class of attack - controlling WHOIS server responses to exploit parsing implementations within WHOIS clients - isn’t a tangible threat in the real world.

Let's flip that on its head - are we expected to trust every single WHOIS server in the world to always be authentic and safe? Especially from the point of view of a CA trying to validate TLS, I would not want to find out that `whois somethingarbitrary.ru` leaves me open to an RCE by a Russian server!


> $ sqlite3 whois-log-copy.db "select source from queries"|sort|uniq|wc -l

Oh cool, they saved the logs in a database! Wait... |sort|uniq|wc -l?? But why?


    SELECT COUNT( DISTINCT source ) FROM queries ORDER BY source ASC

    -- COUNT( DISTINCT ... ) ~= uniq | wc -l ;; sort without -u, is this busybox? ORDER BY col ASC

    -- wait, this doesn't need sort and uniq if it's just being counted...

    SELECT COUNT( DISTINCT source ) FROM queries
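
For what it's worth, the whole pipeline collapses into one sqlite3 invocation (reusing the database and table names from the command quoted above):

    sqlite3 whois-log-copy.db "SELECT COUNT(DISTINCT source) FROM queries"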


bash nerds vs sql nerds I guess, these people are bash nerds


Beats re-re-remembering how to do it in SQL.


And probably because for quick things like that you’re already working in a “pipeline”, where you first want to see some of the results, so you output with SQLite and then add more to the pipeline. Similarly, I often do ‘cat file | grep abc’ instead of just grep, probably out of habit.


I found that this is actually a good use case for LLMs. You can probably paste that one liner up there and ask it to create the corresponding SQL query.


yeah, they're good for cursed tools like that, ffmpeg, excel macros, etc etc


yeah, they could have done `sqlite …|sort -u|wc -l` instead and saved themselves a process invocation!


Hey now if you're just gonna count lines no need to sort it at all.


you need to sort it in order to uniq it, because uniq only removes duplicate consecutive lines.
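
A two-line demonstration of why the sort matters:

    printf 'a\nb\na\n' | uniq | wc -l          # 3 -- duplicate 'a' lines are not adjacent
    printf 'a\nb\na\n' | sort | uniq | wc -l   # 2 -- sorting makes duplicates adjacent first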


You know, it's been so long since I've used it, I completely forgot that fact. Alright, you win the battle of best correct bad sql to bash pipeline :).


This blog is a fantastic journey, it was well worth reading the whole thing.


Conjecture: control over TLDs should be determined by capture the flag. Whenever an organization running a registry achieves a level of incompetence whereby its TLD is captured, the TLD becomes owned by the attacker.

Sure there are problems with this conjecture, like what if the attacker is just as incompetent (it just gets captured again), or "bad actor" etc. A concept similar to capture the flag might provide for evolving better approaches toward security than the traditional legal and financial methods of organizational capture the flag.


Do we include the possibility of physically capturing the server?


It is an interesting question. Physical security is significant. On the other hand, the physical server is not necessarily the set of digital controls that establish the server's authenticity. The significant part is performing something similar to a "Turing test" whereby the capturer continues services just as if they were the previous operator of the service (but without the security holes).

OTOH, if the capture failed to also capture banking flows from customers to the service, then the capturer would have a paddle-less canoe.


It's grotesquely insecure and non-authoritative to rely on scraping contact details from random, unsecured, cleartext WHOIS to "authenticate" domain ownership, rather than asking the owner to provide a challenge cookie via DNS or hosted content.


> We recently performed research that started off "well-intentioned" (or as well-intentioned as we ever are) - to make vulnerabilities in WHOIS clients and how they parse responses from WHOIS servers exploitable in the real world (i.e. without needing to MITM etc).

R̶i̶g̶h̶t̶ o̶f̶f̶ t̶h̶e̶ b̶a̶t̶, S̶T̶O̶P̶. I̶ d̶o̶n̶'t̶ c̶a̶r̶e̶ w̶h̶o̶ y̶o̶u̶ a̶r̶e̶ o̶r̶ h̶o̶w̶ "w̶e̶l̶l̶-̶i̶n̶t̶e̶n̶t̶i̶o̶n̶e̶d̶" s̶o̶m̶e̶o̶n̶e̶ i̶s̶. I̶n̶t̶e̶n̶t̶i̶o̶n̶a̶l̶l̶y̶ s̶p̶r̶i̶n̶k̶l̶i̶n̶g̶ i̶n̶ v̶u̶l̶n̶e̶r̶a̶b̶l̶e̶ c̶o̶d̶e̶, K̶N̶O̶W̶I̶N̶G̶L̶Y̶ a̶n̶d̶ W̶I̶L̶L̶I̶N̶G̶L̶Y̶ t̶o̶ "a̶t̶ s̶o̶m̶e̶ p̶o̶i̶n̶t̶ a̶c̶h̶i̶e̶v̶e̶ R̶C̶E̶" i̶s̶ b̶e̶h̶a̶v̶i̶o̶r̶ t̶h̶a̶t̶ I̶ c̶a̶n̶ n̶e̶i̶t̶h̶e̶r̶ c̶o̶n̶d̶o̶n̶e̶ n̶o̶r̶ s̶u̶p̶p̶o̶r̶t̶. I̶ t̶h̶o̶u̶g̶h̶t̶ t̶h̶i̶s̶ k̶i̶n̶d̶ o̶f̶ r̶o̶g̶u̶e̶ c̶o̶n̶t̶r̶i̶b̶u̶t̶i̶o̶n̶s̶ t̶o̶ p̶r̶o̶j̶e̶c̶t̶s̶ h̶a̶d̶ a̶ g̶r̶e̶a̶t̶ e̶x̶a̶m̶p̶l̶e̶ w̶i̶t̶h̶ t̶h̶e̶ U̶n̶i̶v̶e̶r̶s̶i̶t̶y̶ o̶f̶ M̶i̶n̶n̶e̶s̶o̶t̶a̶ o̶f̶ w̶h̶a̶t̶ n̶o̶t̶ t̶o̶ d̶o̶ w̶h̶e̶n̶ t̶h̶e̶y̶ g̶o̶t̶ a̶l̶l̶ t̶h̶e̶i̶r̶ c̶o̶n̶t̶r̶i̶b̶u̶t̶i̶o̶n̶s̶ r̶e̶v̶o̶k̶e̶d̶ a̶n̶d̶ f̶o̶r̶c̶e̶ r̶e̶v̶i̶e̶w̶e̶d̶ o̶n̶ t̶h̶e̶ L̶i̶n̶u̶x̶ k̶e̶r̶n̶e̶l̶.

EDIT: This is not what the group has done upon further scrutiny of the article. It's just their very first sentence makes it sound like they were intentionally introducing vulnerabilities in existing codebases to achieve a result.

I definitely can see that it should have been worded a bit better to make the reader aware that they had not contributed bad code but were finding existing vulnerabilities in software which is much better than where I went initially.


Make sure you read the article since it doesn't look like they're doing that at all. The sentence you cited is pretty tricky to parse so your reaction is understandable.


I think you misinterpreted the sentence. They don't need to change the WHOIS client, it's already broken, exploitable, and surviving because the servers are nice to it. They needed to become the authoritative server (according to the client). They can do that with off-the-shelf code (or netcat) and don't need to mess with any supply chains.

This is the problem with allowing a critical domain to expire and fall into evil hands when software you don't control would need to be updated to not use it.


Yes, getting through the article I was happy to see that wasn't the case, and that these were just vulnerabilities that already existed in those programs.

They definitely could have worded that better so it doesn't sound like they had been intentionally contributing bad code to projects. I'll update my original post to reflect that.


I hear you. And I mostly agree. I’ve refused a couple genuine sounding offers lately to take over maintaining a couple packages I haven’t had time to update.

But also, we really need our software supply chains to be resilient. That means building a better cultural immune system toward malicious contributors than “please don’t”. Because the bad guys won’t respect our stern, disapproving looks.


You'd rather have blackhats do it and sell it to Asian APTs?


You're right. They should have just done it and told no one.

We need to focus on the important things: not telling anyone, and not trying to break anything. It's important to just not have any knowledge on this stuff at all


That was not my intention at all. My concern is that groups who do that kind of red-team testing on open source projects without first seeking approval from the maintainers risk unintentionally poisoning a lot more machines than they might initially expect. While I don't expect this kind of research to go away, I would rather it be done in a way that does not allow malicious contributions to find their way into mission-critical systems.

It's one thing if you're trying to make sure that maintainers are actually reviewing code that is submitted to them and can tell "bad code" from good, but a lot of open source projects are volunteer efforts, and maybe we should be shifting focus to how maintainers can be discouraged from accepting pull requests where they are not 100% confident in the code that has been submitted. Not every maintainer is going to be perfect, but it's definitely not an easy problem to solve overnight with a simple change of policy.


As an aside, I haven't seen a .mobi domain out in the wild in the past 6 years.


Pretty horrible negligence on the part of .mobi to leave a domain like this to expire.


Can't agree entirely. It's negligent, sure, but the negligent part wasn't letting it expire.

The negligent part was not holding the domain, serving an error result for 10 years, and responding to every request with a notice telling clients to stop using that domain. And I say 10 years because 10 years of having a broken system is already way too long to leave unaddressed, no matter how sluggish the service underneath.

You cannot be expected to cover your own ass for OTHER people's fuckups into perpetuity. Every system issuing a WHOIS query to a supposedly dead domain should be considered the actual responsible party for this.


Sure, though if you're a central provider like a registrar/ISP there are very bad things that happen no matter what you do with a domain.

Since the registrar could very easily determine whether or not the domain was in active use in the wild (and still return an error if they wanted), and didn't, I do consider it negligence.

People hard-code them; they end up in configs all over, especially in forgotten or hard-to-change places.

$20 a year forever is pretty cheap for a company.


Don't forget the mail servers, certificate providers, whois clients


Is this in the bugzilla/MDSP yet?




The cost of managing a domain portfolio is like compound interest — the more domains you add, the higher the renewal costs climb year after year.

It’s tempting to hold onto every domain ‘just in case,’ but cutting domains without a proper risk assessment can open the door to serious security issues, as this article points out.


I still remember when websites would redirect you on your phone to their .mobi website, completely screwing up the original intent. They didn't show you the mobile version of whatever page Google led you to, they just lazily redirected you to the .mobi homepage. I bet they asked a non-dev to do those redirects, that one IT neckbeard who shoved a redirect into an Apache2 config file and moved on with life. :)

But seriously, it was the most frustrating thing about the mobile web.

Is this TLD even worth a damn in 2024?


> Is this TLD even worth a damn in 2024?

IMO: No. Table stakes nowadays are for all web sites to support mobile devices; the notion of having a separate web site for mobile users, let alone an entire TLD for those web sites, is obsolete.


"He who seeks finds." - old proverb.


The article puts the blame on

> Never Update, Auto-Updates And Change Are Bad

as the source of the problem a couple of times.

This is a pretty common take from security professionals, and I wish they'd also call out the other side of the equation: organizations bundling their "feature" (i.e. enshittification) updates and security updates together. "Always keep your programs updated" is just not feasible advice anymore, given that upgrades are just as likely to be downgrades these days. For that to be realistic advice, we need more pressure on companies to separate out security-related updates and let people subscribe to only that channel.


In essence, you are agreeing that this is the root cause, you just seem to believe it's unrealistic to fix it.

I actually think it's viable to fix; I'm simply not sure anyone would pay for it. Basically, the old LTS model from Linux distributions, where a set of packages gets 5 or 10 years of guaranteed security updates (backported, maintaining backwards compatibility otherwise).

If one was to start a business of "give me a list of your FLOSS dependencies and I'll backport security fixes for you for X", what's X for you?


Aren't you just reinventing Red Hat?


That's the other way around (and also SuSE, Ubuntu LTS and even Debian stable): here are the things you can get security backports for vs here are the security backports for things you need.


Entertaining and informative read. Main takeaways for me from an end user POV:

- Be inherently less trusting of more obscure TLDs, where this kind of takeover seems more likely due to less care being taken during any switchover.

- Don't use any "TLS/SSL Certificate Authorities/resellers that support WHOIS-based ownership verification."


None of these are true for the MitM threat model that caused this whole investigation:

- If someone manages to MitM the communication between e.g. Digicert and the .com WHOIS server, then they can get a signed certificate from Digicert for the domain they want

- Whether you yourself used LE, Digicert or another provider doesn't have an impact, the attacker can still create such a certificate.

This is pretty worrying since as an end user you control none of these things.


Thank you for clarifying. That is indeed much more worrying.

If we were able to guarantee that NO certificate authorities used WHOIS, this vector would be cut off, right?

And is there not a way to, as a website visitor, tell who the certificate is from and reject/distrust ones from certain providers, e.g. Digicert? Edit: not sure if there's an extension for this, but seems to have been done before at browser level by Chrome: https://developers.google.com/search/blog/2018/04/distrust-o...


CAA records may help, depending on how the attacker uses the certificate. A CAA record allows you to instruct the browser that all certs for "*.tetha.example" should be signed by Lets Encrypt. Then - in theory - your browser could throw an alert if it encounters a DigiCert cert for "fun.tetha.example".

However, this depends strongly on how the attacker uses the cert. If they hijack your DNS to ensure "fun.tetha.example" goes to a record they control, they can also drop or modify the CAA record.

And sure, you could try to prevent that with long TTLs for the CAA record, but then the admin part of my head wonders: But what if you have to change cert providers really quickly? That could end up a mess.


CAA records are not addressed to end users, or to browsers or whatever - they are addressed to the Certificate Authority, hence their name.

The CAA record essentially says "I, the owner of this DNS name, hereby instruct you, the Certificate Authorities to only issue certificates for this name if they obey these rules"

It is valid, and perhaps even a good idea in some circumstances, to set the CAA record for a name you control to deny all issuance, and only update it to allow your preferred CA for a few minutes once a month while actively seeking new certificates for any which are close to expiring, then put it back to deny-all once the certificates were issued.

Using CAA allows Meta, for example, to insist only Digicert may issue for their famous domain name. Meta has a side deal with Digicert, which says when they get an order for whatever.facebook.com they call Meta's IT security regardless of whether the automation says that's all good and it can proceed, because (under the terms of that deal) Meta is specifically paying for this extra step so that there aren't any security "mistakes".

In fact Meta used to have the side deal but not the CAA record, and one day a contractor - not realising they're supposed to seek permission from above - just asked Let's Encrypt for a cert for this test site they were building and of course Let's Encrypt isn't subject to Digicert's agreement with Meta so they issued based on the contractor's control over this test site. Cue red faces for the appropriate people at Meta. When they were done being angry and confused they added the CAA record.

[Edited: Fix a place where I wrote Facebook but meant Meta]
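
For reference, CAA records are ordinary DNS records you can inspect yourself; a sketch (example.com is a placeholder and the record shown is illustrative):

    dig +short example.com CAA
    # A policy restricting issuance to a single CA looks like:
    #   0 issue "digicert.com"
    # Other tags include issuewild (wildcard certs) and iodef (violation reporting).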


Wow! Highly entertaining and scary at the same time. Sometimes I just wish I was clueless about all those open barn doors.


Wonderful article! Well done chaps.


I wish I had the time they have…


I mean it sounds like this was done in a few hours while hanging out at a con. I'm sure you can allocate a few hours to some fun.


I have to say - it wasn’t exactly “accidentally” that this occurred


As a reminder, RCE = remote code execution (it’s not defined in the article).

https://www.cloudflare.com/learning/security/what-is-remote-...


It is defined in the article the first time it is used in the text.

Maybe they read your comment and fixed it?


Perhaps so! I didn’t see it defined anywhere earlier.


These days people use "RCE" for local code execution.


I would clarify that as running code somewhere you don’t already control. The classic approach would be a malformed request letting them run code on someone else’s server, but this other pull-based approach also qualifies since it’s running code on a stranger’s computer.


That is so neat. Good job guys!


> auto updates are bad, turn them off

What? No.


I have written PHP for a living for the last 20 years and that eval just pains me to no end

    eval($var . '="' . str_replace('"', '\\\\"', $itm) . '";');
Why? Dear god why. Please stop.

PHP provides a built in escaper for this purpose

    eval($var . '=' . var_export($itm, true) . ';');
But even then you don't need eval here!

    ${$var} = $itm;
Is all you really needed... but really just use an array(map) if you want dynamic keys... don't use dynamically defined variables...


Couldn't agree more with this. In general, if you're writing eval you've already committed to doing something the wrong way.


I mean no disrespect to you, but this sort of thing is exactly the sort of mess I’ve come to expect in any randomly-selected bit of PHP code found in the wild.

It’s not that PHP somehow makes people write terrible code, I think it’s just the fact that it’s been out for so long and so many people have taken a crack at learning it. Plus, it seems that a lot of ingrained habits began back when PHP didn’t have many of its newer features and they just carried on, echoing through stack overflow posts forever.


JavaScript land fares little better.

IMO it’s because php and js are so easy to pick up for new programmers.

They are very forgiving, and that leads to… well… the way that php and js is…


The saving grace of JS is that the ecosystem had a reset when React came out; there's plenty of horrifying JQuery code littering the StackOverflow (and Experts Exchange!) landscape, but by the time React came around, Backbone and other projects had already started to shift the ecosystem away from "you're writing a script" to "you're writing an application," so someone searching "how do I do X react" was already a huge step up in best practices for new learners. I don't think PHP and its largest frameworks ever had a similar singular branding reset.


The other thing making JavaScript a little better in practice is that it very rarely was used on the back end until Node.js came along, and by then, we were fully in the AJAX world, where people were making AJAX requests using JavaScript in the browser to APIs on the back end. You were almost never directly querying a database with JavaScript, whereas SQL injection seems to be one of the most common issues with a lot of older PHP code written by inexperienced devs. Obviously SQL injection can and does happen in any language, but in WordPress-land, when your website designer who happens to be the owner's nephew writes garbage, they can cause a lot of damage. You probably would not give that person access to a Java back end.


Laravel, maybe. But not as much as React, or the other myriad JS frontend frameworks.

(to include the ones that appeared in the time I spent typing this post)


I'd argue that PHP7 is the closest thing PHP has had to a quality revolution. It fixed a zillion things, got rid of some footguns like legacy mysql, and in general behaved a lot more rationally.

If you were doing things right, by that point you were already using Laravel or Symfony or something, so the change didn't seem as revolutionary as it was, but that was the moment a lot of dumb string-concatenated query code (for example) no longer worked out of the box.


I've heard it said that one of the reasons Fortran has a reputation for bad code is this combination: lots of people who haven't had any education in best practices; and it's really easy in Fortran to write bad code.


Which is why that “you can write Fortran in any language” is such an epithet.


Most horrific code I've ever seen was a VB6 project written by a mainframe programmer... I didn't even know VB6 could do some of the things he did... and wish I never did. Not to mention variables like a, b, c, d .. aa, ab...


Code written by scientists is a sight to behold.


And they think that because they're scientists they can just do it. Very pragmatic, to be sure... but horrifying.


I'm sorry, I haven't encountered bare eval in years. Do you have an example? And even then it's actually not that easy to get RCE going with that.


Something like half of reported JavaScript vulnerabilities are "prototype pollution", because it's very common practice to write to object keys blindly, using objects as dictionaries, without considering the implications.

It's a very similar exploit.


arguably worse, since no eval is needed...


Yeah, same with the use of "filter_input_array", "htmlspecialchars", or how you should use PDO and prepare your statements with parameterized queries to prevent SQL injection, etc.


At least the node community is mostly allergic to using eval().

The main use I know of goes away with workers.


On a new job I stuck my foot in it because I argued something like this with a PHP fan who was adamant I was wrong.

Mind you this was more than ten years ago when PHP was fixing exploits left and right.

This dust-up resolved itself within 24 hours though, as I came in the next morning to find he was too busy to work on something else: he had to patch the PHP forum software he administered because it had been hacked overnight.

I did not gloat but I had trouble keeping my face entirely neutral.

Now I can’t read PHP for shit, but I tried to read the patch notes that closed the hole. As near as I could tell, the exact same anti-pattern appeared in several other places in the code.

I can’t touch PHP. I never could before and that cemented it.


PHP: an attack surface with a side effect of hosting blogs.


I mean, in this case the developer really went out of their way to write bad code. TBH it kind of looks like they wanted to introduce an RCE vulnerability, since variable variable assignment is well-known even to novice PHP developers (who would also be the only ones using that feature), and "eval is bad" is just as well known.

A developer who has the aptitude to write a whois client, but knows neither of those things? It just seems very unlikely.


Replace PHP by C or C++ in your comment, and then read it again.


Pretty sure C++ has 1/10 or fewer the all-time practitioners PHP has, so while I'm sure plenty of bad code is available out there, I still would not expect the situation to be as bad as PHP.


This is why PHP is mostly banned at bigCo


Pretty sure there's plenty of PHP at Amazon and Facebook (just with slightly different names)


There is no PHP at Amazon (at least not 2009-2016). It was evaluated before my time there and Perl Mason was chosen instead to replace C++. A bunch of that still appears to exist (many paths that start with gp/) but a lot was being rebuilt in various internal Java frameworks. I know AWS had some Rails apps that were being migrated to Java a decade ago, but I don’t think I ever encountered PHP (and I came in as a programmer primarily writing PHP).


Ok, my "pretty sure" turns out to be "not sure at all". Thank you for the refresher! I was thinking about Mason and somehow conflated Perl with PHP.

I left Amazon 2020. Had various collaborations with ecommerce (mainly around fulfillment) and there was plenty of Mason around.


I was probably one of the few who enjoyed Mason and still think the aggregator framework was great. We implemented a work-a-like in Java on Prime and it worked great there as well. It was effectively GraphQL before GraphQL, but local and remote, async, polymorphic, and extremely flexible. Not being in that world anymore I’m not sure if there is anything else quite like it, but there really should be.


I can *assure* you that php is expressly prohibited for use at Amazon.


Really? How come? What is the history regarding that? What is their reasoning? Does it apply to PHP >= 8?


To paraphrase: you can write PHP in any language. PHP is a negative bias for bigCo mostly because of the folkloric history of bad security practices by some PHP software developers.


By “folkloric history”, don’t you actually mean just “history”?


I guess they mean the stigma that arose based on the reality in the past.

So kind of both.


They fucked themselves and the rest of us moved on.

You can become a good person late in life and still be lonely because all your bridges are burned to the ground.


> folkloric

I think the word you’re looking for is “epic” or “legendary”


Isn’t Facebook one of the biggest?


Hack is not PHP (any longer)


Pretty much. PHP for banking software? For anything money related? Going to have a bad time.


Magento, OpenCart or WooCommerce are money related. All terrible but also very popular. But I guess they work, somehow.

What would you use to build and self-host an ecommerce site quickly and that is not a SaaS?


Have you ever heard of WooCommerce? It’s the market leader. It powers more stores than Shopify.


You're saying all big companies ban a whole language ecosystem because somebody on the internet used one function in that language in a knowingly unsafe manner, contrary to all established practices and warnings in the documentation? This is beyond laughable.


Laughable, but accurate.

Google for example does exactly this.


Does exactly what? Ban whole ecosystems because somebody on the internet used it wrong? Could you please provide any substantiation to this entirely unbelievable claim?


Great write-up - the tip of the iceberg on how fragile TLS/SSL is.

Let's add a few:

1. WHOIS isn't encrypted or signed, but is somehow suitable for verification (?)

2. DNS CAA records aren't protected by DNSSEC, as absence of a DNS record isn't sign-able (correction: NSEC is an optional DNSSEC extension)

3. DNS root & TLD servers are poorly protected against BGP hijacks (adding that DNSSEC is optional for CAs to verify)

4. Email, used for verification in this post, is also poorly protected against BGP hijacks.

I'm amazed we've lasted this long. It must be because if anyone abuses these issues, someone might wake up and care enough to fix them (:


Our industry needs to finish what it starts. Between IPv6, DNSSEC, SMTP TLS, SCTP/QUIC, etc all of these bedrock technologies feel like they're permanently stuck in a half completed implementation/migration. Like someone at your work had all these great ideas, started implementing them, then quit when they realized it would be too difficult to complete.


If you look at say 3G -> 4G -> 5G or Wifi, you see industry bodies of manufacturers, network providers, and middle vendors who both standardize and coordinate deployment schedules; at least at the high level of multi-year timelines. This is also backed by national and international RF spectrum regulators who want to ensure that there is the most efficient use of their scarce airwaves. Industry players who lag too much tend to lose business quite quickly.

Then if you look at the internet, there is a very uncoordinated collection of manufacturers and network providers, and standardization is driven in a more open manner that is good for transparency but is also prone to complexifying log-jams and heckler's vetoes. Where we see success, like the promotion of TLS improvements, it's largely because a small number of knowledgeable players - browsers in the case of TLS - agree to enforce improvements on the entire ecosystem. That in turn is driven by simple self-interest. Google, Apple, and Microsoft all have strong incentives to ensure that TLS remains secure; their ads and services revenue depends upon it.

But technologies like DNSSEC, IPv6, and QUIC all face a much harder road. To be effective they need a long chain of players to support the feature, and many of those players have active disincentives. If a home user's internet seems to work just fine, why be the first manufacturer to support, say, DNSSEC validation and deal with all of the increased support cases when it breaks, or the device returns when consumers perceive that it broke something? (And it will.)


IPv6 deployment is extra hard because we need almost every network in the world to get on board.

DNSSEC shouldn't be as bad, except for DNS resolvers and the software that builds them in. I think it's a bit worse than TLS adoption, partly because DNS allows recursive resolution and partly because DNS is applicable to a bit more than TLS was. But the big thing seems to be that there isn't a central authority like web browsers that can entirely force the issue. ... Maybe OS vendors could do it?

QUIC is an end-to-end protocol, so it should be deployable without every network operator buying in. That said, we probably do need a reduction in UDP blocking in some places. But otherwise, how can QUIC deployment be harder than TLS deployment? I think there just hasn't been incentive to force it everywhere.


No. IPv6 deployment is tricky (though accelerating), but not all that scary, because it's easy to run IPv4 and IPv6 alongside each other; virtually everybody running IPv6 does that.

The problem with DNSSEC is that deploying it breaks DNS. Anything that goes wrong with your DNSSEC configuration is going to knock your whole site off the Internet for a large fraction of Internet users.


I didn't say deploying IPv6 was scary.

Very aware that dual stack deployment is a thing. It's really the only sane way to do the migration for any sizable network, but obviously increases complexity vs a hopeful future of IPv6 only.

Good point about DNSSEC, but this is par for the course with good security technologies: "it could break things" used to be an excuse for supporting plaintext HTTP as a fallback from HTTPS/TLS. Of course, having an insecure fallback means downgrade attacks are possible and often easy, which defeats a lot of the purpose of the newer protocols.


I don't think the failure modes for DNSSEC really are par for the course for security technologies, just for what it's worth; I think DNSSEC's are distinctively awful. HPKP had similar problems, and they killed HPKP.


Plus IPv6 has significant downsides (more complex, harder to understand, more obscure failure modes, etc…), so the actual cost of moving is the transition cost + total downside costs + extra fears of unknown unknowns biting you in the future.


Definitely there are fears of unknowns to deal with. And generally some businesses won't want to pay the switching costs over something perceived to be working.

IPv6 is simpler in a lot of ways than IPv4: a simpler fixed header (options moved to extension headers) and no in-network fragmentation by routers. What makes it more complicated? What makes the failure modes more obscure? Is it just that dual stack is more complex to operate?


Well you can try listing the top dozen or so for both and see the difference?


AFAIK, in the case of IPv6 it's not even that: there's still the open drama of the peering agreement between Cogent and Hurricane Electric.


In my 25+ years in this industry, there's one thing I've learned: starting something isn't all that difficult; however, shutting something down is nearly impossible. For example, brilliant people put a lot of time and effort into IPv6. But that time and effort is nothing compared to what it's gonna take to completely shut down IPv4. And I've dealt with this throughout my entire career: "We can't shut down that Apache v1.3 server because a single client used it once 6 years ago!"


But when you shut it down it feels so nice. I still have fuzzy feelings when I remember shutting down a XenServer cluster (based on CentOS 5) forever


> Our industry needs to finish what it starts.

"Our industry" is a pile of snakes that abhor the idea of collaboration on common technologies they don't get to extract rents from. ofc things are they way they are.


Let's not fool ourselves by saying we're purely profit driven. Our industry argues about code style (:


Our industry does not argue about code style. There were a few distinct subcultures which were appropriated by the industry who used to argue about code style, lisp-1 vs lisp-2, vim vs emacs, amiga vs apple, single pass vs multi pass compilers, Masters of Deception vs Legion of Doom and the list goes on, depending on the subculture.

The industry is profit driven.


Do you use tabs or spaces? Just joking, but:

The point is that our industry has a lot of opinionated individuals that tend to disagree on fundamentals, implementations, designs, etc., for good reasons! That's why we have thousands of frameworks, hundreds of databases, hundreds of programming languages, etc. Not everything our industry does is profit driven, or even rational.


FWIW, all my toy languages consider U+0009 HORIZONTAL TABULATION in a source file to be an invalid character, like any other control character except for U+000A LINE FEED (and also U+000D CARRIAGE RETURN but only when immediately before a LINE FEED).


I’d be a python programmer now if they had done this. It’s such an egregiously ridiculous foot gun that I can’t stand it.


> > Our industry argues about code style (:

> Our industry does not argue about code style.

QED


Our industry does not argue about arguing about code style.


Our industry doesn't always make Raymond Carver title references, but when it does, what we talk about when we talk about Raymond Carver title references usually is an oblique way of bringing up the thin and ultimately porous line between metadiscourse and discourse.


I'm pretty sure this is QEF.


> Like someone at your work had all these great ideas, started implementing them, then quit when they realized it would be too difficult to complete.

The problem is, in many of these fields actual real-world politics come into play - you got governments not wanting to lose the capability to do DNS censorship or other forms of sabotage, you got piss poor countries barely managing to keep the faintest of lights on, you got ISPs with systems that have grown over literal decades where any kind of major breaking change would require investments into rearchitecture larger than the company is worth, you got government regulations mandating stuff like all communications of staff be logged (e.g. banking/finance) which is made drastically more complex if TLS cannot be intercepted or where interceptor solutions must be certified making updates to them about as slow as molasses...


Considering we have 3 major tech companies (Microsoft/Apple/Google) controlling 90+% of user devices and browsers, I believe this is more solvable than we'd like to admit.


Browsers are just one tiny piece of the fossilization issue. We got countless vendors of networking gear, we got clouds (just how many AWS, Azure and GCP services are capable of running IPv6 only, or how many of these clouds can actually run IPv6 dual-stack in production grade?), we got even more vendors of interception middlebox gear (from reverse proxies and load balancers, SSL breaker proxies over virus scanners for web and mail to captive portal boxes for public wifi networks), we got a shitload of phone telco gear of which probably a lot has long since expired maintenance and is barely chugging along.


Ok. You added OEMs to the list, but then just named the same three dominant players as clouds. Last I checked, every device on the planet supports IPv6, if not those other protocols. Everything from the cheapest home WiFi router, to every Layer 3 switch sold in the last 20-years.

I think this is a 20-year old argument, and it’s largely irrelevant in 2024.


> I think this is a 20-year old argument, and it’s largely irrelevant in 2024.

It's not irrelevant - AWS lacks support for example in EKS or in ELB target groups, where it's actually vital [1]. GCE also lacks IPv6 for some services and you gotta pay extra [2]. Azure doesn't support IPv6-only at all, a fair few services don't support IPv6 [3].

The state of IPv6 is bloody ridiculous.

[1] https://docs.aws.amazon.com/vpc/latest/userguide/aws-ipv6-su...

[2] https://cloud.google.com/vpc/docs/ipv6-support?hl=de

[3] https://learn.microsoft.com/en-us/azure/virtual-network/ip-s...


Plenty doesn’t support IPv6.


Those companies have nothing to do with my ISP router or modem


Doesn't every place have a collection of ideas that are half implemented? I know I often choose between finishing somebody else's project or proving we don't need it and decommissioning it.

I'm convinced it's just human nature to work on something while it is interesting and move on. What is the motivation to actually finish?

Why would the technologies that should hold up the Internet itself be any different?


I was weeks away from turning off someone’s giant pile of spaghetti code and replacing it with about fifty lines of code when I got laid off.

I bet they never finished it, since the perpetrators are half the remaining team.


While that's true, it dismisses the large body of work that has been completed. The technologies the GP comment mentions are complete in the sense that they work, but the deployment is only partial. Herding cats on a global scale, in most cases. It also ignores the side-effect benefit of completing the interesting part: other efforts benefit from the lessons learned by that disrupted effort, even if the deployment fails because it turns out nobody wanted it. And sometimes it's just a matter of time and getting enough large stakeholders excited, or at least convinced the cost of migration is worth it.

All that said, even the sense of completing or finishing a thing only really happens in small and limited-scope things, and in that sense it's very much human nature, yeah. You can see this in creative works, too. It's rarely "finished" but at some point it's called done.


IPv6, instead of being branded as a new implementation, should probably have been presented as an extension of IPv4: e.g., some previously reserved IPv4 address would mean that the packet is really IPv6, with the value in the previously reserved fields, etc. That would be a kludge, harder to implement, yet much easier for the wider Internet to embrace. Like it is easier to feed oatmeal to a toddler by presenting it as some magic food :)


It would have exactly the same deployment problems, but waste more bytes in every packet header. Proposals like this have been considered and rejected.

How is checking if, say, the source address is 255.255.255.255 to trigger special processing, any easier than checking if the version number is 6? If you're thinking about passing IPv6 packets through an IPv4 section of the network, that can already be achieved easily with tunneling. Note that ISPs already do, and always have done, transparent tunneling to pass IPv6 packets through IPv4-only sections of their network, and vice versa, at no cost to you.

Edit: And if you want to put the addresses of translation gateways into the IPv4 source and destination fields, that is literally just tunneling.


Or got fired/laid off and the project languished?


obligatory https://xkcd.com/927/

Honestly: we're in this situation because we keep trying to band-aid solutions onto ancient protocols that were never designed to be secure. (I'm talking about you DNS.) Given xkcd's wisdom though, I'm not sure if this is easily solvable.


Can we all agree to not link that comic when nobody is suggesting a new standard, or when the list of existing standards is zero to two long? It's not obligatory to link it just because the word "standard" showed up.

I think that covers everything in that list. For example, trying to go from IPv4 to IPv6 is a totally different kind of problem from the one in the comic.


The point is that, ironically, new standards may have been a better option.

Bolting on extensions to existing protocols not designed to be secure, while improving the situation, has been so far unable to address all of the security concerns leaving major gaps. It's just a fact.


dns should not have to be secure, it should be regulated as a public utility with 3rd-party quality control and all the whistles.

only then can it be trustworthy, fast and free/accessible


There is nothing fundamentally preventing us from securing DNS. It is not the most complicated protocol believe it or not and is extensible enough for us to secure it. Moreover a different name lookup protocol would look very similar to DNS. If you don’t quite understand what DNS does and how it works the idea of making it a government protected public service may appeal to you but that isn’t actually how it works. It’s only slightly hyperbolic to say that you want XML to be a public utility.

On the other hand things like SMTP truly are ancient. They were designed to do things that just aren’t a thing today.


If my DNS can be MITM'd, and is thus insecure, it is not trustworthy.


This sort of all-or-nothing thinking isn't helpful. DNS points you to a server, TLS certificates help you trust that you've arrived at the right place. It's not perfect, but we build very trustworthy systems on this foundation.


But DNS is all-or-nothing.

If you can't trust DNS, you can't trust TLS or anything downstream of it.

Even banks are not bothering with EV certificates any more, since browsers removed the indicator (for probably-good reasons). DV certificate issuance depends on trustworthy DNS.

Internet security is "good enough" for consumers, most of the time. That's "adequately trustworthy", but it's not "very trustworthy".


Bank websites like chase.com and hsbc.com and web services like google.com, amazon.com, and amazonaws.com intentionally avoid DNSSEC. I wouldn't consider those sites less than "very trustworthy" but my point is that "adequately trustworthy" is the goal. All-or-nothing thinking isn't how we build and secure systems.


I am definitely not arguing in favor of DNSSEC.

However, I don't think it's reasonable to call DNS, as a system, "very trustworthy".

"Well-secured" by active effort, and consequently "adequately trustworthy" for consumer ecommerce, sure.

But DNS is a systemic weak link in the chain of trust, and must be treated with extra caution for "actually secure" systems.

(E.g., for TLS and where possible, the standard way to remove the trust dependency on DNS is certificate pinning. This is common practice, because DNS is systemically not trustworthy!)


Is certificate pinning common? On the web we used to have HPKP, but that's obsolete and I didn't think it was replaced. I know pinning is common in mobile apps, but I've generally heard that's more to prevent end-user tampering than any actual distrust of the CAs/DNS.

I think your "well-secured" comment is saying the same thing I am, with some disagreement about "adequate" vs "very". I don't spend any time worrying that my API calls to AWS or online banking transactions are insecure due to lack of DNSSEC, so the DNS+CA system feels "very" trustworthy to me, even outside ecommerce. The difference between "very" and "adequate" is sort of a moot point anyway: you're not getting extra points for superfluous security controls. There's lots of other things I worry about, though, because attackers are actually focusing their efforts there.


I agree that the semantics of "adequate" and "very" are moot.

As always, it ultimately depends on your threat profile, real or imagined.

Re: certificate pinning, it's common practice in the financial industry at least. It mitigates a few risks, of which I'd rate DNS compromise as more likely than a rogue CA or a persistent BGP hijack.


Certificate pinning is more or less dead. There are mobile apps that still do it, but most security engineers would say that's a mistake. WebPKI integrity is largely driven through CT now.


Standards evolve for good reasons. That's just a comic.


The comic is about re-inventing the wheel. What you propose "standards evolving" would be the opposite in spirit (and is what has happened with DNSSEC, RPKI, etc)


> 2. DNS CAA records aren't protected by DNSSEC, as absence of a DNS record isn't sign-able.

NSEC does this.

> An NSEC record can be used to say: “there are no subdomains between subdomains X and subdomain Y.”


You're correct - noting that Let's Encrypt supports DNSSEC/NSEC fully.

Unfortunately though, the entire PKI ecosystem is tainted if other CAs do not share the same security posture.


Tainted seems a little strong, but I think you're right: there's nothing in the CAB Baseline Requirements [1] that requires DNSSEC use by CAs. I wouldn't push for DNSSEC to be required, though, as it's been so sparsely adopted. Any security benefit would be marginal. Second-level domain usage has been decreasing (both in percentage and absolute number) since mid-2023 [2]. We need to look past DNSSEC.

[1] https://cabforum.org/working-groups/server/baseline-requirem...

[2] https://www.verisign.com/en_US/company-information/verisign-...


I agree that DNSSEC is not the answer and has not lived up to expectations whatsoever, but what else is there to verify ownership of a domain? Email- broken. WHOIS- broken.

Let's convince all registrars to implement a new standard? ouch.


I'm a fan of the existing standards for DNS (§3.2.2.4.7) and IP address (§3.2.2.4.8) verification. These use multiple network perspectives as a way of reducing risk of network-level attacks. Paired with certificate transparency (and monitoring services). It's not perfect, but that isn't the goal.


BGP hijacks unfortunately completely destroy that. RPKI is still extremely immature (despite what companies say) and it is still trivial to BGP hijack if you know what you're doing. If you are able to announce a more specific prefix (highly likely unless the target has a strong security competency and their own network), you will receive 100% of the traffic.

At that point, it doesn't matter how many vantage points you verify from: all traffic goes to your hijack. It only takes a few seconds for you to verify a certificate, and then you can drop your BGP hijack and pretend nothing happened.

Thankfully there are initiatives to detect and alert BGP hijacks, but again, if your organization does not have a strong security competency, you have no knowledge to prevent nor even know about these attacks.


> 1. WHOIS isn't encrypted or signed, but is somehow suitable for verification (?)

HTTP-based ACME verification also uses unencrypted port-80 HTTP. Similar for DNS-based verification.


If it used HTTPS you would have a bootstrapping problem.


> HTTP-based ACME verification also uses unencrypted port-80 HTTP

I mean, they need to bootstrap the verification somehow no? You cannot upgrade the first time you request a challenge.


100% - another for the BGP hijack!


The current CAB Forum Baseline Requirements call for "Multi-Perspective Issuance Corroboration" [1] i.e. make sure the DNS or HTTP challenge looks the same from several different data centres in different countries. By the end of 2026, CAs will validate from 5 different data centres.

This should make getting a cert via BGP hijack very difficult.

[1] https://github.com/cabforum/servercert/blob/main/docs/BR.md#...


See my post above about BGP hijacks: https://news.ycombinator.com/item?id=41511582 - They're way easier than you think.


It is hypothesised to make this more difficult but it's unclear how effective it is in practice. I wouldn't expect it to make a significant difference. We've been here before.


> It must be because if anyone abuses these issues, someone might wake up and care enough to fix them

If anyone knows they are being abused, anyway. I conclude that someone may be abusing them, but those doing so try to keep it unknown that they have done so, to preserve their access to the vulnerability.


Certificate Transparency exists to catch abuse like this. [1]

Additionally, Google has pinned their certificates in Chrome and will alert via Certificate Transparency if unexpected certificates are found. [2]

It is unlikely this has been abused without anyone noticing. With that said, it definitely can be, there is a window of time before it is noticed to cause damage, and there would be fallout and a "call to action" afterwards as a result. If only someone said something.

[1] https://certificate.transparency.dev

[2] https://github.com/chromium/chromium/blob/master/net/http/tr...


It’s like the crime numbers. If you’re good enough at embezzling nobody knows you embezzled. So what’s the real crime numbers? Nobody knows. And anyone who has an informed guess isn’t saying.

A big company might discover millions are missing years after the fact and back date reports. But nobody is ever going to record those office supplies.


Didn't Jon Postel do something like this, once?

It was long ago, and I don't remember the details, but I do remember a lot of people having shit hemorrhages.


For reasons not important here, I purchase my SSL certificates and barely have any legitimizing business documents. If Dun & Bradstreet calls, I hang up...

It took me 3 years of getting SSL certs from the same company through a convoluted process before I tried a different company. My domain has been with the same registrar since private citizens could register DNS names. That relationship meant nothing when trying to prove that I'm me and I own the domain name.

I went back to the original company because I could verify myself through their process.

My only point is that human relationships are the best form of verifying integrity. I think this provides everyone the opportunity to gain trust and the ability to prejudge people based on association alone.


Human relationships also open you up to social engineering attacks. Unless they’re face-to-face, in person, with someone who remembers what you actually look like. Which is rare these days.


That is my point. We need to put value on the face to face relationships and extend trust outward from our personal relationships.

This sort of trust is only as strong as its weakest link, but each individual can choose how far to extend their own trust.


This is what the Web of Trust does but,

> This sort of trust is only as strong as its weakest link, but each individual can choose how far to extend their own trust.

is exactly why I prefer PKI to the WoT. If you try to extend the WoT to the whole Internet, you will eventually end up having to trust multiple people you have never met to properly manage their keys and correctly verify the identity of other people. Identity verification is in particular an issue: how do you verify the identity of someone you don't know? How many of us know how to spot a fake ID card? Additionally, some of them will be people participating in the Web of Trust just because they heard that encryption is cool, but without really knowing what they are doing.

In the end, I prefer CAs. Sure, they're not perfect and there have been serious security incidents in the past. But at least they give me some confidence that they employ people with a Cyber Security background, not some random person that just read the PGP documentation (or similar).

PS: there's still some merit to your comment. I think that the WoT (but I don't know for sure) was based on the six degrees of separation theory. So, in theory, you would only have to certify the identity of people you already know, and be able to reach someone you don't know through a relatively short chain of people where each hop knows the next hop very well. But in practice, PGP ended up needing key signing parties, where people that never met before were signing each other's keys. Maybe a reboot of the WoT with something more user-friendly than PGP could have a chance, but I have some doubts.


I’m fine with PKIs; presumably in America the Department of Education could act as a CA.


This is such a good point. We rely way too much on technical solutions.

A better approach is to have hyperlocal offices where you can go to do business. Is this less “efficient”? Yes but when the proceeds of efficiency go to shareholders anyway it doesn’t really matter.


It is only efficient based on particular metrics. Change the metrics and the efficiency changes.


>Is this less “efficient”? Yes but when the proceeds of efficiency go to shareholders anyway it doesn’t really matter.

I agree with this but that means you need to regulate it. Even banks nowadays are purposely understaffing themselves and closing early because "what the heck are you going to do about it? Go to a different bank? They're closed at 4pm too!"


The regulation needs to be focused on the validity of the identity chain mechanism but not on individuals. Multiple human interactions as well as institutional relationships could be leveraged depending on needs.

The earliest banking was done with letters of introduction. That is why banking families had early international success. They had a familial trust and verification system.


It's used for verification because it's cheap, not because it's good. Why would you expect anyone to care enough to fix it?

If we really wanted verification we would still be manually verifying the owners of domains. Highly effective but expensive.


None of these relate to TLS/SSL - that's the wrong level of abstraction: they relate to fragility of the roots of trust on which the registration authorities for Internet PKI depend.


As long as TLS/SSL depends on Internet PKI as it is, it is flawed. I guess there's always Private PKI, but that's if you're not interested in the internet (^:


TLS doesn't care what's in the certificate even if you use certificate authentication (which you don't have to for either side). Photo of your 10 metre swimming certificate awarded when you were seven? Fine. MP3 of your cat "singing along" with a pop song? Also fine.

Now, the application using TLS probably cares, and most Internet applications want an X.509 certificate, conforming more or less with PKIX and typically from the Web PKI. But TLS doesn't care about those details.


I would say that TLS/SSL doesn't depend on Internet PKI - browsers (etc) depend on Internet PKI in combination with TLS/SSL.


> 4. Email, used for verification in this post, is also poorly protected against BGP hijacks.

Do mail servers even verify TLS certs these days instead of just ignoring them?


>The first bug that our retrospective found was CVE-2015-5243. This is a monster of a bug, in which the prolific phpWhois library simply executes data obtained from the WHOIS server via the PHP ‘eval’ function, allowing instant RCE from any malicious WHOIS server.

I don't want to live on this planet anymore


As has been demonstrated many, many (many, many (many many many many many...)) times: there is no such thing as computer security. If you have data on a computer that is connected to the Internet, you should consider that data semi-public. If you put data on someone else's computer, you should consider that data fully public.

Our computer security analogies are modeled around securing a home from burglars, but the actual threat model is the ocean surging 30 feet onto our beachfront community. The ocean will find the holes, no matter how small. We are not prepared for this.


> As has been demonstrated many, many (many, many (many many many many many...)) times: there is no such thing as computer security.

Of course there is, and things are only getting more secure. Just because a lot of insecurity exists doesn't mean computer security isn't possible.


It's a matter of opinion, but no, I disagree. People are building new software all the time. It all has bugs. It will always have bugs. The only way to build secure software is to increase its cost by a factor of 100 or more (think medical and aviation software). No one is going to accept that.

Computer security is impossible at the prices we can afford. That doesn't mean we can't use computers, but it does mean we need to assess the threats appropriately. I don't think most people do.


It's not a matter of opinion at all. You can disagree but you can disagree with the earth being a sphere also.

> People are building new software all the time. It all has bugs. It will always have bugs.

No. Most bugs these days are due to legacy decisions where security was not an issue. We are making advances in both chip and software security. Things are already vastly more secure than they were 20 years ago.

20 years from now, security will be a lot closer to being a solved problem.

> The only way to build secure software is to increase its cost by a factor of 100 or more (think medical and aviation software). No one is going to accept that.

What are you basing that cost on?

> Computer security is impossible at the prices we can afford.

No, it really isn't. There's a reason some organizations have never been hacked and likely never will be. Largely because they have competent people implementing security that very much exists.


> Our computer security analogies are modeled around securing a home from burglars

Well, no home is burglar-proof either. Just like with computer security, we define, often just implicitly, a threat model and then we decide which kind of security measures we use to protect our homes. But a determined burglar could still find a way in. And here we get to a classic security consideration: if the effort required to break your security is greater than the benefit obtained from doing so, you're adequately protected from most threats.


I agree, my point is we need to be using the correct threat model when thinking about those risks. You might feel comfortable storing your unreplaceable valuables in a house that is reasonably secure against burglars, even if it's not perfectly secure. But you'd feel otherwise about an oceanfront property regularly facing 30 foot storm surges. I'm saying the latter is the correct frame of mind to be in when thinking about whether to put data onto an Internet-connected computer.

It's no huge loss if the sea takes all the cat photos off my phone. But if you're a hospital or civil services admin hooking up your operation to the Internet, you gotta be prepared for it all to go out to sea one day, because it will. Is that worth the gains?


And I think there's some cognitive problem that prevents people from understanding that "the effort required to break your security" has been rapidly trending towards zero. This makes the equation effectively useless.

(Possibly even negative, when people go out and deliberately install apps that, by backdoor or by design, hoover up their data, etc. And when the mainstream OSes are disincentivized to prevent this because it's their business model too.)

There was a time, not very long ago, when I could just tcpdump my cable-modem interface and know what every single packet was. The occasional scan or probe stuck out like a sore thumb. Today I'd be drinking from such a firehose of scans I don't even have words for it. It's not even beachfront property, we live in a damn submarine.


by this logic, every picture you'll ever take with your phone would be considered semi-public as phones are Internet connected.

While I wouldn't have too much of an issue with that, I'm pretty sure I'm in the minority there.



Do you use a bank account? Or do you still trade using only the shells you can carry in your arms? Perhaps networked computers are secure enough to be useful after all.


I never claimed the Internet isn't useful. I just think people don't recognize how vulnerable computers are to attack. Search this very incomplete list for "bank": https://en.wikipedia.org/wiki/List_of_data_breaches


Always look on the bright side of Life.

The non-sensicalness of it is just a phase. Remember the Tower of Babel didn't stop humanity.

Here is a link that was posted a few days ago regarding how great things are compared to 200 years ago. Ice cream has only become a common experience in the last 200 years.

https://ourworldindata.org/a-history-of-global-living-condit...


Someone may have posted a link to it a few days ago, but the link is from 2016 with a partial update last February.


The fact they're using `eval()` to execute variable assignment... They could've just used the WTF-feature in PHP with double dollar signs. $$var = $itm; would've been equivalent to their eval statement, but with less code and no RCE.
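
A sketch of the difference (variable and field names are hypothetical, mirroring the pattern described in the write-up):

    <?php
    // What the library effectively did: build a PHP statement out of
    // data influenced by the WHOIS response and eval() it.
    $var = 'some_field_name';
    $itm = 'value from the WHOIS response';
    // eval("\$$var = \$itm;");   // RCE the moment $var is attacker-controlled

    // The "WTF but no code execution" equivalent: a variable variable.
    $$var = $itm;                 // creates $some_field_name

    // The boring, actually sane equivalent: just use an array.
    $fields[$var] = $itm;

    var_dump($some_field_name, $fields);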


The fact PHP is used for any critical web infrastructure is concerning. I used PHP professionally years ago and don't think it's that awful but certainly not something I'd consider for important systems.


Wouldn't "eval" in any language result in RCE? Isn't that the point of eval, to execute the given string command?


Fully compiled languages don't even have an eval at all.


Not with that attitude

Start shipping the compiler with your code for infrastructure-agnostic RCEs


When you turn pro you call it security software and add it to the kernel.


No, but they have system or the like, which is effectively the same, just being evaluated by the shell. https://man7.org/linux/man-pages/man3/system.3.html


And thanks to the magic of "shoving strings from the Internet into a command line", poof, RCE! It bit GitLab twice


What incident are you referring to?


https://gitlab.com/gitlab-org/gitlab/-/issues/327121 is the first one, and I'm having trouble locating the second (possibly due to the search pollution from the first one), but there are a bunch of "Exiftool has been updated to version [0-9.]+ in order to mitigate security issues" style lines in their security releases feed, so it's possible they were bitten by upstream Exiftool CVEs

Anyway, turns out that shelling out to an external binary fed with bytes from the Internet is good fun


a) system doesn't let you modify the state of the running process so it doesn't attract abuse like the example here. It's still a bad function but calling it effectively the same is absurd - the scope for "clever" usage of it is much much lower.

b) It's a legacy misfeature that I hope new compiled languages don't copy. There are much, much better interfaces for running processes that don't rely on an intermediate shell.

c) Shell escaping is much more stable than some hipster language like PHP where you'd need to update your escaping for new language changes all the time.
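
To illustrate (b) in the language under discussion: a minimal sketch, assuming PHP >= 7.4, where passing the command as an array makes proc_open() run the binary directly with no intermediate shell (the ping invocation and the hostile input are made up for illustration):

    <?php
    $untrusted = '93.184.216.34; rm -rf /';       // hypothetical hostile input

    $proc = proc_open(
        ['ping', '-c', '1', $untrusted],          // argv-style, not a shell string
        [1 => ['pipe', 'w'], 2 => ['pipe', 'w']], // capture stdout and stderr
        $pipes
    );

    echo stream_get_contents($pipes[1]);
    proc_close($proc);   // ping just fails on the bogus "host";
                         // the "; rm -rf /" is never seen by any shell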


You can build an eval for a compiled language, absolutely. You can embed an interpreter, for example, or build one using closures. There's entire books on this, like LiSP in Small Pieces.


I'm curious about some specifics of why you wouldn't use PHP for _critical_ web infrastructure?


https://duckduckgo.com/?q=hash+site:reddit.com/r/lolphp

https://duckduckgo.com/?q=crypt+site:reddit.com/r/lolphp

>crc32($str) and hash("crc32",$str) use different algorithms ..

>Password_verify() always returns true with some hash

>md5('240610708') == md5('QNKCDZO')

>crypt() on failure: return <13 characters of garbage

> strcmp() will return 0 on error, can be used to bypass authentication

> crc32 produces a negative signed int on 32bit machines but positive on 64bit machines

>5.3.7 Fails unit test, released anyway

The takeaway from these titles is not the problems themselves but the pattern of failure and the issue of trusting the tool itself. Other than that if you've used php enough yourself you will absolutely find frustration in the standard library

If you're looking for something more exhaustive there's the certified hood classic "PHP: A fractal of bad design" article as well that goes through ~~300+~~ 269 problems the language had and/or still has.

https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

Though most of it has been fixed since 2012, there's only so much you can do before the good programmers in your community (and job market) just leave the language. What's left is what's left.
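
To make the md5 item above concrete: both digests happen to be of the form "0e" followed by digits, so PHP's loose comparison parses them as floats (0 * 10^n) and calls them equal.

    <?php
    $a = md5('240610708');
    $b = md5('QNKCDZO');

    var_dump($a == $b);            // bool(true)  -- the lolphp
    var_dump($a === $b);           // bool(false) -- strict comparison
    var_dump(hash_equals($a, $b)); // bool(false), and timing-safe as a bonus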


People keep saying "oh it's php 5.3 and before that are bad, things are much better now", but ...


It's very easy to make PHP safe, certainly now that we've passed the 7 mark and we have internal ASTs. Even when using eval, it's beyond trivial to not make gross mistakes.


Any language can be insecure. There’s nothing inherently bad about PHP, other than it’s the lowest-hanging fruit of CGI languages and has some less-than-ideal design decisions.


Don't just sweep the "less-than-ideal design decisions" under the rug


Modern PHP is about as solid as comparable languages. Its two biggest problems are:

Lingering bad reputation, from the bad old days

Minimal barrier to entry - which both makes it a go-to for people who should not be writing production code in any language, and encourages many higher-skill folks to look down on it


Have you ever witnessed a house being built? Everywhere is the same :) At least in our industry these issues are generally not life-threatening.


that seems like a bigger lift than just deciding to help fix the bug

“be the change” or some such


This is a fantastic exploit and I am appalled that CAs are still trying to use whois for this kind of thing. I expected the rise of the whois privacy services and privacy legislation would have made whois mostly useless for CAs years ago.

<< maintainers of WHOIS tooling are reluctant to scrape such a textual list at runtime, and so it has become the norm to simply hardcode server addresses, populating them at development time by referring to IANA’s list manually. Since the WHOIS server addresses change so infrequently, this is usually an acceptable solution >>

This is the approach taken by whois on Debian.

Years ago I did some hacking on FreeBSD’s whois client, and its approach is to have as little built-in hardcoded knowledge as possible, and instead follow whois referrals. These are only de-facto semi-standard, i.e. they aren’t part of the protocol spec, but most whois servers provide referrals that are fairly easy to parse, and the number of exceptions and workarounds is easier to manage than a huge hardcoded list.

FreeBSD’s whois starts from IANA’s whois server, which is one of the more helpful ones, and it basically solves the problem of finding TLD whois servers. Most of the pain comes from dealing with whois for IP addresses, because some of the RIRs are bad at referrals. There are some issues with weird behaviour from some TLD whois servers, but that’s relatively minor in comparison.
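
A rough sketch of that referral-chasing approach (heavily simplified; it assumes IANA's server answers a full domain query with the TLD record and its "refer:" line, which is roughly what the FreeBSD client relies on, minus all the quirk handling):

    <?php
    // Ask a WHOIS server (TCP port 43) about a name and return the raw response.
    function whois_query(string $server, string $query): string
    {
        $sock = fsockopen($server, 43, $errno, $errstr, 10);
        if ($sock === false) {
            throw new RuntimeException("connect to $server failed: $errstr");
        }
        fwrite($sock, $query . "\r\n");
        $response = stream_get_contents($sock);
        fclose($sock);
        return $response;
    }

    // Start at IANA and follow one "refer:" hop instead of hardcoding servers.
    $domain = 'example.mobi';
    $iana   = whois_query('whois.iana.org', $domain);

    if (preg_match('/^refer:\s*(\S+)/mi', $iana, $m)) {
        echo whois_query($m[1], $domain);   // e.g. whois.nic.mobi these days
    } else {
        echo $iana;                         // no referral found; show what we got
    }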


Today the Certificate Authorities in the Web PKI use the "Ten Blessed Methods" (there are in fact no longer ten of them, but that's what I'm going to keep calling them).

[[ Edited to add: I remembered last time I mentioned these some people got confused. The requirement is a CA must use at least one of the blessed methods, there used to be "Any other method" basically they could do whatever they wanted and that "method" was of course abused beyond belief which is why it's gone. They can do whatever they like in addition, and there are also some (largely not relevant) checks which are always mandatory, but these "blessed methods" are the core of what prevents you from getting a certificate for say the New York Times websites ]]

https://cabforum.org/working-groups/server/baseline-requirem...

The Ten Blessed Methods are listed in section 3.2.2.4 of the Baseline Requirements, there are currently twenty sub-sections corresponding to what the Forum considers distinct methods, the newer ones unsurprisingly are later in the list, although many are retired (no longer permitted for use)

3.2.2.4.2 "Email, Fax, SMS, or Postal Mail to Domain Contact" specifically says to check whois as does 3.2.2.4.15 "Phone Contact with Domain Contact".

For the commercial CAs this is all bad for their bottom line, because a willing customer can't buy their product due to some bureaucratic problem. They want to give you $50, but they can't because some IT bloke needs to update a field in some software. When they ask the IT guy "Hey, can you update this field so I can buy a $50 certificate" the IT guy is going to say "Oh, just use Let's Encrypt" and you don't get $50. So you want to make it as easy as possible to give you $50. Bad for the Internet's Security? Who cares.

ISRG (the Let's Encrypt CA) of course doesn't care about $$$ because the certificates do not cost money, only the provisioning infrastructure costs money, so they only implement 3.2.2.4.7, 3.2.2.4.19 and 3.2.2.4.20 IIRC because those make sense to automate and have reasonable security assuming no bugs.


Wouldn't it be easy for those software project, or a single central authority, to expose that WHOIS list through DNS?

    mobi.whoisserverlist.info. IN CNAME whois.nic.mobi.
    org.whoisserverlist.info.  IN CNAME whois.publicinterestregistry.org.
The presence of a referral mechanism inside the WHOIS protocol strikes me as a little odd.


I believe the original reason for referrals was related to the breakup of the Network Solutions DNS monopoly. This led to the split between TLD registries (who run the DNS servers) and registrars (who sell domain names). To enforce the split for the big TLDs .com, .net, .org, the registration database was also split so that Network Solutions could not directly know the customer who registered each domain, but only the registrar who sold it. This was known as the “thin registry” model. From the whois perspective, this meant that when you asked about example.com, the Network Solutions whois server would only provide information about the registrar; the whois client could follow the referral to get information about the actual registrant from the registrar. Basically all the other TLDs have a “thick registry” where the TLD operator has all the registration details so there’s no need for whois referrals to registrars.

As a result, a whois client needs referral support. The top level IANA whois server has good referral data, so there isn’t much to gain from trying to bypass it.



Yes that works too. Thanks!

Though this relies on registrars publishing their own, and some don't. I meant that some other authority could publish them all, if they are known.

edit: It seems like {tld}.whois-servers.net is exactly that: CNAMEs to whois servers. Your link mentioned it. Thanks again.
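
If you wanted to consume that from code, it's nearly a one-liner; a small sketch (whois-servers.net is a third-party convenience service, so treat the answer as untrusted input like anything else):

    <?php
    // Look up the CNAME mapping a TLD to its WHOIS server via the service above.
    $tld     = 'mobi';
    $records = dns_get_record($tld . '.whois-servers.net', DNS_CNAME);

    if (!empty($records[0]['target'])) {
        echo "WHOIS server for .$tld: " . $records[0]['target'] . PHP_EOL;
    } else {
        echo "No CNAME published for .$tld" . PHP_EOL;
    }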


TLDR

> While this has been interesting to document and research, we are a little exasperated. Something-something-hopefully-an-LLM-will-solve-all-of-these-problems-something-something.


Oh no not .mobi!



