Hacker News

Can someone explain why this is leaky and how it can be exploited by malicious actors?



It's leaky because it's globally accessible and provides information that isn't otherwise readily apparent.

There is no guarantee that an exposed .gitignore (or other exposed files, like .htaccess, robots.txt, etc.) will be exploitable, but they aid in the discovery process and may help adversaries uncover exploitable vulnerabilities they might have otherwise missed.

At the extreme, I've seen the path of a production database backup listed in a publicly readable .gitignore, and that database backup was publicly accessible, too.

Most of the time, nothing sensitive is revealed, but defense in depth suggests it's better not to upload files like these to your web server unless the web server uses them (like .htaccess) or crawlers do (like robots.txt). If you do, they shouldn't be publicly readable (unless that's intended, as with robots.txt), and even then you'd want to make sure nothing sensitive ends up in any such publicly readable file. Even if there's nothing sensitive in them now, there's no guarantee that nothing sensitive will ever be added.


I'm gonna give my counter take. Information disclosure is something that the DevSecOps(tm) crowd spends a disproportionate amount of time on for little benefit. The number of security professionals who don't know how to code, but learned Nessus or CrowdStrike and criticize others is too damn high.

I had to work with a security team in a FAANG for several years. They were so high and mighty with their low-sev vulnerabilities, but they never improved security, and refused to acknowledge recommendations from the engineers working on systems that needed to be rearchitected due to fundamental problems with networking, security boundaries, root of trust, etc. Unsurprisingly, their "automated scanner" failed to catch something an SRE would have spotted in 5 minutes, and the place got owned in a very public and humiliating way.

When I see things like this it brings back memories of that security culture. Frankly I think Infosec is deeply broken and gawking over a wild .gitignore is a perfect example of that.


I'm a professional red teamer at a FAANG company, for reference. There are plenty of times where I find several low severity vulnerabilities, none of which are exploitable alone, but which can be chained together to produce a functional exploit with real impact.

There's no guarantee any of your testers will find every issue, and there's no guarantee that a seemingly innocuous finding can't have a greater impact than might readily be apparent.

That said, there are a ton of charlatans in security exactly like you describe - folks who can't read code (let alone write it) who just know how to click "scan" on their GUI tools and export the report to a PDF. A lot of orgs have a QA-level team running those automated scans, which get passed on to a penetration testing team, who have more experience but a limited time window for testing, and then finally on to red teams, who, along with some appsec / product security folks embedded directly on product teams, tend to have the most expertise and the most time to really dive deeply into a service or application.

Also, keep in mind that those gawking over this probably aren't security folks, and the competent security folks here may not be gawking at the file itself (or others) - just taking part in the discussion.


> There are plenty of times where I find several low severity vulnerabilities, none of which are exploitable alone, but which can be chained together to produce a functional exploit with real impact. There's no guarantee any of your testers will find every issue, and there's no guarantee that a seemingly innocuous finding can't have a greater impact than might readily be apparent

Absolutely this. Security often has a different perspective on how systems may be exploited together to create a systemic issue where none exists independently. Of course, this is also often where security fails: in communicating exactly why these low-severity issues must be corrected, and in facilitating an engineering discussion about how the attack chain can be effectively disrupted and detective controls implemented elsewhere in the chain, such that attempts to exploit it are detected.

In short, the failure isn't in finding the threat but in dictating solutions without getting everyone involved in engineering the interaction in the room, so to speak.

It would be ideal to have security engineering as an embedded function, representing red and blue team findings as system requirements and acting as a single point of contact for security issues, such that mutual trust and respect may be developed.


I work in .gov, so I have a lot of experience with that kind of security “engineer”, but I’d take a more moderate position. This stuff is super easy to resolve, so you should spend a couple of minutes closing it and then focus on more complex things. The reason: when something like log4j happens, you aren’t making it so easy for attackers to know whether you’re vulnerable – passively telling them lets them sidestep things like WAF rules that block IPs that actively probe.


There's no need to minimize or overblow this; we need to put it in proportion. An information leak by itself is nothing, but it must be reported and taken seriously (by default, it should be fixed).

I'm not disappointed this happens at tesla.com; I expect as much. But to many people, this is a top-notch brand. You don't expect this on google.com or nsa.gov or fbi.gov either, do you?


Personally I'd not deploy these files. Although that would be more to do with not having to discuss it yet again with auditors or pentesters than it would for actual security.


It's not an arbitrary thing, and any kind of vulnerability (including this one) is potentially a step in a chained exploit. I wouldn't be surprised if we see a hack before Tesla fixes this. And yes, they will fix it, because it's a security issue.


It's a bit of an information leak, but probably not a particularly serious one. It just gives some information about what tech stack they're using, which isn't really public but also not that hard to find out, and maybe a bit about where an attacker would want to look for other sensitive stuff. Pretty minor really, on its own.

It is a bit embarrassing because most web servers (and deployment setups) shouldn't be publishing/serving dotfiles anyway (files with names beginning with a dot). But it's not necessarily a problem as long as they have some protection to keep the _really_ sensitive stuff from leaking; it's just kind of funny.
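For what it's worth, blocking dotfiles is usually a one-rule fix. A minimal sketch for nginx (Apache has an equivalent `<FilesMatch>` deny; the choice of 404 over 403 is a judgment call, not anything Tesla-specific):

```nginx
# Refuse to serve any path component that begins with a dot
# (.git, .gitignore, .htaccess, .env, ...).
location ~ /\. {
    return 404;  # 404 rather than 403, so the file's existence isn't confirmed
}
```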


This shows that the teams in charge of code deployment have relatively weak quality control.

In practice, it means that if the .gitignore file is leaked, there is a substantial risk that they'll accidentally leak the .git folder someday, too.

The .git folder indirectly contains downloadable copies of the source code of the website, which could very likely lead to leaked credentials or compromised services.
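Checking for that follow-on exposure is trivial to script. A minimal sketch of the URLs an auditor would probe (the paths are standard git internals, nothing site-specific; `check_git_exposure` is a hypothetical helper name):

```python
import urllib.parse

# Files that exist in every .git directory; if any of these is publicly
# fetchable, the whole repository can usually be reconstructed.
GIT_PROBES = [".git/HEAD", ".git/config", ".git/index"]

def git_probe_urls(base_url: str) -> list[str]:
    """Build the URLs to check for an exposed .git folder."""
    if not base_url.endswith("/"):
        base_url += "/"
    return [urllib.parse.urljoin(base_url, p) for p in GIT_PROBES]
```

An exposed `.git/HEAD` typically returns a line like `ref: refs/heads/main`, which is what off-the-shelf repo-dumping tools key on.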

Your life can depend on Tesla.com services.

Even if you're on the pedestrian side.


What makes you think that there is some "substantial risk"? You seem to be mixing together git repos and site deployment rules. I don't see the big deal here with some CMS leftovers being deployed, but yes from a perspective of correctness this is not something that needs to be deployed.


> This shows that the teams in charge of website code deployment have relatively weak quality control.

FTFY. Little of Tesla's software is whatever they're using on the website. That'd be like judging Apple OS software by their website source.


This is the customer control panel, which leads directly to the car APIs behind it, which use the same credentials.

On the same domain there is also the Tesla SSO.

It would be bad if this got compromised, as there would be direct impact in the physical world, not just on a static landing page somewhere.


So basically everyone’s life is at risk because the .gitignore got leaked. That sounds reasonable.


I'd be pretty surprised if the marketing / landing site was remotely connected to the user portal. Most companies have a marketing-friendly CMS for public content, disconnected from the actual customer-facing portal.


Tesla.com seems to be more than marketing; at the very least, customers can sign in there to perform car operations.

If you can grab credentials from there you can do quite some things already.

See https://www.teslaapi.io/authentication/oauth (and this is in the case you don't trick an employee).

But I agree that normally, at some point, they would catch it.


what makes you think the tesla.com website is where they keep their real code lol?


The gitignore explicitly called out where the sensitive settings file is, so presumably that makes it a lot easier to figure out where to start injecting bad code


Sure, but these appear to be very standard directories for popular website CMS platforms like Drupal.

So, not very surprising and probably doesn't really tip anyone towards anything particularly special.


It's probably caused by an incorrect nginx configuration, which means other static files may be exposed.

Otherwise, it's not much of a leak.


you could theoretically social engineer until you find something to exploit

ie, if the file said to ignore "/site/adminpasswords.txt" then you could go to /site/adminpasswords.txt and reveal admin passwords. this is obviously a simple eli5 explanation but i hope it helps

however, i doubt the tesla.com website is where they keep any important code that relates to actual tesla software like we would see used in cars. that would be like the army having their real code for their software/systems at goarmy.com lol


It's not really leaky and can't be exploited by anyone. It's an interesting curiosity at best.



