These have a fairly simple fix that you can implement yourself as a developer. Don't let your services listen on (AKA bind to) 0.0.0.0 or 127.0.0.1.
The entire 127.0.0.0/8 block is dedicated to the loopback interface [1]. That's 2^24 - 2 unique IP addresses you can choose at random. This basically eliminates the feasibility of the DNS rebinding component, as it would take prohibitively long to find the actual loopback address that your services have bound to.
It's important to note that this is much more effective than not using the default port. It's much faster to iterate all 2^16 ports on the same IP address than it is to wait for DNS TTL to expire so you can rebind to another IP address.
As a bonus, you don't have to worry about port collisions when nobody's allowed to listen on 0.0.0.0. Everybody can use 8080 if they want.
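To make that concrete, here's a rough sketch in Python (the port is arbitrary; in practice you'd generate the address once and keep it in your service's config rather than picking a new one on every start):

import random
import socket

# Pick a random loopback address; keep the last octet away from 0, 1 and 255
# so we never land on plain old 127.0.0.1. On Linux the whole 127/8 block is
# routed to the loopback interface, so no extra setup is needed to bind here.
addr = "127.%d.%d.%d" % (random.randint(0, 255),
                         random.randint(0, 255),
                         random.randint(2, 254))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((addr, 8080))
srv.listen(5)
print("listening on %s:8080" % addr)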
I'll point out for the benefit of others that not all operating systems treat 127.0.0.0/8 as belonging to the loopback interface.
Linux does, so this will probably work for the majority of users here, but, I believe, it will not work for FreeBSD, for example (I'm on an iPad and don't have an SSH client at the moment so I can't verify). In those cases, however, you can add "aliases" to the loopback interface, with specific IP addresses in the 127/8 subnet, and then use them.
On OSX (and, I suspect, vanilla FreeBSD), this looks like (replacing 127.1.2.3 with the address you want to use; 0xff000000 must stay the same, since netmasks of all addresses on an interface must not conflict if they're on the same subnet):
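sudo ifconfig lo0 alias 127.1.2.3 netmask 0xff000000

(I'm going from memory here, so double-check the exact syntax on your system.)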
Recommendation for aliases on an interface is to use 255.255.255.255 as the netmask on FreeBSD.
For example, to add extra addresses to lo0 on FreeBSD you may use:
ipv4_addrs_lo0="127.0.0.2-5/8"
in rc.conf
And now those addresses are automatically assigned to lo0, the netmask will be set to 255.255.255.255 automatically for all but the first in that range (.2). Since 127.0.0.1 is unconditionally assigned to lo0, you could also use:
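ipv4_addrs_lo0="127.0.0.1-5/8"

(if I have the semantics right, the /8 then just matches the 127.0.0.1 that's already present, and .2 through .5 all get the 255.255.255.255 netmask)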
The ifconfig man page just says the netmask must be "non-conflicting" if on the same subnet as the first network address for the interface (that'd be 127.0.0.1/8 for lo0). I'm presuming the narrower netmask qualifies as non-conflicting?
On FreeBSD at least you can assign whatever address you want to a loopback interface. I'm currently using aliases on a loopback device (lo1 in this case) for host <-> jail communication. The jails get 172.16.0.0/16.
You can manage this trivially in rc.conf and then restart the netif and route services.
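Roughly like this, for example (addresses are just placeholders):

cloned_interfaces="lo1"
ifconfig_lo1="inet 172.16.0.1/16"
ifconfig_lo1_alias0="inet 172.16.0.2/32"
ifconfig_lo1_alias1="inet 172.16.0.3/32"

followed by something like `service netif restart lo1` and `service routing restart`.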
Yeah, good point. I fired up my Windows VM and confirmed that I can listen on a randomly-chosen 127.* address as well, so that should mean most people don't need that extra step. :-)
I had some fun with this as well a while ago. It's too bad that with IPv6 we'll only get one loopback address (i.e. ::1) and not again a whole loopback subnet.
IPv6 has a humongous private network range, though. fd00::/8 has 2^120 IP addresses. You could set up a Docker bridge with that range and run containers with random addresses. Correctly guessing one of 2^120 addresses would be nearly as difficult as cracking AES128.
Alternatively just use unix domain sockets. That way only locally running programs can access your service. Random ports sounds like security through obscurity (which might work from time to time).
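A minimal sketch of that in Python (the socket path is made up): a web page has no way to open an AF_UNIX socket at all, and the file mode keeps other local users out.

import os
import socket

path = "/tmp/myservice.sock"  # hypothetical path; put it somewhere your app owns

# Remove a stale socket file from a previous run, if any.
try:
    os.unlink(path)
except FileNotFoundError:
    pass

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
os.chmod(path, 0o600)  # only the owning user may connect
srv.listen(5)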
> That way only locally running programs can access your service
A web browser is a locally running program. You mean, of course, that access to the socket can be limited to programs running under a user account sufficiently privileged to read and/or write to the socket file. Can all services be made to communicate only through domain sockets?
I agree that choosing randomly from the available loopback addresses and/or ports isn't providing any real security; it surely wouldn't take the web browser very long to make a connection to every port in the 127/8 address range.
That is of course correct. I should have worded it a bit more technical, I guess :-)
Support of unix domain sockets depends on the program, so not all support that, but Redis and memcache do. At least for those you can prevent the browser from connecting to them using TCP.
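For Redis that's roughly this in redis.conf (the socket path is just an example):

port 0
unixsocket /var/run/redis/redis.sock
unixsocketperm 700

port 0 stops Redis listening on TCP entirely. Memcached has an equivalent: `memcached -s /var/run/memcached/memcached.sock -a 0700`.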
I guess another option would be to just limit what ports a browser can access. Some ports are blocked by default, but I feel like a whitelist instead of a blacklist makes more sense, and I'm fairly certain that most of my daily browser usage would still work if only 80, 443 and maybe 8080 were whitelisted. Is there a way to achieve this (in Chrome/Firefox) without resorting to some outside sandbox?
> You mean, of course, that access to the socket can be limited to programs running under a user account sufficiently privileged to read and/or write to the socket file.
They might mean instead that browsers can’t access domain sockets.
It's part of configuring whatever services you're running (or an option in your tcp/ip library if you're programming your own), and will depend on the service. For memcached for example, you can change the listen address with `-l <addr>`. Usually you just have to find the configuration documentation and ctrl-f search for "address," "bind address," "listen address," "IP address," etc. Or scan the page for something networking related.
It's pretty much always a command line flag or config file parameter.
If you're running with docker, it's even more standard. When you expose a port, just use `-p 127.12.12.3:11211:11211` (with your chosen IP address, of course), and docker will set up the forwarding for you, only for that address.
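So, concretely, something like this (address picked at random):

memcached -l 127.54.33.7 -p 11211
redis-server --bind 127.54.33.7 --port 6379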
Hey, I know I'm a bit late to the party. But I'm looking to protect my company's redis instance from these attacks and your approach looks very promising. Things is, I don't know what to do now that I've written a similar script. Where should I use this code? In the redis.conf file?
Bear in mind, I'm a front end developer, so this is absolutely not my forte. Thanks for the help.
Right, and furthermore an address in 3.0.0.0/8 isn't likely to be added on any of your interfaces, so attempting to bind/listen on that will just fail. Even if you add it yourself, there's a tiny chance you might start redirecting some traffic you care about (say, facebook) to a local interface where facebook services are definitely not running. :-P
Fortunately, the whole 127.0.0.0/8 block is automatically added to the loopback interface (edit: on Linux at least), so you can bind away to your heart's content.
If you set your DNS TTL to a relatively-low 1 minute (as in the POC), that's 31.9 years for an exhaustive search.
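(That's 2^24 addresses at one minute per rebind, i.e. 16,777,216 minutes, which works out to roughly 31.9 years.)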
The birthday paradox means that you're probably going to spend a lot less time than that, but it's certainly going to buy me a lot longer than 1 minute that I get if I'm using 127.0.0.1. The chances I stay on any random website longer than 15 minutes are pretty slim, and you've got really poor odds of finding the right IP in that time.
I don't see how the birthday paradox applies. If you pick a random IP under 127.0.0.0/8, then on average a website searching will take 1/2 of time required to check every IP sequentially.
cbr is correct; the birthday paradox refers to trying to find two random values that are equal to each other, with no other constraint on their value (so (3,3) and (78,78) are both valid solutions). What we have here is trying to find one value that is constrained (if 3 is a solution, 78 can't be). Assuming you enumerate IP addresses in random order, the expectation will be 16 years to find the one you want.
Ahhh, thank you for explaining that distinction much better than Wikipedia (maybe you ought to add this tidbit :-) ). I'd been assuming it applied to any probability of collision in a fixed space.
So then, presumably, even with this solution the attacker has birthday attack advantage for:
a) attacking multiple targets all using independent random IPs for their services
b) Users running multiple services, each on a separate random loopback IP
And if I'm correct there, then this would presumably extend to generic DNS rebinding attacks; the greater the number of IPs in a subnet, the greater the birthday attack advantage the attacker gains. (right?)
Your (a) is my example (2), where the dice are the IPs used by the multiple targets and the value 3 is the IP address you test for. It's still not a birthday attack, but it is helpful to the attacker, assuming the goal is "one compromise on any target, I don't care which". (In terms of the formula I gave as the solution to (2), if you make g guesses to attack t targets, your chance of success on at least one target is [1 - ([n-g]/n)^t].) In example (2) itself, there was 1 guess, 3, and 100 targets, the dice.
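(To put rough numbers on that: with n = 2^24, g = 1440 guesses per target, say one rebind a minute for a day, and t = 1000 targets, that's 1 - (1 - 1440/2^24)^1000, or about 8%.)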
Your (b) is similar, but even more helpful to the attacker in that the loopback IPs are constrained not to overlap with each other, which makes searching for "any hit, I don't care which" easier.
It's a birthday attack if you generate a lot of values and hope for two of those values -- all of which you generated -- to be equal to each other. If you're generating values and hoping for one of those values to be equal to some externally-defined value, it's not a birthday attack, it's a guessing attack.
> If you set your DNS TTL to a relatively-low 1 minute (as in the POC), that's 31.9 years for an exhaustive search.
Right, but that assumes a new DNS record is created each time. An attacker might simply have records for the entire 127.0.0.0/8 range and would iterate over them.
It would still take a long time to make an exhaustive search, but it will be faster than waiting for a 1 minute TTL between each try.
Are you sure this actually makes a difference? Can't you detect errors when searching the IP space using the POST method, or try and resolve multiple IPs simultaneously through the DNS rebinding trick?
Parent comment is referring to the PoC created here. The PoC creator would have to wait for TTL to expire each time for all of 127.0.0.0/8, as opposed to just having to wait for one TTL to expire to set it to just 127.0.0.1.
I don't think that is true, though maybe I'm misunderstanding.
DNS rebinding is a workaround so that you can read the reply to your request.
Enumerating a list of services that are running is something you could do as a precursor to using the DNS rebinding attack to attack them. I am not familiar enough with javascript networking to say what kind of rate limiting you would incur, but in theory at least you should be able to probe many IPs at once.
In a past life I had to write some DNS rebind attacks for some CPE testsuite software that is out there.
It was very easy to write some javascript that hangs out in the browser, gets the updated DNS host as the 192.168.0.1 address (sure sure, you can go crazy guessing other addresses) and then about 60% of everyone was on admin:admin or something equally common; the first 24 bits of an ethernet address (the OUI) identify the vendor, which makes guessing the right admin interface even easier. Then you just start posting data to well-known web admin interfaces and update the router password.
I have no idea how well this works, three or four years later...
WOW. That's just... wow. You might even be able to upload new firmware. And you could do all of that inside a hidden iframe with its own domain, so the parent page DNS never needs to change.
Fortunately most routers have a random default password now, rather than admin or changeme or whatever. At least that's what I've found; I guess it might be regional.
This would break shared hosting, load balancers, or any other architecture without a unique IP address per hostname. I don't think this is feasible unless it's applied to IPv6 only due to IPv4 scarcity.
> The attack depends on multiple software products all making very reasonable decisions about how they should work, but the way they interact with each other leads to a vulnerability.
I'm sorry, but I disagree. A browser allowing externally loaded scripts to access private ip ranges is not a reasonable decision.
PSA: To protect yourself from this, and some more bad browser defaults, use NoScript with "allow all scripts globally." Keep the JS, but filter out some bad stuff. Also enable ABE (application boundaries enforcer, made to solve exactly this problem) for good measure.
A browser allowing externally loaded scripts to access private ip ranges
There are plenty of legitimate systems in the ecommerce world that rely on doing this.
Take Ariba Punchout, for example. The idea is for the store to pass the user's shopping cart back into the customer's internal ERP system to be turned into a purchase order and sent for approval. The way this is handled is that the initial request sets up a session including a URL to a customer-internal service that's intended to handle the transaction. When the user completes checkout, a page is returned to the user containing an XML payload, and script to submit that XML to the specified internal service URL. That way, the store is able to provide the PO data to what is otherwise a completely internal system.
Which can all be handled using an (automated) whitelist, à la CORS.
Too many things run on localhost (or intranet) and assume that localhost=safe. I know, "that's wrong" and "don't listen on 127.0.0.1" and all that, but that horse has left the barn, emigrated, founded a family and died happy. We can't put the 127/192/10 genie back in the bottle.
Prevent access to all private resources from the outside and whitelist on a need-to-access basis. Or be ready to keep monkey patching your system against these exploits forever.
No. What the parent is saying is that: if the script was loaded from an external IP address, it should be blocked from talking to an internal IP address. The browser knows whether it loaded a script from an external IP address, and it knows what the internal IP addresses are, so it can just ban something loaded from elsewhere from talking to ports on your local machine.
A few months ago there was a post [0] by antirez about how dangerous it is to leave a redis instance open to the world, in that an attacker could, for instance, authorize an SSH key on your machine and gain remote connectivity.
While the average workstation is not usually reachable from the outside network, you could probably combine some variant of that attack (the first thing that comes to mind: overwrite .bash_profile) with the attack of this article, causing a lot of fun.
Running your browser in Red Hat's SELinux sandbox [1] [2] limits the ports you can connect to and thus limits this type of attack to those ports (80, 81, 443, 488, 8008, 8009, 8443, and 9000 in the default configuration).
If you were attacking a local webapp interface instead of a non-http daemon like redis, you would need your browser to be able to access the web service. At that point, this kind of attack would still allow an attacker to also access that web service.
I have seen this story posted and discussed in several locations. It boggles my mind that everyone is talking about DNS filtering and/or browser security models, when it's painfully obvious that the actual problem is the fact that the targeted services (redis, memcached, elasticsearch, etc.) apparently do nothing whatsoever to authenticate incoming connections (at least in their default configuration).
Yes: remote DNS servers have no business serving up loopback addresses. Yes: browsers shouldn't let remote scripts access resources on the local network.
But WTF are you guys doing running services bound to network ports (even if only accessible from the local machine) that apparently have no authentication whatsoever? Have none of you ever used a multi-user machine?
When I was in university we had just three SunOS boxen shared amongst all undergrads in my faculty, and all three were directly accessible from the whole of the internet - there was no firewall of any kind. Even back in those rather more innocent days you learned real quick not to put up services which didn't authenticate every incoming connection.
A good firewall is not a substitute for having individual machines be secure.
A machine having only one (intended) user is not an excuse to run services that are not secure against local users.
Interesting attack. A far more feasible one is just to throw nmap around your next conference's WiFi network and try common postgres/mysql combinations. You'd be surprised how many developers have such services exposed, often with 'developer passwords' and production dumps loaded.
I think it'd be very common to "protect" these services by making them bound only to localhost. The fact that this attack bypasses that protection is pretty interesting.
It's also common to open these up so that team members can grab a copy of your database. I haven't done that, but I can think of a case in the past few months where a developer had done so.
Edit: Now that I think of it, and especially with containerized dev environments and VMs, I'd bet it's quite common. I'm sure I've opened up a DB or search container more than I needed to just because I couldn't get the damn things to talk. I still would have a firewall, but not everyone does.
So long as you have a DNS resolver which respects TTLs, you could be behind anything - they're dynamically changing DNS to point at your localhost. It's not the remote server making the connections, it's your web browser. At which point, data can be exfiltrated, thanks to the DNS tricks which get around CORS.
Question: could DNS rebinding be used to tap into 1Password inter-process communication? They use localhost websockets for IPC; it's authenticated through the request origin and then through verifying the PID is in fact the browser [1].
DNS rebinding could definitely get around the PID check, but could it spoof an origin to something like "safari-extension://com.agilebits.onepassword4-safari-2bua8c4s2c"?
Well first off, I honestly don't know if it's possible; if I knew, I'd just shoot an email off to 1password.
That being said, I don't think we're suggesting the same thing. I'm saying one could write a webpage that uses DNS rebinding to make requests on localhost, like OP. Then, the webpage, completely bypassing the browser's built-in extension system, makes a request to 1password over localhost (which they're already using for IPC).
The reason that DNS rebinding is relevant is twofold: first, you need the request to hijack existing IPC between the browser extension and the standalone desktop app, and second, you need the request to come from the browser's PID. That should all work.
The question here revolves around 1password's verification of the origin the request is coming from, for example, something similar to the safari-extension:// URL quoted above.
So then your webpage would also need to be able to spoof the origin of your localhost requests to look like they were coming from that origin. I don't know if that's possible or not, but if it is, it would imply that this technique could get you illegitimate access to 1password.
Browsers could pin DNS responses when a page finishes loading, so any further requests for that domain would use the cached IP instead of doing name lookups, but that would be a PITA because they generally rely on the OS DNS subsystem. It might also break long-running pages that won't fail over anymore.
It would probably be easier to simply keep an IS_LOOPBACK flag for every DNS name resolved and kill any connection attempts if the flag changes while the page is loaded. Then you can keep using the OS DNS resolver logic.
DNS might legitimately return a different CDN but it sure as hell won't flip between private IP spaces and the public internet.
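For illustration, the classification and flag-keeping part is tiny; here's a sketch in Python using the ipaddress module (the plumbing into a real browser's resolver and socket layers is obviously the hard part):

import ipaddress
import socket

pinned = {}  # hostname -> whether it resolved to local/private space at first use

def is_local(ip_str):
    ip = ipaddress.ip_address(ip_str)
    return ip.is_loopback or ip.is_private or ip.is_link_local

def allow_connection(hostname):
    ip_str = socket.gethostbyname(hostname)  # keep resolving normally via the OS
    local = is_local(ip_str)
    if hostname not in pinned:
        pinned[hostname] = local  # remember which side of the fence we started on
        return True
    # Kill the attempt if the name has flipped between public and local space.
    return pinned[hostname] == local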
Captive portals [0] often return short-lived DNS responses to private IP space until you pay or login, at which point their DNS servers start returning proper responses (and stop intercepting DNS requests). Although not "legit", this is a case which is very common - you wouldn't want your browser to suddenly stop working because you connected to a captive portal that started modifying DNS responses.
You're both correct. I hope they find a solution to fix the utterly broken garbage that captive portals are: every "portal" is different and completely prevents automated internet access. I hope this gets fixed somehow, although I don't have high hopes: https://datatracker.ietf.org/wg/capport/charter/
Certain recursive resolvers, like unbound, have protections you can enable that disallow remote hosts from returning private address space.
xip.io is one of those services that doesn't work on my home network because I have unbound block all RFC1918 space.
# Enforce privacy of these addresses. Strips them away from answers.
# It may cause DNSSEC validation to additionally mark it as bogus.
# Protects against 'DNS Rebinding' (uses browser as network proxy).
# Only 'private-domain' and 'local-data' names are allowed to have
# these private addresses. No default.
private-address: 10.0.0.0/8
private-address: 172.16.0.0/12
private-address: 192.168.0.0/16
private-address: 169.254.0.0/16
private-address: fd00::/8
private-address: fe80::/10
private-address: 127.0.0.0/8
It seems to me that they should quit on reading any input that they can't parse. At best silently ignoring bad input leads to software not doing what it's supposed to do. At worst it leads to attacks like this.
I realize this is contrary to "be conservative in what you do, be liberal in what you accept from others", but I never thought that was a good way to write software.
Postel's law / the robustness principle is a good way to write robust software, like a TCP implementation or an HTML parser. Most software doesn't need to be robust, though, and failing fast probably leads to fewer security issues than trying to continue on.
That's at the very least a much harder change to roll out, since if things are broken after a new version of Redis comes out people will blame Redis for breaking things.
I'm not very familiar with Redis, so I might be missing something here.
But if the choice, after a Redis update, is between:
a) my software breaking and, hopefully, an error message saying "redis failure parsing 'XYZ'"
or
b) my software /maybe/ continuing to function, while passing commands to Redis that it's ignoring
I would always pick (a), and I think most programmers would think likewise.
Let's say you're using a cache server as a best-effort cache, and you take advantage of its ability to store complicated data structures. Your client implementation has a small bug with one of them, and ~1% of the time it sends something to the server that's not to spec. Right now, the server returns an error for that specific request, but doesn't drop the connection and continues processing later requests on the connection. You know about the errors, but they're not worth fixing.
Now the cache server has a security update, so you apply it right away. But now when it gets your invalid command it not only returns an error but it drops the connection. Your client doesn't handle this well, and now your caching is fully broken and your server falls over from the load.
When you hear about someone jailbreaking an iPhone through the browser, this is how. Because the browser works as a window to all the TCP sockets running on a device, it's the perfect way to exploit buffer overflows on a device that lacks a terminal.
Also remember this with all your IoT appliances running on your local network. Even if a device has a local IP address, as long as you have a computer with a browser on the same network, you might as well consider it publicly accessible from the rest of the internet.
No, iPhone jailbreaks usually just exploit bugs in WebKit to obtain code execution in the context of the Safari process. There generally aren't any system services running TCP locally on iOS - iOS has much more sophisticated IPC mechanisms for that.
Your first link appears to require the installation of an app on the phone (after installing a dev certificate). The app, running arbitrary native code, performs the actual jailbreak (most likely using a chain of kernel exploits).
The devs seem to have gotten around the App Store by using an approach like TestFlight, which allows apps to be deployed for development and wider testing purposes without going through the App Store.
I know one of the original iPhone and iPod Touch jailbreaks worked that way. I just went to a website and it automatically rooted and installed the homebrew app for me.
None of these exploits have anything to do with accessing TCP services using a browser. The only exploit there having anything to do with Safari, CVE-2016-4657, is a WebKit memory corruption (per your [1]: "The stage1 employs a previously undocumented memory corruption vulnerability in WebKit to execute this code within the context of the Safari browser (CVE-2016-4657).").
Remember, kids: a firewall is only one layer of your security. Make sure you always implement proper access control and guide users through changing the defaults.
Do developers often run things on localhost? I mean sure, you'll have things running on your dev machine, but for me at least, http://127.0.0.1/ will just show the default webroot, with its placeholder index.html. All my actual sites listen for custom hostnames (since otherwise you only get one site per machine or have to do silly things with port numbers on the url).
So unless somebody has crafted a page specifically targeting me and my naming convention for local sites, this wouldn't be an issue. And of course, once you hit a site, you'd still need to deal with the same security that the public facing version sees. You certainly wouldn't go out of your way to disable that on your local machine.
Databases are named, and often live within named database server instances, so they'd need to be specifically targeted as well. And, again, they have authorization to deal with. It's not like you'd leave that open either.
Yes, I've always run everything on localhost and use different ports, I think most devs are the same. I've never even thought of doing anything else
> All my actual sites listen for custom hostnames (since otherwise you only get one site per machine or have to do silly things with port numbers on the url).
It's not just web servers though, everything from databases to cache servers.
>Databases are named, and often live within named database server instances, so they'd need to be specifically targeted as well
I think most would stick with the defaults. With mssql server it's "." or "sqlexpress" or whatever.
>And, again, they have authorization to deal with.
I've always used windows authentication, but that now seems like a terrible idea.
Yes. When you're generally running one application server, and your other services all use specific ports by default, it's VERY common.
In fact, I've not worked at a company (in 12 years of web-app dev) that used different domain names on dev machines until very recently, when it became a requirement of the software itself to have subdomain-per-client code.
DNS rebinding can also be used to attack private addresses, for which it would be perfectly legal to have real DNS names (think DNS entries for a company's intranet). Blacklisting localhost only offers a bit of security against silly devs, but arguably pivoting an attack from a public server to a private network is much more valuable and harder to stop.
There should be some central network policy in the enterprise that says a DNS record coming from outside the company can't point to addresses on the 10.xx.xx.xx or 192.168.xx.xx spaces. I'm surprised that isn't already the common configuration.
Unbound has that as a possibility in the recursive DNS resolver.
Super simple to set up too:
# Enforce privacy of these addresses. Strips them away from answers.
# It may cause DNSSEC validation to additionally mark it as bogus.
# Protects against 'DNS Rebinding' (uses browser as network proxy).
# Only 'private-domain' and 'local-data' names are allowed to have
# these private addresses. No default.
private-address: 10.0.0.0/8
private-address: 172.16.0.0/12
private-address: 192.168.0.0/16
private-address: 169.254.0.0/16
private-address: fd00::/8
private-address: fe80::/10
private-address: 127.0.0.0/8
I'm not excusing it, but at megacorp we had an entry for localhost.megacorp.com to allow devs to get past the oauth whitelist. And, yes, it's on their public DNS (just checked).
I do remember, however, that either my ISP or whatever DNS voodoo DD-WRT was doing blocked resolution of things in 127.0.0.0/8.
Why would DNS be broken? It's only an IP address. If you don't like the localhost part, then just change it to its IPv6 counterpart: "::1". And if you think that it should be blacklisted too, then you could use the computer's temporary IPv6.
But then, you could also use 127.x.y.z (so you'd have to "blacklist" all the 127.0.0.0/8 range).
And also blacklist the 192.168.x.y range (more specifically, an attacker could bet that your router hands out 192.168.1.x addresses starting at x=100, and so serve up 192.168.1.100).
Then at this point you realise that it's not DNS that's broken but the underlying applications.
So you're saying a remote DNS server should be able to serve up a record that points to a localhost address?
This is too clever by half, we simply shouldn't allow it!
Just because a localhost address is in the IPv4 address space doesn't make it just some ordinary address. And applications aren't broken; the local operating system's network software is what's being hacked in this case of DNS rebinding.
Then for that oddball one-off system, there's a kernel compilation flag that is impossible to find in obfuscated C and assembler that you can set named
UNSAFE_ALLOW_REMOTE_DNS_RECORDS_DRAGONS = true
where the default setting is always FALSE.
Can someone who does kernel or DNS client development in operating systems comment? Am I nuts or is DNS nuts?
This problem doesn't just apply to localhost, although it's most straightforward to exploit that way. You could also use this technique to scan the user's LAN or, in a more targeted attack, bypass IP address restrictions on specific servers.
Scripts from the public Internet shouldn't be able to access private or local networks as a matter of policy.
Similarly, in a high-security environment, scripts from a private network shouldn't be able to access the public Internet - to help prevent exfiltration of private data.
> Scripts from the public Internet shouldn't be able to access private or local networks as a matter of policy.
I agree, and the ability has encouraged some pretty stupid practices even where it's used for things other than espionage or malicious intent.
Lenovo has a driver detection / update tool on their website. To run it you download a helper application that opens up a HTTP endpoint on localhost, then their website uses it to scan your system and (hopefully) shuts it down afterward. Why was this done in the first place? Forcing users to download and run an executable (which has to be restarted each time you scan) that has no UI except for the web browser tab you already have open is dumb.
I'm sure I've seen browsers (maybe Opera?) which wouldn't let a website on a "routable" IP address make any requests at all to anything on a "non-routable" IP address; I assume that 127/8 is included in the latter range. That approach basically eliminates the DNS rebinding attack, I assumed that was normal practice in all browsers - obviously not, though.
and making sure the VM is not set up as bridged networking, I personally have a pfSense VM and route my computer and all my other VMs to it each on a separate virtual NIC with firewall rules etc.
look at my profile for a link to my blog where I explain things in detail, it's not finished (I still have to write a post about how I configure i3 and how I start virtualbox more easily) but the pfSense stuff should be ready for public consumption
Correct. Like developing to a common deployment target, provisioning a development VM with the same toolchain you use to provision production VMs, isolating your VM from the system state of your development machine. I cannot imagine developing any other way.
I did this for a while, but it falls apart without constant care and feeding. I had a vagrant box that could be fully provisioned with Ansible, and checked everything in to the repo. I paid for Parallels because virtualbox is too slow to do any real work (unit tests take 2x or 3x longer to run, for example).
I couldn't convince anyone else to use this stuff. They were happy to just run some homebrew commands every now and then.
I was away from the project for about six months. When I got back I thought "great, I'll just `vagrant up` and be back in business."
Parallels had upgraded itself to some version that wasn't compatible with vagrant. I was eventually able to find an old parallels installer and downgraded.
My laptop had a newer version of ansible, and it couldn't run the old ansible scripts. There was no easy fix to something that should be easy (ansible could no longer create postgres users when ansible was running through a non-root account. With vagrant you log in as the `vagrant` user and that user has sudo).
I deleted all that stuff, and now I do all my development locally. I do run my DB's in docker containers, but that's it. Now it is super easy to add any version of any database to any project. Just a few lines in docker-compose.yml. But my docker-compose.yml was written for an older version of docker-compose. I tried to upgrade it to the latest syntax and nothing worked. I reverted that and things are still running. It is only a matter of time before some docker update renders my db and redis useless.
At that point I'll just `brew install redis` and `brew install postgres` and be done with it. Everything will run, and run at native speeds. Yay!
I think this is overstated. This only falls apart if you cannot get buy-in from your development team, and are not applying proper version management to your tool chain.
First, we use Ansible, not just for development, but for deployment. Because of this, if the time comes to upgrade Ansible, everybody upgrades, and any incompatible playbooks are immediately addressed.
Second, Parallels is a bad choice. They do not have a sufficient enterprise focus, and their product is updated too often. It's great for desktop virtualization, but not for much else.
We use VMWare Fusion, have no performance issues, and have never experienced the issue of hypervisor bit rot. On the one occasion when we upgraded VMWare, we also upgraded the corresponding Vagrant plugin, and everything worked fine.
I guess it depends on your use case but just about everyone I know uses Vagrant. It's true that you should never upgrade VirtualBox or Vagrant anywhere near a deadline, but I rarely have problems. I get similar performance from both VirtualBox and Parallels, both on network shares and execution. I'm on a 2014 MacBook Pro, Ubuntu guests, mostly doing PHP and Node.
That's interesting that you get the same performance on vbox and parallels. I tested vbox, vmware, and parallels on a macbook pro (2011 or 2012). vbox was an order of magnitude slower running our unit tests (and rails tests are slow enough as it is). Parallels was the fastest running the tests at close to native speed. Vmware was okay, but not great.
I was using NFS - the default VirtualBox Shared Folders is so slow as to be useless (which I guess is what you were going to say). But on NFS (which is recommended by Vagrant) I had no problems.
Looks like the best way to protect against this is filtering private IP addresses from DNS responses. Is there a reason why ISP DNS servers in general would ever need to serve out a private IP?
The only problem with that is all ISPs will have to configure their DNS systems to do so (not going to happen soon). The only reasonable fix would be applied at the browser level as they are updated reasonably often.
A better mitigation than that mentioned in the article is that browsers should ignore a DNS update if it goes from a public IP to a private IP range. DNS pinning as suggested would cause havoc with most wifi captive portals, especially for those not computer savvy or the badly configured/implemented captive portals.
> DNS pinning as suggested would cause havoc with most wifi captive portals
It will require extra page reloads. Everybody is used to reloading pages for any random reason by now (and you can reload by javascript too). I don't think it would cause much havoc.
I think you may be able to discover actual IP addresses with WebRTC. If the host is on IPv6 then it may well be a public address. Hence it would be impossible to define "local" in this setup.
As far as I'm aware[0] there are private IPv6 addresses that should be treated the same as 169.254 and 10/172/192 in IPv4. It shouldn't matter what IP the host has, just whether the address of the site you are communicating with flips from the public range to a private range.
because some people set up services on internal/private networks, and split horizon DNS is a pain to set up.
I typically tell people if they can't connect to services on our VPN, to make sure they're using google's DNS servers because they don't protect against DNS rebinding "attacks."
This is an interesting, albeit well-known attack vector. A similar attack was used to attack Avast [0].
The author notes that write access could be used to inject dangerous objects (e.g. malicious pickles) into the database. This is arguably a much more serious bug because it does not require DNS rebinding (such a request can be performed cross-origin) nor can it be mitigated by refusing to read the response (as Chrome is proposing to do).
In short: the database modification attack is potentially much more severe, but as of yet no precise attack chain has been identified. However, I think it's very likely that some server software uses e.g. pickles in the database.
Hmm, Little Snitch, if configured properly (i.e. you allow the browser to only connect to ports 80 and 443), will alert you if a site wants to connect to something weird like 3306, 9000 etc. Then you can kill the packet and nothing happens. Like on OP's PoC. Still, it's a super interesting PoC.
I run my real databases on non-standard ports in docker and put honey pots on the standard ports. Those I fill with dialog from love scenes in popular movies; it's not a ton of data but it's certainly interesting.
I've become increasingly complacent and often allow NoScript to "temporarily allow all javascript on this page", but will stop doing it, having just tried the PoC. It found Redis (which runs in a container, but with the port exposed).
The PoC failed to work when using TorBrowser (with the security slider set to High) and letting NoScript temporarily allow.
In essence, the resolved address of a request will be checked to see whether it lies in a reserved block. If so, further policy checks will be made for the resolved address, and the IP address will be pinned for that HTTP request.
This reminds me of a similar vulnerability in webhooks [1]. I never thought of throwing a POST request at Redis to muck with keys but I tried it just now and it totally works. Geez.
With WebRTC local ip discovery [1] it can be easily extended to work against a whole local subnet. Looks very dangerous.
Probably best to attack this on the DNS rebind level. Encapsulating the browser network context somehow and firewalling it might help mitigate this attack too.
Seems to me that containerizing your dev environment with something like a well-constructed docker-compose YAML should mitigate this.
By "well-constructed" I mean that the backend services for a given project should only be available on the container network, and not be exposed to the host network.
Out of curiosity, what data do you have in your development databases that this becomes such a grave concern? I mean I'm all for security and love to see how creative people can get but we are talking about dev environments and not some part of the infrastructure (automated test machines, production, etc).
People work in development with copies of production databases all the time..
They absolutely should not, but I've seen it at companies I've worked at, at clients, and I'll admit I've done it at least a few times myself over the years.
I once requested sample data to work against while triaging/fixing a production issue and was given an unredacted copy of the production database. Lots of customer names/addresses/phone numbers/emails in there. I nuked it from orbit when I was done.
If you had read the whole article, you would have the answer to your question.
E.g., it can get you to code execution if you poison Python pickled data. Etc.
I actually thought the opposite. Surely the database is more valuable than the ability to execute code. Assuming its a copy of the production database.
You generally work with copies of production databases. Hell I have prod copies of a couple banks' databases on my machine at work which I'm pretty sure violates some data protection law somewhere, despite us being (I assume) legally bound to keep the data secret.
AFAIK we're only allowed to have them on our work machines and we encourage clients to sanitize them before handing them over but not many do.
The crazy thing to me is that people here look for solutions at lower OSI levels (DNS, interfaces, IP addresses) when to me the problem is that there are these services that run with zero security.
Fix the services, require authentication and permission enforcement and the problem is gone.
I noticed that I am running redis-server in the background on my mac...but I have no idea what started it. I'm not doing any relevant development at the moment, and haven't in months.
How can I trace the source and ensure it doesn't restart on reboot?
It's very concerning considering Homebrew's popularity and its habit of running stuff as your local user. Compromising any application that runs as you with as much access to your computer as yourself is pretty bad.
Perhaps a server, when running in development mode, should require a custom HTTP header? This would be a non-simple request, and the browser will refuse.
Would this be a reasonable counter-measure?
The services discussed - memcached, redis, etc - don't use HTTP. The attack is successful because the protocols follow the robustness principle of 'be liberal in what you accept', and simply ignore the HTTP cruft, but still process the command.
A secret value, whether it's called a 'password', a 'key', a 'token', or comes in an 'Authorization' header or 'X-CustomHeader' is always a good countermeasure.
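For an HTTP-speaking dev service, even something as crude as this sketch does the job, since the attacker's page has no way to learn the shared secret (the token and port are made up):

from http.server import BaseHTTPRequestHandler, HTTPServer
import hmac

TOKEN = "some-long-random-value-from-local-config"  # hypothetical; generate per machine

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        supplied = self.headers.get("Authorization", "")
        # Constant-time compare of the shared secret; anything else gets a 403.
        if not hmac.compare_digest(supplied.encode("utf-8"),
                                   ("Bearer " + TOKEN).encode("utf-8")):
            self.send_error(403)
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

HTTPServer(("127.0.0.1", 8081), Handler).serve_forever()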
Can this be solved by configuring the local system (e.g. Debian?) to blacklist any DNS resolution that ends up being a private IP address? Is this possible to configure at the firewall level?
I'm guessing that even in a future in which we have 100% of IPv6 deployment, we would still run our loopback/LAN interfaces with IPv4 for simplicity...
These exploits rely upon the ports being exposed on localhost. Even if you are not (as you should) using ssh tunneling to the VM, any exposed ports are on a separate IP address, not localhost. However, if one is forwarding ports to the host machine, that's obviously a problem.
These exploits rely upon the ports being exposed on a predictable IP. A lot of setups are going to give you very predictable IPs for the virtual machines.
Not if you actually work in the VM, and do not expose database ports to the host (why in hell would you do this anyway). No amount of ip scanning is going to open ports.
Actually I meant something like how Intellij has DB integration, so you could put in your host & port and connect to the DB to help you write queries etc.
If the user isn't using ssh tunneling into the vagrant vm, then this attack still works if the attacker uses the correct IP address instead of localhost.
No, it doesn't. If you are running Vagrant in a bridging config, it has its own separate IP on the network. If you are running it as a private network, it has its own IP on a private subnet isolated to the host machine. You have to explicitly do port forwarding to make it work as you describe.
EDIT: I'm not thinking straight. Please ignore me.
Most skilled developers, probably. The rest can't figure out why the computer next to theirs can't connect to the test database, and after googling a stackoverflow answer have it listen on 0.0.0.0 forever.
Indeed. Turn off SELinux and disable the firewall too! And just leave the password for the MySQL root user blank since it's too much trouble keeping track of them.
I am guilty of this. The real question is: will I ever change this habit?
I see a percentage of us here locking things down.
The rest, and the rest of the developers out there, likely won't.
This needs to be fixed at the browser level; I don't see it being solved any other way that would have the net benefit of having it fixed in the browser.
Simply pinning public and local and not allowing local rebinding should have most of the issues resolved.
That is the best approach, but a lot of people change it to NAT for testing purposes, like connecting to the VM from mobile devices, and other operating systems, etc..
I know there are ways to do that as well, but this is the easiest and so it's most often the way it's done.
Absolutely not a non-issue with all of the configuration options with vagrant. I'm honestly taken aback that you would consider this a nonissue with something like vagrant.
You can intentionally configure anything to be insecure, and what you describe is one such example. Most devs are either bridging or using a private network (the latter in our case).
The DNS rebind seems weird; any sensible DNS forwarder should ignore local IPs (127.0.0.0/8, 192.168.0.1, etc). This attack doesn't seem feasible if you can't hijack local addresses.
Oh no, my development database! What ever will I do if 10,000 entries of Lorem Ipsum get leaked!? In the wrong hands, all of my bunk data from trying to get a PUT right could be really dangerous.
Perhaps you don't have anything confidential on your development machine -- and hopefully you don't! -- but plenty of people do, unfortunately, use real data. :(
Frequently data produced by hand (by developers writing the tests) or randomly generated (by something like quickcheck) does not have all the edge cases you would encounter in real data.
So there is some justification to having real data available - this is why in the finance space at least developer machines have relatively stringent restrictions placed on them compared to a lot of other organisations.
[1] https://tools.ietf.org/html/rfc5735#section-3