2) from the memory dump, it looks like it is post-authentication so the impact would be quite limited (stealing the password hash of an account one already has access to)
Folks on oss-security are skeptical about this (and rightly so, since posts like this have been showing up regularly since heartbleed). http://seclists.org/oss-sec/2014/q2/246
Did you issue the payment from your account at a service? Did you issue it from a hardware wallet? Did you, by chance, happen to decide to use an anonymization tool when buying your exploit? Perhaps you have a strict retention policy, destroy keys which are no longer used, and can no longer sign for it…
If you are paying 20btc for a zero day exploit I suppose you know what you are doing.
Either way, nothing you mentioned is a deficiency of Bitcoin per se.
> This vulnerability exploits a bad check on the network layer of the sshd server
that we trigger to retrieve all children processes memory sections…
That doesn't make sense to me. I thought (and correct me if I'm wrong here) that parents never have access to child process memory, and that you have to go out of your way to do even limited memory sharing between processes.
Well, technically after a fork you have access to all the child's memory pages; they're identical to your own (that is, until any writes happen, at which point copies are made and each process ends up with its own version of that page). But then the 'exec' step happens and it's a completely new process. Any access to the child's memory thereafter would be through some deliberate mechanism, unless there is a significant issue with the underlying operating system's ability to keep processes where they belong (separated).
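That copy-on-write behaviour can be seen in a few lines (POSIX-only, using Python's os.fork; a sketch of the semantics, not of how sshd itself is written):

```python
import os

# After fork(), parent and child start with identical memory images.
# Writes are private: the kernel copies the touched page, so a change
# made in the child never becomes visible to the parent.
secret = ["parent-value"]

pid = os.fork()
if pid == 0:
    # Child: mutate the data, then exit without returning to the caller.
    secret[0] = "child-value"
    os._exit(0)

os.waitpid(pid, 0)
print(secret[0])  # still "parent-value": the child's write stayed private
```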
The whole idea is that processes have separate memory spaces; debuggers, shared-memory applications, and IPC implementations have to work hard, on purpose, to cross that boundary.
If sshd really had unfettered access to its child processes' memory, then so would every other process that spawns a new process; init comes to mind as a nice candidate, cron as another (though neither exposes network sockets).
Of course, a process running as 'root' technically has access to all other processes' memory through (on Linux) /dev/mem (physical) and /dev/kmem (virtual), but you have to open those first, and as far as I'm aware sshd does not do this.
If the parent process were deliberately sharing memory pages with the child, it might be possible to get at those pages (or, if the child process executes code from them, to do pretty much anything with it). These seem like pretty novice mistakes for the OpenSSH team to make, though.
A more important issue is that the window in which your connection is actually being managed by the parent process of all sshd instances must be vanishingly small. More likely you'd be looking to access memory while the ssh process was still privileged. I'm still not sure what that'd get you, though.
So, there are usually four sshd processes involved in handling a connection:
1 - master sshd, listens for incoming connections. When it accepts one it forks and executes a new sshd to handle it. It executes a new sshd to ensure that each one gets different runtime randomisations (ASLR mappings, PRNG state, etc.)
2 - privileged sshd. This is the result of the fork+exec of (1) above. It does absolutely no packet processing and exists only to perform operations that require privilege or access to sensitive data (e.g. the host keys). As soon as it starts, it forks an unprivileged process.
3 - pre-auth sshd. This is the first unprivileged process forked by #2. It is responsible for packet processing and most crypto (except that which touches sensitive data). This process runs in a chroot. In recent OpenSSH versions, it is additionally sandboxed using systrace, seccomp-bpf, OS X seatbelt or (if none of these is supported) restrictive rlimits. It exits as soon as the user is authenticated.
4 - post-auth sshd. This is forked from the #2 process after authentication completes. It runs with the privileges of the user and handles network packets and crypto much like #3.
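The chain above can be modelled very roughly in a toy sketch. This illustrates the fork structure only; the exec (for fresh ASLR), chroot, and privilege-drop steps, which need root, are left as comments:

```python
import os

def unprivileged_child():
    # #3/#4: would chroot to an empty directory and switch to an
    # unprivileged uid/gid here, then handle packets and crypto.
    # os.chroot("/var/empty"); os.setgid(...); os.setuid(...)
    os._exit(0)

def privileged_child():
    # #2: holds sensitive data (host keys), does no packet parsing
    # itself; it immediately forks the unprivileged worker.
    pid = os.fork()
    if pid == 0:
        unprivileged_child()
    _, st = os.waitpid(pid, 0)
    os._exit(os.waitstatus_to_exitcode(st))

# #1: master. On accept() the real sshd does fork+exec so every
# connection handler gets fresh runtime randomisation; we just fork.
pid = os.fork()
if pid == 0:
    privileged_child()
_, status = os.waitpid(pid, 0)
print(os.waitstatus_to_exitcode(status))  # 0: whole chain exited cleanly
```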
It seems that once again, running services on non-standard ports where possible might have paid off.
Off to patching. Or rather, wait for eventual patches.
But you know what interests me the most about the whole thing? Theo de Raadt's reaction. You know, after bashing OpenSSL heavily. And hinting at an SSH hole in FreeBSD, known to him but deliberately undisclosed.
Judging from the announcement's tested systems, it might be the exact hole he referred to. "...many more" sounds like they'd like to claim more successfully exploited platforms, but can't since they do something different, and the de facto reference implementation of OpenSSH on OpenBSD is suspiciously absent.
I'm curious how running services on non-standard ports has an impact on anything like this.
This specific threat appears to be a hoax, but even if it wasn't, an attacker just portscans you and then targets your SSH service. Moving the port doesn't even buy you a few minutes.
It doesn't. If anything, change the banner line SSH emits and how it identifies itself. You want the automated tools to send the wrong exploits. If you can make your OpenBSD machine look like a Windows server running a commercial SSH server, you're going to deflect a lot of attacks / attract the wrong kind of attacks.
The proper way to do something like this is generally threefold:
1) Trusted host (OpenBSD?) that runs only sshd, and only allows users to run ssh from the shell. Log the fuck out of this server, and likely port mirror and record raw packet logs. Since no one should be talking to this machine except over SSH, simply drop everything from an IP when you receive something other than SSH from it.
2) Ensure all other servers only accept SSH connections from the trusted host.
3) Restrict unconditional access to the trusted host to a list of trusted IPs, and turn off all unused SSH options; you probably don't need that many if all it is doing is proxying connections to other hosts. Now that you have a list of trusted IPs, you can be very aggressive in firewalling malicious connections from non-trusted IPs. (It's nice to have some way of getting back into your infrastructure when you're on vacation, etc.)
Generally the best thing for security is a default deny rule. Chances are your database server doesn't really need to connect to the internet. Chances are your webservers only require inbound connections on ports 80 and 443, and only make outbound connections to your database & log server.
Fuckery with port numbers is generally a waste of time. The best thing for an attacker to do is extensive slow scanning in advance, then when an exploit comes out it's a quick lookup in a DB for likely vulnerable systems.
It helps against a masscan on the standard port. If the attacker is specifically targeting you or has the resources to masscan every port, then I agree that it's useless.
It isn't clear whether that is on a specific port or every port. The documentation[1] makes it look like it's on a specific port, since it requires a target port to run. If it's on a specific port, you might want to multiply 45 minutes by 65535; if not, I'm impressed. Thanks for sharing, I had not heard about ZMap before.
EDIT: The research paper says that it's on a particular port, from page 3: "The architecture allows sending and receiving components to run asynchronously and enables a single source machine to comprehensively scan every host in the public IPv4 address space for a particular open TCP port in under 45 mins using a 1 Gbps Ethernet link."
> If it's on a specific port, you might want to multiply 45 minutes by 65535; if not, I'm impressed.
You're not? I am. Or would be if this wasn't known info yet.
Think about it for a bit: you can narrow down your search area considerably by excluding huge swaths of the Internet such as consumer ISPs. Focus on the target-rich environments like EC2 space, hosting providers, enterprises, etc. You also don't need to scan unassigned v4 space (admittedly getting smaller), multicast addresses, RFC1918, etc. The usable pool of v4 is actually quite a bit smaller than 32 bits, and the interesting parts are smaller still. I would be surprised if you couldn't come up with a list of interesting space that could be scanned within 15 minutes per port. You aren't using this tool to pwn Joe Nerd who runs a Linux NAT box off his cable modem.
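The back-of-the-envelope arithmetic, taking the ZMap paper's 45 minutes per port at face value; note the one-third "interesting space" fraction below is an assumed figure for illustration, not a measurement from the paper:

```python
full_sweep_min = 45   # ZMap: one TCP port, full public IPv4, 1 Gbps link
ports = 65535

# Naive all-ports sweep of the whole address space from one machine:
naive_days = full_sweep_min * ports / 60 / 24
print(round(naive_days))       # ~2048 days: clearly impractical alone

# Pruned target list: suppose "interesting" space (hosting providers,
# EC2, enterprises) is ~1/3 of what you'd otherwise sweep (assumed).
pruned_min_per_port = full_sweep_min / 3
print(pruned_min_per_port)     # 15.0 minutes per port, as estimated above
```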
From there, you just need a few hundred compromised servers (not difficult in this day and age) and you can probably scan the entirety of "rich" space on all ports rather quickly in a distributed manner.
The only downside is such scans tend to generate complaints, so you'll need to balance your loss of compromised hosts with the expected payoff.
Indeed, but if you consider that plenty of cheap servers with larger links exist, and attackers aren't limited to using one...
Essentially, you'd run this across a botnet so that rather than focusing all the traffic at one target, you retrieve the massive amount of data much faster than your single system could.
One of the members of the full disclosure mailing list outed these guys here: http://seclists.org/fulldisclosure/2014/Apr/292
Only this time - they got smart:
> Please note that we are busy and we will NOT answer to questions, social engineering tentatives or dumb comments.
AKA
> Please note that we will not in any way prove this is legit (not falling for that again!)