VMWare Fusion IPv6 NAT Black Holes (rachelbythebay.com)
130 points by l1n on March 28, 2016 | 48 comments



This is a bit of a shot in the dark, but my guess is that they're doing this because their stack can't properly handle ICMPv6 packets on the return path. For some reason the designers of ICMPv6 saw fit to include IP header information in the ICMP checksum, so if you're doing NAT or any other address rewrite you need to recompute the checksum for the ICMP packet, and if it's an error packet you need to do this for the inner packet as well.
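
To make the checksum point concrete, here's a minimal Python sketch of why rewriting an address forces a recompute: the ICMPv6 checksum (RFC 4443) covers a pseudo-header containing the source and destination IPv6 addresses, unlike ICMPv4. The addresses below are made up for illustration.

    import ipaddress
    import struct

    def icmpv6_checksum(src: str, dst: str, icmp_msg: bytes) -> int:
        """Internet checksum over the IPv6 pseudo-header plus the ICMPv6 message."""
        pseudo = (
            ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed
            + struct.pack("!I", len(icmp_msg))  # upper-layer packet length
            + b"\x00\x00\x00" + bytes([58])     # 3 zero bytes + next header (ICMPv6)
        )
        data = pseudo + icmp_msg
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # An echo request with a zeroed checksum field:
    msg = struct.pack("!BBHHH", 128, 0, 0, 0x1234, 1)
    print(hex(icmpv6_checksum("fd00::2", "2001:db8::1", msg)))       # inside address
    print(hex(icmpv6_checksum("2001:db8::99", "2001:db8::1", msg)))  # after rewrite

The two results differ, so a NAT that rewrites the source address must also fix up the checksum (and, for ICMPv6 errors, the checksum of the quoted inner packet).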

It seems plausible that their network stack wasn't up to that task, so they jury-rigged this odd connection forwarding instead.

That's the only thing I can think of, because otherwise there's just no planet where this makes any sense. That said, NAT for IPv6 is a generally problematic concept, and they were probably flying a bit blind on how to implement it, since there's no standard way to do so. IPv6 was really designed around the idea that every endpoint would have a unique, globally routable address.


Did they lay off the whole team, to the point where they can't even push updates? Their second-to-last blog post is about a hotfix that, months later, still hasn't shipped as a proper 8.1.1 patch release; you have to download a random file from their blog and manually patch it in via the terminal...?!

http://blogs.vmware.com/teamfusion/2016/01/workaround-of-nat...


Presumably yes; people who worked there described it as: "VMware has decided to lay off the entire Fusion and Workstation teams."

https://twitter.com/jdotk/status/691771635771244545

They haven't released any patches since... :(


If you're on OS X, Veertu uses the built-in OS X hypervisor and, seemingly as a result, has way less interactive latency than Fusion. It's also significantly cheaper.


My problem with Veertu is the same as with Hyper-V: it's not cross-platform. With all its warts, VMware offered the best turnkey solution for small teams occasionally sharing VMs in offline contexts.


Jason has actually been rehired [0]. You're correct that there have been no patches since, but patches were never that fast anyway.

[0] https://twitter.com/jdotk/status/709520637396656128


But they had the gall to run promotions on Twitter...


Yeah, and since I had just "switched" back to Mac and had a shiny, new MBP, I jumped and bought it. I don't think I had even used it yet when they announced the layoffs. :/


From 2016-01-27: "VMware Fusion, Workstation team culled in company restructure"

http://arstechnica.com/information-technology/2016/01/vmware...

Personally I suspect that they're offshoring these products rather than killing them completely.


I don't think so. I left VMware in late 2015. The news came as a bit of a shock to me. Everyone I've talked to about the layoffs was completely surprised by the fate of Fusion and Workstation. Every single person on those teams was let go. They were fairly small teams.

Also, it's worth noting that Workstation and Fusion will be EOL by March 2017.

On a related note: it annoys me, in articles about VMware, when they mention that EMC owns 80% of VMware. Yes, that's true, BUT... EMC owns 97% of the voting stock. Once Dell finishes the acquisition, you might as well just call VMware a private company.


Anything EMC touches eventually turns to shit.


"Also, it's worth noting that Workstation and Fusion will be EOL by March, 2017"

And yet they are still sending me Fusion special-offer emails. It feels dishonest if they really intend to EOL it in a year.


If they are EOLing them, I wonder if an investment group could buy those products out from under VMware and rehire the original dev team. Anyone from the original team know how much tech was shared with ESXi/vSphere?


What's a good alternative to VMware Fusion?


There are really only two other choices: Parallels or VirtualBox.


The problem with Parallels is that they've changed their pricing to a yearly subscription (I like to own the software I pay for, and only buy a newer version when I need it), and, unlike VMware, they don't donate to open source projects like FreeBSD (https://www.freebsdfoundation.org/donors/).


I hate Parallels' business model. I bought Parallels 10 in a box with a free upgrade to 11. It sat in the box for about a month after I got it, and to my horror I found out the offer had expired literally one day before I tried to redeem it. I contacted their support, but they flatly refused to give me the free upgrade. I bought Fusion, and I'll never give Parallels another dime, this I vow.


From a technical perspective, I actually preferred Parallels over Fusion, but the constant advertising inside the product really annoyed the hell out of me, especially considering I paid full price for it out of my own pocket.


Veertu looks promising (http://veertu.com/)


Know anyone who's tried it?

I'd like an alternative to VMWare Fusion but don't want VirtualBox.

The other challenge is that Fusion lets me transfer VMs to my home ESXi box pretty easily, so I'd lose that too by switching.


Yep, me. Fucking love Veertu, to the point where it looks like I'm spamming threads!

Veertu is way less featureful than Fusion, but the low latency means it's INSANELY fast. Windows 10 installs in under 5 minutes, but more to the point, it just doesn't suck from lag the way Fusion did. I actually test on Edge now.


Offshoring indeed, and not exactly in the best way, but that is what happened. They now have a new UI team twice the size of the old one.

For more info, see update 3 on this post: http://planetvm.net/blog/?p=2952


The Palo Alto-based dev team was laid off and a new team in Beijing will continue to develop the Workstation and Fusion products. Given that a workaround for the NAT issue was published on the blog, I would expect the next maintenance release of Fusion to fix the issue.

Source: I was on the Workstation team when the layoff happened.


It's not just NAT that's broken. On both Windows and Linux hosts, with bridged networking, SLAAC doesn't work for Linux or FreeBSD guest systems. It does eventually, after somewhere between 5 and 30 minutes, but for machines on the physical LAN it's virtually instantaneous. Something is dropping the router advertisements, but eventually one gets through. Once the guest has an address, it then works just fine.

Not so great when all the systems you want to talk to are v6 only, and the v4 NAT address is just for legacy use.
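
If you want to see whether RAs are reaching the guest at all, here's a rough Python sketch (my own, nothing to do with VMware's tooling) that listens on a raw ICMPv6 socket and prints when a Router Advertisement arrives. It assumes Linux raw-socket semantics (the kernel strips the IPv6 header, so byte 0 is the ICMPv6 type) and needs root.

    import socket
    import time

    ICMPV6_RA = 134  # Router Advertisement message type

    sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_ICMPV6)
    start = time.time()
    while True:
        data, addr = sock.recvfrom(2048)
        if data and data[0] == ICMPV6_RA:
            print(f"RA from {addr[0]} after {time.time() - start:.1f}s")

On a healthy LAN segment you should see one within seconds of a router solicitation; in the buggy bridged setup described above, presumably nothing shows up for minutes.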


I totally understand that the observed behavior may not be what was intended, but there's clearly some complexity of the sort that doesn't happen by accident. What was VMware trying to do, and which parts of this mess were unintentional? Is this an experimental feature that was correctly disabled for IPv4 but accidentally left on for IPv6, or was it intended to be released and on for both?


> What was VMWare trying to do, and which parts of this mess were unintentional?

It appears they were trying to build an ad-hoc, connection-forwarding faux-NAT that carries the guest's IPv6 connections over the host's stack, to approximately mirror the NAT they do for IPv4.

IPv6 really doesn't want to be NATed, though, and they've done a poor approximation of it.
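
To illustrate what "connection forwarding" means here, a toy sketch: instead of rewriting packets, the host terminates the guest's TCP connection and opens a brand-new one from its own stack. This is purely illustrative, not VMware's actual code, and the upstream address is made up.

    import asyncio

    UPSTREAM = ("2001:db8::1", 80)  # hypothetical destination

    async def pump(reader, writer):
        # Copy bytes in one direction until EOF, then close the far side.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(guest_reader, guest_writer):
        # The outside world sees a fresh connection from the host's own
        # stack, not the guest's packets -- which is why anything below
        # TCP (traceroute, PMTU discovery, ...) behaves oddly through
        # this kind of "NAT".
        up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)
        await asyncio.gather(pump(guest_reader, up_writer),
                             pump(up_reader, guest_writer))

    async def main():
        server = await asyncio.start_server(handle, "::1", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

Black holes for anything that isn't a clean TCP/UDP flow would fall straight out of a design like this.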


Why implement NAT for IPv6 at all?


I wish I didn't have to use it. Bridging the Linux VM means it escapes from the Mac-level VPN, which I need to get actual work done. I'd have to tunnel in from both systems, and would probably set off some alarms for being in two places at once. Ugh.

And yeah, as mentioned elsewhere, bridging onto a wireless situation is even worse.


People are going to have to wrap their heads around the brave new v6 world.

I get "logged in from a new IP address!" alerts with a service I use almost every time I log in, even though my IPv6 prefix hasn't changed. Deciding it's a completely new IP just because something changed in the last 64 bits is probably a bad idea in a v6 world.

Devices being multi-homed is intended to be standard practice.

My iPhone regularly has multiple IPv6 addresses, with different reachability characteristics for different addresses. There's an address used by my carrier for voice, there are addresses I locally administer, there are addresses from a stable prefix I use, addresses from dynamic prefixes provided by my upstreams, ...

The v6 world is a world where many devices have many addresses and addresses do not all have the same scope.

Application developers are going to need to get used to this new normal.

An application I manage has ~40% of users accessing it over IPv6, most of whom would have a degraded experience if we didn't offer v6 connectivity.

IPv6 is here, it's here to stay, and applications are going to need to understand the new world they live in.


Yes. And it's not just multiple IPv6 addresses. I don't know iOS, but Windows, OS X, and Debian all use only "privacy" (RFC 4941) IPv6 addresses with remote devices, and those change frequently. That gets to be a pain when you're pushing static routes. NAT was so easy.


I'm not sure I understand what your use case is for pushing static routes and would be interested to understand what you mean.

I've been trying a few different approaches to routing. Putting link-local addresses in routing tables has worked well in some deployments.

Debian & OS X use MAC based addresses in addition to their privacy addresses. https://www.danieldent.com/blog/remote-ipv6-device-fingerpri...


I'm pushing static routes over an OpenVPN tap to get IPv6 assigned to remote LANs. In my (very limited) experience, the MAC-based IPv6 addresses don't reach the Internet (per http://test-ipv6.com, for example). However, they do get revealed via WebRTC in Firefox (default install). IE and Safari block WebRTC by default.


My testing has shown they are accessible over the internet :(

They are not marked as 'preferred' and won't be used by default. But they are still available for use if someone goes out of their way to do so.


Thanks. I was going by http://test-ipv6.com/. Unless the "privacy" address is routed, it reports no IPv6 connectivity. I'm guessing that ping6 would find them, right?


You can, in many situations, do the v6 version of proxy ARP: ND proxy. See e.g. http://wiki.stocksy.co.uk/wiki/IPv6%2BXen_on_a_Hetzner_serve...

(No idea if it works with VMware, which I regard as devilspawn)


Possibly because the other popular approach -- bridging the VM's emulated Ethernet card to one of the host's network adapters -- doesn't always work or isn't always supported. (Last I checked, bridging was unsupported by my wireless driver, though this was several years ago.)


It's IPv6, they could just route it.


The expectation for IPv6 routing is that you have more than a /64 assigned, but many setups just share a single /64 between all devices. You'd need to allocate yourself, say, a /60 so you can route a /64 per VM.
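
Carving the /64s out is trivial once you have the space; a quick Python sketch, with a made-up prefix:

    import ipaddress

    site = ipaddress.IPv6Network("2001:db8:0:10::/60")
    vm_nets = list(site.subnets(new_prefix=64))  # 16 available /64s
    for vm, net in zip(["vm1", "vm2", "vm3"], vm_nets):
        print(vm, net)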


You can do ND proxy then.


Yeah, I thought NAT did not exist with IPv6: https://youtu.be/v26BAlfWBm8


NAT can actually still be pretty useful in IPv6. But in IPv6 it's also realistic to do prefix translation, so that while there is NAT, it doesn't foreclose on the possibility of end-to-end communication.

Multihoming, site renumbering, and a few other issues are areas where work still needs to be done. Prefix translation is a current workable answer to some of those problems.
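
For a rough picture of what prefix translation does, here's a simplified Python sketch: swap the routing prefix while keeping the interface ID, so hosts remain individually addressable end to end. Real NPTv6 (RFC 6296) also adjusts bits to stay checksum-neutral, which this ignores, and the prefixes are made up.

    import ipaddress

    def translate_prefix(addr: str, inside: str, outside: str) -> ipaddress.IPv6Address:
        inside_net = ipaddress.IPv6Network(inside)
        outside_net = ipaddress.IPv6Network(outside)
        host_bits = 128 - inside_net.prefixlen
        iid = int(ipaddress.IPv6Address(addr)) & ((1 << host_bits) - 1)
        return ipaddress.IPv6Address(int(outside_net.network_address) | iid)

    print(translate_prefix("fd00:1::abcd", "fd00:1::/64", "2001:db8:5::/64"))
    # 2001:db8:5::abcd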


Learning there was such a thing as IPv6 NAT wasn't quite as depressing as learning about a forgotten historical genocide, but close.


Because of wifi?


I thought this was posted not that long ago.

But in any case, I was wondering whether this had anything to do with Happy Eyeballs, but never heard any further input.

EDIT: Upon rereading, this is the follow-up post.


Yeah, it's very unlikely this will be resolved, given that the team that developed Fusion was retrenched.

VMware is no longer, in my view, a particularly innovative company.


This sort of thing isn't all that uncommon; enterprise "network optimiser" devices like http://www.riverbed.com/ work this way too. Hopefully they're not as buggy, though.


I can definitely say that, for IPv4, VMware Fusion's NAT does not forward inbound ICMP path-MTU messages. For IPv4, VMware Fusion hosts are a black hole.


Yep, I was hit by the same thing: downloading the Homebrew installation script from GitHub to an OS X guest hangs. Once I decreased the MTU from 1500 to ~1450, it worked better.
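
For what it's worth, there's also a per-application version of that workaround: clamp the TCP MSS on the socket so the peer never sends segments bigger than the broken path can carry. A hedged Python sketch; TCP_MAXSEG is a Linux/BSD socket option, and 1400 is just a guess that sits safely below the failing size, like the ~1450 MTU above.

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Advertise a small MSS in our SYN so the server sends small segments,
    # sidestepping the need for the ICMP "fragmentation needed" messages
    # that the Fusion NAT drops.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1400)
    s.connect(("example.com", 80))
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(1024)[:80])
    s.close()

Dropping the interface MTU as above is the simpler fix, since it covers every application at once.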



