This is a bit of a shot in the dark, but my guess here is that they're doing this because their stack can't properly deal with ICMPv6 packets on the return path. In ICMPv6, for some reason, the designers saw fit to include IP header information (a pseudo-header containing the source and destination addresses) in the ICMP checksum, so if you're doing NAT or any other address rewrite you need to recompute the checksum for the ICMP packet, and if it's an error packet you need to do this for the embedded inner packet as well (sketch below).
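To make that concrete, here's a minimal sketch of the RFC 4443 checksum in Python. The addresses are documentation examples, not anyone's real traffic:

    # Minimal sketch of the ICMPv6 checksum (RFC 4443): unlike ICMPv4, the
    # checksum covers a pseudo-header containing the source and destination
    # IPv6 addresses, so a NAT that rewrites either address must recompute it.
    import socket
    import struct

    def ones_complement_sum(data: bytes) -> int:
        """16-bit one's-complement sum used by the Internet checksum."""
        if len(data) % 2:
            data += b"\x00"  # pad to an even length
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return total

    def icmpv6_checksum(src: str, dst: str, icmp_message: bytes) -> int:
        # Pseudo-header: src addr, dst addr, upper-layer length,
        # 3 zero bytes, next header (58 = ICMPv6).
        pseudo = (
            socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst)
            + struct.pack("!I3xB", len(icmp_message), 58)
        )
        return ~ones_complement_sum(pseudo + icmp_message) & 0xFFFF

    # Rewriting only the source address changes the checksum, even though
    # the ICMPv6 message body is byte-for-byte identical:
    echo = struct.pack("!BBHHH", 128, 0, 0, 0x1234, 1)  # Echo Request, csum=0
    print(hex(icmpv6_checksum("2001:db8::1", "2001:db8::2", echo)))
    print(hex(icmpv6_checksum("2001:db8::99", "2001:db8::2", echo)))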
It seems plausible that their network stack wasn't up to the task of handling this, so they jury-rigged this odd connection forwarding instead.
That's the only thing I can think of here, as otherwise there's just no planet where this makes any sense. That said, NAT for IPv6 is a generally problematic concept, and they were probably flying a bit blind on how to implement it since there's no real standard way to do so. IPv6 was really designed around the idea that every endpoint would have a unique, globally routable address.
Did they lay off the whole team to the point where they can't even push updates? Their second-to-last blog post is about a hotfix that, several months later, still hasn't shipped as a proper 8.1.1 patch release; you have to download a random file from their blog and manually patch it in via the terminal...?!
If you're on OS X, Veertu uses the built-in OS X hypervisor (Hypervisor.framework) and, seemingly as a result, has way less interactive latency than Fusion did. It's also significantly cheaper.
My problem with Veertu is the same as with Hyper-V: it's not cross-platform. With all its warts, VMware offered the best turnkey solution for small teams occasionally sharing VMs in offline contexts.
Yeah, and since I had just "switched" back to Mac and had a shiny, new MBP, I jumped and bought it. I don't think I had even used it yet when they announced the layoffs. :/
I don't think so. I left VMware in late 2015. The news came as a bit of a shock to me. Everyone I've talked to about the layoffs was/is completely surprised by the fate of Fusion and Workstation. Every single person on those teams was let go. They were fairly small teams.
Also, it's worth noting that Workstation and Fusion will be EOL by March 2017.
On a related note: it annoys me when articles about VMware mention that EMC owns 80% of VMware. Yes, that's true, but EMC owns 97% of the voting stock. Once Dell finishes the acquisition, you might as well just call VMware a private company.
If they are EOLing them, I wonder if an investment group could buy those products out from under VMware and rehire the original dev team. Anyone from the original team know how much tech was shared with ESXi/vSphere?
The problem with Parallels is that they've changed their pricing to a yearly subscription (and I like to own the software I pay for, and only buy a newer version when I need it) and, unlike VMware, they don't donate to open source projects like FreeBSD (https://www.freebsdfoundation.org/donors/).
I hate Parallels' business model. I bought Parallels version 10 in a box with a free upgrade to 11. It sat in the box for about a month after I got it, and to my horror I found out the offer had expired literally one day before I tried to redeem it. I contacted their support but they flatly refused to give me the free upgrade. I bought Fusion and I'll never give Parallels another dime, this I vow.
From a technical perspective, I actually preferred Parallels over Fusion but their constant advertising inside of Parallels really annoyed the hell out of me, especially considering I paid full price out of my own pocket for it.
Yep, me. Fucking love Veertu, to the point where it looks like I'm spamming threads!
Veertu is way less featureful than Fusion, but the low latency makes it INSANELY fast. Windows 10 installs in under 5 minutes, but more to the point it just doesn't suck from lag the way Fusion did. I actually test on Edge now.
The Palo Alto-based dev team was laid off and a new team in Beijing will continue to develop the Workstation and Fusion products. Given that a workaround for the NAT issue was published on the blog, I would expect the next maintenance release of Fusion to fix the issue.
Source: I was on the Workstation team when the layoff happened.
It's not just NAT that's broken. On both Windows and Linux hosts, with bridged networking, SLAAC doesn't work for Linux or FreeBSD guest systems. It does work eventually, after somewhere between 5 and 30 minutes, but for machines on the physical LAN it's virtually instantaneous. Something is dropping the router advertisements, but eventually one gets through. Once the guest has an address, everything works just fine.
Not so great when all the systems you want to talk to are v6 only, and the v4 NAT address is just for legacy use.
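If you want to watch for the advertisements yourself, here's a rough diagnostic sketch (it needs root; on Linux, a raw ICMPv6 socket hands you the message without the IPv6 header):

    # Rough diagnostic sketch: listen for ICMPv6 Router Advertisements
    # (type 134) inside the guest and time how long SLAAC actually takes.
    # Requires root for the raw socket.
    import socket
    import time

    sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW,
                         socket.IPPROTO_ICMPV6)

    start = time.time()
    while True:
        data, addr = sock.recvfrom(4096)
        if data and data[0] == 134:  # ICMPv6 type 134 = Router Advertisement
            print("RA from %s after %.1f s" % (addr[0], time.time() - start))
            break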
I totally understand that the observed behavior may not be what was intended, but there's clearly some complexity of the sort that doesn't happen by accident. What was VMware trying to do, and which parts of this mess were unintentional? Is this an experimental feature that was correctly disabled for IPv4 but accidentally left on for IPv6, or was it intended to be released and on for both?
> What was VMware trying to do, and which parts of this mess were unintentional?
It appears that they were trying to build an ad-hoc connection-forwarding faux-NAT, terminating the guest's IPv6 connections and re-originating them from the host's address, to approximately mirror the NAT they do for IPv4.
IPv6 really doesn't want to be NATed, though, and they've done a poor approximation of it.
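The shape of the idea, if I'm guessing right, is a plain user-space TCP forwarder. A toy sketch of that approach (the listen and target addresses are made-up examples, and this is emphatically not VMware's code):

    # Toy sketch of connection-forwarding "faux NAT": accept a TCP connection
    # on one side and open a brand-new connection from the host's own address
    # on the other. No checksum rewriting is needed because no packet is ever
    # rewritten; the trade-off is the endpoints never see each other directly.
    import socket
    import threading

    LISTEN = ("::1", 8080)         # where the guest's traffic arrives (assumed)
    TARGET = ("2001:db8::50", 80)  # real destination (example address)

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until EOF, then signal EOF downstream."""
        while True:
            chunk = src.recv(65536)
            if not chunk:
                break
            dst.sendall(chunk)
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

    server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    server.bind(LISTEN)
    server.listen(5)
    while True:
        inbound, _ = server.accept()
        # Fresh outbound connection, originated from the host's own address:
        outbound = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(inbound, outbound), daemon=True).start()
        threading.Thread(target=pump, args=(outbound, inbound), daemon=True).start()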
I wish I didn't have to use it. Bridging the Linux VM means it escapes from the Mac-level VPN which I need to get actual work done. I'd have to tunnel in from both systems and would probably set off some alarms for being in two places at once. Ugh.
And yeah, as mentioned elsewhere, bridging onto a wireless situation is even worse.
People are going to have to wrap their heads around the brave new v6 world.
I get "logged in from a new IP address!" alerts with a service I use almost every time I log in, even though my IPv6 prefix hasn't changed. Deciding it's a completely new IP just because something changed in the last 64 bits is probably a bad idea in a v6 world.
Devices being multi-homed is intended to be standard practice.
My iPhone regularly has multiple IPv6 addresses, with different reachability characteristics for different addresses. There's an address used by my carrier for voice, there are addresses I locally administer, there are addresses from a stable prefix I use, addresses from dynamic prefixes provided by my upstreams, ...
The v6 world is a world where many devices have many addresses and addresses do not all have the same scope.
Application developers are going to need to get used to this new normal.
An application I manage has ~40% of users accessing it over IPv6, most of whom would have a degraded experience if we didn't offer v6 connectivity.
IPv6 is here, it's here to stay, and applications are going to need to understand the new world they live in.
Yes. And it's not just multiple IPv6 addresses. I don't know iOS, but Windows, OS X, and Debian all default to "privacy" (RFC 4941) IPv6 addresses when talking to remote devices. And those addresses change frequently. That gets to be a pain when you're pushing static routes. NAT was so easy.
I'm pushing static routes over OpenVPN tap to get IPv6 assigned to remote LANs. In my (very limited) experience, the MAC-based (EUI-64) IPv6 addresses don't reach the Internet (http://test-ipv6.com, for example). However, they do get revealed via WebRTC in Firefox (default install). IE and Safari block WebRTC by default.
Thanks. I was going from http://test-ipv6.com/. Unless the "privacy" address is routed, it reports no IPv6 connectivity. I'm guessing that ping6 should find them, right?
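For what it's worth, you can usually tell the two kinds of addresses apart: EUI-64 (MAC-derived) interface identifiers embed ff:fe in the middle of the last 64 bits, while RFC 4941 temporary addresses are random. A quick heuristic (not a proof; some stacks instead use stable opaque IDs per RFC 7217):

    # Sketch: spot EUI-64 (MAC-derived) interface identifiers, which embed
    # ff:fe in bytes 11-12 of the packed address. Privacy addresses are
    # random, so this marker is (very likely) absent.
    import ipaddress

    def looks_like_eui64(addr: str) -> bool:
        packed = ipaddress.IPv6Address(addr).packed
        return packed[11:13] == b"\xff\xfe"

    print(looks_like_eui64("2001:db8::211:22ff:fe33:4455"))   # True: MAC-derived
    print(looks_like_eui64("2001:db8::1cf3:9a2b:77d0:1e2f"))  # False: likely temporary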
Possibly because the other popular approach -- bridging the VM's emulated Ethernet card to one of the host's network adapters -- doesn't always work or isn't always supported. (Last I checked, bridging was unsupported by my wireless driver, though this was several years ago.)
The expectation for IPv6 routing is that you have more than a /64 assigned, but many setups just share a single /64 between all devices. You'd want to allocate yourself something like a /60 so you can route a /64 per VM, as in the sketch below.
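Carving per-VM /64s out of a delegated /60 is mechanical; a sketch with Python's ipaddress module (the prefix is a documentation example):

    # Sketch: carve sixteen per-VM /64s out of a delegated /60.
    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:0:10::/60")  # assumed delegation
    vm_subnets = list(delegated.subnets(new_prefix=64))     # sixteen /64s

    for i, net in enumerate(vm_subnets[:4]):
        print(f"vm{i}: {net}")
    # vm0: 2001:db8:0:10::/64
    # vm1: 2001:db8:0:11::/64
    # ...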
NAT can actually still be pretty useful in IPv6. But in IPv6 it's also realistic to do prefix translation, so that while there is NAT, it doesn't foreclose on the possibility of end-to-end communication.
Multihoming, site renumbering, and a few other issues are areas where work still needs to be done. Prefix translation is a current workable answer to some of those problems.
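The core of prefix translation (NPTv6, RFC 6296) is just swapping the routing prefix while preserving the interface identifier; the real thing is also checksum-neutral, which this sketch skips. Prefixes here are made-up example values:

    # Sketch of prefix translation: replace the routing prefix, keep the
    # interface identifier, so end-to-end addressing survives in a way that
    # port-overloaded NAT44 never could. (Real NPTv6 per RFC 6296 also
    # adjusts bits so transport checksums stay valid; omitted here.)
    import ipaddress

    def translate_prefix(addr: str, inside: str, outside: str) -> ipaddress.IPv6Address:
        inside_net = ipaddress.ip_network(inside)
        outside_net = ipaddress.ip_network(outside)
        host_bits = 128 - inside_net.prefixlen
        suffix = int(ipaddress.IPv6Address(addr)) & ((1 << host_bits) - 1)
        return ipaddress.IPv6Address(int(outside_net.network_address) | suffix)

    # Internal ULA prefix mapped onto a provider prefix (example values):
    print(translate_prefix("fd00:aaaa:bbbb:1::42",
                           "fd00:aaaa:bbbb::/48",
                           "2001:db8:5555::/48"))
    # -> 2001:db8:5555:1::42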
This sort of thing isn't all that uncommon - enterprise "network optimiser" devices like http://www.riverbed.com/ work in this way too. Hopefully not buggy, though.
Yep, I was hit by the same thing: downloading the Homebrew installation script from GitHub to an OS X guest would hang; once I decreased the MTU from 1500 to ~1450, it worked better.
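If anyone wants to find the working MTU rather than guessing, here's a rough sketch that binary-searches it (assumes Linux iputils ping and its -6, -M do, and -s flags; macOS ping6 spells these differently):

    # Rough sketch: binary-search the largest packet that gets through
    # unfragmented. The -s value is the ICMPv6 payload, so the on-wire size
    # is payload + 8 (ICMPv6 header) + 40 (IPv6 header).
    import subprocess

    def ping_ok(host: str, payload: int) -> bool:
        result = subprocess.run(
            ["ping", "-6", "-c", "1", "-W", "1", "-M", "do",
             "-s", str(payload), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def probe_mtu(host: str, lo: int = 1280, hi: int = 1500) -> int:
        while lo < hi:                   # binary search on total packet size
            mid = (lo + hi + 1) // 2
            if ping_ok(host, mid - 48):  # payload = MTU - 40 - 8
                lo = mid
            else:
                hi = mid - 1
        return lo

    print(probe_mtu("github.com"))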