Google Container Engine VMs should be protected from this by the virtual NIC advertised to them. In particular, it advertises support for TCP checksum verification offload, which all modern Linux guests negotiate (including the kernel used in GKE). If this feature is negotiated, the host-side network (either in hardware or software) verifies the TCP checksum on the packet on behalf of the guest and marks it as having been validated before delivering it to the guest.
Older Linux kernels have an additional (I believe distinct) veth-related bug that requires us to do some extra work for externally verified packets (and jumbograms): in particular, we must set up the packet we are delivering on the assumption that it might be routed beyond the guest and that the guest will not remove/reapply the virtio-net header as an intermediate step (this is a pretty leaky abstraction of the Linux virtio-net driver, but one we're aware of and have worked to accommodate).
Of course, none of the above changes the somewhat fragile nature of TCP checksums, generally.
(note: I wrote the virtio-net NIC we advertise in GCE/GKE, although very little of the underlying dataplane, but I double-checked with the GKE team about the underlying kernel versions that we typically run).
Thanks for the details! This is basically what Vijay and I were guessing from the fact that the MTU is something less than 1500 on Google Compute Engine.
The demonstration that Vijay added to the post was done on Google Container Engine, using Kubernetes. The corrupted packets were injected using netem. We tested a few configurations and were unable to get corrupt packets delivered to a Google Container Engine instance, so I agree with your assessment. Most importantly, it appears that the Container Engine TCP load balancer drops corrupt packets from the public Internet.
However: if someone is using some weird routing or VPN configuration, it might be possible (but this seems unlikely). Notably, I seem to recall that if you send corrupt packets to a Compute Engine instance, they are received corrupted (through the Compute Engine NAT). So if you used your own custom routing to get packets to a Google Container Engine application, this might apply. But again, you would have to really try to make this happen :)
Update: actually, under rare circumstances we'll validate the checksum on the host side, realize it's wrong, and pass it to the guest as CHECKSUM_NEEDED; the guest will then happily fail to checksum it and mark it as already validated. So GCE/GKE is unfortunately not completely immune to this :(
Practically speaking, any traffic from the internet would not be affected, as we require a valid checksum before doing the internet->guest NAT (see the GCE docs for what I mean here if this is unclear), but inter-VM traffic can potentially be impacted. On the other hand, inter-VM traffic has other verification applied to it that's stronger than a TCP checksum. Basically: if you try to trigger this by sending bad packets you might succeed (although even there it's not 100% guaranteed, but the details of what will and won't trigger it delve into implementation arcana that I'm not comfortable sharing).