It doesn't do anything to prove LaMDA (or a monkey, or a rock, or anything) sentient, but it does point to a real failure mode: sentient entities might fail to recognise sentience in radically different entities.
I think this is true: sentience is hard to recognise (to the extent that "sentience" has any tangible meaning other than "things which think like us").
But I think with LaMDA certain engineers are close to the opposite failure mode: placing all the weight on the use of words being a familiar thing perceived as intrinsically human, and none of it on the respective reasons why humans and neural networks emit sentences. Less like failing to recognise civilization on another planet because it's completely alien, and more like seeing civilization on another planet because we're so familiar with the idea that straight lines = canals...