Not sure if this is still the case, but if you did this a couple of times, your account data would be permanently migrated to an instance with more CPU and RAM allocated. You'd also be in with all the other badly behaved accounts, so reliability goes down a lot. The benefit was much faster complex searches, and being able to bulk label or delete emails without it taking minutes or hours.
Don't believe how slow it is on a regular instance? Try going to "All mail", selecting all of your emails, and applying a label to them all. In my experience, it can only label about 50 mails per second, so it can take hours to do them all. It will keep going if you quit the browser, but will stop if the Gmail devs do a software update, which they seem to do usually on Tuesdays, but never on Fridays or over the weekend.
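Back-of-envelope, with a made-up mailbox size (the ~50/sec rate is just what I observed):

    mails = 500_000   # hypothetical mailbox size
    rate = 50         # rough labels-per-second I've seen applied
    print(f"{mails / rate / 3600:.1f} hours")   # about 2.8 hours at these numbers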
I find it hard to believe that Gmail will always serve certain users from the same machines, especially in this day and age, with “cattle, not pets” and ephemeral containers.
I’m sure they have machines that are only used to serve G Suite and Google One customers, and maybe some other VIPs, but regular heavy users? It sounds like an urban legend to me.
Gmail accounts will sometimes be automatically "hospitalized" (assigned more than the usual amount of resources because for some reason they are chronically behind or growing without bound) or "jailed" (moved into isolation along with other bozo accounts, to keep them from disturbing normal users' accounts). Not a legend.
The data has to be sharded somehow, though. You might not be hitting the same exact machine, especially for the frontend, but your data isn't just magically everywhere in "the cloud".
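A toy sketch of what "sharded somehow" could look like; the shard count and hash are entirely made up, the point is just that a stable mapping from account to shard doesn't require any particular machine to be special:

    import hashlib

    NUM_SHARDS = 1024  # hypothetical number of storage shards

    def shard_for(account_id: str) -> int:
        # Stable hash of the account ID -> shard index. Any frontend can
        # compute this, so the same account always maps to the same data
        # shard, without any individual machine being a "pet".
        digest = hashlib.sha256(account_id.encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_SHARDS

    print(shard_for("alice@example.com"))  # some index in [0, 1024)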
Not the same machines, but different groups of machines (pools, shards, farms, whatever). They'll certainly be able to move you around to balance the pools, to decommission pools, or to put you in a pool whose primary data is closer to where you usually access from, etc. Grouping by behavior makes sense too: separating heavy and light users means you can serve a lot more light users from one pool, and the heavy users won't impact their service.
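If they want to move accounts around like that, it's probably closer to a directory lookup than a pure hash; a toy sketch (pool names and the "heavy user" threshold are invented):

    # Toy directory-based placement: a mutable map from account -> pool,
    # so accounts can be rebalanced or regrouped by behavior.
    placement = {}  # account_id -> pool name

    def assign(account_id: str, daily_ops: int) -> str:
        # Invented threshold: heavy users go to their own pools so they
        # don't degrade service for the many light users sharing a pool.
        pool = "heavy-pool-1" if daily_ops > 10_000 else "light-pool-7"
        placement[account_id] = pool
        return pool

    def move(account_id: str, new_pool: str) -> None:
        # Rebalancing or decommissioning a pool is "just" updating the
        # directory and migrating the data behind the scenes.
        placement[account_id] = new_pool

    assign("alice@example.com", daily_ops=25_000)   # -> "heavy-pool-1"
    move("alice@example.com", "heavy-pool-2")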
That's not what "cattle vs. pets" is about, as I understand it. The servers are identical, i.e. cattle. This is just a case of sticky sessions, a common pattern to help latency and keep resource usage down.
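A rough sketch of the pattern (backend names are invented): the servers are interchangeable, and the affinity mapping is the only "sticky" part:

    import random

    BACKENDS = ["be-01", "be-02", "be-03"]   # identical, replaceable "cattle"
    affinity = {}                            # session_id -> backend (the sticky part)

    def route(session_id: str) -> str:
        # First request picks any backend; later requests stick to it so
        # per-user caches stay warm. If that backend goes away, any other
        # one can take over: the servers themselves aren't special.
        if session_id not in affinity or affinity[session_id] not in BACKENDS:
            affinity[session_id] = random.choice(BACKENDS)
        return affinity[session_id]

    print(route("session-abc"))
    print(route("session-abc"))  # same backend as the first call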