I guarantee you there's automation around training this model. There's also the factor of the dataset itself.
And it doesn't matter much if it's perfectly deterministic. Source builds of traditional software aren't typically fully reproducible either; that doesn't change the argument.
And I'd give you better than coin-flip odds that it actually is deterministic. The engineers at the big ML shops I've talked with have been doing deterministic training for quite some time; they consider it key to training at scale. That's what lets you answer "did this model go way off the deep end because of something we did to the model, or because a training GPU is on the fritz?"
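For concreteness, here's a minimal sketch of the kind of determinism setup that makes that question answerable, assuming a PyTorch stack; the actual configuration those shops use isn't described in this thread, and the function name is just illustrative.

```python
import os
import random

import numpy as np
import torch


def make_deterministic(seed: int = 0) -> None:
    """Pin every RNG and force deterministic kernels so repeat runs match bit-for-bit."""
    # Seed every source of randomness that feeds training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

    # Require deterministic implementations; ops that lack one raise an
    # error instead of silently introducing run-to-run drift.
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False

    # Needed for deterministic cuBLAS matmuls on CUDA >= 10.2.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"


make_deterministic(seed=42)
# With identical code, data order, and seeds, two runs should produce
# identical loss curves -- so a divergence points at flaky hardware
# rather than a change in the model.
```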