I'd say that Google did the numerical analysis of the format, then proved the memory-bandwidth improvement and its behavior on large production machine learning models with their TPUs. So they de-risked the numerical format. That, plus how simple it is to implement if you already support IEEE 754 single precision (just fewer bits for the significand), and the lower overhead of converting to and from floats (relative to fp16), makes this format a no-brainer for Intel.
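As a rough sketch of why the conversion is so cheap (my own illustration, not Intel's or Google's code): bfloat16 has the same sign and exponent layout as fp32, so converting is basically keeping or dropping the top 16 bits, whereas fp16 needs the exponent re-biased and the significand re-rounded. Real hardware typically does round-to-nearest-even, which is still just an integer add and shift:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Sketch: fp32 -> bfloat16 by keeping the top 16 bits, with
   round-to-nearest-even. Ignores NaN corner cases for brevity. */
static uint16_t float_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding = 0x7FFF + ((bits >> 16) & 1);  /* tie to even */
    return (uint16_t)((bits + rounding) >> 16);
}

/* Going back is just a shift: pad the significand with zeros. */
static float bf16_to_float(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof bits);
    return f;
}

int main(void) {
    float x = 3.14159f;
    uint16_t h = float_to_bf16(x);
    printf("%f -> 0x%04X -> %f\n", x, h, bf16_to_float(h));
    return 0;
}
```

Compare that to fp16, where the exponent field is a different width with a different bias, so a software fallback has to handle overflow to infinity, underflow to subnormals, and re-rounding of the significand on every conversion.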