
Something about Google being able to influence features in consumer grade CPUs rubs me the wrong way.



I'd say that Google did the numerical analysis of the format, then proved the memory-bandwidth improvement and the behavior on large production machine learning models with their TPUs. So they derisked the numerical format. That, plus how simple it is to implement given you already support IEEE 754 single precision (just fewer bits for the significand), and the lower overhead of converting to and from floats (relative to fp16), makes this format a no-brainer for Intel.
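To illustrate how cheap the conversion is: a bfloat16 is just the top 16 bits of an IEEE 754 float32 (same sign bit, same 8-bit exponent, significand truncated from 23 bits to 7). A minimal sketch of truncation-based conversion (real hardware and libraries typically round-to-nearest-even instead of truncating):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Convert a float32 to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 significand bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16  # drop the low 16 significand bits

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

Because bfloat16 keeps float32's full 8-bit exponent, conversion is a shift; fp16's 5-bit exponent requires rebiasing the exponent and handling overflow/underflow, which is the extra overhead mentioned above.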


Compared to enterprises requesting hardware backdoors... err "out of band management" (Intel AMT), adding an instruction or two is relatively tame.


For as long as Intel has had major customers, those customers have been influencing the CPU roadmap in a major way (i.e. getting the features they want).


Not surprising, though; new features show up where there's money to be made.




