Hacker News

I've had some success using hyperbolic embeddings for BERT-like models.

It's not something that the companies I've worked for advertised or wrote papers about.




Hyperbolic embeddings have been an interest of mine ever since the Max Nickel paper. Would love to connect directly to discuss this topic if you're open. Here's my email: https://photos.app.goo.gl/1khCwXBsVBuEP6xF7


Not much to discuss, really. I just monkey patched in a different metric function; after training a model from scratch on the same data, results for our use case were substantially better than those of the previous Euclidean model, also trained from scratch.
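The commenter doesn't say which hyperbolic model they used, but the most common choice is the Poincaré ball (as in the Nickel & Kiela paper mentioned above). A minimal sketch of what the swapped-in metric might look like — the function name and the `eps` clamp are my own illustration, not the commenter's code:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincaré ball model.

    Assumes ||u|| < 1 and ||v|| < 1 (points inside the unit ball).
    This is the metric one could patch in place of a Euclidean or
    cosine distance in an embedding model's similarity function.
    """
    sq_diff = np.sum((u - v) ** 2)
    # The denominator vanishes as points approach the ball's boundary,
    # so clamp it to avoid division by zero.
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / np.maximum(denom, eps))

# Sanity check: distance from the origin to a point at radius r
# has the closed form 2 * artanh(r); for r = 0.5 that is ln(3).
origin = np.zeros(2)
p = np.array([0.5, 0.0])
print(poincare_distance(origin, p))  # ~1.0986 (= ln 3)
```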

I'm currently working on massive multi-agent orchestration, so I don't have my head in that side of things at the moment.


Can you share what kinds of problems were conducive to hyperbolic embeddings in your experience? Also, separately, are you saying companies are using these in practice but don't talk about them because of the advantage they give? Or am I reading too much into your last sentence?


They are better at separating clusters, and distances under the correct metric still carry semantic information. The downside is that training takes longer and you need at least 32-bit, and ideally 64-bit, floats during training and inference.
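The precision point is easy to make concrete. In the Poincaré distance formula the denominator contains terms like 1 - ||u||², and useful embeddings tend to sit near the ball's boundary, where that quantity is tiny. A small illustration (my own, assuming the Poincaré ball model) of how float32 loses the boundary gap that float64 still resolves:

```python
import numpy as np

# A point at radius 1 - 1e-8 sits just inside the ball's boundary.
# Half the float32 machine epsilon near 1.0 is ~6e-8, so 1e-8 is
# below the resolvable gap and the radius rounds to exactly 1.0.
r32 = np.float32(1.0) - np.float32(1e-8)
print(r32 == np.float32(1.0))   # True: 1 - r32**2 == 0, distance blows up

# float64 still resolves the gap, keeping 1 - r**2 small but nonzero.
r64 = np.float64(1.0) - np.float64(1e-8)
print(r64 == np.float64(1.0))   # False
```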

And possibly.

The company I did the work for kept it very quiet. BERT-like models are small enough that you can train them on a workstation today, so there is a lot less prestige in them than five years ago, which is why for-profit companies don't write papers on them any more.



