
So there will be some key given to the database. At each decision point (as opposed to a leaf, which would be the result itself), the key gets results from either the right branch or the left. But if you hardcode the branching condition to be something like greater-than or less-than, it's not very useful for finding the best result for the key.
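
Concretely, the hardcoded version I'm picturing is something like this (a toy sketch with made-up names, not anyone's actual implementation):

    # A hard decision node: the branch is fixed by a comparison,
    # so there is nothing to tune per key.
    def hard_route(key, threshold, left_child, right_child):
        # e.g. a B-tree-style split: keys above the threshold go right
        return right_child if key > threshold else left_child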

But if you have a probabilistic setup, then the key has some probability p of retrieving data from the right branch and probability (1 - p) from the left. Then we have expected goodness = p * f(right) + (1 - p) * f(left).

Where f is some objective function that tells us the 'goodness' of the resulting data the key got from going right or left. If you differentiate the expected goodness with respect to p you get f(right) - f(left), which is how much the expected goodness will change if you increase the probability of going right.
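
To make the arithmetic concrete (toy numbers, purely illustrative):

    p = 0.5
    f_right = 0.9   # 'goodness' of the data reached via the right branch
    f_left  = 0.4   # 'goodness' of the data reached via the left branch

    expected_goodness = p * f_right + (1 - p) * f_left   # 0.65
    grad_p = f_right - f_left                            # 0.5 = d(expected_goodness)/dp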

Suppose the derivative f(right) - f(left) is positive, so going right is better than going left. Then I can write some script to nudge p upward, so that the next time I get that key the probability of going right is higher. That way we can optimize p for a given key.
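
The 'script' is then just a gradient-ascent step on p; a minimal sketch (the learning rate, the clamp, and the toy numbers are my assumptions):

    # Continuing the toy numbers from the sketch above.
    p, f_right, f_left = 0.5, 0.9, 0.4

    learning_rate = 0.1           # assumed step size
    grad_p = f_right - f_left     # derivative of expected goodness w.r.t. p

    p = p + learning_rate * grad_p
    p = min(max(p, 0.0), 1.0)     # keep p a valid probability; in practice you'd
                                  # often parameterize p with a sigmoid instead

Run that step every time the key shows up and p drifts toward whichever branch has the higher f, which is the optimization described above.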

Very interesting! I hope I got everything right; where I couldn't understand, I used GPT to help me out (when it told me f was a 'goodness' function, I had a breakthrough lol). The discrete way feels like clean, understandable logic, while the probabilistic way with changing p values feels really messy, but I can see intuitively why the latter gives better results.



