Build Your Own Probability Monads (randomhacks.net)
66 points by ultimape on April 19, 2015 | 15 comments



Woah, this seems like a lovely abstraction. Seems like the guard does the magic. Does anyone know if there are advancements along these lines?



Probabilistic programming is the natural extension of this line of development. It's an active area of research. Two such languages implemented in Scala are Figaro and Factorie. Not sure about Haskell.
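
For a flavor of how the guard trick works, here is a minimal sketch in plain Scala (`Dist`, `uniform`, and `condition` are names made up for illustration, not from Figaro or Factorie): a distribution is just a list of weighted outcomes, flatMap plays the role of monadic bind, and `condition` plays the role of guard by filtering and renormalizing.

    // A toy distribution: a list of (value, probability) pairs.
    case class Dist[A](outcomes: List[(A, Double)]) {
      def map[B](f: A => B): Dist[B] =
        Dist(outcomes.map { case (a, p) => (f(a), p) })

      // Monadic bind: continue the program for each outcome,
      // multiplying probabilities along each path.
      def flatMap[B](f: A => Dist[B]): Dist[B] =
        Dist(for {
          (a, p) <- outcomes
          (b, q) <- f(a).outcomes
        } yield (b, p * q))

      // The guard analogue: drop outcomes that fail the predicate
      // and renormalize the survivors.
      def condition(pred: A => Boolean): Dist[A] = {
        val kept = outcomes.filter { case (a, _) => pred(a) }
        val total = kept.map(_._2).sum
        Dist(kept.map { case (a, p) => (a, p / total) })
      }
    }

    def uniform[A](xs: A*): Dist[A] =
      Dist(xs.toList.map(x => (x, 1.0 / xs.size)))

    // Two dice, conditioned on the sum being at least 10:
    // each of the 6 surviving pairs ends up with probability 1/6.
    val dice = for {
      a <- uniform(1 to 6: _*)
      b <- uniform(1 to 6: _*)
    } yield (a, b)
    println(dice.condition { case (a, b) => a + b >= 10 }.outcomes)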

(Also, weird reading a comment I made on that blog 8 years ago!)


Thanks for pointing those libraries out. I'm much more familiar with Scala, so those were very helpful.



The title is intentionally strange. It is a nod to this great book for learning Haskell: http://learnyouahaskell.com/

I also just finished reading "The Happstack Book: Modern, Type-Safe Web Development in Haskell" which I highly recommend taking a look at: http://happstack.com/docs/crashcourse/index.html


That lead quote is highly ironic considering that MSR employs arguably the most talented collection of mathematical probabilists anywhere in the world (including academia).


The quote is about the companies' engineering organizations in general. MSR is very detached from the products (e.g., I've seen research from them based on Firefox).


I understood the lead quote to be about comparing the software developers at the two companies, not people in the research arms.

Also, I'm curious: who are the top-notch probabilists at MSR?


I think Itai Benjamini is at MSR. http://www.wisdom.weizmann.ac.il/~itai/


Imagine a random number generator that picks 128 bits from a uniform source of 0s and 1s. If I understand correctly, when this RNG is implemented using PFP, the result will consist of 2^128 different bit strings, each with the same probability(?)

How is this represented internally? What is the time and space complexity? Is there even a way to practically do this?


A probability monad is a really novel way to experiment with stochastic processes; I saw one in Scala a few weeks back. To get insight into 128-bit strings, you can transform or group the sample space, e.g. find P(1st bit is 1) or get a histogram of P(# of 1s in the string); otherwise each bit string will be almost unique. The code is really concise, and execution takes less than a second. (Note that this monad samples rather than enumerating all 2^128 strings, so a run costs on the order of sampleSize × 128 bit draws.)

    // 128 independent fair bits, as one distribution over bit vectors
    val d: Podel[Vector[Int]] = Bernoulli(p = 0.5).repeat(128).map(_.toVector)
    // P(first bit is 1), estimated by sampling
    println(d.probability(bits => bits(0) == 1))
    // histogram of the number of 1s across 10,000 sampled strings
    println(d.map(bits => bits.count(_ == 1)).hist(sampleSize = 10000, optLabel = Some("Number of 1s in 128 bits")))
output:

    /*
    0.5054

    Number of 1s in 128 bits
    42  0.01%
    45  0.09%
    48  0.43%
    51  1.53% #
    54  4.49% ####
    57  9.84% #########
    60 16.21% ################
    63 20.46% ####################
    66 19.62% ###################
    69 14.76% ##############
    72  7.86% #######
    75  3.49% ###
    78  0.89%
    81  0.23%
    84  0.07%
    87  0.01%
    90  0.01%
    */


I can imagine a simple scheme by which this works: simply attach a "distribution" to every single variable. However, I guess this stops working correctly when there is correlation. For example, imagine that somewhere in the program I generate a variable X with a 50% probability of being 1 and 50% of being 0. Then I generate Y = -X. And finally, I compute Z = X + Y. Now, if the program does not keep track of the fact that Y and X are correlated, then Z will falsely contain nonzero values in its distribution.


Perhaps this example is what you describe? flatMap (used by the for comprehension) enables monads to combine in powerful ways. The first part shows uncorrelated X, Y such that Y ~ -X; the second part shows correlated Y = -X, where every y = -x.

    {
      val X: Podel[Int] = Bernoulli(p = 0.5)
      // Y has the same distribution as -X, but each y is drawn
      // independently of x
      val Y: Podel[Int] = X.map(-1 * _)
      val d: Podel[Int] = for {
        x <- X
        y <- Y
      } yield x + y
      d.hist(sampleSize = 10000, optLabel = Some("Uncorrelated X, Y"))
    }

    {
      val X: Bernoulli = Bernoulli(p = 0.5)
      // y and z are computed from the same draw x, so the
      // correlation is preserved
      val d: Podel[Int] = for {
        x <- X
        y = -1 * x
        z = x + y
      } yield z
      d.hist(sampleSize = 10000, optLabel = Some("y = -x"))
    }
    /*
    Uncorrelated X, Y
    -1  25.52% #########################
     0  49.60% #################################################
     1  24.88% ########################

    y = -x
     0 100.00% ####################################################################################################
    */

So far, the weakness I find with probability monads, and with simulation in general, is lack of precision. That makes them useless when the probability you want is an exact tiny value such as 5.324e-11; only a formula can give you that answer.
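
One way around this, when the state space stays small, is exact enumeration instead of sampling. A rough sketch, reusing the toy `Dist` from upthread (again, made-up names, not a real library):

    // Merge duplicate outcomes so the list stays small at each step.
    def merge[A](d: Dist[A]): Dist[A] =
      Dist(d.outcomes
        .groupBy(_._1)
        .map { case (a, ps) => (a, ps.map(_._2).sum) }
        .toList)

    def bernoulli(p: Double): Dist[Int] = Dist(List((1, p), (0, 1 - p)))

    // P(all 50 fair coin flips come up heads), computed exactly. A
    // 10,000-sample simulation would simply report 0 for a probability
    // this small.
    val allHeads = (1 to 50).foldLeft(Dist(List((true, 1.0)))) { (acc, _) =>
      merge(for { ok <- acc; b <- bernoulli(0.5) } yield ok && b == 1)
    }
    println(allHeads.outcomes.collect { case (true, p) => p }.sum)
    // prints 8.881784197001252e-16, i.e. exactly 2^-50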


The naive representation that you're pointing at would indeed be very slow. Better implementations examine the problem as you describe it in the language, build an efficient factor-graph representation of it, and then run message-passing inference over that graph (see Factorie).
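
As a crude illustration of the idea (nothing like Factorie's actual API), here is the X, Y = -X, Z = X + Y example from upthread written as explicit factors, with the marginal of Z obtained by variable elimination, the simplest relative of message passing:

    // One small factor per dependency, over tiny discrete domains.
    val pX: Map[Int, Double] = Map(0 -> 0.5, 1 -> 0.5)          // X ~ Bernoulli(0.5)
    def fY(x: Int, y: Int): Double = if (y == -x) 1.0 else 0.0  // Y = -X
    def fZ(x: Int, y: Int, z: Int): Double =
      if (z == x + y) 1.0 else 0.0                              // Z = X + Y

    // Marginal of Z: sum out x and y. The cost is the product of the
    // small domain sizes, never the size of an explicit joint table.
    val pZ: Map[Int, Double] = (for (z <- -2 to 2) yield
      z -> (for { (x, px) <- pX.toSeq; y <- -1 to 1 }
        yield px * fY(x, y) * fZ(x, y, z)).sum
    ).toMap.filter(_._2 > 0.0)
    // pZ == Map(0 -> 1.0): the X-Y correlation is tracked, so Z is always 0.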



