Hacker News

I think that's covered in the article - there's a passage on using `tf.scan` when the `tf.dynamic_rnn` abstraction won't cut it. `tf.scan` is more flexible than `tf.dynamic_rnn`, but provides a little more scaffolding for RNNs than using `tf.while_loop` directly.



Using tf.scan is a bad idea.

tf.scan has strict semantics: it always executes one step per element of the input, no matter what value the accumulator holds (even if it has gone NaN).

tf.while_loop implements dynamic execution (it quits as soon as the loop condition is no longer met), and at the same time allows parallel execution of ops that don't depend on the accumulator.

If you read the code for `dynamic_rnn` and the contrib legacy seq2seq model, you'll find `tf.while_loop`. I have yet to see TensorFlow library code use `tf.scan` anywhere!

Also, tf.scan is itself defined internally in terms of tf.while_loop. In my own code I find tf.scan lacking for RNNs and always end up falling back to tf.while_loop.
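To make the semantic difference concrete, here's a plain-Python sketch (illustration only; the real `tf.scan` and `tf.while_loop` build graph ops, not Python loops):

```python
# Plain-Python analogue of the two iteration semantics.

def scan(fn, elems, init):
    """Strict scan: exactly one step per input element, no early exit."""
    acc, outputs = init, []
    for x in elems:
        acc = fn(acc, x)
        outputs.append(acc)
    return outputs

def while_loop(cond, body, state):
    """Dynamic loop: stops as soon as cond(state) is False."""
    steps = 0
    while cond(state):
        state = body(state)
        steps += 1
    return state, steps

# scan always walks the whole sequence:
outs = scan(lambda acc, x: acc + x, [1, 2, 3, 4], 0)   # [1, 3, 6, 10]

# while_loop can quit early, driven only by its condition:
final, steps = while_loop(lambda s: s < 3, lambda s: s + 1, 0)
```

Note the while version never looks at a sequence length at all; its termination is entirely data-dependent, which is the flexibility being argued for above.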

Here is a video of a talk by the RNN/Seq2Seq author himself:

https://youtu.be/RIR_-Xlbp7s?t=16m3s


I don't follow. tf.scan will execute as many time steps as there are elements in the input series, which is the same behavior you'd get with tf.while_loop or tf.dynamic_rnn. It does not execute for a fixed number of time steps, which I think is what you're implying?

The difference from using tf.while_loop directly is that tf.scan handles the logistics of an accumulator to keep track of hidden states, so you don't have to implement that piece yourself.
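As a sketch of what "handling the logistics of an accumulator" means, here is a toy NumPy RNN stepped with a scan-style helper (illustrative names and shapes, not the TF API):

```python
import numpy as np

def scan(fn, elems, init):
    # Threads the accumulator (hidden state) through the sequence and
    # stacks every intermediate state -- the bookkeeping tf.scan does
    # for you over graph tensors.
    acc, outs = init, []
    for x in elems:
        acc = fn(acc, x)
        outs.append(acc)
    return np.stack(outs)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # hidden-to-hidden weights
U = rng.normal(size=(3, 4))          # input-to-hidden weights
h0 = np.zeros(4)                     # initial hidden state
inputs = rng.normal(size=(5, 3))     # 5 timesteps, 3 features each

step = lambda h, x: np.tanh(h @ W + x @ U)   # simple RNN cell
hiddens = scan(step, inputs, h0)             # shape (5, 4): all hidden states
```

With a raw while loop you would have to allocate and write that `(5, 4)` array of hidden states yourself; the scan abstraction returns it for free.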

As you say, tf.scan uses tf.while_loop internally; it's not particularly different from something you might build using tf.while_loop yourself.


In neural translation seq2seq, using while_loop in the decoder RNN saves a lot of GPU time because it can quit early when a sentence ends.
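A toy sketch of that saving (plain Python; `EOS`, `MAX_LEN`, and the step function are invented for illustration): a fixed-length decode always runs the maximum number of steps, while a condition-driven decode stops at the end-of-sentence token.

```python
EOS, MAX_LEN = 0, 50

def decode_fixed(step_fn, state):
    """scan-style decoding: always runs MAX_LEN steps."""
    tokens = []
    for _ in range(MAX_LEN):
        state, tok = step_fn(state)
        tokens.append(tok)
    return tokens

def decode_dynamic(step_fn, state):
    """while_loop-style decoding: stops as soon as EOS is produced."""
    tokens = []
    while True:
        state, tok = step_fn(state)
        tokens.append(tok)
        if tok == EOS or len(tokens) >= MAX_LEN:
            break
    return tokens

# A toy "model" that emits EOS on its 5th step:
def step_fn(state):
    state += 1
    return state, (EOS if state == 5 else state)

fixed = decode_fixed(step_fn, 0)      # 50 steps of work
dynamic = decode_dynamic(step_fn, 0)  # 5 steps, then stops
```

For short sentences decoded against a large maximum length, the dynamic version skips the bulk of the per-step compute, which is the GPU-time saving described above.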


I see - you're talking about a use case like this: https://github.com/google/seq2seq/blob/4c3582741f846a19195ac...

I agree that you have to use a tf.while_loop in those cases. But then tf.scan isn't an option, so I don't understand what you mean by 'quit early' or 'saves time'.

When tf.scan is possible, i.e. when you have an input sequence you want to scan over, it is a perfectly good option.


Unless you want to execute the structure on multiple GPUs.


I don't understand how that's related.



