This is not a problem in ETH itself [1]. It's a bug in one of the contracts written by Parity. Anyone who has (or had) money in these specific contracts is in trouble. If you are not using these specific contracts, you are safe (for now).
ETH and ETC have the same underlying technology and the same language for writing contracts, so a contract written for one can be used on both variants. A contract may check whether it is running on ETH or ETC (probably by looking for the contract that was added during the fork), but I think very few contracts check this, so almost all contracts are usable on both chains.
So if it was possible to deploy the Parity contract on ETH, it's very probable that it was also possible to deploy it on ETC.
To answer your question: the same bug that was used to steal ETH can probably be exploited to steal ETC, if someone is using these contracts on the ETC chain. But it's not a problem in ETH or ETC themselves [1].
[1] As other commenters noted, it's a very bad design flaw to make all functions public by default. It's not an error per se, but it makes it much easier to write buggy code.
I read the article and it seems well-written, though lacking.
For even more customized RNNs, such as attention mechanisms or beam search in Seq2Seq models, you'll need to skip the tf.dynamic_rnn abstraction and use a symbolic loop directly: tf.while_loop.
I think that's covered in the article - there's a passage on using `tf.scan` when the `tf.dynamic_rnn` abstraction won't cut it. `tf.scan` is more flexible than `tf.dynamic_rnn`, but provides a bit more scaffolding for RNNs than using `tf.while_loop` directly.
scan implements strict semantics: it will always execute the same number of timesteps no matter what the accumulator holds (even NaN).
while_loop implements dynamic execution (it quits once cond is no longer met) and at the same time allows parallel execution of ops that don't depend on the accumulator.
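A pure-Python sketch of the semantic difference (these `scan`/`while_loop` helpers are stand-ins for illustration, not the real TensorFlow ops):

```python
def scan(fn, elems, init):
    # Strict semantics: always runs exactly len(elems) steps,
    # regardless of what the accumulator becomes (even NaN).
    acc, outputs = init, []
    for x in elems:
        acc = fn(acc, x)
        outputs.append(acc)
    return outputs

def while_loop(cond, body, state):
    # Dynamic semantics: stops as soon as cond(state) is False.
    while cond(state):
        state = body(state)
    return state

# scan runs all 5 steps even though the accumulator saturates:
print(scan(lambda a, x: min(a + x, 3), [1, 1, 1, 1, 1], 0))  # [1, 2, 3, 3, 3]

# while_loop quits early once the accumulator reaches 3:
print(while_loop(lambda s: s < 3, lambda s: s + 1, 0))  # 3
```

The real tf.while_loop additionally lets independent ops from different iterations run in parallel, which a sequential Python loop can't show.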
If you read the code for `dynamic_rnn` and the contrib.legacy seq2seq model, you'll find `while_loop`. I have yet to see TensorFlow library code using `tf.scan` anywhere!
Also, internally, scan is defined using while_loop. In my own code, I find scan lacking for RNNs and always have to fall back to while_loop.
Here is a video of a talk by the RNN/Seq2Seq author himself:
I don't follow. tf.scan will execute as many time steps as there are elements in the input series, which is the same behavior you'd get with tf.while_loop or tf.dynamic_rnn. It does not execute for a fixed number of time steps, which I think is what you're implying?
The difference from using tf.while_loop directly is that tf.scan handles the logistics of an accumulator to keep track of hidden states, so you don't have to implement that piece yourself.
As you say, tf.scan uses tf.while_loop internally; it's not particularly different from something you might build with tf.while_loop yourself.
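To illustrate the bookkeeping that a scan-style op saves you, here is a minimal pure-Python sketch (a toy scalar "RNN cell" standing in for a real one, not actual TensorFlow code):

```python
def rnn_cell(h, x):
    # Toy recurrence standing in for an RNN cell: the new hidden
    # state depends on the previous state and the current input.
    return 0.5 * h + x

def scan(fn, elems, init):
    # tf.scan-style helper: threads the accumulator through the
    # sequence and collects every intermediate hidden state.
    h, states = init, []
    for x in elems:
        h = fn(h, x)
        states.append(h)
    return states

# With scan, the hidden-state plumbing is handled for you:
print(scan(rnn_cell, [1.0, 2.0, 3.0], 0.0))  # [1.0, 2.5, 4.25]

# The while_loop-style equivalent makes you carry the loop index,
# the state, and the output collection yourself:
i, h, states = 0, 0.0, []
xs = [1.0, 2.0, 3.0]
while i < len(xs):          # cond
    h = rnn_cell(h, xs[i])  # body
    states.append(h)
    i += 1
print(states)  # [1.0, 2.5, 4.25]
```

Both produce the same states; the scan version just packages the accumulator threading for you.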
I agree that you have to use tf.while_loop in those cases. But then tf.scan isn't an option at all, so I don't understand what you mean by 'quit early' or 'saves time'.
When tf.scan is possible, i.e. when you have an input sequence you want to scan over, it is a perfectly good option.
Do you know if using tf.while_loop speeds things up? I'm using dynamic_rnn at the moment and it's _so_ slow. I'm not finding implementations using tf.while_loop; there's dynamic_rnn, as you said, but it's so convoluted to read (like most TF code..).
Hey greato, I made the same thing 6-9 months ago at https://comments.network/ , but HN contacted me stating that I did not have permission to use the comments from HN and to stop doing so. Just letting you know.