Hacker News | greato's comments

Does this mean ETC is compromised too?


This is not a problem in ETH [1]. It's a bug in one of the contracts written by Parity. Anyone who has (had) money in these specific contracts is in trouble. If you are not using these specific contracts, you are safe (for now).

ETH and ETC have the same underlying technology and language to write the contracts. So if someone writes a contract it can be used in both variants. The contract may check that it is in ETH or ETC (probably looking for the contract that was added during the fork) but I think very few contracts check this, so almost all contracts are usable in both chains.

So if it was possible to use the Parity contract in ETH, then it's very probable that it was also possible to use it in ETC.

To answer your question: Probably the same bug that was used to steal ETH can be exploited to steal ETC, if someone is using these contracts in the ETC chain. But it's not a problem in ETH or ETC [1].

[1] As other commenters noted, it's a very bad design flaw to make all functions public by default. It's not an error per se, but it makes it much easier to write buggy code.


The vulnerability being discussed is for a particular multi-sig wallet, not the Ethereum blockchain itself.


Not directly, but Parity multi-signature wallets on Ethereum Classic, Expanse, Musicoin, and other public chains are affected as well.


Houellebecq's first book Whatever is about the unfruitfulness of programming.


I read the article and it seems to be well-written, though lacking.

For even more customized RNNs, such as attention mechanisms or beam search as in Seq2Seq, you'll need to skip the tf.dynamic_rnn abstraction and use a symbolic loop directly: tf.while_loop.


I think that's covered in the article - there's a passage on using `tf.scan` when the `tf.dynamic_rnn` abstraction won't cut it. `tf.scan` is more flexible than `tf.dynamic_rnn`, but provides a little more scaffolding for RNNs than using `tf.while_loop` directly.


Using tf.scan is a bad idea.

scan implements strict semantics, so it will always execute the same number of timesteps no matter what the accumulator holds (even NaN).

while_loop implements dynamic execution (it quits once cond is no longer met) and at the same time allows parallel execution of ops that don't depend on the accumulator.
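To make the semantic difference concrete, here is a plain-Python sketch (not actual TensorFlow code) of the two constructs' contracts: scan always walks the whole input, while_loop checks a condition each step and can stop early.

```python
def scan(fn, elems, init):
    # Strict semantics: always executes one step per input element,
    # regardless of what the accumulator holds (even NaN).
    acc, outputs = init, []
    for e in elems:
        acc = fn(acc, e)
        outputs.append(acc)
    return outputs

def while_loop(cond, body, state):
    # Dynamic semantics: stops as soon as cond(state) is False,
    # so it can quit early instead of running a fixed trip count.
    while cond(state):
        state = body(state)
    return state

# cumulative sum: scan visits all three elements no matter what
sums = scan(lambda a, e: a + e, [1, 2, 3], 0)   # [1, 3, 6]

# doubling until a threshold: while_loop quits when cond fails
result = while_loop(lambda s: s < 10, lambda s: s * 2, 1)  # 16
```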

If you read the code for `dynamic_rnn` and the contrib.legacy Seq2Seq model you'll find while_loop. I have yet to see TensorFlow library code using tf.scan anywhere!

Also, internally, scan is defined using while_loop. In my own code, I find scan lacking for RNNs and always have to fall back to while_loop.

Here is a video of a talk by the RNN/Seq2Seq author himself:

https://youtu.be/RIR_-Xlbp7s?t=16m3s


I don't follow. tf.scan will execute as many time steps as there are elements in the input series, which is the same behavior you'd get with tf.while_loop or tf.dynamic_rnn. It does not execute for a fixed number of time steps, which I think is what you're implying?

The difference from using tf.while_loop directly is that tf.scan handles the logistics of an accumulator to keep track of hidden states, so you don't have to implement that piece yourself.

As you say, tf.scan uses tf.while_loop internally; it's not particularly different from something you might build using tf.while_loop yourself.
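A rough pure-Python sketch of what that accumulator handling buys you (the `rnn_cell` here is a hypothetical toy, not a real TF cell): scan threads the hidden state through the sequence and collects every intermediate value, so you don't hand-roll that bookkeeping.

```python
def scan(fn, elems, init):
    # mimics tf.scan's contract: thread an accumulator through the
    # input sequence and collect every intermediate value
    acc, outputs = init, []
    for e in elems:
        acc = fn(acc, e)
        outputs.append(acc)
    return outputs

def rnn_cell(h, x):
    # toy stand-in for an RNN cell: new hidden state from old state + input
    return 0.5 * h + x

hidden_states = scan(rnn_cell, [1.0, 2.0, 3.0], 0.0)
# [1.0, 2.5, 4.25] -- one hidden state per input, no manual bookkeeping
```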


In neural translation seq2seq, using while_loop in the decoder RNN saves a lot of GPU time because it can quit early when a sentence ends.
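A minimal sketch of that early exit in plain Python (the token values and `EOS` id are hypothetical, and `next_token` stands in for a real decoder step): the loop stops at end-of-sentence instead of always running the maximum length.

```python
EOS = 0  # hypothetical end-of-sentence token id

def decode(next_token, max_len):
    # while_loop-style decoding: stop as soon as EOS is emitted
    # instead of always running max_len steps -- that early quit
    # is where the time is saved
    tokens, t, tok = [], 0, None
    while t < max_len and tok != EOS:
        tok = next_token(t)
        tokens.append(tok)
        t += 1
    return tokens

# toy "model" that emits 3 real tokens, then EOS
out = decode(lambda t: [5, 7, 9, EOS, 4, 4][t], max_len=6)
# out == [5, 7, 9, 0]: four steps executed instead of six
```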


I see - you're talking about a use case like this: https://github.com/google/seq2seq/blob/4c3582741f846a19195ac...

I agree that you have to use a tf.while_loop in those cases. But then tf.scan isn't an option, so I don't understand what you mean by 'quit early' or 'saves time'.

When tf.scan is possible, i.e. when you have an input sequence you want to scan over, it is a perfectly good option.


Unless you want to execute the structure on multiple GPUs.


I don't understand how that's related.


Do you know if using tf.while_loop speeds things up? I'm using dynamic_rnn at the moment and it's _so_ slow. I'm not finding implementations using tf.while_loop; there's dynamic_rnn as you said, but that's so convoluted to read (like most TF code...).


He was against people calling Deep Blue AI when it was not. I doubt he would say the same thing about AlphaGo.


This is one of the many unconstructive comments.


So's your mum.


It's been fixed. It was broken because it was querying a sub-comment (not the root comment).


I wrote this primarily for tech blogs. For example: http://rickyhan.com/blog/k8s.html


Hey greato, I made the same thing 6-9 months ago at https://comments.network/ , but I was contacted by HN stating that I did not have permission to use the comments from HN and should stop doing so. Just letting you know.


Thanks for the heads up. This will be free and open source. Also check out my main project, txtpen. Thanks.


hn.algolia.com is aggressively cached; it's updated every 3 minutes.


Neato


Cecum removal.


MVP is minimum measurable product. When you have analytics set up, you are ready to launch. (mailing list)


So just landing page + mailing list "subscribe" button + Google Ads to drive traffic?

