Hacker News new | past | comments | ask | show | jobs | submit | rasmi's comments login


Not to indulge the troll, but Arvind Narayanan is an (associate) professor of CS at Princeton and is one of the foremost researchers in the field on topics of ML/data privacy and ethics [0]. His papers/talks/tweets regularly attract attention on HN [1]. That you're judging the talk based on which conferences the author hasn't published in says more about your ignorance of the STS field than it does about the author's knowledge of the topic. This is top-notch content!

[0] https://scholar.google.com/citations?hl=en&user=0Bi5CMgAAAAJ...

[1] https://hn.algolia.com/?q=random_walker


It seems his main research focus is poking holes in popular tech, especially when he is the main author.


There are a lot of holes to poke, and not enough pokers.


[flagged]


Congratulations on your success! I'm actually familiar with the author's past talks and research, and am not just assuming he's competent because he lists his affiliation with Princeton.

I encourage you to familiarize yourself with the field of socio-technical systems. It is related to (but not the same as) "ML/DL", and it is important to know about if you are doing research in CS. A good place to start is the FAT* conference [0] (which was previously a workshop at NeurIPS).

Regarding manual scoring: The author cites this study [1] and specifically says: "This is a falsifiable claim. Of course, I’m willing to change my mind or add appropriate caveats to the claim if contrary evidence comes to light. But given the evidence so far, this seems the most prudent view." so by all means, do reach out to him with better evidence.

[0]: https://fatconference.org/2019/program.html

[1]: https://arxiv.org/abs/1702.04690


You seem to be knowledgeable on the matter, then. Why hide behind a throwaway account and resort to ad hominem attacks against the author? You could articulate your perspective better; we would appreciate it. (I hope you're doing okay.)



It's also the 5th link "open sourced" in the article.


Here are two more great articles about Transformers:

The Illustrated Transformer (referenced in the parent): http://jalammar.github.io/illustrated-transformer/

The Annotated Transformer: http://nlp.seas.harvard.edu/2018/04/03/attention.html
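Since both articles walk through the same core mechanism, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of the Transformer. The function name and toy shapes are my own illustration, not taken from either article.

```python
# Minimal sketch of scaled dot-product attention; names and shapes are
# illustrative assumptions, not code from either linked article.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_q, seq_k) similarity scores
    # Softmax over the key dimension, stabilized by subtracting the row max.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

# Toy example: 3 tokens, model dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

In a full Transformer, Q, K, and V are learned linear projections of the token embeddings, and several such heads run in parallel; the articles above cover those details with diagrams.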


I reference one of the articles, but I hadn't looked at the other one! Very interesting. Thanks for sharing.


You do more than reference it -- you've copied a bunch of text and figures from it as well. Search for "The encoder’s inputs first flow through a self-attention layer" and read on from there. Most of the article is a word-for-word copy.


I’ve tried to use a bunch of the figures and information from these articles. I hope it was useful for some people.


Regardless of whether or not it's useful, it's substantially plagiarized from another source. You don't have an inline citation or visual indication that many of the figures are copied from another article on the topic. The same goes for copying paragraphs with extremely minimal modifications.

Slightly changing sentence structure is not paraphrasing or stating in your own words. Pointing a reader to an article for further reading is not the same as a citation.

To put it bluntly, your arrangement of the material, substantial paragraphs, and a significant number of your figures/graphics are copied from elsewhere without citation.


The rest of the images are largely from colah’s blog posts; it’s plagiarized from a mix of sources.

Providing links billed as additional reading material doesn’t count as a citation.


I cite them at the end. The idea was to summarize all the content from these posts and videos, referenced at the end, into one blog post. I will add a note to the self-attention section; that is the one I used from Jay's blog. I hope it was useful for some people.


That's not good enough. Citations must be inline in the text, so that nobody mistakes their work for yours. This can be done informally, like "Jay describes" or "Sally's article says", followed by your own words, or as a direct quote in quotation marks with a link to their work.


I'm adding a note to the self-attention section stating that this material was taken from another blog post.


Note that if the other blog post is not licensed under a license that allows you to do so (such as Creative Commons Share-Alike), you are simply not permitted to copy images or text without explicit permission from the authors. It's not enough to state that it was taken from another post.

Of course, you are allowed to cite excerpts, but then the text should be clearly marked up as a quotation.


The right thing to do is to either take down the blog post, replacing it with references to the articles used, or quickly transform (pun intended) the paragraphs and graphics into your own. As it stands, it's plagiarism.



For those interested, you can track Beam Python 3 support progress here:

https://issues.apache.org/jira/browse/BEAM-1251


The paper is "ImageNet Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, and is available here:

https://papers.nips.cc/paper/4824-imagenet-classification-wi...


The search features discussed in the article are now available through Google Cloud Source Repositories: https://cloud.google.com/source-repositories/docs/searching-...


I link these resources often, but they are often relevant! See "The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction" [1] and the Rules of Machine Learning [2]. Another classic: "Machine Learning: The High Interest Credit Card of Technical Debt" [3], and recently added: Responsible AI Practices [4].

[1] https://ai.google/research/pubs/pub46555

[2] https://developers.google.com/machine-learning/rules-of-ml/

[3] https://ai.google/research/pubs/pub43146

[4] https://ai.google/education/responsible-ai-practices


>It is expected that the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Institute for Data, Systems, and Society (IDSS), and the MIT Quest for Intelligence will all become part of the new College; other units may join the College.

http://news.mit.edu/2018/faq-mit-stephen-schwarzman-college-...

