LSH is a really neat algorithm, but to my understanding (at least from what I've seen in the literature), it also tends to be rather inefficient. For it to have good precision, you need longer hashes, but that reduces recall. It also doesn't tend to produce a well-balanced distribution of entries over buckets. More recent research has therefore focused on more elaborate hash functions that produce shorter codes and better-balanced hash tables.
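To make the trade-off concrete, here's a minimal random-hyperplane LSH sketch (my own illustration, not from the article): each hash bit records which side of a random hyperplane a vector falls on, so more bits carve the space into more, smaller buckets (better precision, worse recall), and nothing forces those buckets to fill evenly.

```python
import random

def make_hyperplanes(n_bits, dim, seed=0):
    # One random Gaussian vector per hash bit.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_hash(vec, planes):
    # Each bit is the sign of the dot product with one hyperplane's normal:
    # similar directions tend to agree on all bits and land in the same bucket.
    bits = 0
    for plane in planes:
        dot = sum(a * b for a, b in zip(plane, vec))
        bits = (bits << 1) | (dot >= 0)
    return bits

planes = make_hyperplanes(n_bits=8, dim=3)
a = [1.0, 0.9, 1.1]
b = [1.1, 1.0, 0.9]    # nearby direction -> likely (not guaranteed) same bucket
c = [-1.0, -1.0, 2.0]  # different direction -> likely different bucket
```

Note that only the direction matters here: scaling a vector by a positive constant leaves every sign, and hence the hash, unchanged.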
The article is well put together and nicely illustrated, though :)
After a paper has been accepted, authors can submit a repository containing a script which automatically replicates results shown in the paper. After a reviewer confirms that the results were indeed replicable, the paper gets a small badge next to its title.
While there could certainly be improvements, I think it's a step in the right direction.
You can always put "certified by the Graphics Replicability Stamp Initiative" next to each paper on your CV. It might influence people a little, even if it isn't part of the formal review for employment / promotion. Although "Graphics Replicability Stamp Initiative" does not sound very impressive. And Federal grant applications have rules about what can be included in your profile.
Informal reputation does matter though. If you want to get things done and not just get promoted, you need the cooperation of people with a similar mindset, and collaboration is entirely voluntary.
Whatever the case might be, companies have shown themselves to not handle people's personal data properly, as shown by the massive leaks in the past. Whatever utopia you're thinking of, it's not happening anytime soon, and GDPR is a rightful measure to slightly apply the brakes on rampant data collection and misuse.
It certainly is related. One core argument is that the vast majority of currently stored personal data has no good reason whatsoever to be there; the companies should not be collecting, using, or storing it in the first place.
If half of the companies removed the private data they shouldn't have, that would reduce the impact of breaches, since there would be half as many breaches with something sensitive to leak.
No particular single example in mind, but just going through random large leaks:
The Republican National Committee leak (https://gizmodo.com/gop-data-firm-accidentally-leaks-persona...) - all the involved companies which swapped data records to make up this trove would not have been permitted to hold much of that data under GDPR.
World Wrestling Entertainment 2017 leak - the leaked data included home and email addresses, birthdates, customers' children's age ranges and genders where supplied, and even ethnicity; there's simply no good reason why they should have had data like that in the first place. If it hadn't been collected, it couldn't have been leaked.
Joblink breach (https://www.identityforce.com/blog/americas-joblink-data-bre...) - leaked, among other things, birthdates and social security numbers. There's no good reason to ask for the birthdate in the first place, or to keep storing the social security number after running whatever verification they do (presumably it gets used for background screening).
The big point is that data minimization would almost always have reduced the consequences. Companies keep old data forever, and that creates extra risk; companies ask for and store more data than they need, and that creates extra risk; companies buy and sell data that shouldn't be bought and sold, so the data ends up copied across multiple organizations, which again creates extra risk.
I wonder: the article states that the SI unit for the kilogram was, up to this point, defined using a single object. Doesn't this definition also involve the fact that it's placed on Earth, thus requiring two objects for its definition?
Nope. Mass != weight. The weight of 1kg on Earth is about 10N (and varies by location). The weight of the same 1kg object on the moon would be much less, but the mass remains the same.
Mass is constant across all gravitational fields (and anywhere there isn't one) for any given object.
People use lbs <-> kg interchangeably, but the everyday "pound" is really a weight, so the actual equivalence is pound-force <-> newton (N). The difference doesn't matter in the average person's life, since down here gravity is homogeneous enough for most applications and people.
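The distinction above is just weight = mass x local gravitational acceleration; a quick sketch with approximate textbook values for g:

```python
# Weight depends on where you are; mass does not.
EARTH_G = 9.81  # m/s^2, approximate surface value
MOON_G = 1.62   # m/s^2, approximate lunar surface value

mass_kg = 1.0                      # same mass everywhere
weight_earth = mass_kg * EARTH_G   # ~9.81 N (the "about 10 N" above)
weight_moon = mass_kg * MOON_G     # ~1.62 N, roughly a sixth of that
```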
It's a fair point that the article makes. However, I started a programming group for high school kids about 1.5 years ago, and my choice was Python. I have not regretted this decision since, because of one factor: fun. I could have given a weekly lecture on all sorts of theoretical things they will learn again anyway if (or when) they start at university.
Instead, I wrote a wrapper around OpenGL that provided functions like "drawRectangle()" or "drawImage()", and they have used it to build all sorts of things. They are also constantly wanting to try new stuff. I don't think they would be doing this if they didn't enjoy the process of writing code.
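Only "drawRectangle()" and "drawImage()" are named above; everything else here is my guess at what such a wrapper might look like. This sketch records drawing commands instead of issuing real OpenGL calls, so it runs anywhere without a GL context:

```python
# Hypothetical sketch of a kid-friendly drawing wrapper. The real wrapper sat
# on top of OpenGL; this stand-in backend just records the commands issued.
class Canvas:
    def __init__(self):
        self.commands = []  # stand-in for a real GL command stream

    def drawRectangle(self, x, y, width, height, color="black"):
        # In the real wrapper this would emit the vertices for a quad.
        self.commands.append(("rect", x, y, width, height, color))

    def drawImage(self, path, x, y):
        # In the real wrapper this would bind a texture and draw it.
        self.commands.append(("image", path, x, y))

canvas = Canvas()
canvas.drawRectangle(10, 10, 100, 50, color="red")
canvas.drawImage("player.png", 40, 20)
```

The appeal for beginners is that each call maps to one visible thing on screen, with all the GL state management hidden behind the method.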