jolfdb's comments | Hacker News

"Middlebrow dismissals" like this are discouraged on HN.


I don't think my post is "middle-brow". It's also not a dismissal. Can you explain what about it made you think so? I posted detailed, personal experiences that directly argue against particular items of advice in the PDF. This is DH4 in http://www.paulgraham.com/disagree.html.


I was curious about the exact meaning of a “middlebrow dismissal” (I'm a non-native speaker), but I could only find the following in the HN guidelines, which neither mentions middlebrow dismissals nor seems applicable to the top-level comment:

“Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something”

What exactly is a “middlebrow dismissal”, and why would the top-level comment be one?


I'm a native English speaker and can take a stab here. Lowbrow typically means unsophisticated (as in, like a Neanderthal or an ape), whereas highbrow means sophisticated. I'm assuming middlebrow means, well, something in the middle.

All of that being said, the "middlebrow dismissal" observation/criticism makes zero sense to me here.



I'm a native speaker and I've got no idea what he's saying. Honestly, I thought the guy made it up to try to sound cool...

I also assumed, given that it was a green account and a one-line response, that it would be downvoted and marked as spam.


Why girls?


Maybe the GP went to an all girls school?


The Green New Deal is a mishmash of left wing talking points, not a coherent proposal.


Not every bad thing is authoritarian. Sometimes you have people-powered plagues, such as racism and bigotry, which have been part of human nature for millennia.


Lawful discrimination against minorities (Jim Crow, Poll Taxes etc.) is by definition authoritarian. See: https://en.wikipedia.org/wiki/Authoritarianism


When joining the party is required if you want to get a job in your career field, you'll join the party.


"it's for a popular audience" is a bad justification for publishing misleading or wrong information that teaches people a wrong headed view of what a science is. It's not just poor quality, it's a actively harmful to culture and social progress. It's a supernormal stimulus, the journalism equivalent of replacing real foods by sugar and oil mixes -- it pushes people toward preferring wrong things over right things.


The general public cares about the conclusions of the science, not about making up its own mind about what the science says. The overall article does a very good job of summarizing the paper. I'm not sure that what you're calling "wrong information" actually qualifies as such.


> fiduciary responsibility to lower costs and raise profits to their shareholders.

This is a thoroughly debunked lie.


That doesn't work at all. Are you going to fill your network with people who aren't really your friends? Visit locations you don't want to visit? Devote more than half your time to this disinformation?


You are saying that people like the banks more than they like tech companies? Or that credit companies had anything approaching the data that is available today: 24/7 location data, all purchases, all associates, and more?


Be real. You are saying that data should not be collected at all.


I've long dreamed of a contract system that does this. Every contract you're a part of becomes explicit. We get paid for our data based on our preferences. Let's say the cost of showing me an ad is $51; so be it. If that means I can't read a blog post, fine: at least I agreed to the terms for a price.

The system would be all-encompassing but organized by agreement type: titles, loans/mortgages, insurance, purchases/warranties, ad networks, bets, etc. All managed by an open system but used by private parties. I don't advocate for this to be done by, say, Ethereum; I still think the classic legal system can be used to settle disagreements. But at least everything you agreed to would become very explicit, and there could be ways to "break out" of agreements, with whatever very explicit ground rules to follow after that.
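To make the shape of that concrete, here's a rough sketch in Python. Every field name here is my own invention for illustration, not part of any real system; the point is just that each agreement is an explicit, priced, exitable record:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Agreement:
        """One explicit, priced agreement between a user and a counterparty."""
        kind: str            # agreement type: "ad-network", "loan", "warranty", ...
        counterparty: str    # who you agreed with
        price_usd: float     # what you're paid (or pay) under the agreement
        terms_url: str       # the exact terms you accepted
        agreed_at: datetime  # when you accepted them
        exit_clause: str     # the explicit ground rules for breaking out

An open registry of records like this would be the "explicit" part; the pricing and break-out mechanics would sit on top.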


I'm absolutely being real, and I know how to do it.

It's not even hard in concept. Of course, because data export/import mechanisms are so baroque and error-prone, it will take effort to implement, but that's already true of all existing systems.

Any time you export data, you sign the transfer. Anyone who then re-exports it has to sign as well, incorporating your signature into the export, and so on.

It would actually make keeping corporate-held data clean and healthy much simpler, which is something people already spend considerable time and money on. And it's a basic policy mechanism for implementing subject-dictated controls rather than the vague, invisible, and unenforceable corporate-dictated controls that exist today.
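A minimal sketch of what I mean, in Python. It uses HMAC with per-party secrets purely as a stand-in for real asymmetric signatures, and all the names and keys are illustrative; the point is only that each re-export signs over the previous signatures, so the chain of custody stays attributable:

    import hashlib
    import hmac
    import json

    def sign_export(payload: dict, chain: list, secret: bytes) -> dict:
        """Sign an outgoing export so the signature covers both the payload
        and every signature attached by earlier exporters."""
        body = json.dumps({"payload": payload, "chain": chain},
                          sort_keys=True).encode()
        sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
        return {"payload": payload, "chain": chain + [sig]}

    # The originating service signs the raw payload...
    first = sign_export({"user": 42, "email": "a@example.com"}, [], b"origin-key")
    # ...and any re-export must incorporate that signature into its own.
    second = sign_export(first["payload"], first["chain"], b"broker-key")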


I work with user data as part of my job.

We gladly set up large pipelines and infrastructure to let data flow from users, through message queues, into databases, and from there into analytics workflows. But we balk at the thought of this process being anything but unidirectional, or at implementing exportable logs that track how data is transferred, combined, or analyzed.

If the way user data propagates through third parties were auditable and visible, it would definitely at least double the work of setting up user analytics. But other industries make do with similarly demanding regulations. If you can't afford to let users see what you're doing with their data, should you be allowed to collect user-level metrics at all?
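The mechanics aren't exotic. Here's a hedged sketch in Python of the kind of exportable log I mean (the file path and field names are just illustrative), appended every time user data moves between systems:

    import json
    import time

    AUDIT_LOG = "data_flow_audit.jsonl"  # append-only log; path is illustrative

    def record_flow(user_id: str, source: str, sink: str, purpose: str) -> None:
        """Append one auditable record per data movement, so the full
        propagation path can later be shown to the user."""
        entry = {"ts": time.time(), "user": user_id,
                 "from": source, "to": sink, "purpose": purpose}
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(entry) + "\n")

    record_flow("u42", "events-queue", "analytics-db", "weekly retention report")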

We could also blacklist a limited set of data types, as HIPAA effectively does, to better enforce privacy. However, even HIPAA is not restrictive enough: there is a whole subfield of academic privacy research showing that even HIPAA-compliant datasets (in the sense that they omit certain columns) can be made to reveal sensitive information by re-linking them against public data [0, 1, 2, 3, 4] (a toy illustration of [3] follows the references below).

But the tech industry is better equipped than any other to enforce algorithmic privacy and be a good steward of data. We just don't want to, because it's hard. Building structures up to safety code is also hard (and in some ways too bureaucratic or poorly implemented; sometimes private companies can even copyright building-code laws [5]), but it's good that we do it, in general.

[0] https://en.wikipedia.org/wiki/K-anonymity

[1] https://en.wikipedia.org/wiki/L-diversity

[2] https://en.wikipedia.org/wiki/T-closeness

[3] https://en.wikipedia.org/wiki/Differential_privacy (famously used by Apple)

[4] https://en.wikipedia.org/wiki/De-identification

[5] https://techcrunch.com/2019/04/09/can-the-law-be-copyrighted...
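As a toy illustration of [3]: differential privacy just means adding calibrated noise to query results. For a counting query (sensitivity 1), Laplace noise with scale 1/epsilon suffices. A sketch, assuming numpy is available:

    import numpy as np

    def noisy_count(true_count: int, epsilon: float) -> float:
        """Laplace mechanism: a counting query changes by at most 1 when one
        person is added or removed, so Laplace(1/epsilon) noise gives
        epsilon-differential privacy."""
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Adjacent datasets (counts of 100 vs. 101) produce statistically
    # similar outputs, which is what protects any single individual.
    print(noisy_count(100, epsilon=0.5))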

P.S.: I do think HIPAA, GDPR, etc. have their flaws. But that just means we should try to do better, rather than blindly oppose any attempt to do better. The vast majority of privacy gains can be had with the simplest changes: anonymization, pseudonymization, limits on time/spatial granularity, and so on.
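For example, two of those simple changes fit in a few lines of Python (the pepper and field choices here are illustrative, not a production recipe):

    import hashlib
    from datetime import datetime

    PEPPER = b"rotate-me-and-store-separately"  # secret kept outside the dataset

    def pseudonymize(user_id: str) -> str:
        """Replace a direct identifier with a keyed hash: rows stay joinable
        internally, but the raw identity never enters the analytics dataset."""
        return hashlib.sha256(PEPPER + user_id.encode()).hexdigest()[:16]

    def coarsen(ts: datetime) -> str:
        """Limit time granularity to the day."""
        return ts.strftime("%Y-%m-%d")

    print(pseudonymize("alice@example.com"), coarsen(datetime(2019, 6, 1, 13, 37)))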

