
I get so many contacts from recruiters that I'm not going to follow up on the vast majority of them.

Is salary information absolutely essential? No. But if I'm not pretty confident talking to you is a good use of time, you'll never hear from me, and not providing salary information is a great way to keep me uncertain.

I can think of two exceptions. For my first job I didn't really care about salary - my main goal was to get good experience and get into the field. Also, there weren't a ton of recruiters constantly spamming me. Alternatively, if you're a large well known company and I'm interested, then I'll just google to figure out what sort of salary to expect. In that case you don't have to explicitly tell me.

In all other cases, if you leave out the salary, I'll probably ignore your job.


Thanks for this. I read the title, and my thought process was:

"I agree".

Then:

"There's no way the study backing the article is good enough to draw meaningful conclusions. This is clickbait."

Then:

"I guess if i'm going to be intellectually humble I should still read the article."

I think based on your comment I can safely skip the read, and save myself some time. Intellectual humility is one thing, but I can go read something with a higher probability of being informative instead.


There's more to it than the excerpted quote. It's worth a read. The article does not breathlessly treat any of the studies it discusses as the word of the Flying Spaghetti Monster, and it does a good job of presenting a broader perspective on the question.


One study showing the opposite result likely means that all of their results come from confounding variables.


I feel like this angle is massively overplayed in articles about autonomous vehicles.

Is the automation paradox a real issue? Absolutely. But at present, all the evidence seems to indicate that the people automated cars will save far outnumber the people they will kill.

Focusing on the dangers is alarmist and misleading. The 737 crash is actually a good example: I've never heard anyone with experience in the field suggest anything other than that automation makes planes safer overall. Focusing instead on the fact that the automation isn't perfectly safe only makes sense if you're a media company looking to scare people.


I don't think it's overplayed at all. It gives people a false sense of security when they have automation that works 99% of the time. Look at all the people who think that Teslas can drive themselves. Most of the time they are fine, but then they are lulled into a false sense of security. Sure, they get a ton of warnings, but you start to ignore the warnings when the system keeps working most of the time.

Heck, my car has adaptive cruise control, and I already find my attention wandering sometimes, assuming the car will slam on the brakes if someone gets in front of me. And I'm well aware of the automation paradox and its dangers.


The issue I have is that when people talk about the dangers of automation they never compare them to the dangers of non-automated driving. They always make it sound as if automation is making things more dangerous.

The current evidence suggests that semi-autonomous cars are on balance safer than purely human driven cars. The evidence isn't conclusive, but I've literally seen no evidence presented to support the case that semi-autonomous cars are more dangerous - at best I've seen people argue that we can't yet trust the safety claims. Teslas have been on the road long enough now and in large enough numbers that if they were really more dangerous than non-semi-autonomous cars, I think we'd have seen some non-anecdotal evidence by now.

The reasonable presentation of the issue would be "semi-autonomous vehicles likely make people safer, but there are still dangers and people should pay attention."

Instead, the story you're telling is "Tesla drivers are dying, semi-autonomous cars are dangerous, and you should be scared."

You must see how this is misleading at best, and likely downright counterproductive from an overall safety point of view.


> This is a cohort study conducted over 15 years. You can't really criticize it only based on those numbers

You absolutely can.

Scientific studies should be judged on their potential to produce interesting results. This study's sample size is too small to yield results that merit even the weakest claims. Even the initial sample size was, frankly, just barely large enough to maybe provide suggestions for future research.

If a meaningful drop-out rate is expected, I would argue that the researchers running these sorts of studies are mostly wasting everyone's time and money for the sake of publishing a few papers.
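To put rough numbers on it (purely hypothetical ones, not taken from the study in question): even before dropout, a small cohort buys you very little statistical power, and attrition makes it worse. A quick back-of-the-envelope check:

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical two-sample z-test: small-to-moderate effect (d = 0.3),
    # two-sided alpha = 0.05, per-group n before and after ~30% dropout.
    z_crit = NormalDist().inv_cdf(1 - 0.05 / 2)   # ~1.96
    for n in (50, 35):
        power = 1 - NormalDist().cdf(z_crit - 0.3 * sqrt(n / 2))
        print(f"n={n} per group -> power ~ {power:.2f}")
    # n=50 -> ~0.32, n=35 -> ~0.24; neither comes close to the
    # conventional 0.8 target.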


This article is just fear-mongering and speculation. No facts are given to support the title, and the proposed solutions to the problem (which is never established) are vague to the point of meaninglessness.

Skip the read.


This is the best guarantee any company ever gives.

For smaller companies, if something becomes too expensive or hard, they just go out of business. Larger companies have to draw the line somewhere. You won't get stronger guarantees from anyone; that would be insane.

Google definitely has a history of shutting down free services when they prove unprofitable, but that's only free services. As far as I know, there's no history of Google shutting down paid services without good notice.


I'd add that there's a big difference between having a non-CS degree and having no degree at all.

With a non-CS degree it may be a little harder to get an interview, but if you can interview well, it's not going to count much against you. A degree in a strong technical field (like Physics or Engineering) might even count in your favor compared to a CS degree if you can still do well on an interview.

With no degree at all, you may find it harder to get interviews in the first place. There are definitely still jobs to be had, and companies who will interview you, but the options will, at least initially, be more limited.


The tone of this article makes it sound like it's a bad thing that Facebook and Google won't be much affected by this law, but I don't really think it is.

The reason the big players aren't much affected is that they've already had to make huge privacy changes in response to GDPR. They've already paid those costs.

A lot of what you see about privacy is uninformed fear mongering. This article is a good example: it seems to take it as a given that anything that doesn't substantially increase privacy protections is a bad thing, but no reasons are given, and the costs aren't weighed. In reality there's a balance to be had here and we need to be looking at both sides. I'm not saying there's nothing to fear, just that there's a real disconnect between what the media's afraid of and what we should actually fear, and there's a very real danger of legislative changes doing more harm than good.

GDPR actually provides some pretty substantial privacy guarantees and I think we should wait to see the effects of GDPR play out before we go fiddling with the system more.


The article concludes with a paragraph which basically echoes your sentiment. I don’t see much of your claimed “fear mongering” at all:

“The good news is that while the activists missed their big, showy target, they hit the often sketchy data arbitragers who do the real dirty work of the advertising machine. Facebook and Google ultimately are not constrained as much by regulation as by users. The first-party relationship with users that allows these companies relative freedom under privacy laws comes with the burden of keeping those users engaged and returning to the app, despite privacy concerns. Acxiom doesn’t have to care about the perception of consumers—they’re not even aware the company exists. For that reason, these third-party data brokers most need the discipline of regulation. The activists may not have gotten the legal weapon they wanted, but they did get the legal weapon that users deserve.”

In fact, many of your claims are wrong! The article argues that the new California regulations are a good thing, not a bad thing. Furthermore, it even directly acknowledges your point about how media complaints about privacy w.r.t. tech companies are usually misguided and not what we should actually be afraid of. You even make the same point about how the tech giants have already adapted to GDPR.

You basically say “the article’s tone is bad” but then you make exactly the same arguments as the article.

I know it’s against HN rules to insinuate that commenters didn’t read the article, but your comment is so far off base w.r.t. the article’s content that I’m going to ask: did you even read the article? If you didn’t, kudos to you for having a very nuanced viewpoint; it certainly was much better than mine before I read it. (You really should read the entire article before criticizing it, though...)


The article is more nuanced than your comment implies. The conclusion of the article is that it's a good thing that the law impacts smaller advertisers rather than Facebook and Google.

>The activists may not have gotten the legal weapon they wanted, but they did get the legal weapon that users deserve.


I agree, large companies weren't the targets. Ultimately, large companies are always better positioned to respond to regulation than small companies anyway.

However, just because you're a small company doesn't necessarily mean you're dealing with small data anymore. The data broker companies mentioned in the article are great examples. These companies are the next Cambridge Analytica if not regulated by laws like this.


GDPR introduces a cost, but from what I've heard, it's very messy.

What is needed is civil laws that permit lawsuits and restraining orders against tech companies, as well as criminal statutes to imprison CEOs of companies that willfully track users against their wishes.

For example, Facebook's CEO should be criminally prosecuted for "shadow profiles" collected on non-users. I should be able to file a restraining order against Google preventing them from stalking me online.

I don't care if they pay a hundred billion dollars to the government; I want clear and specific lines drawn with clear and efficient consequences.

Consider this: under the CFAA, intentionally attempting to gain unauthorized access to a system can result in a federal criminal prosecution. Google and Facebook are intentionally collecting unauthorized information on individuals who have clearly expressed their desire to not be tracked by them (browser settings and HTTP headers). Their CEOs should be just as liable as an individual would be if that individual was stalking another person or logging into their account without permission.
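For the curious, the header-based opt-out referred to above is the Do Not Track signal; a hypothetical server that honored it would only need a one-line check:

    def may_track(headers: dict) -> bool:
        # Browsers send the user's opt-out preference as the "DNT: 1"
        # request header; respecting it is a single check before any
        # logging or profiling happens.
        return headers.get("DNT") != "1"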


It seems to take it as a given that anything that doesn't substantially reduce exploitation is a bad thing, but no reasons are given, and the costs aren't weighed. In reality there's a balance to be had here and we need to be looking at both sides. There's a very real danger of legislative changes doing more harm than good.

Sure, a few billion humans might be being invasively tracked and having their data sold, and their mental biases ruthlessly exploited for profit, addictive behaviours encouraged with personalised alerts, but you've got to look at both sides - on the other hand, a few companies are making a HUGE PILE OF MONEY, and I think everyone can agree that's unquestionably a good thing. (No reasons are given).


> a few companies are making a HUGE PILE OF MONEY, and I think everyone can agree that's unquestionably a good thing.

Unquestionably? No, it's a bad thing for monopolies to be maintained through capitalism's Achilles' heel. Weighing the costs would require insight into the cost-benefit tradeoffs of those mega-corporations, which we will never see.


The title of this article is pretty misleading. There is no confirmation that the cell-site simulators disrupt emergency calls. Rather:

> Harris Corporation claims that they have the ability to detect and deliver calls to 911, but they admit that this feature hasn’t been tested.

The fact that the feature isn't tested is a serious concern, and should be addressed, but this headline is completely inaccurate, frankly dishonest, and reduces my faith in the EFF.

I understand where the EFF is coming from, and for the most part believe in their causes, but this sort of willfully dishonest headline just serves to reduce credibility. In the future when I see EFF articles with dramatic headlines I'm going to assume they're probably not what they seem and be less likely to read the article.

Fans of the EFF will forgive these sorts of inaccuracies. Skeptics will not - this sort of article just serves to drive reasonable but undecided people away from your cause.


Repeating a study if, and only if, the study failed to provide the desired result is the most basic p-hacking technique there is. If this isn't p-hacking, I don't know what is!

To give a simple model, suppose you decide an effect is significant if there's only a 5% chance that you'd see this data if the effect didn't exist. If, whenever you don't get the desired result, you run the experiment again with the same threshold for significance, then the probability of declaring an effect that doesn't exist rises to 9.75% (= 0.05 + 0.95 * 0.05).
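A quick Monte Carlo sketch (hypothetical code; it assumes p-values are uniform under the null, which is the standard idealization) confirms the arithmetic:

    import random

    # Under the null (no real effect), each study's p-value is uniform
    # on [0, 1], so a study "succeeds" with probability ALPHA by chance.
    ALPHA = 0.05
    TRIALS = 1_000_000

    false_positives = 0
    for _ in range(TRIALS):
        if random.random() < ALPHA:      # first study succeeds by luck
            false_positives += 1
        elif random.random() < ALPHA:    # rerun only because it failed
            false_positives += 1

    print(false_positives / TRIALS)      # ~0.0975 = 0.05 + 0.95 * 0.05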

The effect isn't merely "not 100% unproblematic"; it's a serious problem! You've gone from what looks like a 5% false-positive rate to an actual rate of 9.75%.

The fact that the second study is done with a larger sample is pretty much irrelevant unless it also comes with a stricter (lower) p-value threshold for accepting the result.

