Thank you. I'm not having a good time at the moment. Anyway, the basis of my test hypothesis is that people are easily fooled by URLs, both through HTTPS and through brand recognition (e.g. subdomains), so I conducted a survey which revealed a very real problem: https://dejanseo.com.au/trust/

Raw data: https://dejanseo.com.au/wp-content/uploads/2017/04/survey-te...


Hey man, I know how hard the hate hits when you explain something like this to a community. It happened to me here too when I talked about the mass weaponization of autonomous systems via cyber attack. One guy said I was somehow right and a crank at the same time, and dismissed one of my conclusions out of hand without addressing any of the reasoning behind it. It hurt at the time, but I came to understand it wasn't really directed at me.

The thing you've got to realize is that many here make their living trying to secure systems, and we're finding it hopeless. The way you did what you did was fine. To prove the hack you needed to violate Google's trademarks; it's in the very nature of the hack and, as far as I'm concerned, warranted given that they have a bug bounty. Now, I probably would have disclosed it to Google, Bing, etc. ahead of time, but it's your bug. You could have sold it to blackhat scammers, and you didn't. For all we know this hack could have been going on for years.

I think most people are confusing their anger at the situation with anger towards you. You're cool.


This was my experience working on election integrity issues.

No good deed goes unpunished.


Thank you! :)


Don't listen to the haters here. The same people upvoted this article 3 days ago and then promptly forgot about it: https://news.ycombinator.com/item?id=17799083


I think you'll find that a lot of people on this site—from lots of political leanings—believe there's a wide gulf between "This behavior is a bad idea and we should use social mechanisms like debate to discourage it" and "This behavior should be illegal and we should point the government's monopoly on violence in your face to make you stop."

The flip side of the defend-to-the-death quote is that caring about someone's right to speak, even caring about that position being well-represented, doesn't mean you have to agree with what they say.


The irony...


I believe that you acted ethically, unlike Google. The History API should be locked behind one of those "RandomSite.com wants to use the History API: Allow / Deny" dialogs, and the TLD and second-level domain should be clearly marked in browsers, to prevent this sort of https://google.com.search.mydomain.cz shenanigans.
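
For what it's worth, a hypothetical sketch (in TypeScript) of what such a gated call could look like; pushStateGated, the historyAllowed flag, and the dialog text are all invented for illustration, since no browser ships anything like this:

    // Hypothetical sketch only: no browser exposes a permission-gated
    // History API. confirm() stands in for an imagined browser-chrome
    // Allow / Deny dialog.
    let historyAllowed: boolean | null = null;

    function pushStateGated(state: unknown, url: string): void {
      if (historyAllowed === null) {
        historyAllowed = confirm(
          `${location.hostname} wants to use the History API: Allow / Deny`
        );
      }
      if (historyAllowed) {
        // The second argument (title) is ignored by browsers.
        history.pushState(state, "", url);
      }
      // If the user denied, the history rewrite is silently dropped.
    }

    // Usage: pushStateGated({ page: 2 }, "/article?page=2");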


For what it's worth, I love reading about this stuff, though I specialize in InfoSec so this sort of thing is actually pretty common in our communities.

You would definitely have had a much easier time with them than you're having right now.

But for what it's worth, this will blow over soon enough; the internet does not have the greatest memory (unless you actually did something horrendous, which you didn't).


I hope so, and I also hope Chrome gets a fix for this.


I read the article, but I still don't get what's Chrome-specific about this vulnerability, or what a good fix would look like.

My reply to someone who proposed making the back button always go to the previous URL: https://news.ycombinator.com/item?id=17826406


The issue is that some web applications don't load what traditionally were discrete pages with their own URLs (e.g. PJAX). It's a trend you'll find in sites built to feel more like applications. Scroll to the bottom of an onion.com article and watch your URL update to the next page without a page refresh. The History API was introduced so that modern sites built like this could still let the user navigate back and forward: it lets the site update the browser's location history and, effectively, what URL the back button will point to. I could imagine blocking this behavior if it points to a site off the TLD and its subdomains. I'm hard pressed to figure out how they could prevent this; it's definitely a flaw in the trust model, but probably worth the trade-off.
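
For reference, a minimal sketch (TypeScript) of the History API calls at play; the URLs and state objects here are made up for illustration:

    // Legitimate SPA use: record the next article page as a new
    // history entry without reloading, so Back/Forward still work.
    history.pushState({ page: 2 }, "", "/article/page-2");

    // The same call enables the abuse: push an extra same-origin
    // entry so that pressing Back lands on a page the site controls
    // (styled, say, to look like search results) instead of leaving
    // the site.
    history.pushState({ decoy: true }, "", "/looks-like-search-results");

    // The page can react as the user moves through these entries.
    window.addEventListener("popstate", (event) => {
      console.log("moved within session history", event.state);
    });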


> I could imagine blocking this behavior if it points to a site off the TLD and its subdomains

This would not address the vulnerability in the article.


Hopefully, but even then, it's good that you are making people more aware of just how sketchy it can get.

Chrome will always have nasty exploits, because it has to deal with the flexibility of the world wide web. It's more important that we, the users, are aware of the tricks attackers employ than that every hole gets a clean solution.

I don't trust that any software is secure, and to date that mindset hasn't burned me yet!


We need experiments like this.

Clinical studies try to address factors beyond whether the drug technically works: does it work in practice, coping with people doing everyday things like having dementia, drinking, or having babies?

We have a flawed obsession with responsible disclosure (which, if we mandate it, should include public disclosure). What we need is a framework for software studies that allows research of any nature, including in at-risk areas, answering to an ethics committee and regulators rather than to disclosure terms of service written by the company likely to be put in a bad light.

We need an equivalent of ICH GxP. Drugs have to deal with all the same craziness as software; they're just centuries ahead in how to handle it (although they still fail at public disclosure).

Was this study appropriate? While Google corrupts the security integrity of the internet with its ad and analytics systems, it shouldn't be complaining. For the rest of us, I think we need to press for regulation if we want to draw lines, and look to the drug industry for inspiration. At the very least we need InfoSec trials, if not the whole suite for software.


Did we not have enough evidence already that this is true??!


I'm willing to give you the benefit of the doubt and assume you were just unaware of how things are supposed to be done (reporting exploits to the vendors privately and waiting for the fix before going public), but man, you did a fantastically dangerous thing even if it was unintentional.

I'd never condone beating up on somebody on the internet, but I dearly hope you've learned a valuable lesson here. You've put lots of people in danger of being exploited. It's not about whether or not you'd do anything malicious with it; it's about all the other people who now can, because Google doesn't have a fix out yet.


This is the misconception I can't stand: holding individuals responsible for a product's or company's defect. I thoroughly disagree with the idea that it's his fault people are vulnerable.

So-called responsible disclosure is just a marketing spin term. Disclosing bugs privately is a favour, not a responsibility. All it does is reduce the risk from bad software decisions. It doesn't solve anything.

How about the free market instead? If you run a multi-billion-dollar company that can be hurt by issues like this, then it's on you to make it more profitable to disclose issues privately. If you can't or won't do that, then you're exposing your company and your customers to risk. Enough with the shunning and the "responsibility" of individuals who expose bugs.


I sympathize with THIS position. It's the same blame-shifting crap as when "identity theft" becomes your fault, even though any cashier or clerk can "steal your identity".

What this marketing spin does is give cover to those who design badly secured systems.

http://www.youtube.com/watch?v=CS9ptA3Ya9E

Also similar is the "jaywalking" idea, invented by car manufacturers to make the default right of way belong to cars!

http://amp.charlotteobserver.com/opinion/op-ed/article650322...


[flagged]


Google? When I brought up a serious issue in 2012 (https://dejanseo.com.au/hijack/), Google never fixed it:

In summary, I can take any of your (or anyone else's) content, pass more PageRank to it than the original page has, and then I become the original page. Not only that, but all your inbound links now count towards my site, and I can see your links in the Search Console of my domain.

This is what link graph theory refers to as "link inversion", and it is very harmful to smaller publishers.


I can't speak to that particular exploit, but no matter what you always go to the vendor privately first. Period. If they are uncooperative you can then go public. Not before.


I'm not sure how to respond to your comment (for the record I didn't downvote you). The free market point was obvious to me, but I'll elaborate.

When he chose to expose this bug, either he wasn't aware of an alternative (so-called responsible private disclosure) or that alternative just wasn't appealing enough. Since we're dealing with a company that generates income (indirectly) through the product, they risk financial consequences from this sort of exposure. It follows that doing more to incentivize and raise awareness of their disclosure policy would reduce their risk, which would have a financial impact. It's up to them to decide how much money / effort / resources to spend on reducing that risk.

My stance is that public shunning doesn't solve the problem of releasing buggy software. I'm actually a Google fanboy, but (to me) they could do better. Instead we get "The site is completely removed from their index without any notification." Maybe we need to elevate browser security to the level of Space Shuttle safety? Obviously that costs more and takes longer, slowing innovation, but IMO the market should determine that.

TL;DR: The idea that the individual who exposes a company's bugs is the one responsible for them is completely absurd to me. I'll respect you having a different opinion on it.


Please don't do this.

https://news.ycombinator.com/newsguidelines.html (see "idiotic")


I mean, Google has been told (over and over, for a long time) that HTTPS doesn't fix the broken state of trust on the web, and that the back button shouldn't have an API. These are both well-documented security problems. What has happened now is that Google is under public pressure and scrutiny to actually fix these things. A fire has been lit under their bum, and rightly so.


I believe they did remove the green lock from https sites to avoid implying trustworthiness. And removing the back button API is something Google can't decide on their own; it has to go through the standards process.


> I believe they did remove the green lock from https sites to avoid implying trustworthiness.

Nope. I'm on build 68.0.3440.106, the latest public stable build, and as I'm writing this comment there's a little green lock and "Secure" right next to https://news.ycombinator.com.


It's a process. The green lock will eventually be removed; instead of the lock and "Secure" you see now, you'll see nothing, while HTTP-only sites will be marked "Not Secure". You can already see this if you go to a non-HTTPS site like neverssl.com: there's a "Not Secure" banner in white.


Trying to be objective and understand my own motivations here: obviously I didn't do anything out of malice. But yes, I could have told Google directly about the problem, but then I'd have no cool story to publish on my blog. At the end of the day, that's what it boils down to. Now that I've gotten too much attention from it, I regret all of it.


"I could have told Google directly about the problem, but then I'd have no cool story to publish on my blog"

First of all, you definitely would have. Standard practice is: 1) report the bug privately, 2) wait for a fix, 3) get the go-ahead to publish your report and take credit publicly. That's how it always works; that's how security researchers build their reputations and careers. I guess you just weren't aware of that.

Second of all, even if you wouldn't get to publish it, that is horribly selfish reasoning. Putting millions of people at risk of having their information stolen for the sake of a popular blog post?


I fail to see how dejanseo put people at risk. Exposing how a tool is dangerous and poorly conceived isn't the same as conceiving a dangerous tool.

In this case, Google put millions of people at risk, and dejanseo actually contributed to saving them.


Right; but sometimes someone is the first to have an idea or spot a vulnerability, even if it seems trivial to them. Once it's public, obscurity is no longer a factor, so it's a good idea to give the vendor a chance to remove the vulnerability before that obscurity is eliminated. Obscurity does actually matter in the real world, even though it is a useless design principle.


That's right.

But while there are a lot of domains where I don't accept the reasoning "someone else must have thought of this before", vulnerability finding is one where I can't help but believe that every publicly disclosed vuln has probably been secretly exploited and sold for years.

(The only data point I have behind that is that there are nation-state agencies pretty much dedicated to finding these, and they've gotten really good at it; cf. Stuxnet!)

So, while this is conviction only, I highly doubt any independent white/gray-hat vuln finder will ever be the first to find a given bug, and I applaud any kind of disclosure.


Yes, the reveal is required. But it doesn't have to happen without the vendor's knowledge. The rush to get it out without allowing the vendor to respond is unjustified and reckless. The TLAs using the vuln are keeping it secret, after all, and the script kiddies' public trashing of people is, I think, worse than the TLAs' careful abuse.



