Hacker News | nickodell's comments

Are non-sequiturs always malicious? For example, suppose you have a news site, and it has a story about Ukraine, followed by a story about school shootings. Even if two links next to one another are unrelated, that doesn't prove that they're not genuine.


But in your example, the context is "news", and both examples you have provided fall into that context.


Alex Oh worked for a law firm which represents Exxon. That law firm was accused of obstructing a deposition to avoid providing answers to the other side. [1]

>ORDERS defense counsel to show cause by May 14, 2021 why sanctions should not be imposed under Rule 11 (b)(3) for alleging that plaintiffs' counsel was agitated, disrespectful, and unhinged during the deposition despite a lack of record evidence supporting those allegations. See Mem. Op. 29-31.

>ORDERS defendants to serve a copy of this order on Ms. Oh.

Was Alex Oh specifically responsible for this conduct? The order doesn't say.

What does this have to do with Clinton? Nothing, except in a six-degrees-of-kevin-bacon sense.

[1]: https://www.courtlistener.com/recap/gov.uscourts.dcd.14559/g...


He was senior enough to share the blame, because he was more likely aware of it, if not involved.


>My prediction: this firm will probably try to get removed from the case, rather than open source their shitty code.

That isn't necessarily their choice. The prosecutors will make the decision about whether to withdraw the DNA evidence. They probably won't, given that they would need to give the defendant a new trial, which could lead to an accused murderer getting off. A bad look for any prosecutor.

More to the point, if the firm withdraws from any case where their credibility is questioned, what does that say to law enforcement agencies who are thinking about using their software?


My understanding is that (some) law enforcement agencies have been more than happy to drop cases rather than subject investigative tools to proper scrutiny[0]. They have no qualms about resorting to "parallel construction"[1], simply using the inadmissible (sometimes illegal) evidence to find admissible evidence.

[0] https://arstechnica.com/tech-policy/2015/04/fbi-would-rather...

[1] https://en.wikipedia.org/wiki/Parallel_construction


Stingrays are more useful as an investigative tool than an evidentiary tool. DNA is the other way around.


That would imply that the prosecutor would prefer taking the life of an innocent rather than hurting his career, making the prosecutor kind of a criminal.


>making the prosecutor kind of a criminal.

Never met a lawyer before huh?

Jokes aside, prosecutors pushing through cases they know to be unsound isn't exactly uncommon. Many prosecutors are more concerned with their conviction rates than with justice, because that's what they are measured and rewarded by.


I often hear this. Who is rewarding them for a high conviction rate?


Voters, because when it comes to issues of criminal justice, crowds are rarely paragons of sober temperance and restraint.


I think you are wrong, and that most prosecutors want to do the right thing, like most working people.


"Right" and "wrong" are dependent upon the system and how it rewards you.I would agree that most prosecutors what to serve justice for malfeasance that has been committed. That's different than whether a case is the "right" or "wrong" one to take.

If a case seems unclear, and you could spend years working on a conviction that will ultimately fall through, that hurts your ability to do justice for more readily winnable cases. You have to spend the time building a case, do all the paperwork, go to trial, etc. That's opportunity cost. So spending it on a case you have a 10% chance of winning just isn't a good use of time. Add to that the fact that conviction rate is a metric used to quantify skill, and you're rewarded for serving justice successfully. That then dictates how much money you can get, which can help fund enforcing justice.

I believe you're looking at the moral right/wrong, and I don't believe that is the same right/wrong being discussed in terms of how lawyers often choose cases. At the end of the day, lawyers need work and they get that mostly through word of mouth and reputation. You don't really get either of those when you lose cases.


Your version of the right thing and the prosecutor's version might not align.

The right thing for them is to put as many criminals as possible behind bars. They review cases and pick ones they can win. They will attack and find unrelated weak points in your character to win. They believe they are doing the right thing and will use whatever they can legally against you. You being innocent and going to court means someone made a mistake. To confess to a mistake loses you credibility, and to confess to an ongoing process mistake could open up other cases where dangerous people could be set free.

Is that your version of the right thing?


It is trivially easy, if wrong, to convince one's self that the accused are probably guilty and that actions that convict them are moral, even when the proof is insufficient or weak or the procedure flawed.

Most people want to do the right thing, where the right thing is almost entirely defined by the norms and customs of their environment. If the norms and expectations are high ethical and correct standards, people will follow them to the degree they are able.

To what degree are such standards broken or defective in America though?

Lest we forget, the head lawyer of Texas, a state home to approximately 27 million people (around 8% of the nation), is a man whose own prosecution has for years been stymied only by the difficulty of prosecuting the man at the head of the state's justice department. Either 8 or 9 people directly beneath him (I've lost track) have resigned and accused him of corruption.

This isn't even an isolated instance; corruption is in fact found all over the United States.

Even when in theory we would like to do the right thing, we have a hard time establishing which standards are even real. For proof of that, look no further than the "science" of hair analysis, which the FBI spent decades using to convict the accused before we realized that its examiners were incapable of differentiating dog hair from human hair.

Think of people going in to work, producing work product about imaginary science they were pretending to do competently, and sending people to death row in part because of that fake work product.

https://www.washingtonpost.com/local/crime/fbi-overstated-fo...

The justice system in America is a bad joke that is primarily differentiated from, say, Cuba's in that bribes are paid to your lawyer instead of directly to government officials.


Prosecutors are shaped by an environment that equates “the right thing” to “punishing the guilty.” It’s like any profession... a surgeon will think you need surgery and a prosecutor will think the guy in handcuffs needs to go to jail.


Sadly, evidence contradicts that thought. It shouldn’t but it does.


Everyone wants to do the right thing.

It is just that some think the right thing for themselves is to maximize their career progress.

And I would not know in general about state prosecutors, but what I know anecdotally, second hand, does not sound good.


I believe that's true as well, and I never said otherwise.


The prosecutor doesn't see it that way. They see it as just "knowing" the guy is "definitely guilty". It's just like, a feeling you know? And a win will look great when they go for re-election (why is that even a thing?).

Presuming rational actors in this case is missing the general problem with the system: people very easily convince themselves they know the truth, no matter how the validity of the evidence changes. Whatever the evidence said initially must be right; it's misinformation 101. Once a belief is established, it is much harder to change.


> And a win will look great when they go for re-election (why is that even a thing?).

You would prefer that they not be elected? That they would be appointed by some politician, with the public having no recourse?

The fact is that the public like prosecutors who convict people. That's deeply unfair. But it's also deeply democratic.


You already elect politicians. If that system is producing people you don't trust to manage the affairs of state, why would electing prosecutors lead to different results?


Hard to say, really. It's one of those compromises, quis custodiet and all that.

I very much agree with you: a government has a monopoly on violence and ultimately we all end up trusting it. Too many checks and balances lead to gridlock. Too few lead to oppression. Much of it ends up being decided on inertia. We do it both ways in different jurisdictions, with successes and failures in both.


That's not how prosecutors work in the US. Their goal is to win the case, not make the "right" decision. They'll spin evidence as hard as they can against the accused.


>taking the life of an innocent

The prosecutor isn't unilaterally deciding whether the DNA evidence is valid. There will be a public hearing where both the prosecution and defense show evidence about the validity of the DNA evidence, and a court will rule based on that evidence.


You should read up on the rates of plea bargaining, as well as the methods prosecutors use to push defendants to do so, which include:

- Not revealing all information they are required to.

- Parallel construction (see above)

- Overcharging, with the goal of making the plea more palatable than the cost/risk of defending multiple absurd charges.

- Lying to you while getting to throw you in jail if you lie to them.

As a result, only 5% of federal cases go to trial.

None of these behaviors are rare. If your understanding of the legal system is based on popular culture, as most people's is, it is built on what is basically law enforcement propaganda that has little relationship to reality.


Believe it or not, I was already aware of all of those things, having followed a number of criminal defense blogs.

If you read the article and the appellate decision that is linked, they say what I just said:

>On Wednesday, the appellate court sided with the defense [PDF] and sent the case back to a lower court directing the judge to compel Cybergenetics to make the TrueAllele code available to the defense team.


Apologies, I thought you were stating facts about DNA evidence in general, not about this specific case.


Yeah, the system is in a pretty horrific state when you have to count on prosecutors' restraint for anything. Granted, we are in such a state, but it's beneficial not to just accept that as the status quo.


It would also give every person convicted using their software an incentive to open an appeal.


I like how this is considered a bad thing. Like we can’t let this guy point out that he’s being convicted by an unauditable black box that suddenly isn’t worth using if it has to stand up to scrutiny because then everyone would want to. The horror.

Like I’m actually kinda shocked this is the reality. I would have assumed that DNA evidence would have some blessed methodologies and tools/algorithms, with a strict definition of what constitutes a match or partial match specifically so this wouldn’t happen.


Here in Sweden, there is a legal practice that you can't find someone guilty based on DNA evidence alone. Probabilistic evidence is nice to point law enforcement in a direction, but there is always a risk of false positives.

In this case we are also dealing with probabilistic genotyping involving DNA mixtures, with DNA from several individual contributors and most likely degraded DNA. It is the tool the police can use when other, more traditional methods are not possible because of the mixture. That should mean the qualitative value of the DNA evidence is lower, requiring even stronger additional evidence from other sources.


In the U.S.A., a man can be convicted upon the word of a single witness, even if the defence poked significant holes in the reliability of said witness.

What can happen in the U.S.A. is that one lone man says "I saw the defendant do it."; the defence attorney can point out that the witness was drunk at the time, that he has a motive to lie, that he initially reported another story to the police and only later settled on this story, and whatever else to render him completely unreliable.

The jury can nevertheless return a verdict of guilty, and there are no grounds for appeal then, as it is the power of the jury to decide who is "reliable", and it is not required to explain its thought process at all.

What a shocking development that such rules would result in a criminal justice system where a defendant's race and gender are such a factor.


> What a shocking development that such rules would result in a criminal justice system where a defendant's race and gender are such a factor.

It takes only one person on the jury to hang the jury. It's not a majority vote; it's a unanimous vote.


Bench trials are also required to be unanimous.

Methinks the U.S.A.-man often thinks that bench trials in other countries are done by a single juror; they are not and can range from three to twelve in how many professional jurors are required to reach a unanimous conclusion.

But this is not so much about lay fact-finding vis-à-vis trained fact-finding as about the rules of evidence.

Scotland also has jury trials, but does not permit that a man be convicted upon the word of a single witness; there must be further independent, corroborating evidence.

There are many other differences with, for instance, the Dutch system that guarantee a fairer trial. One very big one is that in the Netherlands both the defence and prosecution have one groundless appeal; either side, if it does not agree with the verdict, can demand a fresh new trial with different jurors once. This obviously reduces flukes of justice.

The other is far stronger rules of evidence and more consistent rulings. Juries are very fickle, and legal experts rarely know what verdict they will return based on the evidence before them; whereas with trained jurors, the verdict is often similar given the same evidence.

Indeed, one might argue that the practice of plea bargains, which would be considered inconceivably unethical in most jurisdictions, is actually the saving grace, as it lends stability to this otherwise fickle system: the negotiations between the two parties are more reproducible, given the same evidence, than fickle juries are.


Interesting. What does Swedish law consider non-probabilistic evidence? Even something like eye-witness testimony I would consider to be probabilistic, given how easy it is to manipulate memories, even unintentionally.


Clear videographic evidence, use of a PIN that only the accused had access to, etc.


How do you prove that only the accused had access to a PIN? Surely that's probabilistic as well?


Isn't all evidence probabilistic? What's an example of something that isn't?


This is one of these scary areas where reality matches my teenaged experiences playing Shadowrun. I used to hope that the brutal dystopia we played through was just fun. Now I’m seeing that the present needs a word even more brutal than dystopia. :(


Kafkatopia


You nailed it! Thanks friend, that’s some wicked writing and thinking.


I do not find this reality at all worse than people being convicted upon the black-box testimony of blood spatter analysts, which is simply an expert testifying that in his conclusion the blood indicated such-and-such.

Or, of course, that the U.S.A. permits conviction based on the sworn testimony of a single eyewitness, which is notably unreliable.

All of these are black boxes that are routinely used to convict. It would not surprise me if such software were far more reliable than human eyewitness accounts, but if there's one thing I have noticed, it's that a man is seldom afraid of bad matters; he is only afraid of bad matters produced by new technology. Far worse matters can stay, so long as they be ancient enough.


Why couldn't you attach to the screen, press Ctrl-C, then press up arrow to get the original command?


The script was running in production, doing actual work. Stopping it - when you don't remember a damn thing about it and how it worked - was not an option I wanted to consider at the time.


or ctrl-z for that matter


The answer to that is simpler: Try doing the Ctrl-z/fg sequence in your bash with this one:

    while true ; do sleep 1 ; done
You'll see that after 'fg', the loop ends :-)

Simply put: C-z followed by fg is not bulletproof. Not to mention that I had no idea what I was running in there, and how any signal would impact it... So I wanted to find a safer way to dump what was already there, in my shell's memory.
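(For completeness: if all you need is the command a process was started with, rather than anything that only lives in the shell's history, you can read it straight out of /proc on Linux without sending the process any signal at all. A minimal sketch, assuming a Linux host; nothing here is specific to my setup, and it obviously can't recover anything that only ever existed in the shell's memory.)

    import sys

    def read_cmdline(pid):
        """Return a running process's argv (Linux only).
        Entries in /proc/<pid>/cmdline are NUL-separated."""
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            return [arg.decode(errors="replace") for arg in f.read().split(b"\0") if arg]

    if __name__ == "__main__":
        # Usage: python read_cmdline.py <pid>
        print(" ".join(read_cmdline(int(sys.argv[1]))))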

Anyway, I hope you guys enjoyed reading this regardless :-)


>First, the client makes an unauthenticated request to the server to retrieve the password salts associated with the username. If no user is found, an error is returned to the client. If a user is found, the server sends the client the user's password salt and password token salt, which the client uses to rebuild the password token. The password token is then passed to the server for authentication. To prevent brute force password guesses, clients get 25 incorrect attempts in a row before the server locks the user out of their account for 24 hours (Note we are aware this introduces a DoS vulnerability. Our first priority is to protect user data. We plan to implement a more sophisticated lockout mechanism in the future).

Hang on, so the process of retrieving the salt gives the remote client information about whether the user exists? Doesn't this mean that an attacker could take a list of possible usernames, and confirm which of them are using your service?

Seems like you could return a salt even when the user doesn't exist, and that would prevent this information disclosure.
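A minimal sketch of that idea, assuming a Python-ish server (the names are my inventions, not Userbase's actual API: `SERVER_SECRET`, `db.find_user`, and the response fields are all hypothetical): derive the fake salts deterministically from the username with a server-side secret, so repeated queries for a nonexistent user always get the same, plausible-looking bytes.

    import hashlib
    import hmac

    # Hypothetical server-side secret; in practice it would live in an env var or KMS.
    SERVER_SECRET = b"long-random-server-side-secret"

    def get_salts(username, db):
        """Return real salts if the user exists, otherwise deterministic fakes."""
        user = db.find_user(username)  # assumed lookup helper
        if user is not None:
            return {"password_salt": user.password_salt,
                    "password_token_salt": user.password_token_salt}
        # HMAC of the username: stable across requests, indistinguishable from random.
        fake = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).digest()
        return {"password_salt": fake[:16], "password_token_salt": fake[16:]}

One wrinkle: the database lookup itself can still leak through timing, so you'd want the "found" and "not found" paths to do comparable work.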


Userbase is built on the assumption that, in the event an attacker compromises the Userbase server and database, the attacker would not be able to access protected user data. We chose this assumption to build on because we figure that users and developers alike should assume that data stored at rest in cloud-based databases will eventually be leaked, as we've seen in countless examples at almost every major company. Thus, we figured the default assumption is that usernames would not be expected to be private (and so yes, to answer your question, user enumeration is currently possible).

Additionally, practically defending against user enumeration beyond rate limiting sacrifices a level of security and privacy (for example, by requiring users to provide an email to sign up to your service, or through some other means that likely ties the user to an identity, and by storing this in our database in plaintext), rather than allowing them to use pseudonymous usernames alone.

While we do recognize username enumeration is an issue (because users tend to reuse passwords from other sites, or don't want to be found out using a site), we concluded that properly defending against user enumeration by default would have too material a negative impact on user experience for little gain on top of what we already provide in the way of protecting user data. Instead, we focused on defending against potential follow-up attacks by limiting brute-force login attempts, and on recommending that you tell your users to use a password manager at sign-up.

The most significant place that defending against enumeration affects is sign-up. When a user's account already exists, we say the username is already taken, which isn't possible when properly defending against enumeration.

We're planning to allow you to enable email verification in your app if you want to, so users will need an email to successfully create an account. Once that's in place, we'll defend against enumeration more concretely. There are other places in addition to the salt retrieval that would be modified in similar fashion. For example, password reset will need to always successfully return even if a user provided the wrong username, and sharing a database with another user will always successfully return even if the other user doesn't exist (e.g. from a typo).


I know this goes contrary to “best practice” but I am very much in favor of this approach.

You want to focus on implementing good soft and hard rate limiting on all your endpoints.

You can obfuscate the login function to return an unhelpful error message, but unless you harden every possible public API against user enumeration — and most sites do not - you are just hurting the UX for no actual security gain.

This would include constant timing for returning results when there is or isn’t a user, so for example, running your hash function even when you don’t have a password to compare it to.
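Roughly what I mean, as a sketch (assumptions: a Python backend, PBKDF2 standing in for whatever slow password hash you actually use, and a `user` record with `salt` and `password_hash` fields; none of this is any particular framework's API):

    import hashlib
    import hmac
    import os
    import secrets

    def slow_hash(password, salt):
        # PBKDF2 as a stand-in for your real password hash (bcrypt, Argon2, ...).
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    # Dummy record computed once at startup, so the "no such user" path
    # does the same amount of work as a real verification.
    _DUMMY_SALT = os.urandom(16)
    _DUMMY_HASH = slow_hash(secrets.token_hex(16), _DUMMY_SALT)

    def check_login(user, password):
        if user is None:
            # Burn comparable CPU time, then always fail.
            hmac.compare_digest(slow_hash(password, _DUMMY_SALT), _DUMMY_HASH)
            return False
        return hmac.compare_digest(slow_hash(password, user.salt), user.password_hash)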

Years ago there was a big push to return unhelpful error messages, but then the signup or password reset functions would act as a user exists oracle anyway. Login got harder for zero actual gain in security.


That's a thoughtful answer; thank you.


WordPress works the same way and it's awful. No offense. They don't see the impact of allowing user enumeration.

Security isn't about one feature. It's layered. You need to have layers because there is no such thing as guaranteed security.

Bank safes are my favorite analogy. Safes are given a time rating: "How long can this safe resist being broken into?" A 15-minute safe rating means that it might take an attacker 15 minutes to open the safe.

A 15-minute safe is not secure. In fact, it is guaranteed to be compromised past 15 minutes. How do you secure an insecure 15-minute safe? With a 5-minute guard rotation. Now you have a safe to buy you 15 minutes and a guard to ensure that nobody gets 15 minutes' worth of access to the safe.

You built a safe with no guard... and by allowing enumeration you're telling attackers where you put the safe. You are almost guaranteeing someone will compromise it eventually.

Security doesn't always mean that successful attacks are impossible. Oftentimes security just means you've made the cost of intrusion higher than the return on investment. If you allow enumeration you're giving the attacker an advantage.


>You built a safe with no guard... and you're telling attackers where you put the safe. You are almost guaranteeing someone will compromise it eventually.

Userbase is built on the assumption that our entire database and server will be compromised, and that the attacker would still not be able to access protected user data. Validating that we protect user data in that scenario was the goal of our security review. [1]

On top of this, requiring users to provide an email or some other identifiable means to sign up, which is the practical way to defend against enumeration, compromises a level of privacy AND security for the average user (since this data would be leaked in the event of a breach). So this is a significant tradeoff, not as simple as one way being secure and the other not.

Finally, we recognize the impact of allowing user enumeration. We will offer protection from user enumeration for those who are comfortable with the tradeoffs in user experience, and with sacrificing a level of privacy and security for their users.

[1]: https://userbase.com/announcements/#1-security-review


Adding to this, properly defending against enumeration also sacrifices a level of security in addition to privacy, since the average user would likely need to provide some additional identifiable data (such as an email) that we would store in our database in plaintext and that would be compromised in a breach.


How does your service differ from NameCoin?


Handshake has taken a lot of lessons from previous attempts at alternative blockchain DNS roots like NameCoin. Here are some ways it's different:

- Handshake names are not *.bit domains like NameCoin's; they're actually top-level domains. This is because Handshake's purpose is not to decentralize domain names per se but to decentralize the root zone and create a more secure root of trust than Certificate Authorities [1].

- Handshake name auctions were spread out over the course of the first year after launch to prevent early adopters from hoarding all the good names. For instance, .crypto was available in the first few weeks after launch but .information isn't available until next week. This is important because early adopters hoarding names prevents latecomers from supporting the protocol.

- Handshake names are sold via Vickrey (sealed-bid, second-price) auction instead of for a flat fee; a toy sketch of second-price settlement follows the links below. Some names are more valuable than others. Flat-fee pricing allows a hoarder to arbitrage that fact by being first, whereas auction pricing ensures that names are better distributed. .X sold for 311k HNS[2] (about $40k), whereas other names sell for a few cents.

- Handshake has a light client[3] that can trustlessly verify DNS records on-chain. This is critical because very few people run full nodes, so without a light client the majority of users would rely on third parties, which provides worse security than users resolving names in a decentralized manner.

[1] https://news.ycombinator.com/item?id=20995969

[2] https://namebase.io/domains/x

[3] https://github.com/handshake-org/hnsd
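The toy sketch mentioned above: this is just the textbook Vickrey rule in Python, not Handshake's actual on-chain auction code (which, as I understand it, also involves blinded bids and a reveal period).

    def settle_vickrey(bids):
        """Sealed-bid second-price auction: the highest bidder wins the name
        but pays only the second-highest bid (0 if there was no competition)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else 0.0
        return winner, price

    # alice wins, but pays bob's bid rather than her own:
    print(settle_vickrey({"alice": 500.0, "bob": 120.0, "carol": 80.0}))  # ('alice', 120.0)

The design point is that bidding your true valuation is the dominant strategy, so there's no advantage to being first in the door the way there is with flat-fee pricing.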


They're not top-level domains until most people's clients resolve them as such. They're off to the side in a niche name space that most people don't know about, and to which you have provided a brittle gateway that doesn't even present them as top-level.

... and they have a million other little niche name spaces to compete with to reach the point of serving as "top-level domains". I can create my own alternate name space, too, and nobody will care about mine, either.


They're top-level domains on an alternative root but that doesn't make them not top-level domains.


What happens if your definition of a top-level domain disagrees with ICANN's? I mean, what happens if you sell .xyzzy to Bob, and then ICANN, without caring about your project, sells .xyzzy to Alice? How should the conflict be resolved?


What I see happening, as a complete outsider, is that "the community" (e.g. browsers, libraries) puts them under a TLD against their wishes, to resolve this exact issue.


I was looking for a more technical response.

e.g. How are new blocks created? Proof of work, predefined functionaries, something else?

You say you have SPV proofs which allow a full node to prove a particular domain is owned by a particular party. How does that work?


It's more profiteering-oriented.


I beg to differ. The original Handshake developers collected 10.2MM USD from sponsors and distributed it among free software and open source projects [0]. Actual USD were given out, not just magical internet money; e.g., the FSF received 1 million [1].

[0]: https://handshake.org/grant-sponsors/

[1]: https://www.fsf.org/news/free-software-foundation-receives-1...


The article also suggests that COVID was maybe an inside job, and that it's unpatriotic to follow health restrictions. (Not in exactly those words, but it warns about "acquiescing" to government restrictions.)


Then the question is, "how much do I trust my ISP/DNS provider?"

Those DNS lookups tell your ISP 1) that you use a Mac and 2) that you have an application from a specific developer installed.

I think I trust my ISP less than I trust Apple, here. Am I wrong to do so?


Well, that just brings us back to the state we're in right now, where your ISP can see your plaintext HTTP packets if they want to, so it wouldn't be any worse than the current situation. I guess you could get much the same effect by configuring your company Macs to point at a shared Squid server to cache the GET requests to the OCSP server, but in practice almost no one does that.


Apple says they're going to move to an HTTPS based system, so the relevant comparison is between HTTPS and DNS, not HTTP and DNS.


So what happens if a removed function is called? Or can you guarantee that won't happen?


They replace it with some illegal instruction, so the process crashes. With a lot of logic running in per-origin processes, this may only bring down one tab instead of the whole browser.

Edit: Or maybe the error handling can avoid killing a process? This is what the paper says, but I feel like a child process would almost certainly be killed:

> Code elimination is trivial because we nullify unused code with illegal instructions based on known binary function boundaries. Once the instructions triggers a Chromium’s error handling routine that catches an exception, an error page shows an “Aw, Snap!” message by default instead of crashing a whole Chromium process. (section 5, p467)
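If it helps to picture it, here's a rough sketch of the nullification step (illustrative only, not the paper's tooling; it assumes x86-64, where `ud2` (0F 0B) is the canonical illegal instruction, and that the function's byte range in the image is already known):

    UD2 = b"\x0f\x0b"   # x86-64 undefined instruction: guaranteed to raise #UD
    INT3 = b"\xcc"      # single-byte trap, used to pad an odd-sized range

    def nullify_function(image, start, end):
        """Overwrite an unused function's bytes so any stray call into it
        faults immediately instead of executing stale code."""
        span = end - start
        image[start:end] = UD2 * (span // 2) + INT3 * (span % 2)

    # Usage (hypothetical file name and offsets): blank out a function reported as unreachable.
    # binary = bytearray(open("chromium_image.bin", "rb").read())
    # nullify_function(binary, start=0x1234, end=0x1290)

Whether the user sees a dead browser or just an "Aw, Snap!" page then comes down to which process the faulting code was running in.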


Chromium displays an "Aw Snap!" message when a renderer process dies.


For some, sure. If an API implements "new foo(), new foo(string), new foo(string, int), and new foo(string, int, int)" but the code that uses the library only uses "new foo()" and sets the other properties after creating the object, then it would eliminate the extra constructors. Figuring out what stuff gets called is really hard in something as big as Chromium, where there are so many components that some spots have their own domain-specific language to link things together.

For others, no, they do allow the minifier to break some sites: they analyze the Alexa top 1000 and check which functions get called and how often by those sites.


Note that minority shareholders have some rights, though this varies by state of incorporation:

(PDF warning)

https://www.ibanet.org/Document/Default.aspx?DocumentUid=F20...

