Hacker News | fluidcruft's comments

Human police errors are so routine that they're not newsworthy.

Probably picked up by animal control as abandoned and euthanized.

That’s really horrible. I’d prefer to know rather than guess at that.

It's pretty common when a dog is abandoned. Likely her children couldn't afford to care for it. I suppose there is a chance they put it up for adoption (same outcome is likely).

Yeah, I'm not buying the last bit about lower MSE with one term in the model vs. two (the Brier score with one outcome category is just the MSE of the predicted probabilities). That's the sort of thing that would make me go dig for where I fucked up the calculation.
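For anyone unfamiliar with the equivalence being invoked: for a binary outcome, the Brier score is exactly the mean squared error between the predicted probabilities and the 0/1 outcomes. A minimal sketch with illustrative numbers (not anything from the post):

```python
# Brier score for binary outcomes: the mean squared error between
# the predicted probability of the event and the 0/1 outcome itself.
def brier_score(probs, outcomes):
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

probs = [0.9, 0.2, 0.7, 0.5]   # illustrative predicted probabilities
outcomes = [1, 0, 1, 0]        # observed 0/1 outcomes

# Identical to computing the MSE of the probabilities against the outcomes.
print(brier_score(probs, outcomes))  # 0.0975
```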

With one term it gets more robust in the face of excluding endpoints when constructing the jackknife train/test split, I think. But you're right, it does sound fishy.

What the post describes is just ANOVA. If removing a category improved the overall fit, then fitting the two terms independently would have the same optimal solution (with the two independent terms coming out identical). In-sample MSE never increases when you add a category.
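The in-sample claim is easy to check directly: a least-squares fit with an extra term can always reproduce the smaller model by zeroing the extra coefficient, so training MSE can only decrease or stay equal. A sketch with synthetic data (not the post's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)              # a pure-noise regressor
y = 2.0 * x1 + rng.normal(size=n)    # y depends only on x1

def ols_mse(X, y):
    """Fit least squares and return the in-sample mean squared error."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid / len(y))

one = np.ones(n)
mse_small = ols_mse(np.column_stack([one, x1]), y)
mse_big = ols_mse(np.column_stack([one, x1, x2]), y)

# The larger model nests the smaller one, so its training MSE
# cannot be worse, even though x2 is pure noise.
assert mse_big <= mse_small + 1e-12
```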

This is why you have to reach for things that penalize adding parameters to a model when running model comparisons.


No, the post is doing cross-validation to test predictive power directly. The error will not decompose as neatly then.

Why would they do that and where do you see evidence they did?

Because it's a direct way to measure predictive power, and it says so: "We’ll use leave-one-out cross-validation"
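For context: leave-one-out cross-validation refits the model n times, each time holding out one observation and scoring the prediction on it, so unlike in-sample MSE the error is free to get worse when a noise term is added. A minimal sketch with synthetic data (not the post's code):

```python
import numpy as np

def loo_mse(X, y):
    """Leave-one-out cross-validation error for a least-squares fit."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(float(y[i] - X[i] @ beta) ** 2)
    return sum(errs) / n

rng = np.random.default_rng(0)
n = 40
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)              # pure-noise predictor
y = 2.0 * x1 + rng.normal(size=n)

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])
# Unlike in-sample MSE, the held-out error can genuinely increase
# when the extra term is noise (depends on the draw).
print(loo_mse(X_small, y), loo_mse(X_big, y))
```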

This admin technically can't do a lot of the things it does. They do it anyway with utter contempt for the rule of law. Congress is useless and gives them a blank check and the Supreme Court just stalls everything using the shadow docket.

I've generally thought that, but lately I've been finding that the main difference is Claude wants a lot more attention than codex (I only use the CLI for either). codex isn't great at guessing what you want, but once you get used to its conversation style it's pretty good at just finishing things quietly. Context management also seems to handle itself very well, and I rarely even think about it in codex. To me they're just... different. Claude is a little easier to communicate with.

codex often speaks in very dense technical terms that I'm not familiar with and tends to use acronyms I've not encountered so there's a learning curve. It also often thinks I'm providing feedback when I'm just trying to understand what it just said. But it does give nice explanations once it understands that I'm just confused.


The difference is that Anthropic actually dotted the i's and crossed the t's, whereas OpenAI fell for the weasel words and is now desperately trying to renegotiate.

OpenAI didn't fall for anything, they knew exactly what they were signing and went ahead anyway, then started gaslighting people about what they had signed.

For a lot of people (me included) the lack of integrity and the gaslighting is what has soured them on OpenAI, rather than them signing up to build surveillance and weaponry.

To non-US citizens, all AI companies are as dangerous as each other, OpenAI just really botched the optics here.


All of that's irrelevant for what "newspeak" means.

Maybe, but the comment I was replying to wasn't talking about newspeak.

It's in a reply chain that's talking about newspeak. You compacted your context way too early.

The reply chain is talking about newspeak, but the parent of the comment I was replying to was:

> DoW is newspeak. Thats not it's name.

I understood the comment I was replying to as responding to the latter part of that comment.

Discussions and threads can evolve. They are not static.


I'm confused... now you were talking about newspeak? How odd.

I'm not sure how you got that from my comment.

As a recap, my reply to your reply was that DoD is the actual newspeak, and your reply to that evolution of the discussion is that you were not discussing newspeak.

In trying to understand if I'm missing something, I looked up what newspeak means. I (as well as probably a few other commenters based on the contents of their comments) was under the assumption it meant "new speak" meaning it's something new.

In case anyone else reading this was not aware of this, this is what I discovered.

It's a term from George Orwell's 1984, describing a language used to make thoughts unthinkable by removing words from the language. It has nothing to do with "age of the term."

Hence, Dept of Defense is indeed newspeak. Dept of War, while being a new name for the dept, is too literal to be newspeak.

Thanks for the opportunity for me to learn something!


Department of Defense has historically been a prime example of newspeak.

I think Department of War is also newspeak. Or at least, they didn't change the name just to get the name in line with the amount of war the department does.

They changed it because they wanted to do even more war. The amount of war the department does under the name "Defense" has been status quo for a long time, and my take is they wanted us to think of them differently so they could do even more war, which they have since been doing.


Oh apologies, I interpreted your comment as intended to be part of the discussion rather than as a non-sequitur.

Discussions and conversations can evolve. Read the thread again.

Well I will say that if there's a word that describes what the Department has been up to in Venezuela and Iran, "Defense" does seem to be the least Orwellian option.

What term do you prefer for referring to sailors, pilots, soldiers, etc. collectively?

Literally what they wrote: service members.

They're asleep until Pacific time.
