On 2024-08-21 they had 156,328 unique posters; on 2025-08-21 they had 669,909. So "nearly" a year is cherry-picking.
Whatever "bleeding" is happening* is after some enormous growth. I'm not a booster at all, they ought to be concerned about this recent depletion, but this slow decline is hardly the death knell.
There are fewer posters and likes on Bluesky today than there were in early September of last year, and both graphs have been trending down consistently since February of this year.
Bluesky went from 2.8m likers/day at its peak to 1.1m today.
Yes, certain so-called scientific fields that have 100% viewpoint capture in academia only feel comfortable sharing views in the Bluesky bubble, and they're the exclusively-BA fields like "Integrative Biology" that this paper is published in.
There was a huge bump of new users right after the election, and things have gone down since then, but they are still much higher than they were in September 2024. And if you look at the tail of the graph, it seems to be holding pretty steady, which is about what you would expect for a social network after a huge spike of growth: not all of those users will stick around.
JD Vance's account was temporarily banned because it was suspected of being an impostor account; it was reinstated within 20 minutes and continues to be active.
He was not banned because his "pretty innocuous post" offended anyone's political sensibilities. He has 15k followers. He isn't being oppressed or censored.
Huh? You talk about your fetishes/turn-ons and sexual experiences with your co-workers, or are you mocking something I'm not online enough to understand?
>Huh? You talk about your fetishes/turn-ons and sexual experiences with your co-workers, or are you mocking something I'm not online enough to understand?
"Lotta straight guys like watching their buddies fuck. I know I do."[0][1]
More seriously, on at least one occasion, a co-worker was showing a group of co-workers photos of their vacation at a swingers retreat. Let's just say nothing was left to the imagination. I'd add that this was an employee at the corporate headquarters of a Fortune 50 company.
Why is our media literacy so far in the gutter? A few-second clip of a freely accessible interview from an NYT podcast would not have been accepted as gospel fact on the HN of the past.
You may very well, and with good reason, disagree with Thiel on the downstream effects of climate-regulating agreements/regimes on global productivity and liberty, but regurgitating "Greta is the Antichrist" just replaces discussion of interesting issues with yelling at shadow puppets in Plato's cave.
What? GP didn't even contradict anything I said, just claimed there's some vague problem with citing a few-second clip of the person saying the thing I accused them of saying.
My claim: "he believes Greta Thunberg is very possibly the actual antichrist"
Thiel's words:
> Thiel: ... The way the Antichrist would take over the world is you talk about Armageddon nonstop. You talk about existential risk nonstop, and this is what you need to regulate....
> in the 17th century, I can imagine a Dr. Strangelove, Edward Teller-type person taking over the world.
> In our world, it’s far more likely to be Greta Thunberg.
He's talking about the Antichrist dude... he's a devout Christian... they believe in things like the Antichrist.
People really do be bending over backwards not to hear the words spoken to them if they seem too wacky to be palatable. Dark secret though: billions of people believe truly wacky shit. Some of those people are unbelievably wealthy.
Anyone can go read the transcript. It's quite clear he's saying he believes Greta might very well be the Antichrist.
Or watch Peter Thiel's interview on the Antichrist and whether he thinks humanity should survive (uhhh, well, ughh, ummm, you see). Wild, scary stuff. Go watch the whole interview, but here's a taste:
Here’s a more contextual excerpt of the transcript, which features the five questions prior to this and his answer. When I listened to it, it seemed to me like he was thinking about how to weave answers to those into his response, which he does after the excerpt. It's an entertaining memed clip, but we owe it to intellectualism to understand the full context and not simply consume YouTube Shorts as the be-all, end-all.
———-
Douthat: … It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a mechanism for transhumanism — for transcendence of our mortal flesh — and either some kind of creation of a successor species or some kind of merger of mind and machine.
Do you think that’s all irrelevant fantasy? Or do you think it’s just hype? Do you think people are raising money by pretending that we’re going to build a machine god? Is it hype? Is it delusion? Is it something you worry about?
Thiel: Um, yeah.
Douthat: I think you would prefer the human race to endure, right?
Thiel: Uh ——
Douthat: You’re hesitating.
Thiel: Well, I don’t know. I would — I would ——
Douthat: This is a long hesitation!
Thiel: There’s so many questions implicit in this.
It's already been published. There's nothing special in there. But publishing to GitHub doesn't mean anything if it's not actually the source of truth for where changes come from. A snapshot of a system prompt at some point in time is uninteresting.
- Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.
- Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.
- We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.
> If you read a book and later understand its plot but can only explain it in your own words, did you copy it?
I think that question is at the center of the conversation.
What does it mean for a computer to "understand"?
If I wrote some code that somehow transformed the text of the book and never returned the verbatim text but only some modified output, I would likely not be spared, because the ruling would likely be that my transformation is "trivial".
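For instance, something as mechanical as word-for-word synonym substitution would probably count as trivial. A toy Python sketch (the launder() name and word table are made up for illustration):

```python
# A deliberately "trivial" transformation: the output never matches the
# source verbatim, yet it is plainly derived from it. The word table and
# the launder() name are invented for this example.
SYNONYMS = {"said": "stated", "big": "large", "house": "dwelling"}

def launder(text: str) -> str:
    # Swap a handful of words so no sentence survives verbatim.
    return " ".join(SYNONYMS.get(word, word) for word in text.split())

print(launder("the big house said nothing"))  # "the large dwelling stated nothing"
```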
Personally, I think we have several fixes we need to make:
1. Abolish the CFAA.
2. Limit copyright to a maximum of 5 years from date of production with no extension possible for any reason.
3. Allow explicit carveout in copyright for transformational work. Explicitly allow format shifting, time shifting, yada yada.
4. Prohibit authors and publishers from including the now obviously false statements like "No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording" bla bla bla in their works.
5. I am sure I am missing some stuff here.
For brand protection, we already have trademark law. Most readers here already know this, but we really should sever the artificial ties we have created between patents, trademarks, and copyright.
I just happened to read the Phoenix Technologies Wikipedia page a few days ago. The company is known for developing BIOS software for computers; maybe you've seen their logo when you first turn on your computer.
In early computing, everything was closed source. Quoting the Wikipedia page:
> To develop a legal BIOS, Phoenix used a clean room design. Engineers read the BIOS source listings in the IBM PC Technical Reference Manual. They wrote technical specifications for the BIOS APIs for a single, separate engineer—one with experience programming the Texas Instruments TMS9900, not the Intel 8088 or 8086—who had not been exposed to IBM BIOS source code.
The legal team at Phoenix deemed it legally inappropriate to simply "recall the source in their own words".
My non-legal intuition is that the companies training these models are violating copyright. But the stakes are too high--it's "too big to fail," if you will. If we don't do it, then our competitors will destroy us. How do you reconcile that?
You're right that if we want to have usable LLMs at all, there's no way around training them on copyrighted materials. So it has to be allowed, but in a way that compensates the original authors somehow. For example, every model provider has to publicly declare all works used for training, and then all inference providers offering that model have to collect a per-token tax that gets distributed to authors in proportion to their presence in the dataset (by the by, this could also be a way to fund websites like Wikipedia).
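A back-of-the-envelope Python sketch of how that split might work (every name and number here is made up; the point is just the proportional distribution):

```python
# Hypothetical illustration of the proposed per-token levy: each rights
# holder's share of the collected tax is proportional to their share of
# the training dataset. All figures are invented.
TAX_PER_TOKEN = 0.000001  # dollars collected per inference token (made up)

dataset_tokens = {         # tokens each rights holder contributed to training
    "author_a": 40_000_000,
    "author_b": 10_000_000,
    "wikipedia": 50_000_000,
}

def distribute(tokens_served: int) -> dict[str, float]:
    pool = tokens_served * TAX_PER_TOKEN                 # total tax collected
    total = sum(dataset_tokens.values())
    return {src: pool * n / total for src, n in dataset_tokens.items()}

print(distribute(1_000_000_000))
# -> {'author_a': 400.0, 'author_b': 100.0, 'wikipedia': 500.0}
```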
But any such arrangement needs to be hammered out by the legislature. As laws are, I think it's pretty clear that infringement is happening.
Perhaps Phoenix just looked at the potential adversary (IBM) and decided to approach the project in an exceedingly cautious way, knowing that IBM could litigate it forever if there were any plausible argument that they "copied" even a line of code.
To what extent connections in a neural network are analogous to connections between neurons in your brain is open to interpretation and study, but the point of the analogy is that in neither case is a copy being made.
I can arrange a series of bricks in many ways to try to build a wall, but that doesn't mean I will automatically get a good result if my process (like an ML training algorithm) doesn't precisely arrange them in a manner that produces a rigid wall with the desired characteristics. In the same vein, you can have a fancy neural network arranged by some fancy LLM training algorithm with gobs of data about a subject, but current methods likely won't produce anything with the depth of "understanding" that a human can achieve. It's a crumbly wall that falls once you do any real inspection or put any real load on it.
Yeah, but a copy IS made. A human just reads; the machine copies the full text and then compresses a lossy copy into its weights. You keep dodging that with tortuous analogies to human learning.
I’m sure all these 'clever' questions would be useful if this trial were about humans, but it's not.
Model training works roughly by feeding the model a text excerpt with the last word hidden. The model is asked to "guess" what that final word is, and its weights are then nudged until the guess sufficiently matches the actual token. Then the process repeats.
The training material is used to play this guessing game to dial in the model's weights. The training data is picked up, used as reference material for the game, and then discarded. It's hard to place this far from what humans do when reading, because both use the information to mold their respective "brains", and both follow an acquire, analyze, discard process.
At no point is training data actually copied into the model itself; it's just run past the "eyes" of the model to play the training game.
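A toy Python sketch of that loop (the count-based "model" and its helpers are stand-ins I made up; a real LLM nudges millions of neural weights by gradient descent instead, but the shape of the process is the same):

```python
# Read an excerpt, hide the last word, guess it, nudge the weights,
# discard the text. Only the adjusted weights persist.
from collections import defaultdict

weights = defaultdict(lambda: defaultdict(int))  # prev word -> next word -> score

def predict(context: list[str]) -> str:
    candidates = weights.get(context[-1]) if context else None
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

def train_step(excerpt: str) -> None:
    tokens = excerpt.split()
    context, hidden = tokens[:-1], tokens[-1]  # hide the last word
    if predict(context) != hidden:             # the model "guesses"
        weights[context[-1]][hidden] += 1      # nudge weights toward the answer
    # The excerpt is now discarded; only the adjusted weights remain.

for excerpt in ["the cat sat on the mat", "the dog sat on the mat"]:
    train_step(excerpt)

print(predict(["sat", "on", "the"]))  # "mat" -- recalled from weights, not stored text
```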
https://bsky.jazco.dev/stats