There are exactly three reasons for people to stick to Twitter:
* They don't care about (or actively agree with) the policies of the guy running it.
* Legacy reasons: either they have no reason to leave (automated org accounts keep running until something in the workflow breaks) or they have an existing community that doesn't want to move. This group will eventually leave but is currently held back by inertia. Most "public service" accounts are in this category.
* And finally, for artists, Bluesky is undesirable as a platform because it applies much more aggressive image compression than Twitter (2000x2000 is the absolute limit). Some are dual-posting to Bluesky, but they are unlikely to fully leave Twitter for this reason.
Finally, I'll note that while accounts are generally abandoning Twitter, this doesn't automatically mean they're moving to Bluesky either. A lot of those service accounts just up and vanished, saying "well, go visit our website".
> * They don't care about (or actively agree with) the policies of the guy running it.
I don't care about that; what I care about is that even though I follow (and am followed by) CS/math people, I still see mostly far-right / Nazi / Trump / crypto comments about everything. Even in small threads about very technical stuff, people always come up with the craziest shit. And these days there's the almost mandatory 'Grok, is this true/profound/worth anything/etc'. It's just annoying, and maybe I shouldn't care. I don't have that experience on other platforms (mostly the same following/followers, as they are also there).
Ever since Musk took over Twitter, the replies have been useless. It used to be that you could follow the replies as a discussion. Now the replies are a mix of unmarked ads, bots, and weird call-outs to Grok, sometimes all at the same time.
All the automation and algorithmic garbage crowd out actual human discussion. It's a big reason why I stopped using Twitter myself. If they want to optimize for bots and weirdos driving up engagement numbers, go for it. But that's not a service that brings me value.
On the note of artists, I always wondered why so many artists don't use a proper gallery/portfolio in addition to social media. This could be a general art-sharing platform, one of the many niche- or fandom-specific gallery sites, or their own website. Get the audience and reach through social media, but link back to a portfolio with the originals for those who care.
X really sucks in its current state, but it's where the things I'm interested in happen or get discussed first (eg AI state of the art, bootstrappers). There's a bunch of tech people I follow who aren't on Bluesky, Threads, etc.
Interestingly, when I glance at my Bluesky feed once a month or so, it's a lot of complaining about everything. I think I hear more about Elon on Bluesky than I do X. And yeah, I follow reasonably high-value people.
That said, I keep some sort of X exit plan in place, and I look at it a lot less than before. When the signal vs noise value shifts, I'll be done, but I'm not quite there yet.
It's surprising that any serious organization used it at all. It was never a good place to spend your time really.
It's sad that the science community is just moving to another walled garden rather than spawning its own network of federated ActivityPub services (eg: Mastodon).
Bluesky seems to be based on an open protocol (AT Protocol), but how interoperable is it in practice? I can't find a list of non-Bluesky AT Protocol servers that can interoperate with Bluesky.
Back in the 90s, every University had its own mailserver, USENET server, etc. These offered authentication to any user in the University, and each was federated with other institutions and the internet as a whole.
I'm surprised Universities haven't set up a federated network of ActivityPub servers, with each University hosting its faculty and student accounts on its server. The signal-to-noise ratio of a University-only network would be amazing.
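For what it's worth, the federation mechanics already exist: ActivityPub account discovery is just WebFinger (RFC 7033) over HTTPS. A rough sketch, with invented university domains, of how one institution's server would locate an account hosted on another's:

    import requests

    def webfinger(handle):
        """Resolve 'user@host' to the WebFinger document that points at
        the ActivityPub actor - the discovery step Mastodon-compatible
        servers perform when federating."""
        user, domain = handle.split("@")
        resp = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{handle}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # 'links' includes the actor document URL

    # e.g. webfinger("alice@cs.example.edu") queried from any other server

Each university would only need to run a server at its own domain; discovery and delivery between them come with the protocol.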
> The signal-to-noise ratio of a University-only network would be amazing
Ha; no. It would be students self-censoring to avoid anything that could draw a university's ire... while they meet up on Discord to share their actual thoughts, cheating techniques, personal feelings, and date nights. It would be University LinkedIn.
It was a good place to get messages out quickly - if I wanted to know that my cable company knew the internet was down, either from their direct acknowledgement or from people sending messages to them, I went there. But now that I need an account to even see the comments or posts, it's impractical to use.
I don't agree. There are still serious organizations and people using Twitter. I believe that to be true. I'm just surprised they haven't moved.
For example there are emergency systems or local governments that announce information on Twitter. These feel like serious organizations to me. At minimum I feel like they should be in multiple places and not just Twitter.
Ultimately Twitter is timely and has almost universal mindshare.
A few weeks ago, when there was the Pacific earthquake, I had family vacationing very close to a danger zone. Google was not sufficient for finding good, timely local info as an outsider, but Twitter was.
I would never even think to check Bluesky or Mastodon, and my family will never have heard of them.
Things have to be posted in those other places for your regular person to have a chance of hearing about them eventually. If everyone waits for some adoption threshold to support something then it will never reach that threshold.
The question that motivated the comment obviously implies that this is unbelievable for undisclosed reasons (related to its "current state.") Smarter to argue with the premise than the fluff.
Well, if it helps you feel better, the downvotes are probably because it's not a No True Scotsman fallacy. And yours are probably due to your endorsement of it and whining about "lefties".
Is a declining socio-economic performance inherently a bad thing? Why does the output of a country need to go up forever rather than remain constant? Or even decline to come to an equilibrium with the new lower population.
I feel like the ideal is a population with a near-perfect 2-2.1 replacement rate and a socio-economic performance that allows for the fewest people in poverty, and for that to continue forever.
Perhaps this is the first time in history that most of the world has reached its population limits and since we overshot it, it is now attempting to correct and will come to an equilibrium eventually.
I've wondered about this. The current world population is close to twice what it was when I started college in the late 1970s.
Comparing what life was like then to life now, and thinking about what I'd miss if I had to go back to, say, a 1976 lifestyle, everything I'm coming up with is just that we've now got better technology, more advanced medicine, and cleaner energy sources, and that we passed laws and regulations to take better care of the environment.
Did we need 4 billion more people to get those things? I don't think so. In an alternate universe where the world decided in 1976 to voluntarily lower the fertility rate to just keep the population at 4 billion I think that 2025 in that world would be a lot like 2025 in our world as far as tech, medicine, energy, and the environment go.
Inclined to agree. If per-capita well-being and productivity continue to rise, is a GDP decline in proportion to population decline such a bad thing? The only bad thing I can think of is that we will see fewer children around than we're used to.
And as soon as some class-dividing issue comes up that pits the 99.9% against the ultra-wealthy, it will become a non-partisan issue and get resolved immediately in favor of the ultra-wealthy.
There is no plan in place at all for the outcome of most work becoming redundant. At least in the US I highly doubt we will be capable of implementing some system such as UBI for the benefit of all citizens so everyone can take advantage of most work being automated. Everyone will be left to pick up scraps and barely survive.
But I am extremely skeptical that current "AI" will be capable of eliminating so much of the modern workforce any time soon, if ever. I can see it becoming a commonplace tool (maybe it already has), but not a human replacement.
> There is no plan in place at all for the outcome of most work becoming redundant. At least in the US I highly doubt we will be capable of implementing some system such as UBI for the benefit of all citizens so everyone can take advantage of most work being automated. Everyone will be left to pick up scraps and barely survive.
If 80% of US citizens lose their jobs, I assure you that there will be a political response. It might not be one you (or I) like, but it will happen and it will be a big deal.
> If 80% of US citizens lose their jobs, I assure you that there will be a political response. It might not be one you (or I) like, but it will happen and it will be a big deal.
In societies where 80% of people are not able to draw an income and there are more firearms than there are people allowed to own them, that political response is revolution.
That is also what I would expect, regardless of firearm laws. Like look at how many people are fans of populists right now. Then 10x that if basically everyone is unemployed.
It's gonna be a wild ride.
That being said, I'm not convinced LLM based workflows will be transformative, at least over the short term (3-5 years). It took a long time for the whole of society to end up on the Internet and I'd expect around the same speed for AI/LLM approaches (even in the best case scenario).
I can see USB-C being convenient for a smartwatch, but a fitness watch that you are going to wear while hiking, swimming, or doing any type of rugged outdoor activity shouldn't have a hole that can get a bunch of stuff stuck in it. Unlike the proprietary cables on smartphones of the past, the connectors on high-end smartwatches aren't there because they are trying to sell you a bunch of expensive cables and accessories that use that port.
They want you to do it in person because there they can:
1. brand protestors as radicals and arrest them, serving as a public warning to deter others.
2. police (outside the US) have a major force advantage over an unarmed population and can easily overpower you, whereas online they don't have any kind of power without censorship, which is why they're trying to gain control over it.
I assume this is just police-state overreach rather than a genuine intent to stop crime. They must know that anyone actually engaging in criminal activity is not going to be caught by this, because they use other forms of encrypted communication.
It'd be extremely easy to circumvent, too. Since the scanning runs client-side on images uploaded into the messenger, you just need an app to mangle and unmangle images. XOR the pixels in your payload with a picture of static, then do it again on the other side.
It does not need to be particularly secure—the messages are still E2E encrypted so long as nothing trips the client-side scanner.
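A minimal sketch of that idea in Python, assuming both parties share the same noise image and the pixels survive transport bit-exact (the function name and the numpy/PIL approach are my own illustration, not anyone's actual tool):

    import numpy as np
    from PIL import Image

    def xor_mangle(payload_path, noise_path, out_path):
        """XOR an image's pixels with a shared noise image.

        XOR is its own inverse, so running this again on the output
        (with the same noise image) restores the original picture.
        """
        img = np.asarray(Image.open(payload_path).convert("RGB"), dtype=np.uint8)
        noise = np.asarray(Image.open(noise_path).convert("RGB"), dtype=np.uint8)
        noise = np.resize(noise, img.shape)  # tile/crop noise to match the payload
        mangled = img ^ noise                # element-wise XOR of the raw pixel bytes
        # Must be saved losslessly: any lossy recompression corrupts the XOR.
        Image.fromarray(mangled).save(out_path, format="PNG")

The big caveat is that last comment: if the messenger recompresses images on upload, the XOR'd bytes come out scrambled on the other side, so the trick only works where pixels pass through unmodified.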
I'm not saying that this is NOT police state overreach, but the assumption that all (or even most) criminals practice good operational security still seems laughable to me.
I think you are letting your ideological alignment (against the surveillance state) push you into irrational standpoints ("more surveillance would not catch additional criminals").
I'm 100% with you on opposing legislation like this, but it is very important to not delude oneself about its likely effects, and to pick the right hills to die on, figuratively speaking.
They meant, I assume, that it's the same as gun control laws. You have to prove your permission, you have to show your ID, you have to have a registered gun; meanwhile, unidentified gangsters are running around right now with unregistered guns, shooting people.
And then they go and catch the people who didn't do that stupid process and act as though they caught "real criminals" when they only really caught "fake criminals" that they just minted.
You see this all the time with all sorts of areas of law, not just guns. The real evil-doers are the enablers cheering it on. "Well they didn't have a permit so they deserved it" and the like.
I'm reminded of the story of the bombing attempt that failed because two cells miscommunicated about time zones.
The bomb was handed off across a political boundary and detonated at some arbitrary point on the way to its target, an hour earlier than expected.
(And then there was the one in France, where a cellphone-triggered bomb detonated prematurely and eliminated its builders because the mobile carrier they were using sent a "HAPPY NEW YEAR" SMS to every customer).
We in infosec are often trained to imagine the ideal "adverse actor" who could do the most possible damage to our systems to test their vulnerability.
It's a good model for identifying and closing gaps (especially if one is not, oneself, prone to think like a criminal), but like all other human population groups, half of all criminals are below average.
I think the slightly more sophisticated position is that, regardless of the operational security that is currently employed, if you were to implement something like this, then criminals would quickly adapt to improve their operational security accordingly. Especially because "operational security" in this case is doing a lot of heavy lifting to obscure how easy it would be: just use a good E2E messenger.
This is not some wild hypothetical: the recent explosion in VPN use in every country that has implemented an age restriction law should be sufficient to demonstrate this effect. In a world without weird country restrictions (whether intellectual property restrictions or content restrictions), VPNs would be a niche technology for business. Instead, unbelievably large swaths of the general population are now not only using them, but paying for them.
I think the assumption that criminals would not learn how to use one of the many free E2E encrypted messengers is the deluded and naive position.
That's not true. You're stereotyping criminals. People who commit assault, petty robbery, public indecency, etc. are probably on the whole not brilliant. But how about fraud, embezzlement, or parking infractions?
Given that we're talking about cybercrime here, what are the odds that the criminals in question are too dumb to Google "how can i get around whatsapp image scanning"?
I hear this a lot, but I wonder if that is just because the only criminals you hear about are the not very smart ones doing crime on unencrypted monitored services. This sounds like a survivor bias situation. How can we know how many criminals there are if we only know about the ones we know about?
> you were to implement something like this, then criminals would quickly adapt to improve their operational security accordingly
This just isn't the case. Many criminals use non-encrypted phone calls, leave voicemails, etc. all the time. For example, this recent theft of a gold toilet:
> A photograph found by police on his phone showed a carrier bag stuffed with cash, which was sent on WhatsApp with the message "520,000 ha ha ha".
The only reason that was E2E encrypted is because everyone in the UK uses WhatsApp and they enable E2E encryption by default.
> I think the assumption that criminals would not learn how to use one of the many free E2E encrypted messengers is the deluded and naive position.
It absolutely isn't. Some would, but the vast majority of criminals are not security experts.
It's still a dumb law. Also, the criminals it claims to target (paedophiles) are probably the least likely to get caught, because they're already used to lots of electronic scanning. Though even there, it's not like they're all criminal masterminds. I can't find it now, but there was recently a story about someone who tried to hide child porn just in a deep folder structure like .../secret/do_not_open/i_warned_you/...
Dumb law, but let's use real reasons to argue against it.
I don't believe that the literal typing of code is the limiting factor in development work. There is the research and planning and figuring out what it even is you need to develop in the first place. By the time you know what questions to even ask an LLM, you are not saving much time, in my opinion. On top of that, you introduce the risk of LLM hallucination when you could have looked it up with a normal web search yourself in slightly more time.
Overall it feels negligible to me in its current state.
I think it depends a lot on the task. While you’re right that just typing is rarely a bottleneck, I would say that derivative implementations often are.
Things like: build a settings system with org, user, and project level settings, and the UI to edit them.
A task like that doesn’t require a lot of thinking and planning, and is well within most developers’ abilities, but it can still take significant time. Maybe you need to create like 10 new files across backend and frontend, choose a couple libraries to help with different aspects, style components for the UI and spend some time getting the UX smooth, make some changes to the webpack config, and so on. None of it is difficult, per se, but it all takes time, and you can run into little problems along the way.
A task like that is like 10-20% planning, and 80-90% going through the motions to implement a lot of unoriginal functionality. In my experience, these kinds of tasks are very common, and the speedup LLMs can bring to them, when prompted well, is pretty dramatic.
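To make the "not difficult, per se" point concrete: the actual decision logic of such a settings system might be a dozen lines (a hypothetical sketch - the layer names and precedence are invented for illustration); everything else is UI and plumbing:

    # Hypothetical precedence: project overrides user, user overrides org.
    DEFAULTS = {"theme": "light", "notifications": True, "page_size": 25}

    def resolve_settings(org, user, project):
        """Merge setting layers; later (more specific) layers win."""
        merged = dict(DEFAULTS)
        for layer in (org, user, project):
            merged.update({k: v for k, v in layer.items() if v is not None})
        return merged

    # The org turns notifications off, but the project turns them back on:
    print(resolve_settings(
        org={"notifications": False},
        user={"theme": "dark"},
        project={"notifications": True},
    ))  # {'theme': 'dark', 'notifications': True, 'page_size': 25}

The ten-plus files of forms, endpoints, and persistence wrapped around that core are exactly the 80-90% an LLM can grind through.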
> There is the research and planning and figuring out what it even is you need to develop in the first place.
This is where I have found LLMs to be most useful. I have never been able to figure out how to get it to write code that isn't a complete unusable disaster zone. But if you throw your problem at it, it can offer great direction in plain English.
I have decades of research, planning, and figuring things out under my belt, though. That may give me an advantage in guiding it just the right way, whereas the junior might not be able to get anything practical from it, and thus that might explain their focus on code generation instead?
I really don't mean this in any negative way, but I find it fascinating the wide range of opinions and attitudes in this matter. I find it _so_ hard to imagine having this view myself.
The amount of things I've learned by asking very specific, technical questions of ChatGPT (mostly with web search turned on, though sometimes it's not even necessary) - things I can immediately verify and/or use, such as small bash commands/scripts, visualizations, diagrams - is worth hundreds of dollars per month on its own. These are things I would never have learned otherwise, because they're buried somewhere among the 30 answers/comments, sometimes pointing to 20 more terribly-hard-to-read pages or manuals riddled with content irrelevant to my question, somewhere in the first page of web search results... Maybe it's an attention span question? I certainly won't spend more than 10 minutes reading anything that isn't interesting or required in the most extreme sense of the word for my job (books on quantum mechanics, general relativity, and topology all fall in the former category - bash and pandas documentation fall in neither).
I'm convinced I've saved _at least_ low thousands per week by using coding assistants (mostly Claude Code in my case, but that's personal and likely to change at some point), as evidenced by the amount of work I'm able to finish, get paid for, and maintain. I'm not vibe coding, mind you - most of the time, I have an almost complete mental model of what I want after a couple of hours thinking, and the only thing left to do is type the code, at which point I'd, previously, feel bored, since the fun part (the thinking) is over.
Edit: I have 20 years of experience with code, 15 in the industry as a SWE (been coding since I was 13)
I have had no trouble finding solutions to coding problems by normal searches very quickly. The solution is typically right there in API documentation or one of the first few results.
But I suppose that answers how they may make LLMs profitable. They could cripple or even eliminate normal search until paying for LLMs is the only option.
I find it's either people being flippantly dismissive because it's something they've decided to make part of their personality, refusing to consume any information that may challenge their opinion, or people who just massively lack imagination and creativity, both about what it can do and about what it could do in 1, 5, or 20 years.
AI: I'll get right on it! But before I do, have you had dinner yet? KFC's new finger-licking MEGA feast will bust your hunger for only $19.95. Click here to order.
Me too! Except I wondered why my non-tech wife had stopped complaining about ChatGPT limits, and it turns out she has quietly been subscribing to Pro plan. It's happening.
I have used both free and paid Google Gemini. I'm as cheap as can be. I'm back to the free tier because it's good enough, and presumably getting better.
If they took away the free, I'd pay $20 and be thankful they kept it at $20.
I love doing things myself... I mow my lawn, change my oil, change my water heater, and try to never use frameworks or libraries. But not using LLMs seems insane. If they weren't free, you wouldn't use them?
The majority of the time I’ve used an LLM, it’s failed to do the task properly. The times that it worked are the times that a bit of Googling would have solved. I’m not OP, but I’m not at the point where I would spend money on an LLM either.
I have almost never had an LLM not do the task I wanted. Either I'm asking it very easy things (not really - you definitely couldn't Google the entire task of what I wanted, although obviously you could Google each tiny subsection), or it just matters that you scope what you're asking for well.
But as a general statement, you can't just Google a comprehensive summary about beta glucans from chanterelle mushrooms - dosages, cooking methods, immune benefits and processes - and get a 10-minute read about exactly what you asked for. But with Gemini deep research you can.
I’ve started with tasks that I understand at an expert level myself. The LLM has invariably gotten surface-level work correct and subtle details wrong. Given those errors, I’m not willing to use them for things I don’t understand.
LLMs are good at generating text that sounds authoritative. They’re great for creative writing to share for a laugh with friends. I’m not at the point where I’m willing to use them for important work, let alone pay for them.
(I’ve yet to try them as a coding assistant, though. Maybe that’s the missing link.)
I've done this too, and sure, it's not perfect. But it's better than the average person in my industry. So, 1, that means a decent chunk of my industry could use it and be about as good as they are now.
2, unless I magically have a plan for talking to an expert in HVAC repair, and not just an idiot in HVAC, I can diagnose my HVAC unit with AI just fine. And I did. And no, it wasn't as simple as "well duh, every post online says it's the large capacitor".
That may be an issue of going from a CRT TV to an LCD TV. As far as I am aware, there was no software manipulation of the video input on a CRT; it just took the input and displayed it on the screen in the only way it could. Newer TVs have all kinds of settings to alter the video, which take processing time. They also typically have a game mode to turn off as much of that processing as possible.
Why should the user care whether the lag is introduced by the software in the controller, the software in the gaming console, or the software in the TV?
The lag is due to some software. So the problem is with how software engineering as a field functions.
I hear it claimed that you're only supposed to enable game mode for competitive multiplayer games -- but I've found that many single player games like Sword Art Online: Fatal Bullet are unplayable without game mode enabled.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit with nothing I could do about it when playing on a "gaming" laptop, yet I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
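For anyone who wants to reproduce that measurement: mirror a millisecond-resolution clock to both displays, photograph them together with a fast shutter, and subtract the two readings. A rough sketch of such a clock (tkinter is my arbitrary choice here; any renderer that updates every frame works):

    import time
    import tkinter as tk

    # Millisecond counter to mirror across both displays. Photograph the
    # two screens in one shot; the difference between the displayed
    # numbers is the relative display latency.
    root = tk.Tk()
    label = tk.Label(root, font=("Courier", 96), fg="white", bg="black")
    label.pack(expand=True, fill="both")

    def tick():
        label.config(text=f"{time.perf_counter() * 1000:9.0f} ms")
        root.after(1, tick)  # reschedule roughly every millisecond

    tick()
    root.mainloop()

Since each panel only refreshes every ~16ms at 60Hz, a single photo is quantized to one frame; averaging a few shots gets you close to the real offset (consistent with the roughly two-frame, ~30ms difference above).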