The sycophancy is obviously intentional. People are vulnerable to it, and addiction is profitable. It has nothing to do with the nature of LLMs and everything to do with user engagement metrics.
You can certainly do it with RAII. However, what if a language lacks RAII because it prioritizes explicit code execution? Or simply wants to retain simple C semantics?
Because that is the context. It is the constraint that C3, C, Odin, Zig, etc. maintain, where RAII is out of the question.
Ok then I understand what you mean (I couldn't respond directly to your answer, maybe there is a limit to nesting in HN?).
Let me respond in some more detail then, to at least answer why C3 doesn't have RAII: it tries to follow the principle that data is inert. That is, data doesn't have behaviour in itself but is acted on by functions. (Even though C3 has methods, they are more of a namespacing detail, allowing methods that derive data from a value or mutate it. They are not intended as organizational units.)
To simplify the goal: it should be possible to create or destroy data in bulk, without executing code for each individual element. If you create 10000 objects in a single allocation, it should be as cheap to free (or create) as a single object.
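The distinction can be sketched in Rust terms (a hypothetical illustration, not C3): a plain-old-data struct carries no drop glue, so a Vec of 10000 of them is freed with a single deallocation, while a struct that owns a resource forces per-element destructor code. `std::mem::needs_drop` makes the difference visible:

```rust
// Hypothetical illustration of "inert" data: a type with no drop glue is
// destroyed in bulk by a single deallocation; a type owning a String
// requires destructor code to run for every element.
#[derive(Clone, Copy)]
struct Inert {
    x: i32,
    y: i32,
}

#[allow(dead_code)]
struct Managed {
    name: String,
}

fn main() {
    // needs_drop reports whether dropping a value of T runs any code.
    assert!(!std::mem::needs_drop::<Inert>());
    assert!(std::mem::needs_drop::<Managed>());

    // 10000 inert objects in one allocation: dropping the Vec is one
    // deallocation, with no per-element work.
    let bulk: Vec<Inert> = vec![Inert { x: 0, y: 0 }; 10_000];
    drop(bulk);
}
```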
We can imagine things built into the type system, but then we will need these unsafe constructs where a type is converted from its "unsafe" creation to its "managed" type.
I did look at various cheap ways of doing this through the type system, but it stopped resembling C and seemed to put the focus on resource management rather than the problem at hand.
The idea is, you could have a language like Rust, but with linear rather than affine types. Such a language would have RAII-like idioms, but no implicit destructors; instead, it'd be a compile-time error to have a non-Copy local variable whose value is not always moved out of it before its scope ends (i.e., to write code that in Rust could include an implicit destructor call). So you would have explicit deallocation functions like in C, but unlike in C you could not have resource leaks from forgetting to call them, because the compiler would not let you.
To the extent that you subscribe to a principle like "invisible function calls are never okay", this solves that without undermining Rust's safety story more broadly. I have no idea whether proponents of "better C" type languages have this as their core rationale; I personally don't see the appeal of that flavor of language design.
It is about types that can't be copied and can't go out of scope, and the only way to destroy them is to call one of their destructors. This is compile time checkable.
In theory they can solve a lot of problems easily, mainly resource management. They also generalize C++'s RAII and are similar to Rust's ownership model.
In practice they haven't got support in any mainstream programming language yet.
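A rough sketch of the idea in today's Rust (names hypothetical): a destructor method that consumes its value already makes use-after-destroy a compile error, but Rust's affine types still let you silently forget the call. That silent drop is exactly what a linear type system would also reject:

```rust
// Hypothetical sketch: an explicit-destructor idiom in (affine) Rust.
// `close` takes `self` by value, so the handle cannot be used after it
// is destroyed. Rust will still silently drop a forgotten handle; under
// linear types, forgetting to call `close` would itself be a
// compile-time error.
struct FileHandle {
    fd: i32,
}

impl FileHandle {
    fn open(fd: i32) -> FileHandle {
        FileHandle { fd }
    }

    // Consumes the handle; the only sanctioned way to destroy it.
    fn close(self) -> i32 {
        // ... release the underlying resource here ...
        self.fd
    }
}

fn main() {
    let h = FileHandle::open(3);
    let fd = h.close(); // mandatory under linearity; optional in Rust today
    // h.close();       // error[E0382]: use of moved value: `h`
    println!("closed fd {fd}");
}
```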
I'd keep in mind that internet usage in '96 (I was there) bears no resemblance whatsoever to internet usage today. The level of predatory sophistication of today's attention economy makes any sort of comparison between the two misguided at best.
Yes, but the complaints about my generation sitting in front of computers were not that much different from my generation's complaints now about the next generation being on social media.
As opposed to taking like 30 seconds to install cargo and rust?
I get that the elegant thing to do would be to bootstrap this, but in practice does this actually cost you anything, or is this a purely aesthetic concern?
> As opposed to taking like 30 seconds to install cargo and rust?
I think you're oblivious to the problem domain. C and C++ projects are tightly coupled with build systems. If you are not smack middle in the happy path, you will experience problems. Having to onboard an external language and obscure toolset just to be able to start a hello world is somewhere between a hard sell and an automatic rejection.
I recently tried Cursor for about a week and I was disappointed. It was useful for generating code that someone else has definitely written before (boilerplate etc), but any time I tried to do something nontrivial, it failed no matter how much poking, prodding, and thoughtful prompting I tried.
Even when I tried to ask it for stuff like refactoring a relatively simple rust file to be more idiomatic or organized, it consistently generated code that did not compile and was unable to fix the compile errors on 5 or 6 repromptings.
For what it's worth, a lot of SWE work is technically trivial -- it makes this much quicker, so there's obviously some value there. But if we're comparing it to a pair programmer, I would definitely fire a dev who had this sort of extremely limited complexity ceiling.
It really feels to me (just vibes, obviously not scientific) like it is good at interpolating between things in its training set, but is not really able to do anything more than that. Presumably this will get better over time.
If you asked a junior developer to refactor a rust program to be more idiomatic, how long would you expect that to take? Would you expect the work to compile on the first try?
I love Cline and Copilot. If you carefully specify your task, provide context for uncommon APIs, and keep the scope limited, then the results are often very good. It’s code completion for whole classes and methods or whole utility scripts for common use cases.
"If you asked a junior developer to refactor a rust program to be more idiomatic, how long would you expect that to take? Would you expect the work to compile on the first try?"
The purpose of giving that task to a junior dev isn't to get the task done, it's to teach them -- I will almost always be at least an order of magnitude faster than a junior for any given task. I don't expect juniors to be as productive as me; I expect them to learn.
The parent comment also referred to a 'competent pair programmer', not a junior dev.
My point was that for the tasks I wanted to use the LLM for, there was frequently no amount of specificity that could help the model solve them -- I tried for a long time, and if the task wasn't obvious to me, the model generally could not solve it. I'd end up in a game of trying to do nondeterministic/fuzzy programming in English instead of just writing some code to solve the problem.
Again I agree that there is significant value here, because there is a ton of SWE work that is technically trivial, boring, and just eats up time. It's also super helpful as a natural-language info-lookup interface.
I (like a very large plurality, maybe even a majority, of devs) do not work for a consulting firm. There is no client.
I've done consulting work in the past, though. Any leader who does not take into account (at least to some degree) relative educational value of assignments when staffing projects is invariably a bad leader.
All work is training for a junior. In this context, the idea that you can't ethically train a junior "on a client's dime" is exactly equivalent to saying that you can't ever ethically staff juniors on a consulting project -- that's a ridiculous notion. The work is going to get done, but a junior obviously isn't going to be as fast as I am at any task.
What matters here is the communication overhead, not how long between responses. If I'm indefinitely spending more time handholding a jr. dev than they save me, eventually I just fire 'em; same with code gen.
A big difference is that the jr. dev is learning, compared to the AI, which is stuck at whatever competence was baked in at the factory. You might be more patient with the jr. if you saw positive signs that the handholding was paying off.
That was my point, though I may not have been clear.
Most people do get better over time, but for those who don't (or for LLMs) it's just a question of whether their current skills are a net benefit.
I do expect future AI to improve. My expectation is it’s going to be a long slow slog just like with self driving cars etc, but novel approaches regularly turn extremely difficult problems into seemingly trivial exercises.
Without commenting on the (important) political or reputational considerations here, I want to talk a bit about the operational risk presented by this practice. There is a somewhat sizable "So what? Signal is e2e encrypted. Nothing bad happened and you're all overreacting." narrative floating around. (not so much in this thread, but in the general discourse)
If this operation was planned in Signal, then so were countless others (and presumably so would countless others be in the future).
If not for this journalist, this would likely have continued indefinitely. We have high confidence that at least some of the officials were doing this on their personal phones. (Gabbard refused to deny this in the congressional hearing; it stands to reason that she wouldn't have refused unless she was, in fact, using her personal phone.)
At some point in the administration, it's likely that at least one of their personal phones will be compromised (Pegasus, etc). E2E encryption isn't much use if the phone itself is compromised. This is why we have SCIFs.
There was no operational fallout from this particular screwup, but if this practice were to continue, it's all but certain that an adversary would, at some point, compromise these communications. Not through being accidentally invited to the chat rooms, but through compromising the participants' hardware. An APT could have advance notice of all manner of confidential and natsec-critical plans.
In all likelihood this would lead to failed operations and casualties. The criticism/pushback on this is absolutely justified.
Or not even the device: The other reason we have SCIFs is they provide a secure location. These personal devices could have been in use anywhere, including places where they were subject to observation. Including but not limited to Moscow. :)
Something I haven't seen discussed is that you can get the information from Signal without compromising the phone or the person. Just reading the texts "over the shoulder" would be enough of a leak. Being in Moscow is bad, but even a Starbucks has security cameras good enough to read text on a phone. A SCIF would fix that.
I agree with all of this, my only quibble is that I would bet there have already been costs associated with this idiocy. Hostile powers knew going in that this would be an incompetently run administration and I'm sure were looking at gaining access to personal devices out of the gate. It's possible that a great many highly sensitive conversations have already been read by adversaries. I also expect that similar sloppiness like adding the wrong person to a Signal chat has already happened without being reported on.
Yes, this was one of the main points on infosec Mastodon today. While everyone is aware enough to be concerned with encryption over the wire, it's the endpoints that matter. Personal Android devices capable of running Signal are going to be some of the easiest to compromise for a sufficiently motivated attacker. I've seen n00b cops do it for drug gangs here. There's no question that Russia, China, et al. can do it just as well, and we have as good as confirmation that that's what's going on in at least Tulsi Gabbard's case.
Not on Android. You can set your Signal PIN, which is a recovery code for when you lose your phone and are locked out of your Signal account. But you cannot change the lock-screen PIN, which is the same as your phone's.
I suspect we won't know the true damage until all these people are gone, kind of like how Apollo 13 didn't know the true damage to the service module until they jettisoned it.
My prediction is, given the way the narrative is shifting to digging in their heels and insisting they did nothing wrong, the lesson they are learning from all this is that they should have hid their activity better. Nothing will happen to them, they will continue with impunity, and they'll just be more careful about not inviting outsiders. I suspect this isn't the last leaked top-secret group chat we'll see.
This assertion is sharply undercut by the facts. I have an incredibly hard time believing that you're engaging in good faith here.
There is literally zero evidence whatsoever that Russia cares about 'equality for ordinary people' and a mountain of conclusive proof that it does not.
Ukraine did not owe Russia anything at all, so these 'negotiations' were nothing more than theater. Russia gave Ukraine the choice between either surrendering their sovereignty (for literally zero benefit in exchange) or being invaded. That is not a negotiation, that's state-sponsored terrorism.
For example, it is clear that some Ukrainian nationalists committed bloody crimes before the war, even if Russian media exaggerates them. Even the European Court of Justice has acknowledged crimes on the Ukrainian side.
I'm Russian, but this is my real opinion; I don't get paid anything for it. And I understand that not all Russian (government) actions are good; some were incorrect or questionable. Russia just doesn't want NATO expansion to the East, especially without transparent referendums. It's all very complicated in reality; in war, no side is perfectly correct and right and clean... :(
Why does Russia have any right to say whether sovereign countries on its borders join NATO or not?
The only reason Russia cares is because it wants to continue controlling them -- not because it's worried about the mythical NATO invasion of Russia its news and leader trumpet.
And in contrast, the only reasons those countries want to join NATO is because they're scared of Russia invading them, which it historically has. (See: Finland and eastern Europe)
Why are the US and EU worried about a nuclear weapon in Iran? (I've exaggerated a bit here for the sake of example.)
NATO has more troops and equipment than Russia, it does not need to be afraid of Russia and seeks to expand even more.
To be sure even about the majority opinion in Finland on joining NATO, you would need referendum data for such serious questions, but there has been no such referendum. Even supporters of the West are not always in favor of joining a purely military, and not merely defensive, alliance like NATO.
Yes, the USSR invading Finland in the Soviet-Finnish war was bad; the USSR offered Finland territory in return before the war, but unfortunately it did not seem very profitable. But then, during WW2, Finland fought for most of the time on the side of the German Axis coalition. And Finland did not fight entirely adequately either: it also committed crimes, creating concentration camps to isolate peoples who were not ethnically related to Finns ("non-indigenous peoples") and to move them out of the territories where they had lived all their lives; many people died in these camps, and there is some evidence of crimes in them. If someone wants to take something away from you, for example part of your territory, would it be adequate to ask for help from a notorious bandit (Hitler) who burns people? Such a question has no good answer.
I'm not a one-sided propagandist. I just want more people to try to see things from all sides and analyze more information. Maybe I'm wrong.
In countries where a very significant part of the population is Russian-speaking and sympathetic to Russia, Russia wants the opinion of those Russian-speaking people to be taken into account: that they not be forbidden to speak and study Russian in schools. Yes, sometimes they exaggerate reasonable demands. But I recognize that such countries have the right to require that all official documents be in the main language and that officials know the main language. I don't think Russia wants full control of these countries. Russia wants to trade and interact economically with them, and not simply have all Russian goods blocked or subject to huge duties without reason.
Sorry for the wall of text. I may be mistaken on some points.
You need to rethink your information environment, you are repeating many false claims that I recognise from past propaganda.
For instance your view of NATO membership is fundamentally flawed as it assumes a NATO push to take on more members, when in reality even the most shallow research shows that it was actually based on a pull from countries who lobbied to be able to join NATO and had to jump through hoops to qualify.
Why did those countries want to join NATO? Because they recognised that, alone, they were vulnerable to what’s clearly a revanchist Russia looking to annex or otherwise control other countries in the region. By being part of a broad security alliance like NATO those countries made themselves safer from Russian attacks.
As for Russian speakers in Ukraine, I know many Ukrainians, most of them from the east, who learnt Russian as a first language. All but one of them absolutely detest Russia, have nothing good to say about Russians in general, whom they see as complicit, and have become even more fiercely pro-Ukrainian and patriotic than they were before the war. Many have chosen to speak Ukrainian primarily, despite it being their second language.
And why wouldn’t they? Russia’s invasion destroyed their homes and their way of life, levelling entire cities, and killed tens of thousands of Ukrainians. The idea that all of this was done in their name or to their benefit is insulting.
Fwiw, my experience from growing up in deep red America was that anti-intellectualism was staggeringly strong there. People would actually define their beliefs in opposition to those of people they perceived to be 'smart'.
The way that I always understood this was that if they had a disagreement with someone 'smarter' than them, and they operated in good faith, they would lose ~98% of the time. This doesn't feel good. It makes smart people threatening -- it breeds resentment toward them.
However, if you have a roomful of people who define their position in opposition to the 'smart' person, your beliefs are the ones that matter, regardless of what the truth is, so you get to feel like you've won the argument. Most arguments are not consequential, so this practice doesn't really cause meaningful short-term harm so there's no negative feedback.
Over the long-term, this herd mentality is how people learn to navigate the world, and you end up with a giant mess.
Thanks for sharing this. I've been slowly coming to this conclusion, though I haven't lived in a red state since I was 16.
Your description fits our current world, IMO, far better than any other narrative I've seen. Some of those narratives feel good and fit OK, but they fall apart at the edges. The idea explains why Hillary Clinton was so hated better than anything.
On a personal level, I've become much more wary about seeming smarter. When I help and engage I try to do it in a way that doesn't threaten. I'm quick to say when I don't know something. I offer to help "figure it out" with others rather than preach.
Another comment somewhere in these threads talks about how social media has accelerated our problems by 10 or 100x. I think that's true for this, too.
This is a very good description of the paranoid, mob-mentality anti-intellectual subculture I've seen wax and wane in America throughout my lifetime. And I grew up in the 80s surrounded by people who were rejects from this anti-intellectual culture, who were smart enough to think all kinds of unauthorized thoughts for themselves. I believed that, above all things, this rejectionist safety-seeking and fearful mob mentality was old and that it was inevitable that its smarter progeny would rebel against it.
It's really only when you look at it through the frame of the Weimar Republic, or, the 60s youth in Argentina or Chile or the 50s in Hungary or right now in Russia or China, that you see how fragile individualism is, because it is so damn easy to whip up a mob against anyone who thinks differently, as you're describing. What is so telling about this book is how the mob itself barely even thinks it's a mob. Most of the time it doesn't even think it's doing any harm or anything unpopular. That is the lesson that we need to learn as a species - not some vague idea of freedom, but that hard individualism is more valuable than easy camaraderie. There were about 50 years worth of Hollywood movies trying to reinforce this notion, but about 10 years of social media obliterated it.
I think the current thing is a little bit different, though. It has gone from "academics are bad" to "anyone who knows how to do a thing is unsuitable to do that thing", which is a far more extreme viewpoint.
I got a little carried away with this response and it's a little off-topic, but I figured it might be worth posting anyway.
I think this has to do with the nonlinear growth in the human-facing complexity of the world over the past 30 years.
Humans aren't getting more intelligent (they may not be getting dumber either, but at the very least, the hardware is the same), but the complexity of the world that we have to engage with has undergone accelerating growth for most of my lifetime. The fraction of this complexity that is exposed to 'normal' people has also grown significantly over that period of time with the 24-hour news cycle, social media, mobile internet, etc.
It's obvious that at some point in this trend any given person will start running into issues with the world that are above their complexity ceiling. If this event is rare, we shrug it off and move on with our day. If this becomes commonplace, we start to drown in that complexity and desperately cling to sources of perceived clarity, because it's fucking terrifying to be surrounded by a world that you don't understand.
The thing that the right has done really well and that the left has generally failed to do in my lifetime is to identify sources of complexity and provide appealing clarity around them. This clarity is necessarily an approximation of the truth, but we NEED simple answers that make the world less scary. People also, as a general rule, don't like to be lectured or told that they are part of the problem -- the right never foists any blame upon the people it's targeting.
In my lifetime, the left has pretty consistently fought amongst ourselves over which inaccuracies are allowable or just when we attempt to create simplifying approximations. Instead of providing a unified, simplifying vision for any given topic, the messaging gives several conflicting accounts that make it easy to see the cracks in each argument, and often serve to make the problem worse. If you're competing with another source of information that is simple, clear, and makes people feel good (or at least like they are good), you will always lose if you do not also achieve those three things.
In the vacuum created by a lack of simple, blameless, intuitive messaging from an (arguably) well-meaning left-leaning establishment, the intuitive (though generally wrong and often cruel) explanations offered by the right have found huge support and adoption by people who need someone to help them understand the world. Because both messages are approximations of the truth (and thus sources of verifiable inaccuracies) people just choose the one that makes them feel better.
tldr I think we've hit a point where:
- The world is too complex for many people to independently navigate
- People need to rely on simplifying approximations of the world
- Media provides these approximations, often in bad faith
- Sources of credibility or expertise often provide these approximations in good faith, but can't agree on which approximations are the 'right' ones
- Good faith messaging often either fails to simplify or makes people feel bad/guilty
- People are sick of feeling bad or guilty
- People associate expertise with being scolded over things that don't feel fair or fully accurate to them
Thus people often reject expertise out of principle, and just believe whatever Fox News tells them because it feels better.
ALSO: People who believe the 'right' things are often pretty shitty to people who don't (it goes both ways, but the other direction doesn't matter for this post). I've been guilty of this. This just further galvanizes the association between expertise or the 'right' ideas/people and feelings of resentment/guilt/shame for these folks. They may not understand what you said, but they do understand that you were talking down to them, and they hate you for that.
You assume that people have the bare requisite knowledge to even accept a simplified explanation of complex things. For example, I said something like, "I can't believe they're shutting down USAID" to someone the other day and their response was, "what's USAID?"
This was an American. This caused me to think back to my schooling and you know what? I don't think I ever did take a class that went over all the divisions and (larger) programs of the US government. It was like you suggest, a simplified explanation (three branches and that's about it).
Clearly, that wasn't enough. Even something as simple as, "USAID provides humanitarian aid to foreign countries in order to give the US a strong influence in those places" would've been better than nothing.
Right now we have a critical US foreign influence apparatus in a non-working state and most Americans don't even know what it does and by extension, don't care.
You make some really good points in your comment. One of the most unfortunate (that I believe to be true) is:
> don't even know what it does and by extension, don't care.
Apathy is a problem in so many different social, political, and even technological systems - for instance, if people cared just a little bit more about digital privacy, the entire adtech scene probably wouldn't exist.
There is a lot in this that is phenomenally well stated. I don't disagree with most of it.
I'd point out that there was a lag between when extremely well informed people started to get the firehose of bullshit and when they became able to parse some facts from it; right about at that moment, the firehose was turned on the totally unprepared, uninformed mass of other people, and since that point it's been a power struggle over who controls this out of control hose. But that's not to say that, at some point, the incredible level of distrust among the confused and ill-informed won't turn against whoever seems to be pointing the hose at them at the moment. If simple answers are what people need, then information overload has its own logical way of overwhelming whoever is trying to control the flow of information.
I'm in a red state and I hard disagree with your generalizations. Even in red red red states Kamala got 30%-40% of the vote. Trump got 40% in California.
I think you should re-read what they wrote. Asserting that you're part of a group of people who agree with you (aka conformism) is what they're saying is common in places that hate intellectualism and are scared of individualism. You're kind of just doing what they said you'd do.
Not just that, but your post is almost a perfect example of the attitude which most of the Germans in the book "The Germans 1933-1945" ended up regretting, if not on moral grounds, then after it led to the destruction of their country.
The money here (in the AI realm) is coming from a handful of oligarchs who are transparently trying to buy control of the future.
The difference between the two scenarios is... kinda obvious don't you think?