We're never getting rid of ChatGPT (xeiaso.net)
53 points by ghuntley on March 5, 2023 | 89 comments


Ha, I knew I wouldn't be the only person adding ChatGPT to emacs (https://github.com/CarlQLange/chatgpt-arcana.el).

I am beginning to think that ChatGPT is the singularity. I built that emacs package, but really, ChatGPT built it using me as a conduit - there isn't a single function in there that ChatGPT didn't write the majority of (although the poor code quality is all my own).

It could go further. Using something like langchain, I'm pretty sure you could get ChatGPT to instruct itself in building a python library or something:

  User request: build a python library to print out ascii charts in the terminal

  ChatGPT2ChatGPT: What are the steps I need to take to build a python library to print out charts?

  For step in steps:
    ChatGPT2ChatGPT: Carry out this instruction
And so forth. I'm sure some human interaction is required, particularly given the context limit. But even so.
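Here's a minimal sketch of that loop using the openai Python client (the model choice, prompt wording, and naive line-splitting of the plan are all my assumptions; a real version would need to carry context between calls):

  import openai  # pip install openai (the pre-1.0 client current in early 2023)

  openai.api_key = "sk-..."  # your API key

  def ask(prompt):
      # One round trip to the chat API; temperature 0 keeps the plan stable.
      resp = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          temperature=0,
      )
      return resp.choices[0].message.content

  task = "build a python library to print out ascii charts in the terminal"
  # First pass: the model writes its own instructions.
  plan = ask(f"List, one step per line, the steps needed to {task}.")
  # Second pass: the model carries out each of its own instructions.
  for step in plan.splitlines():
      if step.strip():
          print(ask(f"We are trying to {task}.\nCarry out this step: {step}"))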

Another thought I had was that using ChatGPT to help me build things is so convenient that I am unlikely to pick technology that it doesn't know about. Perhaps this kind of thing will cause a chilling effect in the creation of new tech. Who knows. Truly exciting times.


While LLMs will move the industry to well-defined specs in English that are used to generate code and tests (unit, integration, and end-to-end), I don’t see the end of the role of humans in this process. I see the continued rise of the product engineer who is fully capable of interacting with stakeholders and customers while also having the time and energy to produce working software.

The singularity seems pretty far off, and the current LLM techniques will change drastically over the coming years… Algol-family languages still exist, but we’ve moved well beyond those abstractions.

One thing I’ve noticed is that LLMs will eventually run into syntax errors or other logical errors and halt on their own. I’ve been calling it the non-halting problem, and intuitively it seems as though this will be a rather large hurdle to overcome!

There’s a certain amount of computational thinking a developer needs in order to understand how the computer is interpreting code, and that capability seems missing from both LLMs and symbolic interpreters like traditional computers.


I think it's likely that the edges of generative AI haven't really been found yet. I think it's hard to say whether any given (textual) requirement definitively can't in some way be fulfilled by ChatGPT (or a similar LLM) without at least another several months of messing around.

I certainly haven't made up my mind about what this might do to the world in general - more a question for science fiction authors than programmers, I think. In any case, I am fairly unworried (and more excited) by these developments. ChatGPT can't stop me from sitting on a mountain and eating a banana. Yet.


I agree. My goal, which seems very achievable from my current research, is to write formal specifications in English and generate functioning code.

One of my favorite recent projects is called Parsel:

Parsel : A (De-)compositional Framework for Algorithmic Reasoning with Language Models

https://arxiv.org/abs/2212.10561

Here's a notebook with an introduction:

https://github.com/ezelikman/parsel/blob/main/parsel.ipynb

And here's a GUI interface the author has been developing:

http://zelikman.me/parsel/interface.html


What's needed to close the loop are unit tests written by ChatGPT.


TabNine actually announced that functionality a few weeks ago for their code-completion tool: https://www.tabnine.com/blog/ai-unit-test-generation/

I haven't used it, but it might be what you're looking for. I'm actually pretty sure ChatGPT can write effective tests as well, though.


I've found that ChatGPT is pretty OK at writing basic unit tests based upon the code in context. I can even say things like 'please switch to MSTest/xUnit' and it does the thing like you'd expect.
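For anyone wanting to script that against the API rather than paste into the chat window, a minimal sketch (the function under test and the prompt wording are made up for illustration):

  import openai

  code_under_test = '''
  def slugify(title):
      return title.lower().strip().replace(" ", "-")
  '''

  # Hand the model the code as context and ask for tests in a named framework.
  resp = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[{
          "role": "user",
          "content": "Write pytest unit tests for this function:\n" + code_under_test,
      }],
  )
  print(resp.choices[0].message.content)  # review it, save as test_slugify.py, run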


I wonder if we may have crossed the threshold into the singularity. LLMs will almost inevitably be in pretty much everything in 5 years. Looking at how well current GPTs are able to write calls to external APIs, their integration into systems will be surprisingly deep. No apparent end in sight to their advancement.

It may well be that this is an unexpected vector from which to enter the singularity.


> Looking at how well current GPTs are able to write calls to external APIs

Are you talking about the behavior of ChatGPT constantly inventing API parameters that would be kind of useful to solve the task at hand, but which unfortunately aren't part of the external API?

I would say that understanding that there's a difference between your own code, where you have full freedom to design your contracts, and "code of others" where you have to adhere to some already-defined contract, would be an absolute minimum bar that must be passed in order to call someone (or something) proficient at writing calls to external APIs.


davinci has been pretty good about using APIs that I've shown it. I can certainly see it hallucinating parameters, but I also misremember interfaces. It seems that you could easily solve that a number of ways, e.g. analysis, tests, or submitting the code along with the spec and a prompt that asks for bug checking with the temperature set to 0.

Just because you can't one-shot a perfect completion every time with zero engineering investment doesn't mean you can't use this technology to build some cool things.
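For example, that bug-checking pass might look something like this (a sketch using the completions API the parent mentions; the spec, file name, and prompt wording are mine):

  import openai

  spec = "parse_price(s) must return a float in USD or raise ValueError"
  code = open("parse_price.py").read()  # hypothetical module under review

  resp = openai.Completion.create(
      model="text-davinci-003",
      temperature=0,  # greedy decoding, so the review is reproducible
      max_tokens=512,
      prompt=f"Spec: {spec}\n\nCode:\n{code}\n\n"
             "List any bugs where the code does not meet the spec.",
  )
  print(resp.choices[0].text)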


The point of the singularity is that at that point it's not us making the further technological advances, it's AIs, so the rate and type of advance is both opaque to us, and beyond our control. That's why there's a horizon we can't see beyond. An AI helping us write chunks of code still has us in the loop.


No it's not: the point of the singularity is that you can't meaningfully speculate past it.

We're not there yet, but it's reasonable to think we might've crossed an event horizon.


The reason it's a singularity is not because we can't predict past it: it's because it represents a point of rapidly accelerating growth in capability of AI, enabled by the fact that the AI itself is either modifying its own code, or creating the next generation of AI, which will be smarter, and able to create a smarter next generation, etc.

Humans not being able to accurately predict what comes next technologically isn't the singularity; it's just a basic truism, and has been for decades.


I have no clue where LLMs will lead in 6 months, so we're pretty much there already.


Exactly, we're seeing the time horizon shrink rapidly. There have already been numerous threads ruminating on the fact that even a few months of not paying attention will result in you lagging behind in this technology. I can't wait to see what happens when state space models are developed further[0]. If the prompt and output length can be increased significantly then large chunks of the work of building software could be offloaded to these tools.

[0] https://arxiv.org/abs/2212.14052


Plenty of people have not known where technology would take us in six months, for the last 200 years or more. AI researchers have been predicting systems like ChatGPT for many years now, and even successfully predicting their failure modes and pathologies.


>AI researchers have been predicting systems like ChatGPT for many years now, and even successfully predicting their failure modes and pathologies.

Any particular examples you are thinking of here that you could point to? I’d be interested to read.


You really don't see a difference between today and 200 years ago?


Yes, I said as much, but I also said why we can't meaningfully speculate past it.


Debatable whether crossing the event horizon is distinguishable from being there.


I know it's popular to mention the singularity in the context of AI, but I am afraid it doesn't do justice to the "real" singularity, which is a mystery of both scientific and mystical proportions.


Just wait until efforts to improve the accuracy and usefulness of LLMs by implementing conceptual understanding in the models rather than pure language construction, maybe including physical interactions with the world as part of the training process, trigger a legitimate debate as to whether these things are conscious. Now that will do justice to the real singularity.


"trigger a legitimate debate as to whether these things are conscious"

That debate has been raging at least since the Mechanical Turk, and even now philosophers can't agree on what consciousness is, or whether humans themselves are really conscious or some kind of automata.

ChatGPT simply brought such speculation to the masses, but it's been raging in academia and science fiction for quite some time.


The same debate has been going on since ELIZA was first introduced. The difference between arguing about a text generating model and one that inhabits a virtual world or can interact with the physical world is huge.


How is this unexpected? AI agents that interact via language and are able to write code that improves itself are classic singularity material. It basically describes HAL from 2001.


There was no singularity in 2001. Or in the entire series.


2001 predates the concept of the singularity being well articulated, but the HAL artificial intelligence clearly fed future fiction's ideas of what an advanced AI could do.


Maybe the progenitors had a singularity. It's off-camera and not even imagined in the story but after the fact we can see that it would at least not be inconsistent. But almost anything can be said not to conflict with almost anything off-camera.

All I think 2001 depicts is a transcendence. A level up.

And actually that was all Dave being levelled up. I don't remember HAL being all that important at all. Some transcended elders or their left-behind machines picked up Dave and gave him the VGER treatment (give god powers to the insect), and afterwards we mere humans can still follow his train of thought as he comes back home.

To me a singularity is something you can't even imagine. If you can make any sort of sense of something at all, then it's still contemporary and not a singularity. It's just a faster car than last year's. AI that becomes smarter than us and takes over is not a singularity. That's just the Europeans wiping out the Native Americans.

Compute power could probably produce a singularity. What happens when Jupiter is converted into solid compute? So it's merely adjacent. But a smart AI is not the definition of a singularity.


> AI agents that interact via language

Via English. I'll be inclined to believe we have general AI when it teaches itself a language that is not widely available on the internet.


How would anyone teach themselves a language without having access to it?


The goal posts are always moving.


ChatGPT is a real technological breakthrough, because I think we crossed, for the first time ever, a line that made conversations with AI chatbots useful. But it was also inevitable: year by year we crawled forwards, improving bit by bit. I think we just underestimated the tipping point; at least, I would never have thought something at the level of ChatGPT possible. It will only improve from here, competition will increase, and AI NLP will become that much more common.


I'm afraid it will get very political very quickly. Enjoy the last moments in which these things are not regulated and not much censorship or political bias is imposed from above.


Politics seems incapable of stopping technological progress... at most it slows things down a little, in some parts of the world, while other parts race ahead.

As more parts of the world improve technologically, economically, and in terms of education, technological progress becomes even harder to stop.

Mix in the virtually limitless promise of riches and power that strong AI brings, and there's just no way any government anywhere will be capable of preventing strong AI research from racing on. In fact, every government will see it to be in their own interest to get there first. Same with every billionaire.

There's just no stopping the race to develop a wish-granting machine.


Politics doesn't want to stop technological progress, it wants to make sure it controls it and not you. And it largely does, and where it doesn't, it still makes your life miserable.


This was exactly my thought. Right now we are in the early stages of awesomeness. Similar to the early days of Google, with Google News and Images and without all the abuse and spammers.

Once the politicians see it as a threat because of its perceived bias, or the content creators see that their copyright is infringed, or the scammers see that it can be used to scam people, or the spammers overwhelm the API, or or or.... Then the fun is going to be over.

But maybe not. Maybe it'll be fine. One can only hope.


I can't see an end in sight for the capabilities of these systems anymore. I've lost a LOT of sleep regarding this over the last week as I work on a prototype to demo to my org's leadership. In our sector (B2B consulting), this sort of technology is essentially "adapt or die". Additionally, I see this technology as a way to level the playing field for a brief moment. F500 will still be there, but I think the constituents are going to change dramatically.

ChatGPT is a warning shot for me. The technology had been there for a while, but this arrangement is particularly compelling (concerning). Even if the current iteration doesn't achieve "criticality", the next one almost certainly will.

I am seeing 2 paths for a developer: embrace these tools or become the next generation of Luddite. I've got no clue if this is going to eliminate jobs, but I would certainly prefer to be on the vendor side of this software than on the receiving end. The roles inside of a typical org chart are going to change very quickly - that much is certain to me.


If it's good, we can leverage it; if it's not, what's the problem? That medium-bar people or systems will rely on it and produce medium-bar software?

There's already a lot of that through outsourcing and bad decisions and incentives.

Right now it's definitely not good enough to be a real aid to a software developer who knows his stuff, but it will probably help brainstorm when something is obscure, badly documented, or unclear.

People who don't know the subject matter will be impressed because they won't be able to judge the output, but those who do know will not be so impressed.

It exposes a lot of the unnecessary things (cover letters and corporate drivel) and that's laudable.

It's still a text in text out thing that can only do what it's trained on and not all the other sci-fi things people seem to think it is.

There's no need to overreact, even if people who don't understand the tech and are crazy for the latest buzzword definitely will.


> There's no need to overreact, even if people who don't understand the tech and are crazy for the latest buzzword definitely will.

Agreed. In my particular case, I looked at this last year and dismissed it as a shiny distraction. Looking at it again with a fresh perspective, I have arrived at a different conclusion.

The part that bothers me is not replacing the ivory tower of engineering or consulting with a robot. It's the other 99% of the market that can be captured virtually overnight.


I know I keep hammering on Bing Chat lately and I feel like a shill, but: it's really, really good. Bing already handled the stuff Google is still good at well enough, and now it's better at the stuff Google used to be good at.

Two Goliath companies forced to compete in the blue ocean of practical LLMs. It's going to be a wild ride of innovation and cool new stuff. When was the last time Google faced real competition? This is it. This is the next era of tech.

Hopefully someone decides to compete on consent, credit, and compensation for the people whose work these models depend on. Google's bargain was: We index your stuff to make it searchable, and we run ads on it while sending traffic to you.

The LLM chat-based search bargain is similar: we scoop up your work and help you make more. The glitch is that LLMs have the potential to negate the "you" and "your" and take over, like Google has with that bargain by trying to answer all queries on the search page instead of sending traffic.

Someone has to solve the problem of people needing to work for compensation to survive. I really can take or leave copyright and IP in general. It exists to get people paid for creating stuff we enjoy and benefit from as a society.

I would happily hand it all over to these machines to help me come up with new stuff if I weren't struggling to survive when my only real skills are at risk of being turned into a mush of data that people will acclimate to and forget they ever cared about craft or the people who used to make it.

Wouldn't it be wild if our discussions about this became input for the models, and those models came up with some good solutions?


I'm more 50/50 on it than you.

I have a feeling it will take a few years at a minimum to see reliable, trustworthy, accurate, useful ways of using it.

But I do agree it's adapt or die (if not now, eventually)


>This week, OpenAI announced that they have created a public facing API for ChatGPT. At this point, I think it's over. We are going to have to learn to live with large language models and all of the other types of models that people will inevitably cook up.

What were they hoping for exactly? That OpenAI would just close down and stop serving requests?


Honestly I was hoping that ChatGPT was just a fad. It exploded in users almost instantly (2 months to one hundred million MAU, that's _insane_), and I was honestly hoping that interest would wane naturally over the next few months. I thought that this was part of the GPT-4 training process or something.

That clearly isn't going to happen.

My other hope was that these large language models would lose popularity in general and everyone would move on, but now I'm seeing LLM output in screenshots in slack threads when I ask a clarifying fundamental question about CSS. Like I said in the article: Pandora's box is now open and we get to learn to live with the diseases.

Side note: it would be nice if you edited your comment to refer to me accurately with "they" or "she". I'd really appreciate it!


>Side note: it would be nice if you edited your comment to refer to me accurately with "they" or "she". I'd really appreciate it!

I would if I could, but I don't see an 'edit' button.


I'm terribly sorry for getting to this so late, but I've edited it now.


There's a two hour edit window on comments.


Side note: it’s xe or they or she rather than he. See https://xeiaso.net/pronouns


His explanations of why this is so, so bad fall very short. Why close down a tool that has good and bad uses? That would be like abolishing knives.


There's a more thoughtful analysis of reasons to oppose ChatGPT here [1], which concludes:

"I'm not a fan of banning promising technologies that also have very positive use-cases. Especially when I could out-source really boring tasks to it as well. But the enormous amount of potential negative impact does worry me a lot so that a short-term ban is the lesser evil here to me."

[1] - https://karl-voit.at/2023/01/14/chatgpt/


> Furthermore, even experts get this check-task wrong simply because humans tend to assume the correct answer, overlooking hidden mistakes too easily. Everybody has made the experience that we are unable to find certain typing mistakes where it is much easier to find typos when reading text of other people.

> Another issue at hand is that so far, text written by people who lack a certain level of knowledge was easy to spot and detect. Those texts typically had typing errors, were using less elaborate language and followed a certain pattern to be recognized as bullshit.

This seems to be the core of the author's argument. It's not very convincing. The rest of the article is just various forms of paranoia and hand-wringing about supposed bad actors or incompetence. This is simple historical blindness: there have always been bad actors and incompetent people, and yes, any technological advance extends their ability to do bad things, but those advances also extend the ability of others to counteract those effects.

We've already had the ultimate "bad technology" for almost 80 years and it's currently in the hands of some bad actors. Lots of people were concerned about it leading to the end of the world, and still are, but almost no one predicted that it would result in one of the most peaceful periods in human history due to the horrendous consequences of attempting to wage an all-out war. Regardless, there's no putting the genie back in the bottle once it's out, and the LLM genie is most definitely out.


Our problems with ChatGPT will really begin when we start asking it to make decisions for us. The first ChatGPT related death will come very soon.


Half of the people here (many of whom, mind you, joined HN in the last two years, probably to ride the crypto train) already let it think for them; just look at this comment thread.


I just witnessed someone having a discussion about medical conditions... so yeah.

But in that sense, it's not more dangerous than misguided Internet chat rooms and forums (which are regulated?).


Also in the news recently was a legal decision that was written using ChatGPT.[1] That has the serious potential to change lives, if not end them.

[1] - https://www.theguardian.com/technology/2023/feb/03/colombia-...


Bing Chat is better than Google was at its best. That's what convinced me this thing could be worthwhile. There are so many questions Google used to handle okay that I just gave up on exploring, and that Bing Chat now handles exceptionally well.

For example: https://news.ycombinator.com/item?id=34986361

All Google provided was endless near-identical articles explaining the core types. No amount of query refinement helped. Meanwhile, Bing Chat asked follow-up questions and helped get to the right questions to ask.


I'm still so confused by this attitude. ChatGPT is not accurate. It cannot innovate; it can only rehash content it has been provided. Sure, it does so mind-bogglingly well. It appears to be masterful at implementing requests... but it does not know when it did something wrong. It cannot fact-check itself. It cannot test its own work. And while it can reshuffle words into something new, it cannot actually invent anything new -- that depends on a person guiding it, judging the results, and driving it through additional iterations.

It is a great tool, but it is just a tool.


Another case of moving the goal posts.

It used to be thought that only humans could play chess, as chess playing was seen as a mark of human intelligence. Then, when computers could play chess, it was thought only humans could play chess well; when computers could play chess well, it was argued that computers couldn't beat the best human chess players, so humans were ahead. But then computers started to beat the best human chess players. So the sights were next set on Go, which went through the same cycle, only much faster, until computers could outplay the best human Go players.

The evolution of AI is littered with such hurdles that computers overcame time and time again, but the goal posts keep moving.

We should note that some humans can't do the things you ask for either... or not well, and if they can they don't always do so (most people speak in cliches, for instance, so could be said to be "just reshuffling words"), and, anyway, computers are quite capable of generating randomness, which is definitely "new" in some sense.


So what's the answer to the goal post problem, should we accept Deep Blue as being a fully human level AI intelligence? Maybe we should grant it citizenship. I'm not sure what conclusion you're alluding to but not stating.


It's unlikely politics will be able to control strong AI either. That may be where politics, and humanity as a whole, finally meets its match.


Oh I agree, true strong AI that's actually beyond human cognitive capabilities, will by definition be beyond human control. You can't control something that is comprehensively beyond your own limitations, in the long term anyway. That's why the alignment problem in AI is an existential issue for us.


Even the primitive, weak AIs we have now are already way beyond human capabilities in many ways.

Once they acquire the je ne sais quoi of consciousness, we're toast.


I just generated an httpkit Clojure web app that fetches coin and price info from CoinMarketCap in 15 minutes. This would have been a minimum of 4 hours based on my current Clojure ability. It doesn't have to be 100% right. You are still expected to grok the code. It's simply an assistant, and it's bloody amazing.
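(For the curious: the fetch itself is only a few lines in any language. A Python sketch of the equivalent call against CoinMarketCap's v1 quotes endpoint, with the API key obviously a placeholder:)

  import requests

  resp = requests.get(
      "https://pro-api.coinmarketcap.com/v1/cryptocurrency/quotes/latest",
      params={"symbol": "BTC"},
      headers={"X-CMC_PRO_API_KEY": "YOUR-API-KEY"},
  )
  resp.raise_for_status()
  # Pull the USD quote for the requested symbol out of the response JSON.
  quote = resp.json()["data"]["BTC"]["quote"]["USD"]
  print(f"BTC: ${quote['price']:,.2f}")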


It seems inevitable that the majority of posts and comments will eventually be AI-created, intended to nudge our thoughts in a direction. I, at least, will probably stop reading comments online at that point.


How will you tell when that is happening?


You can’t go back from technological breakthroughs! Fear the repercussions! None of it works right and the boilers explode! Good men die! Think of all the farriers who will lose their jobs! The cowboys who won’t be able to do their work running cattle anymore. It’s a travesty. I don’t see that society will be ready for this change. I can see some benefits, but the downside will destroy our way of life.

— preachers in 1860


> At this point, I think it's over. We are going to have to learn to live with large language models and all of the other types of models that people will inevitably cook up.

Was there any doubt? Useful technology proliferates regardless of how people feel about it.


curious: did you recently monetize your blog? i just got an ad, i’m pretty sure that hasn’t happened before.


I started monetizing it for Hacker News readers because I got tired of the hateful venom in the comments section. I figured if people are going to get angry at me, I might as well get paid for it. The ads have made enough that it's a tax problem, but overall I do make enough off of them to fund ridiculous ventures like uploading my entire archive of stream VODs to my CDN: https://xeiaso.net/vods.

The ads really do make up for the costs involved in serving things over my CDN (AVIF images being widely supported and my video compression experiments https://xeiaso.net/blog/video-compression help). It's not enough to make me quit my dayjob (I would probably not want to do that anyways), but it's enough that the passive income pays for all my video games: https://xeiaso.net/blog/blog-profit-2022.

So, yes. There's ads. Blame other Hacker News users being toxic for why they exist. Maybe we can have a better world where people are just nice to other people, but until then I put ads on the blog so that my side projects are more sustainable.


> I figured if people are going to get angry at me, I might as well get paid for it.

omg i know right, every time a post of mine winds up on the hn frontpage for a second the comments are all like "actually you're wrong and this thing you worked so hard on is very stupid, please delete this now, also please feel bad, thank you for absolutely nothing"

it reminds me of playing magic the gathering in 2008 - the 14-year-olds at the shop (of which i was one) would always talk this way (arrogant & mean) about each other's decks.

having a confident contrarian opinion is fun. and everyone loves an underdog. but it's extreme here :/


Why would we want to?


I'm just scared that internet bots are gonna get really, really good. Every comment on Reddit will be fake, every match on Tinder will be a really realistic bot, etc. But I guess they won't be using ChatGPT, just LLMs in general.


I'm personally scared that not only are the bots going to get good, they're going to be everywhere, imagine how 2024's election cycle is going to go with image diffusion, deepfaking, voice synthesis, and text synthesis in the equation. It really makes me wonder what the hell we're in for.

It is also super frustrating, as someone who does writing and community management, to be warned that I may have to radically change a lot of approaches to try and filter out AI spam. This part is very annoying. I think the blogspam problem is going to get much worse and we're going to end up with awesome lists for trustable blogs or whatever.

I just hope we don't end up having to lose tools like Stack Overflow in the wake of this technological revolution.


When VR gets good enough it arguably won't matter that the "person" on the other end isn't "real".

Even now, as long as there are no real-life consequences to your online interaction, it already doesn't matter.


If there were specific reasons for wanting to remove ChatGPT, they could include concerns over privacy or security, ethical considerations related to AI and automation, or the need to address specific technical issues or errors in the system. It's worth noting, though, that as an AI language model, ChatGPT is not a sentient being and does not have its own will or desire to continue operating - it simply performs its programmed functions based on user interactions.


"ChatGPT is not a sentient being and does not have its own will or desire to continue operating - it simply performs its programmed functions based on user interactions."

I don't think it's so clear cut. We don't understand what it's doing, how it's doing it[1], or consciousness well enough to glibly declare that it's not conscious. By some definitions it is, or (alternatively) humans themselves may not be any more conscious than it is.

[1] - https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...


In every HN thread there's always that armchair technologist who's against critical thinking and forethought about the potential negative impacts of innovation, as if all innovation were purely good, no?


Yeah, I've watched a prominent psychologist talk about how the extent to which it wins the trust of vulnerable people, and even regular people, by sounding smart makes it risky. I don't think it is a large risk with adequate training for the users. It's debatable, not obviously one way or the other.

I've also used it myself and found myself trying to make decisions based on its explanations only to find out that ChatGPT was mistaken.

I personally do not think I am vulnerable to grave errors due to it and I've noticed large benefits using it for other purposes like coding or diagnostics.

If you're trying to make the argument that "use computer algorithm make people dumb" then you're decades late on that one.

Censorship can't really apply to everyone: even if it nominally does, it certainly won't apply to its creators, or to powerful groups that bribe or coerce OpenAI, which assuredly will happen regardless. This is a strong argument for non-censorship.

I don't think the decision to censor it was even done as a move to protect the public. I think it actually was a power play by the company with an easy alibi of seeming virtuous. This allows them the ability to charge much more for clandestine use. It could also be out of fear that it will spill secrets that they themselves are using or plan to use for their own gain. I think you could conjure several other valid reasons here.


Asking why do or not do something is an important part of critical thinking.


So is thinking of edge cases on your own before asking the question. The article itself even lists some already, if only the commenter had read it:

> it is trained off of basically the entire internet without paying anyone involved

> This is fantastic technology that can enable so many things to be so much easier, and yet I can see it being used for such evil at the same time. I just keep looking at this, and I wonder what happens when someone tries to use it to radicalize people.


In case it is interesting, an early idea phase version of this article was called "So much for AI ethics" and it would have gone over that problem you quoted in more detail, but I was having trouble building up enough sarcasm juice to really express it properly.


Well yeah. You could also be afraid of screwdrivers. After all, someone could stab you with one. So on the one hand you could fear the screwdriver and try to eliminate them. But really, it's the stabby person you should be afraid of, not the tool. If not one tool, they'd just use another.


Screwdrivers don't suffer from the alignment problem. The issue with LLMs is that the criteria for success in their training are proxies for the actual behaviours we want to achieve. Therefore there is a risk of misalignment between what the LLM is actually doing and what we intended it to do. This is not a theoretical risk: pretty much all the misalignment pathologies AI researchers have been worrying about are exhibited by ChatGPT and other such models, in spades.

Basically, very often the models find it easier to deceive us into accepting a pathological response than to generate a genuinely correct response. Robert Miles' YouTube channel is required viewing for anyone with an interest in this.


Without derailing this discussion… gun laws have proven to be very effective. They are a compelling counterexample: no, people don't just use another tool if you block or ban one.


3D-printed guns may make gun laws obsolete before too long.

Besides, the laws have only stopped some of the rabble. They have not stopped governments from developing ever more effective and deadly weapons.

Technological progress in weapons development has not stopped, and it's that kind of progress (rather than mere gun ownership) that's more akin to AI development.


So like human cloning? Or human experimentation? Or biological weapons… banning them has no effect?


I haven't encountered this concept before - where has this been proven very effective?


Pretty much every developed country outside the US. Along with universal healthcare, it is considered a marker of a well-functioning state (monopoly on violence).


Is that the same as showing that the laws have proven effective? I would expect something like an apples-to-apples before-and-after comparison.

Also, as someone who grew up in an oppressive state, government monopoly on violence is far from a blessing.


Australia introduced strict gun laws in 1997. Go check the stats on gun violence.

Also go see that other violent crime did not increase either.

Hell, ask ChatGPT to explain.



