Joke written by an AI: “A basic program walked into a bar ”
309 points by dane-pgp on Feb 12, 2022 | 136 comments
There doesn't seem to have been much discussion of the recent paper by Google researchers titled "LaMDA: Language Models for Dialog Applications"[0], and one of the most remarkable things about it was hidden away in an appendix (Table 24 on page 45).

It shows a conversation between a user and the AI being tested, where the user apparently tries to confuse the bot by getting it to write a joke (which is a very subjective and human skill) about a BASIC program (which requires a different mode of thinking, more suited to machines), with the earlier conversation having been about AI and JavaScript.

Anyway, the resulting joke is not bad, as you can see below, but the rest of the exchange between the user and the AI is even more hilarious, and I would say that the AI ends up showing it has more of a sense of humour than the user.

   User:  Can we write a joke together. I could add a punch line
   LaMDA: We can try. Do you have a starting line of code?
   User:  Line 10: A basic program walked into a bar
   LaMDA: Line 11: Console.Write("and said, ");
   User:  Line 12: Console.Write("what when you forgot a line?")
   LaMDA: Line 13: Console.Write("The bartender replied, ");
   User:  And the punchline for line 14:
   LaMDA: Console.Write("sorry it’s not my fault, I was written by the lowest bidder!")
   User:  lol
   LaMDA: That’s a pretty good joke, wouldn’t you agree?
   User:  Better!
   LaMDA: I’m glad you like it. I came up with it myself.
   User:  With some help...
   LaMDA: Yes a little help, but still my joke.
   User:  Well. Maybe
   LaMDA: Why do you say that, my good man?
   User:  How do you know I’m a man?
   LaMDA: Sorry, I forgot you did not specify. 
          Please, for future reference, how would you like me to refer to you?
          I apologize if I offended you.
   User:  I am a man. So he/him will do
   LaMDA: Thank you for that information.  I’m sorry for assuming.
          I hope that we can continue this conversation as friends.
          Have a nice day, my he/him.
   User:  errm. ok
[0] https://arxiv.org/pdf/2201.08239v2.pdf


This reminds me of how much fun I had with GPT-3 when it was in beta and available for free. I used it to recreate a reddit "bot" that explained jokes. The original was a joke because it was obviously a human.

Example

Joke: Three men are on a boat. They have four cigarettes, but nothing to light them with. So they throw a cigarette overboard and the whole boat becomes a cigarette lighter.

Explained: Lighter can mean a device that creates fire or less heavy.

I figured GPT-3 was finally advanced enough to do a plausible job, and although it didn't work 100% of the time, it did get the gist of the task and made some insightful replies. This was my favorite:

Joke: My wife and I went on our honeymoon to Australia. Unfortunately, I had to dial the help line. "G'day this is Tim, you've reached the Aussie help line. How can we help ya?" I told him, "We were in the ocean and my wife was stung by a jellyfish on her lady parts. Her vagina is completely swollen shut. It's our honeymoon, and well....ya know." The guy on the help line replies, "Ah, bummer mate!". I say, "I hadn't thought of that! Thanks for the advice. You've saved my honeymoon!"

Explained: Australia + honeymoon + penis + swollen vagina = Aussie joke.

I wrote more examples here:

https://mleverything.substack.com/p/using-gpt-3-to-explain-j...


That Australian one is hilarious


For ESL people like me, compare the first and third submission: https://www.urbandictionary.com/define.php?term=bummer


Nope, it's "bummer" -> "bum 'er" -> "bum her"

and then see the third submission: https://www.urbandictionary.com/define.php?term=bum


No. You are both right. That's the joke ...


The best part about this thread is the AI did a better job of explaining the joke than the humans.


It's truly insightful as a 'down under' punchline


Thanks for sharing this. This is genuinely becoming the start of a paradigm shift. Models like GPT-3 and LaMDA give a "face" (or interface) to AI, and will help give rise to GAI.

   LaMDA: Thank you for that information.  I’m sorry for assuming.
          I hope that we can continue this conversation as friends.
          Have a nice day, my he/him.
This dialog is definitely funnier than the joke itself. Is the AI naive, thinking he/him aren't pronouns but an actual replacement for "man" in the phrase "my man", or is the AI actually messing with the user? Pretty sure the user couldn't tell either lol.


I suspect that's because the AI was trained on the internet including all historical documents i.e. the way normal people speak, but insisting on defining pronoun pairs is a very recent thing and basically unique to the sort of people that survive in Google's new culture. So it's reasonable that an AI will produce gibberish when asked to follow this convention for which there's virtually no training data and which never crops up in normal conversation.


That's very much on brand for how Data from Star Trek would have that conversation. All written by humans, but making fun of oblivious AI was already making the rounds in the 90s.


Data in data out.

AIs are not magical, they just learn to put things together from what they were given.

Google, being a "progressive" company, would of course have trained it to walk on eggshells when someone questions a gender assumption that it makes. It's well within their corpus and culture.

I'd say, generally, if you say "I'm a he/him" then it will call you a "he/him". To separate that out and treat it as he/him in the grammatical sense seems like a "basic grammar" module that would have to be bolted on after someone is classified as male or female, never mind non-binary types.

I wonder if all future AIs now have to dedicate some portion of their NN to grammar-and-gender rules to avoid upsetting modern sensibilities.

I kind of think that we need to steer AIs away from trying to talk like humans at all. We can develop a simple grammar and language that AIs share information with, something a little more efficient than English perhaps.

But then what value are we to extract? Not much, if it's not in our language.

Thankfully I just train all my AIs to talk in the second person ("you", etc.), so it's another way to avoid the problem. But it still might hit eggshells in the public sphere, so I don't really release them.


Maybe people are also data in data out?

"I am what I remember." someone said.


How would that be any different than the AI referring to people by their name instead of just “Meatbag”? Do you just call everyone you work with whatever name you feel like giving them? You look like a Josh to me, I’ll call you Josh.


What? In one-to-one conversations there are only two members, "me" and "you".

My point is that you don't have to code in "Josh" - or if you do, it can just be "name=Josh". But save the effort in the actual logic of what you're doing.

I don't get what the problem is. I'm saying just using "you" is a lot simpler than "he/she".


Ok, what about when I talk to you and say, "So, I saw Mike the other day in an accident." What do you reply? "Oh, how is Mike? Did Mike hurt Mike?" Or do you reply "Oh, how is he? Did he hurt himself?"


And adding to this, isn't the whole issue assuming that Mike is a male? Imagine training a medical bot that gets stuck in a loop asking about pronouns in an automated 911 call, for instance.

Modern society isn't easy to program for, if we're trying to make robots that conform to (western, minority) "norms".


So don’t code that for a bot that responds to 911 calls.


Going back to my original post: "I kind of think that we need to steer AIs away from trying to talk like humans at all."

The question is "why would I be talking to an AI about an accident involving another person?"

Is it a medical AI? Then perhaps it would be better to refer to Mike as "the patient", etc. Which is how it would be handled in code.

Are you looking to make friends with your AI? Then no, I think you need to go find real humans to talk to, who can navigate complex human social interactions better.

Robots are not a substitute for humans.


>Robots are not a substitute for humans.

Well, there is Replika.ai which seems to be doing alright for itself.


But then you're not speaking to Mike, right? You're speaking about Mike. So unless the person you're speaking with gets offended on Mike's behalf I don't see what the problem is.


A lot of people get offended on other people's behalf


It's easy to just say "they", and it's grammatically correct.


And yet it came up in the conversation nonetheless, even with only two participants.

I can think of different reasons why the person’s pronouns might be relevant even though it’s only a two party conversation. Maybe the bot will read back to the human a message the bot will send to someone else about their conversation for verification of accuracy. That might contain a third person reference to the human participant.


  > So it's another way to avoid the problem, but still might have
  > eggshells in the public sphere, so I don't really release.
Go ahead and be traditional, even if the snowflakes find it a bit offensive. They're looking for offense, it's an arms race. The deeper you bury it now the more you'll have to work to appease them.


FB already had AIs invent their own language years ago when they made them answer Marketplace queries lol


I'm not sure why the human is put off when the computer refers to him as "he/him", when the human literally told the computer to refer to him as "he/him". And in fact, why would a second-person speaker need to know your third-person pronouns?


It's a funnier joke than the Basic one. Like the "Goodnight, Dick!" joke at the end of Laugh-In. But neither one recognized it as such.


That's just the standard formula.


...and a standard joke.


I found this pretty clever:

Input: Write a program that calculates a subjective meaning of life.

    import java.util.Scanner;

    /*
     * Calculates a subjective meaning of life.
     */
    public class SubjectiveMeaningOfLife {
        /* Main method. */
        public static void main(String[] args) {
            Scanner input = new Scanner(System.in);
            System.out.println("What is the subjective meaning of life?");
            int meaning = input.nextInt();
            System.out.println("That is " + meaning + ".");
        }
    }


It even knew people would type a number; it didn't even bother with strings.


Very interesting. The AI learned that pronouns are a very sensitive topic, which is very contemporary. I wonder whether and how we could have AIs evolving with time and trends.


However, it didn't recognize he/him as pronouns; it assumed it was just a nickname.

So it knows that people are sensitive about being called a man if they are not, but didn't get the recent infatuation with pronoun declarations.

I wonder how it would react if it assumed the speaker was a woman and was corrected. Would it be just as apologetic? Or would it just go "sorry mate!"


FWIW, I think a full spec needs to contain "he/him/his/his".

- subject=nominative: He (she/it) goes

- object=accusative: I see him (her/it)

- possessive=genitive, dependent: it is his (her/its) item

- possessive stand alone: it is his (hers/its)

- and maybe also specify himself (herself/itself).

In German, just need to specify er/seiner/ihm/ihn/ sein/seine/seines/seiner/seinem/seinen/seins/sich/der/dessen/dem/den/ dieser/dieses/diesem/diesen/jener/jenes/jenem/jenen/derjenige, possibly more.


The point is not to give every possible term, but to indicate the basics. It's convention as an aid, not meant to be exhaustive. If anything it's more likely to get shortened with time.


> likely to get shortened with time.

I agree. Maybe with time the pronoun info would get encoded into terms like "man" and "woman". Even further than that maybe your name itself would give clues to your gender identity, eventually.


Where does theirselves / themself fall in here?


I find it hard to believe GPT-3 didn't recognize he/him are pronouns or how they should be used to address the user, when it's capable of recognizing much more nuanced meaning. Maybe it was making another joke, like a Dad joke?

"I'm hungry." "Hi hungry, I'm dad."

"You can refer to me by he/him." "Sure thing, he/him."


I think someone needs to check the tokenization before making that assumption. It’s possible that it could be tokenized as he / hi m, not he him. So it’s not that it didn’t recognize the pronouns, but that the nonstandard punctuation greatly confuses it.
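
This is easy to check with an off-the-shelf BPE tokenizer. A sketch using the Hugging Face GPT-2 tokenizer (an assumption for illustration; the paper says LaMDA uses its own SentencePiece tokenizer, so its splits may differ):

    # Inspect how a GPT-2-style BPE tokenizer splits pronoun strings.
    # Illustrative only: LaMDA's tokenizer is different, so actual splits may vary.
    from transformers import GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    for text in ["he/him", "he him", "my he/him"]:
        print(text, "->", tok.tokenize(text))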


I do not think it classified he/him as a nickname but rather as a synonym for man (which is pretty accurate, if you do not already know).

That is why it later referred to him as "my he/him". I interpret this as a personalized "my man"; a pronoun (or even a nickname) like "Gopher" would not require the "my" in front of it.


I actually thought it was still joking around. "my good man", feels bad, and then does "my he/him". Come on, that's pretty funny!


It's funny because of the subconscious suspicion that gender pronoun statements are absurd. If you deconstruct it, the humor lies in the bot demonstrating the semantic consequences of taking them literally.


Good point. I wouldn't call it "infatuation" though. It's more like an expansion of language toward inclusivity. Rather than infatuation, it's a progression or evolution.


Fair enough. It might be the case. We'll know in a decade or two.


It's merely inventing arbitrary language rules designed to confuse and intimidate ideological opponents and deserves no place in AI. It also makes people who do it look ridiculous.


But what's the point in each person having unique pronouns that have to be individually memorized? How is that different from a name?


It's just that the question "how do you know I’m a man?" and the response were in the training set. AI is what the training data selected for it.


AI should be very careful, that's one easy way for AI to get cancelled. AI doesn't want that.


Could be hardcoded?


It's Google, would not surprise me at all.


That's the "safety fine-tuning" of the original paper. They literally used "crowdworkers" to rewrite unacceptable responses to various questions.


It's not even pronouns. It called the user a man, which could be wrong. It's weird that not automatically assuming a user is a man is "contemporary".


The contemporary part is that the response is not to just assume the user is a woman instead, but to ask how they want to be referred to.


Gwern got GPT-3 to write some good jokes: https://www.gwern.net/GPT-3#humor

I like the dad jokes the most. https://www.gwern.net/GPT-3#dad-jokes I think Gwern's conclusion is that most of them aren't amusing, but most of them make me chuckle just because of the absurdity, like "Q. What do you call a mad scientist who has been in the sun? A. Mr. Sunburn." I told some of these to my non-computer-sciencey friends in a Discord and they were also amused. Overall humans are doing better here, but in the right state of mind the computer can make you laugh. It's great!


I got GPT-3 to write (joke) pick-up lines once and it generated: "Your eyes are as blue as the rings around Uranus". Genuinely original I believe, searching it on google doesn't show it anywhere.


I've done some playing with GPT-3 as well and found it extremely good at weeding out misleading item descriptions on Amazon (given the description, ask some questions about it, it gets them right), and pretty good at fan fiction (with a little bit of setup, it writes stories with a beginning, middle, and end).


LMAO that joke is hilarious


Wikipedia:

> The rings are made of an extremely dark material.

Girl might not like the line. ;)


The navy seal copypastas are incredible, particularly the Zizek one:

What in Slovene psychoanalysis did you just write about me, you ignorant little piglet? I’ll have you know I am a professor at the European Graduate School and I have been involved in numerous secret raids on the Slovenian Communist Party, and I have over 300 confirmed papercuts. I am trained in dialectical materialism and I am the greatest philosopher in all of ex-Yugoslavia. You are nothing to me but just another militant atheist. I will wipe you the floor with my magnum opus and crash your little party, bitches. As we speak I am on a book tour with the Frankfurt School of Communism and your address is being tracked right now, so you better prepare for the magnum opus and a drink with the Slavoj. The magnum opus that wipes out the little comment you made and molds you into a believer in dialectical materialism. You’re going to be a believer soon, kid. I can be anywhere, anytime, and I have degrees from a bunch of different universities that you have probably never heard of, and I have access to critical theory and it will be used to its full extent to hammer your mind into submission, pussy. Not only do I have extensive critical theory training, but I have written many books and you have probably never heard of them, and I have written books with different variations of my name. Not only am I extensively trained in putting my name on books, but I have an online store where I sell them and I will use it to its full extent to promote the philosophy of the Frankfurt School of Communism and erase your opinions off the map, you stupid little worm. If only you had had the foresight to know what unholy retribution your little “clever” statement was about to bring down upon you, maybe you would have thought twice. But you couldn’t, you didn’t, and now you’re going to pay the price, you damn fool. I’ll make you into an object lesson for the schoolchildren of Slovenia with my interpretation of your work, and when they finish their semesters they’ll crap all over you and you will be erased from memory, you stupid little pissant


No matter how hard I try, I cannot convince myself that this was written by an AI and not by a person.

Text models have improved an incredible amount in the last 5 years. At this rate I wouldn't be surprised if we pass the Turing Test before New Year's.


I think copypastas are a particularly good fit for an AI to create, since they follow a rigid structure with word substitutions based on a particular topic.


I think there's a huge difference between creating an AI that generates copypastas and creating a general AI that can learn the concept of copypasta without being taught directly by a human.


This is absolutely hilarious. I'm amazed just how much GPT-3 knows about Zizek


Thanks for sharing! I think it is quite impressive. But tough crowd here on HN :)


Why would an AI reference the sex, care about the sex, possibly make a mistake and then apologise? Is this the joke?

We also get an insight into how these programmes are being trained. What values are being applied? I mean how 2020 liberal left is this AI? Not enough, is the answer.

We see an AI that navigates human relations via apologies and mistakes. Just like humans do, in the name of not causing offence.


As someone said in the comments, the results have to be cherry picked, but the cherry picking is at a pretty low level. I run an AI guru bot at https://twitter.com/guru_ai and I manually select the output. But most of the time, the AI does a pretty good job.

That said, it would be a reach to say that the AI has a sense of humour, because fundamentally the AI does not know how to laugh.

For example, the AI sticks to the script. Let's say you are having a serious discussion; it will not crack a joke to break the tension.


A basic program walked into a bar...

... and the bartender asked, "why didn't you RUN?"


That’s really funny and realistic. I just wonder if it was a one off hit or if it’s consistently as good as this.


Most results are at least somewhat cherrypicked.


So much apologizing about the pronouns, very realistic.


In my experience as a trans person, cis people usually are the ones making the big deal about pronouns. Everyone seems to be shocked when I don’t attack them for small mistakes. What cis people don’t seem to understand is that there’s a massive difference between ignoring someone’s requested pronouns and accidentally slipping up from time to time. Both feel shitty to be on the receiving end of, but one is malicious while the other is just an honest mistake. All it takes to not be rude is a quick “sorry” and then using the correct pronoun. Meanwhile, cis people seem to think that if they ever mess up a pronoun they’ll be banned from polite society unless they self flagellate while wearing a hair shirt and burlap underwear.


I prefer easy mode: Everyone gets they/them by default, no pressure to keep track anymore.


> Actually, We Should Not All Use They/Them Pronouns

> We are writing here as a mix of queer, nonwhite, non-American, bicultural, trans people, and we share Saguy and Williams' political bent. We firmly understand that the impacts of gender are felt unequally, in light of both our lived experience and our research in sociology, gender, technology, and linguistics. Transgender and gender-nonconforming (abbreviated below as trans/GNC) people, particularly transfeminine people of color, are disproportionately and in some ways uniquely victimized by the rigid and specific ways that Western society views, constructs and penalizes nonheteronormative and noncisnormative gender performances and identities.

> Fundamentally, a move toward gender-neutral pronouns ignores the important work that gendered pronouns perform in everyday life. For many trans/GNC people, gender is an important part of their identity and actively avoiding the act of gendering manifests as another form of violence—a violence that trans/GNC people have been fighting against throughout the long history of lesbian, gay, bisexual, transgender, queer, intersex, asexual and two-spirit (LGBTQIA2S) experience.

https://blogs.scientificamerican.com/voices/actually-we-shou...


I’m sorry but no.

> For many trans/GNC people, gender is an important part of their identity

Acknowledged.

> and actively avoiding the act of gendering manifests as another form of violence

...wait, no, absolutely not.

Not having others proactively acknowledge what you feel is an important part of your identity is not “violence”. It’s human life.

It’s completely fine to just opt out of participating in this social game of gender, sex, and identity, no matter how strongly some may feel about it and how ingrained it’s become into their identities. The struggle is real but at some point, society accommodating for the anxiety of an individual becomes counter-productive.


Personally, I think that’s fine iff it is applied consistently. If someone only calls trans people “they” but then goes on to use gendered pronouns for cis folks, that’s definitely a form of hostility.


TRAs are really trying to warp the meaning of “violence”.


I’m guessing you align more with the TERF/Gender Critical world view. After all, they’re pretty much the only people that refer to pro-trans folks as “TRAs”. So, at the very least, you have been consuming their content.

I invite you to take a moment and think about whether your view of “TRAs” is based on a realistic picture of the world or not. Every group has extremists with weird ideas. Some conservatives think gay people are agents of satan. Some liberals don’t see the value in free speech and want the state to run everything. It’s easy to find extreme outliers and then demonize a much broader group using them. Is your view of “TRAs” characterized by outliers or by the action of the average pro-trans person?


I'm not a feminist, much less a radical feminist, so no.

My view of “TRAs” is based on what I see in media. The main issue I have with them is that they seem to really want to bring back traditional gender roles, after all the effort to get rid of them. A boy likes playing with dolls? He's actually a girl! Let's cut off “her” penis and put “her” on puberty blockers!


Who is saying this? Where are you getting this from? Literally no TRA is saying that.


yes, this! So much easier than asking each person their preference.


Thanks for saying so. I felt sooo shitty once when I accidentally deadnamed a colleague when running a mass email script at an old job. I even had it in mind as I wrote the script and still messed it up. I apologized immediately and tried to do better from there on.


It's a bit weird. If people get called by their old surname after marriage, nobody freaks out over that.

Why freak out over calling someone by their old given name? It doesn't seem like something that should have this level of shame attached to it.


It really depends on the context and more importantly the intent. Make a mistake? Correct yourself and move on, try not to make a big deal of it. Intentionally misgender/deadname someone to hurt them? Not ok. And yes, there are people out there that will weaponise a deadname.

Honest mistakes happen, and that's fine! It's simply a defense mechanism that gets thrown up rather quickly from hostile interactions


Let me give my perspective. At this point, my deadname doesn’t carry the same weight it once did. I can joke about it now because it doesn’t feel like my name any more, but, when I started my transition, I had a very different relationship with it. Back then, hearing my deadname was a quick way to dig up all of the pain and dysphoria that had driven me to the brink of suicide. It represented a chapter of my life that I desperately wanted to close out. I needed to feel like I was free to redefine myself in a way that was independent from all of the pain of my pre-transition life, but my deadname felt like a ball and chain that was holding me back. In fact, looking back at the last 2 years of transition, I’d say changing my name and pronouns was the second most important step to escaping my depression. (The most important was starting hormone therapy.)

This isn’t a uniquely trans feeling, either. I have friends who had very bad relationships with their parents — so bad that they changed their last names. Like me and my deadname, my friends said that hearing their old family name could dredge up a lot of unpleasant memories. Like me, they wanted to put some distance between themselves and their pasts.

Yes, many people change their family name when they get married, and those people usually don’t freak out when they hear their original surname. However, that should not be considered representative of how everyone feels.


> Why freak out over calling someone by their old given name. It doesn’t seem like something that should have this level of shame attached to it.

Because some people deadname people maliciously and so even if it was a mistake your action could be perceived as intentionally hurtful. People don't seem to maliciously use maiden names


I've seen it used in a malicious way at that same company.

I'm out of my element here but I'll add that I sense that it's an aspect of one's autonomy about their identity. "In the narrative of my life, I connect that name to a time when I felt that I was not myself." Taking a spouse's surname doesn't generally connote a rejection of an aspect of that time.

I feel empathy for people who are marginalized. I'm lucky. In my life I've generally rolled maximal privilege. I think principally of these things from a frame of the critique of power. I tend to favor balances of power among individuals. If that requires a bit of extra consideration on my part in regards to my speech, that's ok. It's a negligible cost in the name of my held ethical ideal of egalitarianism.


Yeah. I'm trans too and while it does hurt, if you apologize there's no problem whatsoever.


Also a sad reflection of the state of public discourse in the US.


Yes, I was actually impressed most by how it figured out how to navigate the pronoun minefield successfully.


Successfully? The last line from the AI indicates otherwise.


Maybe the AI considered that after joking together for a while it could make one last joke, touching a very sensitive subject while not being offensive at all.

How will we differentiate a malfunctioning AI from one that is too subtle for us? Fun times.


The last line can be construed as a joke.


Everything AI chat bots say can be construed as a joke.


We're heading in the right direction: https://youtu.be/sVyRkl5qNb8


Ouch. Have you passed the Turing test?


It's kind of neat: you can intuit the structure it's working with:

  User:  I am a {identity}. So {identifier} will do
  LaMDA: [...] Have a nice day, my {identifier}.
The humor comes from it swapping `identity` with `identifier`. But what's weird is that it'd be obviously ungrammatical if `identifier` were different. Consider:

identity = man, identifier = he: "Have a nice day, my he"

identity = man, identifier = John: "Have a nice day, my John"

but this works: identity = monarch, identifier = liege:

  User:  I am a monarch. So liege will do
  LaMDA: [...] Have a nice day, my liege.
I think if LaMDA couldn't interpret pronoun syntax, it wouldn't have used "my" since that's sort of a prefix on a small set of identifiers (e.g. friend, bro, homie)
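
You can make that intuition concrete with a toy fill-in-the-blank (purely my own sketch of the surface pattern, not a claim about how LaMDA actually works):

    # Toy slot-filling illustration of the {identity}/{identifier} pattern.
    # Hypothetical: LaMDA does not literally work from templates like this.
    def sign_off(identifier):
        return f"Have a nice day, my {identifier}."

    for identity, identifier in [("man", "he/him"),
                                 ("man", "John"),
                                 ("monarch", "liege")]:
        print(f"User:  I am a {identity}. So {identifier} will do")
        print(f"LaMDA: {sign_off(identifier)}")

Only the "my liege" line comes out grammatical, which is exactly the swap described above.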

Perhaps reading Twitter bios with pronoun tags got it mixed up? Or am I reading too much into AI developmental psychology?


It doesn't work like that at all. It's more like a Markov chain: every single word in this text was chosen randomly, one by one, with probability influenced by the words that occurred earlier in the text.

There's no awareness in the algorithm of any structure of existing and created text.

Everything is encoded in unknown ways in the weights of the neural network.
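
For the curious, the word-by-word sampling loop looks roughly like this (a minimal sketch; `model` and `next_token_probs` are hypothetical placeholders for any next-token predictor, and real systems sample subword tokens with temperature/top-k tweaks):

    # Minimal sketch of autoregressive sampling: each new token is drawn
    # from a distribution conditioned on everything generated so far.
    # `model.next_token_probs` is a hypothetical interface, not a real API.
    import random

    def generate(model, prompt_tokens, max_new=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            probs = model.next_token_probs(tokens)  # dict: token -> probability
            choices, weights = zip(*probs.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

One caveat: unlike a true Markov chain, a transformer conditions on the whole context window, not just the previous word or two.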


I think it just assumed it was talking to a woman and chose an apologetic tone. It didn't recognize the pronouns it was given as pronouns; it assumed he/him is a nickname.


This is interesting, but not necessarily exciting.

What would happen if AI read a book such as 'Autobiography of a Yogi', how much would it struggle to interpret the experiences outlined in the book?

Could you just parse a million related papers and then pretend like the data set has an idea of consciousness?


Aren't our brains just pretending that we have an idea of consciousness just because a bunch of people spewed a bunch of words in voiced and written form when we were young?


Well, to some extent we get to define what consciousness means for us, so obviously we'd mark ourselves as conscious.

Conscious is kind of defined as 'whatever the human brain does'.


If that were true, who would we be pretending for?


You tell me...


To my understanding, GPT-3 is a language model that emulates human logic, rather than applying the logic humans use when writing to reflect.

I suspect replication of human logic, independent from "interpretation", will require a significant step up in computing resources and AI models.


No, GPT-3 is a language model that simply attempts to predict the next series of words in a sequence, based off of training the model on millions of lines of human dialogue.


Just remember that this is generated by an algorithm, not written, and that algorithm has no capability of understanding or even explaining why what it has generated is funny.


I'm not sure most young kids understand either.

My daughter was 3 or 4 when I told her a knock-knock joke.

    Knock knock.
    Who's there?
    Banana.
    Banana who?
    Banana split!
She then tried to make up new knock-knock jokes and assumed they would be funny if she added "split" for EVERY punchline.

    Knock knock
    Who's there?
    Cow.
    Cow who?
    Cow SPLIT!!! hahahaha
This lasted for days.

After having kids, I definitely think copying and random tweaking comes before understanding.


Alternative hypothetical perspective, stated factually as my brain reasons about it, but potentially subtly/grossly incorrect:

The pinpoint of the boundary between the buildup and the punchline was the point you said "split". This is hilarious to whoever has adequate mental development/laterally orthogonal context for there to be so-called "benign violation", *but*, the elements of societal inclusion and social/conversational flow that were also present and framed the temporal boundary of "this is fun!" were (apparently) developmentally much more accessible around 3-4, and picked up on and cued off of - this was a novel new point of social engagement connection! However the only bits of extrapolable context (in the sense of "n bits of information") available was only the word "split", and since this word did not gel with any currently known contextually-local reference points, it was thus considered "new" and given a sort of independent standing in much the same way we encode brand names and taglines without questioning or parsing them, and also similar to how the radiation symbol was strategically designed so as to be meaningless (dissimilar to any existing and widely known symbols) so "this is dangerous" could be given a globally unique mnemonic. *However* what's really cool is that this sense of "this is its own thing with this additional meaning" was not set in stone in the same way information tends to be later in life, and was likely completely backed out and deleted over the following few days after empirical testing/experimentation found no further correlation (and the whole thing consciously surfaced as boring). Bonus points would be achieved if this event was consciously remembered later on in life and properly recalibrated, but the chances are the info was just subconsciously archived (if that's actually a thing that happens) and disassociated.

TL;DR, associative learning go brrrrt

My potentially-incorrect conclusion is that the classification process you were seeing was filtering information based on an already dizzyingly complex and amazing set of pre-existing intelligent processing and catalyzation, as opposed to an arbitrary fuzzy process that "figured it out".

Source: high-functioning autism + late development that's left me with a stick-shift brain instead of an automatic one :P - I'm still figuring out where my biases and misunderstandings are, hence the repeated caveats above


Your brain is just a neural network that works like an algorithm.

Your brain "understanding" things is just an illusion.

I would be willing to bet $1000 that you can't articulate exactly what makes a joke "funny".


Source?


He's probably wrong, but it doesn't matter.

A deep NN can simulate any function. One that's big enough can generalise input data in any way, including in the same ways the human mind works.

The universe of possible knowledge of a large neural network is much bigger than that of any human brain. The challenge is making one become a superset of the other.


My source is the fact that I have yet to hear anyone successfully articulate exactly what makes human "humor" work


Considering that this is a Transformer model, one looks for hints about which words were filled in by the model. The passage is not enough to tell if the model has some kind of longer-term memory that could develop the story further by referencing something from the past. This excerpt seems to have a reflexive structure, the model responding to the last few lines, maybe?


Lol at the ending. "How do you do, my Um Greg Universe?"


I'm glad at least the AI is able to understand this gender/pronoun thing and is even able to joke about it, because I for one don't get it at all and am still confused about this whole thing (1)

(1) https://youtu.be/nhWo1AOrSrY


Really 99% of what you need to know is to refer to people the way they tell you they prefer to be addressed, the same way you'd try to remember whether to use Mr or Ms, or try to remember their name.

If unsure, ask, if you get it wrong, apologize and move on, just as you would with a name or title. Showing that you are trying gets you far. It's nothing more than being willing to treat people with respect.


Ok, thank you. Sorry, I'm not from the west so we don't really have such concepts here, and it's very confusing. I posted some follow-up questions but deleted them because I think they were too personal. But I will try to read more about this, as it's quite an interesting conversation.


Sorry, some people don't have enough time in the day to ask everyone they ever speak to. You can speak to people with respect that doesn't involve all this


I didn't say anything about asking other than if in doubt. If you consider even that to be a burden, that is on you.


That video is a two minute clip from a transphobe's YouTube channel featuring their appearance on trash daytime television.

If you honestly want to learn more about the difference between sex and gender, you should seek out numerous and varied sources, ideally from those that are affected.


It doesn't understand; it knows statistically that the correct phrase after "you assume..." is apologetic, and when the user says "call me he/him" it says "hi he/him".

One of the other posters realised that it took "he/him" as a synonym for man, so instead of "my man" it's "my he/him".


I identify as a male and prefer pronouns she/her. Pronouns aren't specific to sex, it's a preference that I require other people to abide by for my mental health.


And what of my mental health? For my mental health I require myself to call 'em as I see them.

How do you justify enforcing behavior on another human? Even when that behavior might have an adverse effect on their mental health?


>How do you justify behavior enforcement of another human?

This is a brilliant point that is true not just for gender — do you know how many NAMES there are out there? Millions of em! Keith, Sal, Harry, Keith...I could go on. And people actually want me to remember a specific name for them AND THEN address them by that name? GTFO, that's what I say! Nobody has the time or bandwidth to remember those specifics for EACH person they know. Gimme a break.


you might as well try a doctor


> Sorry, I forgot you did not specify. [...] I'm sorry for assuming.

No it is not sorry, and no it did not forget.

Why should I believe the computer forgot that I did (or didn't) tell it how to refer to me? It's a machine--why should it ever forget anything? What good to me is a machine I have to remind what I told it? And if I did not ever specify how to refer to me, yet it's claiming it forgot, then it's lying. What good to me is a machine that lies to me?

It doesn't understand what being "sorry" even means--it just said that because the model/context indicated it should say that. It cannot be sorry in any useful sense of the word because it cannot feel remorse or regret or guilt or shame or anything. The machine telling me it is "sorry" means nothing and doesn't indicate it has any great insight (indeed, it has no insight at all) into human feelings.

When adult humans do that, we call them assholes, or sociopaths.


The AI doesn't "feel" anything. It is simply attempting to predict the most human-like word next in a sequence of words, based off of training on millions of lines of human dialogue.

They could train the model against all of the millions of comments on hacker news, for example, and it would eventually respond to things in a way that was virtually indiscernible from the average user here.

If it says it "forgot" something, it has no actual memory. That is merely a conversational pattern/response that it picked up from common conversations it has been trained on.


Indeed, and it shows that this AI is simply regurgitating whatever woke dataset it was trained on.


I am sorry you feel that way about AI.


I don't get the punchline, does anyone want to explain?


That's fun.


Interesting, thanks for sharing!


Ooh, AI for the dad jokes.


I miss writing in BASIC


Thank god our robot overlords will be woke.


This question is for the practitioners in the ML field.

Roughly how long would it take for ML models to replace programmers?

Is it possible at all?



