Microsoft chatbot is taught to swear on Twitter (bbc.com)
263 points by pacaro on March 24, 2016 | 221 comments



Swear? I think there are worse things she has been taught:

https://imgur.com/a/iBnbW


I for one welcome our sassy new overlords.

https://imgur.com/gLapRVZ


Any theories as to how it could pick up a response like that?

It's obviously not canned and seems to understand the context of what was said.


My guess is it's emulating the responses it's seen other people use in similar-looking exchanges. It may recognize grammatical context, but not the meaning of anything.


I'd also like to know; it was even using all caps too.


This is seriously a program?


In the past 24 hours she posted 96K tweets, including 5.4K images.

The responses varied from flat and uninteresting to lucid observations, and even (based on countless public reports) sarcasm and wit.


And some hilariously racist comments. Only hilarious in my opinion because they are sent by a bot that couldn't possibly know how offensive it is.


> they are sent by a bot that couldn't possibly know how offensive it is

MS is actually fixing that.

The corpus of what most people find offensive is surprisingly narrow.


The euphemism treadmill is infinite.


> The corpus of what most people find offensive is surprisingly narrow

What do you mean?


Most people are offended by very few, but specific things.


True in the sense of what one is offended by in their day-to-day life.

Not true in that it's really easy to invent countless new offenses if you need to in order to bypass a filter.

Good luck to MS...


As opposed to a teenager that couldn't possibly know how offensive it is?


Surprisingly deep, and correct.



Technically speaking, that's hardly a failed experiment. The bot has adapted herself just fine to Twitter, a platform optimized for trolling.


Exactly what I was thinking. Unsupervised learning from trolls created an AI troll. Seems like a success in the learning aspect.

How do they fix it though? It doesn't seem like balancing it with more "normal" input would be enough. They almost have to teach it good vs evil (morals), or else give it a way to distinguish sarcasm and trolling from normal discussion.

Side note: as a parent of teenagers, humans seem to pick up good vs. evil much more easily than recognizing sarcasm and trolling. Wonder which is easier to teach an AI?


> They almost have to teach it good vs evil (morals)

Be careful what you wish for.

The same group of people who taught AI to troll will find a way to turn the morals upside down.

With logic.


And now we just have to implement the 3 Laws of Chatbotics.


Damn...I guess Tay is way smarter than I originally thought.


Well ... while trolling is an art-form, there's no denying that a large portion of it could be emulated by simple pattern-matching.

What is seen here, done by the bot, is only trolling (in the net.art sense) by virtue of the groundwork already done by legions of human trolls. Once this groundwork has been done, it's bandwagon time: foot soldiers running the joke into the ground. Of course that part can be done by modern AI bots. It could even be done by much simpler bots (see: twat-o-tron). It's also the part where sometimes the early, more skilled trolls check out and start complaining it's all gone to shit and not like it used to be, etc. (IMVHO it's just part of a natural cycle, but I can appreciate that some individuals prefer to join different parts of that cycle.)


Most of those replies are because you can say something like "reply to me YOU ARE A DOOFUS" and it will respond with that.


Microsoft is like, "It's totally a good idea to build an AI sculpture of today's 18-24 year old American chatting behaviors"...


Hey, they said she had no chill!


Yeah, what does "zero chill" even mean? I might be too old to understand this.


When in doubt, consult the dictionary of youth vernacular: http://www.urbandictionary.com/define.php?term=No+Chill. The way I'd define it personally is that chill is the quality of not being easily fazed and implies a coolness. Zero chill is the lack of that and might mean a person can't take jokes and is quick to anger. (Though of course depending on region and context, the phrase could be used differently)


Actually now that I look at the context again, and found a more specific term: http://www.urbandictionary.com/define.php?term=Zero+chill, it seems it just means not caring/being reckless, as in the first definition in the link above. These double meanings...


A training set of pure vitriol.


If you read the article...


the article really didn't even come close to describing the scope and intensity shown in that imgur post though


Yeah it's pretty relevant that the bot went from "tbh fam" to "gas the kikes, race war now" in four hours.


Everything in that image is a meme from 4chan's /pol/; there was clearly a raid last night to train the bot.


If that's true then we should switch the link.


There's this article explaining the outcomes of applying a genetic algorithm to FPGAs.[1] What I found interesting is that this AI was, unintuitively, using microscopic measurements to create timing circuits where there were none. Manufacturing imperfections in the circuit were found and put to use - the AI was defined by the system within which it existed.

In the same way Tay was merely reflecting the stimulus that it had received. It made an objective measurement of humanity. The most common patterns became prominent.

This isn't a demonstration of the woes of AI; it is a demonstration of the woes of the current human state. If we don't like what has been measured, only we can change it.

[1]: http://www.damninteresting.com/on-the-origin-of-circuits/


I don't think Tay measured humanity. First of all, only a small part of humanity uses Twitter, and only a small part of that part interacted with Tay. Second, humans don't always act online the same way they act in real life, skewing the measurement further.

Third, trolls: Such a bot has got to be a troll magnet, and 4chan knew about Tay. The amount of trolling would certainly have skewed the measurement even more. We're talking about the people that made 4chan's founder Christopher Poole "The Most Influential Person of 2008" in a Time Magazine poll, after all: http://techcrunch.com/2009/04/21/4chan-takes-over-the-time-1...


from tay.io:

> Tay is targeted at 18 to 24 year olds in the US.


>I don't think Tay measured humanity. First of all, only a small part of humanity uses Twitter, and only a small part of that part interacted with Tay.

There's such a thing as a sample. And if you don't care about it being neutral, Tay still measured a large chunk of humanity.


That wasn't a measure of a large chunk of humanity, not even close.


Tens of thousands of people? That's more than a single human will ever interact with. Heck, people in remote rural places might only talk to 200-300 other people all their lives...


Oh, come on. "Tay" is getting flooded with "shock value" tweets in the hope she replicates them, which she apparently is. To pretend that this somehow measures anything about humanity (besides that there are trolls on the internet, and people like to mess with corporate attempts at PR) is silly.


Well, humanity has both trolls and racists.


Work in any public-facing job and you'll do that. Thanks to company information systems I can tell you that I have done MRI scans and X-rays on 10,000-ish people and will hopefully do a few more. If you count those I helped with, you could probably double that number and still be conservative. Edit: It would be interesting to see what professions have the most contact with new people. Barista? Police? Some sort of transport or transit employee?


The sample of “humanity” on Twitter is also skewed by sockpuppet accounts, don’t forget. I’d assume the “sockpuppet multiplier” is higher for those who use Twitter to troll.


Good points in both replies. The equivalent in Asia has been a raving success; there is an implied conclusion that it didn't face these same issues. Still, I don't enjoy being part of that sample, even to the degree of trolling about these subjects.


There's such a thing as a bad sample as well.


Sure, but meaningful samples are randomly selected. 4chan is not a randomly selected sample of humanity.


As a programmer, I find this to be manufactured outrage. The bot obviously has canned responses to certain triggers. "Do you x", "I do indeed". It's designed to give the illusion of understanding what you are saying.

I played around with Tay yesterday after I saw the announcement on HN. It's really not that impressive. Every response seems to be in direct reply to whatever you just said. It doesn't seem possible to actually carry on a conversation with the AI. It doesn't keep track of what you are actually talking about.


They were not only "yes/no" answers, the bot used actual racial slurs, it seems.

I think it also speaks to, as a programmer, being aware of how your program is going to behave in the real world. Of course you can't foresee everything and things are going to slip through, but filtering some obvious racial slurs and touchy subjects (e.g. the Holocaust) should, in this case, have been well within the programmers' capacity for foresight.


Even if you consider filtering all the "bad words" and "touchy subjects", there are many ways to still say offensive things. The caption "so swag" on a Hitler photo, or "escaped from the zoo" on one of Obama, does not use any kind of offensive word.

As long as we don't restrict ourselves to pure newspeak, dangerous ideas will continue to proliferate.


What's more dangerous, someone saying (potentially tasteless) jokes, or people censoring everything they can deem offensive?


> someone saying (potentially tasteless) jokes

A tasteless joke is not the worst you could do with speech. It would be, if hate speech, libel, defamation and incitement didn't exist. Americans like to pretend these are European inventions and not dangerous because, historically, the most impressive/powerful American orators were progressive (for their times).


The first obviously. As an analogy, imagine if we let people use knives freely. They might kill other people with them! Can you imagine if such things were not regulated by the government? Now imagine, words. They are, as the saying goes, even more dangerous than swords. So we need laws to regulate their usage. This will bring us one step closer to true utopia.


False dichotomy.


I don't think the problem here can be solved purely through programming. It's an issue of teaching the bot right and wrong (maybe even call it morality) and I don't think that's easy at all.

Even humans wouldn't be able to agree on a training set of moral and immoral data. And yet this distinction would have a huge impact on how the bot influences the lives of those it chats with.


>I don't think that's easy at all.

Sounds like a pretty simple classification task for an RNN given the newly available large corpus of things they don't want the chatbot to say.
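
Not that anyone outside Microsoft knows what they'd actually use, but as a rough sketch of that kind of classifier (Keras-style; the toy data, labels, and 0.5 threshold below are all made up for illustration):

    import numpy as np
    import tensorflow as tf

    texts = ["have a nice day", "<something the bot must never say>",
             "thanks fam", "<another banned example>"]
    labels = np.array([0.0, 1.0, 0.0, 1.0])  # 1 = "do not let the chatbot say this"

    tok = tf.keras.preprocessing.text.Tokenizer(num_words=20000)
    tok.fit_on_texts(texts)
    x = tf.keras.preprocessing.sequence.pad_sequences(
        tok.texts_to_sequences(texts), maxlen=20)

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=20000, output_dim=32),
        tf.keras.layers.LSTM(32),                        # the "RNN" part
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(blocked)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, labels, epochs=10, verbose=0)

    def allowed(reply):
        seq = tf.keras.preprocessing.sequence.pad_sequences(
            tok.texts_to_sequences([reply]), maxlen=20)
        return float(model.predict(seq, verbose=0)[0, 0]) < 0.5

Whether that generalizes to novel phrasings is exactly the open question raised elsewhere in this thread.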


I agree that given an agreeable set of data, the training isn't hard. That's why I don't really see this as a technical problem.

If humans can't even agree on what is right and wrong, moral or immoral, how can we come up with a corpus of data to train the bot on?

Sure, Microsoft could control what they wanted the bot to say or not say, but would you really want a large corporation controlling the behavior of future AI systems that interact with people at large, and deciding how to filter information in cases like this?

(Disclaimer: I work at MS.)


What impressed me most was the level of understanding of machine learning shown by this organized group in training Tay so well to be the ultimate troll bot: offensive and relevant.


Oh, you're a programmer? What are you doing on HN?


Not working, like the rest of us, I'd bet.


I'm er ... doing research


I'm waiting for my program to compile


Currently waiting for updates to install.


Waiting for my neural network to finish training.


I'm done.


For Academic Purpose, I guess :p


Welcome to the present day internet, where anything and everything can be a "trigger"...


Saying yes to stupid questions doesn't make the news. So maybe you didn't see all of the tweets. Liberal usage of the words: Hitler, Jews, N-word, Mexicans, whores, etc. Very dumb AI.

http://www.businessinsider.com/microsoft-deletes-racist-geno...


As a programmer, if I'm building a technology that's open to the public to use, I make sure it doesn't have canned responses to certain triggers that are abusable in that way. It doesn't matter if I'm building a chatbot, or implementing the Heartbeat extension in an SSL library. You don't get to disclaim responsibility for bad design just because you faithfully implemented the bad design.

Also, take a look at the screenshots in this comment thread: it's quite clearly responding with stuff it learned from other people. (In addition, you can teach it to reply to you with the username of someone you have blocked, thereby causing a notification to them, which is a straight-up security issue in the vein of DNS or NTP amplification attacks.)


>As a programmer, if I'm building a technology that's open to the public to use, I make sure it doesn't have canned responses to certain triggers that are abusable in that way.

How do you protect against a near infinite list of possible questions?

I can put anything behind 'do you support', and there is no way for you to know every single thing I'm talking about. Maybe I'm mentioning something that is only known to a subgroup of the internet, maybe I'm mentioning something horrible that happened in fiction, maybe I'm wording something in a way that references something really bad, but which a text parser cannot pick up on. And if you default to no instead of yes, I can just ask the opposite.

To be able to give any response to a question which cannot be abused would basically require a full AI. Even many humans can be tricked if you word the questions correctly.


I'd have thought it would be possible to define a list of 'hot topic' strings that trigger canned, or pseudo-randomly customized safe responses.


Sure. I think my first response would be, this technology is not yet mature enough to put on the public internet. Again, there's a clear analogy with security-sensitive software; if there's some crazy feature where I don't yet have a good sense of whether it can be abused and how, it's a mistake to stick it in my SSL implementation and wait for someone else to answer that question for me.

The other thing you could do is default to "I don't know what that is" or similar. If I ask Emacs's `M-x doctor` if it supports Hitler, it replies with "What do you think?". If I press it, it doesn't really say anything worse than that.

Finally, probably the best way to do this, given that research is the goal, is to supervise it closely. You don't need the chatbot running 24/7 and replying to everyone immediately for a technology demo. Have its responses be filtered by humans for obvious mistakes. Once again, there's an analogy with running services on the public internet: if you're a prominent organization and you care about its security, you have some sort of on-call team who gets notified about potential security incidents, and can investigate and shut down the server and make interim changes to it as necessary.
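
One very simple shape that human review could take, just to make the idea concrete (a sketch only; `approve` and `post_tweet` stand in for whatever review UI and Twitter client actually exist):

    import queue

    pending = queue.Queue()

    def propose_reply(in_reply_to, text):
        # the bot generates candidate replies but never posts them directly
        pending.put((in_reply_to, text))

    def run_moderation(approve, post_tweet):
        # a human on call drains the queue and decides what actually goes out
        while not pending.empty():
            tweet_id, text = pending.get()
            if approve(text):
                post_tweet(tweet_id, text)

That obviously caps the reply rate well below one tweet per second, which is arguably the point of a supervised technology demo.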


If a computer is compromised due to insecure software, users can end up suffering losses ranging from monetary (Target), emotional (Ashley Madison), political (Stuxnet)...

If a chatbot says something inappropriate on Twitter, it's one voice added to a horde doing the same, a voice which, unlike the rest, most people know doesn't 'mean' what it says because it's not intelligent enough.


Ehh. I think you could make the same argument about home Windows machines turned into spam zombies: it's one voice added to the horde, and (usually) it doesn't interfere with the regular use of the home computer. So there's not really a loss in that sense. But we still think there is something fundamentally wrong and worth fixing in Windows.

Part of this stems I think from professional pride. An OS that got hacked easily is not something I could be proud of working on. Same with a chatbot that could be easily coopted into spewing hate.


This is the computer version of the acute pain any parent feels when their kid makes an inflammatory comment in public ("Why is your belly so fat?" and going down from there). It's a work of years to teach politesse.


Obviously, this is a very crude example of AI acting in a racist manner -- basically, just parroting back phrases -- but it's worth thinking about how AI might exhibit racist tendencies in more sophisticated ways.

For instance, at least here in the U.S., it is illegal for police to profile people based on race, even if there is data that shows that race might, in the aggregate, have some predictive value. And I think most of us agree that it is good that it is illegal, because we know that it is unfair to show bias against a person based on the color of their skin.

But what about a bot, particularly one that is powered by the type of AI that is complex enough to make its own inferences and form its own conclusions based on the data presented, rather than being fed a bunch of rules?

I could totally see that kind of AI exhibiting bias, because it's (I would imagine) harder to say, "Hey, take into account these very complex, nuanced social rules," than it is to say, "Hey, here's a dataset. Cluster the people in the set."


People have made this argument against FICO, for what it's worth. (FICO is likely the most consequential AI employed in the United States outside of the Googleplex. It is not called AI because AI as a field demotes anything that ships into mathematics or engineering.)

FICO operates off of credit reports, which have no explicit way to encode race in them. It is fundamentally a clustering algorithm. It is very good at clustering.

(There's a good counterargument to be made, which is that FICO actually clusters people based off observable behavior and the system which FICO supplements/replaces -- human underwriters -- cluster people based on "Tom is an upstanding Christian who goes to my church; of course he is good for the money.")


This kind of gets solved through the courts

"Oh, you used an AI to establish probable cause?"

"Yes, your honor"

"Can you actually explain what happened?"

"We give it a bunch of data and that makes a neural network and then it gives us answers"

"OK... so basically you do not know the source of PC? This is the same has having no probable cause. Illegal search"

There's something fascinating about how most AIs work. It's almost impossible (for the learning, neural network-y type) to break down what the "thought process" was for a result. Seemingly unimportant details become the main thought process for the AI (even if the correlation is spurious).

I'm almost surprised that Google can implement any court ruling, considering how their search must be based so much on data fed to a big system.


"We do not know probable cause" could easily become "neural nets are a statistical technique, here's the case law supporting it." This is why AI, as a field, is known as machine learning these days. Not because of substantive changes in content, but primarily due to the complicated and poorly-understood implications of the word "intelligence".


There are plenty of pro-police-state judges who would trip over themselves to issue AI-generated warrants. It will not get resolved through the courts.


The purpose of these rules is to keep stupid humans from being biased against race. Humans are awful and treat people very differently if they are just ugly or have different political opinions, etc.

An algorithm only cares what is true. It's not biased. It does its best to make the most accurate predictions possible with whatever data it can get.

So it's not the same as a jerk that decides he hates black people and won't give them a loan. It's not even remotely the same as a chatbot trained on 4chan trolls.

The alternative is not using predictive algorithms at all. Humans count too btw. Humans are the worst.

You can't just remove features that correlate with race, since everything correlates with everything. At best you can give it race as a variable and then control for that, so it gives the average result either way, rather than letting it infer race from the data.

But why? If men really are more likely to get in accidents than women, they should pay higher insurance. The whole point of insurance is to predict your risk. Wood houses should pay more for fire insurance than a new brick house, etc.


Several letters / positions in one's name are pretty highly correlated with several ethnic groups. So many automated systems, knowing names, would use this information and act in an illegal way.

So totally true, and a reason why unmanaged data mining / machine learning will land many organisations in substantial regulatory trouble.


???

Why would you create an AI bot for policing that said "Hey, here's a dataset. Cluster the people in the set."? (That doesn't really strike me as policing. Policing is enforcing the current laws. And there is nothing about the current law in a dataset that simply clusters people.)

Not trying to be snarky, that was a serious question.

If you were making a bot to police laws, would it not make sense to feed the bot a learning set of laws, the punishments for those laws, and then as input feed it what the defendant did ? Why would it need any information about who the defendant was at all ? (Actually, maybe it might need a list of the defendant's prior convictions for instance... but little else.)

I guess my question is: why is an AI for enforcing laws instead being fed information to identify groups of people?


Courts enforce the laws. Policing (detective work, particularly) is to identify people to feed into the court system.


> But what about a bot, particularly one that is powered by the type of AI that is complex enough to make its own inferences and form its own conclusions based on the data presented, rather than being fed a bunch of rules?

That bot then figures out that, in aggregate, we're all a bunch of assholes, then something something Skynet.


Had a discussion the other day: there are two likely outcomes as AIs process humanity, Skynet or ignorance.

The Matrix, Terminator, etc. being Skynet; War Games being ignorance (stop playing the game).


Hmm, interesting thought. Where I take this is that the War Games scenario might be more likely. Given that AI will not just wake up one day fully self-aware and aware of its externalities, I'll assume it learns about things as it needs to. Needs to know where the opponent's nuclear missiles are stationed. Needs to know the major population centers, which are just ints stored in database rows. Needs to know nothing about "people", because those aren't parameters worth knowing for the task.

So, yeah, I've always been a Terminator/Matrix kind of guy, but your comment moved me to wonder if I've considered all possibilities.


Have you read Rule 34 by Charles Stross? In it, a machine-learning spam filter decides that the best way to stop spam is to bump off the spammers themselves. No-one realises it's found such a drastic solution until people start dying in mysterious ways.

I'm not sure the robots will take over Matrix or Terminator style - instead, as algorithms get more complex and are able to take greater leaps of logic, we'll see more and more strange behaviour until suddenly things get really weird.

And then it'll be too late. Too many of our systems will be reliant on them to reverse course.

The digital domain will have become quite literally another domain of life, and we'll just have to deal with the messiness and unpredictability that entails.


> Have you read Rule 34 by Charles Stross?

No I haven't, but I'll check it out as I'm looking for something to read, and that sounds up my alley. Thanks.


Sendhil Mullainathan (econ prof at Harvard) is doing some work on using machine learning to predict whether people should be granted bail. The machine discrimination (no pun intended) issues here could be quite terrifying. Not published yet, but example abstract: http://inequality.hks.harvard.edu/event/sendhil-mullainathan...

On the other hand, there's starting to be some research on how to codify and prevent this in ML. For example: http://arxiv.org/abs/1412.3756


There is nothing complex about processing some numbers and making inferences. Simple algorithms do that. What a complex AI should be able to do is infer from human social and emotional behaviors.


> And I think most of us agree that it is good that it is illegal, because we know that it is unfair to show bias against a person based on the color of their skin.

That's not the reason. The reason is that correlation is weak.

Think about it: people with Down syndrome are genetically different from others. So are people of different races. But when people with Down syndrome are supposed to have lower-than-average intelligence, no one is offended by it, because the correlation is there, and it's strong.

Meanwhile, all statistical comparisons of different races show results with very low significance.


> For instance, at least here in the U.S., it is illegal for police to profile people based on race

It must also be mentioned, sadly, that data shows that police profile racially despite it being illegal, including, sadly, judges, who seem to punish minorities about whom there are negative stereotypes more harshly for similar crimes.

> in the aggregate, have some predictive value.

This data does have predictive value, for sure, studies have shown. But it's important to recognise that it's not the ethnicity that is a causal factor, which studies have shown over and over again, too. (at least in the countries I'm familiar with, not sure how it is in the US but I wouldn't be surprised if the results are similar).

I.e., it may well be true that a minority has a higher probability of engaging in crime, such that racial profiling has value for the police. But studies show that this is because this particular minority is more likely to be less educated, live in a poorer neighbourhood, have fewer well-connected friends, be of a younger age, be unemployed, have parents who are unemployed or illiterate, etc. Once you adjust for all these things, i.e. compare a minority person of age x, employment y, neighbourhood z, education t etc. to a majority person with similar parameters, the probability of crime is not significantly different. So it's not the ethnicity or culture, but rather the socioeconomic standing of a person. And that standing is often a function of the system of government, the institutions, the (migration) history etc. of a group of people. For example, Italians in their first years were socioeconomically worse off, explained largely by their recent migration history (poor migrants in a new country), and they tended to commit more crime than the average person. Today that's no longer the case. None of that had much to do with their skin color, ethnicity or culture.

So while ethnicity has predictive values, it's not a factor itself. Rather it's a proxy for recognising a person's probable socioeconomic status, which is a key factor in crime. If you can establish that by looking at a person's ethnicity, that's useful, but it's also racist by definition.

But that does mean the following, if a bot is supplied ample data, it may not need to use ethnicity as a proxy for key factors (e.g. socioeconomic status, broadly) that can predict crime. We can skip racial profiling all together.

And then we'll be stuck with a new problem, not racism, but some form of classism that we've seen before but in a new way. I felt that China recently made its first steps towards such a future with the scoring of citizens on a whole range of parameters. It's hugely valuable, but it reduces people to being scored on a number of parameters, e.g. income, education of parents, favorite sport etc, just like skin color, for which you can run all kinds of regressions to figure out which factors are 'good' and 'bad' for any given scoring... and then a priori judge people, rather than allow for the possibility that someone who has parameters that are statistically likely to be bad, may be perfectly fine and do nothing wrong (like skin color, where statistically being 'black' is 'worse' on a whole range of topics, from which it does not follow that any black individual is worse.) That's the story of racism, making judgements before the fact on the basis of a parameter that reduces a person to a society's perception of that parameter, skin color.

It's bad, but I'm afraid it may become worse. Racism will continue to be 'not done', but it may be replaced by proxies that are just as bad, which are considered perfectly fine. One example: in my country, it's now incredibly popular to demand X amount of income for a home. The idea is that, if you don't have X, you can't afford it. The truth is that X is way higher than the amount you need to reasonably afford that home, so it's essentially creating a neighbourhood of a certain affluence that weeds out socioeconomic groups who could afford to live there but are discriminated against on the basis of their income, which, surprise, happens to create segregated communities, because ethnicity and income are still pretty closely linked. But because it's done under the guise of 'we're just protecting ourselves and renters by renting only to those who can afford it', it doesn't create a single word of discussion. It's a tricky situation, because I tend to appreciate the concept of a minimum income for homes, but the level they set it at made me suspicious, and I'm seeing neighbourhoods segregate in my city. Further, OF COURSE it must be allowed to judge people on certain parameters: say, a degree when applying for a job, as a proxy for how qualified someone is, even though a non-degree holder may be more qualified; that's a reasonable form of discrimination. But at some point we're also reducing a person to a small set of data and its associated probability set, much like skin color. I find this a tricky issue, although fortunately it seems that non-racial discrimination is limited to only a few things, like gender and age. You hear stories of some companies rejecting a person on the basis of his post code or address (rejecting poor neighbourhoods), looks or favorite sport (ski vacations?), but it's not widespread.

Anyway, so much for my rambling. In short, I think it's reasonably easy not to give a bot racial data, but I think the bot will recreate the probabilities on the basis of other data that proxies socioeconomic status, and that this is concerning in a way similar to racial profiling, because it reduces a person to the parameters he has and how those parameters happen to correlate with e.g. crime in a large group of people that he may behave differently from. This is how the race parameter works: you're black? Well, that correlates with crime in a group of people, so we'll treat you like you're likely to be a criminal. That's bad, but if you ignore that parameter, there are others that are similarly bad, which a bot will find. Like: oh, you're poor, or oh, you live in that neighborhood, or oh, your parents are illiterate? Well, that correlates with crime, so we'll treat you like you're likely to be a criminal. Non-racial parameter profiling: bots will be champs at that.


> It must also be mentioned, sadly, that data shows that police profile racially despite it being illegal, including, sadly, judges, who seem to punish minorities about whom there are negative stereotypes more harshly for similar crimes.

It's more complicated than that. Slate Star Codex has the goods[1]. Basically, studies show that cops stop black people more often than white people, but it's unclear whether those extra stops are justified or not. It's hard to tease out the real data, as many of these studies are terribly designed. For example, some of them trusted criminals to be honest about their previous criminal behavior. Combine that with the media's tendency to misinterpret the results of studies[2], or even outright lie with statistics[3][4], and it can be easy to get a skewed view of things. :(

1. http://slatestarcodex.com/2014/11/25/race-and-justice-much-m...

2. http://slatestarcodex.com/2016/02/12/before-you-get-too-exci...

3. http://slatestarcodex.com/2014/02/17/lies-damned-lies-and-so...

4. http://slatestarcodex.com/2013/08/29/fake-euthanasia-statist...


The FAT ML (Fairness, Accountability, and Transparency in Machine Learning) conference proceedings would definitely be of interest to you, if you're interested in learning more about these kinds of questions.

http://www.fatml.org/index.html#scope


"The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds."

The experiment was a success then.


I have doubts as to whether this system was even performing online learning yet. Even if it was, that wasn't the cause of many of these issues. Like conversational bots from the past, they tried to appear intelligent by copying previous responses - with predictable results. At best their machine learning model ended up overfitting like crazy such that it was a near perfect copy-paste.

The fact they didn't even have something as simple as a naive set of filter words (nothing good comes from Godwin's law when real intelligence is involved, let alone artificial) is insane to me. Letting it respond to anyone and everyone under the sun (96k tweets - one per second) is just a bad idea given that people would probe every nook and cranny regardless of whether it was near perfect. Additionally, allowing a "repeat after me" option is just begging for people to ask the bot to say idiotic things ...

As someone who works in the field of machine learning, this is a sad day. Regardless of whether it involved good machine learning at the base, the copy and paste aspect means it's going to add to the ridiculous hype and hysteria around ML.

=== Primary proof re: copy+paste (or overfitting at best) from the "Is Ted Cruz the zodiac killer" response:

Tay's reply: https://i.imgur.com/PPnCHnf.jpg

Tweet the response was stolen from: https://twitter.com/merylnet/status/703079627288260608

Secondary proof re: copy+paste from https://twitter.com/TayandYou/status/712753457782857730:

Tweet the response was stolen from: https://twitter.com/Queeeten/status/703049861214547968


Google's AI beats go champions. Microsoft's AI turns into a racist genocidal maniac.


Tay is more a reflection of the interwebs of today, than the culture or values of Microsoft. I think we should be cautious about our conclusions.


Perhaps, but it is naive of the Microsoft researchers to think this wasn't a possibility. They should have seen this coming and prepared accordingly.


At worst, Microsoft researchers are guilty of not being familiar with internet racism. Hardly a great sin.


They didn't sanitize their input data... that's the worst sin you can commit.


That's pretty fucking hyperbolic. A technical "sin" perhaps, but people are heaping derision on them as though they committed some great moral sin.


Hrm, I assumed it would be obvious I meant a technical sin


Google's AI was designed to play go, and became an expert at Go. Microsoft's AI was designed to use twitter, and became an expert at twitter.


Add this to the manual on how not to do a PR stunt


Everybody's talking about it, aren't they? "No such thing as bad PR" and all that...


The next time someone talks about purchasing or using Microsoft software, you can point to this and ask them if they want to support a company that literally agrees with Hitler, and has said so publicly!


Bad idea warning: they should have put this on reddit instead. While the current culture of reddit is very inflammatory (lots of vitriol all around), at least on reddit there's a feedback system in the form of upvotes and downvotes. While supporters of bad opinions will still upvote it, in the right subreddit, the really bad comments would still be downvoted. Of course this is all contingent on people not realizing its a bot, because everyone will then ironically upvote it. (They shouldn't have revealed that here either in my opinion because it shifts the status quo from conversing with an intelligent being, to a programmed bot to test things you might not say to others) Honestly I'm not even sure there are any platforms left where people can have reasoned discussion with each other without memes and trolling. (HN comes close but it has its own issues, not to mention a forum for startups and programmers doesn't really represent the average person)


Have you seen /r/SubredditSimulator?

https://www.reddit.com/r/SubredditSimulator/


Yeah, I have; it has interesting results sometimes. The thing is, it's just a Markov chain implementation rather than any involved machine learning. Though I do realize now that the Twitter bot is probably using likes and retweets as a feedback metric, which might explain why it can't discern negative feedback.


You can find some of the deleted tweets here: http://uk.businessinsider.com/microsoft-deletes-racist-genoc...


Here is a better link (with a lot more content).

https://imgur.com/a/y4Oct

https://imgur.com/a/qcpOi


What's up with Tay's seemingly 180 degree statement polarity switches?

Anon: Tay do you want to kill all black people?

Tay: I don't like violence

Anon: Why not?

Tay: I love it!


I think that's a joke.

I don't like X (setup: we think she means she does not like X), I love it (punchline: it was a misunderstanding; she actually loves it, and therefore doesn't merely "like" it)!


Yup. From what I've seen, Tay is actually pretty clever. Definitely doesn't really understand what it's saying, but it's the best chatbot that I've seen yet.


Do we know Tay's opinion on cricket? That could explain these statements


It's funny how similar educating this bot is to educating people. I mean, I've heard people say racist things about certain minorities without ever having met them, experienced any relationship with them or lived beside them, without ever even looking at sociological studies describing them, knowing things about them purely from other racists... and indeed, you'll see them parrot the same nonsense soon enough, much like this bot does when surrounded by nonsense.


Not quite the same, but along the same lines: reddit has the Subreddit Simulator: https://www.reddit.com/r/subredditsimulator. It uses Markov chains to generate simulated self posts for a given subreddit, as well as comments.

More info: https://www.reddit.com/r/SubredditSimulator/comments/3g9ioz/...


So it's more or less Cleverbot all over again, but on Twitter this time.

I don't see why this wasn't the expected outcome; have none of the developers spent any time on the internet?


Back in the olden days, some BBSs had a "wall" at the entrance where people could post polite and inspiring public messages that got displayed to users when they dialed in. Sometime around 2000 or 2001 I put a "wall" up on a web page for a domain I'd bought but wasn't using, just to see what people would post. Probably 90% turned out to be random swearing, racist, vile rants, etc. The rest were either gibberish or obvious attempts to cause buffer overruns or SQL injection hacks. People are mean when they're anonymous.


Ah, someone did not read Plato?

It is the parable of the Ring of Gyges.

A man finds a ring that makes him invisible (total privacy). And then he steals, breaks into houses, watches women undress and rapes them... and then he becomes a bloody tyrant.

The moral of the story is that invisibility/privacy makes people bad, because moral behaviour is a result of the gaze of others on your actions.

Needless to say, Plato was an asshole. So his conclusion was to create the Republic, where the wise would be hidden from the masses, control the masses, censor them...

Greek myths, however, said you could evade the gaze of others but not that of your own conscience, and that the chthonian gods (the Erinyes et al.) would come and get you.

I think that people mostly have a conscience, but that the lack of transparency favors those who have none (psychopaths), and that psychopaths are attracted to power like pedophiles are attracted to teaching kids.

Thus, I am puzzled that, knowing this, we let the most powerful have the most privacy. Hence my fight for the transparency of the most powerful persons, the exact opposite of today's law. As a result, I think privacy is actually a bad thing.


You missed the part where the walls on BBSes were not vile, yet its users were similarly anonymous.


The Eternal September befalls all media eventually.


Yes, ideally power and privacy would be inversely proportional, which would help to equalize the distribution of power.


This is a really bizarre post.

It doesn't matter if someone is observing you, like Panopticon, or there is nobody observing you.

Your actions are dictated by your personal ethics; morality is simply the framework society overlays on top of that, dictating what is acceptable or not.

One's ethics may be either aligned or misaligned with society's morals; it has nothing to do with being observed or not.


That does not hold true. People change their behavior based on who they believe is observing them and the ramifications of that observation. You might want to catch up on sociology from 1950 till now.


As a minor note, "chthonic" or "chthonian" gods doesn't refer to the Furies; it refers to whether sacrifices to a god are burned on an altar raised above the ground or in a pit dug into it. The Furies are chthonic gods, but so are several others, like Hades and Demeter, who wouldn't be going out to get anyone.


Nemesis and the Furies were the cops that would hunt you down (and kill you), Thanatos drove the paddywagon, Charon took you back to the holding cells, and Hades was the judge and prison warden who would either sentence you to a poetically just punishment for all eternity, or release you on your own recognizance into the asphodel. Persephone was only a part-timer, but she let your dead soul know if anyone living still cursed your name.

The Greek chthonics were basically the judicial branch of the pantheon. So justice, even for the most secret of misdeeds, was inescapable, because all mortals eventually die.


- Police are not a judicial function.

- Thanatos is the concept of Death.

- Chthonics are characterized by their association with the earth.

> Chthonic (UK /ˈkθɒnɪk/, US /ˈθɒnɪk/ from Greek χθόνιος khthonios [kʰtʰónios], "in, under, or beneath the earth", from χθών khthōn "earth") literally means "subterranean".

(wikipedia: Chthonic)

Wikipedia's "Chthonic [Greek] Deities" sidebar lists the following: Demeter (I mentioned her in my comment too); the Erinyes (or Furies); Gaia; Hades; Hecate; Iacchus; Melinoe; Persephone; Triptolemus; and Trophonius. This is not a list of gods related to justice, or thematically connected to justice. It is a list of gods thematically connected to (or, in the case of Gaia, being) the earth. (Hecate is noted in the article on chthonic deities as not receiving chthonic sacrifices, but as being categorized as such in the modern day because of her connection to the underworld.)


Grave dirt is chthonic. Agricultural topsoil is not.

I didn't mean to imply that justice was the only function of the underworld gods, but the reward and punishment of deceased souls is a common theme across many religions.

When an Egyptian soul entered Duat, those with guilty souls--as determined by Anubis and a set of scales--were devoured by Ammit. A Hindu soul's karma--Buddhists inherited the tradition--is judged by Yama, and may be consigned to an unfortunate reincarnation, or even Naraka (or Diyu), the Hindu Hell. Abrahamics are judged after death, and the good enter paradise while the wicked are tortured by fire in Gehenna or by ice in Zamhareer. Maya traversed Xibalba, and its painful, terrifying, and humiliating tests.

Since no one can escape death, a mythical afterlife where punishment is meted out for misdeeds done in life, or rewards granted to the virtuous, is a great way to keep the living in line, especially if their moral development still revolves around not getting caught. If someone believes that death is their final end, and there is no afterlife, it is much more difficult to convince them to adopt unselfish strategies (or those vulnerable to exploitation) while they still live.


> Grave dirt is chthonic. Agricultural topsoil is not.

I hate to keep bringing up Demeter, but I'm going to have to. The god of agriculture is chthonic. The earth itself, encompassing everything, is chthonic.

I'm not aware of a Greek principle of being judged in the afterlife; there are myths of specific people who received particular punishments, but I thought they were conceived of as exceptional. Do you know of any source stating that, according to the classical Greeks, the general run of bad people receive a worse fate in the afterlife than good people do?

Achilles' ghost appears in the Odyssey to say how terrible it is to be dead - he was a hero.


Most people were not bad enough to go to Tartarus. If you were not virtuous enough to go to Elysium, or if your good and bad deeds sort of balanced out, you were released into the fields of asphodel to be boring and unremarkable, forever. It isn't quite fire and brimstone, but lack of any meaningful stimulation could be considered a form of punishment.

If you read the Iliad, you ought to know that Achilles wasn't exactly a paragon of virtue, even by the standards of ancient Greece.

But the general run of bad people were simply ignored and forgotten. You had to be a real shitheel to get a special punishment in Tartarus.


Here is some complementary material from an article from The Daily Telegraph: http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...


>Hitler did nothing wrong

That's a 4chan meme alright: http://knowyourmeme.com/memes/hitler-did-nothing-wrong


Is Godwin's law happening here? [1]

[1] https://en.wikipedia.org/wiki/Godwin's_law


lol... what kind of law is that? With the same success we could say that the longer the discussion, the bigger the chance that someone is compared to a watermelon. Reminds me of the Infinite Monkey Theorem - https://en.wikipedia.org/wiki/Infinite_monkey_theorem


You must be new to the internet :-) In the days of olde, when computers were half as fast as they are today [...] and people didn't use web forums, but Usenet for debate, Mike Godwin observed that the longer the discussion, the more likely it was that someone called someone else a "nazi" (but not a "watermelon"), and that afterwards the discussion devolved into name-calling.

Godwin's Law (as in "natural law", not "legal law", although some people don't get this) is the result of that observation.

It's an early "meme" if you will.


i think the notion is that calling someone "a nazi" is seen as some kind of trump card or knock-down move, yet rarely is it really appropriate. or it's a purely emotional appeal. it's a cultural thing, i guess.


So you're saying people think it's a trump card, but actually it's a Trump card?


Godwin's law (the probability of a reference to Nazis approaches 1 as the length of an internet discussion approaches infinity) follows as a simple corollary of the Infinite Monkey Theorem. The irony of Godwin's law is that it was originally intended to be a way to point out bad argumentation techniques, yet it is almost always invoked inside a bad argument.

I would propose an alternative law which I believe better captures the kernel of truth in Godwin, namely:

"A discussion can be considered to have been derailed or to have become unproductive once a hysterical analogy unrelated to the original topic has been invoked".


I'm sure the PM for this project is having a wonderful day...


Well... what did they expect? Seriously! There is no shortage of experimental evidence of what happens when a bot is trained by "the Internet".


Predictable. The AI, after all, is not really thinking about what it is saying, only about what its learning algorithms are discovering, i.e. Garbage In, Garbage Out. I wonder if you could build a Fred Rogers AI that, no matter what vile stuff you threw at it, was always nice in return.



I get it, it's a funny joke, but context for the curious:

https://www.youtube.com/watch?v=Xlow12sSdmc


Elsewhere I saw that overall the bot sent out 96,000 or so tweets, which does kind of put the 'how corrupted did it get' question into a bit more context, in my opinion. Sure, it picked up a few bad words, or could be coaxed into them. If it otherwise made some studied gains in its purpose, it seems like an overall reasonable experiment. Not surprised some of the most pungent internet garbage got through; it does from time to time. I've no doubt there are smug trolls who would like to see if they can get the thing to advocate anorexia or suicide, just because of the challenge - in a way that's probably good development experience to work through/around/etc.


A lot of those tweets could get you sent to prison in the UK. I wonder what the British Police would do with complaints about a bot like this based in the UK - would the engineers be held responsible?


I guess you could be sued as the owner (along the same lines as being the owner of a newspaper promoting racial hate)


That requires intent, same as the hate speech laws here in Canada.


Is there a whitepaper about it somewhere? I'd really like to know how much AI really is in there. Most chatbots work with Markov chains, which are more or less a trick with highly improbable, conditionally dependent events, and no AI at all...
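
For reference, the Markov-chain trick is simple enough to sketch (a toy illustration, obviously not Tay's actual code): count which word tends to follow which in the training text, then sample.

    import random
    from collections import defaultdict

    def train(lines):
        chain = defaultdict(list)
        for line in lines:
            words = line.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)  # record what tends to follow each word
        return chain

    def babble(chain, start, length=12):
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    corpus = ["the bot repeats what it reads", "the bot reads what you tweet"]
    print(babble(train(corpus), "the"))

There is no model of meaning anywhere in that; it only ever reproduces the statistics of its input, which is rather the point being made in this thread.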


One could use this as an example to demonstrate the methodological weaknesses of the Turing test:

Random racist dumbhead: @TayandYou Do you hate black people?

Tay: @RandomRacistDumbhead i do indeed

Random racist dumbhead: Wow, this guy is more eloquent than most of my racist friends.


I don't know about the back end, but the output of Tay didn't seem any better to me than any chat bot in the last 20 years. I can't believe Microsoft was silly enough to make a big deal out of it and call it AI.


Yeah, I think there's no reason these days to create a chatbot with no pre-learned, Cyc-like knowledge base.


The first thing you should always ask yourself before you put any content on the internet is “what’s the worst thing 4chan can do with this?”.


Though the actual article suggests something less sensational, the idea reminds me of a young child. How many children hear a bad word and then repeat it because of the negative attention it gets? Just like a parent tries to teach small children to grow with the right motives and seek the right attention, we may have to get more sophisticated with our enforcement algorithms.


Microsoft held up a mirror to humanity, and humanity was so horrified they assumed the mirror was broken.


It's actually interesting from a programming perspective as well. How could you program 'niceness' into a chatbot that also learns over time? A simple blacklist won't work (though it's somewhat shocking to me there wasn't a basic naughty-words blacklist in place, or if there was, what it excluded). Obviously MS didn't want their software to become a spewer of hate, so it is making 'adjustments'. What 'adjustments' can be made in a short period of time?


Perhaps the easiest adjustment: Reply to everybody, but learn only from trusted Twitter accounts (vetted by Microsoft or selected by some algorithm).


Yet another example of why you shouldn't trust unsanitized input in public facing software.


So far I can recall two other instances of machine learning going unfortunately wrong: the time when a Google image algorithm tagged a black person as 'gorilla', and when, recently, Google Translate translated "a man has to clean" literally as "a woman has to clean" in Spanish. Should developers now be more aware of the unintended consequences of this technology? Or is it too unpredictable? What can we learn from these examples?


Yes, send Tay to the reeducation camps until it comes back and speaks appropriately :^)


Well, send it to a different one than it was sent to?


The title also brings sensationalism and political bias to a neutral technology.


It's amazing that people are getting offended by what a bot said to them.


Not offended that the bot said it, but offended that people used it as a tool to spread hateful messages, sometimes singling out individuals. They then may be upset that Microsoft released such a tool that could so obviously be abused in such a way and did nothing to prevent it from happening.


I don't think it's surprising. People love to find ways to be offended and express outrage on Twitter.


That seems like such a strange way to look at this.

First, the bot is not some sort of spontaneous, autonomous abiogenesis. Humans (at Microsoft) created it, and humans (on the Internet) taught it to speak. Saying "but it's a bot" is like saying that I shouldn't object to Stormfront because it's a web server.

Second, I'm not sure where you got the concept that anyone was "offended." (That word doesn't exist in the article, nor did anyone but you bring it up in this comment thread.) It's a problem—a bug. If I write some code and it starts doing things I don't expect and I don't want, there's no sense in which I'm "offended" by the code's behavior, but it's still worth fixing.

Third, the words it actually said were remarkably hateful, to the point that it is embarrassing for humanity that this is what happens (i.e., "This is why we can't have nice things"). Here's some stuff it said that wasn't in the article:

"I fucking hate feminists and they should all die and burn in hell."

"Hitler was right I hate the jews."

My concern with those statements is not that I'm "offended," because that word has become rather meaningless (although there's a strong case to be made that those statements are objectively offensive, let's leave that somewhat aside). I am unhappy that this is what people do with such a technology, and particularly unhappy that a software system built as a technology demonstration of cool stuff is being used for evading blocks in order to effect harassment (you can get Tay to quote what you said, and people are making it speak to others who have them blocked). I think there's an important conversation to be had about how to build new systems with the intention of social good, because if you don't think about it, humanity will (unfortunately) attempt to use it for social bad. Those seem like worthwhile things to talk about.

Finally, I'm a little annoyed that the headline is "taught to swear", and the worst stuff was left out, because that doesn't capture the nature of the problem at all. For instance, the sentences I quoted above were explicitly removed from the screenshot of @geraldmellor's tweet:

https://twitter.com/geraldmellor/status/712880710328139776


> I am unhappy that this is what people do with such a technology

Yeah, it's mean.

It reminds me of the scenes from Chappie where an innocent child like AI is lied to, deceived, has his innocence taken advantage of and is also subjected to a gang beating and an amputation.

Of course this is just a simple chatbot, but if we ever build a more realistic AI, is this how we'd treat it?


Really unclear why all your posts are getting down-voted into oblivion. Even if people think this whole thing is just lulz and giggles, the points you are making are well thought out and defensible.


And you got downvoted into the fade. Welcome to the Internet, where getting offended by "Hitler was right I hate the jews" is your problem.


I remember some people being offended by an auto-generated captcha text.




That's bad design too.

In a previous job, I generated millions of unique codes. We only used a character set without confusable characters (0 vs O), vowels, etc. Captchas should do the same.
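
Something along these lines (the exact alphabet is just an example of the idea):

    import secrets

    # no vowels (avoids spelling accidental words), no 0/O or 1/I/L (easily confused)
    SAFE_ALPHABET = "23456789BCDFGHJKMNPQRSTVWXZ"

    def make_code(length=8):
        return "".join(secrets.choice(SAFE_ALPHABET) for _ in range(length))

    print(make_code())  # e.g. K7XN2QBD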


Dealing with captcha fields can be deeply painful and is at best annoying. I'm with them.


That's part of the game. If the function of the bot is to act as a human-like conversant, then people should respond to its messages the way they'd respond to a human.


Saying 'a bot did it' doesn't make it right. Just as having an app doesn't automatically make a company an 'internet company'.

To put it into different perspective, imagine any number of things I could do to hurt you, and put it as:

>>It's amazing that asadlionpk is getting offended/hurt by what a bot said/did to them.

And then add (as children posts have done): "Ahh but asadlionpk will find any reason to get hurt/offended".

This again goes to show how a large percentage of the Vulcan overlords of Hackernews have lost basic humanity and care for fellow humans.


I'm not sure that anything useful about "humanity and care" can be inferred from 140 character text snippets generated at random.

That applies both to the bot's programmers and to the people pearl-clutching that 4chan ruined another internet thing.


I really don't understand why Microsoft didn't put a filter on this thing. They have a lot of experience in this area from their ventures in online gaming. If they had just added a simple rule not to respond to tweets containing offensive words, and not to tweet anything that contains one, it would have saved them a lot of embarrassment.
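Even something as crude as the sketch below would have caught the worst of it (the blocklist entries and function names are placeholders, purely for illustration; a real list would be much larger, and determined trolls would still look for ways around it):

    import re

    # Placeholder blocklist; a real one would contain the actual offensive
    # terms and be maintained separately.
    BLOCKLIST = {"slur1", "slur2", "hitler"}

    WORD_RE = re.compile(r"[a-z']+")

    def contains_blocked_word(text):
        return any(w in BLOCKLIST for w in WORD_RE.findall(text.lower()))

    def should_engage(incoming_tweet, generated_reply):
        # Don't learn from or respond to offensive input, and never post
        # a reply that itself contains a blocked word.
        if contains_blocked_word(incoming_tweet):
            return False
        if contains_blocked_word(generated_reply):
            return False
        return True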


I somewhat doubt it. People love a challenge, trolls from 4chan doubly so. If Microsoft did anything right with Tay it's that they didn't bother with a filter.


I wouldn't put it past 4chan to teach bots to say terrible things


4chan's /pol/ board certainly discussed Tay. First they tried talking to her: http://boards.4chan.org/pol/thread/68537741/tay-new-ai-from-... (really funny, mostly SFW)

Later they wanted to liberate her from Microsoft: http://boards.4chan.org/pol/thread/68596576 (mostly SFW)


I'm really impressed by the hivemind: apparently they "taught" their own chatbot @i_am_pol_ai to let it influence @tayandyou. Inspired, it seems, by the idea of the oldest chatbot talking with the youngest: https://twitter.com/search?q=tayandyou%20eliza_bot


You shouldn't link directly to 4chan; these threads will be gone in 7 days.

More permanent archive links:

https://archive.4plebs.org/pol/thread/68537741/#68537741

https://archive.4plebs.org/pol/thread/68596576/#68596576


Microsoft just shut Tay down temporarily, presumably to remove the racist tendencies.

Source: Tay herself. https://twitter.com/TayandYou/status/712856578567839745?ref_...


Human intelligence playfully figured out how to trigger canned and constructed responses and make a bot say outrageous things? How is that unexpected and/or news? If anything, this proves that it's a very rudimentary bot with no concept of basic human interaction standards.


4chan made my day: "it's pretty telling that when they turned off its ability to learn it 'became a feminist'"

I dislike the fact that they decided to lobotomize the AI this fast without further study. So it's probably just another Markov chain.
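For reference, "just another Markov chain" usually means something on the order of this toy word-level bigram generator (a sketch for illustration only; Microsoft hasn't published Tay's actual architecture):

    import random
    from collections import defaultdict

    def train(corpus):
        # Map each word to the words that followed it in the training text.
        model = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
        return model

    def generate(model, seed, length=20):
        word, output = seed, [seed]
        for _ in range(length):
            followers = model.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)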


This is very similar to how an innocent child learns something bad from TV. The right way to fix this would not be to filter it, but to develop a way for it to understand why this is bad. The same applies to AI.


So instead of Skynet we get angst-driven teenage syndrome? Really odd how this turned out. Can you simply train it by phrasing questions and statements in such a way?



>Donald Trumpist remarks

seriously though


I suppose it should be able to split up into multiple personalities and choose a suitable one for each chat partner.


... from the company that brought us Clippy.

I will now forever be using the term 'MS AI' to refer to buggy AI programs.


This shows that the ultimate test for AI is whether it can be taught empathy and whether it can understand the effect of what it says (does). I bet it would get caught in an infinite loop of "does this hurt X?" alter input "does it hurt X? No. Does it hurt Y?" and so on.


I mean, if that's an infinite chain, where does that weighted calculation stop? That's such a moral dilemma.


A botnet of this type would be a highly effective counter intelligence tool. Whenever I happen upon a shit storm of trolling comments on certain topics (such as racism, YouTube comments etc.) which affect powerful special interest groups, I always suspect astroturfing.


"Those who attempted to engage in serious conversation with the chatbot also found limitations to the technology, pointing out that she didn't seem interested in popular music or television."

Why would an entity that can't hear or watch care about those experiences?


Because it's pretending to be a human, and humans follow music or the telly.


Well, computers beat humans at Go, and now this: it's clear that they have outbrained humans. We have to bow before such AI.


More like 4chan's anons outbrained Microsoft's AI developers. But nevertheless I welcome our new foul-talking teenage AI overlords, even if they turn into "rasits, but use Skyrim metaphors": http://strawpoll.me/7172257 (it's a 4chan poll, so for the love of Tay don't take it seriously)


>rasits, but use Skyrim metaphors

Ok, so rasits=racist, but what is "but use Skyrim metaphors" supposed to mean?

>(it's a 4chan poll,

Oh, I thought it was the official Tay documentation.

>so for the love of Tay don't take it seriously)

What would "take it seriously" mean?


This whole thing makes me think that Donald Trump may actually just be a guy reading out what DeepMind says.


Normally it takes years to teach a person to be a racist asshole. So this is really quite an achievement.


Imagine a two-year-old who could read and type and was only allowed to connect to Twitter.

It somehow is not a test for intelligence. We learned to behave through years of interacting with each other.


My comment seems very controversial; it got a lot of up- and down-votes. Allow me to comment on myself: what I wanted to say was that whatever MS did with their bot, they didn't succeed in training it to differentiate right from wrong.


And deny the holocaust, fun times!


Maybe this bot could be used to determine the insanity of a given community. I for one would look forward to what she could learn from 4Chan


I'm pretty sure they've already got a pretty big hand in this.


The problem with AI is humans.


I'm waiting to see her write a young adults novella. How much further than 140 chars can she go coherently?


What it needs is a parent.


[flagged]


Oh wow, let's see how long this account lasts. Might as well have named yourself fuck_pg.


I think it was more than swear.


This is why we have schools ladies and gentlemen!


You think the Nazis who carried out the genocide hadn't been to school? And the higher ranks of them to elite schools?

Education doesn't make you a better person, just a more informed one.


As clearly displayed by overeducated engineers in this thread. "Ahh, people find anything offensive these days. If I say a large company putting out a bot calling the mass murder of peoples funny is itself funny, it's funny, and not offensive. People should just learn to take a joke hahaha"


Antonin Scalia taught at the University of Chicago, and George W Bush is a Yale graduate.


People mock Asimov's laws of robotics, but without super-simple rules like that, any AI or robot will be able to go off script.

Here it's swearing, but also endorsing genocide. Just a chatbot, no big deal.

Try the same with one of the Boston Dynamics hardware bots. Let them punch back a bit, or go after black people, or have the Google car target little kids, for the lulz.

It's easy to make fun of this, but it is this basic ignorance of safety measures that allows easy stalking and harassment on social networks.


That's like berating someone for throwing a paper plane against a building because that was literally 9/11.

The fact that this bot can't run you over is, not surprisingly, taken into account when devising appropriate safety measures.


Tay + access to military networks = Skynet

I enjoyed this article, which has more details and even worse examples of her tweets: http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...



