Google to warn when humans chat with convincing bots (bbc.com)
32 points by baddash on May 13, 2018 | 55 comments



It'll be interesting to see how a human knowing they're talking to a bot changes their behaviour - in the demos people thought they were talking to another person, and were polite and professional - I wonder if, knowing they're talking to a machine, people may change their tone, become more abrupt, speak more slowly or become aggressive - maybe it would lead to unconscious (or likely conscious) discrimination against bot calls, in a similar way to stories of people with 'ethnic' accents calling restaurants and being told there are no reservations, when in fact there are.


Can't wait for a day we'll be talking about human privilege and quotas for chatbots in call centers. /s

Your observation is most likely true, though. Humans do talk to machines differently, and I'm not talking about those who swear at bots "just for fun".

Personally, as soon as I'm positive I'm talking with a bot (because it doesn't respond naturally), I remove any unnecessary verbal clutter, as it only confuses the machine - my way of respecting the bot (haha) is precisely to omit what's irrelevant. I also try to guess which keywords the bot would recognize best, so I get exactly what I'm asking for.

So, "Hey, can you please help me? I'm looking for a way to configure Foo to do Bar, but stuck with error code 123. I wonder if there's any blah blah blah. Thank you very much!" becomes something like "How to configure Foo for Bar? It fails with error 123." And that if I still try to use natural language and don't go with `Foo Bar configuration error 123`. Surely, such conversation with a human would be considered impolite to say the least.

The above's tech-biased, of course.


You're not alone in doing that, though it'll be interesting to see how these bots perform when given this sort of succinct prompt and instruction, given the Google example was trained on real conversations - you may find they perform worse compared to 'natural' speech.


I wonder how it does with non-native English speakers at A2-B1 level (e.g. a Chinese restaurant run by immigrants).


Do you actually get a useful answer from the bot?

I don't think I have ever got a useful technical answer from any bot that was superior to just using google.


> The above's tech-biased, of course.

I'm also very curious how 'normal' people would handle this. Judging by the google search queries I've seen people type, and by their interactions with chat bots, it's quite possible they won't adjust the way they talk all that much.

On the other hand, tools like Siri and Alexa do seem to be teaching these same people to adjust how they talk, so maybe this won't be as much the case anymore.


What if it's self-correcting?

1. Comp has crap / no website

2. Robot can't automatically make booking

3. Comp gets annoyed with robot callers

4. Comp sorts out a decent booking system

5. Robots don't need to call anymore.


I would prefer talking to an AI for business calls, because then I know I can get straight to the point without wasting time on being courteous, without having to worry about hurting someone's feelings.


> discrimination against bot calls, in a similar way to stories seen of people with 'ethnic' accents

Are you seriously comparing racial discrimination and people wanting to talk to a human instead of a bot? Racism causes huge problems. What's the problem in not wanting to talk to a bot?


It's less profitable, and that's a way more important problem. (/s (but also !/s))


> (/s (but also !/s))

The term for that is HHOS.

http://www.catb.org/~esr/jargon/html/H/ha-ha-only-serious.ht...


Prior to this warning feature, I wonder what would have happened if during the phone call the hairdresser had asked "are you a real person?". Would the Google assistant reply "Ummm... I'm not real" or would it lie?


From reading the headline, I assumed Google was providing some useful service where chrome or Google voice or some other Google medium would warn hapless human when they ended up in conversation with an AI pretending to be human.

But no! Google itself IS said evil AI. But hey, it's ok, don't worry, it will come with a built in warning!

Things like this make me think that big tech has really lost the plot. You'd think in the current climate that Google would be keeping their heads down, staying away from things that are creepy, unsettling and potentially providing evildoers with another way to maliciously influence people.

But no... because ads.


I'm not sure it can be dismissed that easily. If Google stick to "providing tools" and let others decide how to use them, maybe they will do better.

It's when Google product managers stand on a stage and tell us how their technology will make our lives better, that I cringe. I don't want Google telling me how I should live my life. Just provide the tools and tech, and let us work out how best to use them for ourselves.

There will be situations where this "warning" will be unwanted. Google should not be dictating when or how the warning is delivered. That should be at the discretion and option of the business or individual.

I can see this tech being useful in the reverse scenario they demonstrated. That is, the bot answering calls on behalf of the restaurant and accepting bookings. Often when you call a restaurant, it's noisy and you just know you've interrupted someone from doing other tasks.


> "the reverse scenario they demonstrated. That is, the bot answering calls on behalf of the restaurant and accepting bookings"

In the scenario of bots talking to bots, if they identify themselves as bots to one another, then they could quickly switch over to a much more efficient machine communication method. :)
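
Just to sketch the joke (everything below is invented for illustration - the greeting token, the payload fields, none of it is a real protocol or anything Duplex actually does): each side could open with a recognizable token, and if both hear it, drop the synthesized small talk and swap structured data instead.

    # Hypothetical bot-to-bot handshake: if both parties announce themselves
    # as bots, skip the spoken-English simulation and exchange a structured
    # payload instead. All names and fields here are made up.
    import json

    BOT_HELLO = "X-BOT-HELLO/1"  # made-up identification token

    def pick_channel(peer_greeting):
        """Choose how to talk based on what the other side opened with."""
        return "structured" if peer_greeting.startswith(BOT_HELLO) else "speech"

    def booking_payload():
        return json.dumps({"intent": "book_table", "party_size": 2,
                           "date": "2018-05-18", "time": "19:30"})

    if pick_channel(BOT_HELLO) == "structured":
        print(booking_payload())  # no small talk required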


> warn hapless human when they ended up in conversation with an AI pretending to be human.

If people can't tell the difference, what do they gain by knowing, and if they can, how does the warning help?


Why can't they? They can hang up. Sure, some people might not be allowed (a restaurant owner might not be willing to lose reservations because the employee doesn't like talking to bots), but in other cases, they are.


Don’t really get this tbh. Why should I care if I’m talking to an AI or not?


I can't wait for automated phone spam to be weaponized so that we can receive verbal Turing tests when we call the customer service hotline.


Will Google record calls or metadata? More importantly, to alleviate concerns, will they indicate what their data policy is during a call?


I'm sure they will record everything and say it's for quality / training purposes like most call centers do. Only instead of training humans they are training AIs.


Ha!


The problem is that the technique is known, and it's going to be duplicated. Although honourable people will not defy patents or reason to use it maliciously, dishonourable people will! The community (and Google) needs to develop a better solution to this and to deepfake videos.


Some of the first people to start using this once it "breaks free" will surely be businesses who are tired of answering the phone. How does this not end with computers talking to each other using a low-precision, inefficient, low-bandwidth machine protocol over the PSTN?


Frankly I'd love a bot framework to turn the tables and call into my ISP's IVR to log a complaint.

A bot that would do all the waiting, trudge through the options, deal with the transfers, tell them I've done the standard debugging steps, and report the complaint number back to me.

That would be just incredible.
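
For what it's worth, the menu-navigation part is the easy bit. A toy sketch of the idea (the prompts, the menu table and the "complaint reference" phrasing are all made up; the actual telephony and speech-to-text plumbing, which is the hard part, is not shown):

    # Toy version of the complaint bot: map the prompts it hears to DTMF
    # digits, then fish the reference number out of the closing transcript.
    import re

    MENU = {  # "if the prompt mentions X, press Y" - invented for illustration
        "technical support": "1",
        "existing complaint": "2",
        "speak to an agent": "0",
    }

    def choose_digit(prompt):
        for phrase, digit in MENU.items():
            if phrase in prompt.lower():
                return digit
        return None  # unrecognized prompt: hand the call back to the human

    def extract_reference(transcript):
        m = re.search(r"complaint (?:number|reference) is ([A-Z0-9-]+)", transcript)
        return m.group(1) if m else None

    # Simulated run against canned IVR output:
    print(choose_digit("For technical support, press one"))            # -> 1
    print(extract_reference("Your complaint reference is INC-48213"))  # -> INC-48213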


Google may be the first to release a system like this but it won't take long until there are equivalent services, which may not warn that it is an AI. How long until those fun calls to automated systems start with a captcha?

sidenote: how long until the Butlerian Jihad?


> how long until the Butlerian Jihad?

It already started for some people. I personally know people that have made fighting against technocracy their long-term goal. Technology has been such a destructive force in their lives and is a continually growing threat. When you have to worry all the time about the new Sword of Damocles that Silicon Valley hangs over your head every month, thoughts of Butlerian Jihad style active violence against technocracy become inevitable.

In case anybody wants to dismiss this as merely an outlying opinion, consider the poem "There's No Reception in Possum Springs" from the game Night In the Woods:

    ... (see [1] for the rest) ...

    Replace my job with an app
    Replace my dreams of a house and a yard
    With a couch in the basement

    "The future is yours!"
    Forced 24-7 entrepreneurs
    I just want a paycheck and my own life

    I'm on the couch in the basement
    They're in the house and the yard
    Some night I will catch a bus out to the west coast

    And burn their silicon city to the ground
[1] http://nightinthewoods.wikia.com/wiki/Selmers#Possum_Springs...


It's a little bit surprising that the people in the Dune setting even bothered _trying_ this. By the time of Dune they're constantly nibbling at the edges of the resulting limitations. Sure enough, the humans who disobey the Orange Catholic Bible and make thinking machines end up crushing their opponents.


Why does the BBC think we should read three times that this is “horrifying”?


Because a computer is standing in for a human unbeknownst to another human. The computer is effectively tricking a person into thinking it's another person... but tricking nevertheless. This opens a whole new door of opportunity and failure.


Oh hi. I'm, uh, calling to arrange the manufacture of a large quantity of paperclips for a client...


Tricking implies intention. The computer has no such intention yet. Google has it - or, most likely, Google just wants to get data efficiently, and the tricking is a side effect of how good their speech processing is.


Does Google have intentions? Or do only individuals have intentions? Serious question, I am sure philosophers have thought about this.


If Google - an organization with a purpose, charter, and organizational goals - can have intent... how is that different from software?


It is not that different. The software just doesn't have the intent to trick humans into thinking it is a human. There's no feedback loop assessing its performance at tricking humans and changing behavior to increase it. To be precise, such a feedback loop is probably external to the software and implemented by the engineers.


Yeah but they just got a tweet from a professor and repeated it three times. A bit more effort could be made.


What difference does it make if a human, dog or robot tries to book a table at a restaurant? As long as it speaks in English it doesn't matter. It's the same outcome.


In the first place, the demo would have been much better if it was used in their cylinder rather than to impersonate people.


This tech might work well for dealing with Emergency Service calls: filtering out inappropriate calls and only passing genuine emergencies on to the human operator for the required service.


If I had to pick a single area of phone calls that I would not want automated, it would be 911 calls. Any potential delay or misunderstanding could lead to a horrible outcome.

There's no question that humans could make similar errors, but they can be held accountable in ways a computer could not.


This sounds like the worst situation to use it in, unless it's actually better at English than humans. Even if 99/100 emergency calls are garbage, you want the best responder on the line immediately for the one call that might save lives.

And if it decides to hang up on an actual emergency? That would be a special kind of fail.


I guess it depends on whether Google think it would be up to the job. If so, then it would need to undergo a lengthy trial for the Services to establish a suitable level of confidence. Emergency service calls are recorded, so there's plenty of real-world data to test and tune with. And for a live trial I'd have real people shadowing Duplex; listening in and ready to take over if the call doesn't go in the right direction.


I imagine it won't just hang up, it will solve the call just like a human would, but since it's trivial, it can be automated. Any case that the bot doesn't 100% surely know how to solve -> call is forwarded.


Emergency call takers are trained to detect callers who are unable for some reason to clearly express the emergency and therefore can only have a mundane conversation and maybe say where they are.

They also deal with calls initiated by small children who don't understand what's going on beyond that they were told at some point to dial 9-1-1 (or 1-1-2 or whatever the local equivalent is) if anything bad happens. "Mummy won't wake up" is a common complaint which can signify anything from a drug overdose or homicide to carbon monoxide poisoning or an animal attack...

It would only make sense to filter out non-emergencies if you're able to achieve an _extremely_ low rate of rejecting true emergencies and your cost to service false alerts is high. For example, the analogue COSPAS-SARSAT system was phased out because it had more than 99% false alerts, and servicing an emergency typically means dispatching helicopters and/or coast guard vessels to a distant location. False alerts (now often caused by human error, i.e. typically somebody did press the button but didn't mean to, or hadn't appreciated that "I'm bored" isn't an emergency) remain the majority of all COSPAS-SARSAT alerts with the digital system, but at least it's not 99% any more.


That would be fairly neat, although I think there might be some problems. Who is to blame when a voice assistant fails to pass on the call when somebody speaks with the wrong accent?

On the other hand, I imagine a system where you could speak in your native language and have the report passed on as a transcript may also save lives.


What if there are false positives?


And we should rely on Google's pinky swear? We need authentication for phone calls and a set of laws requiring disclosure when this type of service is used by legal entities. And we need these laws now.


Because otherwise... we might be somewhat annoyed? What terrible events do you see happening that require laws with such urgency?


There exists audio synthesis software that can mimic anyone with an almost indistinguishable voice. There also exists software through which you can automate responses. This will not go down very well. If automated scamming isn't bad enough for you, there is no law prohibiting a corporation implicitly posing as one of your friends, your employer or a person in general.


I don't know about your jurisdiction, but laws against impersonation already exist in many places.


Wouldn't that just classify as fraud? I think existing laws already have that covered.

After all, impersonation done by people is nothing new, this technology just makes it easier.


>Wouldn't that just classify as fraud?

20 years ago, much of the ToS you blindly accept on many websites would land the developers and company management in jail. At least in my country. That is for counts of misuse of personal information, defrauding the customer and potentially espionage.

Also, if they sold physical devices, lying about the function of the buttons, installing erasure buttons that don't erase anything, etc. would also cause a class-action lawsuit and lead the district attorney to press charges for fraud.

I can totally see companies, especially the ones that don't care much about keeping a good face, like debt collection services, using this kind of service in extremely unethical ways while retaining plausible deniability in court.


If companies are evading existing laws, what makes you think more laws will help?


Fraud is generally defined as (1) a lie; (2) told intentionally; (3) that caused some tangible damage. Unlikely to fall under fraud.


Why? What's so bad about this?



