
What actually happened: Alexa misinterpreted some voice commands and activated a "call" skill. The people involved and local news got very excited and escalated this into a conspiracy story.

Amazon takes customer privacy EXTREMELY seriously. There's no way a team would get the "ok" to build a skill that randomly records private conversations then sends them to a random contact. It also doesn't make any logical sense to build such a skill.

Yes, I might sound biased because I am an engineer at Amazon. This statement is my own and does not represent Amazon's position.




I get what you are saying but I would say that Amazon does NOT take privacy extremely seriously or this couldn't have happened. Let me be clear that I'm not saying they don't care at all or they are conspiring with the NSA.

What I mean by the above is that the "call" skill is very different from the "weather" skill. All Alexa has to do is add a confirmation prompt to the "call" skill and this wouldn't have happened. That is what taking privacy extremely seriously looks like. This is exactly the same as the phantom laughter incident from a few months ago. Alexa "heard" someone say 'Alexa, laugh' and laughed, but that wasn't the user's intent. It was fixed by moving to 'Sure, I can laugh,' followed by laughter.
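For illustration only, here is a minimal sketch of what such a confirmation gate might look like; the two-turn flow and all the names are made up, not Amazon's actual skill code:

    # Hypothetical sketch of a confirmation gate for a "call"-style intent.
    # All names here are illustrative, not Amazon's API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Session:
        pending_intent: Optional[str] = None
        pending_contact: Optional[str] = None

    def handle_call_intent(session: Session, contact: str, utterance: str) -> str:
        """Two-turn flow: first turn asks for confirmation, second turn acts on it."""
        if session.pending_intent != "call":
            # First turn: do NOT place the call yet; ask the user to confirm.
            session.pending_intent = "call"
            session.pending_contact = contact
            return f"Did you want me to call {contact}? Say yes to confirm."
        # Second turn: only proceed on an explicit, affirmative reply.
        confirmed = utterance.strip().lower() in {"yes", "yes please", "confirm"}
        target = session.pending_contact
        session.pending_intent = None
        session.pending_contact = None
        return f"Okay, calling {target}." if confirmed else "Okay, I won't call anyone."

    if __name__ == "__main__":
        s = Session()
        print(handle_call_intent(s, "John Smith", "call john"))  # asks for confirmation
        print(handle_call_intent(s, "John Smith", "no"))         # call is cancelled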

Voice UI is very hard and still in its infancy, but the potential for personal harm (physical or emotional) must be considered in these interfaces. Turning off the lights may not need confirmation, but unlocking the doors or turning off the alarm probably should. Sending recordings, answering calls, or even calling people should require more hoops, or at least allow the user to control the risk/reward.


> they are conspiring with the NSA

There is absolutely no reason to believe they are not conspiring with the NSA. They have a huge deal with the CIA, and it's plausible some of those funds are for surveillance capabilities. They would not disclose (and would likely be legally prohibited from disclosing) any relationship they have.

https://venturebeat.com/2014/03/18/snowden-slams-amazon-for-...

https://www.theatlantic.com/technology/archive/2014/07/the-d...


> There is absolutely no reason to believe they are not conspiring with the NSA.

Or, more likely, being made an offer they cannot refuse by the NSA. And the rank-and-file engineers may not even know it's happening; all it takes is inserting some diverting code into the pipeline and calling it "QA monitoring" or something. A couple of people in the whole org would know that somebody from some IP connects and downloads this "QA" data periodically; all the rest would be completely ignorant, and indignant at the thought. I don't see anything preventing this from happening at Amazon - or anywhere else.


~~conspiring with~~ accepted an "offer" they couldn't refuse.


I think you accurately identify the issue with this situation / skill: voice triggers need to weigh convenience (how easy should it be to activate this skill) against its permissions or potential (how can this skill affect a customer). In this case, I think this was not done properly.

This does not mean Amazon does not take privacy seriously. It is a company of small teams with very few layers of management between an engineer and a business decision. The error in judgement of one team does not reflect on all of Amazon.

As you say, Voice UI is still in its infancy and not without growing pains. However, because Amazon does take privacy very seriously, I'm certain that after this incident there will be actions taken internally to ensure teams properly weigh the gravity of a skill against its voice trigger (or adjustments made if there is already an existing policy).

My own thoughts, not Amazon's.


> I get what you are saying but I would say that Amazon does NOT take privacy extremely seriously or this couldn't have happened.

Apple shared personal photos of mine onto the internet without my permission. I could not delete them without getting support to assist, and they could not provide me with a reason why it happened.

Would you say that Apple does not take privacy seriously?


I would need you to explain how that happened.

The issue from the article is systemic. Anyone could easily be recorded by accident and have that recording shared with random people.


I know you feel like the story is not being fairly presented, but you are almost certainly not helping Amazon by posting here about it.

Why? Because from a PR perspective, it doesn't matter what Amazon or the engineers intended, all that matters is what actually happened. You said:

> There's no way a team would get the "ok" to build a skill that randomly records private conversations then sends them to a random contact.

But that's exactly what happened. Whether it was a bug or a feature doesn't matter. Imagine if the wing fell off a plane and an airframe engineer came on here and said "hey, there's no way we would get the OK to design a wing that falls off in midair." Yeah, we know. The fact that it happened by accident doesn't make it better (maybe worse, actually).

Now you've got dozens of people responding to you, and BTW a lot of reporters read HN too. Do you really want to see stories like "Amazon Engineer Calls Customers Conspiracy Theorists"?


The obvious, predictable and serious screwup occurs and people shrug it off. Engineers from Amazon post about how it's all a regrettable mistake and it won't happen again. I am not gonna unload on you specifically here, but if you could maybe pass along to whoever is making business and product decisions over there: yeah, maybe 80% are stupid enough to buy this crap, but there are a huge number of people who are extremely creeped out by this and avoid it like the plague. That deafening silence you hear from them should not be interpreted as license to push the boundary even further.

If there was an Alexa in my place of residence, I would rip it out of the wall, smash it to bits with a hammer, and fire it out of a cannon into the sun.

Hell no to this crap from my side. I hope there are more incidents like this until people wake up to the fact they are putting telescreens in their homes. You guys can't be trusted!


>If there was an Alexa in my place of residence, I would rip it out of the wall, smash it to bits with a hammer, and fire it out of a cannon into the sun.

You may consider disposing of the bits at Kilauea instead of the sun. It would be almost the same and much more affordable.


It's pretty clear to me that this was an inadvertent activation of the call skill. The equivalent of butt-dialing someone from your phone. I get random voicemails from people all the time that are clearly recordings of their phone in their pocket.

If that's the case, then you need to provide additional features to reduce the chances of this happening. Longer and more unique wake word options, or more complex and deliberate confirmations before the call is placed. Something that a user can enable if they're worried about this sort of thing.


100% this. It seems like there should at the very least be a product feature that disables any outbound messaging by default (similar to how you can block purchases made without a PIN). This doesn't solve the problem of always listening, but it at least prevents this particular situation from happening inadvertently. Additionally, you should be able to set automatic deletion of voice recordings after a specified amount of time. I'm sure some of this has to be in the pipeline with GDPR.
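As a rough sketch of the kind of opt-in defaults being suggested here (hypothetical names and values, not an actual Alexa settings surface):

    # Hypothetical privacy defaults for a voice assistant; purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        outbound_messaging_enabled: bool = False  # off until the user opts in
        require_pin_for_outbound: bool = True     # mirror the purchase-PIN model
        recording_retention_hours: int = 24       # auto-delete recordings after this

        def can_send_message(self, pin_ok: bool) -> bool:
            # Outbound messages need an explicit opt-in, plus a PIN if configured.
            if not self.outbound_messaging_enabled:
                return False
            return pin_ok or not self.require_pin_for_outbound

    print(PrivacySettings().can_send_message(pin_ok=True))  # False: opt-in is off by default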


Your comment sounds belittling to me. The OP isn't a "conspiracy story", it simply says what happened. Yes, it tells the story from the couple's point of view. Shouldn't it? Sure, they "got excited". Who wouldn't?


What is this alleged 'conspiracy' story? An Amazon device recorded a private conversation and transmitted it to someone else, and it did so without the user's knowledge or intent. That actually happened.

How it happened is only relevant to the engineers who build and maintain the thing. I, on the other hand, could not care less how it happened, and the fact that it did happen is reason enough never to buy one of those infernal devices.


Yes. The fact that the capability exists is the issue. We need system design that mitigates these core privacy violations.


>How it happened is only relevant to the engineers who build and maintain the thing. I, on the other hand, could not care less how it happened

I'm sorry, this is just stupid. If you cannot see the distinction between a feature that was intended to spontaneously record audio and a bug caused by faulty voice processing, then that's on you. The distinction is pretty critical.


Critical to what, exactly? The consequences to the user are the same.

Obviously, intentionally designing a feature that spontaneously records and transmits audio is a problem for many reasons. But the lack of intent does not magically erase the consequences for the people who experience this kind of bug.

And, to be clear, this was not simply a matter of "faulty voice-processing". The fact that this could happen without the user's knowledge is a problem in itself. Clearly, there are inadequate visual and audio cues, and insufficient or nonexistent verification. Those failures are not bugs; they are bad design and engineering.


And this is where "software" diverges from "engineering". It's not a conspiracy, it's negligence.

Bugs happen in architecture, aircraft, etc. too. The difference is that the actual engineers are paid to take a precautionary approach -- and spend significant resources -- to actively prevent bugs from making it into the final product.

Amazon and your team have built a great product (I have one and make moderate use of it, and have even considered building some skills).

But you have planted a full-on bugging device in millions of people's homes. Done by a government, this would be cause for war or revolution. This is serious, and you need to treat it much more seriously than you obviously are. Not every 'skill' requires the same minimal level of security and verification; some, like this one, require much more, or should be forbidden outright until such security can be properly implemented (and yes, this should probably include calls only to pre-configured whitelists, intent confirmation, etc., and to any manager who says "that's too inconvenient for the user", the response is "screw you, it's critical").

You call yourself an "engineer" at least twice, and claim that you take privacy "extremely seriously". The evidence from this incident and others noted in this thread indicates otherwise. Clearly, insufficient resources were allocated to figuring out the potential failure modes of a "call skill", and preventing them.

All due respect, but your team needs more of an engineering approach than you have. This entire "it's gotta ship yesterday" mentality in the software industry used to be just inconvenient. Now it's getting dangerous. Please help stop it.


I agree with you and my response to another comment elaborates: https://news.ycombinator.com/item?id=17154679

I'm all for doing away with the "it's gotta ship yesterday" mentality. What are your thoughts on how developers can help stop it?


First, the Tech/Dev managers need to stand up. Instead of saying 'Yes' to every feature request, they need to say 'No', or 'Later'. I've seen too many who just feel that their job is to implement everything as fast as possible, approximating the sales/mktg/product guy's latest half-baked idea as closely as they can.

This is not easy, especially as the CEO is still typically above the CTO and can overrule. I had it happen to me, when we were ahead on a scalable version of the product, but they didn't like the timeline. The management decision cost almost a year of messed-up, myopic development schedule just to roll out apparent features sooner, while ignoring the likelihood and eventuality of bugs. Afterwards, when we got back to the scalable, highly modular version, we started taking business from competitors who couldn't scale. I'd say my mistake was to describe only the broader consequences, and not spend the time to enumerate in detail what the consequences of the non-scalable quick program would be. Of course you cannot predict exactly what bugs will happen, but I could probably have done a better job of drawing out the scenarios (not sure it would have made a difference, but it might have).

I'd also say that we need to create specific structures and plans to study and quantify risks, as is done in real engineering like aerospace, architecture, etc. Classify those risks into a range of categories, from small bugs to existential threats to your customers or project.

Different steps need to be taken for each class, and a significant part of the planning needs to go into de-risking the project.

I'm in physical rather than software development now, working on carbon-fiber-type technologies, and I notice that my military customers, who are building very cutting-edge stuff, often talk of 'de-risking' the project, whereas I don't hear this much from other customers. Seems like an important distinction to take on board.

--- From a user perspective, something I noticed after looking at the issue on our own Echo yesterday: the UI is a totally greased slide that hides choice from the user and slides them right into granting permission to the contact list. It seems that effort was made to hide the actual features and functions that will result from giving permission, and to obscure the 'Skip' option. So it would be easy not to even notice that your device had these new capabilities. Obviously, I'd recommend taking more time to sell the features and let us make an informed choice. Then even if things go wrong, you'll enjoy some benefit of the doubt in the market and press.

(edit: add parenthetical)


> First, the Tech/Dev managers need to stand up. Instead of saying 'Yes' to every feature request, they need to say 'No', or 'Later'.

Yes, I see this a lot too. I think software managers have more incentive to get new features deployed. I don't think this is a good way to measure their performance as a manager because of the consequences we've already mentioned.

I would also like to see more concepts from physical development implemented in software. At times it feels like the wild west out here, and too often we ignore the lessons from similar experiences.

Thank you for that input btw.


Glad to hear it's helpful, thx.

I know Amazon's got a different motivation matrix than a startup; e.g., Amazon won't die if some feature isn't delivered by the next trade show, but they do have competition from the other majors.

That said, Amazon certainly also has the funds available to invest in a parallel risk team. If they're not motivated to do it by the risk to their users, the best argument might be the potential reputation setbacks if stuff like this gets out there and causes problems: bad press, a reputation for creepiness, etc.

I know it always seems inevitable that you'll weather the reputation hits from errors and just press on to greater usage/adoption/sales, but it will always seem that way from the inside -- until it doesn't. Google's experience with Glass comes to mind; it could have been a fantastic product, but it tipped just past the point of being creepy, and poof, it's gone. It'll be a generation before anything similar comes back.

I'd hate to see that happen to Echo/Alexa. TBH, it looks like this product has both greater potential, and also greater creepiness potential than Glass ever did.


If a "call" skill is accidentally triggered, before it is sent to any email addresses, it should tell user "N seconds of voice was recorded and about to send to ....", please said "Send" to send, "Play" to play back the message, etc.

The default must always be voice recording will be auto deleted after 1 minute if no response is heard. It should let user know about that too.


Better yet, let the user pick a confirmation word (kinda like a 'safe word'), or choose one at random from a list of moderately complex but well known words that are unlikely to be uttered in casual conversation.

To send this message to John Smith, say 'artichoke'.
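A minimal sketch of that idea, with a made-up word list and hypothetical function names:

    # Sketch: require a randomly chosen, unusual confirmation word before sending.
    import secrets

    # Hypothetical list: well-known words unlikely to come up in casual conversation.
    CONFIRMATION_WORDS = ["artichoke", "periscope", "tambourine", "kaleidoscope"]

    def start_send_confirmation(recipient: str) -> tuple:
        """Pick a random word and build the prompt the assistant would speak."""
        word = secrets.choice(CONFIRMATION_WORDS)
        return word, f"To send this message to {recipient}, say '{word}'."

    def is_confirmed(expected_word: str, heard: str) -> bool:
        """Only send if the exact word is heard back."""
        return heard.strip().lower() == expected_word

    word, prompt = start_send_confirmation("John Smith")
    print(prompt)
    print(is_confirmed(word, "banana"))  # False: nothing gets sent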


Having a UI that gives the customer no audio confirmation that it got a command to record and send a message is a serious UI failure. This isn't a small thing. If a bug like this makes it through, how can I trust the device?

I don't think the article is indicating what you say. The report says it's a bug, but it does bother me that Amazon won't comment on the specifics. This should be patched immediately, with a full retrospective explaining what happened.


> There's no way a team would get the "ok" to build a skill that randomly records private conversations then sends them to a random contact.

Of course. But it is entirely plausible that somebody builds a skill that records certain conversations, and somebody builds a skill that sends recordings somewhere, and then, due to some bugs or coincidences or missing controls to prevent such an occurrence, the first skill is activated when it should not be, and the second one is activated when it should not be and with the wrong parameters.

So there should be more checks. Like: ask for explicit permission before sending any voice recording out, or ensure that the target the recording is sent to is validated against a whitelist explicitly set by the user, etc.
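A rough sketch of that kind of double check (whitelist plus explicit confirmation); the names and structure are hypothetical, not Amazon's implementation:

    # Sketch: only send a recording to a pre-approved contact AND after confirmation.
    from dataclasses import dataclass, field

    @dataclass
    class OutboundPolicy:
        whitelist: set = field(default_factory=set)  # contacts the user pre-approved

        def may_send_recording(self, recipient: str, user_confirmed: bool) -> bool:
            # Both conditions must hold; a misfired skill alone is not enough.
            return recipient in self.whitelist and user_confirmed

    policy = OutboundPolicy(whitelist={"spouse"})
    print(policy.may_send_recording("random contact", True))   # False
    print(policy.may_send_recording("spouse", False))          # False
    print(policy.may_send_recording("spouse", True))           # True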


> There's no way a team would get the "ok" to build a skill that randomly records private conversations then sends them to a random contact.

And then builds exactly that...


Yeah, the skill may not have been (and I'm sure it wasn't) intended to do that, but it did, in fact, do it. Intentions don't really matter after the fact.

It's a mistake, but it's a pretty bad one.


What if the mistake was that the recordings were sent to one of the contacts and not to some agency?


Thought the same thing. They actually built exactly that!


I shouldn't have called this a "conspiracy story" and I should have given more respect to the parties involved who experienced this. While I think the issue is similar to a "butt dial", the customer felt violated and more precautions should have been taken to prevent this.

What I should have said is this report makes it difficult to understand what actually happened. It seems clear this article favors click-bait quotes that insinuate a "big brother" vibe such as:

> "'unplug your Alexa devices right now,' she said. 'You're being hacked.'"

If she had instead said "Alexa butt-dialed me!", would you still be interested in this report?


>There's no way a team would get the "ok" to build a skill that randomly records private conversations then sends them to a random contact.

If you took privacy seriously you would have added an audible prompt asking for confirmation before sending, or a similar safeguard.

I mean if you are an engineer then you know that there is a nonzero false positive rate for the device to detect the "call" command.

Knowing this, and not implementing a trivial safeguard, sure seems to me like the team was "ok" with building exactly that: a device that "randomly records private conversations then sends them to a random contact."
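To put a rough shape on that point (with made-up numbers, purely to illustrate scale): even a tiny false-trigger rate across a large installed base means this will keep happening to somebody.

    # Back-of-the-envelope estimate with hypothetical numbers.
    devices = 30_000_000                    # assumed installed base
    false_send_prob_per_device_day = 1e-6   # assumed chance of a misheard send flow per day

    expected_incidents_per_day = devices * false_send_prob_per_device_day
    print(f"Expected accidental sends per day: {expected_incidents_per_day:.0f}")  # ~30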


> Yes, I might sound biased because I am an engineer at Amazon.

> This statement is my own and unrelated to Amazon's opinion.

Just because you admit your bias doesn't free you of it. Your statement and opinion could very well be influenced by your work surroundings at Amazon. While I do not doubt that Amazon takes security and privacy seriously, I find it hard to believe that they employ proper ethics (based on this and past examples such as the NYT incident).


Is your best guess at why this might impact someone's trust in Amazon products really that people would think it was an intentional design choice? (Seems kind of straw-man-y.) If not, then you should generally address the part that will matter to people -- not whether this was on purpose, but what issues caused it and how will they be fixed.


>>Amazon takes customer privacy EXTREMELY seriously

I have not experienced that; can you elaborate? They keep spamming me with emails about all their different services, showing me what I should buy, etc. There's no way of even turning these things off...


Actually, I can think of a very good use of the ability to trigger silent phone calls: in the case of home invasion or domestic violence. If Alexa detects what could be signs of distress, it can dial an emergency number without announcing that it was triggered. Otherwise, trying to shout "Hey Alexa, dial nine one one" will alert or enrage the perpetrator.


> Alexa misinterpreted some voice commands

Oh is that all.


...and activated a "call" skill.

Heh, skill.



