Path texts my entire phonebook at 6 AM (branded3.com)
1223 points by kemayo on April 30, 2013 | 409 comments



How about another detail -- the fact that the message said the user had photos to share, when he didn't?

There's annoying spam, and then there's straight-out-lying spam -- the "x has sent you a message, you need to create an account to view it" type.

Just curious, is there a way to sue/fine a company like this for false advertising, essentially?


We need to nip this in the bud.

Maybe it's time to create a Hippocratic Oath for developers to publicly commit to?

A future Path developer could then refuse to implement an unethical "feature" by pointing out that the company had hired them with the full knowledge that the oath had been undertaken.

I don't think the company would pink slip the developer, as they would probably want to avoid any attention being drawn to the unethical "feature" in a tribunal or other legal setting.


A "Hippocratic Oath for developers" sounds like a great idea, but we vastly underestimate the number of jerkbags in the world.

People want validation. Line-level employees want praise from coworkers and bosses. Executives want praise from their peers, investors, industry, and press. Concepts like ethics, "right," or even this-is-good-for-the-world aren't a concern when faced with "X will increase my social status and happiness with my peer brogrammers." What's X? It's anything possible, regardless of whether it's legal, right, wrong, or ethical.

A nontrivial number of companies use unethical methods (spam, false invites, false installs, phone and email address book capture, fake attractive profiles) to increase their vanity metrics. Employees see those methods as either: "this is bad, but it's sooooo good for us — look at all the lame n00bs who fall for our tricks" or "this is bad, and I'm ashamed to work here."

The ones who feel shame would take the Hippogrammer Oath. Those who revel in manipulating others and standing on their broken bodies will rake in all the profits while the good guys just sit around and "play nice."

Even the tech darlings of today used spammy methods to grow their initial user base. How do you grow your userbase to ten million when you're growing at a constant 5,000 per day? Obviously you want to "go viral." How does one just on a whim "go viral?" You can either become a meme, a social phenomenon, or spam and manipulate unsuspecting people. Spam is less work than creativity.
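A back-of-the-envelope calculation, using only the numbers above, shows why constant growth never gets you there on its own:

```python
# Back-of-the-envelope: how long does constant linear growth take to reach
# ten million users? Numbers are the ones from the comment above.
target_users = 10_000_000
signups_per_day = 5_000

days = target_users / signups_per_day
print(f"{days:.0f} days, or about {days / 365:.1f} years")  # 2000 days, ~5.5 years
```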


Haven't seen this mentioned yet, but ...

http://blogs.msdn.com/b/oldnewthing/archive/2006/11/01/92244...

I often find myself saying, "I bet somebody got a really nice bonus for that feature."

"That feature" is something aggressively user-hostile,...


A lot of the things mentioned there are good reasons for a curated App Store approach. It's almost impossible to stop arbitrary programs from abusing features of the OS on which they run, unless you have control over distribution and can ban poorly behaving programs from ever reaching end users.


On the other hand, isn't Path's app distributed by a curated App Store?


I don't think anyone is saying it's a solution to all problems, just that it might be a solution to some problems.


Alternatively, you can think of it as a good argument for open source. Take abuse of the notification area, for example: in Ubuntu this was eliminated by modifying every package in the archive. That's something a system like Windows, with closed components belonging to dozens of different manufacturers, can't really do.


wow, I (like many others, obviously) have experienced many of the things described in the msdn blog article you linked to, yet never really stopped to think that those things don't have to be there!

Things are going to look different to me now when I'm on a windows machine.


Nice post, but putting shortcuts in the Quick Launch bar seems pretty standard these days and I like it as long as the installer asks. Also, he mentions the fact that a programmer would have to hard code the file path ... I don't see why they couldn't write an algorithm to discover it instead.


I suspect that the post's point is that many don't use an algorithm and cause problems for non-English installations of Windows.


This would be great...

I remember having to put my job on the line a few times for refusing to program / setup something awful.

One of the worst was when I was asked to combine all divisions' email lists and send out a marketing email selling some overpriced book. This was against the Privacy Act (AU), against our privacy policy, and highly unethical to boot. I refused and was given a written warning.


There was once when I did this as well.

It was a contract web development company I was working for, about ten years ago. One of our clients wanted some SEO work done, and my supervisor had recently been reading a lot about SEO. He started out reading white-hat stuff, but by this point he was delving into some black-hat research, and he essentially asked me to program a message-board spam-bot. I just told him straight up that I believed that would be unethical and I refused to do it, knowing full well that simply refusing to do assigned work could cost me my job.

Thankfully, not only did this not cost me my job, it caused my supervisor to re-evaluate his own position and he decided to go back into completely white-hat SEO. And in the end, he actually thanked me for refusing to do that work.

I feel like I lucked out on that one.


> The ones who feel shame would take the Hippogrammer Oath. Those who revel in manipulating others and standing on their broken bodies will rake in all the profits while the good guys just sit around and "play nice."

I think it is important that we realize this kind of mentality won't work. You might see bad people making money, but eventually it does them no good. Either they don't last, or the money is no good to them, or they can't sleep with all that money under their pillow. You will see a lot of examples of this in history.

I have seen people who are ethical and right also make a lot of profit. Maybe not in the short term, but in the long term. The idea is, you don't chase money; instead you do what you do best, and money will follow.


You underestimate humans' ability to conceive of themselves as doing good while actually acting immorally, i.e. hypocrisy.


I am not a jerkbag yet multiple times I have put my beliefs to one side to implement something I really didn't want to do. It made me sad and demoralized for weeks. Yet the choice between sticking to values and feeding family is not a difficult one to make in the end.


This is why we got mad at the top of the Nazi regime, not the bottom. The workers just needed to feed their families, the ones at the top orchestrated the evils.


There were plenty of people at the lower levels of the Nazi regime who were put on trial if they were considered to have committed crimes (e.g. concentration camp guards).


Computing has attempted various guises of "can't we all just get along".

At one time, it was cooperative multitasking and memory management. Programs were supposed to behave themselves and get out of one anothers' way. Except that, due to bugs or malice, some didn't. We called this world "DOS" (or pre OSX Macs).

Microsoft still attempts to allow vendors to install programs wherever the hell they want, and to, pretty please, not overwrite other programs' infrastructure or system-level DLLs. Yeah. Right.

In the Linux world, we've solved this problem, if done right, through distro-managed, well, distributions. Any program can be included if it meets qualifications (generally limited to licensing requirements), and a sponsor steps up. Once included, the package gets the benefits of being included in the package lists, distributed over archive mirrors, and included in bugtracking and support systems. However, it's also got to play along with the requirements of Debian Policy as to how it behaves on a system.

The proper way to address the issues of app privileges is to control privileges centrally on the device and grant them to specific apps. If a user doesn't wish to give an app, say, addressbook access, then they can deny it (or feed it a bogus addressbook). The app vendor can decide what they're going to do at this point, but what they can't do is override the user's explicitly stated limits.
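A minimal sketch of that model, assuming a hypothetical central permission broker on the device (the class and method names here are invented for illustration and don't correspond to any real platform API): the OS, not the app, decides whether real contacts, a denial, or decoy data comes back.

```python
# Hypothetical sketch of OS-level permission mediation; none of these names
# correspond to a real platform API.
BOGUS_ADDRESSBOOK = [{"name": "Decoy Contact", "phone": "+1-555-0100"}]

class PermissionBroker:
    def __init__(self):
        # Per-app grants live centrally with the OS, not inside the apps.
        self.grants = {}  # app_id -> "allow" | "deny" | "decoy"

    def set_grant(self, app_id, decision):
        self.grants[app_id] = decision

    def read_contacts(self, app_id, real_addressbook):
        decision = self.grants.get(app_id, "deny")  # default-deny
        if decision == "allow":
            return real_addressbook
        if decision == "decoy":
            # Feed the app plausible-looking but fake data.
            return BOGUS_ADDRESSBOOK
        raise PermissionError(f"{app_id} may not read contacts")

# Usage: the user denies (or feeds decoys to) a nosy app, and the app has no
# way to override that decision from its side.
broker = PermissionBroker()
broker.set_grant("com.example.socialapp", "decoy")
print(broker.read_contacts("com.example.socialapp", real_addressbook=[]))
```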



>Maybe it's time to create a Hippocratic Oath for developers to publicly commit to?

This is silly. Stop trying to add grandeur to writing some code at X,Y startup/company.

People don't die or get harmed when some social-messaging application spams someone. Code is a way to implement an idea. Most applications exist to make money. If this is a shock to you, read the user agreement before installing/upgrading, uninstall the application, or realize that in social networking, your personal data is what the company uses to make a profit.


I disagree. As a computer engineer in Canada, I must swear by the Code of Ethics because what I do (or potentially don't do) can cause harm.

Ethics in computer-science-related fields are important and I think we do need a set of rules we can dogmatically follow like the Hippocratic Oath. Of course, the HO is different in that failing to follow it can cause physical harm. However, the world is progressing quickly and more and more information is hosted online -- personal information.

I think it's our jobs to make sure we don't promote poor practice and un-ethical behaviour.


>I think it's our jobs to make sure we don't promote poor practice and un-ethical behaviour.

No, it's our responsibility as decent people. I don't need to sign some online pledge to keep myself from pushing people in front of trains. If I was the sort to harm others, why would I care about some meaningless online campaign?


"No, it's our responsibility as decent people"

Well, yes. But there is a reason that every profession that has tackled this problem has used a system of oaths and certification. Engineers, Doctors and Lawyers are the canonical examples.

You need something that is given and can be taken away for bad behaviour in order to change behaviour at this level. Damn human brains.


Taken away by whom and on what basis? I would dispute that you need the ability to take away other individuals' ability to lawfully write software. That ability is bound to be abused for political reasons (which is also what it looks like when people have reasonably different ethical systems and one imposes his by force).

Anyway, the issue at hand is bad corporate behavior, not bad programmers. I don't see why we need to start licking our chops about the prospect of forming a blacklist against individual programmers.


This is just a bonding/licensing arrangement, it's in use by every other profession that has this exact problem.

So go ahead and try to stop bad corporate behaviour; everyone else can use a proven system so that programmers can easily say "no" when asked to do something unethical and not be fired for it.

I'm often confused by how often programmers completely reinvent the wheel when faced with social problems. The idea of looking to other similar industries never comes up, even if the problem is exactly the same.


There's nothing to sign and it's not a campaign. You're right, it is our responsibility as decent people to uphold a certain level of moral and ethical behaviour, especially when the software we write is in control of sensitive information.

The Oath is there to remind you to act in the best interest of the user. There are no formalities and although it seems common sense to people like you and me, others might not see it so clearly.


An engineer holds a license that allows them to practice, and that license is conditional on respecting their code of ethics (and a bunch of rules). If they do not follow those conditions, their license can be revoked.

So what if they lose their license? They can still write code and do harm.

Yes, indeed. As it stands right now, the reality of the engineer's license is such that it doesn't fit the software world very well. The vast majority of companies couldn't care less whether you are licensed or not. However, it depends.

Regulations might eventually come into place to force software producers to hire only licensed engineers if the nature of their business is prone to put the public in danger. And as technology grows ever deeper into our lives, the danger that consumer apps pose to the public is growing as well. For instance, breaching a user's privacy can yield enough info to grant an ill-intentioned operator access to the user's e-mail through social engineering, from which it is then often trivial to gain access to that user's bank information. You don't need much imagination to figure out a scenario where a user's life can be turned to shit by some software abuse.

Given that this risk is ever growing, a code of ethics for the software business is plausible. Say in X months, the government of country Y decides that companies hoping to run a social network available on its territory must hire licensed software engineers, and have them sign off on any code that is presented to the public. The software engineers they'd hire would have to put their license and career in jeopardy if they were to implement some evil feature.

Before the Québec Bridge collapse, engineers didn't need a license to build infrastructure. The parallel between the current situation and the past isn't too hard to make.


> I don't need to sign some online pledge to keep myself from pushing people in front of trains.

Neither do doctors really need the Oath of Hippocrates to stop themselves from harming people.


Which is convenient, because the Oath of Hippocrates has not actually stopped doctors from harming people.


So in Canada, software engineers never cause harm to anyone? Nice to know.

>>>> I think we do need a set of rules we can dogmatically follow like the Hippocratic Oath.

So you don't actually have to employ your own brain and your own moral judgement, because somebody already did it for you and wrote this nice set of rules, that you swore by Apollo to faithfully execute, without thinking, not unlike that box of wires and silicon chips you are paid to play with? Nice arrangement, I suppose.

>>>> I think it's our jobs to make sure we don't promote poor practice and un-ethical behaviour.

And you need to sign an explicit oath to do that?


I think you're missing the point. The oath is to remind you to use your head, your best judgement and a body of ethics, and to act in the interest of the public. Sure, engineers still make mistakes, but the code of ethics isn't some magical document that eliminates human error.

Also, I think you're sort of merging the HO with the Code of Ethics for engineers in Canada -- two very different documents and I suggest you give them a read. And no, no one still thinks they're swearing to Apollo.


>>>> The oath is to remind you to use your head, your best judgement and a body of ethics

Why do you need an oath for that - shouldn't it always be the default behavior?

>>>> and to act in the interest of the public.

"Interest of the public" is a very dangerous thing. I can remember a lot of very bad things that were done "in the interest of the public". You can make almost anything pass as "in the interest of the public" if you want to. Murder? Millions were murdered "in the interest of the public", because they were of the wrong ethnicity, class, physical features or just in the wrong place in the wrong time. Robbery? Millions were stripped of their property and reduced to utter poverty because it was claimed it is "interest of the public" to do so. And so on, and so forth.

I would rather steer clear of anything that has "interest of the public" written on it, at least until it's very clear what is underneath. Too many things that were underneath such writing proved to be a disaster.

>>> no one still thinks they're swearing to Apollo.

Swearing to a document composed by a faceless bureaucracy is no better. If you have a code of ethics, live by it; if you do not, get one. What does Apollo, or his modern equivalent the almighty bureaucracy, have to do with it?


Because that's how humans work?

bookoutlines.pbworks.com/w/page/14422685/Predictably%20Irrational

One more variation: Nina, On, and Ariely conducted a similar experiment, but one group was asked to write down 10 books they had read in high school, and the other group was asked to try to recall and write down the 10 Commandments.

When cheating was not possible, the average score was 3.1. When cheating was possible, the book group reported a score of 4.1 (33% cheating). When cheating was possible, the 10 Commandments group scored 3.1 (0% cheating). And most of the subjects couldn't even recall all of the commandments! Even those who could only remember 1 or 2 commandments were nearly as honest. "This indicated that it was not the Commandments themselves that encouraged honesty, but the mere contemplation of a moral benchmark of some kind."

Perhaps we can have people sign secular statements--similar to a professional oath--to remind us of our commitment to honesty. So Ariely had students sign a statement on the answer sheet: "I understand that this study falls under the MIT honor system." Those who signed didn't cheat. Those who didn't see the statement showed 84% cheating. "The effect of signing a statement about an honor code is particularly amazing because MIT doesn't even have an honor code."


Interesting experiments; the question is whether this persists - i.e. if you read the 10 Commandments at the beginning of the semester and take the test at the end, would the difference still remain?


"So you don't actually have to employ your own brain and your own moral judgement, because somebody already did it for you and wrote this nice set of rules, that you swore by Apollo to faithfully execute, without thinking, not unlike that box of wires and silicon chips you are paid to play with? Nice arrangement, I suppose."

So I assume you never use any open source code in any of your programming projects, since you are fundamentally opposed to adopting any ideas that are not exclusively your own?


You assume wrong; this in no way follows from what I have said, and I never said that I am "fundamentally opposed to adopting any ideas that are not exclusively your own". You must have commented in the wrong branch by accident.


So you wear the ring then, eh?


Soon :) I'm just entering my final year of study.


I'm not sure where you are, but... be sure to wear a lot of layers of clothes with at least one all-black layer on the finals day.

Seriously. I hope that you'll see this in time!


You really come off as an amoral jerk here.

What happens if an app for job seekers texts your boss? What if a casual hookup site texts your new girlfriend--even though you signed up a year before meeting her? What happens if your app calls an old person and they crack a hip trying to answer a phone--when everyone that knows them personally knows not to call until they're awake and their caretaker is in?

These may sound far-fetched, and we all mostly don't pretend we're as disciplined as structural engineers, but "we do it for the money lulz" is a shitty and stupid argument.


>You really come off as an amoral jerk here.

I'm okay with this. I'd rather be calculating than have my head in the sand about the business models of social networking what-have-you applications.

>What if a casual hookup site texts your new girlfriend--even though you signed up a year before meeting her?

While I don't and won't have to experience this, your imagined relationship suffers more from lack of trust and honesty than "some dumb app does some dumb, annoying thing."

>"we do it for the money lulz" is a shitty and stupid argument.

Don't Straw Man me. If my code was going to be used for something I perceive as evil, I'd leave the job.

Our industry doesn't need yet another pointless, embarrassing ethics/integrity campaign when the people writing the code don't care.


> Our industry doesn't need yet another pointless, embarrassing ethics/integrity campaign when the people writing the code don't care.

When was the last one?


> People don't die or get harmed when some social-messaging application spams someone.

Of course they get harmed: their time is wasted, and perhaps their concentration disturbed. This is a small harm to each victim, no doubt about it, but if you write code that makes your social-messaging application spam people then you're delivering that small harm to a large number of people. If your code wastes 10 seconds each, just once, for a million people, that's about three person-months of aggregate time you've stolen that will never come back.
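The arithmetic, for anyone who wants to check it (continuous time, not working hours):

```python
# Aggregate time wasted: 10 seconds each, once, for a million people.
wasted_seconds = 10 * 1_000_000
days = wasted_seconds / (60 * 60 * 24)  # ~116 days
months = days / 30                      # roughly 3-4 person-months of continuous time
print(f"{days:.0f} days, roughly {months:.1f} person-months")
```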

It's quite true that "in social networking, your personal data is what the company uses to make a profit". But if it were near-universal practice for new software engineers to swear a solemn oath not to use their powers for evil, who knows? Perhaps some other business model for social networking might have had a chance to succeed.


It's also true that in search, your personal data is what the company uses to make a profit. And yet, one hears fewer complaints about that.

I'm not justifying either approach, by the way, just observing what I regard as a strange disconnect.


People don't die or get harmed when some social-messaging application spams someone.

I disagree. This case reminds me of Geni. You would put a relative's email address in to invite them, and they'd then receive a torrent of spammy "updates", until they registered to unsubscribe.

My less tech-savvy father added many relatives from his address book to Geni. Lots of hate from deranged relatives, and some less technically-inclined relatives are probably still being spammed, 6 years later - it made family gatherings awkward for a while. There are real-world, harmful consequences to this kind of scummy, unethical tactic.


> People don't die or get harmed when some social-messaging application spams someone

http://www.telegraph.co.uk/news/worldnews/europe/russia/8284...


Of course people get harmed. The harm just has vastly less depth and vastly more breadth.


This response assumes that joining all social networks is user choice. Unfortunately, we live in a world where it hurts consumers NOT to be on some networks. LinkedIn (which also has had a history of horrible spamming in the past) is an example - sure, you can choose not to be on it, but you'll probably get dinged by hiring managers because your credentials somehow seem less legitimate if not corroborated by LinkedIn.


If there were such an oath, programmers would write the code and the business would fill it with content.

//It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.

http://www.codinghorror.com/blog/2007/05/your-favorite-progr...


I'm a member of the Order of the Engineer. Their oath is a pledge of responsibility in engineering, in the interest of the public good: http://en.wikipedia.org/wiki/Order_of_the_Engineer


Engineers have this system already and have had it forever. In Canada (the one I'm familiar with) it is the P.Eng (Professional Engineer) license.

I think the system should be licensing and involve losing that license if you commit an ethics violation.

There would be unlicensed developers of course, but connecting the incentive to not do unethical things with the incentive to be part of the elite class in your profession has worked pretty damn well for Engineers, Doctors and Lawyers.


This is an important sticking point. Maybe a Professional Engineering certification isn't the solution, but let's not get hung up here.

Example: P.E. certified people should have no problem creating weapons systems for a nation-state at war. Does that make it ethical? Depends on who writes the history books afterward.

Example: P.E. certified people might refuse to participate in experimental, unorthodox methods. But especially in software these often become the runaway successes.

In other words _you_ have to own _your_ personal ethics. You won't be able to point and say "I was just following orders!" The pointy-haired boss who gave the orders isn't going to be able to exonerate you of the guilt. Often he doesn't even congratulate you for "doing the right thing." Maybe he'll fire you or give you a bonus – or join you in prison! – but my point is: it's orthogonal to your personal ethics.

Ethics may sometimes appear to conflict with rapid progress. That doesn't necessarily imply an existential crisis, just a lack of forethought. So many ethical problems arise due to overflowing ignorance / lack of forethought combined with a sudden rash of malice (when it comes time to pay the piper). Ethics are a way of expressing realities about the world that conflict with the general Adam Smithian "enlightened self-interest." I view ethics as meta-enlightened self interest – like how Apple is more than just industry-leading, they carved new niches where no one thought to go.

Engineers (software engineers or otherwise) have untangled things much more complicated than this. It's only overwhelming if it blows up in your face.

Path seems like a classic case of all of the above.


Your _personal_ ethics can easily get you _fired_.


Of course. The _whole point_ of _any_ ethical or moral principle is that it directs you to do things that are right even at some possible cost to yourself.

If you believe that standing up for your strongly-held beliefs will get you fired, you should look for a new job _now_. Sure, that incurs the trouble and uncertainty of a job switch, and possibly a pay cut (though perhaps less of that than you think). But if it means that you don't have to be ashamed of what you do all day --- it's generally worth it.


> has worked pretty damn well for Engineers, Doctors and Lawyers.

... and has pretty much screwed over the rest of society, at least in the latter two cases. The legal and medical cartels have done incredible harm to their customers over the years.

See http://mises.org/freemarket_detail.aspx?control=51 (law) and http://mises.org/daily/4276 (medicine) for details.


I have to say that your link about medicine is nothing short of laughable.

Even if there was some bad "allopathy" back in the day, there is more bad eclectic medicine and homeopathy right now. And to practice medicine you have to understand the scientific method, especially falsifiability.

And I also have to say that the "free market" idea isn't falsifiable. "Leave it to the free market" rarely works.


Parts of the "free market" idea are falsifiable.

They assume rational actors (people making decisions based on their own self interest). That's been falsified (when applied to humans).

Most variations of the efficient market hypothesis have been disproved as well, for the same reasons:

Humans have cognitive biases and other types of irrational behaviour.

But anyone linking to mises.org is probably a follower of the church of the free market. And they generally strongly disagree with the idea that humans have cognitive biases (because their faith requires it not to be true).

I'm glad someone else laughed at the pro-homeopathy / conspiracy theory around the history of snake oil salesmen content on there.


> They assume rational actors (people making decisions based on their own self interest).

That's untrue of some schools of economics that advocate free markets, e.g. Austrian.

> But anyone linking to mises.org is probably a follower of the church of the free market. And they generally strongly disagree with the idea that humans have cognitive biases (because their faith requires it not to be true).

That's an ... interesting ... claim. Care to justify it?


> That's untrue of some schools of economics that advocate free markets, e.g. Austrian.

I took the term "free market 'idea'" to be specifically talking about those for which it's true. That seemed to be the point, and the site linked to was Austrian. Both articles make the assumptions in question about the ability to self-regulate that assumes rational actors. So yes, my statement was not true of all schools, but it seemed like those types of Austrians were not in the scope of the discussion.

> That's an ... interesting ... claim. Care to justify it?

Subjective opinion. I read economics news and neuroscience news because it's interesting. Comment threads, especially here, frequently have two types of subjects that start the vocal libertarians arguing and proclaiming: government regulation and the phrase "humans are irrational".


This may vary by province, but our provincial engineering board will not stand up for you if you get fired due to upholding your code of ethics (they even told us so in ethics class).

Furthermore, whistleblowers are often unemployed for extended periods of time, due to corporations not wanting to hire them as they could be a liability.


Hippocratic Oath for developers? Here's a nice post from 2005 on this exact topic: http://glyf.livejournal.com/46589.html . Choice quote:

> Who would knowingly submit themselves to a doctor, knowing that they might give you a secondary, curable disease, just to ensure they got paid?


We don't need an oath. We need to stop using Path. We need to get everyone we know to stop using Path. No need to over complicate things.


Getting everyone to stop using Path is pretty over-complicated.

We need them to change their ways, not disappear.


Twitter essentially created a Hippocratic Oath for patent usage (that is, for its employees who are developers concerned about unethical offensive use of software patents): https://blog.twitter.com/2012/introducing-innovators-patent-...

The patent language says it's "a commitment from Twitter to our employees that patents can only be used for defensive purposes." Extending this more broadly would say "a commitment to our employees that the code they write can only be used for non-spamming purposes."

The problem, of course, is that while it's pretty easy to tell whether a patent is being used defensively or offensively, defining spam (or, more difficult yet, privacy) is a bit more slippery.


Used that in some contract work last year--it's a good thing.


What difference would that make? Path could still fire him, oath or no oath, and I can't see their decision being much influenced there.


They certainly could. The talent pool would certainly hear about it as well.


Publicity. He writes a blog "Path fired me for not breaking the oath" and then communities like this one will rally around that whistle-blower.


This assumes that Path (or any company) would be foolish enough to make their intentions clear.

If a developer refused to implement a feature due to their ethics then the company would do the following:

* Move engineer to different project

* Set unrealistic goals/deadlines/expectations

* After engineer fails, voice concern about performance

* Set up performance review and improvement plan

* After causing engineer to fail a second time due to unrealistic expectations, fire them due to poor performance

Even if that engineer writes a blog post, enough has happened between his initial refusal and termination as to make conclusive proof impossible. The discussion will be a he-said-she-said affair as his former employer makes a counter-blog post explaining the engineer's poor performance.

A lawsuit is similarly out of the question as most companies have sufficient funds to cause delays in court, thereby causing you to spend all your money on attorney fees and bleeding you dry.


Yup. SOP in the food service industry is to give employees who are underperforming 4 hours per week, on the slowest shift, and just leave them there until they quit.


Honestly, if you refuse to do something on ethical grounds and then are moved around, you know what's going on. Most people aren't clueless to office politics and at that point it's your decision to blow the whistle or shut up and watch it happen in spite of your oath.

It's not like hospitals haven't contended with this exact thing for a very long time. The wills of surgeons/doctors and their hospital administrators do not always match up.


How would this be any different from the situation today, where he writes a blog post titled, "Path fired me for not spamming millions of people's phones" and the community rallies around the whistle-blower?


Yeah, that negative publicity around Foxconn's workplace conditions really tanked Apple.


Maybe not, but Apple changed their practices pretty darn quick. They now review suppliers much more closely than they did before, and Foxconn's practices have changed a bit as well. You can argue it didn't have ENOUGH effect, but you can't claim it made no difference.


The goal was not to tank Apple, but to improve the working conditions on the factory floor and transparency on Apple's part. Apple's supply chain is much more transparent after the noise. On another note, our outrage/disagreement cannot be outcome based. We hope the bad publicity will change things; many times it does not.


I get your point, but these guys don't have anywhere near the same clout that Apple has.

Path is young (and building a service, not a device/OS) and it can be abandoned for something similar, since all they're keeping is data. Once you buy a gadget, that's an investment on your part, which will make a lot of people hesitant to give it up, along with the culture that surrounds it.

And I wouldn't bet that Apple will be able to weather scandal after scandal and come out unscathed. Cook is no Jobs.


I think that's a great idea. Drafting a version of it right now.


I wrote up something quite quickly: http://maxmackie.com/2013/04/30/The-Turing-Oath:-The-Promise...

"The Turing Oath" is on Github (https://github.com/maxmackie/Turing-Oath/blob/master/README....) and I recommend people contribute and we grow this to become something people recognize.


While I viscerally agree with this, I'm not sure that Turing, who is not well known for having had anything to do with privacy, is the right person for this oath.


Arguably, the root cause of Turing’s persecution was that his privacy got invaded, and subsequently the government did not think he had a right to his private conduct.

Though admittedly, no technology was involved in the whole matter.

Also, Turing’s wartime exploits involved a breach of privacy in the service of a good cause.


Arguably, the root cause of Turing’s persecution was that his privacy got invaded, and subsequently the government did not think he had a right to his private conduct.

Actually, come to think of it, if viewed in that way, he's the perfect name for an ethics oath regarding privacy. I hadn't considered it that way.


It would be interesting to combine this with an open source license which links to the Oath, and forbids use of the code in any project or system which breaks the Oath.

For developers who have undertaken the Oath, the challenge would then be to write the best code, so that it sees widespread adoption. This might potentially make it harder for companies like Path to engage in activities which break the Oath.


I'd object to the name first and foremost if I knew Turing wasn't involved or directly responsible for its creation. It's still lying.


Fair enough, have another candidate in mind? Feel free to email me -- I want to see this be known.


Two words: Government Regulation.

Once Congress puts a bill forward to deal with this issue, it will be the beginning of the end for this type of behavior. I'm sure Obama will get behind it as well, as it will help the computer market immensely.


"Don't be evil" has never really worked, despite best intentions, depending on your vantage point.

You don't need an official Oath or for the company to know or base their employment on such an Oath. Either way, the bottleneck is the employee drawing attention to the company doing something unethical; no Oath has to get in the way of that. There's already a sub-thread on top-secret/weapons/armaments jobs. What about political ethics? And religious? Marriage / gay rights? Porn? "Sexism?"


Has anyone considered that perhaps Path did NOT violate any ethical boundaries?

Perhaps the guy checked a box that said "Please notify all of my contacts via text message".

Then all the messages went into queue that was delayed a bit.

Then the phone companies converted text messages to voice calls.


Does this mean that we can't write software for drones anymore?

What about software for missile guidance? Is that okay by this oath?

Or do we as a community value not texting people at 6AM more than we value not killing people?


Strawman. And anyway plenty of software developers (and other engineers) won't work on armaments.


So should a software developer's code include armaments, or not?

If it does, it would never get mainstream acceptance; if it doesn't, but does cover the topic of this post, it will be ethically absurd.

I don't see how this is a straw man; when building a professional code, one has to choose what actions to allow or disallow, and this seems like a topic that would obviously come up. What do you think about this scenario misrepresents the idea of a developer's professional code?


This is an internal debate I've had with myself since reading Bernard Williams's Objections to Utilitarianism [0].

I've never had to actually face the ethical dilemma of developing weapons, but what if the development improved precision on a missile? If we can ignore the question as to whether a missile is ethical or not, developing a better guidance system for a missile will help limit collateral damage, but could increase the "comfort-level" of using the weapon for those who decide such things, therefore increasing overall death/destruction. Utilitarianism is hard, because taking all factors into account is impossible. Kind of like machine learning.

I will never scoff at someone who turns down work for ethical objections, but some people are more pragmatic than others.

Both of Williams's examples are really hard to wrap your head around if you accept the situations as presented. They are similar to a Sophie's Choice [1].

[0]: http://plato.stanford.edu/entries/williams-bernard/#Day [1]: http://en.wikipedia.org/wiki/Sophie%27s_Choice_(novel)


This is a divergence, but it seems to ignore the value that comes from propagating the meme that building armament systems is unethical by refusing to participate in it.


While the idea of an oath may be flawed, I do think it's about time some segments of the tech field showed a little less contempt towards users. I don't know how that would happen, but launching your own startup shouldn't give you carte blanche to exploit your users however you see fit.

As a programmer, I don't want a bunch of charlatans in SF to give my career an unsavoury reputation because of these antics.


That's a decision for each individual developer to make.

Some folk tried to create a pacifist version of the GPL[1].

Others are using the RMPL (RobotGroup-Multiplo-Pacifist-License)[2] - basically a MIT license, but with a restriction that bans military projects.

[1] http://arstechnica.com/uncategorized/2006/08/7511/

[2] http://multiplo.com.ar/soft/Mbq/Minibloq.Lic.v1.0.en.pdf


The irony being that the government isn't strictly bound by copyright or licensing terms. They can and have violated them as needed.


> That's a decision for each individual developer to make.

I generally agree with this, which is why I find the idea of a "developer's code" somewhat ridiculous.


"Shippocratic Oath"

FTFY


There was a court case against a high school alumni site that sent out "a classmate is searching for you" emails to get people to sign up:

http://arstechnica.com/tech-policy/2012/11/how-lawsuit-again...

> The case originated with two lawsuits claiming that Classmates.com had sent out millions of deceptive e-mails telling users that an old friend was trying to contact them, and had viewed their profile or signed their "guestbook." For the great majority, that wasn't true; no one at all had shown an interest in their profile. About 60 million users were contacted, and about 3 million actually took the bait, paying between $10 and $40 to Classmates.


You don't know how happy it makes me to see former annoyance-kings classmates.com referred to as a "high school alumni site." Ah, the pre-FB days.


Another hook might be a violation of the Telephone Consumer Protection Act (TCPA) which has been interpreted to prohibit automatically sending text messages without the recipient's consent. The statutory damages are $500 per incident.
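At $500 per incident, the exposure scales with the size of the uploaded address book. A rough sketch (the contact count here is invented purely for illustration):

```python
# Hypothetical TCPA exposure: $500 statutory damages per unsolicited text.
# The contact count is a made-up example.
damages_per_text = 500
contacts_texted = 600

exposure = damages_per_text * contacts_texted
print(f"${exposure:,} of exposure for a single user's address book")  # $300,000
```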


Classmates.com was sued in 2008 for sending emails claiming former classmates were searching for you. Similar, but Classmates.com was actually requiring a paid subscription before telling you there wasn't actually anyone on the other end.

http://arstechnica.com/tech-policy/2010/03/classmatescom-set...


Simple math is all it takes to understand why this is pervasive. In the classmates.com example, they collected upwards of $75 million in revenue from this scheme, and only had to dish out $2.75 million for the settlement several years later. So for any newcomers, this type of bait is an attractive business proposition; the potential lawsuit years down the line can simply be attributed to cost of revenue. And you'll obviously only get sued if there's money to be won, which means the scheme was a success, so in other words, you want to get sued. The real problem is the judgments/settlements are an order of magnitude lower than they should be.
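The "simple math" spelled out, using the figures cited above for the classmates.com case:

```python
# Figures cited in the comment above for the classmates.com case.
revenue_from_scheme = 75_000_000
settlement = 2_750_000

net_gain = revenue_from_scheme - settlement
print(f"Net gain after settlement: ${net_gain:,}")  # $72,250,000
print(f"Settlement as a share of revenue: {settlement / revenue_from_scheme:.1%}")  # ~3.7%
```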


There's the hidden cost of permanently alienating potential customers, though, which will show up silently in reduced conversion on all subsequent advertising efforts.


Sure. But it gets some douche product manager his bonus for revenue/user acquisition which was the only goal in the scheme.


Maybe that's why we've been seeing Path more in App Store top charts recently?

Another app doing similar spamming is Circle: http://discovercircle.com - surprised no one talked about that...


Growth hacking through grey-hat tricks? This is an outrage!

(your winnings, sir.)


Apps can't send text messages without the user knowing on iOS; you have to tap Send for each one of them, so this couldn't have affected App Store rankings.


You presume the app sent it and not their servers or a partner service provider. They already grab your address book, including phone numbers. They don't need you after that.


In that case don't you think we would have seen many more reports of this shady practice? This seems to me a combination of purposefully bad UI and the user not paying attention.

Still the same practice spammers use, but very different from "covertly uploading my contacts data and texting everyone without telling me".


Actually the article notes that quite a few other people do seem to have experienced this in the past...


See, I am not convinced that it was an innocent "automated mistake". This is equivalent to the anecdotal old excuse one always hears from government officials: "the computer did it". When some spamming process like this is automated, all it means is that it is being inflicted on lots of people, and most likely deliberately, so automation is not something that can be accepted as an excuse; on the contrary, it is an aggravating circumstance.

I incline towards the opinion that these sharks do it quite deliberately. They just don't care how many people they embarrass and annoy, as long as some of those tech-innocent grannies and plumbers join up and thus put figures on their business projection sheets that get these sharks closer to cashing in big on an FB style IPO.


Facebook does that exact thing by the way - my boss came back from a week of vacation and wondered how I found him on Facebook. Facebook had sent each one of us in the office a link that he didn't know about. Anytime you send email as a user, it needs to be very explicit.


Probably that's why they're hiring a director of legal and privacy - https://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qd29Vfw3...


In Australia it is possible to get companies that send unwanted texts banned from sending texts on Australian networks.


"The Privacy Act gives you the right to make a complaint if you believe an Australian or ACT government agency, or a private sector organisation covered by the Act, has mishandled your personal information contained in a record."

http://www.privacy.gov.au/complaints/what


There's no way to sue, but I suppose the FTC could take action and fine.

They will claim it's a "bug"... albeit one of those "viral bugs" that seems to lead to topping the App Store download charts. These tactics make me think there's no such thing as organic growth within the app stores.


>There's no way to sue //

You can sue for anything; just not always successfully. That said ...

This appears to be trespass which I think is a tort (entering a part of property, the phone, that was off limits and without consent). You can sue for that.

It's also in the same respect contrary to the UK Computer Misuse Act (crime) AFAICT [story appears to be in the UK, or is it just UK entries in the addressbook]. That would probably hinge on the consent to access the particular files duplicated.

Then there's database rights (tort?), a sort of copyright for databases. [Copyright wouldn't apply as it's not a creative work].

Harassment (tort I think) and infringement of the right to a private life as enshrined in the ECHR (crime) seem to be causes to object as well.

As the calls were business motivated then failure to check against a telephone cold-call blacklist could also generate extra fines.

Seems there's much that could be sued for.


Reminds me of the emails Facebook sends me telling me that I've missed important notifications when I don't log in for a while. I log in, and there are no notifications waiting.

And the messages Facebook sends me telling me about an event and letting me know that one of my friends is a guest, when they've been invited but haven't actually confirmed.


"Just curious, is there a way to sue/fine a company like this for false advertising, essentially?"

Yes. It's illegal to use someone else's likeness for advertising without their permission.


Fine you say? Here's an earlier HN submission pertaining to just that: https://news.ycombinator.com/item?id=5630449

UPDATE: That appears to be related to collecting info on minors. Looks like they need to be fined again for this.


Well, I don't think we should look for malice when it's just incompetence.

LinkedIn keeps asking me to share my mail user/password so I can connect with more people, and says that X and many others already did it. I can't tell about others, but X is my wife and I'm certain she didn't do it.

So well, as I said I think that this is just a matter of incompetence (ie. automated message gone wrong).

EDIT: typo


How about another detail — the fact that – according to another comment here, the only one that seems to actually have looked at how the app behaves before jumping the gun – the app tells you it's going to invite everyone on first launch and you need to tell it not to?

Still bad, but quite a different perspective.


This is a proven (and risky) way to grow. I really cannot blame them: they have to pay their rent and the VCs are nervous.

This approach might burn them completely, but it can also get them to some significant number of users (after which they will issue an apology and pay all fines if needed).


> I really cannot blame them

Shitty behavior does not stop being shitty behavior because you have bills to pay. And it's not even in the "understandable under duress" area of shitty; Path isn't a person with a starving child and that dude's contact list isn't a loaf of bread.

If your company can't exist without being shitty, your company shouldn't exist.


I completely agree with you on "If your company can't exist without being shitty, your company shouldn't exist." However, here in VC-stan, behavior like this is considered "hacking the growth" and not seen as a bad thing. So unfortunately, you cannot play the game here (ok, you can play, but no big money will bet on you) if you're not ready to do things like this.


I'm OK with that game not being played. That spam-calling random people at 6AM is not an immediate "get the fuck out of my office" is probably a pretty good indication that something is deeply rotten in, as you put it, "VC-stan."

But you can get on TechCrunch, so it's gotta be okay, right? :(


And we also need to understand that all these social networks are in the business of spamming people ("growing the network"). Hey, I remember back in 2006 (maybe 2005?) Facebook invited all my Gmail contacts to Facebook (including people who interviewed me ...). That is the key to their existence, so I'm not sure how anybody in their right mind can expect anything else. Maybe I'm too old and see these things as they are...


If this sort of behavior is really being pushed by investors, it kind of makes one wonder about the corporate veil shielding investors from liability.


Shitty behavior also does not stop being shitty behavior when you're pressured into doing it.


I'm all for the moral high ground, but it becomes a lot less simple when "pressured" means something like a combination of "sole breadwinner" and "ethical choice." Reality doesn't bend to our idealism, and businesses are grounded solely in reality (often to our detriment).


>when "pressured" means something like a combination of "sole breadwinner" and "ethical choice."

I can't figure out what you mean by this. Are you saying that this behavior is only alright when you need the money?


Definitely not. I'm saying it's never "right" in a moral/ethical sense, but when you're put in certain (fairly common) positions, the moral high ground is not a feasible option for most people. It comes down to whether you consider stability for those who depend on you more or less important than doing or refusing to do something based on whether you consider it right or wrong... that's something that's easy to judge until you're in such a position.


This isn't the hypothetical where you're stealing a loaf of bread from the market to feed your starving family. I certainly respect that taking the moral high ground isn't always easy and that some people are going to be in situations where it is more difficult for them to do so than for others.

But the standard for acting like an asshole has to be greater than simple expediency. The necessity of breaking the social contract has to be roughly proportional to the community inconvenience; that's why firemen get to use the siren and everyone is supposed to yield when they are headed to, well, fight a fire, but I don't get to use one when I'm headed to the grocery store.

If I bang on a stranger's door at 6am because their house is on fire and I'm trying to warn them, then that's great, because the "don't harass strangers at six in the freaking morning" social norm is less important than the "OMG THE FLAMES THEY BURN!" social norm. In contrast, if I bang on someone's door at 6am trying to sell Amway products, then I'm an asshole. Finally, If I bang on someone's door at 6am, insist that their buddy, whose name I found by going through the trash, has photos to share with them, and only later reveal that there never were any photos, then I'm an unbelievable jackass.


Unfortunately, your analogy is excessively dramatic and misses the reality of the situation. It's easy to create surreal situations in which absolute visions of your own morals apply. Sorry, but you just don't get it.


Apologies for trying to make a point. If anything, my analogy seems pretty much on point, as it barely changes the reality around the events here as described:

banging on door <=> text/phone call, which likely causes an audible alert
stranger <=> contact/acquaintance/vendor/client/boss/relative/lover/ex-lover/dentist of a new user
6am <=> 6am
their buddy <=> their contact
the trash <=> a new user's cell phone contacts
has photos to share <=> has photos to share
there never were any photos <=> there never were any photos
unbelievable jackass <=> unbelievable jackass

Seems close enough.

But my real question for you is which is it? Did a developer/Path act unethically, but you believe those actions are justified because people have families they need to care for? Or do you believe that Path/the developers did nothing wrong and I'm just applying my own morals to the situation? One or the other is a legitimate position to take (though I may disagree with your view), but you can't have both.


Because your hand-waving and appeals to authority are so much more relevant?


This sounds patronizing. As adults, you make your choice and face the consequences. That you may have dependents does not reduce your moral responsibility one bit.


Clearly you don't know how it feels to actually have "dependents" or what that responsibility entails. This will sound patronizing, but it's absolutely valid: Come back and comment on this in 10 to 20 years. At that point you might have the perspective you clearly lack now.

[edit]To be clear: I do not have dependents, but I also don't have an employer (who isn't me), and I do have people that depend on me. What I also have is a lot of experience in a lot of situations that are very much "gray" in terms of what less-experienced people seem to consider moral/ethical absolutes, which simply do not exist. That last part is what you don't understand but are likely to figure out as life teaches you the things your parents and teachers would like to but simply can't.[/edit]


It sounds patronizing because it is. It sounds silly because it's that, too.

Doing the right thing is sometimes difficult. That's not an excuse to do the wrong thing. Indiscriminately spamming hundreds of contacts is always the wrong thing.


It doesn't, but it might mean we need to look into how the pressure is being applied.

We can bitch at Path all day, but there exists strong incentive to do this, and people who don't do this will have unilaterally disarmed and be at a disadvantage. We need to encourage the first group to not do this, and encourage the second group to keep it up and not feel like suckers for respecting their users.


Would it also be okay to murder people in order to pay your rent and your backers are nervous? What about just hitting them over the head? What about just threatening to hit them?

Ethical justification isn't easy, but it's also not this hard. The entire point of being ethical is that you might lose out as a result of being ethical. If you opt to discard ethics in order to get ahead, you are being unethical. That is what it means to be unethical.

This. Is. Unethical.


Then the fines are obviously not high enough.


I think a better solution is to make fines exponential (for repeated offenses), e.g. 1st time $10, 2nd time $100, 3rd time $1000, and so on.
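A sketch of that schedule (the base-10 progression is the one suggested above; the per-offense gain is a made-up number) shows how quickly repeat offenses flip from profitable to ruinous:

```python
# Exponential fine schedule: $10, $100, $1000, ... for repeated offenses.
# The per-offense gain is an invented figure for illustration.
def fine(offense_number):
    return 10 ** offense_number  # 1st offense -> $10, 2nd -> $100, ...

gain_per_offense = 50_000
for n in range(1, 8):
    net = gain_per_offense - fine(n)
    print(f"offense {n}: fine ${fine(n):>12,}  net ${net:>12,}")
# By the 5th or 6th offense the fine dwarfs any plausible gain.
```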


Fines are meant to discourage behaviour found unacceptable by society[0]. That obviously fails to work if the net outcome of a ‘discouraged’ action is still positive even after taking into account the fine (and compensation for damage, if any). So, really, the (first) fine was not high enough. For a very similar reason, non-trivial fines[1] are usually given as a percentage of turnover or daily fines.

[0] Note that jail time also serves to rehabilitate the offender in case of serious crimes. That obviously doesn’t really work with companies, and unfortunately, carelessness with other people’s data is not considered a serious crime in many places.

[1] Basically anything above a parking ticket.


Yeah, that's why I think exponential growth in fines for repeated offenses by a company is a good idea; it will eventually be net negative for the company, so they will stop doing it.


But why do you want to grow fines instead of imposing appropriate (i.e. high-enough) fines on the first offence?


Because you can't know the net positive in advance; it's way too dependent on the context, and that includes the size of the company.

It also discourages searching for flaws in the system, because even if you get away with it once (net-positive), you know the next time the fine will make it unviable.

BTW this is similar to how the civil justice system already works in many countries, where after repeated minor offenses you go to jail.


I'm sure this will be downvoted to hell and back, but still: http://jesuschristsiliconvalley.tumblr.com/post/46539276780/...


And here I was busy thinking "Wow. If Path is doing crap like this then that post about the founder being a douchebag must be 100% correct." Congrats on being even more obnoxious than Facebook.


This was so great it almost made me laugh out loud.


the man is a genius. his other posts are also pretty awesome. however, part of me suspects that in an ultimate act of vanity, dave morin himself set up his own hateblog.


Turning off the ringer on your phone is totally legit, though. I don't believe in interrupt-driven communication, unless it's mediated through a machine or some kind of filter. (I'll let a machine notify me if lots of stuff is down, or if one of a very small number of people call me, but that's about it. IIRC, pg's call went to voicemail a couple years ago.)


Turning off the ringer on your phone is totally legit, though.

It is, but his stated reason for doing so is nothing if not totally insane.


Especially in context, the phrasing is ridiculous, but semantically it is no different from "I refused to be bossed around by my phone".


Refusing to be bossed around by an inanimate object still seems somewhat ridiculous to me.


A phone, especially a ringing one, isn't particularly inanimate.


Being bossed around by an inanimate object seems even more ridiculous to me :)


Turning off the ringer on your phone is totally legit, though.

For the CEO of a business that sends phone spam to other people at 6AM, it’s also rather telling behavior, though.


There are several things there that could be perfectly legit but the way they're stated just comes across as completely absurd. I know several people who have two phones (for similar reasons), I know people who prefer texting to calls (and reject calls regularly), I know people who create bespoke apps to play around and solve their own problems. None of the people I know would say any of the kinds of things you see in that article (Except maybe about Uber - that one seems normal).


Awesome. I've been searching high and low for something to fill the uncov-shaped hole in my heart.


FYI, I stopped reading your comment six words in and immediately downvoted it.


This isn't Reddit.


And I'd like to keep it that way.


Looking at other posts, those writings lack empathy and imagination. The post about Google Glass [0] is probably one of the most asinine things I've read on the Internet this month.

[0] - http://jesuschristsiliconvalley.tumblr.com/post/48596551224/...


it's obviously written in that voice for effect, but there is actual thought behind it. comparing google glasses to segway is valid, and probably not far off. only time will tell.


Path must really like those FTC fines.

Here's to hoping the next fine will exceed their cash reserves and we can put an end to this madness.

The post is proof positive that path still uploads phonebooks from the app to their servers right after installing it.


> The post is proof positive that path still uploads phonebooks from the app to their servers right after installing it.

Is it? The texts were coming from his phone number, which suggests they were sent from his phone (not necessarily, I know, but you said "proof positive").

I don't know how Android text message sending works, but there is likely some rate limiting on how many texts you can send, so they could certainly have been queued up to be sent later.
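
For what it's worth, sending texts from the device itself on Android goes through SmsManager and needs the SEND_SMS permission, which would be consistent with the messages showing the user's own number. A minimal sketch of device-side sending with a crude queue (illustrative only; the class name is made up, not anything from Path's code):

  import android.telephony.SmsManager;

  import java.util.ArrayDeque;
  import java.util.Queue;

  // Requires <uses-permission android:name="android.permission.SEND_SMS"/> in the manifest.
  // Device-side sends show the user's own number as the sender.
  public class InviteSender {
      private final Queue<String> pending = new ArrayDeque<String>();

      public void queueInvite(String phoneNumber) {
          pending.add(phoneNumber);
      }

      // The platform throttles how many texts an app can fire off in a given window,
      // so a large invite blast has to be drained gradually -- one plausible reason
      // messages could arrive long after the button was tapped.
      public void drain(int maxThisRun, String message) {
          SmsManager sms = SmsManager.getDefault();
          for (int i = 0; i < maxThisRun && !pending.isEmpty(); i++) {
              sms.sendTextMessage(pending.poll(), null, message, null, null);
          }
      }
  }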


The sent address on an SMS is as meaningless as it is on an email. SMS gateways allow the sender to use any number or caller ID. I've used it previously as a party trick, it's completely transparent to the user.


I know that, and I said so in the comment, I have used it myself in some projects. I just meant that this is not necessarily "proof positive": there are other ways they could have sent those texts without uploading the user's address book to their server.


I've received no less than two text messages from Path today - both of which were not sent from my friends phones, but rather one of those 5 digit numbers that automated texts seem to use. This is... shady to say the least.


This comment is a great example of how people are so quick to rush to judgement with emotional reactions. Let's look at the facts:

1. Path was fined, not for anything involving address books, but for allowing 12 year olds to sign up for the service.

2. Yes... it is proof that Path uploads your phone book. Of course, they ask you. The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

It would be rather trivial for a real reporter to do some research here. Does Path actually say "We're going to invite all your friends via SMS", even in fine print? It might be sleazy, but it would certainly change a lot. But instead, we're just going to sit here and speculate about things and irrationally talk about a fine that didn't have anything to do with this.


Me1000, I love you, but...

> The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

The introduction of address book privacy in iOS was in large part prompted by the publication of Path's behavior. Up until the Path update (and, eventually, the iOS update) that followed the controversy, Path didn't explicitly ask the user for access to their address book.

http://www.engadget.com/2012/02/15/iphone-address-book-issue...

> Path was fined, not for anything involving address books, but for allowing 12 year olds to sign up for the service.

Path was fined for the 12 year old signup thing specifically, but they were still charged with privacy violations regarding the address book kerfuffle.

http://www.ftc.gov/opa/2013/02/path.shtm


OP is using an Android phone.


Android's permission system also doesn't require apps to ask for permission before they access your phonebook - it requires you to give permission to install the app, and it tells you the app can view your phonebook, but you have to either trust the app not to abuse that ability or not install it at all. There's no way of telling the difference between an app that can use your phonebook to provide useful optional functionality and one that'll upload the entire thing to the mothership the moment you start it.


I really think Android should add another layer of protection here, similar to the "This app wants to use your location" prompt in iOS. I'd like to be able to install an app that might need to access my phonebook in some use case but be able to deny it when it attempts to access that information when I don't want it to.

For example, I'd want to be able to use the facebook app and many users might even want to have it scan their address books in order to find friends. However, if the app attempts to read my address book when I'm just checking someone's status update that is clearly not okay and I want to be able to block it.

The free pass to pillage my phone upon installation doesn't sit well with me.
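
For what it's worth, this is roughly the model Android itself later shipped with runtime permissions in 6.0. A minimal sketch of what such a per-use contacts prompt looks like with the support-library helpers (the request code and class name are placeholders):

  import android.Manifest;
  import android.app.Activity;
  import android.content.pm.PackageManager;
  import android.support.v4.app.ActivityCompat;
  import android.support.v4.content.ContextCompat;

  public class FindFriends {
      private static final int REQ_READ_CONTACTS = 1; // arbitrary request code

      // Only touch the address book if the user has already granted access;
      // otherwise ask at the moment the feature is actually used.
      public static void run(Activity activity) {
          if (ContextCompat.checkSelfPermission(activity, Manifest.permission.READ_CONTACTS)
                  == PackageManager.PERMISSION_GRANTED) {
              // Safe to query contacts here.
          } else {
              ActivityCompat.requestPermissions(
                      activity,
                      new String[]{Manifest.permission.READ_CONTACTS},
                      REQ_READ_CONTACTS);
              // The answer comes back in onRequestPermissionsResult; the feature
              // should degrade gracefully if the user says no.
          }
      }
  }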


> I really think Android should add another layer of protection here, similar to the "This app wants to use your location" prompt in iOS

Most definitely. This has always been my argument against the whole system: installing apps that need excessive permissions is basically blackmail. Just like "Do you agree to the terms of service?", you hardly have a choice. I was very surprised to see people not even glance at the permissions before clicking Accept.

But as I said, it's blackmail anyway whether you look or not. You don't want them to have all your contacts, your exact location, all data on your sdcard, and full network access? Fine then, you won't get [whatsapp] (or pretty much any other app), which is what everyone else has and which you're almost socially obliged to have (at least in my age category).

It even goes so far that the Android user has no permission to use the permission manager to deny or allow permissions for apps. There are commands ("pm grant x" and "pm revoke y") that let you change apps' permissions... but you can't use them by default, even as root ("java.lang.SecurityException: Neither user [your uid] nor current process has android.permission.GRANT_REVOKE_PERMISSIONS"). It's totally messed up.


On a rooted Android phone, it is possible to install apps but not give them the permission. Of course, that usually means they will crash when they try to do something, but it gives you a layer of protection if you want to try the app out or something.


I'm well aware of what prompted the change to iOS. We talked about that last year. I'm not defending anything, but that's not really at issue today.


Specifically in this case: Android and Path alike might inform you about address book access, but nowhere is the user going to be prompted about Path messaging your contacts after you uninstall the app.

Same odious behavior, just a bit different this time.


It certainly is on Android. One of the reasons why I only install "social" apps on iOS...


1. Path was fined, not for anything involving address books, but for allowing 12 year olds to sign up for the service.

That's not true, is it? According to the FTC[1], they were fined for "collecting personal information from their mobile device address books without their knowledge and consent."

2. Yes... it is proof that Path uploads your phone book. Of course, they ask you. The OS won't even give you access to the phone book without prompting the user. So somewhere along the way, the user knowingly gave Path access to their contacts.

Giving them permission to read the address book (which might be useful and perfectly legitimate) and giving them permission to send everyone in that address book spam are two very different things.

[1] http://www.ftc.gov/opa/2013/02/path.shtm


1. The subtitle: "Company also Will Pay $800,000 for Allegedly Collecting Kids' Personal Information without their Parents’ Consent". The fine was only with regard to the COPPA violation.


They pled down, effectively. They were charged with more widely-ranging violations.


You are equating "grant access to the address book" with "spam all 'friends' at 6 o'clock in the morning to tell them lies"

By your logic, it would be completely useless to even read the fine print, because giving them access to the addressbook would imply my consent for them to do anything technically possible with it.


> ...because giving them access to the addressbook would imply my consent for them to do anything technically possible with it.

Well, from a technical perspective that is indeed the case. Once they physically have your contact info they may do as they please.

You, as the iOS or Android user, are not giving them permission to use your contacts "properly" or "nicely"--you're giving permission to access them, the raw data of all of them, and once that's done all bets are off. If the app is untrustworthy it is free to go crazy (one of the reasons I always say "no" to that question).

I don't see how Apple or Google can stop this in a technical way without making the permissions more fine-grained, which in turn makes them more confusing to users (who probably mostly click "OK" anyway).

Apple could, however, make better app policies so that they can pull apps when they attempt this kind of shady crap. I'm not familiar with the Android app store policy, so I won't speculate there.


I agree, there's no easy solution. Maybe public debates like this are the best we can hope for anyway. My approach is to be extremely cautious with apps that depend on network effects to be useful or profitable.


  >Does Path actually say "We're going to invite all your friends via SMS", even in fine print?
Should it? More importantly, will anyone download the app in the first place if it did?

No one -- in their right mind -- would suddenly want to share (non-existent) photos with all their contacts. Seems like an odd way to say "We're going to invite all your friends via SMS". Your address book doesn't consist primarily of your Twitter followers. It doesn't matter if they intended it to be a feature; someone at the company should have raised a Big Red Flag and made any such SMS feature explicitly opt-in-only. With big fonts, high-contrast colors, dancing bananas, or whatever else you can use to draw attention to that fact.

This is an order of magnitude beyond sleazy.


"I think it's be more appropriate if the box bore a great red label: 'WARNING: LARK'S VOMIT!!!’" — "Our sales would plummet!” — "Well why don't you move into more conventional areas of confectionary??!!"


Without addressing the opt-in point, which I agree with... You're also assuming there is no "continue without doing this" button, which of course, I was assuming there would be...

Many users just mash on the "next" button on app intro screens. Again, it's all just speculation until someone takes the time to start researching and documenting the facts instead of just yelling "KILL IT"! :/


I agree, we're just all shooting the breeze here until we get a proper word from Path, and/or a journalist does a proper investigation into what exactly did happen.

I'm not unsympathetic when someone says "oh, I didn't read that bit" (I'm guilty of that too), but surely they have usability experts who would have warned them about it. The original author is technically inclined enough to make an informed decision. That's a big deal to me. It tells me they either never gave the guy any settings to begin with, or hid them in an obscure panel.

What's worse, according to the author, texts were possibly sent after he uninstalled the app. Which means they still keep the data!


You "assume" there would be? You worked for Path...


Classy. Interning at a company and having not been there for 9 months (see above) obviously counts for nothing.

For what it's worth, I disagree with Path and Me1000's arguments. Doesn't mean us here at HN should be attacking him or ignoring his points because of that, nor does it mean that doxing him is acceptable.

Really disappointed with HN in this thread :/


If it was opt-in then mashing the "Next" button would be sufficient to prevent the app from misbehaving...


Me1000, you should mention to the folks here that you actually worked on the address book stealing code during your time working at Path so you are particularly aware that the statements in your post there are both untrue and self-serving.


I've never hidden that I interned at Path. The address book debacle happened long before I ever joined. The code you're talking about did little more than normalize phone numbers so they could be hashed _before_ being sent to the server.

It's been about nine months since I've worked at Path and as you may know (although, given how baseless your comment is, perhaps you wouldn't know)... startups move quickly, Path has released many updates since I've left. It's incredibly disingenuous to suggest what I'm saying is untrue (since both statements I made are provable with empirical evidence) or that I had any motive other than trying to get people to think before they go on a witch hunt.

Droithomme, if I know you personally, I would appreciate you contacting me privately.


FWIW, while I disagree with your stated opinion, I can also see how you came to it. I think most of us here realise that your opinion is separate from your past employment.


Doxing Me1000 is incredibly uncool.


That's not really doxing, I don't think. He's got links in his profile and the same username as on Twitter.

Lots of people know Randy. He's had posts on the front page of HN.

As much as a developer can be a "public figure", I think he is one.

If John Resig posted about DOM libraries and someone mentioned that he wrote jQuery, I don't think anyone would suggest he'd been doxed.


There's a big difference between "X would like to access your contacts" and "X would like to access your contacts, then store everything on their servers for reasons/duration of their determination"


But not from the OS's point of view. iOS (and Android) can only warn about contacts crossing the threshold into the app. After that the app can do as it pleases...


Path's settlement with the FTC for the last address book incident was that they create a privacy program and get independent privacy audits every other year for the next 20 years.

I don't know what the details are on those audits, but if these texts were sent without consent it seems like the kind of misuse of personal information that they would be concerned about.


Android doesn't ask for the user's permission before accessing the address book, does it?


Android shows the permissions each app is requesting before you install, and even lets you know if they change their permissions between updates. While what Path did is crappy, they didn't subvert the Android permissions system.

The thing that burnt the poster is that, while a social app asking for access to their contacts might not raise an eyebrow, the user has no way to know what it is going to do with that data without looking at the reviews or searching around the internet for complaints/testimonials.


"Android shows the permissions each app is requesting before you install"

Yes and no. Google often hides the most offensive permission requests under that "see more" arrow. And the permission requests (and accompanying explanations) are too vague and ambiguous. For example: Does "request access to network" mean they're able to sniff all my incoming/outgoing data, granting the app access to everything?


That, and the page is designed so most people will click the "Accept & Download" button without even reading the top-level permission requests.

It's got the title and button at the top, taking up a large chunk of space (1/3rd on my Nexus 4), and then a vague list of - to most users - technical-sounding "stuff".

My guess is a large majority of users never look past the button.


Congrats __chismc. Shortly after this post it appears they moved the "Install" link under the permission requests.


But Android also doesn't allow you to deny specific permissions. It's all or nothing at time of install - if it ever gets location, it always gets location. This is one of the reasons I like iOS.


This doesn't sound very difficult to verify.

Go to play.google.com. Search for path. First result in the app store. Click on Permissions.

"This application has access to the following:

... blah blah blah ...

This permission allows the app to use the camera at any time without your confirmation.

... blah blah blah ...

read your contacts Allows the app to read data about your contacts stored on your tablet

... blah blah blah ...

read call log Allows the app to read your tablet's call log, including data about incoming and outgoing calls.

... blah blah blah ...

Now users have been trained to click "yes" to all requests without even reading them, so you can get into philosophical arguments about whether they "really" have permission. Just like most users randomly click through "click-through licences".


Reading data about contacts doesn't sound unreasonable for an app like Path, though. Facebook uses that permission to sync contacts if you want, and I don't see any problem with that. Unfortunately, reading the data means they can store it, off device, independently of the install state. That's a difficult problem to solve, but I don't think users should be expected to anticipate this as a consequence of that permission.


Exactly. WhatsApp wants permission to practically everything possible, because it offers various features on top of these permissions. Yet it has never spammed anyone, as far as I can tell.


They might not spam users, but they don't let you delete contacts. So in short: once they grab your contact list, it's theirs.


Can you actually prevent an app from using one or more of those permissions? Like can I give it permission to my camera but not to the call log?


There is no official support for this AFAIK. Some 3rd party ROMs like Cyanogenmod have this functionality built in, though, and if you root your phone there are apps like "Permissions Denied" that you can run to do this.

I'd assume the reason Google is somewhat hesitant to offer this officially is that many apps don't deal well with this -- some do degrade gracefully, while others end up throwing task-ending exceptions because the app code just never planned for not being able to do some task which requires permissions declared in the manifest.


There is a simple solution for managing permissions for poorly built apps: serve them empty or fake data.

Every app already has to consider the case of GPS being unavailable indoors, the contact list only having one person (yourself) in it, or the camera picture being black in darkness.


Yes, that is a very good idea.

I'm sure some apps will fail anyway because they just never expected a contact list of 0 entries, but the list should be much smaller in that situation (mostly limited to those who do virtually no QA).


Sounds like a huge time investment and you risk bricking your phone? Doesn't seem worth it.


I have always wondered why Google doesn't add this feature. Creeping up the ladder of permissions is a problem in Android, and the user's choice is all or nothing. This can become a bad choice: Add a permission, or lose access to the data an app is keeping for you.

It would be easy enough for developers to catch security exceptions, so Google would see little or no developer fall-off from a requirement like this.


No. Only on some custom Android builds.


Yes, Android requires user permission to access any data on your phone.


It tells you the permissions requested by an app before you install it. For Path, see the "permissions" tab on this Play Store page https://play.google.com/store/apps/details?id=com.path&h...


Yes, you have to put this line in your app's manifest:

  <uses-permission android:name="android.permission.READ_CONTACTS"/>
and the installer will prompt the user for that permission when they install the app.


When you install an Android app it lists the permissions you grant the app by continuing with the install. Contact access is one of these permissions.


Path seems to do all the shady things with your data that we fear Google and Facebook could do. FB and G definitely push the boundaries of privacy/creepy sometimes, but Path seems to have no qualms about blowing right past them. I am staying away.


It reminds me of one of Facebook's early growth tactics - as part of the contact import process, they'd send out spam IMs to all your contacts saying you had just joined Facebook and asking if they wanted to join. It was very shady.


The impact is also smaller: FB has a billion or so accounts and Google probably has about the same. Plus, they can track every move you make across the web, the things you like, the searches you make, what type of content you email, share, etc.

I hope Path just dies (two strikes is enough for me!), but if it grows, more and more people will look under the hood.


And this is why I fundamentally don't trust my smartphone.

It's a fun device. But it's a spy, outside my control, in my pocket.

I've rooted it, but haven't yet modded it (and if anyone cares to point me at a gentle introduction for CyanogenMod or another option that works on an HTC Incredible, I'm all ears).

I've been reasonably conservative in what apps I place on my phone, and several (Pandora specifically comes to mind) were removed when permissions were extended to include contacts (Pandora, you listening?).

I'm waiting eagerly for the following capabilities:

To define at the phone level what information I'm willing to share. Existing "privacy controls" make a mockery of any semblance of either "privacy" or "control" by distributing vague and conflicting access among a great many applications with no ability to centrally audit them.

To specifically grant to specific applications specific rights. My location is something I'll disclose very guardedly (I disable GPS functions on my phone). Other rights generally shouldn't be shared.

To request and audit ALL information a given application has of me in a convenient electronic format (such as a database dump accessible by MySQL or PostgreSQL). Such functionality is of course a three-edged sword, as what information the vendor has and I wish to request, a third party might also request while pretending to be me. Or having legal authority to make the request (though that's already the case), via subpoena or warrant.

My contacts list is off limits. Full stop. Specific contacts might be contacted by way of an application if specifically designated by me, but no other use may be made of their information. Hell, it's not even mine to give.

The existing state of smartphones is interesting, but it's also a little shop of horrors. And if application authors, smartphone manufacturers, and telecom providers don't get their act together on this Real Soon Now, we're going to see some horror stories.


A thousand times this. Why does the OS not make the app permissions more granular than a bit flag? Every app and its dog nowadays requires all these really invasive data permissions, and all the choice we're given is either "yes to all" or GTFO. Google went to great efforts to design the concept of contact circles (something that could easily have prevented the OP's problem), so why is Android still stuck in the dark ages of data sharing?


What I really wish there was in android is a way to disable permissions after installing an app. Obviously this probably won't make it into stock android, but I would love to be able to install an app like Pandora and then revoke specific privileges.

Then whenever the app attempted to use those revoked permissions, android would do something logical for certain cases (like providing an empty contacts list for the contacts permissions), or even just crash the app if it couldn't do anything else. I would totally be willing to accept a certain amount of instability for a feature like this.


> "What I really wish there was in android is a way to disable permissions after installing an app."

You can, you can! Only Google went ahead and disabled it for you. The commands are "pm revoke x" and "pm grant y", but if you ever try it (even running as root), you'll get this message:

Operation not allowed: java.lang.SecurityException: Neither user [your uid] nor current process has android.permission.GRANT_REVOKE_PERMISSIONS

> "Then whenever the app attempted to use those revoked permissions, android would do something logical for certain cases (like providing an empty contacts list for the contacts permissions), or even just crash the app if it couldn't do anything else. I would totally be willing to accept a certain amount of instability for a feature like this."

Exactly! Same for me. If this made it into stock Android, developers would be forced to put phone book access in a try{} block so that permission revoking doesn't crash the entire app. Your solution of returning an empty phone book sounds even better, but it's also more work, so I don't know whether it'll ever make it in... Then again, it's a much nicer solution, so who knows.
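
A minimal sketch of the defensive pattern being described, assuming the app treats a contacts query as something that can legitimately return nothing (or throw a SecurityException if the permission has been revoked) rather than something that always succeeds:

  import android.content.Context;
  import android.database.Cursor;
  import android.provider.ContactsContract;

  import java.util.ArrayList;
  import java.util.List;

  public class SafeContacts {
      // Returns whatever phone numbers are readable: an empty list if the
      // permission was revoked or the address book is simply empty.
      public static List<String> phoneNumbers(Context context) {
          List<String> numbers = new ArrayList<String>();
          try {
              Cursor cursor = context.getContentResolver().query(
                      ContactsContract.CommonDataKinds.Phone.CONTENT_URI,
                      new String[]{ContactsContract.CommonDataKinds.Phone.NUMBER},
                      null, null, null);
              if (cursor != null) {
                  try {
                      while (cursor.moveToNext()) {
                          numbers.add(cursor.getString(0));
                      }
                  } finally {
                      cursor.close();
                  }
              }
          } catch (SecurityException revoked) {
              // Permission denied or revoked: behave as if the address book is empty.
          }
          return numbers;
      }
  }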


Why on earth are people still using Path after it has become so very obvious a long time ago how unethical this company is?

This kind of behavior doesn't just go away after a bit of bad publicity or a few fines. It's part of the DNA of a company. Such a lack of ethics permeates everything from strategic decisions to technical choices to hiring.

Expect more of the same.


Remember when, a while back, they downloaded far too much information from each phone (for the convenience of connecting you to people)? Everyone was surprised (some outraged), and then they pushed an update to stop that "feature"; when the CEO/Boss man posted a blog entry apologizing, everyone forgave the company and people were holding hands singing Kumbaya.

Edit: Here's when they flubbed a year ago.

http://news.cnet.com/8301-19882_3-57373474-250/path-ceo-we-a...

Edit2: Er... apparently, I suffered a seizure of some sort (and an aneurism and a stroke simultaneously). Reworded.


I just noticed, in that apology letter, the line:

"... Your trust matters to us and we want you to feel completely in control of your information on Path. ..."

So they want you to feel in control.


"We are deeply sorry if you were uncomfortable with how our application used your phone contacts."

That non-apology is corporate communications at its most typical.


Path's customer service replied to this article's author on Twitter saying they'd "love to engage."

If you aren't Captain Picard, you're not engaging anything. Shut up and talk human, folks.


I’m a big sucker for a non-apology apology, reading one never fails to make my day.

More here: http://terribleapologies.com/ and http://en.wikipedia.org/wiki/Non-apology_apology#Examples and http://jezebel.com/sorry-not-sorry-how-to-non-apologize-5993...


This kind of thing sits alongside "it is important that justice is seen to be done" in my list of phrases that will preclude you from being taken at face value ever again.


Kind of. In law, actual justice is obviously the top priority, but it's also important for social stability and the effectiveness of the courts that justice appear to be done. (Otherwise, people may not respect court decisions even when they are just, etc.) Luckily, these two properties are usually complementary. Given actual justice, the appearance of justice usually follows through transparency and the like.


Haha, yes! It's very important to feel you're in control, just like you want to feel like you're making a wise decision when you reach for the conveniently placed junk in the supermarket aisles.

I get the feeling that either those in charge are too hopelessly detached from society to see how privacy is perceived by the rest of us (like Zuckerberg), or there's little to no vetting when it comes to implementation decisions.


They just had to pay an $800,000 fine for that stunt http://www.ftc.gov/opa/2013/02/path.shtm


I think the HN traffic may have destroyed another WordPress install.

So, google cache: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...


For those with WordPress installs who want to survive an HN frontpage:

    1. if you don't have sudo, use the W3 Total Cache plugin http://wordpress.org/extend/plugins/w3-total-cache/
    2. if you have sudo:
        2a. the easy way: apt-get install memcached, add the pecl memcache extension, and use object-cache.php http://plugins.svn.wordpress.org/memcached/trunk/object-cache.php and batcache http://wordpress.org/extend/plugins/batcache/
        2b. the hard way: varnish https://www.varnish-cache.org/ https://github.com/pkhamre/wp-varnish


Using CloudFlare or Google Page Speed Service is also a good idea to further optimize!


Or:

  # /var/spool/cron/crontabs/apache
  */2 * * * * ( cd /var/www/htdocs && [ ! -e .mlan.lock ] && touch .mlan.lock && wget -q -O tmp.html http://www.mywebsite.com/blogs/my-long-article-name/ && mv -f tmp.html my-long-article-name.html && rm -f .mlan.lock )
Post HN story http://www.mywebsite.com/my-long-article-name.html and it'll get refreshed every 2 minutes. Pretty simple hack. (Edit: add lock file)


    rm -rf /var/www/wordpress
FTFY


This is good information

I'll make sure to never install Path

There are some abuses that can't be solved by an apology.


Yeah, this just convinced me to never use Path too.


Great, we have a few more people convinced not to use Path. One application will now have perhaps a few hundred fewer users. What about the rest of the permission-hungry apps? Whatsapp? Facebook? Any Google app? Any other big developer's app? What if the founders of Path just start a new company?

Not installing Path is not a solution here. You've got to look critically at the permissions an app uses and its terms of service (at least skip to the privacy-related issues, though they usually try to hide and obfuscate them). If there is something you don't entirely trust, ask yourself why you really need that app. Perhaps it's an improvement to your life, but can't you live without it? You've gone without that app for the past how many years? Is it worth giving up your phone book and all your sdcard contents?


You're correct, and it gives me the shivers to see an application like 'photo sharing' wanting several useless permissions.

Here's what I would like: for Android to allow me to deny, or ask for a confirmation for, each of these permissions.


Or ask upon usage. "Do you want to grant XYZ to view your contact list? [Yes, and don't ask again] [Just this time] [Never]"


I don't know the exact details of this story; maybe the blogger accidentally pressed a button in the app and the messages were queued up for the following day.

However, if the story is indeed cut and dry:

1) Path sent messages that qualify as spam both because they had no permission to send them and they were false.

2) If this was intentional, this should be a red flag to investors not just of the company but the kind of people that run it.

3) This is nothing new. Tagged did the same thing with e-mail, which to some degree falls afoul of fewer laws than using text messages or the telephone (other commenters pointed out that landline carriers convert SMS to voice calls, which is news to me).

4) Using spammy methods to acquire users is a red flag for any web service. While arguably Facebook used and uses extremely aggressive e-mail notifications (sending out an e-mail for every minor thing, and whenever a new feature is added opting the user in to notifications by default), using spammy techniques means that your service will skew early on toward the bottom of the market that actually "falls" for these techniques (poor and illiterate) and will actually scare away early adopters for multiple reasons.

5) In the short term, Path's metrics will look really good, but in the long term it could result in serious problems, least of which will be another news story with FTC settlement in it.


The weirdest part of this is that it apparently made voice calls to landlines? Why would they do that? It makes no sense. Unless maybe that's what the phone company does if you text a landline? Never tried it, but I would be surprised...


If you send a text to a landline, a lot of providers will convert it to a phone call using some sort of text-to-speech API. I've done this by accident on older cell phones when I'd add someone's landline to a text message instead of their mobile number. This sounds like a nightmare scenario though for this poor chap and his family (and his dentist).


Well that shows you how long it's been since I've had a landline- Thanks all.

(Do they charge you 15¢ for the privilege? Can you reply?)


It's a reasonably standard thing in the UK. Useful at times, really annoying at other times... You can get landline phones that will actually receive the texts as texts; other phones get calls.


I remember when I was a kid in... it must have been 2005 or earlier, when I was at a friend's house and they had this; it was literally the coolest thing in the world. A phone that spoke text messages? My god, it's the future! The novelty wore off fast once I entered adulthood because it's never a text message I want to hear. Who knew I could claim my PPI back?! Bah.


At least BT do this for texts to landlines in the UK. When it was first launched, they used Tom Baker's voice. It was an amazing few weeks.



Various API that let you send SMS will make a call if the SMS can't be delivered, and then use text-to-speech to read the thing out to you. It's fairly obnoxious really.


And some, like Telia in Sweden, will keep calling until you listen to a certain percentage of the message to helpfully ensure that you've gotten the message. Which is loads of fun if, for example, someone sends you a link with a UUID--you basically have to pull the phone off the hook and let it finish before you stop receiving the calls.

(You can also contact your phone company and ask them to please remove this "feature" - but usually you have to be subjected to it before you know the phone company even does this!)


Verizon in the US offered text-to-voice service in the past. I'm sure this is something similar.


One of my clients wanted me to implement the same thing for his iOS app. I told him that I think it is illegal, and if it is not, it should be. Anyway, I never wrote that code.


"Thank you" - From, The Internet.


From http://www.theverge.com/2013/4/30/4286090/path-is-spamming-a... it sounds like it's a result of "finding your friends" actually texting invites to friends, and then text messages being put in a queue so they're getting sent out even after the app is deleted.

Obviously such an action should be more clearly labelled. If it were, could they whitelist the times it sends out text messages so as not to do it at 6am? How easy is it to look up an approximate region for a mobile number?


You can do it based on country code, and, if you want, NANP for North American numbers. That would get you a time zone (or range) and you can easily use that to not spam people at 6am.
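
A rough sketch of the idea, assuming a representative time zone per dialing prefix is good enough (the prefix table is a tiny illustrative sample, not a complete mapping; libraries like libphonenumber handle the parsing properly):

  import java.time.ZoneId;
  import java.time.ZonedDateTime;
  import java.util.LinkedHashMap;
  import java.util.Map;

  public class InviteScheduler {
      // Illustrative sample only -- a real table would cover every country code and NANP area code.
      private static final Map<String, ZoneId> PREFIX_TO_ZONE = new LinkedHashMap<String, ZoneId>();
      static {
          PREFIX_TO_ZONE.put("+44", ZoneId.of("Europe/London"));
          PREFIX_TO_ZONE.put("+46", ZoneId.of("Europe/Stockholm"));
          PREFIX_TO_ZONE.put("+1212", ZoneId.of("America/New_York"));    // NANP area code 212
          PREFIX_TO_ZONE.put("+1415", ZoneId.of("America/Los_Angeles")); // NANP area code 415
      }

      // Longest-prefix match, falling back to UTC for unrecognised numbers.
      static ZoneId zoneFor(String e164Number) {
          ZoneId best = ZoneId.of("UTC");
          int bestLen = 0;
          for (Map.Entry<String, ZoneId> e : PREFIX_TO_ZONE.entrySet()) {
              if (e164Number.startsWith(e.getKey()) && e.getKey().length() > bestLen) {
                  best = e.getValue();
                  bestLen = e.getKey().length();
              }
          }
          return best;
      }

      // Hold invites outside 09:00-21:00 local time at the recipient's end.
      static boolean okToTextNow(String e164Number) {
          int hour = ZonedDateTime.now(zoneFor(e164Number)).getHour();
          return hour >= 9 && hour < 21;
      }
  }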


This seems to be a direct violation of Google Play policy:

"Do not send SMS, email, or other messages on behalf of the user without providing the user with the ability to confirm content and intended recipient."

https://play.google.com/about/developer-content-policy.html


Not quite. The messages aren't being sent from the device, they're being sent by Path from their own infrastructure once they've uploaded the address-book.


The policy does not make that distinction.

An app generally needs a backend and it is clear some of the policies are directed towards not the app itself but how it interacts with the backend. These same guidelines are meant to be used to stop apps such as malware games that collect contacts and send them to the backend to be used as spam email lists.


I share this guy's frustration. But with Whatsapp, not Path - I heard not-so-nice things about Path so didn't bother trying it. Anyway, after installing Whatsapp on my Android, the app didn't spam my contacts, but it quickly uploaded my entire contact list, and hours later I started receiving spam from recruiters that had my phone number. So far, not the app's fault. But then I went to delete the app, and first I wanted to delete any contacts it had previously uploaded so it wouldn't keep my data. How naive was I?! Whatsapp wouldn't let me delete the contacts it had previously uploaded. Eventually I just gave up and deleted the app without clearing the app's data.


FYI, you can deactivate your Path account by visiting their website (https://path.com/), signing in and clicking 'Deactivate' on the 'Settings' page.


Deactivating your account just effectively hides your profile. You're only a couple of clicks away from having it all restored if you so choose. They seem quite good at finding ways to screw up, so surely they can still find a way to do so until they actually remove your data.

Contact them via their Desk service portal here http://service.path.com/customer/portal/emails/new and ask them to remove your data after deactivating.


Notice that it says "Deactivating your account will remove your content from Path. If you reactivate your account, your content will come back."

Not deleting your content, though. I hate that.


It's also a bald-faced lie. If it removes the content, then there is no way for it to come back.


The wording is slippery and pretty deceptive, but it’s technically correct.

“Deactivating your account will remove your content from Path.”

The ‘Path’ in the sentence refers to the social network, not the company’s servers.


https://path.com/privacy

This privacy policy is a joke. It basically says "we can do anything we like with your data".

Under the "What Information Do We Share With Third Parties?" section there are some classic deceptions. This one is great (as in evil genius):

>with certain social networking services, if you allow such sharing through our services; //

Not consent, allow. As in if you don't actively prevent it we'll do it.

>with service providers who are working with us in connection with the operation of our site or our services //

We'll sell you out to anyone who we can describe as "working with us".

>"in connection with, or during negotiations of, any merger, sale of company assets, financing or acquisition, or in any other situation where personal information may be disclosed or transferred as one of our business assets."

So when doing business-y stuff, blah, blah oh yeah and any time we want to use your info as a business asset. They're covering themselves, again, to sell all data to anyone who'll buy it.


> Not deleting your content, though. I hate that.

Why? Having worked on a semi-social app last year, I can tell you that deleting a user and all their content was a huge undertaking. Marking them and their content as deleted was simple and had the exact same effect--user and content never get returned from DB queries. Guess which one we implemented?

I guess I'm trying to say sites don't (always) do the deactivate thing because of some nefarious scheme to steal your content/identity at a later date. Sometimes it's just a technical call (or lazy programmers, depending on your point of view).


I don't really care how hard it is for the company to implement, that's not really my concern. But hoarding my data sucks.


Path seems like a company that is doing everything wrong these days. My peeve with them is that I signed up and used it happily for a long time because it was supposed to be possible to get my data exported later. Now they have removed that from the FAQ/Support, and their support mails just brush the issue off, saying "not possible but the team will look into it".


This might be off topic, but I loved reading that post; it turned into comedy gold. By the time I was at the third "(I don’t have any photos to share with them)" I was giggling. And the list of people Path called at the end killed me.


As I read the post I wondered the whole time, "Does he have any photos to share?"


This is not a comment specifically on Path -- I don't know anything about what they are doing or not doing.

But more generally: one of the most interesting parts of startups is the tension between "Don't Be Evil" and "Don't Fail". It would be good to be able to discuss this more openly -- "Don't Be Evil" by itself is too utopian. Many of the most successful companies in the world did things in their early days -- or later -- that new entrepreneurs would never even consider -- until of course their own backs are up against the wall.


There are many (perhaps dubious) things I've learned from friends about how to grow/bootstrap users. If I decide to ever use any of those tactics then I'll know I'm doing something dodgy and work to mitigate any risks. I'd bet that the successful companies you're referring to also knew that they were on shaky ground and acted accordingly.

However, problems arise when startups begin to think that behaving this way is 'normal'. During the previous furore over Path grabbing address books, the CEO claimed it was "industry best practice". Just because (nearly) everyone does it, doesn't make it "best practice". It actually belongs on the 'list-of-dodgy-things' and therefore should be treated with the appropriate caution.

This is one reason I have a gripe with the "Move fast and break things" bandwagon. It's not really appropriate if you're stumbling around in a minefield.


yep - agreed that just because nearly everyone does the same thing doesn't mean it's a good thing to do: nearly everyone used to own slaves (except the slaves).


I, and I think most of us here, know of this concept, but I'd say you articulated it well. If a company needs to be semi-evil, being upfront may help them.


I have been a Path user since the app launched and have never had any text messages sent to my address book. Path informed users that the address book data hook was no longer there and that all data had been removed from their services after the initial FTC inquiry. I took that at face value but after this article I would be interested in hearing a response from the company about how my information is handled.


This is a feature of signing up if you use Facebook. It shows your friends who have a phone number, and if you don't uncheck them before tapping ‘next’, it will invite them.


I was considering joining Path with 4 or 5 of my closest friends to have some kind of 'private Facebook'. I don't really like sharing stuff on Twitter/Facebook, so Path seemed like a good alternative for sharing stuff with people I definitely know will be interested.

After reading this I don't think I will join anytime soon. I don't really get the reasoning behind this. Path is marketed for the use case I had in mind: sharing stuff with only a handful of people you know well. Why on earth are they trying to lure all of your contacts in? This would make sense for Facebook, not for Path.

Anyone know an alternative to Path that uses my data more responsibly?


I use GroupMe; although it isn't laid out as "beautifully" as Path is, it's simple to use and you can share photos pretty effortlessly. I don't think it has as many bells and whistles as Path (I installed it once but then ended up deleting it because the utility was near zero), but it works great for having an ongoing conversation with a group of buddies. The thing I love about GroupMe is that you can create new groups and add different people according to the situation at hand; it's awesome for concerts, festivals, and shared events to keep track of everyone. The downsides are that it doesn't have the longevity of something like Path, and it's a battery HOG... make sure to turn off push notifications or you're going to have a bad time. Also, just to disclose, I don't work for GroupMe, so this is not a corporate plug. I've also heard WhatsApp is great, although I've never used it myself.


Internet Relay Chat (I'm into simple solutions)


I really like using email for staying in touch, also works with groups.

But besides the fact that my friends aren't really into the whole tech thing (a lot of them are still using their old non-smartphones), inlining videos and photos and threading conversations is something I really like such software to have.


One other idea that comes to mind is that you could create a private subreddit on reddit.com and make it closed/invite-only. Reddit doesn't ask for personally identifying data.


This sounds to me like there was a fuckup in one of the pieces of software that sends these messages, causing a lag. The guy probably hit "yes" somewhere without realizing it and then 12 hours later shit hit the fan. Could have been the client, could have been Path's server, could have been the cell provider, whatever.

The thing is it doesn't matter.

When you're dancing on the line of ethical behavior, you are one bug, one mistake, one oversight from crossing it. When you cross it, it might not be "your fault", but generally it never is: your fault was to be so close that such a thing could happen in the first place.


It doesn't seem to be a bug, it has to do with their post-signup invite friends screen. By default, all contacts are selected, and if you just hit next, it will invite them all.

You have to explicitly hit "unselect all" first :(

Obviously, this is really bad UX design (for the user), and it really surprises me that Path would do this just to get a few more users, especially considering that they used to market themselves as a social network for a limited number of close friends.


Perhaps the third time's a charm for Path.

http://gawker.com/5883549/dont-forgive-path-the-creepy-iphon...


They just got fined $800,000. I guess it should have been $800,000,000.


Totally agree with the sentiment. I believe that companies have DNA, and that most companies do not change over time; they just get better at hiding the bad stuff. I would probably trust a company that starts off on the right foot (e.g. Google) even when it becomes larger, more than one that's built on – what I consider to be – shady growth strategies.

Path seems to be taking especially egregious steps. All the UI polish in the world can't hide shady business practices.


We'd see a lot less abusive behavior from startups if investors went to zero when it happened.

As it stands, $800k is a line item equal to only 2% of the money Path has raised. As such, I doubt that anybody who invested in it cares or views this as anything other than a triviality that a few nerds will care about.


Why do people use these horrible services? Is having an online social presence so important? I understand if you do it for business, but if bad news like this keeps coming out of a company, I simply stop using their products.


The article is wrong, and it's linkbait.

1. Was the feature designed badly? Yes. Friends should be unchecked by default.

2. Did Path call anyone? No. That's a service from phone providers when a text is sent to a landline.

3. Did Path send texts without the permission of the user? No. But the feature was designed in a way that many people just tapped yes and didn't uncheck their contacts.


> Did Path send texts without the permission of the user? No.

No?

The permissions screen is shown in the verge article. It's labelled: "Find Your Friends: Path is more fun with friends. Find out who's already on Path."

I interpret that as asking permission to run my contacts against its database and tell me who else is already on Path. Not as asking permission to text people who aren't.

Permission to do some X with a contacts list is not permission to do anything with a contacts list.


Glad I deleted Path a long time ago. I did it for many reasons. The first was having to transfer my entire life to a new place, only to find out it could shut down or be sold to another company I don't like; the fear of wasting resources and time setting up my life in there made me drop this app. The second reason was the old episode with contact data, which made me think they would make many other "mistakes" in the future. So after this one I feel better that I didn't follow this path. Facebook is also dead in my life; I just check it for family messages. Google+ all the way.


This is the last straw I am definitely not buying any more stickers from Path.


Probably best not to just jump to conclusions until there is a response from Path or more evidence from users. I've been using Path since it launched with nothing like this ever happening. This sounds to me like some sort of unrelated scam. I didn't notice him say in the article, but do the texts include a URL? If so, where does it go? This seems like something that would be noticed by more than one or two guys if it was something Path was doing.


There's a screenshot in The Verge's article (http://www.theverge.com/2013/4/30/4286090/path-is-spamming-a...), looks to go straight to an invite with the guy's name (https://path.com/i/BfOPb).


Another similar, very detailed user report from 3 months ago on Reddit:

http://www.reddit.com/r/Android/comments/16tavj/warning_be_c...


I'd held off uninstalling Path because it was well-designed and I always liked checking out their new UI enhancements when they'd roll out an update. Though, apparently they've already got my address book - the app is no longer on my device. Enough.


I was thinking the same thing. Path's UI design is notable and I wanted to keep tabs on it, as an engineer. Looks like the curiosity is not worth it.


I just went to the path app on the Google Play store and marked every user review that mentions the spamming as helpful. People should know about this before they download, but most only read the first three reviews and don't look closely at permissions.

https://play.google.com/store/apps/details?id=com.path&f....


Am I the only one that found this sentence interesting/amusing/silly:

"I decided the best place to contact them would be Twitter"

Why would anyone contact someone on Twitter first? Their contact page (usefulness unknown) is easy to find on their website.


You're publicising a problem with the company and so it has more of an incentive to deal with it quickly.


You'd be surprised how often you can get a response faster on Twitter than via a service's support desk.


At times like these I feel the best I can do is pay with my downloads (or vice versa).

I just deleted Path; I recommend others do the same.


I just deleted my account as well. I never really used the app due to a lack of traction within my own circle of friends, but now I have an even better reason to rid myself of the thing.


This is both funny and a serious invasion of privacy. Path keeps stepping into it because they are hell-bent on using web 2.0 tactics to get customers. The smartphone is a very personal and intimate device, completely different from a desktop or laptop, and when you lose the trust of people who have invited you into their homes, you will not get a second chance. Path has been struggling for a while, but for the last few months I have been noticing a lot of PR articles talking about their growth, especially outside of the US. My guess is that their VC probably said fuck it! We can ask for forgiveness later!

When a company is this small and shows no regard for privacy, we'd better hope that it falls into the deadpool, because if they get to scale, we are going to hear a lot more about these invasive tactics. I really hoped for Path to put a dent into Facebook's growth, but not anymore!


I think the moral of this story is to do a little Googling about a company or product before trusting them with all the contact information (at least) in your phone. You know, to find out first if they're criminal scumbuckets. For all you know their app keylogs your transactions with your bank.


Well we know they don't keylog, they can't act outside of their sandbox. Curious if they'd try if they could though.


Unless they've found an exploit. Even companies that are generally law-abiding (Sony) have distributed applications specifically designed to circumvent OS-level protections, to do things the user doesn't want. Given what we know about these creeps, why wouldn't they? You have to have trust to install closed-source software, and that trust should be based on something besides "oooo, shiny".


Putting a keylogger exploit into an app that you, as named, identifiable, rich people and a US company, distribute through the App Store would be a pretty crazy risk profile for the developer. All it takes is one person finding it.

Exploits should be used on targeted individuals where you can serve a trojaned app just to that individual, vs. something like the App Store where that would require either Apple's permission or some crazy proxy. (A carrier could probably do it with phones the carrier sells, though, particularly on Android, but even on Apple by pre-jailbreaking phones sold in sketchy areas like rebel-held Syria, if that were the goal)


They might run into troubles doing that. At least one security researcher has a lifetime ban from Apple's developer program for trying things like that.


Looks like this might be related to the "find friends" button. There are a handful of reviews in the Play store that mention it messaging everyone.

This is why I hate install-time permissions. It means you have to trust an app until uninstall do you part, which generally happens well after abuses.


>This is why I hate install-time permissions.

Android developer here: I would have to say a mix of permission types would be best. Sometimes a feature of an app is crucial to its design (or, to be blunt, to its monetization).

A few things -- using the camera on the phone, accessing the address book, sending text messages, maybe a few others -- would be great to request as "optional permissions," or even better, "runtime-granted permissions," so that an app that only 5% of the time needs that permission could ask for it LATER instead of making everyone who installs the app agree to using a permission that they may not want the app to have.

As it stands, you'd have to break your app up into several different downloads in order to have optional features. Not impossible, but neither is it a good user experience.


MessageMe does this too. When I was installing it, it auto-checked 600 people and the default "next" button was going to text those 600 people.

LinkedIn's signup flow is similar.


Could this kind of "feature" be responsible for recent news about massive user growth?

http://blogs.wsj.com/digits/2013/04/25/path-a-social-diary-a...

Path, a more intimate social-networking app that’s like a personal journal, is now growing by 1 million registered users a week after its most recent launch.

The newest version of Path includes a way to message your friends (which Path limits to 150) and send them stylized stickers like other top messaging apps. Around half of Path's registered users (now at 9 million) are regularly using the app on a monthly basis, CEO Dave Morin said.


The very fact that a product/service/company does not have a clear and solid revenue model, and that its success is measured in generated traffic (downloads/page-views/subscribers, etc.), can lead some people to make very strange choices.

This may not be the reason here, but it surely helps many in our industry drift into those dark corners of ethics and good faith.

If your users are customers, that is, they pay you, then you will care about their privacy, as you know that otherwise you will lose them.

If they don't pay, and yet consume your bandwidth and CPU, you may find yourself sniffing their address books, claiming copyright on their images, or selling their clicks and choices to campaigners.


On iOS, Apple made it so the app has to get permission to access the phone book. (Interestingly, Google Image search turned up this for a query of "ios address book permission": http://i.stack.imgur.com/MHF0p.jpg)

Did OP give this permission? I'm not defending Path (at all!), just trying to get full details. I've in fact accidentally given address book permission to apps by tapping too fast.

Update: OP is using Android, which is different.


Apple added this feature after the previous privacy gaffe from Path. Still though, there's a legitimate use case for asking for your contacts, and it is to help connect you with people you may know who are also on path. You also give them access to your photos to share things on Path, but there would be an uproar if they started uploading all of your photos to their servers. I'm not really seeing any good way to solve this at the OS level, it seems the only solution is for apps to be less shady.


“Apple added this feature after the previous privacy gaffe from Path.”

Yup, here’s an article about it, from February of last year: http://allthingsd.com/20120215/apple-app-access-to-contact-d...

However, according to Apple, even before iOS6 came along (which asks permission whenever an app requests access to Contacts) it had already been against Apple’s dev guidelines to use Contacts info without users’ permission:

“You and Your Applications may not collect user or device data without prior user consent, and then only to provide a service or function that is directly relevant to the use of the Application, or to serve advertising. You may not use analytics software in Your Application to collect and send device data to a third party.”


OP is using Android where you either have to accept all the permissions the app requests as part of installing it, or not install it at all.


Interestingly, Android was much better about application permissions for some time. These days though, it seems iOS allows users to have more control.

On iOS, for reference, one could install Path but then deny it notification permission (or limit what types of notifications at a very granular level), allow/deny access to photos, and allow/deny access to the address book. You could deny all of those and still use the app just fine, though obviously without the ability to upload things from your photo roll.


This is the most ridiculously arrogant apology:

> We're sorry to hear of your issues.

Well, I don't have issues; you have issues. And you should be sorry that you screwed up, not sorry to hear about anything.


I blame Google and Android and its Play Store. So many apps ask for ridiculous access to everything, they should really discourage this. There should be a big red flag on any apps that ask for "superuser" access to my phone.

After getting a "smartphone" I've had 50x more spam calls. I never had this problem pre-Android. Coincidentally, spam peaks when I'm using my phone, which makes me wonder if these apps are telling spammers I'm near the phone.


You're blaming the Play Store for something that the app developers decided to do? Just because there's a hole doesn't mean it should be abused, particularly since Path is supposed to be from a "legit" company.


Yes I am. If there's a hole in IE allowing a virus to proliferate, don't we blame IE and get a security patch or switch to another browser? How is this any different? The root of the problem is Android and the Play Store.


This is what desperation looks like, and why we need to be adamant about our privacy.


Why is it even possible for this to happen? Did someone really think it was a bright idea to provide an API to access people's personal data, and if so, why doesn't the phone tell them that before letting them install the app?

On top of all of that, why wouldn't the phone provide a setting to restrict all personal/identifying information from being accessed by the app?


Address book access is a permission just like anything else (internet access, for example), so it can fly under the radar. Android has been trying to make things like "send text messages" stand out a tad more than "write to local storage", but they may need to go even further.

This is my personal favorite app permission that you can request: http://developer.android.com/reference/android/Manifest.perm...
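
(If you are curious what an installed app actually asks for, the platform already exposes the declared permissions. A small sketch using PackageManager follows; "com.path" is just a placeholder package name. Running something like this over your installed apps makes the permissions that "fly under the radar" a lot more visible.)

    // Sketch: dump the permissions an installed app declared in its manifest.
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import android.util.Log;

    public class PermissionAudit {
        static void logRequestedPermissions(PackageManager pm, String packageName) {
            try {
                PackageInfo info = pm.getPackageInfo(packageName,
                        PackageManager.GET_PERMISSIONS);
                if (info.requestedPermissions == null) {
                    Log.i("PermissionAudit", packageName + " requests no permissions");
                    return;
                }
                for (String permission : info.requestedPermissions) {
                    // e.g. android.permission.READ_CONTACTS, android.permission.SEND_SMS
                    Log.i("PermissionAudit", packageName + " requests " + permission);
                }
            } catch (PackageManager.NameNotFoundException e) {
                Log.w("PermissionAudit", "Package not installed: " + packageName);
            }
        }
    }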


Wow. I wonder what the use-case for that is?


Enterprise anti-theft tools.


An app that lets you brick your phone remotely if it gets stolen maybe? Depends on how it works.


It most likely does warn users but most people just click 'Accept' on all those "ABC application wants access to X, Y and Z" warnings.


Right, this form of security is completely ineffective. It's just like Vista's User Account Control modals: after being shown the first few of the day, you would stop reading them. I suspect EULAs are written to exploit this phenomenon: no one reads EULAs because they go on forever, so you can bury all kinds of outrageous conditions in there.


All of the smartphones I have used ask you before allowing access to your contacts. But many apps want to use these for legitimate purposes -- for example, Vine uses them to find people you're already friends with.


It's all about trusting what that company is saying "now".

They can change later and spam your list or use it for some other purpose. You can shame them publicly if you find it out. But then, it's already too late by that time.


That sort of API is necessary for, say, an alternate text messaging app.


I actually got a text from them at around 1 AM not long ago. A friend opened an account, and they spammed everyone on her contacts list. She is an iOS user. I'm not against tactics like these, but you have to do this correctly, or else you alienate your user base and their contacts. I, for one, now tell people to steer clear of Path.


> @stekenwright We’re sorry to hear of your issues and would love to engage. Please message us so we can help

Translation: "Your blog post detailing our scuzzy spammery is getting seen by lots of people. We're uncomfortable with that, and would like to get you to say we're not so bad after all."


This is pathetic. I'm glad I've never found a use for this pile of garbage but this is like the fourth or fifth time I've read about something exactly like this happening. This company has no respect for users or privacy.

CALLING A FREAKING LANDLINE?!? DIE PATH


One thing I've learned from this kind of news is this: Do wrong and get ahead. Most people will forget next week, you'll get a ton of free press, and oh yeah...more users.

I've been so dumb not to have tried this kind of bad-publicity stunt yet.


I signed up for Path a couple of months ago, with Facebook. When I realised how many private photos they took from me (they didn't seem to care about my privacy settings on Facebook), I deactivated my account and sent an email asking them to remove all my information. Yesterday I reactivated my account and, lo and behold, they had not removed my information as I asked them to do. It seems the information had been there all along, even though I deactivated my account, because a friend of mine had gone in and "liked" my pictures on Path while my account was deactivated.


"We're sorry to hear of your issues and we would love to engage..."

If you can't send anything except this daft, goofy, unbelievably annoying tweet, maybe you need to find a different way to "engage" with upset users.


Is this only a problem on Android phones? The screenshot he shows of the text is clearly Android. I would wonder how this would be possible on a non-jailbroken iOS device if Path wasn't left running all night.


I don't see any reports from iOS folks about this happening. It's definitely against the Android Play store TOS, though.

The Play store definitely has a more laissez-faire approach to apps, but spamming like this is a pretty blatant violation. Google might want to consider suspending their app.


How about a little bit of consistency for the Path URL in the tweet? I know it's now common practice, but this obscuring of links is exactly what I have repeatedly told my family and many friends to avoid clicking on, so as to avoid downloading viruses or going to sites they don't intend to visit.

For "bit.ly/PathHelp" the underlying url is "t.co/B4lOWrDqyr" and it redirects to "service.path.com/customer/portal/emails/new"

I'm sure there is a reason for it, but just having service.path.com or help.path.com would be more beneficial for the company, both as a URL and as something to tweet to (former) customers/users.


Has to do with URL character count - Twitter limit is 140.


I wonder if Path will end up being used as a model of what not to do for start-ups; there have been so many missteps that it's hard to separate the network from the issues.

Overreaching use of customer data, check ("But everyone else was doing it!"), then claiming "it turned out the customers didn't understand this"; spamming contacts; a CEO who comes across as somewhat of an arse on multiple occasions.

If I were a VC, I'd be very nervous about putting money into a business that repeatedly gets caught out engaging in some seriously shady business practices.


They're actually hiring a Director of Privacy and Legal right now:

"The Director of Privacy and Legal will be responsible for positioning Path as a leader in the protection of user privacy."


I like Path.

They are showing the world what could happen if your data lands in the wrong hands. I hope people now become more aware of privacy and security issues, thanks to Path. Tell me which other company is directly working for this cause? FB and Google keep telling us that they won't use our data for bad purposes. Path is showing what can actually be done with the data they already have.

So this is why I like Path. They are setting an example of what bad companies can do.


I got one of these texts from a friend of mine. Path didn't ring a bell so I figured he'd gotten hacked and ignored the text message.


This is *ucked up but hilarious at the same time.


Let's sue them for hacking into everyone's phones.

Which is essentially what they did. It's still a break-in even if you leave your door unlocked.


Strangely, this behavior will ultimately be rewarded with a huge influx of users due to the awareness.

It might even cover that $800K fine easily.


I don't understand why people still use Path. Why do they have good ratings, and how come they are still on the App Store?


"Growth hacking"


Software developers are as responsible for this guy's address book getting spammed as fork and spoon makers are for making people gain weight.

Someone built a tool, and someone else used that tool to do an unethical thing. I doubt a programming language exists that can control the choices of its users.


Could this just have been a good old-fashioned programming bug, rather than spamming or malicious intent?


How could this possibly be a programming bug?


Oh, I can easily see possible scenarios. For example: you have functionality where, when you add a contact to your contact list, the app sends her a welcome message after asking you whether that's OK. You reuse the same code in the part of the application that initializes the database from existing contacts. You set the wrong defaults there, so the code skips the "send welcome message?" prompt (because of course you don't want to ask somebody 200 times during the initial import) but still sends one. Boom: the initial import sends 200 welcome messages. This happens more often than you'd like. Something like that happened to me too, in a different form; it was caught in testing, nowhere near production, but still.
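
(A contrived sketch of that bug class, with every class and method name hypothetical: the single-contact path asks first, the bulk import reuses it with the prompt turned off, and the wrong default turns "don't ask" into "always send".)

    // Contrived sketch of the bug class described above; every name is hypothetical.
    import java.util.List;

    class ContactImporter {
        private final SmsSender sms; // hypothetical wrapper around the SMS API

        ContactImporter(SmsSender sms) {
            this.sms = sms;
        }

        // Original single-contact path: prompt first, then maybe send.
        void addContact(Contact c, boolean askBeforeSending) {
            boolean send = askBeforeSending ? promptUser("Send welcome message?") : true;
            // The landmine: "don't ask" silently becomes "always send".
            if (send) {
                sms.sendWelcome(c.phoneNumber);
            }
        }

        // Bulk import reuses the same code with askBeforeSending=false so the user
        // isn't prompted 200 times; every imported contact gets a text anyway.
        void importAll(List<Contact> phoneBook) {
            for (Contact c : phoneBook) {
                addContact(c, false); // Boom: 200 "welcome" messages at 6 AM.
            }
        }

        private boolean promptUser(String question) { /* hypothetical UI */ return false; }

        static class Contact { String phoneNumber; }
        interface SmsSender { void sendWelcome(String phoneNumber); }
    }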


Of course it was a bug. It seems clear to me they were experimenting with some texting functionality where Path would notify users of things, it made it to production with bugs, and some subset of users got this terrible behaviour.

Never ascribe to malice that which is adequately explained by incompetence.


It would need to be about three bugs. First, there's the bug where users who say that Path can't have their address books get their address books sent to Path. Second, there's the bug where their servers think the user has photos that he does not have. Third, there's the bug where their servers think the user wants to share these photos with Path's list of phone-number contacts for that user.

This does not seem like the most likely scenario to me.


What are the reasons that make you so sure, though? To me, it's not clear at all just from the symptoms that it is a bug. It could just as well be a feature they're only testing out on 0.1% of the user base to see how their engagement or whatever improves.


Maybe I'm naive, but it seems just so insanely bad that I assumed no product manager or engineer would intentionally cause this behavior.


I am sure you've already done so, but in case you haven't, read about Path's previous snafus.

Recently fined by the FTC: http://www.pcworld.com/article/2026985/ftc-fines-maker-of-pa...


Collecting and using my private info to, say, recommend friends in their UI is something I could imagine a product manager doing on purpose. Creepy, but you can see why they would think this is helpful. Spamming every number in my phone book with texts about photos that don't exist at 6am is not something I could imagine a product manager doing on purpose.


Even if that's true, it's a pretty awful bug. I'm not aware of FB or Google ever doing something like this.


I know this is totally off topic, but why does File Expert need to read my contacts? It would be better if Android were able to install apps while selectively denying certain permissions. Another option is to prompt at the point of use, e.g. "send path.com your address book [yes|no]?"
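
(A minimal sketch of that prompt idea, assuming a hypothetical uploadAddressBook action the app would otherwise run silently; the dialog wording is just the example from the comment above.)

    // Sketch: ask in-app before shipping the address book anywhere.
    // The upload action and the wording are hypothetical.
    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.DialogInterface;

    public class ContactUploadPrompt {
        static void confirmUpload(Activity activity, final Runnable uploadAddressBook) {
            new AlertDialog.Builder(activity)
                    .setTitle("Share contacts?")
                    .setMessage("Send path.com your address book?")
                    .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            uploadAddressBook.run();
                        }
                    })
                    .setNegativeButton("No", null) // default to doing nothing
                    .show();
        }
    }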



I hadn't even finished the article before I went to check whether I had uninstalled the Path app!


The problem was that the guy's address book got spammed AFTER he uninstalled the app. Yikes!


Twist also likes to send me messages every time someone in my phone book uses their app (and presumably it tells that person too). I don't remember telling it that it could access my contacts but I suppose I must have.



That is great! Really!

If another dozen cases like this happen, maybe, just maybe, people will wake up and stop installing apps with ridiculous permissions.

That's one of the reasons I use CyanogenMod: I can disable permissions for apps. For example, I removed internet access from Swype. It does crash every time I reboot my phone, probably because it's trying to check for updates, and I know it will crash if it tries to connect while I'm typing. But I'd rather have that than be in the dark about whether my data is secure.


So, did anyone tell you / did you find out that they signed up for Path as a result of the SMS spam? Did it turn out that you really did have photos to share, but forgot?


Looks like the good old HN rush has tanked their server.

Anyone got a mirror?



Is there a way to sandbox an application?

Say I install an app, but I want that app to see exactly 0 contacts when I actually have more contacts than that.
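
(Custom ROMs and privacy tools approximate this by handing the app an empty result instead of the real contacts provider. A rough sketch of the idea follows; this is not something one app can do to another on a stock, non-rooted phone.)

    // Rough sketch of the "show this app zero contacts" idea: instead of the real
    // contacts query result, hand back a cursor with the right columns and no rows.
    import android.database.Cursor;
    import android.database.MatrixCursor;
    import android.provider.ContactsContract;

    public class EmptyContactsStub {
        static Cursor emptyContactsCursor() {
            // Same column names a real contacts query would return, zero rows.
            return new MatrixCursor(new String[] {
                    ContactsContract.Contacts._ID,
                    ContactsContract.Contacts.DISPLAY_NAME
            });
        }
    }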


Of course not, that would make the App Store less valuable to its customers (namely, app vendors).


Always be on offense - never answer your phone :-)


That's what I call a genuine "oh-sh&t" moment.


Thanks for the reminder - just uninstalled Path.


Usually this is what happens when someone gets so caught up in "growth hacking" that they forget about UX.


Growth hackers also wear black hats.


Let's not jump to conclusions too quickly. This was probably (hopefully) a bug.


The whole feature seems like a bug. Even under the best of conditions who would want to text their entire address book that they had photos to share? It's (IMO) socially obnoxious behavior and should be made difficult to do.

Maybe I'm not in the target demographic, but I'm guessing that most people have a mix of contacts in their contact list that conform to different social situations. Not all of them would care to know I had photos to share.


From The Verge's article (currently above this post):

"Path is really best with friends and we really want to help users invite the people that they care about to their Path as quickly as possible," said Nate Johnson, VP of marketing for Path. Johnson said the Path customer service team has reached out to Kenwright, but right now it looks like nothing went wrong with the app.


Happy I deleted my account a while ago. I loved the design back when v2 came out, but...



A term needs to be coined for this behavior: perhaps "PATHology"?


Taking a page from FB's playbook.


I feel like you won't hear back from them. Know any lawyers who would be willing to pen a snail mail letter for you?


That's why it's free.


Didn't Tagged get sued for this?


Wow, they just paid $800K to the FTC for being shady.

They are betting that they become so big that these shenanigans don't matter later on.


They're doing it exactly backwards. The Right Way is to hold off on the massive user privacy violations until you become huge. At that point enforcing the laws against you becomes too difficult and expensive, so the government won't bother; and if some random AG tries to do it anyway you can afford to buy whatever changes to the laws you need to get him off your back. Problem solved.


But nowadays, due to all the competition for emerging startups, you can't become huge until you start invoking privacy violations.

It's a Catch-22 of ethical misconduct!


They're also being threatened with a class action lawsuit for doing what the OP described: http://www.mediapost.com/publications/article/197248/path-su...

I'm not sure the shenanigans won't come back to haunt them. Less trendy apps have been booted from Apple's App Store for a lot less.


The problems with Path stem from its CEO, Dave Morin, not being geek enough. Dave is Christian, majored in economics, focuses on skiing, and worked in marketing and management positions at Facebook and Apple. He doesn't go hard; he is just opportunistically trying to seize a financial position by hiring geeks and being a public speaker in the news. He acts like Steve Jobs while leaving out the crucial prerequisite of actually having smart people like his product.


Sounds like a job for Anonymous


> I decided the best place to contact them would be Twitter

No, you didn't. You decided that you could score some quick internet drama points using Twitter.


This certainly sounds like a bug in their Android app and not a malicious thing that they did. Path is headed up by some smart people. I have a hard time believing that they would willingly blast out texts to everyone in an address book, considering what they just paid the FTC. Then again, it's possible that it was user error while trying to uninstall the app; there certainly is an option to invite people to Path right there in the sidebar.



