Just to be clear, the store selling gift cards had a sign posted stating that it will report an excessive number of cash transactions to the authorities because it is forced to by FinCEN and other U.S. regulatory bodies.
If a merchant fails to report cash gift card sales over the legal daily limit, they can be criminally prosecuted and jailed.
So please don't blame the merchants for this. I can assure you that the merchant doesn't care if someone wants to pay in cash. In fact, they prefer it usually (e.g., cash discounts at gas stations).
What people see as "corporate oppression" is quite often government oppression.
Personally, I think both governments and multinational corporations are engaged in this kind of misbehavior. Indeed, the line between powerful politicians and powerful corporate leaders blurs more with each passing year.
If each side of the political spectrum continues to ignore the fact that both government and large corporations are at fault here, things will never get better.
As the writer of a comment mentioning corporate oppression, I agree with you. The government is a huge part of this. The real false dichotomy is claiming that the government is good while corporations are bad, or the converse.
The government works for the powerful, and the powerful are corporations.
This one, oddly enough, is only possible because of the government: in states where gas stations can give cash discounts, it's only because state law overrides merchant agreements that otherwise prohibit it. Otherwise, credit-card merchant agreements (i.e., corporate policy, not government policy) generally prohibit differential cash/card pricing. As a merchant you could choose not to do business with Visa at all, but if you do business with Visa, the contract terms don't allow you to give cash discounts.
As of January 27, 2013, Visa and Mastercard allow credit card surcharges. Prior to this I believe merchant agreements did allow for cash payment discounts, just not "surcharges", even though they are practically the same thing.
It was a long time ago that I worked a register, but at one point merchant agreements definitely disallowed cash discounts. I knew because on a slow day I actually read the fine print. The boss, correctly, didn't care that their discount for cash was forbidden, but I was young enough to be a little scandalized.
> What people see as "corporate oppression" is quite often government oppression.
I don't know what "quite often" means, but most of the time it isn't. The government is looking for criminals. Corporations collect data to increase profits. Some of the data collection is done to catch criminals, but most (actually, virtually all) of it is done in the name of profit.
The FinCEN threshold is $10,000. The article mentioned buying enough Amazon gift cards to buy a stroller. The most expensive stroller I could find on Amazon in a minute was less than $600. So I don't think we can blame the government in this particular case. Probably just a lazy clerk who didn't feel like selling that many gift cards.
First, the $10,000 threshold doesn't apply to "closed-loop" gift cards, like Visa or MasterCard gift cards, which is likely what the OP was looking to buy (although it's possible she was after Amazon gift cards). In the case of closed-loop cards, the limit is $2,000 a day, not $10,000.
Second, that's a $10,000 limit per day, not per purchase.
Third, as someone else indicated, no clerk said anything to the OP. She just noticed the sign saying that large gift card purchases would be reported to the authorities, which would make me uneasy as well (how much is "large"? to what authorities?).
> both government and large corporations are at fault here
Small companies can be equally guilty of misusing data. The best example is credit reporting data, which is often used to screen out job applicants and creates a downward spiral where falling behind on your bills due to a job loss harms your credit which prevents you from finding another job. This has nothing to do with big multinationals.
We've gotten so used to the advice about "building credit" that we've forgotten that it's really corporate surveillance. But at least it's regulated, unlike Facebook.
In the financial services area (brokerage accounts, etc), there are real consequences for employees in the industry who don't comply with anti-money laundering laws.
If "Abdul al-Qaeda" makes hundreds of all-cash deposits under the $10K reporting limit into his brokerage account, that still represents a potential red flag. Employees who are servicing that account could face serious penalties (from the government) for their failure to look into potential money-laundering for nefarious purposes. The laws are written vaguely, so such employees can never be 100% certain that they are in compliance.
Great point. Instead of passing new privacy laws that will limit consumers' choices, let's first get rid of the laws blocking those consumers willing to trade some convenience for some privacy.
> For example, seven months in, my uncle sent me a Facebook message, congratulating me on my pregnancy. My response was downright rude: I deleted the thread and unfriended him immediately. When I emailed to ask why he did it, he explained, “I didn’t put it on your wall.” Another family member who reached out on Facebook chat a few weeks later exclaimed, “I didn’t know that a private message wasn’t private!”
Ordinary people have no idea what they're actually trading for "free" Facebook, "free" Google, "free" Chrome, etc. Heck, I'm technologically savvy and I don't really understand either.
I'd be interested in knowing what Facebook's privacy policies are regarding private messages. Are they data mined? Did the congrats on the baby message result in baby related advertising?
I suspect Facebook treats them as "direct messages" (i.e. there's no illusion Facebook doesn't know the contents) and not "private messages", if their normal attitude toward privacy is anything to go by.
Bearing in mind there are Facebook advertisers unsophisticated enough not to bother filtering on "interested in" before showing dating ads, it seems far more likely that she'd be shown a baby ad coincidentally.
Email sent from something like GMail is really no better. Google can associate keywords with the recipient as well as the sender. They can't show ads to the recipient directly, but if they can associate that email with an identity elsewhere...
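As a toy illustration of that mechanism (not Google's actual pipeline; the names and keyword list are hypothetical), associating keywords with both parties of a message is trivial once a provider sees the body:

```python
# Illustrative sketch only: a toy model of how a mail provider *could*
# associate message keywords with both sender and recipient identities.
# All addresses and the keyword list are hypothetical.
from collections import defaultdict

INTEREST_KEYWORDS = {"stroller", "diapers", "crib", "ultrasound"}

# interest profiles keyed by email address
profiles: defaultdict[str, set[str]] = defaultdict(set)

def ingest_message(sender: str, recipient: str, body: str) -> None:
    """Attach any interest keywords found in the body to BOTH parties."""
    found = {w for w in INTEREST_KEYWORDS if w in body.lower()}
    profiles[sender] |= found
    profiles[recipient] |= found  # recipient is profiled even if they use another provider

ingest_message("aunt@gmail.example", "parent-to-be@selfhosted.example",
               "Congrats! I saw a great stroller and some diapers on sale.")
print(profiles["parent-to-be@selfhosted.example"])  # {'stroller', 'diapers'}
```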
(Karma burning time. Summary: The only way to not swallow the Blue Pill is to make powers that be use trusted execution made by and for the people. Unfortunately: Not going to happen.)
It’s time for a frank public discussion about how to make personal information privacy not just a series of check boxes but a basic human right, both online and off.
The problem is that there is no suitable infrastructure for guaranteeing such a right, and there is no one everyone trusts enough to implement it. To have this in actuality, we really need some sort of trusted execution environment. Unfortunately, this is the same thing as DRM, and we already know how manufacturers and big software companies might collude with the authorities to turn this against us.
In a way, this is a win-win for totalitarianism, big-data style. If the powers that be can implement DRM as they want, they control computing from the substrate-up and we've all swallowed the Blue Pill. However if the public has a vehement reaction against DRM, then the population is effectively inoculated against implementing such tools for themselves so they can hold authorities and website operators to account. Either way, the oligarchs/equity lords/0.01% win.
If you look at the activities of groups like the IRA, resistance has relied on the power of social networks in the oppressed population. Now, online social networks have embedded the structure of people's social networks in big data itself. That is a lot of aggregated power. Power corrupts.
EDIT: After engaging in the discussion below, I'll bend my position a bit. Such infrastructure would be nice (where "nice" = "really the only sane way to do it") but a social and legal framework to set expectations might be the pragmatic thing to go after first.
This is totally false and a distraction. We don't need a trusted execution environment. We just need rights enshrined in law.
Rights do not have to be guaranteed by bulletproof technology in order to be valuable. We might not be able to guarantee that corporations never overstep the bounds we set, but the threat of a lawsuit can be a powerful disincentive.
> This is totally false and a distraction. We don't need a trusted execution environment. We just need rights enshrined in law.
From a certain POV, this is correct, and I don't disagree with the general aim.
> Rights do not have to be guaranteed by bulletproof technology in order to be valuable.
Entirely correct. What's of more concern is that we not only have a general public, but an "expert" population of programmers and other technical workers that doesn't make decisions about these things in an informed rational way. This is why I get on the soapbox in support of trusted execution environments and point out how they can be used against the authorities.
> the threat of a lawsuit can be a powerful disincentive.
But I would argue that the "trusting trust" problem encompasses not only privacy and security, but political power as well. When transparency itself is strictly voluntary and comes into competition with the temptation of money and great power, we've seen that such a deterrent loses some of its potency. The threat of a lawsuit also doesn't work as well against certain governments in certain contexts.
You may well be right that technology is a distraction, for now. The social and legal framework might be the thing to establish first.
Sure - some kind of trusted execution environment (which I would argue we don't understand well yet) would be an ideal solution.
But... a trusted execution environment would be no use if it wasn't mandated, so it is secondary to having rights enshrined in law.
And yes, there is a trust problem. But I think it would be very difficult to operate a massive data collection empire on the scale of Google or Facebook without at least one whistleblower exposing it.
That's nonsense. If anything, governments are guaranteed to break their own laws whenever it's most convenient for them. We absolutely need a communication system that cannot be governed by any government. It does not need to have an altruistic DRM implementation to work. It needs security so pointlessly difficult to break through that it'd be cheaper and easier to do analog surveillance.
> governments are guaranteed to break their own laws whenever it's most convenient for them... It needs security so pointlessly difficult to break through that it'd be cheaper and easier to do analog surveillance.
In that case, you also need "altruistic DRM" -- otherwise private data gets into the wrong hands and bad actors do with it what they will. You're sounding like a loud disagreement in tone, but the words themselves actually spell out strong support for my idea.
I'll admit that I didn't think of a potential good actor serving as the DRM linchpin, one that hides from the government and is as untouchable as an incorruptible trillionaire. The reason I think this way is that unless you're a radical/extremist in your principles, you're vulnerable to exploitation by the authorities and by any bad actor more powerful than you are.
Which probably also tells you that I think people are generally bad actors by nature, due to greed and a lack of integrity when endowed with a position of power. Any system designed on the notion that people are generally good actors tends to be exploited and corrupted over time.
"Are you a good actor, and if so, how do you know?"
Perhaps? I can't know, because I've never been in a position that would reveal my lesser traits. I've never given in to logical fallacies under pressure from multitudes of bad actors who would love to see me fail so they could get what they want. Which is why I've commented before that I think government should be designed in a way that forces bad actors to be good actors, by encouraging them to follow their natural selfish instincts in positive rather than negative ways.
Every time there is a debate about the intrusiveness of Google, or Facebook, there is a chorus of people around here saying "If you don't like it, don't use it" in defense of some kind of capitalist or libertarian ideal.
Hopefully this article helps to put the lie to that kind of thinking, and we see this pervasive intrusion for what it is: corporate oppression; the abuse of corporate power over individuals.
Of course there will be pain involved in giving up Google or Facebook. There's a reason millions of people use them. By choosing not to use them, you are saying, "I value my privacy more than the convenience I gain from things like Google, Facebook, or even credit cards."
What I've found is many people like to complain about a lack of privacy, but are willing to make no real sacrifices to get more of it.
I don't see the "lie" involved with saying "If you don't like it, don't use it". It seems more dishonest to advocate that there is a path towards retaining privacy that still allows people to use services that intrinsically involve the person turning over information about themselves to a company.
> "What I've found is many people like to complain about a lack of privacy, but are willing to make no real sacrifices to get more of it."
This is a false trade-off. Why should people have to make sacrifices to gain privacy? If people believe it's important then they'll advocate for it but that doesn't mean they should become digital hermits to prove a point.
For example (if you're a US citizen), what have you personally had to sacrifice for the rights that the Constitution provides? I doubt it was much but the people who wrote it decided that 'this is how things should be' and set things up that way. Technology is moving so fast that it's the corporations who get to 'decide' things and (understandably) they default to wanting everything. People and the law are still catching up to the consequences of this.
>This is a false trade-off. Why should people have to make sacrifices to gain privacy? If people believe it's important then they'll advocate for it but that doesn't mean they should become digital hermits to prove a point.
But why would a privacy minded person even be willing to give third parties their information? Once it's out of your hands, it's out of your control. Forget the sale of information by Facebook or Twitter. What if they get hacked, and their entire database gets exposed to the world? You can't control that.
This is why, even before the internet, those who wanted complete privacy turned into literal hermits.
If one is truly concerned about privacy, they shouldn't make that information available at all. So no Facebook, Twitter, or use of Google services.
Privacy is like cryptography. You're private/secure until you're not, and there's nothing you can do once it's been broken. So while cultural or legal agreements or frameworks might provide some recompense, they will not stop the initial problem, which is the revealing of the information.
The best way to remain private is to ensure that you don't tell anyone else the information you wish to be private at all.
But corporations aren't the only problem with privacy. There are social problems, governmental problems. The author's uncle sent a public message saying that she's pregnant on Facebook. That's not Facebook's fault at all, and it can still be a problem.
Non-out LGBT people have problems with this stuff constantly, not only with Facebook screwing up and accidentally revealing information that was set to private, but with getting publicly outed by well-meaning friends or family before they're ready to out themselves.
Again, the damage is done when the information is revealed. You have to remember, for a lot of people the stakes aren't just avoiding a pile of baby-related coupons and spam in their e-mail. It is literally life and death for some people. While yes, we can punish the release of the information, it does not stop the damage from happening when the information is released. We should have stricter privacy protections, but at the same time, we need to have a conversation about what privacy actually means, and what the risks are for revealing information to someone else.
>Why should people have to make sacrifices to gain privacy?
You're not being asked to make sacrifices to gain privacy, you're being asked to give up privacy for the sake of convenience. There's a massive difference. You have no right to use Facebook, and they have the right to ask of you almost anything they want in return for letting you use it. It's like making the claim that you have to pirate digital media because you can't buy it otherwise. You don't have a right to it in the first place, so you're not being forced to do anything. You're making a choice, and then trying to come up with a reason to morally justify your choice.
Is it inconvenient to give up your digital life? Yes. Is that a conscious choice by multiple different actors? Yes. Do you have a right to use Facebook and Amazon without them tracking you? Maybe you should but right now you don't, at least not in the US. The alternative is to not use Facebook, not use Amazon, not use Gmail. It's possible. But don't make the claim that you are being forced to make sacrifices. You're entitled to your privacy, you're not entitled to use Facebook.
People are not really being asked to give up their privacy. Or rather, they don't fully comprehend what they're giving up, so I'd argue that it's not at all a conscious choice for many.
Moreover, I find this kind of attitude strange. All those rights that people do have were hard-won. The right to vote, freedom of speech and association and many others. People fought for them because they were living in environments where they didn't have them. Sure, they could have moved and left everything they knew, but instead they chose to try and change things. What would the world be like if they hadn't? How does social progress take place?
So change it. Maybe this is the first step. But the attitude I find strange is anyone claiming they have both the right to privacy and the right to use Facebook. That's just not true. It could be in the future (Facebook is already regulated by the government in terms of privacy settings), but people tend to think the restrictions we put on the government apply to corporations as well. They don't. You can downvote this, but it doesn't make it less true.
Facebook is only entitled to use data on us however it sees fit because we allow it to. There is no reason that a culture should not restrict such activities by law if they are detrimental.
Which is why I said "Do you have a right to use Facebook and Amazon without them tracking you? Maybe you should but right now you don't, at least not in the US."
It could change. But right now, you don't have that right. Facebook is not the government, and it owes you nothing.
Define "we". How many people are demanding Facebook change? What percentage of their userbase, and what percentage of their income? How much will Facebook stock decrease if they say "no"? The answer is none, as evidenced by the fact that this is the status quo.
Listen, I don't want to defend big corporations or their sleazy ways. What I'm saying is, no one is entitled to use Facebook. Thomas Jefferson didn't demand states recognize Google's privacy policy. Amazon isn't enshrined in the Magna Carta. So don't talk about sacrifice like you're losing a leg. No one is pointing a gun at you and demanding you post to Twitter.
This is a total straw man. The issue here is about tracking by advertisers seeking to discover personal details about you. Avoiding this is not simply a matter of just not using one product. It's pervasive across all major services on the internet.
Having a severely curtailed access to the internet is most definitely a sacrifice in modern life.
The lie is that these services intrinsically require that the companies be able to use this information for their own purposes. They only get to do that because people defend this behavior as acceptable.
Also - you seem to be saying that if you want privacy you should be willing to make the sacrifice of behaving like a criminal and stressfully hiding from observers on a daily basis. I fail to see why you would want society to be this way.
>What I've found is many people like to complain about a lack of privacy, but are willing to make no real sacrifices to get more of it.
At some point in the lifetime of a technology, choosing not to use that technology is tantamount to choosing to not participate in society.
Humans are social animals. Depriving humans of human contact is one of the worst, most barbaric punishments, and will cause insanity in a short amount of time.
The choice isn't Google or privacy. It's panopticon or solitary confinement.
In the case of Google the search engine, not necessarily. You can still reap the benefits from Google without having your data (minus your HTTP request, of course) recorded by using services like Startpage, which proxy your queries for you.
In the case of Google the platform, perhaps. The main attraction is YouTube, assuming you're not working for a company that uses Docs, Hangouts, or the like. While they can build a profile of you from your search queries, if you take the necessary precautions to make sure they're going to a bogus profile (and don't do any vlogging, plus limit your commenting, obviously), you'll be fine.
Avoiding smartphones is easy, because they're not a necessity. Prepaid dumbphones are cheap and disposable. Smartphones are walled gardens anyway, so unless you want to develop apps, they're not of much use. It goes without saying that you will not be using Snapchat, Vine, Instagram, or any services like that. That's hardly being deprived of contact.
It's certainly possible and manageable to opt out, you just need perseverance and a certain technical inclination.
To stay truly private, one should also stop using email.
Even if one doesn't have a Gmail account but runs a self-hosted email server and whatnot, there's a big chance most of their friends will only use Gmail, so Google can gather lots of data anyway.
One of the reasons it's a lie in Facebook's case is that other people will turn over information about you to those companies even if you don't use their services in any way. I can't not be on Facebook, even though I closed my account 5 years ago and block it in my browser.
> The point is that you should not have to make sacrifices for privacy.
I don't see a realistic way for that to happen. Sure we could change things, but there will be a sacrifice by someone in the system as a whole.
For example, we could pass laws banning targeted marketing. That will make advertising less valuable, and the companies that sell ads will get less revenue, because surely an advertiser won't pay the same to show a diaper ad to a pregnant woman as to show it to a random sampling of society? So now my ad-supported website will make less showing ads. I'll have to compensate somehow. Maybe I'll start charging for access to make up the difference. Some people will be fine with that -- they'd rather pay more for privacy. Others won't. This kind of solution would improve privacy. But there is still a cost.
Some people won't mind the cost. Others would rather pay by losing privacy instead. Passing laws takes away that choice from the consumer. Personally, I'd rather decide for myself than have someone make that choice for me no matter how much they think they have my interests at heart.
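To put rough numbers on why the ad revenue would drop, here's a back-of-the-envelope sketch; every figure in it is assumed for illustration, not industry data.

```python
# Back-of-the-envelope sketch of why a targeted impression is worth more.
# Every number below is an assumption for illustration, not industry data.
value_per_conversion = 40.00          # assumed profit on one stroller sale

untargeted_conversion_rate = 0.00005  # random audience (assumed)
targeted_conversion_rate   = 0.0005   # audience known to be expecting a baby (assumed)

def max_cpm(conversion_rate: float) -> float:
    """Most an advertiser can rationally pay per 1,000 impressions."""
    return conversion_rate * value_per_conversion * 1000

print(f"untargeted: ${max_cpm(untargeted_conversion_rate):.2f} CPM")  # $2.00
print(f"targeted:   ${max_cpm(targeted_conversion_rate):.2f} CPM")    # $20.00
```

Under these made-up numbers, banning targeting cuts the rational ad price by 10x, and that gap is what an ad-supported site would have to make up somewhere else.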
Yeah, but this happens with any change. It's the same with the minimum wage, for example. The argument is, "Well, if my business is required to pay a living wage, I'll go out of business because people won't pay more for my product or service." Yep. If your business can't follow the law, you will likely go out of business trying to. And laws change, and that can affect your business. This is the same as it's always been.
If your business goes away, some other business that can sustain itself under the law will pop up there (possibly in the same area of business, possibly a completely different one), or the economy will start tanking because no business can sustain themselves due to the new laws.
Sometimes it's not a good idea to leave the choice up to the consumer. (for example: If you stop allowing us to keep slaves, we'll have to pay workers and prices will rise!) Sometimes there are bigger considerations. I feel that privacy is one of those, personally, but it's valid to disagree with me.
Google, probably. Facebook, not so much. It could disappear tomorrow and people would get on with their lives much as before. Though cheap hosting providers and flickr would likely see an uptick in usage.
So what? Apart from an ideological view of "privacy is good, mmkay," how exactly does it hinder your life if Google knows that you are pregnant, or if Facebook knows that you are planning a party for a friend? Or if company X knows you are anticipating movie Y?
What concrete things do I gain from opting out? What do I lose? How does it actually affect my life?
Edit: It's weird to receive downvotes for this as I'd assume a comment like this exactly contributes to the discussion about the topic at hand.
As an example: what if you're gay, or a pregnant teen, or into the swinging lifestyle, or a liberal, or any other thing that you might not want people to know?
The article cites an example of how Target outed a pregnant teenager to her family by starting to send her ads for diapers and baby supplies. Her father was angry at Target (because he thought they were promoting teen pregnancy, and not just reacting to it) until he found the truth.
What if she lived in a household where her family would kick her out for having gotten pregnant?
Another example of data leakage: when Google Reader decided to go 'social' and let you share your feeds, they turned on public sharing by default; I read one blog post afterwards about a man who followed several liberal blogs even though his family was extremely conservative. As a result, his family saw that he was reading 'that trash' (paraphrasing) and they ended up having a huge argument (and, AFAIR, making their next holiday awkward and uncomfortable).
It's also not hard to imagine targeted advertising popping up on your computer, making other people (family members? coworkers?) jump to conclusions (correct or not) about you or your lifestyle, or violent spouses noticing ads for abuse counselling based on your google searches.
Even aside from the 'it's not their business' angle, these automated data harvesting and management systems are completely impersonal; they have no idea how much you share with others already or what you don't want other people to know. When even following a liberal blog can get you in an argument with your family, there's an argument for not trusting companies implicitly to handle your data in a responsible manner.
So to answer your question, what you gain from opting out is control over your life, information about yourself, and how other people perceive you.
On the other hand, it's interesting that the root of the issue (according to your examples) is not that the companies are doing evil things, but that people are intolerant. Would privacy be such an issue if being gay/straight were no more interesting than your shoe size?
Because it's clearly not about control of all information about yourself, but control of very specific pieces of information about yourself that are culturally deemed private, which can be identified and accounted for.
Not just people. Insurance companies can be surprisingly intolerant of people with preexisting conditions. Companies in general can also be intolerant of their own employees searching for employment elsewhere.
>Would privacy be such an issue if being gay/straight was no more interesting than your shoe size?
Yes. Until we are a collective utopian hivemind, there will always be a need to keep secrets. You cannot account for all the possible reasons.
This. You never know what could come back to bite you. You can apply for a job, and one thing you said years ago doesn't sit well with whoever is hiring. Some crazy bastard becomes a cop, decides to look you up on Facebook when he's back in his car writing a ticket, and it turns out he has a thing against Jews (Godwin'ing!). Once the info is stored, there is no getting rid of it. Best not to store it to begin with.
But why is privacy not enough? Why should I have to convince anyone else that privacy is important? If they don't care, then they don't have to be private themselves, but don't force it on other people.
If you want an actual reason, then just look through the history books about how governments were using private information against their citizens (WW2 being the best example).
"Oh, you look forward to watching V for Vendetta? New stats show that 90% of violent anarchists have watched that movie. Please come with us, citizen."
People that are of a sexual persuasion that their employer disapproves of. Victims of domestic abuse. People with stalkers. Bunch of other example. But really I think the author is speaking about the benefit of not, well, what the article's title says.
> What concrete things do I gain from opting out? What do I lose? How does it actually affect my life?
These are for profit businesses and both are ultimately controlled by investors. What's to stop Google or Facebook from giving that data to your insurance company so they can increase your premiums? Or what about the case a few years ago where a girl was pregnant and her father found out when Target started sending them targeted ads and spam?
And keep in mind they not only collect and use this data but NEVER delete it. It's their data, and it's their data for as long as they decide to keep it. What you gain is the ability to say or do something without being forced to think, "In 25 years, will this be used against me?"
One possible cause of downvoting: devil's advocacy. The devil seems to be able to advocate well enough on his own, and there is enough opportunity for serious discussion that having to argue with people who don't even believe what they're saying is irritating and tiresome to many.
I, on the other hand, would rather they be ads for things I _don't_ want, so that I'm not tempted to spend money on crap I don't need but have been manipulated into wanting more than I already do.
Piracy is ALWAYS an option.
And piracy guarantees privacy in viewing. It doesn't go into Redbox's DB, Netflix doesn't see what I watch, nor does Hulu, nor even YouTube/Google. Nor will those Internet-connected Blu-ray players tattle on what I've seen.
I'm sorry you have to live in a world where you are forced to watch things you don't want to, but I guess that's the endpoint for our society so we should just learn to accept it.
Digging through the other pieces of the story to the headline nugget, we find this:
"""
But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”
"""
So it's an interesting point, and it's worth noting that we are leaning very heavily toward assuming that people who want to be anonymous are doing so for criminal reasons. This could be partially mitigated, perhaps, if there were a way to give cash to Amazon directly (but it's probably the case that large cash transactions would be looked upon with the same scrutiny).
They threw that in there implying that it was difficult to buy gift cards with cash.
In reality, those limits are pretty much the same as for any other cash transaction. You can buy a few hundred to low thousands of dollars of gift cards with little or no oversight. Unless you only engage in peer-to-peer cash transactions, any pattern of moving more than a few thousand dollars within a rolling 30-day period is an activity that eventually gets reported under money-laundering laws.
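To make "pattern of moving more than a few thousand dollars" concrete, here's a minimal sketch of the kind of rolling-window check a compliance system might run; the $10,000 figure is the reporting threshold mentioned elsewhere in the thread, while the 30-day window and the flag logic are assumptions for illustration, not an actual rule engine.

```python
# A minimal sketch of rolling-window aggregation a merchant or bank *might*
# run to flag "structured" cash purchases. The threshold comes from the
# thread; the window and flag logic are assumptions for illustration.
from datetime import date, timedelta

THRESHOLD = 10_000          # dollars
WINDOW = timedelta(days=30)

def flag_structuring(transactions: list[tuple[date, float]]) -> bool:
    """Return True if any 30-day window of cash purchases sums past the threshold."""
    txs = sorted(transactions)
    for i, (start_day, _) in enumerate(txs):
        window_total = sum(amt for day, amt in txs[i:] if day - start_day <= WINDOW)
        if window_total > THRESHOLD:
            return True
    return False

# Ten $1,100 gift-card purchases spread over a month still trip the flag,
# even though each individual purchase is well under the reporting limit.
purchases = [(date(2014, 5, 1) + timedelta(days=3 * i), 1_100.00) for i in range(10)]
print(flag_structuring(purchases))  # True
```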
It's long been a trope of dystopian (even utopian! see also: gold pressed latinum in Star Trek) scifi that cash is relegated to the shadier transactions, with legitimate transactions happening in the digital realm.
That's not science fiction; it's anthropology (see: Graeber 2011 ;-)). Most societies have seen fit to use some form of credit (or other "virtual money") for more trustworthy transactions while employing "cold, hard cash" for transactions between individuals with more transient, uncaring, insecure relations (which of course includes criminals).
The problem is that "trustworthy" has come to mean "watched over by our corporate overlords".
Agreed. Bitcoin provides anonymity and people that use it for political or convenience reasons are sometimes looked at suspiciously. But, if I go into Walmart and pay for $150 in groceries in cash, no biggie.
Services like SpiderOak that include anonymity as a feature do, to a small extent, a disservice to the otherwise laudable principle of "can't I be anonymous sometimes without being a criminal?" Maybe zero-knowledge features would be looked upon less suspiciously if they were just included, without being advertised as a feature. I think you get my gist.
Without Zerocoin, Bitcoin is NOT anonymous. And acquiring bitcoin anonymously is rather difficult. A local meetup for cash? The other person can record audio/video or track you. A deposit at a bank? The precise date/time and camera footage are recorded. Mailing cash might work, if you're cautious about DNA, fingerprints, printer fingerprinting, etc.
But mailing cash reveals your general location and in less populated places might be enough to find your identity. And in larger areas, if you mail cash often, you can probably be correlated from video with sufficient technology.
With Zerocoin the problem is mostly solved, from what I can tell (so long as there's sufficient volume).
Yet another outcome of the War on Drugs (as applied against some people; results may vary by skin color and by whether a corporation sold you the drug or it came from Colombia).
Buying giftcards with cash is effectively the same deal. Why would anybody shop at a store where the operator states they might arbitrarily call the police on you?
> Why would anybody shop at a store where the operator states they might arbitrarily call the police on you?
Why indeed, but that's not what's happening here. Regulation requires reporting in specific situations, like large cash transactions. It's not arbitrary on the store's part, and it's going to be the case across the board in the US and (I'd imagine) most of the developed world.
Disconnect[0] is often touted as a more privacy conscious alternative. I haven't investigated it thoroughly, but it's open source so you can look into it yourself pretty easily.
Staying completely off the Internet goes quite a ways toward that goal.
But I'm pretty sure it's hard to conduct business on the Internet (with any of the major public companies, such as Amazon) and be anonymous in today's world.
They have rankings and tracker lists and stuff that you can use, I'm not sure if the data is bundled with it initially or if you have to download it from them.
We have significantly more privacy in the modern era than people did hundreds of years ago. Most of us live in cities large enough that no one will know that you've been to the doctor, what you bought from the butcher, or who you spend your time hanging out with. If you think it's easy for people to find out that someone is pregnant now, it was absolutely effortless in 1714. It's common for people to spend days commuting to work and going out to eat and socialize without running into anyone they know. In the past there was almost no privacy for anyone. People lived in a world of a few dozen people and saw many of them every day.
Are we moving into a scary new era of data collection and prediction, or cycling back to an old world where it's hard to keep secrets because everyone knows everyone else? Big Data is no more of a threat to privacy than Small Town used to be. But in modern society we have the option to opt out. Or at the very least, use Tor or private browsing and avoid social media. We have the ability to live with a level of anonymity that our ancestors would deeply envy. The only thing stopping us is our own desire to participate in services and online communities designed specifically to collect and broadcast information about ourselves.
Why do we do it? Because we really like it. Our cultural roots lie in societies where supposed secrets traveled fast and everyone knew if someone was pregnant or gay or searching for information about strange topics. Even when we're able to hide ourselves in crowds of strangers we invent new ways to connect to each other and broadcast our statuses and purchase habits to as many people as possible. Everything old is new again, and instead of trying to hide on social media we should focus on figuring out how to take advantage of new technology to give ourselves the level of privacy that we want. Modern data collection presents challenges, but challenges that we can overcome to create a world that is better than the one we came from.
Uh, did you read the whole article? The author's point was you can't opt out and not seem like a criminal. This was because if you wanted to stay anonymous online and actually buy things, you need to purchase gift cards with cash. The place she went (or her husband did) to purchase multiple gift cards had a sign that said something like they're watching people that buy a lot of gift cards.
Your response sounds more like a marketing brochure to me. While I love tech and am in the tech world, the fact is, my private info, and everyone's, is out there for anyone to buy and for many companies to sell. Sure, we can make tools (some day) to put the equation in our favor, but this is not that day. Simply saying we can use Tor/private browsing/avoid social media is only part of it. Actually buying things with anything other than cash leaves a trail, and that info is sold to whoever pays.
Regardless of whether you could easily find out in the 1700s if someone was pregnant, the article was not a commentary on the past. It was about opting out in the present, which is pretty much impossible if you use any kind of connected technology in your daily life. Be it your ATM/credit card, phone, or internet connection, they all leave trails.
Sure, in the 1700s a butcher would see your face and maybe remember it, maybe retain enough info to track you down if need be. But now the record is effectively permanent and absolute. The more everything becomes digitized, the less anyone can be anonymous. You'd have to really be in the spy mindset to cover your tracks and do the things that would need to be done to get false IDs, etc. And I don't know how much that would even help as machine learning and face recognition (coupled with CCDs everywhere) get better and could probably link up those fake IDs.
Honestly, I don't care much because I'm not a very important or interesting person. So in a sense that way, I have some anonymity. And I love tech (still). Interesting experiment from the author, but we've already crossed the privacy line and there's no going back or opting out.
> The author's point was you can't opt out and not seem like a criminal.
But why is that? Let's face it, if you care very deeply about INFOSEC then you're probably someone who actually has something to hide from people (criminals, military, etc.), or someone who cares about the principle of the matter.
If you're the latter then you'll correlate yourself with the former whether you like it or not, and while we all agree that correlation is not causation, it is something that might get you examined at the very least.
Remember, police can't psychically predict who among the population has actually committed a crime or not when it happens, so they have to do "good old fashioned police work" (as I've seen it described in NSA threads) to investigate suspicious activity and verify it's innocuous, to determine who in a possible set of people actually committed a given crime, etc.
By behaving in almost the same fashion as actual criminals you make it that much harder to be discriminated out of the "possibly criminal" set early in the course of an investigation.
In this case the only thing she mentioned that seemed criminal was converting cash to gift cards to shop anonymously on Amazon — exactly the kind of thing a criminal would actually need to do to procure materials for a crime, if they wished.
Is this unfortunate? Absolutely! But so are many other tradeoffs society makes, even down to waiting at a red light even when the intersection is actually clear. If it were easy to separate criminal activity from other activity we would hardly need police or forensics.
I just realized that the same argument could be made regarding gun control.
> But why is that? Let's face it, if you care very deeply about GUNS then you're probably someone who actually has to protect from people (criminals, military, etc.), or someone who cares about the principle of the matter.
I think it all boils down to the amount of trust we put in our governments. Both guns and privacy are protective measures against corrupt governments. In today's world, privacy is probably an order of magnitude more important, and it comes with fewer drawbacks than guns, as it would be impossible to organize resistance while under today's surveillance.
Yeah, there are a lot of parallels. But for every person who is genuinely concerned with government overreach and that alone, I see 50 more Bundys who use government as rhetoric to play up anti-social/selfish ends.
Interesting points, but I don't think the lack of privacy in a small community is very similar to the lack of privacy in a large surveillance ecosystem. If the concern is over the ability to have secrets at all (i.e. to know facts that are unknown to others) then the metaphor works. The problem is that it's not about personal insult or embarrassment, it's about a larger power disparity.
In a small town, the lack of secrecy is (generally) mutual. People are aware of the reputation they gain and can usually address any rumors or allegations directly. Furthermore, there's still the option to physically move away, and one can presume that there will be other communities to join while leaving behind social baggage.
When it comes to the internet, the metaphor breaks down. Most people are (by design) unaware of the degree or even the presence of surveillance around them. The parties collecting the information (corporations, government agencies, and a few individuals) remain unknown, and their motivations reach far beyond those of rumormongers or gossips. Because they don't share in the same small community, the powers that watch us don't have to fear direct or indirect repercussions from the ways they use our data for or against us. This should be much scarier than any threat of individual antagonism or public shame.
There's also no effective way to 'leave' the internet. As this article illustrates, it's difficult or impossible to avoid revealing even a single specific piece of information, let alone your whole identity. It's more than the fact that we "really like" our involvement in the online community, it's a necessity for the vast majority of us to sustain ourselves socially and financially. We will inevitably reveal many things about ourselves by our actions online, and this data will never disappear.
I agree that we need to move forward from the way things are rather than trying to back out at this late hour. However, I very much disagree that this is in any way comparable to the way things have been, or a reemergence of previous societal trends. This is a new problem created from our many advancements, and we need to find new ways to address it.
> Big Data is no more of a threat to privacy than Small Town used to be.
That's both true and irrelevant. As any nerd who grew up pre-Internet in a small town can tell you, small towns are absolutely stifling places. One of the benefits of the Internet, for me, is that it allowed me to escape the ultra-conformist Midwestern suburban environment I grew up in and allowed me to communicate and interact with people who were (and are) different and not afraid of being different.
> Our cultural roots lie in societies where supposed secrets traveled fast and everyone knew if someone was pregnant or gay or searching for information about strange topics.
You leave out the reason for those secrets traveling quickly: to most efficiently ostracize and isolate the person with the nonconformist beliefs, sexual orientation, religion, or what have you. This is all fine and pleasant, unless you happen to be the person about whom these beliefs are spreading.
Small towns are small, so a) the number of people who track you is also small; and b) you can go somewhere else. Big data is global. There is no escape.
In small towns, observation is symmetrical. You know who's doing what. But you also know who the gossiping busybodies are, and you know who talks to whom. Big Data is deeply hidden, with complex technical and financial relationships.
In small towns, relationships are, by dint of matching our evolutionary context, reasonably human. Big Data is by its nature depersonalized and abstracted. In person, people being creepy can get feedback that they are creepy. With Big Data, though, the only direct feedback they get may be increased profits.
You've made an egregious false equivalency here. I don't care if the people I interact with individually or personally know that I am pregnant. I care that some company I've never heard of is deducing I am pregnant from information I'd like to be private, and then subsequently selling and doing who-knows-what with that knowledge.
>If you think it's easy for people to find out that someone is pregnant now, it was absolutely effortless in 1714.
Was it "absolutely effortless" in 1714 for an employee at a company in China to find out a woman in France was pregnant? Because THAT would be equivalent to what the author of the article is discussing. Or even to make it local: would the sellers in the town market know my personal interest if I didn't tell them? No, they would not. At least, not without espionage, which is essentially what a lot of the "big data" surrounding targeted advertising amounts to.
>In the past there was almost no privacy for anyone.
This is a massive canard. You are taking privacy to mean "not being spotted/recognized/tracked by people around you in daily life" but that is not "privacy" as the author or almost anyone defines it because the privacy issues of today are unique to today. Not only is privacy of the present fundamentally different from the past because of technology, but privacy in this context is about the privacy between individuals and corporations.
>The only thing stopping us is our own desire to participate in services and online communities designed specifically to collect and broadcast information about ourselves.
As others have pointed out, it seems like you didn't read the article. You cannot opt out without becoming Other.
> "We have the ability to live with a level of anonymity that our ancestors would deeply envy."
This is completely ridiculous. Just consider how many pieces of data are needed to take part in society as a normal person: Social Security numbers, passports, ID cards, bank account numbers, and who knows how many other things. Back in 1714, I'm sure it was a lot easier to hide things from people and even lie about who you were. Taking pregnancy as the main example is silly, and equating Big Data with Small Town even more so.
I prefer to go the opposite route. When any of my close friends is having a baby I sign up for all the baby clubs with approximately the same due date, then hand over all the free samples and good coupons to the real expectant parents. Some of our friends and family did this when my ex-wife and I were having our daughter, it saved us a bundle.
This was an interesting article, but isn't the title begging the question? Unless I missed something, no one made any criminal charges, or even raised any suspicion regarding the author's behavior.
When her husband tried to pay cash for several prepaid cards in order to place an anonymous online order for a stroller on Amazon, the transaction was rejected and possibly reported to the authorities as suspicious behavior.
> But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”
All the article says is that there was a sign on the wall. I didn't get the impression it stopped them.
Precisely. As a diehard paranoid, I wanted to sympathize with the writer, but I felt betrayed.
Nothing actually happened. They saw a sign. So the writer gives the impression of being an alarmist worry-wart.
Could this article have been selected for publication so as to fuel the idea that people who are worried are merely overly anxious? The Snowden papers proved that government organizations have been feeding us confusing information to sway public opinion toward obedience to central authority. This may be a good example of it.
Rather than fear being 'outed' by her uncle on Facebook whilst pregnant, she would have been better off deactivating or 'deleting' her Facebook account. Nobody could have tagged her or posted anything on her wall, nor sent her private messages.
You don't dance with the devil if you intend to hide from him.
Facebook already maintains placeholder "Ghost User Accounts" for possible future members, so I see no reason that they wouldn't continue to track associations with an inactive account too.
Indeed, but for 9 months of pregnancy she still wouldn't have feared being outed by her Facebook contacts.
Facebook 'delete' is in itself an oxymoron. I believe that they logically delete everything, and then everything gets analysed and archived for posterity.
As I was deleting my Facebook account last year, I noticed several bugs suggesting that the counts of check-ins and photos did not tally with the things I had deleted, i.e., everything.
I have some screenshots somewhere on imgur I should dig out.
I wonder whether, rather than focusing on completely opting out, the author would have been "more" successful if she had tried a deliberate confusion strategy in addition to minimizing her digital footprint.
(I recognize the difficulty in measuring "successful" outcomes with this metric).
One of the "dirty" secrets of Big Data is that for the analytics to work, the data has to be "prepped".
How about deliberately using different names, genders, etc. on social media and in stores (first-name abbreviations, maiden name, and so on)?
Once the data trail gets muddy enough, you cannot track anything.
My mother has done this her entire life, and passed the habit down to me. She alternates between her married name (from a 35-year-old divorce) and her maiden name, both hyphenated and unhyphenated; she uses intentional typos when she can get away with it, off-by-1-or-2 numbers in registration profiles, old addresses, and alternately combines and separates her first and middle names (because of her particular name, it works).
She's also a data person professionally, and once worked at Acxiom. At the time, when she checked what they had on her, she found that Acxiom thought she was no fewer than seven people, some of them male. :)
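As a toy sketch of why those little variations work, here's what naive record linkage on exact matching does with them; the records are invented, and real brokers use fuzzier matching, but noisy inputs still fragment the profile.

```python
# A toy illustration of why small name/number variations defeat naive record
# linkage: a broker matching on exact (name, street number) keys ends up with
# several "people" for one person. All records here are invented.
records = [
    {"name": "Jane Smith",       "street_no": "142"},
    {"name": "Jane Smith-Jones", "street_no": "142"},
    {"name": "J. Smith",         "street_no": "142"},
    {"name": "Jane Smtih",       "street_no": "144"},  # intentional typo, off-by-2 number
    {"name": "Jane A Smith",     "street_no": "142"},
]

# Exact-match keying treats every variation as a distinct individual.
profiles = {(r["name"].lower(), r["street_no"]) for r in records}
print(f"{len(profiles)} distinct 'people' found")  # 5 profiles for one actual person
```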
There's no contradiction between those two things that I'm aware of. All jobs and all purchases under capitalism play their part in the general oppression of the working class:)
There is the day to day disgust with how directly what you're doing enforces the status quo - but alternatively, there aren't a lot of good (or morally good) computer jobs in Arkansas; ask a Wal-Mart employee.
And how does that affect her ability to do things like get a password reset? My wife used to do a little of that, though not to the extent of your mother. (Mostly just putting in that she was 120 years old on forms with ages, and stuff.) At one point, she lost the password to an account, and to reset it, she had to enter the fake data she had supplied to get it back. She was never able to remember which years she had chosen for which account. Luckily, it wasn't for anything too important.
Also, are there any legal implications to doing something like that? It's probably fine for signing up to a web site, but does she do that on things like auto loans or mortgages? Has it ever been a problem? I like the idea, it just seems like it's inviting trouble.
>And how does that affect her ability to do things like get a password reset?
A lot of this was Before Internet, so you were usually speaking to a person when dealing with the account.
>It's probably fine for signing up to a web site, but does she do that on things like auto loans or mortgages?
I'm not sure if she's ever obfuscated there, but I have no idea.
My solution has been to not use credit thus far in life. At this point, I have 5x the savings I would qualify for as a credit limit anyway, so I guess I've grown into my lack of credit cards over time. I've also paid in full for transportation until lately; now I structure a secured passbook-style loan through my credit union to build up a fake credit history.
I can't help that my credit union knows me, which puts me on a few lists by itself, but they'd be who I would be getting a home or auto loan from at this point.
You can't really get away without changing your name and your country. All you can do is thwart a bunch of machines that don't actually care if they can't resolve the identities of a few people.
Indeed, this works for humans too. You cannot remove information once it enters people's heads, but you sure as hell can spread misinformation to drown it out.
The problem is that there's a very large body of information associated with you that is not under your direct control. In some cases spreading such misinformation is quite illegal (caller ID, email forwarding services), infeasible for most people (GPS location, HTTP referrer), or technologically impossible (email sender information).
Fortunately this is becoming more accepted with the advent of Bitcoin and Tor, however even those services are generally regarded by society as criminal at worst and sketchy at best.
Would it be possible to maximize your footprint, instead of minimizing it? I think it's almost impossible to opt out, as the story says, so I try to get other names associated with my address, get other people to use my grocery store loyalty card, and I let my kids do school homework searches under my Google log-in. I also fill in as many surveys as I can with semi-fake answers, and on Netflix or Hulu (Can't recall which one does this) I alternate checking "yes" and "no" for "Is this ad relevant to you?"
If you create enough "data smog" will the Big Data people be able to penetrate it?
If you create enough data smog the Big Data systems will just give wrong answers. Whether that's a good or bad thing depends on how the data's used.
It seems to be a common misconception that the Googles and Facebooks of the world have someone nefariously looking at your data to profile your every move. No. The average employee at Facebook does not care about you, and even if they have access to your personal data (I know that this is significantly locked down at Google, and probably is at Facebook these days), they don't care who you are or what you do. Instead, they're developing algorithms that mine lots of data from lots of consumers to produce products for lots of consumers. If you put in wrong information, it's basically GIGO. And since the basic models are trained on millions of data points, if just you do it, you only hurt yourself - you're a drop in the bucket compared to everyone's data, but your own personalized interface will be based on the profile you give it, not what you actually want.
You can see this in action with shared Netflix accounts. Share a Netflix account with someone and basically you get garbage for recommendations, because the system tries to make sense of two different "streams" of data that were arbitrarily merged together. (There are ways to separate them out, but they add a lot of complexity to the system.)
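To make the "drop in the bucket" point concrete, here's a toy sketch (standard-library Python only, with invented genres and numbers; this isn't any real recommender): one user feeding in random "data smog" barely moves the aggregate statistics a model is trained on, but it completely scrambles the profile that gets used to personalize that user's own experience.

```python
# Toy illustration only (invented numbers, not a real recommender): one
# "data smog" user barely moves the aggregate statistics, but their own
# personalized profile becomes garbage-in, garbage-out.
import random
import statistics

random.seed(0)

N_USERS = 100_000
GENRES = ["drama", "comedy", "documentary", "horror"]

# Honest users mildly prefer drama over horror, with a bit of noise.
true_prefs = {"drama": 4.0, "comedy": 3.5, "documentary": 3.0, "horror": 2.0}

def honest_ratings():
    return {g: true_prefs[g] + random.gauss(0, 0.5) for g in GENRES}

population = [honest_ratings() for _ in range(N_USERS)]

# One data-smog user rates everything at random.
smog_user = {g: random.uniform(1, 5) for g in GENRES}
population.append(smog_user)

# "Aggregate model": per-genre mean across all users. The smog user is one
# data point in 100,001, so these barely move.
aggregate = {g: statistics.mean(u[g] for u in population) for g in GENRES}
print({g: round(v, 3) for g, v in aggregate.items()})

# "Personalized model": what gets shown back to each user is driven by that
# user's own data, so the smog user's top recommendation is just noise.
print("smog user's 'favorite' genre:", max(smog_user, key=smog_user.get))
```

Merging two honest-but-different users into one account does the same thing to the personalized side: the profile ends up an average that matches neither person.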
Things may be different if you're talking about Palantir or the NSA. They specifically do look at individuals. But then, if they get a bunch of random data their assumption is probably going to be that the person is deliberately trying to obscure their tracks ("tradecraft"), which means that the suspect warrants even further scrutiny.
What happens during wartime, when civil liberties are curtailed? I suspect each company will have to follow the law of the land, which might involve regrettable uses of data. In other words, I don't have a lot of trouble believing that the Googles and the Facebooks of the world will be good stewards of data in normal circumstances (it's in their self-interest, after all).
The other issue worth thinking about is that we are in the midst of a golden age of network building, so the economic models are all viable. What happens 10-20 years from now, when the roving eye of economic development has moved on to other sectors and left the economics of ad-supported SaaS products nonviable? I fully trust Google and Facebook, but if they were ever to be liquidated or sold in pieces, I don't know whether my data would end up in safe hands.
This is a legit concern - there's a long history of organizations doing unscrupulous things with their data & IP when their actual business goes south. (SCO is perhaps the best example.) However, these are usually more self-limiting nuisances than long-term problems. If the business is going south, then it means the organization's actual power is declining, and this limits the damage they can do with their data. They're quite capable of making a lot of noise and annoying a lot of people, but they'd then lack the power to deal with the inevitable blowback.
What would happen if Google goes under in 30 years and sells all the data it has accumulated on all Americans? Let's say they sell it to a worst-case buyer, someone who explicitly uses it for blackmail & extortion. If that buyer operates on a small scale, the damage is by definition small: they can't make much money off it, and they get shut down legally when they pick on the wrong journalist or wealthy individual. (They're in a catch-22 when picking targets: poor, powerless people have no money to extract, while fat wealthy targets will probably fight back hard legally.) If they operate on a large scale, the public outcry will get laws passed faster than you can say "re-election", and they'll get shut down by Congress/lawsuits. Either way it's a no-win for the buyer.
A totalitarian government is more worrying, but the government already has all your personal data anyway.
If they're looking for you, specifically, then yes. It's possible to give the smoke trail an erroneous local maximum, but it's unlikely that you can pull it off.
If they're just looking for patterns of which you are just a statistic, then it'll merely be hit-or-miss; you're unlikely to generate enough "smog" to make it so they never actually tag you correctly.
But that means someone somewhere will know she (or a variation of her name) is pregnant. I arrived in the US only 8 months ago, and for the past 3 months I have been getting two mailings every week from Financial One, a company I've never directly (or knowingly indirectly) done business with. One is addressed to "First M Last" and the other to "First Middle Last". Financial One doesn't care; it simply sends the ad to both versions of me.
People should be suing these companies, as in the example with Target. She should've sued Target for that. Eventually they'll learn their lesson. Look at Google: they've already backtracked on some privacy-invasion stuff because of some recent lawsuits.
I am not putting a value judgement on big data, but I don't think the author's experiment is true to her objective. Instead of testing how easy it would be to opt out of using Facebook or Amazon, she tried to hide something from Facebook and Amazon. She is basically trying to get the advantages of digital life without the downsides. The experiment would have been much easier (and closer to the usual opt-out suggestions) if she had quit Facebook and made her baby purchases in person, from brick-and-mortar stores, with cash. Sure, that is an inconvenience, but it isn't nearly as much trouble as trying to censor your communications with everyone online or trying to launder your baby purchases through multiple anonymous payment methods.
As far as I know, gift card stores like Gyft.com don't require personal information and you can pay with BTC. For example, in the case of the author, she would buy a Target gift card with BTC, then check out at the register with her smartphone.
The limit is $200 per gift card, so it's not very low for normal use. I do this with Amazon and it's pretty easy.
Well, being realistic, I doubt that most exchanges like Bitstamp are sharing their customers' info with other companies, so the original issue that the author was talking about doesn't really apply here.
If you're paranoid, you can use Local Bitcoins or the Mycelium Local Trader app to do in person transactions.
The story about Target outing the pregnant high-schooler has circulated for some time now, but has it ever been verified? I don't doubt Target's capability to discover if a woman is pregnant, but the citation given [1] sounds a bit too anecdotal (and isolated) for my taste. Are there more cases of this happening? Any other sources?
Her experiment, while interesting, does not demonstrate that all of her elaborate steps were even necessary, only that she put a ton of work into doing this and was successful. Who knows how much of that work was actually needed?
Also, nowhere in her article does she say that someone else thought she looked like a criminal. Yes, there was the sign at the store about gift cards, but nothing about the cashier caring at all. She alternates between 'look like a criminal' and 'feel like a criminal'. While I agree she might have felt like quite a badass criminal using Tor and paying for everything in cash and gift cards, the article in no way indicates that others thought she looked like a criminal.
It does, however, show that she views people who perform those kinds of actions negatively: she sees them as actions that make her feel like a criminal and, like many people, she views criminals negatively.
Also, if you have to go buy gift cards to use at Amazon in the first place, why even shop at Amazon? Why not go to your local Babies R Us (or wherever people buy strollers and such) and buy your items with cash there? The convenience of Amazon is lost on me if I have to go to a physical store before ordering something on Amazon.
Just for illustration, in Europe I usually carry around the equivalent of $700 in cash for petrol and other daily expenses. Less hassle, and practically no risk of being robbed. But yes, most people here prefer credit cards.
I think there's a large regional variation. That is, in Germany and Switzerland, the small hotels I've stayed at want payment in cash, and often don't accept credit cards. See http://www.thestar.com/business/2012/07/27/germanys_tough_ba... for example, which describes how IKEA in Stockholm accepts credit/debit cards, but not IKEA in Berlin, which is cash only.
In Germany I once bought a camera at the electronics store Saturn. Again, the store didn't take a card, though there was an ATM in the lobby so I could withdraw the cash for it. This pro-cash view is also why Germany pushed for the 500 EUR bill. Apparently many people don't trust the banks, and prefer to keep their wealth in cash.
In the UK I've found that the taxis basically don't deal with credit cards, while in Sweden, every taxi supports it. Most Swedish banks are trying to get away from dealing in cash.
And I spent a week in Iceland without ever having Icelandic money. Even the local corner stores took my debit card.
Fantastic story. I've been wondering if there is a truly private, easy to use chat. Something cryptographically secure, but easy enough that I could get other family members and friends to use it.
> When a user turns on iMessage, the device generates two pairs of keys for use with the service: an RSA 1280-bit key for encryption and an ECDSA 256-bit key for signing. For each key pair, the private keys are saved in the device’s keychain and the public keys are sent to Apple’s directory service (IDS), where they are associated with the user’s phone number or email address, along with the device’s APNs address.
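For the curious, here's roughly the shape of that setup in code. This is only a sketch using Python's third-party cryptography package, based on the description quoted above; it is not Apple's actual implementation, and the directory-entry dictionary, the example address, and the message are made up for illustration.

```python
# Sketch only: mirrors the key setup described in the quoted Apple document
# (an RSA-1280 encryption pair plus an ECDSA P-256 signing pair, with only
# the public halves leaving the device). Not Apple's implementation; the
# directory_entry structure and example address are invented.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

# Private keys (in iMessage these would live in the device keychain).
rsa_priv = rsa.generate_private_key(public_exponent=65537, key_size=1280)
ecdsa_priv = ec.generate_private_key(ec.SECP256R1())

def pem(public_key):
    """Serialize a public key so it could be uploaded somewhere."""
    return public_key.public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

# Public keys: the parts that would be registered with a directory service,
# keyed by phone number or email address (hypothetical structure).
directory_entry = {
    "address": "user@example.com",
    "rsa_encryption_pub": pem(rsa_priv.public_key()),
    "ecdsa_signing_pub": pem(ecdsa_priv.public_key()),
}

# A sender who looked up that entry could encrypt to the RSA public key,
# and the device would sign its outgoing payload with the ECDSA key.
ciphertext = rsa_priv.public_key().encrypt(
    b"hello",
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
signature = ecdsa_priv.sign(ciphertext, ec.ECDSA(hashes.SHA256()))
```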
I have always assumed that chat programs that interact directly with texting (Google Voice) are regulated by the telecom privacy laws. Can anyone answer definitively whether or not Google Voice messages can be mined?
The assumption that Bitcoin is anonymous isn't really a great one. It's pseudonymous, not anonymous, and since it's possible to track every transaction ever made (even more so than with cash transactions), it's not a trivial matter for typical users to anonymize their transactions.
Even if Bitcoin (and tumblers) became ubiquitous, the legal frameworks that exist to handle cash transactions (e.g. making an unusually large amount of cash purchases of gift cards, moving an unusually large amount of money across the border, etc.) will be revised to include bitcoin, with the result that using BTC in the way that people seem to think it's destined for will become, if not illegal, then at the very least highly suspect.
Ideally, we should be fixing the broken systems that are in place, rather than trying to concoct technology to work around them (which can then just be regulated or criminalized anyway).
People using Bitcoin without investing a massive amount of effort into staying anonymous will probably perform an identity-revealing action at some point. Then, every previous transaction can be linked with some level of certainty. That's hardly "anonymous payments".
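To put a bit more meat on that: much of the linking comes from the well-known common-input-ownership heuristic, i.e. addresses spent together as inputs to one transaction are assumed to belong to the same wallet. Here's a toy sketch in Python (the transactions and the "addr3 belongs to Alice" label are invented):

```python
# Toy sketch of the common-input-ownership heuristic (invented data).
# Addresses spent together as inputs to one transaction are assumed to
# share an owner, so they get merged into one cluster via union-find.

parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Each transaction is just the list of addresses it spends from.
transactions = [
    ["addr1", "addr2"],  # addr1 and addr2 spent together
    ["addr2", "addr3"],  # ties addr3 into the same cluster
    ["addr4"],           # unrelated
]

for inputs in transactions:
    for addr in inputs[1:]:
        union(inputs[0], addr)

# One slip: addr3 is used for a purchase that reveals the buyer's identity.
known = {"addr3": "Alice"}

for addr in ["addr1", "addr2", "addr3", "addr4"]:
    owner = next((who for a, who in known.items() if find(a) == find(addr)), "unknown")
    print(addr, "->", owner)
# addr1, addr2 and addr3 all resolve to Alice; addr4 stays unknown.
```

One identity-revealing payment from any address in a cluster and every transaction touching that cluster becomes attributable with some confidence.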
If Bitcoin were to adopt Zerocoin, then we might have something useful.
When a child comes, there are all kinds of new expenses: diapers, formula, clothes, pediatricians, strollers, cribs, baby monitors, toys. . . .
Personally though, I'm surprised the canonical example isn't buying a new home. I bought my first home two months ago, and it seems like every day in the mail I get two offers for life insurance, another for lawn care, and another for furniture/blinds. But I'm not seeing a deluge of online ads for those things. Maybe it's an opportunity?
It's much more than just the things you buy for the new baby. Having a child is a watershed moment in your life from a marketing perspective:
1. Your life is different. You used to self-herd [1] yourself to one set of brands and businesses before you had a child. That is now broken. Marketers have a small window of opportunity to get you hooked on new brands and businesses. Once they do, you'll self-herd to them for a very long time, if not for ever.
2. You are tired and you don't make the best decisions in that situation. This leaves you especially susceptible to marketing, which, because of #1, will have a long-lasting impact.
That being said, the article is mostly bullshit. My wife recently had a baby. We didn't keep it a secret in any way on Facebook, on Google, or in our purchase behavior. We didn't get hit by any kind of directed marketing (except for retargeting ads online) until after we had the child and the record of birth became part of the public county records.
But this only applies for the first child, when the parents are the most nervous about living up to expectations. By the second or third child, they're like: "I get onesies at Goodwill and garage sales since they're cheaper and they'll only be in it for a couple of months before the knees are worn-out, or they've outgrown it." and "Pack-n-play? Those are expensive - I just take a blanket along for them to play on. Takes less room in the car and easier to clean."
I don't understand. Do you think these companies are sending coupons out of a sense of charity? They spend so much money giving coupons in the hopes of winning brand loyalty and making back a whole lot more money.
> While it has a reputation for facilitating elicit activities...
Did the author misspell "illicit" in such a way that the spell checker changed it to "elicit", or did she write "elicit" originally? It makes a difference in how I view this unknown-to-me author's credibility.
I noticed this, too. I think it says less about spell checkers and more about Time's ability to do copy-editing.
If I submitted a piece to a major outlet such as this, I would do so in full faith that they would assign at least one person to read through it and copy-edit it before publication. Apparently, Time is not a sufficiently reputable institution of journalism to believe in doing this, or at least is unwilling to spend enough money to have it done by competent people.
EDIT: As for Janet's credibility, I can speak to it personally, having spent a year with her working down the hall from my office. Probably the coolest stuff she's done relates to how decisions are made in big teams of scientists (she studied the sociology of scientists running some Mars rover missions: http://janet.vertesi.com/projects/social-life-spacecraft).