Facebook Expects to Be Fined Up to $5B by FTC Over Privacy Issues (nytimes.com)
562 points by Dangeranger on April 24, 2019 | 302 comments



Seems like they should add another 0 to the fine after the recent hat trick:

1) Prompting users to give Facebook their email passwords.[0]

2) Using that email access to "inadvertently" upload the information of their email contacts.[1]

3) Storing said passwords and others in plaintext.[2]

It's pretty impressive that a company could do something so brazenly malevolent and be confident that it will escape with no more than a fine.

[0] https://www.thedailybeast.com/beyond-sketchy-facebook-demand...

[1] https://www.theguardian.com/technology/2019/apr/18/facebook-...

[2] https://krebsonsecurity.com/2019/03/facebook-stored-hundreds...


I'm ashamed to say that up until now, when I saw Facebook (or similar) acting evil I thought about the quote, "Never attribute to malice that which is adequately explained by stupidity".

Well, fuck that and fuck me, those people are not idiots, they're criminals.


Not to contradict your overall sentiment, but people often focus on intention with these issues. I think that's wrong.

Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.

We need to be more dispassionate when discussing these issues because otherwise threads like this descend into analysis of whether Zuckerberg/Bezos/whoever is a moral person. Which is a) probably unknowable and b) beside the point.

There is a problem here with a very big company that has more power than it knows how to handle, which can probably only be mitigated by breaking it up. That's all there is to it, really.

[edit] Just as an addendum, that's not to say that if the company has done something illegal, the people responsible shouldn't be prosecuted; they should.


There definitely is a difference between someone accidentally tracking dog poo into your house vs someone doing it intentionally; in the first case, you ask them to clean it up, and in the second case you break the friendship and/or seek criminal prosecution.

Likewise for big companies. If a company is acting badly, you need to figure out if it was intentional. In both cases you seek damages, but your approach to making sure it never happens again will be very different.


> There definitely is a difference between someone accidentally tracking dog poo into your house vs someone doing it intentionally; in the first case, you ask them to clean it up, and in the second case you break the friendship and/or seek criminal prosecution.

But the point is that this assumes you're able to tell the difference between the two cases; intent is often a really hard thing to prove. And discussions about whether Facebook acted in good faith when it did certain things often neglect the fact that they made a big mess everywhere that needs sorting out regardless of their intent.


> hard thing to prove

It is, and yet the legal system is busy with these sorts of proofs all the time. E.g., if you have professional insurance and caused some damage by mistake, that's covered; if you caused the damage deliberately, it's not.


But if we can reduce that to a second-order effect we will be better off. For example, if you made the rule ('regulation') that "shoes shall not be worn in the house" then you can avert the situation and never need to try to determine motivation.

I think this is the basic good that regulations, thoughtful ones, serve. Because if Facebook makes money by being evil, then their competitors will be pressured to do the same in order to stay in business. But by leveling the playing field you can help prevent these monopolies from getting so big and powerful.

We can't rely on businesses to act "morally." That ship has sailed. We have to compel them to behave, not by social shaming but by making non-compliance painful and repeated non-compliance an existential threat.


It's hard to prove, but for some strange reason they always have the resources when it's time to come after the little guy, and they don't when it's the big guy, where it's also more obvious.


Right.

That's like saying there's not a difference in involuntary manslaughter and murder.

There is. Intent is very important. That's WHY so many people focus on the intention.


A logical reason to care about intent is that malicious people will likely do it again and cannot be trusted in the future, whereas people who made a mistake will avoid repeating it.


For a corporation, "avoiding doing it again" means dedicating resources to preventing a repeat of that type of "mistake": audits, privacy reviews, etc.

The choice not to dedicate those resources up front was an intentional one.


Analogies break down due to the difference of scale between individuals and the biggest companies in the world.

If a company makes an "innocent" unintentional mistake, it can be attributed in part to their choosing not to put resources towards detecting and avoiding that kind of error


Repeated behavior probably has to be taken into account also; if your friend does it 10 times, you'll want to break off that friendship.

It's hardly an accident at that point anymore, but done intentionally, or (intentionally) carelessly. That's where we are with Facebook, I think.


[flagged]


Nah. You're getting too invested in it. Not much Joe Public can do, methinks.


Getting too invested? In what, the society that surrounds each individual? I don't think that is possible.

There is a difference between empowerment ("Joe Public can't do much to change society") and investing (literally putting money into local businesses, demanding better behavior, etc.).

I have met a lot of American peers who believe their individual liberty absolves them of responsibility to the society they live in, which is exactly backwards: their personal liberty is granted precisely because of the society they live in.


> Likewise for big companies. If a company is acting badly, you need to figure out if it was intentional.

I disagree that it's likewise for big companies. Corporations like that don't really have intentions; every intention is fundamentally about profit. Profit is in fact both its intention as well as its reward/punishment.

It's really the only way to properly "communicate" with an entity like that. A corporation is not a human being. And just like it's not useful to try to reason with (or attribute human-like intention to) a cat, it's basically futile to do so with a corporation.

Any time that human-like reasoning seems to apply with a corporation, it really only happened because the reasoning happens to align with its profit intentions.

Edit: this is less true for smaller companies, but for a multinational it's pretty much a given.

Also, there's something in the way large companies are structured such that actual accountability (like you'd find with humans or small groups) seems to disappear and slip through the gaps of the hierarchy. Lacking accountability, attributing intent becomes guesswork.


Seems intentional, as a system for taking a user's password, scanning their contacts, and then uploading them requires purpose-built tools. Stepping in dog poo and tracking it into a house requires only shoes you already wear and walking you already do; it doesn't require any new intentions or tools to be built.


Criminal prosecution for dog poo? You must be kidding me.


Wouldn't the intentional smearing of feces over someone's carpet throughout their home qualify as vandalism?


> if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet

Yes, but if someone treads dog poo into your house every day for a year, their repeated claims that it was a mistake every time are not going to carry as much weight. With FB we're not talking about a single incident of treading dog poo; we're talking about a repeated pattern of getting dog poo all over the place.


> a repeated pattern of getting dog poo all over the place

a woeful understatement, in my opinion; more like bulldozing manure into your and your neighbors' houses.


>Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.

Have to disagree with this; I am much more forgiving of accidental harm than intentional harm. And speaking of dogs, pretty sure my dog understands this as well, since there's barking if he thinks I did something on purpose but accidentally stepping on him doesn't elicit the same response.


Who was talking about forgiveness? I thought we were discussing ethics here. No amount of good intentions will make Zuckerberg's actions moral... at least the actions of which I'm aware.


We're talking about your feelings after a harmful event, and whether the perpetrator's intention matters. I'm using "willingness to forgive" as a measure of how much it matters.


Words like malice and evil require intent. If we don't want the conversation to be about intent, we should use words like amoral. Facebook isn't actively trying to harm their customers, they simply don't care about them. That is amoral not evil.


> Facebook isn't actively trying to harm their customers, they simply don't care about them. That is amoral not evil.

Don't bend yourself out of shape. Actively, happily ignoring the evil you have caused and continue to cause is evil.


That isn't evil. You wouldn't call a hurricane or a fire evil. You wouldn't call a person who kills someone in a no fault car accident evil. Evil isn't about results. It is about intent and motive.

You can call Facebook a number of things from amoral to negligent or even criminal, but once you start talking about evilness you have to start judging their intentions and motives.


> That isn't evil. You wouldn't call a hurricane or a fire evil. You wouldn't call a person who kills someone in a no fault car accident evil.

All of these things are no one's fault.

> Evil isn't about results. It is about intent and motive.

Exactly. Prioritizing profit at the cost of customers' well-being is a deliberate decision; if not, then seeing that customers are harmed by your own continued actions and doing nothing to change it is, to me, actively being evil. Your intent may not be exactly to harm, but you have no problem harming people to get there. There is no difference.


I think that is just too broad of a definition for evil. According to that, everyone who isn't carbon neutral would be evil. We are worsening climate change through our "own, continued actions and doing nothing to change it". And if we are all evil then evilness has no real meaning.


> According to that, everyone who isn't carbon neutral would be evil.

While I agree that it's useful to maintain some nuance in our perspectives, generally I think it's better to recognize and realign our actions, not definitions. It's the difference between accidentally hitting someone, and accidentally hitting someone and proceeding to run them over.

And to your example, I think we know most people aren't really aware of what is going on. Another group of people don't believe it at all, a combination of ignorance and poor government. We're all human; that means something.


Let's say Mark gets into a water-balloon fight, but with the balloons full of gasoline. Let's say this happens in a hospital. Mark is aware that his behavior is likely to result in great harm, many deaths. He doesn't _want_ people to die, it's not his goal; he just doesn't care that he's putting them in danger. He wants to have fun with his balloons, and that's all that matters to him.

Mark isn't trying to kill people. Mark _is_ evil, because he chooses to take actions that are likely to cause great harm.


> You wouldn't call a hurricane or a fire evil.

No, but you would call it evil if someone repeatedly built rickety buildings in a hurricane zone or built structures out of dry wood and paper next to a forest at high risk for a forest fire, and then acted like they had no responsibility when the buildings repeatedly got destroyed by fires and hurricanes and people's personal property got lost forever or looted in the resulting disorder.


> You wouldn't call a hurricane or a fire evil.

That's ridiculous, neither hurricanes nor fire possess free will. I bet you would call someone who intentionally starts those things evil, though


Hurricanes and fire aren't "actively, happily ignoring the evil"...

Unless you're an animist, I guess.


Evil and amoral lie on a spectrum (with things like "altruistic" and "noble" on the other end).

Trying to fit things into neat little boxes so you can apply words to them isn't particularly helpful. What Facebook is doing is some form of wrong. They aren't murdering children, but still, what they are doing is not good and they are doing it at a massive scale, so the harm is multiplied.


> which can probably only be mitigated by breaking it up.

I think about this a lot with regard to Big Tech. For some companies it looks easy and obvious (e.g., Amazon spins off AWS). For Facebook is it really as obvious as "spin off Instagram"? I'm not really convinced of that. It seems like their power is so ingrained in Facebook itself that it's not immediately obvious what splicing off Instagram would do. What would we actually want to accomplish?


Honestly I can't say if it's definitely the right course of action, although in the last few years there have been reports of millions of 16-35 year-old Americans leaving Facebook, only to move to Instagram. I do think the ability of these companies to buy their nearest rivals is an antitrust issue that needs addressing, for the health of this relatively new market, if nothing else.


AWS + Amazon doesn’t seem obvious to me; would there be much to gain from breaking up a conglomerate into several businesses, when those separate businesses are in entirely different verticals?

I suppose breaking off AWS might lower Amazon’s ability to subsidize an unprofitable retail business in search of market share, though that is probably a moot point now that Amazon is raising its prices in search of profitability and is likely to stop being the de facto online shopping destination now that other online retailers are also standardizing in two day shipping and easy returns (often easier due to providing return labels in the box).


Might run fewer brick and mortar stores out of business before being forced to sell at a profit.


I think one thing missing in the responses here is that there is some culpability without intent. Assume there was no ill intent: the fact that so many of these things still happened means they didn't have sufficient controls, or checks and balances in their practices, to prevent them. That lack of control is itself part of the problem, and something they could have fixed any number of times.

My 9-year-old son always rightfully claims that many of the harmful things he does were accidental. The problem is that he frequently leaves little margin for error in a lot of what he does. He rides his bike just a few feet behind his sister; of course you're going to run into her if she stops quickly. He stacks his bowl, cup, and silverware on his plate and then carries it up one-handed; of course it's going to spill.

Facebook's internal controls and practices are insufficient to manage their business. It doesn't matter if they didn't willfully intend to do all of the shit they did. They did intentionally create the controls, practices, and culture in place that allowed it to happen.


We do distinguish manslaughter and murder however. Someone dies either way, but in that case intent matters.

I think in most of these cases the intent should simply unleash an additional charge or penalty - leading to the imprisonment of executives.

If my company accidentally does something extremely stupid or negligent, or against the interests of my customers, fine me enough to ensure I create systems and oversight to attempt very hard not to.

If it turns out I was doing so intentionally, lock me up and throw away the key.


>Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.

So, whether it's your own two year old or a malicious adult, you think it's wrong to respond differently because they both produced the same harm?


I think it's not wrong to treat an adult exactly like you would a child if he's acting like one. I interpret the parent poster as trying to make a point about how we cannot tell anything about intent, so we should judge solely on the action.


> I think it's not wrong to treat an adult exactly like you would a child if he's acting like one.

So, are you saying that the malicious adult poo-tracker is being childish and should be treated with the leniency we afford to children?

> I interpret parent poster as trying to make a point about how we cannot tell anything about intent so we should judge on the action, solely.

Why on Earth do you think we can't deduce someone's intent? In the case I posited, you can know the intent of both the child (no malicious intent) and the malicious poo-tracker (malicious intent). In many other cases, you can also deduce intent from someone's behavior.


>> ...

No, I'm not saying that. Take it easy bro. Why you all angry and shit?


Disagree. I can live with things being shitty if we're all on the same side and trying. Intent is everything.


Thank you for choosing such an apt metaphor. It has made the followup threads a much better read.


Isn't the presence of motive one of the cornerstones of a criminal investigation?


Actus reus vs. mens rea.


Inspector Javert, is that you? Going to lock me away forever for stealing bread to feed my sister's starving kids?


She can't afford bread for the kids on a FAANG salary? Silicon Valley prices have become ridiculous.


Sorry, but I think you're being too hard on yourself, while also being inappropriately diplomatic toward Facebook. I mean, crimes happen all the time. I'm a criminal: I speed sometimes, I often jaywalk. I would say they are malicious, and they have forfeited whatever position as a trustee of data people ever granted them. What do you do with a trustee who puts their interest before the beneficiary's? Of course, you fire them. But indeed, who is the beneficiary? At least in their public vernacular it's the user, even though more well-read individuals know the beneficiary is the trustee, one and the same.

Facebook is perhaps not out of the ordinary really, it's just another of a variety of businesses who have come to realize they hate having users. They just need their data. Vampires don't want human friends either, they just need their blood.


That gets me every time I read in the media about a "bug" that enabled some complex data transfer. People, somebody has to decide, plan, code, test, and deploy the behaviour, and make sure it works. And yet people seem to accept the explanation without much thought.


The email contacts feature already existed but had been disabled, so re-enabling it needed only a Boolean switch. Like anyone who uses Messenger, I can believe they could introduce a regression and never fix it.


> "Never attribute to malice that which is adequately explained by stupidity"

My response to that quote is, to paraphrase Arthur C. Clarke: "Sufficiently advanced stupidity is indistinguishable from malice."


I always took that quote as the default, to be disproven by significant evidence to the contrary; Mark Zuckerberg, in other words.


Maybe I misread, but none of the articles linked above suggest any malicious intent. Could you clarify what specifically you're referring to?


Making money off "mistakes" that keep happening, darn it!


I don't think they can make money off plain text passwords? They can only lose money from that, if it's discovered and they are fined and lose some users.

They can make money from contact lists, but IIUC the article said they didn't use the contact lists. Also, given that it only affected a couple million users (like 0.1% of their user base), the damage to their reputation would far outweigh any benefits of actually using those contact lists.

On the last item, the collection of email passwords, the only benefit would be if they actually use those passwords to get information, so the above point covers it.

Let me know if I'm missing something.


How about extracting a user's contacts with their plaintext password, a password that would usually be hashed into a database (at least in my professional experience) and thus be unusable by Facebook?
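
For illustration, a minimal sketch of the standard practice I mean (generic code, not anything from Facebook's stack): with salted one-way hashing, the server never retains anything it could replay against an email provider:

  import hashlib, hmac, os

  def store_password(password: str):
      # Derive a one-way hash; the plaintext is never persisted.
      salt = os.urandom(16)
      digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
      return salt, digest  # only these two values go into the database

  def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
      candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
      return hmac.compare_digest(candidate, digest)

A digest stored this way can verify a login attempt, but it can't be typed into a third party's login form, which is the whole point.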


You can do many evil things with passwords. But, according to the articles, that's not what happened. They did download contact lists, even though they never used the plaintext passwords they stored in the logs. The users involved in the two incidents were different.


Storing plain text passwords doesn't make them money, but it negatively affects users, should those passwords be compromised. Though FB didn't prosper from the mistake, users potentially feel a cost, so the outcome is the same. Criminal negligence might not benefit the bad actor, but it's still a crime either way.


Yes that's a good point.

It's a bit like Exxon Valdez: they don't profit from spilling oil in the ocean, but they should still be penalized for cutting corners on safety.


Or using the plaintext passwords to extract a user's contacts, when hashing them into a database would have prevented Facebook from impersonating that user and logging in to access their non-Facebook social graph?


Except the quote is generally wrong when it comes to issues of profit. It's simply backwards. People are fucking evil when it comes to money, profits, power, status, pleasure, and generally getting ahead of others. Considering those are the main motivators for almost all human actions, Hanlon's razor is clearly a bunch of bullshit that people spread around to end actual debate and discussions they don't like.


I don't think the quote is wrong, it's just that some things aren't well explained by stupidity. "Oops, we accidentally stole your data and used it" isn't explainable by stupidity but is explainable by malice.

In other words, Hanlon's razor doesn't say that there is no such thing as malice, just that you shouldn't default to assuming malice.


Yes. I'm saying that you should default to assuming malice, because in most human endeavors and motives it's the primary factor. Even stupid people can be, and are, evil. The motives I list above cover probably over 90% of human actions. That's just my own guess at the percentage, but it would mean Hanlon's razor is useful only ten percent of the time.


I think that is the perception that needs to change, some actions certainly can't be explained with "move fast and break things".

Some actions move into criminal areas, like tricking users out of passwords and phone numbers and then using them for other purposes to further their own agenda. That could indeed be seen as criminal, I think.

And someone has to be held accountable.


I’d prefer to say negligence than stupidity. Being stupid isn’t something you can avoid, but being negligent is something within their power to avoid and is a legitimate target of legal sanction.


>Never attribute to malice that which is adequately explained by stupidity

I find that the people who use that quote are most often both.


Come on, you knew facebook was evil. Most people know facebook is evil. We choose to ignore it because otherwise we would feel obligated to change. Change would come at the expense of "likes" and other modes of social validation.

People are realizing that social validation through social media is becoming worth less and less. (Let's remove the likes on IG!!) The people coming around now are sheep looking for the next wave of validation.


Conclusion of the year. Kudos.


> 3) Storing said passwords and others in plaintext. [2]

in logs.


If you're a small business, I get it: it's good to explicitly whitelist whatever information you're logging, but for smaller teams a hard-to-diagnose issue might lead to "log everything so we can sort it out later." Facebook is Facebook; whether this decision was the product of the corporation as a whole, a small dev team, or a highly paid consultant/third party, Facebook is a big enough company that they don't have the "I didn't realize..." excuse anymore.


I've worked for multiple Fortune 25 companies, and that excuse does not fly. Not in banking or healthcare, where breaches of privacy/confidentiality are actually illegal, rather than merely distasteful. Small teams and careless devs doing that sort of bad logging will be caught and corrected by strict security oversight.

This is the sort of thing that leads the HN crowd to sneer at the old, slow ways of the enterprise world.


The plaintext passwords weren't a breach or a leak. If you punish Facebook for it, they will be less inclined to share such information in the future.


If we can't trust a company to audit itself fairly (and in general we can't), then it should bear the burden of paying for an external audit of its processes; many "boring" industries have these sorts of requirements.


That's how you handle children, though. No one is going to ignore wrongdoings in hopes the perp will continue to play nicely.


There might not have been a large-scale public data dump of passwords, but if 20,000[1] employees had access to logs with plaintext passwords, there is no guarantee I could ever accept that none of them read or used customer passwords for personal purposes.

What's to stop a malicious ex-lover from grabbing a FB password and reading that person's private messages? If FB didn't even know there were passwords in plaintext, they very likely weren't auditing log access as much as was needed.

[1] https://www.theverge.com/2019/3/21/18275837/facebook-plain-t...


Malicious insiders don't need your password to access your data. If you are concerned about malicious insiders then plaintext password logs don't matter.


I'm more concerned with plaintext passwords because most people use the same passwords over and over, so if you can break one system, you can break others.


With FB's dataset and employee count, it would be insane not to be concerned with malicious insiders. Even if you completely discount morality and honesty where Facebook is concerned (and I do), there's substantial liability risk, PR risk, regulatory risk and commercial paranoia, which I suspect was likely their biggest concern, at least until recently.


There's a dead sibling to my comment mentioning that your congruence.io website has an expired SSL certificate, fyi.


There is no oversight in a Devops world. New systems come up in protoduction, celebrations are had, and the data protection officers and security compliance officers remain in blissful ignorance.


I'm on a small team (5 engineers) that is GDPR compliant so no PII is logged. It's not hard if you care.
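
For example, a minimal sketch of one layer of that (field names invented; a real setup would also scrub record.args and attach the filter to every handler, not just the root logger):

  import logging, re

  # Assumes key=value style log lines.
  SENSITIVE = re.compile(r"(password|token|email|ssn)=\S+", re.IGNORECASE)

  class RedactFilter(logging.Filter):
      def filter(self, record: logging.LogRecord) -> bool:
          record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
          return True  # never drop the record, just rewrite it

  logging.basicConfig(level=logging.INFO)
  logging.getLogger().addFilter(RedactFilter())
  logging.info("login attempt user=bob password=hunter2")
  # prints: INFO:root:login attempt user=bob password=[REDACTED]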


It's a lot easier on a small team. Security isn't a stable state.


I'm replying to the parent, who made the opposite assertion.

I can't f'in believe I have to explicate such simple things to supposedly intelligent and thoughtful people


Seems like a pretty trivial automated test for so many PhDs to miss:

  create_user('Bob', 'BobPassword123')
  assert "BobPassword123" not in logfile


Hrm, so you checked the Apache access logs, or maybe an error log; what about the system logs? Does the login request spin up a bash script and pass the password to it?

I don't think it's trivial to guarantee non-existence.


> I don't think it's trivial to guarantee non-existence.

I disagree in this case. Log messages don't spontaneously appear in arbitrary places. If the developers understand what their software is doing and how their systems are configured then they should know where to check for the logging messages.


Until someone turned on logging of the full request body on the load balancer/proxy, or on some other middlebox serving as the TLS termination point in production that you did not know about.


And? That doesn't change anything I said. The new logging config isn't magic, either, and the new set of log files can be scanned, too.

With billions of dollars and some of the best software developers in the world (supposedly), Facebook should be able to figure this out.


So their incompetence lies in lacking configuration management in production as opposed to a lack of testing? Why are they manually configuring production servers anyway at that scale?


They are managing a lot of things, and as a company they have full responsibility to be aware of them, but I'd guess that at the scale of ops Facebook runs, regular developers never interact with the team managing their attempt at high availability by scattering servers across different hosts, data centers, legal jurisdictions, etc. In fact, I'd imagine at an op the size of Facebook that sort of problem has a dedicated department.

That said, someone should have been watching for this stuff and failed to do so (or failed to exist), so I'm not excusing them; but this is not a trivial thing to protect against.


You speak with the hubris of a man that's never been in the arena.


Facebook hires some of the top software developers and engineers on the planet; if not leaking plaintext passwords is too high an expectation for them, then nothing that isn't public knowledge should ever be put into any computer system. As a profession we should demand that our peers do better than this.


We're all people of varying skills; I would never assume an innate high bar for any activity a human does. Only by requiring that the bar be maintained at a certain level, and by regularly checking and enforcing that requirement, can we be reasonably sure it is. And this isn't just `echo $password`: the way these passwords got into the log files is (from what I've been able to discern) pretty obscure and roundabout. Facebook is absolutely responsible and needs to be held to account, but the mistake is understandable.


It's not really that simple. Sometimes you need to enable request logging in production... maybe for performance analysis, maybe for debugging.


And you make sure that you can purge the logs afterwards.


Run the deployment in Docker, then check every file changed in the container for its contents.
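
Roughly like this (a sketch; it assumes `docker diff` semantics and that grep exists inside the image):

  import subprocess

  SENTINEL = "BobPassword123"

  def leaked_paths(container: str):
      # `docker diff` lists paths Added (A), Changed (C), or Deleted (D).
      out = subprocess.run(["docker", "diff", container],
                           capture_output=True, text=True, check=True).stdout
      changed = [line.split(None, 1)[1] for line in out.splitlines()
                 if line and line[0] in "AC"]
      hits = []
      for path in changed:
          # grep exits 0 only on a match; directories just fail and are skipped.
          r = subprocess.run(["docker", "exec", container,
                              "grep", "-q", SENTINEL, path])
          if r.returncode == 0:
              hits.append(path)
      return hits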


Yeah, well, we're not running Apache. What is this post?


Ah yes, the single, non-distributed, file-backed log Facebook uses.


You have no problem with the create_user function? Obviously it's pseudocode; my point is that there are a finite number of log locations, and checking for an instance of a known string among them isn't difficult.


There are a finite number of log locations now. How do you intend to operate an integration test across the entirety of Facebook's stack to detect new log locations?


Change requests and a change review board staffed by professionals.

Subject: Change Request

Body: I would like to log the body of authentication requests in production.

Subject Re: Change Request

Body: How will you ensure personal data is not stored that shouldn't be?

Subject: Re: Re: Change Request

Body: I will add configuration Y to logging system X.


Change1: Enable logging of middleware traffic on 0.01% of requests for better profiling.

Reviewer: Does this have privacy implications?

Change1: No, Service X marks all PII before this point. Code X drops everything marked in this way.

Two years later.

Change N: Modify request structure for more optimal blah blah blah.

Now suddenly the changed request structure causes a regression in the PII detection which causes some logging of PII.

This shit is way more complex than "just stop people when they ask to log passwords".
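
A toy version of that failure mode (all names invented):

  def scrub(record: dict) -> dict:
      # Drop any field the upstream service flagged as PII.
      return {k: v for k, v in record.items()
              if k not in record.get("_pii_fields", [])}

  # Year one: upstream marks the field, scrubbing works.
  old = {"user": "bob", "password": "hunter2", "_pii_fields": ["password"]}
  assert "password" not in scrub(old)

  # Year three: a request-structure change renames the field, but the
  # marker still lists the old name, so the secret sails into the logs.
  new = {"user": "bob", "auth_secret": "hunter2", "_pii_fields": ["password"]}
  assert "auth_secret" in scrub(new)  # regression: PII is now logged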


Bureaucracy like this is what kills teams and products. There's no guarantee it works, and every change or commit should already have privacy and data protection in mind anyway. You also don't know what you don't know: you could, for all your knowledge, think that nothing is being logged, when in fact a subsystem outside your scope is doing the logging.

You assert that it's trivial, yet you're adding more layers to protect against something like that happening. It's the naive assumption that all problems are trivial that gets people and companies into trouble in the first place.


I promise you the productivity loss of sending a few emails is much less than 5 billion dollars.


The point is aptly demonstrated within this thread; sometimes things that look trivial at a glance aren't so trivial in reality.


You assume that all logging happens within an application. It doesn't.


What if it logs "BobPassw[...]"? Is that more acceptable?

What if there is a bug and some other function logs "[...]word123"?


Seems like a useless test considering how many actual production systems don’t log to local disk.


If it turns out to be that simple to write a test, it probably didn't happen by accident. Many systems are more difficult to write tests for.

Of course we (and they) should do it anyway, but it does often take an investment in making things testable.


Which is the NUMBER ONE MOST BASIC rule of handling users' confidential data. It reflects incredibly poorly on the engineering practices of Facebook that this managed to get through. It should be a criminal liability.


I'm not arguing this wasn't a bad thing, but the comment made it sound like they were storing plaintext passwords in the database circa 2002.


Storing them in logs is the same thing IMO. Logs are usually more easily accessible than a user database.


Gonna have to disagree.

I think it would be way worse if we found out they were storing passwords in plaintext in the database in 2019. Even if the security implications are the same or worse, the policy/decision making behind such a revelation would be beyond terrible.

edit: To put it another way, remote code execution flaws are terrible but they can happen. However it would be way worse if someone put in a static username/password backdoor. The security outcome may be the same but one is beyond terrible policy/decision making.


As is the policy/decision making that results in logs containing the passwords. You can still have very locked-down database access, but logs tend to get spread around all sorts of systems, and access control for logs is almost ALWAYS weaker than for DB keys. They're also cached for searchability on any number of Elasticsearch or business intelligence platforms, so getting rid of them after the fact is even harder. At the very least it's an equivalent problem.


Logs tend to be ephemeral with limited retention.


That's still a pretty elementary error for a company that gets off on using CS-y riddles in interviews like Google used to. Move fast and break things, and then get a $5B fine.


Great, so potentially subject to even fewer security/privacy controls compared to other places they could've stored it.


The first thing I did after setting up Jenkins for our small dev shop was to find, install, and test a plugin that removes all kinds of credentials from the logs. And I'm just a developer; I never claimed nor think that I have any devops or administration qualifications, just common sense.

If Facebook devops engineers, who are probably among the best trained and highest paid on the planet, lack the same common sense for their users passwords... I can't even come up with an ending to this sentence that would properly express my emotions right now.


So, it's not in plaintext if it is in logs? Storing passwords in logs is even worse than in a database, since access is likely less restrictive. Every programmer worth their salt knows about this problem (I have certainly written code to prevent it).

Sorry, but it is like saying "there is no SQL injection, only bad input validation".


> in logs.

... for seven years.


even worse


> 2) Using that email access to "inadvertently" upload the information of their email contacts.[1]

If they wanted to show proper contrition, they'd not only delete that information from their systems, they'd also remove all links in the social graph that are in any way a result of that information. They'd also take 100% of the revenue that was even slightly influenced/generated by the inappropriately gathered data and send it out to the users whose data was copied without permission.

Also they should probably put together a legal team who will be ready to start handling the $150,000 per copied item that they didn't have the right to copy. :lol


LinkedIn was caught harvesting people's contacts with dark patterns. They settled a class action suit for $13M.


They weren't using the cleartext password to scrape contacts, though, were they?


LinkedIn did in fact have that as an explicit feature where users would insert their email password on LinkedIn’s website.

https://www.linkedin.com/help/linkedin/answer/5204/email-pas...

> If your email provider doesn't support OAuth, you'll need to enter your email password before clicking Continue. LinkedIn will use your password only for a moment to authenticate your account. We don't store or save your email password during this process.

Facebook’s feature was similar I believe. Where they stumbled was connecting an email verification feature to the code written almost a decade ago for contact import.


I remember the era started by Sean Parker with Plaxo: https://en.wikipedia.org/wiki/Plaxo ... Everybody was doing it: https://blog.codinghorror.com/please-give-us-your-email-pass...


In 2008-2010 companies had the chance to harvest millions of emails and other personal data. The opportunity is not likely to come back this easily.

Shutting down these options, even if it is the right thing to do for users, effectively raises the barriers to entry and slows the growth of new companies. The result is that the incumbents will maintain a dominant position in the market.


Let's hope this fine doesn't cover those actions and prevent any real justice from being meted out.


If I understand this right, #3 happened and was (allegedly) resolved before #1 and #2 happened, so that's not a fair shake. #1 and #2 are still valid points.


Is there anything left I can do to escape Facebook? This is just depressing for someone who was in college when FB came out and never signed up. Nor any other social network (still a favorite oxymoron) since.

What kind of a profile of me would they be able to glean from email? Whole thing feels so fucking invasive.


Spotted this in the guardian link:

>> “When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,”

I'm struggling with the "unintentionally" part and, to play devil's advocate, trying to figure out how it could be possible. Perhaps they already had a service that performed the login to upload people's contacts on request, and to save time reused this service to also verify the account?


>Seems like they should add another 0 to the fine after the recent hat trick

You're under the impression that regulators are in the business of bankrupting companies.

All the instances you cited of Facebook's wrongdoing are not worth a $50 billion (!!) fine. They just aren't.


Nothing LinkedIn hasn't done (I don't know about the plaintext passwords).

[I meant this as a serious observation; LinkedIn was sued, but I believe they only got a mild slap on the wrist]


[flagged]


Can we quit it with posting this quote every time something negative on Facebook comes up. It’s practically a meme now and, quite honestly, it’s getting annoying.


It is annoying, but it also reflects Zuckerberg's mindset pretty well. What has changed since that quote is that he has learned to mask that mindset and to lie and deflect.


The only people who find this annoying seem to be Facebook employees, shareholders or people with vested interests in being allowed to go about the kind of cavalier data collection that Facebook does.

On the contrary, I think we need to keep repeating it. Everything I know about Zuckerberg, everything he says, the lies he tells, his actions and the actions of his company...they all lead me to one conclusion; that that particular quote is the deepest insight we have into the mind of Mark Zuckerberg, and we should never forget it.


I am none of those things you mentioned (employee, shareholder, other vested interest), and I also find it annoying. It’s something Zuck said when he was much younger. If his views haven’t changed since that age then he is an unusual person indeed.


I think it's pretty clear that his views haven't changed.

Judge him by his actions, not his PR.


> Every insider I've spoken to said any FTC fine on Facebook under $10 billion would be seen as a massive win, showing firm won't face serious consequences for privacy violations. Wall St. seems to agree in response to news of $3-$5b settlement [$FB up 4% after hours]

https://twitter.com/lhfang/status/1121148735818358784


In other words, market rewards Facebook for being fined only $5 billion by increasing their market cap $20 billion. Unreal.


Facebook announced earnings today; other new information besides the fine contributed to the after-hours jump.


This is a very strange way to put it. More likely, the fine was priced in with some level of uncertainty; then, with the fine having been decided, the market corrected relative to its estimates.


> by increasing their market cap $20 billion

Is this actually meaningful to any company that isn't IPOing or releasing new shares? I am not pro-facebook by any means but it's not clear this means much aside from how much their investors like them, again of questionable value as the majority of votes are privately held by Zuckerberg.


Increase the value of shares and options owned by the very people who made the mistake? Yes; increasing an individual's net worth doesn't exactly discourage future mistakes.


Unless Zuckerberg is immediately about to sell I don’t see how this incentivizes any behavior at the company. It’s a trailing indicator of performance, not a leading one, and seems to reflect public opinion more than anything.


We should never read too much into such numbers, but, technically, expectations of a $25bn fine could explain exactly what you describe.


As it turns out, you can't rely on capitalism to even have any morality, much less enforce one.


Eh, a capitalist system is going to reflect the moral framework of the participants. The participant here is the stock market, and its moral framework is based on economic growth and profit for shareholders. In that regard this was a good outcome and, as a result, is rewarded.


Which means it has no morality no matter how you try to twist it.


Sure, no intrinsic morality...more of a passthrough.


I try to think of it more as utterly sociopathic (in more or less the strict medical definition of the term), and things seem to really make a lot of sense under that framework.


Agreed. This small of a fine comes off as a cost of doing business for a company as large as Facebook.


This isn't the future I wanted.


I'm like 99% sure this has always been the "ideal" business model. Only take a chance if you can afford it. Something, something cost-benefit analysis.

There was a stink about a car manufacturer and seat belts (I think) a few years back. They decided the cost of a few settlements for dead people was less than the cost of fixing the problem.

This is now, has been, and (unless we eat the rich) always will be the way it is.


This is why we need GDPR in the US. Make the fine like 5% of global annual profit (not sales). Something that will hurt and make them actually think it's not worth it and actively avoid it rather than just the cost of doing business and a snicker as they walk away.


5% of FB revenue ($55B) is $2.75B, which is less than the proposed FTC fine of $5B.

Making it 5% of profit rather than revenue would make it hurt even less.


Laws don't seem to matter in this day and age as they can be gamed by money and/or legal tricks. Only way to fix the issue is to get rid of corrupt and unethical government and business leaders. Implementing something like GDPR would only be adding to the arsenal of bad actors.


Facebook will not acknowledge any wrongdoing as part of the settlement... which is what I expect the script to read.

If the settlement happens, I imagine this would provide some weight behind any class action lawsuit.


> will not acknowledge any wrongdoing as part of the settlement

This bugs the heck out of me (in general, not specific to this case). What is the point of letting them claim innocence? How does this benefit the consumer?

I can see occasional exceptions where it's clearly a case of misunderstandings so you don't want to bring down the full hammer...but I honestly can't remember more than one such case where someone DID acknowledge wrongdoing.


If Facebook admits wrongdoing they will lose every lawsuit brought against them going forward, which means they will contest the charges instead of settling. The FTC doesn’t want that, because an actual court case means they might lose, which would be bad for the careers of everyone involved. They’d rather just take the guaranteed W by offering FB a palatable settlement.


This is a perfectly rational yet terrible explanation: it absolutely makes sense, but there's a real problem if we allow decision making like this to be accepted at large. Some cases might be lost, but companies that violate our privacy should actually pay for it.


That's not a W, though. It's playing to a tie, then forfeiting the rest of the game. Naturally, I would love for this to go to trial. Even if the FTC lost, at least we'd know what the consumer privacy landscape in the US actually looks like.

The title of this post/story should be "Facebook Expects to be Fined Not More Than $5B by FTC," because a $5B fine would be extremely not-painful; there would be zero deterrent effect from a fine of this size.


Taking a bribe shouldn't be the best outcome for all involved.


>What is the point of letting them claim innocence?

It changes the evidentiary basis of future claims on related grounds. The admission opens them up to other legal risks outside of the current dispute.

Maybe European regulators want to slap them for the same fact pattern (same facts, different jurisdiction). Maybe a class action is put together (same facts, different plaintiffs). Maybe they have an HR suit for unlawful termination from one of their security guys claiming he was fired for disclosing a vulnerability (related facts).

Etc.


> It changes the evidentiary basis of future claims on related grounds

Well that's interesting...citation? I thought not admitting any wrongdoing meant that you didn't admit it, meaning that there'd be no lower bar for anything, related grounds or not.


Those insiders may just be lowering expectations, preparing their audience to consider rather bad news as a “massive win”.

After all, the insiders most familiar with the matter are those deciding how much to set aside for its eventual resolution. There are rules for how to account for the inherent uncertainty, and massively underestimating the loss would just set them up for new trouble, i.e., a shareholder lawsuit.


A $10 billion fine for the Cambridge Analytica "scandal" would be so far beyond reasonable as to merit criminal charges against Joseph Simons. $3 billion is already insanely ridiculously high.


This is the correct opinion.


Before this thread becomes a Facebook bashing session, please keep in mind that Equifax leaked all your SSN data along with names and addresses and got away with no fines.


I am not a lawyer, but intent appears to play a big role. A company that is negligent or incompetent will always face lesser repercussions than one that acts deliberately.

Now of course this is not to exonerate Equifax whose entire premise rests on safeguarding sensitive information. From the consumer side, the 2 incidents are equally bad.


Not that I completely disagree with you, but let's not pretend that leaking SSNs is equivalent to possibly leaking emails and passwords. One of those is far more important than the other. There was also tons of evidence of insider trading by Equifax execs. If anything, the Equifax affair proved to me that the American public doesn't care about privacy, and so neither should investors.


Which one are you saying is more sensitive: my ssn or my email password?



Hilarious that the two replies to you are exact opposite answers, and neither is the original commentator


Email password, of course. I give out my ssn for identification to various organizations. I NEVER give out my email password to anyone except my email provider


Is that even a question? Guess what is needed when you go to apply for a mortgage?


> whose entire premise rests on safeguarding sensitive information

Is this a joke? You are not Equifax's customer and they do not need your trust. Their entire premise is selling information about you, to people who do not trust you. Securing your data is something they have to do for compliance, not a core part of their business.


Equifax, unlike Facebook, did not violate the terms of a prior FTC settlement agreement. It's a lot easier to expedite penalties when a written settlement is being violated than to punish original bad acts.

The Equifax matter is far from over, they are being investigated by: 48 state Attorneys General offices, the District of Columbia, the FTC, the CFPB, the SEC, the Department of Justice, other U.S. state regulators, certain Congressional committees of both the Senate and House of Representatives, the Office of the Privacy Commissioner of Canada, and the U.K.’s Financial Conduct Authority.

Not to mention they made a recent SEC filing acknowledging they expect fines from FTC and CFPB.


Can someone explain how Facebook gets a fine but Equifax doesn't? I don't understand how. Or has Equifax just not been fined yet?


They violated a 2011 settlement with the FTC:

https://www.ftc.gov/news-events/press-releases/2011/11/faceb...


Facebook is a popular target right now and the subject is politically charged. That's it.


Doesn't seem great that fines are determined by how politically hot or not a firm is. I'm pretty sure someone could make a list of 100 companies that deserve fines more than Facebook does.


And all the connected criminals that got away ought to be punished first; I'm sure a judge will agree with the wisdom of this argument and let you go.


I doubt the judge was thinking about politics. Courtrooms aren't the internet, thankfully.


Same reason Martha Stewart got jail time. Ever watch Billions?


So what are you saying exactly? We can’t set our eyes on Facebook because someone else is worse?


Precedent matters, especially for government inquiries and fines.

Also, people use FB by choice today, while something like Equifax is forced upon us by institutional structures. So it is unclear why Facebook is more wrong than Equifax.


Equifax being negligent and Facebook repeatedly being caught doing bad stuff are different things. You need to show intent: Equifax didn't intend to get breached, while Facebook certainly intended to do all the shady stuff they've been caught doing.


Leaking this data wasn't an intrinsic part of their business model.


It is, even more so than Facebook. Buying and selling your personal financial data is Equifax's entire business.


[flagged]


Similar punishment for similar transgressions is how the law is supposed to work.


> Similar punishment for similar transgressions is how the law is supposed to work.

Facebook and Equifax did not commit similar transgressions, and to say so is misleading.


I'm not sure anyone claims that, but obviously saying "Similar punishment for similar transgressions" means that different transgressions need not have the same punishments.


Assuming those punishments are fair and just. The punishments companies have been getting lately for privacy violations have been relatively consistent: nobody is actually being punished.


I think Equifax has "too big to fail" protection; credit scores are part of how things work in this country. And I'm not sure they're directly responsible for the leak of their data.

Facebook is far from being essential, it's mostly a consumer product. There are so many voices against facebook that I don't think there would be enough public support to protect facebook.


The political think tank that promulgated the term "whataboutism" has created a bane on humanity.


Whataboutism (also known as whataboutery) is a variant of the tu quoque logical fallacy...

The term "whataboutery" has been used in Britain and Ireland since the period of the Troubles (conflict) in Northern Ireland.[10][11][12] Lexicographers date the first appearance of the variant whataboutism to the 1990s[1][10] or 1970s...

https://en.wikipedia.org/wiki/Whataboutism

https://books.google.com/ngrams/graph?content=tu+quoque&case...


The problem with the term "whataboutism" is that its users wield it to prevent discussion about anything that doesn't fit their agenda. In particular, its main use has been to amplify the Russiagate conspiracy which was never backed by any evidence and yet was the forefront of public attention for almost three years. If you pointed out actual evidence-based foreign influence such as that put forward by the Saudis, you were accused of "whataboutism". This was almost certainly a tactic dreamed up in a politically motivated D.C. think tank or PR firm, and has prevented fruitful discussion of countless real issues for an extended period of time.


You're labouring under several grave misapprehensions. This addresses one of them:

https://archive.org/details/muellerreport/page/n7


It's not inherently bad, it's just misused a lot IMO.


[flagged]


Equifax needs to be fined until its existence can only be found in the history books, and its negligent management needs to be jailed for the long-term harm that they have caused.

Same with Facebook. But OP's point is that the collective has too short an attention span to actually hold any of these actors to the law.


OP said: "Before this thread becomes a Facebook bashing session..."

It absolutely should be a FB bashing session. Just because some company got away with something doesn't mean they all should.


> Just because some company got away with something doesn't mean they all should.

I totally agree.


It seems that, besides the "the bug bounty is far less than it would have sold for on the dark market" meme, another meme has taken hold of HN: no matter how significant a fine is, there will always be people clamoring about how trivial it is compared to the company's revenues. It could have been $10B and you would hear the same thing. The only fine worth slapping, then, is one that bankrupts the entire company.

Facebook made ~$7B net income in 2018. Am I supposed to believe that a $5B fine isn't going to affect them at all?


Facebook earned about $172 billion since 2013. Do you think facebook would have earned $167 billion while behaving legally?

What about ethically?

How much do you think Facebook would have earned if it acted responsibly?

Once you add punitive damages, I don't understand how you could possibly think $5 billion is anything less than an order of magnitude off.


This fine is not for everything unethical Facebook ever did. This fine is for Facebook giving data to Cambridge Analytica. I don’t think Facebook got paid for that at all. When trying to google around for what people paid CA for the use of the data, I found numbers in the hundreds of thousands.

So in fact, Facebook is paying billions because they made a bad decision when choosing to grant access to data for research. That’s a lot of money. I think if you’d asked what a likely fine was when the story broke, people would have guessed a few million dollars.


> Facebook is paying billions because they made a bad decision when choosing to grant access to data for research

For clarity, Facebook didn't "choose to grant access to data" specifically to Cambridge Analytica. It used to be the case that Facebook's API would let the client app see all the information a FB user could see once they authorized it, including information about their friends. That's what CA exploited.
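
To make that concrete, here is a rough sketch of the kind of request a pre-2014 app could make once a single user authorized it. The endpoint shape and field names below are recalled from the old v1.0 Graph API (long since shut down), so treat them as approximate, and the token is of course hypothetical:

    # Sketch only: the v1.0 friend-data permissions were deprecated in
    # 2014/2015, so this no longer works; names are approximate.
    import requests

    USER_TOKEN = "token-from-the-one-user-who-authorized-the-app"  # hypothetical

    resp = requests.get(
        "https://graph.facebook.com/v1.0/me/friends",
        params={
            # Fields about the user's *friends*, who never authorized anything:
            "fields": "name,likes,location",
            "access_token": USER_TOKEN,
        },
    )
    friends = resp.json().get("data", [])

The key property is the fan-out: one consenting user could expose data about hundreds of non-consenting friends, which is how Kogan's app turned a few hundred thousand installs into data on tens of millions of profiles.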


> Facebook earned about $172 billion since 2013.

No, they didn't, it's around $56.3 billion [0].

[0] https://www.macrotrends.net/stocks/charts/FB/facebook/net-in...


The relevant privacy issues (Cambridge Analytica) did almost nothing for Facebook. The core product, sharing information with apps so they could be social, was a bad idea and never took off. It was shuttered in 2014 (a little after Kogan made his app and extracted a bunch of data). So of that $172B, basically none of it came about as a result of the product that Cambridge Analytica (and others) used.


> Facebook earned about $172 billion since 2013. Do you think Facebook would have earned $167 billion while behaving legally?

Considering the fine seems to be primarily based on something they stopped doing in 2014, yeah, I think they'd have easily earned $167 billion without doing it.


> Facebook earned about $172 billion since 2013. Do you think Facebook would have earned $167 billion while behaving legally?

> $5 billion is ... an order of magnitude off

So you're saying 30% of their income came as the result of illegal activity? That's a pretty strong statement to make. What proof do you have of that?

Do you want to punish Facebook for doing things that are legal (but arguably not ethical), or do you want to punish them for doing things that are illegal? The former means you're essentially advocating for an ex post facto law, and it's easy to understand why some people would see that as unfair.


The fine should be high enough to nullify any profit they made from their malfeasance, plus some amount as punitive damages to discourage the behavior in the future.


Finding the right fine for actions like this is hard.

On one hand, we (as a society) want it to hurt, to cause the company real pain. On the other hand, we don't want to actually destroy the company. This leads to a "cost of doing business" problem, where evil practices become a preferred path for management, if evil is sufficiently profitable.

The underlying problem is that corporations are NOT people, whatever the law says, and the corporation itself has no inherent ethics or morality. The fire doesn't choose to burn the forest, it just does.

edit: I suppose the problem in part is society. We see a beauty and symmetry in capitalism, so we assume beautiful == good, a philosophical failing going all the way back to the ancient Greeks. Just because it's beautiful and useful doesn't mean it's "good" in a moral sense.


The size of the forest fire matters. Facebook is probably the first company in history that can cause planet-wide forest fires at a scale and speed never possible before.

That said, I think they finally get that, after a long period of total denial. Not because of the fines and regulatory action, which have come too late, but just from all the unintended, unignorable consequences that have piled up.


Yeah. The loss of long-term customers over privacy issues, addiction issues, and general creepiness is probably a greater concern to them than a fine.

I quit using Facebook because I realized it's actively unhealthy for me, and I've been "healing" since. They don't want the trickle of middle-class tech nerds leaving the platform to become a flood.


"We don't want to actually destroy the company"

It would be bad for investors, but... would it be that bad overall? I guess a replacement could theoretically be worse, but it would have to commit to that ideal and not be dissuaded by the fate of its predecessors.


It's a fair question, something we should be asking ourselves as a society.

But as I suggested earlier, we're still stuck with a society where a lot of people, perhaps even a majority, find corporate capitalism to be morally righteous, not merely elegant and effective. Corporate life is more sacred than human life in our world, sadly.


Yet the stock is up almost 10% in after-hours trading... the fine is not nearly enough to discourage criminal wrongdoing.

https://finance.yahoo.com/quote/fb?ltr=1


It's almost as if the company reported earnings that beat expectations...


Not only that, but it looks more like the government asking for its cut of the profit FB has made.


"The Silicon Valley company and the F.T.C.'s consumer protection and enforcement staff have been in negotiations for months over a financial penalty"

Honest question: What gets negotiated in this kind of settlement? What leverage does Facebook have in this situation to say, "no, that fine is too high"?


I'd assume it would look something like:

"No, that fine is too high. We'll settle this in court(s) since we think we can do better."

Now the FTC is facing a potentially big delay and a risk of no fine if the courts eventually hand Facebook a favorable decision.


> Now the FTC is facing a potentially big delay and a risk of no fine

The risk is much higher than that. There's a risk that a court rejects the entire justification for the fine, greatly reducing both the FTC's ability to impose future fines and its power in general.


If Facebook is being charged this much for their (pretty serious) privacy issues, Equifax deserves to be shut down and have their lead executives face jail time.

I'm not defending Facebook, but I feel like whenever a huge financial institution does something wrong, it's treated as a protected class, while other companies get hit pretty hard (justifiably).


I feel like we need something better than fines. It really seems like fines are just becoming part of the cost of business to people with sufficient amounts of money.

That could also just be the angry techno-cynic in me though.


Can someone explain what happens to the money when a big company pays a government agency this much? Does it go into the federal budget, or does the FTC use it for its own purposes?


For US gov't, the Federal Treasury. Victims may also be compensated.

https://www.sfgate.com/business/networth/article/When-govern...


Good. They're a scummy company offering a scummy service to a society of self-absorbed users eager to buy the b.s. others are selling or vice versa. Commoditizing people like Silicon Valley companies have done is criminal nearly everywhere but the USA.


This is pocket change for them. It reminds me of how they fine oil companies $50,000 and another $450,000 for violations. When they are making billions off of violating user privacy, and will continue using that user data for future business, this fine is nothing to them. It's like I manipulatively steal $10 from a million people and only lose a few bucks; I still profited.


$5,000,000,000 reminds you of $50,000? A fine of 1% of Facebook's value seems a bit more than pocket change.


What? FB's value naturally changes by more than 1% on quite a lot of days each year, without this fine.

Put it another way: if you took all of the daily market cap changes of FB's stock, but added -1% to one of them, could you pick out which one it was?
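
For fun, a quick back-of-the-envelope simulation of that question. Assume, purely for illustration, that FB's daily moves look like ~2% Gaussian noise (roughly the right order of magnitude), and hide a one-off -1% hit somewhere in a trading year:

    # Can you spot the fine among ordinary daily moves?
    # The 2% daily volatility is an assumption for illustration only.
    import numpy as np

    rng = np.random.default_rng(42)
    moves = rng.normal(loc=0.0, scale=0.02, size=250)  # ~1 trading year
    fine_day = int(rng.integers(250))
    moves[fine_day] -= 0.01  # the fine, taken as a single-day -1% hit

    print(f"the fine landed on day {fine_day}: {moves[fine_day]:+.2%}")
    print(f"days that dropped more than 1% anyway: {(moves < -0.01).sum()}")

Under those assumptions a -1% shift is only half a standard deviation, so dozens of ordinary days show a bigger drop than the fine itself. No, you couldn't pick it out.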


How does that make it not matter? My savings in the stock market do the same thing, but if I got a fine/bill that was 1% of my net worth, it would still be a huge deal.


If your net worth's random day-to-day changes often cover the fine amount, it is very different from being, say, a pensioner on a fixed income.

Having had days where my net worth changed by an entire year's salary, I can say it makes a huge difference to how you see it.


It is $5 billion, but that's nothing considering they make $15 billion and will make more in the future using these unethical practices. It's like me stealing your car and only being fined the price of a wheel, while I still get to keep the car.


1% of anything's value is pocket change. It's not about how much money it is to you or me, but how much money it is to Facebook.

And it's not a lot.


It's interesting to consider the psychological aspect here. By publicly saying they expect a fine in the range of $3-5 billion, Facebook increases the chance that the regulators ultimately decide on a figure in that range or close to it. This is an example of the 'anchoring effect': https://en.m.wikipedia.org/wiki/Anchoring

Given this effect, Facebook has an incentive to 'low-ball' their estimate.


> Mobile advertising revenue represented approximately 93% of advertising revenue for the first quarter of 2019, up from approximately 91% of advertising revenue in the first quarter of 2018.

holy shit


...and $FB is up 6% in after-hours trading.


Everybody was doing it... Yelp, LinkedIn, Facebook, and a few other big networks that are no more: https://blog.codinghorror.com/please-give-us-your-email-pass...


You read the headlines all the time about the EU fining whatever tech company some insane amount. I'm curious, though, whether Google, Facebook, etc. actually pay the amount. I would expect them to fight these judgments tooth and nail and do everything they can to avoid complying.


Since the fine hasn't been decided by the FTC yet, I am not sure I see the point of declaring a number.

For instance, if the FTC was going to be happy with, say, a $4bn fine, this tells them that FB is happy to pay $5bn.

The only reason I can think of is that FB is quite sure the fine won't be less than $5bn (maybe the FTC is asking for significantly more), and they are trying to price-anchor it. Maybe this is a way of telling the FTC that FB will accept something close to $5bn without any lawsuits or other appeals: don't try to negotiate for more.


This would be awesome if it applied to security breaches as well. Not sure how you would fine OPM, but there are a ton of consumer companies that have leaked data like a sieve.


[flagged]


What? Of course you can.

Just look at any industry that puts people in danger when its practitioners make technically incompetent mistakes.

Even the most basic electrician is held to a higher standard than Facebook in this regard...


Surely you can - that could amount to criminal negligence, which is definitely a crime.


Yes, you can and should. But how do you define negligence in IT? We don't have the same codified norms as civil engineering, so it'd be very subjective.


I feel the best test would be "did you get breached? what data was taken?" If the answers are "yes" and "PII", you were collecting data that you didn't have the capability to protect, and should face a fine.

There is a lot a company can do to protect data. If they demonstrate that they were following all those best practices re: encryption, storage, least privilege, etc., then that should absolutely factor in.

At the least, hold accountable the people who store stuff in plaintext and leave it on publicly accessible storage.


That is not an excuse. Imagine not fining people for reckless driving.


Facebook was evil because it was a "walled garden". And after Cambridge Analytica it's evil because "it wasn't a walled garden."


They should take that $5 billion and use it to fund an alternative to Facebook.

(And one that is smart about how to approach it, given Facebook's network monopoly; i.e., it should allow you to "use Facebook" in the same way I "use Yahoo mail" when I communicate with a Yahoo mail user from my Gmail account.)


I want a government run Facebook as much as I want a Facebook run Facebook.


I didn't say a government run Facebook.

You're using an internet that came from government funding, btw.


Great news! Now that investors know how much the bank robbers have to return as a form of punishment, they are on their way to the next bank. The stock is up, breaking $200 after hours, and we will see you all +40% in a few years at the next "punishment pit stop".


Why doesn't Zuck just quit? He's so rich. I would have fucked off and retired so long ago.


Because of the global power he wields which he would forfeit by quitting.


Someone might use it against him.


I would guess because he truly cares about making the world more open and connected and isn't in it for the money.

He has also pledged to donate all his wealth, along with Bill Gates and Warren Buffett.


>> Why doesn't Zuck just quit? He's so rich. I would have fucked off and retired so long ago.

> I would guess because he truly cares about making the world more open and connected and isn't in it for the money.

It's more likely that he just likes being powerful and in charge of something big. If he just "fucked off and retired" the power he'd wield would be much less.


Well, it's a nice way of not having your wealth taxed, isn't it?

Then, when your children are all grown up, you can get around those promises you made to the pesky public about not giving your privileged children billions of dollars by simply making them life-long directors of your charitable foundation.

Besides, some of the most evil and warped people in history were convinced they were somehow making the world better even in the face of overwhelming evidence to the contrary.

We call these people ideologues and fundamentalists.


I can't reconcile that with all of the evil shit Facebook does.


Because he has one of the coolest toys in the world.

Really, isn't running one of the biggest companies ever created a better retirement hobby than collecting stamps?


Still waiting for the day when a social network is based not on identity but on real, useful content.

More than that: treat email addresses just like passwords. That means forcing users to change email addresses regularly so they're not leaked.

We need a better email system.


As a Facebook ads customer, I'm disappointed that Facebook sucks so much when it comes to security. I pay lots of money to build a community on my pages; I don't want my real estate to lose value because they are negligent.


This bump in the road aside, it's another $blowout$ quarter. Pretty impressive.


The diffusion of responsibility here is pretty galling. It's not Facebook, it's employees at Facebook. Facebook is people, not a person. If nobody steps up, then the leaders need to take responsibility.


Where exactly does that $ go?


To put the fine in context: Facebook reported $2.4B net profit for 1Q19, and that includes a $3B provision for legal expenses related to the FTC inquiry (i.e., profit would have been roughly $5.4B without the charge).

This is a material but hardly back-breaking fine.


Will the money go into some government black hole? Seems like a waste of resources. How about requiring Facebook to offer better services or open-source more tech instead?


Let's just say someone is building a wall


Without new laws affecting the rest of us? Sounds good. More bad-company enforcement is better than more internet legislation on these issues at this time.


You can fine FB all day and it probably won't change a thing, but what if execs could be held personally liable for said transgressions?


https://finance.yahoo.com/quote/FB/key-statistics?p=FB

$55 billion in revenue, $29.23B EBITDA, and they get a $5 billion fine for doing a lot of scummy but profitable stuff.

The best part for FB? Two years from now, FB will say to the FTC, "you already fined us once... the largest fine ever, blah blah blah."


How is that good? Next time they will fine them more.


They can really do whatever they want and get away with it. I can't say I'm surprised.


Extremely non-deterrent.


Too big to fine.


Curious as to where exactly these fines go?


Shut 'em down. That's the only fix that will actually fix the core problem.


I read this as $58 for several seconds, making "5B" a suboptimal abbreviation.


not enough


Isn't that just like 1 month's worth of revenue? That amounts to absolutely nothing in today's world, not even an incentive to reduce the bad behavior.

Jail time for lawbreaking executives needs to happen. The financial penalties do not dissuade bad behavior or act as a deterrent, so the same crimes will continue to be committed.

Are there any peer-reviewed studies looking at deterrence of financial/tech/privacy crimes in the modern era? I would expect that in every case the penalty is lower than the profit, making it a continued incentive to break the law.

The only thing these people truly value is their time on this Earth, free to breathe fresh air. If they risked jail time when breaking these laws, I think we would see a reduction in intentional and repeated lawbreaking by large corporations.


I don't know, man; 20% of your annual income would be seen as a sizeable fine. That's about 20% of their yearly profits, and it wipes out most of their earnings for Q1.


In the long term, it's still nothing at all for Facebook. Their stock will bounce back.


20% of $100,000 is much more significant than 20% of $100 billion.


What crimes would they be jailed for?


Could they be prosecuted for violating the Identity Theft and Assumption Deterrence Act, which makes it unlawful to "knowingly transfer[] or us[e], without lawful authority, a means of identification of another person with the intent to commit, or to aid or abet, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law"?

There's an element of intent that might be hard to prove. Intent is legally defined as "the decision to bring about a prohibited consequence." If you know your actions will result in prohibited consequences and take them anyway, you are deciding to bring about a prohibited consequence.

Conspiracy might fit, too. You've got 2 or more people intentionally agreeing to these practices that they know will result in identity theft and then taking action to put those practices into use.

I guess they'd have to start with somebody who provably had their identity stolen as a result of the practices, though.


1 month of revenue is how many months of profit?


That misses the point: it's effectively a (cheap) speeding ticket for bad behavior over the life of the company. It would be a bargain at twice the price from their point of view.


Yeah, and it might be that this slap on the wrist prevents any real punishment from ever being dealt out - that's what pisses me off whenever the government enters into a non-prosecution agreement, or a fine without admission of wrongdoing... it unjustly lowers the company's outstanding potential liabilities.


1 month of pay is a lot more than a speeding ticket.


Finland, Switzerland, and the UK are the first three places that pop into my mind where speeding tickets are calculated based on the income of the offender.

You'd have to be caught going way above the limit to be fined above 1 month of pay, but it's possible.
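
As a rough illustration of how an income-scaled "day fine" scheme works (the constants below are invented for the example, not any country's actual statute):

    # Toy day-fine model: the offender pays some number of "day fine"
    # units, each worth a slice of their daily disposable income.
    # All parameters here are made up for illustration.
    def day_fine(monthly_net_income: float, units: int) -> float:
        daily_disposable = monthly_net_income / 30 / 2  # assume half is disposable
        return units * daily_disposable

    print(day_fine(3_000, 60))    # median earner, serious speeding: 3,000
    print(day_fine(300_000, 60))  # high earner, same offence: 300,000

The same offence costs the high earner a hundred times more in absolute terms, which is the mechanism behind the famous six-figure Finnish speeding tickets.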


I think if you were talking about such a large speeding ticket, you wouldn’t use a speeding ticket as a metaphor for a small fine...


Even worse, in this model they can effectively just build the cost of the fines into the business model. It's just a cost of doing business at this point.

I'm sorry, but this shit isn't going to end until the U.S. actually punishes white collar crime. Fines _do not_ work.


According to YCharts, Facebook's 2018 revenue was $55.8 BN and their profit was $22.1 BN, so 1 month of revenue is 2.5 months of profit.
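
Spelling the conversion out (a quick sanity check using the YCharts figures above):

    revenue_2018 = 55.8e9  # YCharts annual revenue
    profit_2018 = 22.1e9   # YCharts annual net profit
    fine = 5e9

    print(fine / (revenue_2018 / 12))  # ~1.1 months of revenue
    print(fine / (profit_2018 / 12))   # ~2.7 months of profit

So a month of revenue is about 2.5 months of profit, and the $5B fine itself comes to roughly 2.7 months of 2018 profit.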


They are fairly high margin relative to most industries, so probably less than 2 months of profit would be my guess.


If only there were a way to find out how much profit a public company makes, instead of going three-deep in a thread with "would be my guess" numbers...

FWIW, FB quarterly profits Q1, Q2, and Q3 2018 were 5.1, 5.1, and 6.9 billion, respectively: https://www.statista.com/statistics/223289/facebooks-quarter...


There's a place for estimates with reasoning. My intention was not to give the exact answer, which is often not what's needed, but to help (in some small way) someone who doesn't know how to get there build the reasoning that might get them enough of an answer.


And they'll shrug at $5b ...

A fine is not enough in this situation ...


Get off Facebook. That's the only message Zuckerberg & Co will understand.



