Instagram photos reveal predictive markers of depression (springeropen.com)
215 points by mcone on Aug 8, 2017 | 87 comments



TL;DR - If you post a lot of gray pictures of yourself with a sad face that nobody likes or comments on, you are probably depressed.

They used the color of the photographs, number of people in the photos, level of engagement, and happiness rated by Mechanical Turk to determine depression.
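For intuition, that feature set could be sketched as a simple per-post feature vector. The field names and types below are illustrative assumptions on my part, not the paper's actual pipeline or variable names.

```python
from dataclasses import dataclass

# Hypothetical feature record for one Instagram post, loosely following
# the features described: color statistics, number of faces, engagement,
# and a crowd-rated happiness score. All names here are made up.
@dataclass
class PostFeatures:
    mean_hue: float          # color: bluer/grayer photos reportedly skew depressed
    mean_saturation: float
    mean_brightness: float
    face_count: int          # number of people detected in the photo
    likes: int               # engagement signals
    comments: int
    turk_happiness: float    # Mechanical Turk happiness rating, 0..1

def to_vector(f: PostFeatures) -> list[float]:
    """Flatten a post's features into a plain vector for a generic classifier."""
    return [f.mean_hue, f.mean_saturation, f.mean_brightness,
            float(f.face_count), float(f.likes), float(f.comments),
            f.turk_happiness]
```

Any off-the-shelf classifier could consume vectors like this; the point is only that the inputs are mundane, easily computed signals.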


Wait so they didn't actually go back and see if the people they identified as depressed had a diagnosis of clinical depression?


What, you thought this was an actual scientific study?


Ok I went back and read more:

"Data collection was crowdsourced using Amazon’s Mechanical Turk (MTurk) crowdwork platform. Separate surveys were created for depressed and healthy individuals. In the depressed survey, participants were invited to complete a survey that involved passing a series of inclusion criteria, responding to a standardized clinical depression survey, answering questions related to demographics and history of depression, and sharing social media history. We used the CES-D (Center for Epidemiologic Studies Depression Scale) questionnaire to screen participant depression levels [26]. CES-D assessment quality has been demonstrated as on-par with other depression inventories, including the Beck Depression Inventory and the Kellner Symptom Questionnaire [27, 28]. Healthy participants were screened to ensure no history of depression and active Instagram use. See Additional file 1 for actual survey text."

So it sounds like they found depressed and healthy subjects via MTurk and verified them using surveys. I guess that's slightly more rigorous than just "people on MTurk thought the person in this photo looks depressed"


It literally is a peer-reviewed scientific study. Middlebrow cynical dismissals are boring.


a peer-reviewed scientific study at the height of the replication crisis


Hahaha... Mechanical Turk, I was one of those. The xxx ones were at least entertaining.


What kind of tasks were you hired to do on this latter topic?


Tagging scenes: genders in the scene, positions, etc...

I'm curious how ML would do at that; not so much the how, but it would be cool to train a model and see the accuracy. Also with thumbnails that capture the viewer's attention.

The other jobs were brute searching sites in hopes to improve SEO.

Transcribing photos of receipts (funny, those ads: some poor schmuck being paid 12 cents per 10-minute job).

Oh, I remember an awkward one: it was some social site's user photo uploads, and I had to tag images with babies. So weird to see so many 'public' yet seemingly intimate photos. Not nude, just random people's photos.

I did this for a whole week, as much as I could do sanely, and made $55. Not worth it. Washing plates makes more money.


> The other jobs were brute searching sites in hopes to improve SEO.

I'm curious how this works. Do you basically search for specific keywords and click on specific URLs in order to signal to Google that the URL is more relevant?


That seemed to be the basis of it, but this was a couple of years ago or more, say 2010-2012, something like that. Man, that was a while back, wow.

Yeah, you had to search some term and find the link. It seemed to work, as the links would usually be beyond the first page of results, or at least not #1 but say #4 on page 1.

But it could be biased, you know, based on your own search history. I often check externally with, say, Tor to see how sites rank from viewpoints other than my own.

Anyway, hope that is helpful. You could also just sign up and look around. I think I still have a 99.99% satisfaction rate; a lot of jobs require something close to that to apply, but it is shit work. Some go up to $1.00+, woweee, but those take like half an hour to do. It's ridiculous.


Thanks for replying! I suspected that people did that but never had confirmation.


"The more comments Instagram posts received, the more likely they were posted by depressed participants"


In general I call this the Magarshak score:

How many comments vs shares does a content post have?

Comments - disagree with it, want to correct something, controversy

Shares - agree with it, want to spread it more widely, echo chamber
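One hedged way to sketch this idea in code, using my own formulation rather than anything the commenter specified: return both a comment-vs-share leaning and the total volume, so that posts with identical ratios but different volumes remain distinguishable.

```python
def magarshak_score(comments: int, shares: int) -> tuple[float, int]:
    """Hypothetical sketch of the 'Magarshak score' described above.

    leaning > 0: comment-heavy (disagreement, controversy)
    leaning < 0: share-heavy (agreement, echo chamber)

    Returning the volume alongside the leaning keeps 0:0 and 100:100
    apart: they have the same ratio but very different meanings.
    """
    total = comments + shares
    leaning = (comments - shares) / total if total else 0.0
    return leaning, total
```

For example, `magarshak_score(100, 100)` and `magarshak_score(0, 0)` share a leaning of 0.0 but differ in volume, which is the objection raised further down the thread against calling it a ratio.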


I think this works better for something like Facebook; Instagram comments tend to be compliments.


Yeah, it's really weird to me, coming from the rest of the Internet, to have random people I've never met say things like "Beautiful", "Great work", "I love it", etc., instead of, say, what you'll get in the comments of a YouTube video.


Sadly, if it's just a single compliment with no relation to the actual image posted, it's likely a bot doing the commenting. This is a common way for people to boost their follower count.


That's all I ever see on my employer's blogs and announcements, and is far more depressing to me than any honest criticism would be.


Wouldn't that be the Magarshak ratio?


No because 100:100 and 0:0 would be equal in value but far different in meaning. It must be a score and not a ratio.


Yes, but that's not what this ratio purports to measure.

Also, the fact that you wrote n:n with the colon in between would make it by definition a ratio.


And in 10 years, this will be a part of your Citizen Score.

I should burn this nickname and be more diligent about persona management. Or get off social media entirely.


I'd bet that this:

"Using modern machines/VMs, C64 demoscene devs are able to create content and develop new efficient algos for the old hardware that probably wouldn't have been possible using only the old hardware alone. They can develop algos and art content using sophisticated tools and backport to the old hardware."

Is just as true today as is this:

"Using invasive data mining, insurance companies are able to make more accurate inferences about their high-risk groups that probably wouldn't have been possible using only the data they are legally allowed to use to set rates. They can use sophisticated tools offered from data brokers to make inferences and then backport them into policy changes that meet the current guidelines/regulations for the industry."

In either case I don't know enough to explain what "backporting" entails. But there's just no way insurance companies aren't using this kind of invasive data already.


> But there's just no way insurance companies aren't using this kind of invasive data already.

I declined an offer for a role exactly like this. Disgusting stuff.


On the one hand I agree with you--sleuthing for information is shady, esp. if it's without consent. But more accurate pricing is a "good thing" for most people, otherwise adverse selection makes everyone worse off.


I'm pretty sure the only reason social media isn't incorporated into credit scoring is that it's too difficult to reliably attach to a given individual without help from Facebook, etc. FB definitely does not want to become a consumer credit reporting agency, so their ability to cooperate without triggering the FCRA is limited.


Hmmm... I'm sure FB and all other social media are afraid of being targeted by regulation in case such data becomes highly predictive and gets used by rating agencies in ways that cause actual financial harm to a person (e.g. by increasing their mortgage interest rate). I wonder if that's why they have been so aggressive about self-censoring and such.


I used to think similarly; these days I tend to think the fear itself is feeding the system, along with a good deal of 'governments do not scare me'... the Western European ones, that is. Heh.


> Or get off social media entirely.

That will also be part of your Citizen Score, or it will impact your life like having a bad Citizen Score. Kind of like how you can't get a loan because you haven't had a loan before and you don't have a credit rating.


It certainly will be as long as people sit around moaning about what is going to be imposed on them rather than asserting any opinions about how society should work.


I'd rather speak my mind and be judged for it than homogenize myself for acceptance.


black mirror season 3 episode 1!


> Or get off social media entirely.

Wouldn't that be a bigger red flag?


Yeah, you need to have a carefully curated, carefully boring and mainstream social media presence. As the Laundry requires of its employees.


Or you can pay us for some licenses of 'i-normal', the new program that will post carefully boring and mainstream messages on social media in your name! Your kids need it if they want to work!


Red flag of what, though? Eg, I stay away from social media, and create burners for just about everything (rotating burners semi frequently depending on how many random details I post, etc) - so what am I an indicator of? I try to be a Ghost.

I'm sure a ghost is an indicator for some things, but the majority of it is just speculation right? I have trouble thinking of how a lack of information about someone could be harmful as a source of information.

Now, I could see it becoming harmful in that I have to have some type of Citizen Score to apply for jobs, be accepted for loans, get insurance, etc. That's quite reasonable, imo. Yet it still seems different from being a source of information, e.g. this person is unstable, or this person is depressed, etc.


Your ghosts have ip addresses and the snoops have taps, so really it's only your reputation with the other users you are running from.

I play the same game. :)


Sure, but it's better to play than to not (if you care, of course). It's just like when I browse, always in private mode haha.


> Red flag of what, though?

If your employer/government/etc asks you for your facebook/google+/instagram/snapchat/etc account and you say you don't have any, it might come off as suspicious, as if you are trying to hide something. The same goes for your banks/financial institutions/etc.


Well I guess that's what I meant by information vs actually being harmful. I expect it to have negative effects to loans/employers/etc.


I have none of those accounts, ghost or otherwise. I don't intend to either. Does this doom me forever to be a suspicious person?

Why can't it simply indicate that I have no interest in sharing my personal life with those companies?


Not just photos, but in general I've found that the people who are continuously posting how happy their lives are and how everything is great are most likely the ones who are utterly miserable.

Lately I have been seeing posts from a woman who is going through a bitter, ugly divorce, and it's astounding how she represents everything publicly via Facebook, when I know the full story due to a family member being involved.


Someone who is miserable is not necessarily someone who is suffering from depression.

This research is about detecting a clinical condition and reducing the cost of health programs, not about judging other people's lives based on opinions. Suicide is the second-leading cause of death among teenagers. But some people are just "not a happy person". That doesn't mean they are unhappy, sad, or suffering from depression.

I really like this comic from The Oatmeal that talks about that: http://theoatmeal.com/comics/unhappy and the essay the comic is based on: https://outline.com/r6q4Ga


Would it be acceptable for Facebook to try to make you less depressed by changing what you see on your feed?


They do this, or at least have done it. As an example, if a "digitally depressed" person has exhausted all possible suggested friends, Facebook proceeds to suggest friends that it assumes will be sexually attractive to them.



Whether or not it's acceptable, I think we'll see apps/bots employed to do this. I'm not sure there's much in that process that makes money for Facebook, so big players might avoid it.


Would be great to have this as a service where you could analyze your own instagram feed for signs of trouble.


Clippy, 2018: It looks like you're depressed, can I help you with that?


166 isn't really a representative sample, is it?


User, it appears you are trying to upload a photo displaying negative emotions. Please upload a new photo expressing a happiness quotient of at least .75 as per terms and conditions.


It's possible and mutually beneficial to instead send that user targeted ads for therapy, a suicide hotline, etc., which is effectively what happens.

The cynical dystopian insinuation around technology, on a technology site no less, is getting old.


It's not about technology. It's about power, capital, and society.

We already know that Facebook/Instagram can manipulate your real-world emotions and thoughts by what it chooses to show you.

We also know that susceptibility to ads for different products varies depending on emotional state and context (pre 'surveillance capitalism', it was well known that you could get much better responses to aspirational products as the weekend approached, and to products like make-up on Sunday evenings as the work-dread sets in).

The question is, if you have a load of high-value inventory to shift, but your audience isn't susceptible to it, why not treat this as an optimization problem and make them?

It's the right thing to do for your shareholders, completely legal, and near undetectable.


> The question is, if you have a load of high-value inventory to shift, but your audience isn't susceptible to it, why not treat this as an optimization problem and make them?

Susceptibility to action is binary. If the audience isn't susceptible then they won't ever convert, or take action.

I'll never buy a Yoni Egg [1], and no amount of targeting or putting advertisements in front of me will cause me to make a purchase of one.

However there might be a world in which I would buy a box of feminine products, even though I am a man. For example if my daughter is sick or incapable of getting them for herself. In which case, a social or other network knowing that

1. I have a daughter

2. That she is sick

3. What her time based need of these products is

4. That there is a preference for a certain product in this category

5. I can get the product cheaply or easily

Makes my life easier, by saying: "Hey [user] we see that you may be in a scenario that this particular product/service would be helpful right now"

Even though at that point I might not even know that I need to get it when I go to the store, because social anxiety or embarrassment prevents her from asking.

So yes, I believe in an all seeing all knowing AI god that will guide and optimize our behaviors. That is not a sarcastic comment.

[1] https://yoniegg.com/what-are-yoni-eggs/


Your example is laughably contrived, unrealistic, and so full of plot holes that my fingers would tire from typing all my thoughts out. By contrast, the very real privacy invasion damage that invariably occurs in the kind of world that you're advocating for is well documented, and dystopian in the extreme. The never forgetting Internet, the different insurance rates, the illegal files on every citizen etc. I want no part of such a world.


We're a lot closer to it than you probably realize. Actually in some similar use cases we're already there.


But we're in it, right?

And it is a world where non-participation might equate to a negative profile, so the best we can do is understand the enemy and curate perfection, while engaging in a real way only when anonymous (so far as that can even be achieved).


It can be regulated against, more transparency can be enforced. Despite the enormous lobbying from Facebook and Google in the EU, I could imagine it happening there within five or ten years for example.

There's a real tendency on HN to get all Ayn Rand and forget we live in a society. In relative terms, this is a very new business model. Companies will push for more power, and they get it unless people push back. As people close to the problem, we need to see ourselves as responsible for influencing the discussion about type of society we end up with.


> So yes, I believe in an all seeing all knowing AI god that will guide and optimize our behaviors. That is not a sarcastic comment.

We already use much less powerful things, words and images, mostly to manipulate. Plenty of people don't hesitate to try to make someone feel bad or unpopular for not having a product they don't actually need, and your susceptibility doesn't matter; they also don't blink at doing it to a mentally handicapped or psychologically unstable person. But sure, just give marketing more power and at some point it will wrap around to being all nice and fluffy... I'm also not being sarcastic when I say you can live in that world over my dead body. It's not enough for you to want it; you need to want it more than I don't want it, and I doubt anxiety or fear of embarrassment is going to be enough motivation, not by orders of magnitude.


>The cynical dystopian insinuation around technology, on a technology site no less, is getting old.

A technology site is exactly the place for cynicism about technology. The idea of seeing an ad for a suicide hotline on a social media site is even more dystopian than the OP's crack about mandatory happiness.


> A technology site is exactly the place for cynicism about technology.

I disagree, because I think we need to be the idealists and drivers instead of those pushing back, but I also don't think yours is necessarily an unreasonable position to have.

> The idea of seeing an ad for a suicide hotline on a social media site is even more dystopian than the OP's crack about mandatory happiness

I'm not sure I follow why. Ideally an offline social network (like your friends and family) would correctly infer your mental state from interactions and be able to intervene appropriately. A digital social network, where your friends and family, as well as the digital homunculus are all observing your activity should be able to infer your mental state as well, if not better than the offline system and intervene where necessary as well.


The universe of human endeavor is not a line between backwardness and progress. Social media platforms could evolve into more robust means of communication, tools of brutal oppression, advertising-saturated hellscapes, monuments to human vanity, or slightly more advanced contacts lists. They already are most of these things and more. What direction or combination of directions this technology takes will be determined by 'idealists' and 'drivers'. These will not be the people sitting on the sidelines and cheering, but rather the cynical, dissatisfied, and imaginative people who have the capacity and desire to envision a tech landscape dramatically different from the one we currently occupy.


Except the network can't compare with non-members, so it can't even assess whether it itself is part of the problem, and whether the social network committing suicide would be the best intervention of all. That's just unthinkable. It's like trying to make a TV show that makes people happy, instead of encouraging them to turn off their TV which is likely to have drastically better results. You can write a script about turning off the TV, you can have a show about people who hate their TV and are heroes, but you cannot not make an episode, the network has no slot for that.

When I see Zuckerberg's fake, forced smile, I see a person trapped under there somewhere crying for help. I don't even mean this in a snarky way, I don't hate the guy, and I'm interested in the "Bill Gates transformation of being less insecure". But still, anyone near that and oblivious to it should read and listen when it comes to mental health, not talk and certainly not write algorithms that "intervene". If FB was able to want to help with it on any meaningful level, it would either disband or change drastically, and not have "be middleman for everything" as goal number one like so many do.

> idealists and drivers instead of those pushing back

You say this like there is a train, and it either moves or doesn't. Instead it's a multi-dimensional space with a million things that can all change position and shape and color. Technology isn't the question, but who is using it, why and for what.


Or cigarettes, booze, porn, divorce attorney, higher education, jewelry, sports cars, psychiatry, etc...

There are plenty of profitable industries built on the backs of people's mental unrest that have nothing to do with making them well.

Why do you think this technology would be deployed only aligned with the best interests of the users?

My guess is that it would optimize for the most profitable behavior for the owner of the capital.

A free or cheap solution would not have the ROI a palliative / ongoing treatment would.

Why do you think depressed people are almost never given the option of writing gratitude letters, or are given drugs shown to increase the symptoms they purport to treat in the long term?


Agreed. The path of least resistance for capitalism is to manufacture needs in its consumers, then fill them at a profit. People pay for what they have been manipulated to want, not what improves their lives. In Korean the saying for this is "to give a disease and offer the cure" (병주고 약주다).


Search for #depression on Instagram and see what happens.


Yea exactly, that's a great example of putting narrow targeting (self selected really) to good use.


Excellent. But more likely this user will see an uptick in ads for self-help products and antidepressants.


And lose their jobs, mortgages, insurance, etc., to low scoring resulting from data mining.


People who have bought black-and-white posters also bought rope. Order rope now for only 0.42 €.


I wonder more about how this data affects what the people they know see.


I would be surprised if it did. From my limited knowledge of targeted advertising, the ads aren't based on network data but on individual behavior.


Trust the Computer. The Computer is Your Friend.


Please read the article and contribute something useful instead of your off-broadway Black Mirror episode.

The algorithm didn't focus much on emotions of the subjects in the photos.


I thought it was funny.


Being funny is verboten here. 0/5 stars.


It is when the mods wake up; just trying to help you guys out. I've seen things moderated for less. They don't want clever quips, they don't want Reddit.


To be honest I expected it to sit at the bottom of the page half greyed out.


Corollary: Instagram usage makes you depressed!


We used ML and assumptions like: "Group photos are an indicator of happiness" to guess if a person is depressed.

God help the millennials. The only winning move is not to play.


They didn't use that as an indicator of depression. All they did was pass the number of faces in the frame into the model. The analysis afterwards suggested that depressed people tend not to have group photos.


Shit, I don't have group photos :(


I guess I should photoshop more people in my stock facebook placeholder :^(


Do you also have the other markers?

It's not like a single marker determines everything.

It's not even like having ALL markers means one is 100% depressed.

It merely gives you a confidence score in the person being depressed.


On the other hand, if you do have all the markers, maybe you should step back and consider.


Well doh

People who don't hang out with other people are more at risk for depression AND they are less likely to have group photos.

This is called a latent variable.


Not latent, since you can directly measure it. You're thinking of a confounding variable.
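The confounding point can be illustrated with a tiny simulation. Here a hypothetical "sociability" trait drives both depression risk and group-photo counts, so the two correlate without either causing the other. The model and all numbers are invented for illustration only.

```python
import random

random.seed(0)

# Hypothetical confounder: 'sociability' influences both whether a
# subject is depressed and how many group photos they post. Neither
# outcome causes the other, yet they end up correlated.
n = 10_000
depressed_counts = []  # group-photo counts among "depressed" subjects
healthy_counts = []    # group-photo counts among "healthy" subjects

for _ in range(n):
    sociability = random.random()              # latent-ish trait in [0, 1)
    depressed = random.random() > sociability  # low sociability -> higher risk
    # Out of 20 potential photos, each is a group photo with
    # probability equal to the subject's sociability.
    group_photos = sum(random.random() < sociability for _ in range(20))
    (depressed_counts if depressed else healthy_counts).append(group_photos)

avg_dep = sum(depressed_counts) / len(depressed_counts)
avg_healthy = sum(healthy_counts) / len(healthy_counts)
print(avg_dep, avg_healthy)  # the depressed group averages fewer group photos
```

The classifier sees only the photo counts, but what it's really picking up is the shared driver: exactly the "people who don't hang out with other people" point made above.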



