
This kind of thing is what they're talking about.

> It was determined that the bottom 80% of men (in terms of attractiveness) are competing for the bottom 22% of women and the top 78% of women are competing for the top 20% of men

[1]. https://medium.com/@worstonlinedater/tinder-experiments-ii-g...




So, this guy set up a fake profile and interviewed women who matched with him, without disclosing he was doing research? Ethics aside, he doesn't discuss his methodology at all. How did these interviews turn into a Gini curve? How could they, without some heroic statistical assumptions?
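For concreteness, a "dating economy" Gini coefficient is just the standard inequality measure applied to a distribution such as likes-per-profile. Here's a minimal sketch in Python of that calculation, using made-up like counts rather than anything from the linked post:

    # Minimal sketch: turning a "likes per profile" distribution into a
    # Gini coefficient. The counts below are invented for illustration;
    # they are not data from the Medium post.
    def gini(values):
        """Gini coefficient of non-negative counts: 0 means every profile
        gets the same number of likes; values near 1 mean a handful of
        profiles get nearly all of them."""
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        if n == 0 or total == 0:
            return 0.0
        # Standard formula using the sorted (Lorenz-curve) ordering.
        weighted = sum(i * x for i, x in enumerate(xs, start=1))
        return (2 * weighted) / (n * total) - (n + 1) / n

    # Hypothetical likes received by ten male profiles in a week.
    likes = [0, 0, 1, 1, 2, 3, 5, 9, 20, 59]
    print(round(gini(likes), 2))  # ~0.72: most likes go to a few profiles

The arithmetic itself is trivial; the question is where a per-profile distribution like that comes from, and getting one out of a handful of interviews is exactly where the heroic assumptions would have to live.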


It's been studied a lot; I just threw that up as an example since it was posted here a couple of days ago. Here's one of OKCupid's own studies [1]. You can search for others that confirm the same thing over and over again.

1. https://www.gwern.net/docs/psychology/okcupid/yourlooksandyo...


That study shows the exact opposite of what you're claiming. The third figure shows that the vast majority of women's messages go to men they rate as less than 4/5 attractive. The last figure shows that even the least attractive men still got replies to their messages 22% of the time.


You shouldn't be taking Pareto claims on a sample size of n=27 seriously.

There are apple-flavored flat-earth medicine studies with stronger statistics.
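To put a rough number on it: with n = 27, the normal-approximation 95% margin of error on any estimated proportion is in the neighborhood of ±15-19 percentage points. A quick, purely illustrative sketch:

    # Back-of-the-envelope: 95% margin of error (normal approximation)
    # for a proportion estimated from n = 27 observations. The p values
    # are illustrative; none of them come from the Medium post's data.
    import math

    def margin_of_error(p_hat, n, z=1.96):
        """Half-width of the approximate 95% confidence interval."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    n = 27
    for p_hat in (0.22, 0.50, 0.78):
        moe = margin_of_error(p_hat, n)
        print(f"p = {p_hat:.2f}: +/- {moe:.2f} "
              f"({max(0.0, p_hat - moe):.2f} to {min(1.0, p_hat + moe):.2f})")

So if a figure like "78% of women" really rests on 27 interviews, the interval runs from roughly 62% to 94%, far too wide to support that kind of precise-sounding claim.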


It's been shown in plenty of other places, including by OKCupid, which is why I qualified it with "kind of thing." It's common knowledge at this point.


There's a lot of common 'knowledge' in social psychology, including from studies done by serious, reputable researchers, that turns out not to be so. This is a big enough issue that it's been termed the "replication crisis". So, yes, maybe. Maybe not.


Being shown by a lot of studies means it's been replicated a lot.


Only the ones with valid statistics count.

Some guy dug up a bunch of studies done in the 1990s on abductions by Satanists. Remember that, from 60 Minutes?

The punchline was that there had never actually been such an abduction, but there were 30+ studies.

And that's replicated a lot.

The quality of the replication matters.


Alright, keep that goalpost moving then. In the meantime, I'll go with the best we got.


There's no goalpost moving. It's the same thing I said originally.

You shouldn't be swayed by inappropriately small sample sizes. Your response was "well what if I had a lot of them?" My answer was "still no."


> In the meantime, I'll go with the best we got.

This isn't even close to the best we've got.


My original claim was that something like that is true and is what someone else was referring to, and I gave you a lazy link; you complained about the guy's terrible methodology and lack of replication. I told you that there are dozens of studies and data analyses out there (some on huge data sets [1]), and your response is that oh no, they have to be quality. That's a goalpost move from "this is bad" to "all of those (that I haven't even seen) are bad."

> This isn't even close to the best we've got.

If you've got that, then show me and I'll have a look. Until then I'm going with the studies I've seen, which all seem to say roughly the same thing (despite widely varying sample sizes and quality of methodology).

1. https://medium.com/@worstonlinedater/tinder-experiments-ii-g...


> Until then I'm going with the studies I've seen, which all seem to say roughly the same thing (despite widely varying sample sizes and quality of methodology).

There are none with acceptable sizes. You're just talking.





