This is a cool challenge, but the prize is definitely lacking. I think anyone capable of writing an algorithm of the caliber you're looking for isn't likely to participate. I could be wrong, but I think you're going to have to pony up some serious cash to get developers to take this seriously. Or you could go the more standard route and just hire someone to do the job.
Ping me (b@kaggle.com) if you're interested in running this competition more formally on https://kaggle.com.
We've run hundreds of machine learning competitions and offer a real-time leaderboard to encourage competitive participation, a very active community of data scientists, and many other features that simplify running this type of challenge.
So basically they are asking people to build them an algorithm that will be a critical part of their business, in exchange for a free service that will be based on this algorithm. Right...
Sorry if this was unclear. You own any code that you write for this competition.
The prize is that we'll use your algorithm to validate any matches that you go on. If that doesn't seem worthwhile to you, feel free to pass on this contest.
Do you allow closed source entries? Rewriting an algorithm implemented in someone else's code to avoid copyright infringement is trivial, not to mention inevitable given the differing performance requirements between a contest and a production site.
This is interesting, but given your parameters (predict the most friendships), all you're technically asking for is recall. I'll write an algorithm that has 100% recall: predict that all people become friends with each other.
If this is really a competition (and not just "Here, have fun with our dataset!"), you need to define the rules a little bit more clearly. How are you weighing recall vs. precision? Or are you just looking at % correct labels, where the only two labels possible are "FRIENDS" and "NOT FRIENDS"?
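To make the point concrete, here's a quick sketch (with made-up labels) of why the all-positive predictor is useless despite perfect recall:

```python
# Labels are hypothetical: 1 = FRIENDS, 0 = NOT FRIENDS.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up ground truth for 8 pairs
y_pred = [1] * len(y_true)          # the trivial "everyone becomes friends" predictor

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = tp / (tp + fn)      # 1.0 -- every actual friendship is "predicted"
precision = tp / (tp + fp)   # collapses to the base rate of friendships
print(recall, precision)     # 1.0 0.5
```

Recall is maxed out, but precision is just whatever fraction of pairs happened to become friends.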
Sorry this was unclear. We meant "correctly predict the most friendships"
You get 1 point for each friendship that you correctly predict did or did not occur. In the test data set ~50% of pairs became friends, so predicting "everyone became friends" would get 250 points, whereas a perfect algorithm would get 500 points.
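In code, the scoring rule as described (the 500-pair split here is hypothetical, matching the ~50% figure above):

```python
def score(predictions, outcomes):
    """1 point for each pair where the prediction matches the actual outcome."""
    return sum(1 for p, o in zip(predictions, outcomes) if p == o)

outcomes = [True] * 250 + [False] * 250   # assume 500 pairs, half became friends
all_friends = [True] * 500                # the trivial baseline
perfect = list(outcomes)                  # an oracle that's always right

print(score(all_friends, outcomes))  # 250
print(score(perfect, outcomes))      # 500
```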
I'm updating the README now to make our scoring system clearer.
They're also looking for whether people become friends on Facebook.
The dominant factor here is going to be the rate at which the participants send and accept connection requests on Facebook. Some people send them to everyone they meet, some people never use Facebook.
KPI overfitting, yay!
(The best second-order effect is probably a multi-feature similarity measure between the participants and the person's current Facebook Friends, including graph distance to current Friends. In case anyone is taking a run at this.)
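A minimal sketch of the graph-distance part of that feature, assuming you've reconstructed some friend graph (the graph and names below are made up for illustration):

```python
from collections import deque

def graph_distance(graph, start, targets):
    """BFS hop count from `start` to the nearest node in `targets`; None if unreachable."""
    targets = set(targets)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in targets:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

friend_graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob"],
}
# How close is alice to dave's current Friends ({"bob"})? One hop.
print(graph_distance(friend_graph, "alice", friend_graph["dave"]))  # 1
```

You'd then feed that distance in alongside the profile-similarity features rather than using it alone.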
This would be a little more fun if there were a cash prize. No offense meant, groupers look cool, but you'd probably get some more participation that way.
This is the sort of thing I'm personally very interested in, and I have some pretty novel ideas for how I'd approach it. That said, I wouldn't participate in this because it clearly devalues the industry. You should really rethink your approach.
Developers who are considering participation in this, I'd suggest you build something for yourself with data acquired elsewhere.
> I wouldn't participate in this because it clearly devalues the industry.
People this may be aimed at:
* Experienced devs in boring day-jobs who are seeking some kind of off-time challenge.
* People just getting into ML and want to solve something real.
* CS students with spare time.
You know more about ML than me, but it doesn't sound like they're looking for a cancer cure; just fishing around for a one-off challenge. Or maybe they're taking names for future interview candidates.
> Developers who are considering participation in this, I'd suggest you build something for yourself with data acquired elsewhere.
Relax, dude. If people think this is an interesting problem to solve, what's that to you?
Honestly, I think this is a very cool challenge. As someone who just went on a Grouper last night in Boston and had a great time, I think I just might participate and submit something. Do you have any limitations on how many people can form a team? Personally, I would pair on this with my roommate. He's the big data guy, and I'm the coder.
I just noticed in the FAQ it states, "...several fields have been renamed of course." If I'm understanding this correctly, any real-world conclusions you draw will be completely meaningless, as we're essentially working from a mislabeled dataset.
That's true, but to have the best chance of designing a good method/analysis, I need to know what the variables in my analysis mean. Otherwise, it is tougher to make decisions about what variables it makes sense to include in a model, what sorts of transformations make sense, what sort of approaches might work best, etc.
I would echo this sentiment. Not only are the columns intentionally mis-labeled, but they also appear to be computed, meaning some of the variance inherent to the original sample will have been lost.
1. The data is collected from the user's FB profile or comes from our internal ratings
2. The platinum_albums header is just a joke, we anonymized the data
3. Thanks for pointing that out. There was a bug with a few rows that is now fixed.
Specifically, you should explain all the columns, including:
- Is that the person's height in inches?
- What does the asterisk in certain column-names indicate?
- Why do the pets, platinum_albums, weekly_workouts, number_of_siblings and pokemon_collected values seem to fall in the range of 7 - 8?
Also, this dataset is far too small. There is a single male-male relationship, and that's not going to provide any significant data if we're looking at genders at all.
I would also argue that it's not the best set of metrics to use to determine whether people will become friends. Age and facebook_friends_count might give you some hints, but I seriously doubt that shoe size has as big an impact on the potential for friendship as, say, common interests, shared culture, income class, or other socioeconomic factors.
You write in the README that the mislabeled columns are "from our internal ratings". Can you give any more definite sense of what this means? What kind of things are these ratings based on? What are they designed to reflect? How are they computed (roughly)?