
No.

You probably don't need a degree in experimental psychology to understand that forced recruitment gives you invalid, biased, low-quality data. These surveys are annoying to users; more importantly, the data they provide has little if any value to the organization behind the survey, no matter how well the survey is designed.

I don't even want to start on how badly designed and biased some of these surveys are.

The only reason the survey business is still in business is that it is so distant from revenue. If your ad campaign doesn't boost sales, you pull the ad. Bad survey's uselessness are not so explicit, so people still trust that "something can be gained".

No, stop creating useless data from online users. If you want insights on usage, do analytics on server logs. If you want feedback on user experience, run a serious experiment and plan to spend some ten thousand bucks on it.
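
To make the server-log route concrete, here is a minimal sketch (assuming a standard combined-format access log; the file name is just a placeholder):

    import re
    from collections import Counter

    # Very rough parser for a combined-format access log: grab the request path.
    LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP')

    counts = Counter()
    with open("access.log") as f:        # placeholder path
        for line in f:
            m = LOG_LINE.search(line)
            if m:
                counts[m.group(1)] += 1

    # Which pages do visitors actually hit, and how often?
    for path, n in counts.most_common(10):
        print(n, path)

That alone tells you what users actually do, with no self-reporting involved.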




You seem to be evaluating this company based on either A. how it would work for you personally, as someone who isn't interested or B. an inaccurate assessment of how it would work for other people who aren't you. When discussing a company you typically try to avoid both.

For instance, many girls like dolls, but if you don't like dolls that doesn't make dolls a "No." business, and it's not illuminating to hear why dolls are icky. A better discussion would be how and what makes a new doll appealing (or unappealing) and whether it can compete with Barbie.

Clearly the online survey industry is making some money. Certain parts of the web, which may not target you, rely on surveys. Combining surveys and paywalls, if not perfectly novel, is at least interesting. The combination raises a host of issues and potential problems which your comment ignores.

Your being off-base would not quite matter as much if it wasn't so rude to the team that worked on this. Did they do a good job in their implementation? I don't know, it hasn't come up yet.


My argument is targeted at the whole "survey" business, not this or that dev team in particular. I would much prefer the dev team here do a great job "fixing" the broken online survey business instead of adding to it. If you are part of the dev team, I apologize for the confusion.

Not sure how the "doll" analogy applies here. When a doll sells, even to a small proportion of the population, it sells. When you run an online survey, you gain very little true insight. You will definitely gain more insight from the two alternatives I mentioned (analytics on logs, a properly designed experiment).

Don't get me wrong, I am not saying the whole online survey industry is wrong. Amazon Turk is wonderful, and so is SocialSci. My company actually uses results from Turkers for building models. The premise for that to work is a well-designed experiment with qualified Turkers. A survey that replaces a paywall basically means you do not have a clearly targeted recruitment pool: you get basically anyone who pays your website a visit. Survey results from that, in my humble opinion, are worth about as much as server log analytics.

p.s.: I think you meant "You are being off-base" instead of "Your being off-base" but I really want to avoid discussion on grammar on HN.


but I really want to avoid discussion on grammar on HN.

Good. Because you either have your apostrophe or your verb tense wrong here in your original post:

Bad survey's uselessness are

It's best, in an international community, to let these things slide without comment.


Analytics and experiments will tell you 'what' but will rarely tell you 'why'. That's the kind of data you can collect from quantitative and qualitative surveys.


Your argument relies on the assumption that the data gathered from surveys is accurate and good enough that you can draw appropriate insights from it.

For most purposes, I'd say this is good enough, but this is a whole other can of worms.


"You probably don't need a degree in experimental psychology to understand that forced recruitment gives you invalid/biased/low quality data."

Interesting point. Anecdotally, I am less honest with my selections if I'm taking a survey specifically to get a reward (e.g. a contest, Kongregate points, access to a site), as I usually just want to get it over with as fast as possible.

However, if it has tangible results (i.e. will tailor my experience on the site like Netflix) then I am likely to spend more time choosing honestly.


The honesty of your answers isn't even the only issue. One of the big ones is that the set of people opting in to a survey already demonstrates a characteristic that differs from the set of all people of interest, unless the people of interest are, for some reason, "the set of people opting in to a survey". This alone is enough to muddy or even invalidate the output data.
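
A toy simulation of that opt-in effect (the scores and the opt-in rule are made up purely for illustration):

    import random

    random.seed(0)

    # Hypothetical population: "satisfaction" scores from 1 to 5, roughly uniform.
    population = [random.randint(1, 5) for _ in range(100000)]

    # Suppose unhappy users are twice as likely to bother opting in to the survey.
    def opts_in(score):
        return random.random() < (0.20 if score <= 2 else 0.10)

    respondents = [s for s in population if opts_in(s)]

    print("true mean satisfaction:    ", sum(population) / len(population))
    print("surveyed mean satisfaction:", sum(respondents) / len(respondents))
    # The surveyed mean comes out noticeably lower than the true mean,
    # purely because of who chose to respond.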



There are cases where surveys are useful, but none of them hinge on getting accurate results. The two that come to mind: push polls, and exploiting sample bias to make yourself look good.


I've enjoyed the recent 5-hour ENERGY ad about how many doctors approve of their product. A whopping 73% of doctors recommend low calorie energy products... when measuring the percent of those who recommend energy products.

http://www.youtube.com/watch?v=RCqT3fdAAHQ

While there are abusive uses like push polls and leading questions, there are legitimate uses too.


That example is even worse; 73% of doctors recommend that if you must use an energy product, you should use a low-calorie one.


I always forget how sarcasm is lost online. I meant that as an example of a misleading, bad poll.


Thanks for the interest, and sorry for the delayed response.

We certainly hope that the surveys are not annoying. We know that some people will prefer to pay, but others (like me) would rather take a few seconds to complete a survey than pay. Our hope is that the surveywalls will let users access content that otherwise would have been beyond their reach. And at the same time, quality content publishers will be able to make money off their work.

As to the researcher side, we agree with you that nothing beats revealed user behavior when optimizing web sites or apps. But it's not cheap or even possible to A/B test in other situations.

Suppose that you're a restaurant owner and want a new sign for your building. It's not practical to purchase two signs and see how alternating the signs affects business on different days. And it's also not in budget to spend $10k or more on a traditional market research survey.

Or if you're a politician, you cannot wait until voting day to see which of your various ad campaigns worked in different districts. You need proxy measurements.

Companies already ask these types of questions using traditional approaches like panels and phone polling. We provide a cheaper way for them to do it online. All approaches have built-in biases, and we're working to account for the biases and quality issues in online polls. As you point out, we'll have to do that to succeed.


Thanks for responding to my, as noted by others, rather "rude" critique. I think you guys are building something distinct from the surveys on AmazonTurk or SocialSci, because your recruitment pool is not deliberately chosen. Turkers went to Amazon with the intention of doing a survey properly; your survey might take the user by surprise.

A few suggestions:

1. Hire someone with experience in experimental psychology, I mean, by trade. Not necessarily a PhD, but at least someone with some experience and knowledge of the textbook experimental errors. Just one such person would help the design a long way.

2. Throw in some free analytics tools/results for the organization behind the survey. As you mentioned, restaurant owners and politicians might just want to run a survey for some straightforward result; well, not all surveys have a straightforward result, even when designed well. Some analytics/visualization tool, e.g., manyeyes from IBM, doesn't really take a lot of dev time but would be quite impressive in the eyes of your customers (i.e., the politicians or restaurant owners).

Best of luck, Alex


Thanks again, Alex. I appreciate the comments and great ideas.

We do plan to add polling and survey design experts to our team (I have a stats background but not a polling one). And an analytics platform is coming.

What do you mean by our survey "might take the user by surprise"? Do you mean that it might be surprising to the user to have a survey launch when clicking one of our links? We're testing different "teaser" text for the links. And hopefully we can make it clear that a survey will be coming when you click.

If you're willing to run a test survey, fill out the contact us form on our site, and we'll give you a discount code. Appreciate the feedback.


I'm curious to know what bad personal experiences you've had conducting surveys that led you to dismiss them completely. When was the last time you ran a survey, and why did it fail to deliver the results you were expecting?

Also, can you please take it easy on the tone? See this discussion: http://news.ycombinator.com/item?id=4396747

Edit: Analysing server logs can tell you what people did, not why they did it. At least in my experience, surveying users directly about their motivations and needs has been very enlightening and resulted in a direct increase in sales.


Thanks for the feedback on the tone. See my constructive suggestions in my reply to awenger.

One comment on the survey vs. server log comparison: a survey doesn't give you the "why" behind user behavior either; it only gives you what the user thinks the reason is, and on certain occasions it provides false insights. Server logs are facts: you won't get the why, but you do get concrete observations.


I agree that users don't give perfectly true answers, but discarding their answers as 100% useless or false is silly. You can corroborate survey results with server logs. You don't have to view them in isolation.
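
A rough sketch of what that corroboration could look like, assuming you can join the two data sets on some user id (the file and column names here are invented for illustration):

    import csv
    from collections import defaultdict

    # Per-user page views aggregated from a log export (hypothetical file/columns).
    views = defaultdict(int)
    with open("log_pageviews.csv") as f:        # columns: user_id, path
        for row in csv.DictReader(f):
            views[row["user_id"]] += 1

    # Compare what users said with what the logs show they actually did.
    with open("survey_responses.csv") as f:     # columns: user_id, self_reported_visits
        for row in csv.DictReader(f):
            said = int(row["self_reported_visits"])
            did = views.get(row["user_id"], 0)
            if abs(said - did) > 10:            # arbitrary mismatch threshold
                print("user %s: said %d, logs show %d" % (row["user_id"], said, did))

Where stated behavior and logged behavior diverge, you've learned something about both the users and the survey.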



