The authors are from a prestigious school, CMU. Still, I was curious to know who funded this research. Alas, not surprisingly, it was the MPAA.
The CMU lab is called the Initiative for Digital Entertainment Analytics (IDEA), and according to CMU's press release, "The creation of IDEA was made possible through a gift from the Motion Picture Association of America (MPAA)":
http://www.cmu.edu/homepage/society/2012/fall/entertainment-...
"The MPAA was interested in CMU in part because of our unique strengths in the performing arts, computer science and technology, and management," noted Smith.
Looks like the MPAA also had something else in mind when they funded this lab.
I actually read this paper without knowing it was sponsored by the Motion Picture Association of America and without realising that it wasn't peer reviewed. I cringed at the thought that this is the level of social science.
In my profession (law), proving loss of revenue is so difficult that it is in many cases considered impossible, even in civil cases. You have to take many factors into consideration, and you can't just automatically ascribe declining or increasing revenues to one fact, even if it seems plausible. You can't just make up a model full of assumptions and attach fancy names and mathematical symbols to it. No decent judge would fall for that.

To start with, you would have to make a year-on-year comparison spanning 5-10 years. Then you would have to make a convincing argument for why the revenues didn't decline or increase due to one of the many other possible factors, such as the popularity of the movies, internet penetration, broadband access, weather, macroeconomic trends, changes in taste, normal seasonal changes, the supply of alternatives (cable TV shows, sports games, etc.), changes in payment options for the legal movie outlets, or other changes in business model or marketing that could explain why the legal movie outlets sold more after the shutdown.
If the product had been more generic, that might have been possible, but with movies the popularity of the supply varies so much that changes could often be ascribed to differences in popularity.
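To make the confounding problem concrete, here is a minimal Python sketch (all numbers are made up) in which movie popularity is, by construction, the only driver of sales, yet a naive before/after comparison still "finds" a shutdown effect:

    import numpy as np

    rng = np.random.default_rng(0)
    weeks = 104                                   # two years of weekly sales
    shutdown = np.arange(weeks) >= 52             # shutdown halfway through
    # A stronger slate of releases happens to arrive post-shutdown:
    popularity = 1.0 + 0.3 * shutdown + rng.normal(0, 0.1, weeks)
    # Popularity is the ONLY causal driver of sales in this model:
    sales = 100 * popularity + rng.normal(0, 5, weeks)

    naive_effect = sales[shutdown].mean() - sales[~shutdown].mean()
    print(f"naive before/after 'shutdown effect': {naive_effect:+.1f} units/week")
    # Prints a large positive "effect" even though, by construction,
    # the shutdown has zero causal impact on sales.

Nothing here shows the paper made this exact mistake, but it is why you have to rule out the other factors before ascribing the change to the shutdown.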
Not disagreeing, but whenever a study is posted where the funding comes from a source with an obvious interest in the outcome, there is always a comment like this one pointing out the conflict.
I understand providing information beyond that would likely be very difficult, but having never been in a similar research situation, could someone outline what this influence is like? Is there an underlying pretense during the research that they have to reach a specific outcome? Is it more subtle/obvious? Any additional insight would be appreciated.
I became suspicious because the writing in this paper was padded with superfluous material, yet the conclusions were very strongly worded. Here is an example from the very first page:
> for each additional 1% pre-shutdown Megaupload penetration, the post-shutdown sales unit change was 2.5% to 3.8% higher, suggesting that these increases are a causal effect of the shutdown.
This does not suggest causality. You need more rigorous inductive reasoning, and randomized trials, if you want to conclude causality.
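As a sketch of the problem (hypothetical numbers, not the paper's data): suppose a hidden factor such as broadband growth drives both pre-shutdown Megaupload penetration and post-shutdown sales growth. A regression of one on the other then shows a strong relationship with no causal link at all:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50                                         # hypothetical countries
    broadband = rng.uniform(0.2, 0.9, n)           # hidden confounder
    # Both variables are driven by broadband, not by each other:
    penetration = 0.10 * broadband + rng.normal(0, 0.01, n)
    sales_change = 0.30 * broadband + rng.normal(0, 0.03, n)

    slope, intercept = np.polyfit(penetration, sales_change, 1)
    print(f"regression slope of sales change on penetration: {slope:.2f}")
    # The slope comes out large and positive, yet penetration has no
    # causal path to sales in this model -- correlation, not causation.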
> Edit: Just found out that SSRN is not even a peer-reviewed journal!
It's not really a journal at all. It is a repository for papers in the social sciences, kind of like arXiv is for some other fields.
Most of their target audience is well aware of this, but it might seem confusing if you're coming from outside academia. They're not trying to pretend they're something they're not. Putting pre-prints of papers on SSRN is considered good practice.
SSRN is actually a pretty important source for open access research.
> Just found out that SSRN is not even a peer-reviewed journal!
Nobody claimed it was. It's a platform for sharing preprints/working papers, like arXiv. Obviously you don't like this study or its conclusions much, but throwing the kitchen sink at it doesn't strengthen your argument either.
It is meant to be a scientific study. What's the use of an unsubstantiated claim? In fact, it's been proven to be highly counterproductive (at least in other fields), as others go on to build works and further claims around it, only to later discover the errors and the depth of the uncertainties.
Using the same principle, I could posit (probably quite defensibly) that the amount of alleged copyright infringement on Megaupload Limited's limited-access and centralized services would be dwarfed by that from BitTorrent and other P2P methods.
> You need more rigorous inductive reasoning, and randomized trials
This is usually impossible in social science research studying real-world phenomena that can't be reproduced in the laboratory. It's a cargo cult attitude to think all science must look like an FDA drug application. Part of science is making do with the information you can collect.
I think you have it backwards: economics is for the most part a cargo cult science because it pretends it's OK not to test hypotheses. Many people thought phrenology was also scientific, and they measured things and made predictions. However, if you apply the rules of Feynman's lecture, they both fail.
Self-censoring is, in my view, an important but largely ignored phenomenon that should be researched more.
There is some evidence that people stay away from research that is considered controversial by the public [1]. In my own experience (I'm a biologist), a lot of biologists shy away from public debates on more controversial topics like GMOs or creationism out of fear of being associated with the fringe elements participating, or are scared of losing their reputation.
There was a Nature survey in which 15% of 3,247 scientists confessed to having changed their study design, methodology or findings because of pressure from a funding agency [2].
More indirectly, this study examined research on a fat substitute from Procter & Gamble and found that "supportive authors were significantly more likely than critical or neutral authors to have financial relationships with P&G" [3].
These are just glimpses; from my own experience, I would say the problem is larger than currently described.
Edit: There is little to no research on what happens directly with the scientists in a situation in which funding influences findings. Does the researcher self-censor out of fear of losing funding? Is there a slightly threatening phone call from the funding agency? No one really knows!
In my experience, a lot of biologists avoid debates on creationism because it is not a scientific theory. Getting into a debate as a scientist would suggest there is something to discuss scientifically, and there is not. Richard Dawkins has voiced this publicly many times.
Furthermore, and more importantly, avoiding public debate is very different from changing the results of your research. If you are implying that some people have results that support creationism and keep it to themselves, I would like to hear more about this. I cannot even imagine how you would conduct scientific research on creationism.
Another point: you are using the word "confess", which has negative connotations. It is perfectly normal for a funding agency to voice its concerns on, e.g., the design of an experiment, if it can think of an improvement. There is nothing wrong with this; there is nothing to "confess". In some cases the agency might be altering the design to game the results. That is wrong, of course, but your phrasing is too general.
If no one really knows, then maybe nothing substantial is happening. A "slightly threatening phone call from the funding agency" sounds near paranoid. And again, your wording is way too general. For example, it would include NASA (as the funding agency) asking a laboratory that is designing a detector to make modifications to allow higher resolution at higher energies.
Actually, it's pretty well known how it can happen.
It's fairly easy for people to rationalize not publishing a null result that is unfavorable to their funding source when null results often go unpublished anyway [1]. In aggregate, however, this means that studies showing results unfavorable to a funder are less likely to see publication.
Imagine you're testing the efficacy of a new drug that has a neutral or close-to-neutral effect (to keep things simple), and there are n labs testing it. You might expect a normal distribution of measured effects around a mean of zero (in whatever quantity we're measuring). However, the groups that didn't find any effect decide not to publish their null results, citing some minor technicality or responding to subtle pressure (e.g., "are you sure you don't want to repeat this experiment?").
Left as an exercise: if you look only at what got published, what do you expect to see?
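A quick sketch of the answer (made-up parameters throughout): simulate many labs studying a drug with zero true effect, suppress the unfavorable results, and look at what "the literature" then shows:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_labs, n_subjects = 500, 30
    all_effects, published = [], []
    for _ in range(n_labs):
        sample = rng.normal(0.0, 1.0, n_subjects)   # true effect is zero
        t, p = stats.ttest_1samp(sample, 0.0)
        all_effects.append(sample.mean())
        if p < 0.05 and sample.mean() > 0:          # only favorable results survive
            published.append(sample.mean())

    print(f"mean effect across all labs:  {np.mean(all_effects):+.3f}")
    print(f"mean effect, published only:  {np.mean(published):+.3f}")
    # All labs together average ~0; the published subset shows a
    # spuriously large positive effect -- publication bias in miniature.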
That being said, there's obviously more than just your garden variety publication bias and conflict of interest at play here.
If you expect or are looking for specific results, you are likely to unconsciously manipulate the experiment in order to find them. And when big corporations are involved, unconscious manipulation is often the smaller problem.
Just for the record, there was previously a study saying pretty much the opposite. Then this study appeared and was trumpeted loudly in the press by MPAA affiliates.
For me, it was "statistically significant" that lowered their credibility. Oh yeah? Under what p-value?
The data is hard enough to analyse, we don't need misleading statistics on top of that. (To clarify my point: "statistical significance" is indicative of frequentist statistics. While the changes in the frequency properties of sales prior to and after the shutdown are relevant information, they are probably not the best way to use available information.)
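To make the complaint concrete, a toy example (all numbers assumed): the same data can come out "statistically significant" or not depending entirely on an unstated threshold, which is why the bare label means little:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    pre = rng.normal(100, 10, 40)    # hypothetical weekly sales before shutdown
    post = rng.normal(104, 10, 40)   # hypothetical weekly sales after shutdown

    t, p = stats.ttest_ind(post, pre)
    print(f"p = {p:.3f}")
    for alpha in (0.10, 0.05, 0.01):
        verdict = "significant" if p < alpha else "not significant"
        print(f"  at alpha = {alpha}: {verdict}")
    # "Statistically significant" without reporting p (or alpha) tells
    # you almost nothing about the strength of the evidence.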