Art Makes You Smart (nytimes.com)
80 points by wallflower on Nov 24, 2013 | 16 comments



This is a reasonable preliminary study, but the conclusions drawn from it are not justified. Some students were told they were special, then got to leave their (possibly poorly performing and depressing) schools and spend considerable time in a pleasant environment, personally mentored by experts in their field.

Suppose we replaced "museum" with "biology lab", "baseball camp", "robotics research", "skateboarding lessons", or any of a number of other things where students are specially selected, taken to an interesting new location, and tutored by experts in small groups. Would we get similar results, or is exposure to art in particular responsible for these results? This study does not determine that, and it would need to before such dramatic claims could be made.

Also, the way they assessed critical thinking skills here is questionable. Students went to the museum, where art was discussed in a particular way: symbols, analysis of paintings, and so forth. Then both groups, those who went and those who didn't, were asked to write an essay about a painting. The students who had been to the museum and heard people talk about paintings in the way the "good" essays were scored did better than those who had never been shown how people are expected to talk about art in an academic context. That is not a measure of critical thinking skills at all; it is a measure of who went to the museum and listened.

To measure improvements in critical thinking, the essay topic should not have been so directly linked to the content of the field trips, because then the test also measures exposure to that content (art criticism and analysis) and its style. To return to the baseball comparison: it would be like sending kids to baseball camp, then asking both the campers and the non-campers to write an essay about promising 2014 rookie players, and claiming that the campers' better baseball essays were evidence the camp produced gains in general critical thinking skills. Maybe it did, but a test so tightly tied to subject matter that one group was exposed to and the other was not is not a reasonable way to assess general critical thinking skills.


Let's just take a moment to savor a basic fact: it was a randomized experiment. That alone puts this article above like 90% of the science news one reads.


But, like, isn't that being a bit 'cargo cult' about the fact the experiment was randomised?

If the control isn't otherwise similar enough to isolate the hypothesis that we actually want to test (e.g. 'art makes you smart'), then it might as well not be randomised, with respect to that specific hypothesis?

(I'm not commenting on the original research with which I'm not familiar - just on this specific point.)


> If the control isn't otherwise similar enough to isolate the hypothesis that we actually want to test (e.g. 'art makes you smart'), then it might as well not be randomised, with respect to that specific hypothesis?

No, the randomization still supports causal inference; it just means the identified cause is a mix of potential causes like the direct effect of the art and expectancy effects. If one wants to make the claim that the net effect _d_=0.09 (IIRC) was purely the work of the art/tutoring and nothing else, that would only be partially supported by the results, yeah.
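To put a number on that intuition, here's a toy simulation (Python; the effect sizes are made up for illustration, not taken from the study). Randomization balances the groups on baseline ability, so the difference in means recovers the total treatment effect, but that total is still a bundle of the art exposure and everything else that came with the trip:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical components of the treatment effect (illustrative only)
    art_effect = 0.05         # direct effect of the art content
    expectancy_effect = 0.04  # field-trip / "you're special" effect

    treated = rng.integers(0, 2, n)   # randomized assignment
    ability = rng.normal(0, 1, n)     # baseline ability, balanced by randomization
    score = ability + treated * (art_effect + expectancy_effect)

    est = score[treated == 1].mean() - score[treated == 0].mean()
    print(round(est, 3))  # ~0.09: the *total* effect; the components are inseparable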


>If one wants to make the claim that the net effect _d_=0.09 (IIRC) was purely the work of the art/tutoring and nothing else, that would only be partially supported by the results, yeah.

To put that another way: if one wants to make the claim that the effect was actually causally related to the art/tutoring at all - as opposed to the effect of just going on a field trip, expectancy effects, etc. - then this experiment isn't useful.

It's interesting, though, that you phrase it as 'partially supported by the results'.

Would you therefore say that the results of a non-randomised trial also 'partially support' the hypothesis under test?

Let's say that we didn't have randomised assignment; that instead we simply identified existing populations whose parents had brought them on one outing last semester, contrasted those who had visited an art museum with those who had visited a science museum, and administered our tests to the two groups.

We could probably do some research with our data, but we'd always be worried that we couldn't account for selection-like biases (e.g. maybe the kind of children whose parents choose art museums over science museums are already smarter).
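To make that worry concrete, a similar toy sketch (again Python, with invented numbers): if the children who end up in the art-museum group are already stronger, a naive comparison produces a sizeable "effect" even when the visit does nothing at all.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    ability = rng.normal(0, 1, n)
    # Hypothetical selection: higher-ability kids are more likely to be taken to art museums
    p_art = 1 / (1 + np.exp(-ability))
    art_group = rng.random(n) < p_art

    true_effect = 0.0                          # suppose the visit does nothing
    score = ability + art_group * true_effect

    naive = score[art_group].mean() - score[~art_group].mean()
    print(round(naive, 3))  # a large positive "effect" driven purely by who selects in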

I understand that, in the strictest Bayesian sense, any data that could have directly rejected a hypothesis but didn't 'partially supports' the hypothesis.

But I'm not sure that, in practice, it's meaningful to argue about which of the two methodological problems is generally the bigger issue.

That's where my comment is coming from.

If the hypothesis is as described in the news headline (which may not have been how the original study was motivated, and which may be what you are more interested in, hence our disconnect), then isn't one problem as big as the other? Isn't [potential selection bias etc.] just as bad as [potential expectancy effects etc.]?

Wouldn't it hence be a little misleading to say "well, yeah, they didn't control properly, but savour the fact they had random assignment"?


> I understand that, in the strictest Bayesian sense, any data that could have directly rejected a hypothesis but didn't 'partially supports' the hypothesis.

> But I'm not sure that, in practice, it's meaningful to argue about which of the two methodological problems is generally the bigger issue.

I'd disagree there: you can do more than simply throw up your hands and say correlation != causation. Some of the work stemming from Pearl's causality formalism concerns the conditions under which one can make causal inferences even without an explicit randomization step. One can also attack it from another direction, by compiling correlations that were later examined with randomization and estimating how many of them turned out to be causation in the same direction (the numbers tend to look like ~10%) and how much is due to other factors.

> Isn't [potential selection bias etc.] just as bad as [potential expectancy effects etc.]?

No. Expectancy effects can be manipulated and measured, and one could arguably then adjust for them in other results where expectancy is mixed in. It's a fact about human psychology, as measurable as anything else there. Selection effects are too wild and unpredictable for any such adjustment.


Spot-on analysis, thanks for this. I would add that the fact that the researchers came to this conclusion suggests they let a good deal of bias creep into the study.


Are you criticizing the study? It appears that you have conflated the study proper with a journalist's lay interpretation of the study.

If you were criticizing the study proper, could you provide a link to it?


The claimed benefits resulting from the tours:

>The surveys included multiple items that assessed knowledge about art, as well as measures of tolerance, historical empathy and sustained interest in visiting art museums and other cultural institutions. We also asked them to write an essay in response to a work of art that was unfamiliar to them. These essays were then coded using a critical-thinking-skills assessment program developed by researchers working with the Isabella Stewart Gardner Museum in Boston.

They've demonstrated that exposing children to art makes them care more about art. Given that, the title of the article really seems like a stretch.


The link in the article has more detail[0], which includes an expanded "critical thinking" section. Also from the comments on that page, from one of the authors: "...separate articles focused on each of the outcomes described here are currently under review at journals. This is a summary of the combined results".

[0] http://educationnext.org/the-educational-value-of-field-trip...


Thanks. I must have missed that link. The benefits as described there seem a lot more impressive.


Nice study to use as an example in statistics teaching, shades of the Lanarkshire Milk Experiment, though the design here seems to preclude that. It's not clear whether the pupils within each school were selected individually (through student ID numbers or similar) or left to school groups.

PS: The Isabella Stewart Gardner Museum has regular classical music concerts that are downloadable as mp3s, classified by composer. Very useful resource as they are CC licensed.


Thank you for posting the ISG Museum info. I have been looking for something like this.

http://www.gardnermuseum.org/music/listen/music_library


The value of learning the arts doesn't need quantification, though of course a study that checks it is welcome. Art is a unique field, as old as humanity itself; I wonder what motivated the cave painters. There is some deep connection between being human and making art. I don't think a small study in which children decide that museums are cool is what validates the place of the arts in life.


And this month's winner of Contingencies' Catch-22 Job Title of the Month is ... drumroll please ... Professor of Education Reform.


> Students in the treatment group were 18 percent more likely to attend the exhibit than students in the control group.

Interesting definition of being smart :-)



