In his blog post Predicting the Popularity of Obvious Methods, Bryan suggests that social scientists are more likely to pursue non-obvious methods when the obvious methods don’t provide the answer they like. In the spirit of his post, the use of non-obvious or overly sophisticated methods can signal that the researcher kept trying until they got the “right” answer. A skeptical reader might view this as a warning sign about the research.

Most people do not consume research firsthand. Instead, it is filtered through the media. Journalists also have preferences over answers. If a newspaper doesn’t like the answer a piece of research provides, it has wide latitude to ignore that research. This means one can infer the media’s preferences from the research they do cite. Instead of sophisticated methods being the “tell,” journalists reveal their preferences by citing weak research.

To support their preferences for certain questions or certain answers, journalists might discuss research published in lower-quality journals. Or they might discuss research with inferior methods. In medicine, randomized controlled trials (RCTs) are accepted as better than observational studies. And within observational studies, it is agreed that larger samples are preferred to smaller ones. A recent paper compares articles that get covered in the leading newspapers to articles published in the best medical journals. It finds that newspapers are more likely to discuss observational studies than RCTs, and that the observational studies they cover have smaller sample sizes than the observational studies published in the best medical journals. The paper suggests that the press skews reporting away from the most reliable research. (Given that this paper is not a million-subject RCT published in the New England Journal of Medicine, you should infer that I “like” its conclusion.)

An article posted Friday in The Atlantic brought this topic to mind. It described how researchers took 37 girls and randomly assigned them to play with one of three dolls. The dolls were a Barbie doll, a “doctor” Barbie doll, and a Mrs. Potato Head. Each of the girls played with their assigned doll for five minutes. The girls then answered a short survey about what types of jobs they could do when they grew up. The girls who were assigned Barbie dolls saw fewer jobs open to them.

Halfway through, The Atlantic article reads:

The paper has a few limitations: The sample size was small, as was the effect size. Still, it’s … icky. Why does a plastic spud make your daughter more likely to think she can be a scientist than an actual scientist doll does?

That would be a great question, if it were prompted by a study with more than 37 girls. I wondered whether other media outlets would downplay the weaknesses of the research methodology. Interestingly, the LA Times put the weaknesses, including the five-minute play period, right in its very first paragraph, which states:

After spending just five minutes with Jane Potato-Head, girls believed they could grow up to do pretty much anything a boy could do.