A study has just been published that comes to the astonishing conclusion that men like to date pretty girls.
My first response was to file this under "like, duh". Then I read the news reports a little further, and I discovered that, while I don't doubt this study's conclusions for a moment, it was so poorly designed that it failed to corroborate even such an easy thesis. This study was so biased that the conclusion was clearly reached before the research began. It's a good study to ridicule precisely because I have no axe to grind here except with the methods. I don't challenge the conclusions: I'm quite sure they're correct. But this study used some of the same techniques as other biased studies, and so it makes a good object lesson.
Because here's the catch. The way they did this study was by arranging for men and women to meet each other for, quote, three to seven minutes. Then they asked the participants to pick which of the people they had met they would like to date.
So let's think about this carefully. Researchers arrange for two people to talk for three minutes. Then they ask them to rate the "romantic desirability" of the other person, and why they rated them high or low. And, they are astonished to discover, people tend to give shallow reasons, like physical appearance.
What other possible criteria could someone have for rating another person after meeting for three minutes? If a friend told me that after speaking to a woman for three minutes, he was convinced that this was the woman he should marry because they had so much in common, because she understood him so well, and because they shared fundamental social and moral beliefs ... I'm sure I'd smile knowingly and say to myself, Yeah, she was pretty and he was smitten.
Let's get real now. What could you possibly learn about another human being in three minutes of conversation? I suppose you might get some clue about her intelligence level, depending on whether she can speak coherently or is a babbling idiot. Though if she is inarticulate, maybe that's just because she is nervous about trying to make a good impression on this three-minute date. You might get a vague idea whether she is pleasant or rude. If she screams insults and obscenities at you in those three minutes, that would be a bad sign. But I'd guess most people manage to be nice and pleasant for a three-minute date, so good behavior would prove almost nothing. Even if you tried to quiz her on her priorities in life, her likes and dislikes, or her most profound religious or moral beliefs, how far could you get in three minutes?
So do you see the setup? They create a situation where a man could learn only the most superficial things about a woman, almost nothing beyond how pretty she is. Then they ask him to draw conclusions about her, and pretend to be surprised when the only conclusions he draws are superficial ones. Like, duh.
This is a classic method for biased studies. Often some group with a political or social agenda wants to prove that some other group they dislike, or people in general, will draw unjustified conclusions based on limited information. They want to convince people that the disliked group is a bunch of bigots or is too dumb to comprehend complex issues. So they do a set-up study: They get some people from the targeted group, give them limited information, ask them to draw conclusions, and then ridicule them for drawing conclusions based on such limited information. Many people will fall for the setup. If someone asks you to participate in a study, you tend to play along. So when they ask you to come to some conclusion based on inadequate information, you make your best guess. And thus the victims play into the hands of the people with the agenda. Even if some or many of the victims catch on and refuse to play the game, that's no problem. You just carefully record the "no answers", and then don't include them in your published results. Like, if 3 men said they picked the girl who was prettiest, 1 said he picked the girl who sounded most intelligent, and 20 said they couldn't possibly choose based on such a short meeting, you simply report something like, "Of those who made a choice, 75% decided on appearance and 25% on intelligence." After all, if they didn't answer the question, how can you count them?
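To make that last trick concrete, here is a minimal Python sketch of it. The numbers are the made-up ones from my example above, not data from any real study; the point is that the honest summary and the published-sounding one come from exactly the same raw answers and differ only in the denominator.

    # Sketch of the reporting trick: tally every answer, then quietly
    # drop the refusals before computing percentages. All counts are the
    # hypothetical ones from the example above, not real data.
    responses = {
        "prettiest": 3,         # chose based on looks
        "most intelligent": 1,  # chose based on conversation
        "refused": 20,          # said the meeting was too short to choose
    }

    total = sum(responses.values())          # 24 people asked
    choosers = total - responses["refused"]  # only 4 actually chose

    # Honest summary: refusals dominate the picture.
    for answer, count in responses.items():
        print(f"{answer}: {count}/{total} = {count / total:.0%} of all participants")

    # Published summary: refusals silently excluded.
    print(f"Of those who made a choice: "
          f"{responses['prettiest'] / choosers:.0%} decided on appearance, "
          f"{responses['most intelligent'] / choosers:.0%} on intelligence.")

Run it and the "of those who made a choice" line dutifully reports 75% and 25%, while the all-participants lines show that 83% of the people asked refused to choose at all.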
I've seen this done many times in studies that supposedly prove how prejudiced people are. One study showed participants photographs of people and asked them to make guesses about their personalities, based simply on those photographs with no further information. So the participants made guesses about the pictured people's personalities. Then the researchers bemoaned how people will jump to conclusions about someone's personality based solely on their appearance. They didn't report how many people refused or were unable to guess. They didn't ask the participants whether they thought their guesses were reliable, or whether they would even have made such guesses if they had not been told to do so. So the only possible result of the study was the result that the researchers had obviously wanted when they started.
Of course this isn't the only way to do a "setup" study. The key to the general technique is to limit the possible results to those you want to get. If, say, you want to prove that Americans love ice cream, don't ask, "What is your favorite dessert?" Who knows what people will answer? Rather, you ask, "What is your favorite flavor of ice cream?" Then when you publish your results you say, "65% of Americans say they love chocolate ice cream!" If you word your announcements carefully, you can make them sound like they mean a whole lot more than the evidence really says.
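Here's the same idea as a toy Python sketch. The respondents and their "true" favorite desserts are entirely invented for illustration; the point is that a question offering only ice cream flavors, with no way to opt out, manufactures ice cream lovers out of pie people.

    # Toy version of the loaded question. The survey form lists only ice
    # cream flavors; there is no "none of the above". All data invented.
    true_favorites = ["apple pie", "chocolate", "cheesecake", "chocolate",
                      "chocolate", "vanilla", "fruit salad", "chocolate",
                      "vanilla", "chocolate"]

    flavors = {"chocolate", "vanilla", "strawberry"}

    # Forced to name a flavor, the pie and cheesecake people shrug and
    # say "chocolate" (any forced answer would do for the headline).
    recorded = [fav if fav in flavors else "chocolate" for fav in true_favorites]

    share = recorded.count("chocolate") / len(recorded)
    print(f"Headline: {share:.0%} of Americans say they love chocolate ice cream!")
    # Reality: only 5 of the 10 actually prefer chocolate ice cream,
    # and 3 prefer other desserts entirely.

The sketch prints an 80% headline even though only half the invented respondents actually favor chocolate ice cream, and three of them don't favor ice cream at all.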
I was once a subject in a study done by the Ohio State Board of Education. They had a new program they were trying to push. Maybe they realized it was controversial, because apparently they wanted to be able to say that they had sought the opinions of the community. So they did a study. They brought in a bunch of people and held a "facilitated discussion" where we were supposedly able to give our opinions about the new program. Except ... at no point did they ask whether we thought the new program was a good idea, or even how it should be implemented. Rather, all the questions and "discussion" were about trivial side issues, like what statistics should be collected and how progress in implementing it should be measured. After an hour of this, I did one of the rudest things I've ever done, but I don't regret it: As the facilitator was briefing us on the next round of meaningless questions, I stood up and asked the group, "How many of you here believe that anything we say here makes any difference, and how many think that we are here to rubber-stamp decisions that have already been made?" Apparently I wasn't the only one who was fed up with this, because the other participants applauded me. The facilitator mumbled some lame comments about "Perhaps we have not gotten your real opinions", but at that point everybody started walking out, and the meeting disintegrated.
The only defense against this technique is to carefully study the methods used, the people selected, and the questions asked. Sometimes news reports on a study will be fair enough to give you this information. Usually it's buried at the bottom of the article, but sometimes they do let you in on the trick. Often they don't give you a clue. If it's a subject that you're seriously interested in, you can try to get more information, maybe even a copy of the full report and critiques by other researchers. But let's face it, that's a lot of work. I'm a politically active person, and I rarely take the time to do that. I might seriously investigate one study a year, probably less. The average person off the street is just not going to go to that much trouble.
Perhaps the only real response is to simply ignore all politically charged studies as probably biased and therefore unreliable. But this is a depressing solution, as it writes off a whole potential source of knowledge. I wish I had a better answer.
© 2007 by Jay Johansen