Urban Legends of Science
Tuesday, March 17, 2015
Friday, August 9, 2013
Jamie Goode, writing on the Wiseman study, which numerous media reports claim showed that average people can't tell expensive wine from cheap wine:
"There is a single, crucial detail that is absent from these reports. It was not a comparison between two wines, one cheap and one expensive. Instead, subjects were given just a single wine to taste, and then asked to say whether it was cheap or expensive."
Even though it's not recent, it's a thorough write-up of another misconception that has been cemented into 'common knowledge'. Well worth the read.
John Szabo has a lengthy post at WineAlign that I missed when it was published, but that goes into some depth addressing the actual experiments behind a lot of popular claims about wine tasting/reviewing. It's well-written and a good read.
I do take issue with his take on Robert Hodgson and his work (presented recently, in predictably overblown style, in the Guardian) showing that wine ratings at the California State Fair are inconsistent. Szabo seems rather upset by the accusation (and in fact makes a mistake of his own, reporting that the experiment ran for only one year) and spends quite a bit of time listing all of the factors that go into tasting a wine ("Bottle variation, cleanliness of glassware, temperature, order of wines, or how long the bottle had been open, not to mention taster fatigue or health...").
I think this is exactly the opposite of the point he makes throughout the rest of the article. He wants to argue that wine tasting/scoring is in fact legitimate, but then claims that so many factors come into play that no one can expect consistency. That undermines his central thesis.
If we expect the public to believe that there is legitimacy to wine tasting/scoring (and there is), the goal can't be to explain away all the problems. The goal *also* has to be to acknowledge the real problems with the system and fix them. Wine competitions are one of the big public spectacles in wine tasting, and if they are being run in a fashion that produces completely inconsistent results, waving that away as immaterial isn't the correct answer. Reworking them so that they demonstrate to the public that wine scoring is a real thing is the answer.
Wednesday, June 26, 2013
"But then Gonzales goes on to cite a piece written by disgraced science journalist Jonah Lehrer to back up his argument. And that's when I start detecting hints of bologna."
Couldn't agree more, Michaeleen!
Well, it's poking its head out again!
David Derbyshire, over at the Guardian, has written an article on wine tasting and (more specifically) wine judging. Overall, it's a respectable article, but he manages to once again repeat the story about 'wine experts' not being able to tell the difference between a white and red wine.
And to make matters worse, Fiona Beckett (in a blog post reply to the article) makes a casual reference to it as well.
The story keeps going. All we can do is try to correct people one at a time.
Friday, May 10, 2013
About That Wine Experiment
There’s a story making the rounds again, concerning wine and wine professionals. Chances are you’ve heard it before. Told the shortest way, it’s this: Even so-called ‘wine experts’ can’t tell the difference between red and white wine by taste.
A fuller version is: “An experimenter gave a group of wine experts a red wine and a white wine and had them rate the wines. What they didn’t know is that the ‘red’ wine was the same white wine that was in the other glass, just dyed red. None of them could tell the difference, and they described the red wine as ‘jammy’ and so forth.”
The purported lesson is that wine rankings are complete bunk (or, if the writer is being more generous, that we are fooled by our expectations and perceptions). It's been featured in all kinds of prominent media, like The Atlantic and The New Yorker, and it's hurtling through cyberspace right now because it's included in an excerpt from David McRaney's new book. It's been a popular topic for Jonah Lehrer, who included it in widely linked pieces written in 2007, 2011, and 2012, as well as in his book [no big surprise from that serial self-plagiarizer, I guess?].
The problem, simply, is that the study never showed that.
The study, conducted by Frédéric Brochet in 2001, was part of a larger dissertation on perception and wine tasting, which largely consisted of lexical analysis of published wine reviews using computer software. His work only garnered attention after he submitted it to a wine industry research competition where it won a runner-up prize and the ‘dyed wine’ experiment got picked up by the press.
There are two levels to the inaccuracy of the popular story. The first is that several of the details that have been routinely reported are simply incorrect, having been copied from one article to another. So let’s break them down.
- The most important is that the subjects in this experiment were not, in fact, 'wine experts'. They were undergraduate enology students. They are probably more knowledgeable about wine than the average person, but they were not in any way 'experts', or even 'professionals'.
- It’s simply not true that “Every single one, all 54, could not tell it was white.” as is frequently stated. Even Brochet doesn’t claim that, saying “About 2 or 3 per cent of people detect the white wine flavour”, and the paper that is frequently cited shows that indeed some people gave ‘white wine’ descriptions to the dyed-red wine.
But even more important than those errors, the study never demonstrated that people can’t tell red wine from white.
So let’s actually look at what the study did. As near as I can tell, the TASTING study has never been published in a peer-reviewed journal (which itself makes me suspicious of the findings), but the olfactory (smell) component of the study was published in 2001, allowing us to look into the actual procedures. The undergraduate subjects came into the lab one week and were given a glass of a red wine and a glass of a white wine. (Both were Bordeaux, but the experimental details do not include any label or vintage, so we are unable to judge them.) They were supplied with a list of potential descriptive words, and told to make a list of words and phrases that best described each wine, either from the supplied list or in their own words. The following week they returned to the lab for another session. They were presented with two glasses, one containing white wine, and the other containing the same wine dyed red. They were then given the list of descriptors that they had used to describe the wines the previous week, and asked to choose which of the wines in front of them best represented each descriptor. It was a forced-choice setup.
So they were given two identical white wines, different only in color, and asked to assign them descriptors from both red and white wines. With no other difference between them, and being required to assign the red descriptors to one of the wines, most people assigned them to the red-colored wine instead of randomly. This may show that people have a preconceived notion of what red wines should smell like, but it doesn’t come close to showing that people can’t tell red from white once you change the color. A much simpler design would have been to run the second week exactly like the first, seeing what descriptors people would have assigned to the dyed wine without any prompting. That is NOT what this study did, even though that is the popular understanding.
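To see why the forced-choice design can't demonstrate an inability to tell red from white, here is a toy simulation (all numbers are illustrative assumptions, not figures from Brochet's study). Every simulated taster perceives the two glasses identically, yet because each red descriptor must be pinned to one glass, color becomes the tiebreaker and the tallies end up looking like "everyone described the dyed wine as red":

```python
import random

def forced_choice_assignment(n_tasters=54, red_descriptors=5,
                             color_bias=0.9, seed=0):
    """Toy model of the second-week session. Each taster perceives NO
    taste difference between the two glasses (both hold the same white
    wine), but is forced to assign every red-wine descriptor ('jammy',
    etc.) to one glass. With nothing else to go on, color is the only
    cue, so each descriptor lands on the red-looking glass with
    probability `color_bias` (a made-up illustrative number).
    Returns the fraction of red descriptors assigned to the dyed glass."""
    rng = random.Random(seed)
    to_red_glass = 0
    total = n_tasters * red_descriptors
    for _ in range(total):
        if rng.random() < color_bias:
            to_red_glass += 1
    return to_red_glass / total

# The aggregate result looks damning ("they all called the dyed wine
# jammy!") even though, by construction, not a single taster detected
# any flavor difference between the glasses.
print(forced_choice_assignment())
```

The point of the sketch: a lopsided assignment of red descriptors to the dyed glass is exactly what you'd expect from the forced-choice design alone, so it tells you nothing about whether the tasters could have identified the wine as white if simply asked to describe it.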
So who’s to blame for this urban legend? Do we blame Brochet, who was making a name for himself and clearly believes that the wine industry is full of itself? Do we blame Lehrer and his ilk for continually plugging a story they never bothered to research? Do we blame all of the bloggers who simply repeat the story and link to the same text? The editors and publishers who allow it to go into books without double-checking it? We blame all of them, and I think we also blame the science-as-pop-culture mindset behind Lehrer, McRaney, and Malcolm Gladwell, where a cute, pithy anecdote that makes a point travels farther and has more influence than a properly reported study. I don’t think any of them are bad people, or that they are intentionally misleading readers, but they are intelligent, educated people who hear a story and spread it around, sometimes without checking its veracity. And I think we can expect more from them.
Look, I understand why the story has such strong legs. We love a story that shows that ‘experts’ have no idea what they’re talking about, that shows that they can be easily fooled, and that deflates the perceived pretentiousness of something like wine. It’s a good story. But it isn’t true. So let’s try to set the record straight. Let’s go all Snopes on this story’s ass. Put the link to this any time you see someone claim that crap about ‘even experts can’t tell red wine from white’.