Here's a question from "The Quiz Daniel Kahneman Wants You to Fail," the sidebar to Michael Lewis's review in Vanity Fair of Nobel-winning psychologist Daniel Kahneman's Thinking, Fast and Slow:
2. A team of psychologists performed personality tests on 100 professionals, of which 30 were engineers and 70 were lawyers. Brief descriptions were written for each subject. The following is a sample of one of the resulting descriptions:
Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing, and mathematics.
What is the probability that Jack is one of the 30 engineers?
A. 10–40 percent
B. 40–60 percent
C. 60–80 percent
D. 80–100 percent
Here's the explanation given:
If you answered anything but A (the correct response being precisely 30 percent), you have fallen victim to the representativeness heuristic again, despite having just read about it. When Kahneman and Tversky performed this experiment, they found that a large percentage of participants overestimated the likelihood that Jack was an engineer, even though mathematically, there was only a 30-in-100 chance of that being true. This proclivity for attaching ourselves to rich details, especially ones that we believe are typical of a certain kind of person (i.e., all engineers must spend every weekend doing math puzzles), is yet another shortcoming of the hyper-efficient System 1.
Huh?
Let's add some more of those rich details:
Jack has a B.S. degree from Purdue. At work, Jack wears a short-sleeve button-front shirt with a pocket protector full of mechanical pencils, just like most of Jack's coworkers on his floor. Jack always wears a tie clasp to keep his necktie from getting smudged by the blueprints when he leans over a drafting table. Jack's favorite line from Shakespeare is, "The first thing we do, let's kill all the lawyers." In fact, that's the only line from Shakespeare he knows. Jack wanted to name his firstborn son Kirk Spock, but his wife wouldn't let him.
So the percentage chance of Jack being an engineer is still "precisely 30 percent"?
I think one of the most widely overlooked cognitive flaws in the media is assuming that ignorance is smart, that scientists have proven that not noticing human patterns shows you have a high IQ (not that there's any such thing as IQ!).
I imagine that this sidebar wasn't made up by Kahneman or Lewis, but by some intern at Vanity Fair.
But, let me explain the fundamental flaw in Kahneman's underlying reasoning on this topic, and why it can mislead Vanity Fair staffers into thinking it validates their Jihad Against Prejudice. From Lewis's article:
It didn’t take me long to figure out that, in a not so roundabout way, Kahneman and Tversky had made my baseball story [Moneyball] possible. In a collaboration that lasted 15 years and involved an extraordinary number of strange and inventive experiments, they had demonstrated how essentially irrational human beings can be. In 1983—to take just one of dozens of examples—they had created a brief description of an imaginary character they named “Linda.” “Linda is thirty-one years old, single, outspoken, and very bright,” they wrote. “She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.”
Then they went around asking people the same question:
Which alternative is more probable?
(1) Linda is a bank teller.
(2) Linda is a bank teller and is active in the feminist movement.
The vast majority—roughly 85 percent—of the people they asked opted for No. 2, even though No. 2 is logically impossible. (If No. 2 is true, so is No. 1.) The human mind is so wedded to stereotypes and so distracted by vivid descriptions that it will seize upon them, even when they defy logic, rather than upon truly relevant facts. Kahneman and Tversky called this logical error the “conjunction fallacy.”
Of course, Jack and Linda don't, actually, exist. They were made up by K&T. Now, most people don't read about other people, real or fictional, in the context of psychology experiments where the professors are attempting to pull the wool over their eyes. They read about other people in novels, journalism, history and so forth where writers try to select details to communicate larger, more interesting points. So, they've gotten pretty good at figuring out what larger message the author is trying to communicate by selecting details. As a commenter says, it's Chekhov's Gun: If Jack cleans his gun in Act I, you better believe his gun is going to go off at some point in the play.
So, the point is that Kahneman and Tversky went to the trouble of telling their subjects these specific details. The subjects didn't observe these details, they read them in a piece of prose that K&T crafted. So their subjects assumed that Kahneman and Tversky weren't tossing in random details to yank their chains and waste everybody's time. Subjects assumed good faith on the part of the professors. If a novelist gives you a bunch of details about a character, which is what Kahneman and Tversky were imitating, the novelist isn't going to throw in random details. But, of course, time-wasting and chain-yanking were exactly what K&T were trying to do.
Possibly Kahneman is suffering from abstraction intoxication. (http://alrenous.blogspot.com/2011/08/nosology-of-human-thought.html)
Or maybe the journalist just screwed up.
I thumbed through the book at B&N and it seems to have a lot of stuff in there like that. Maybe what he says is technically true from a certain perspective, but it's not like we are grabbing a bunch of balls in a bucket while blindfolded and being asked what the chance is that it is a blue ball (when, say, 30% of the balls are blue). In this case, the details matter. Or so it seems to non-technical me.
This sounds like something out of Malcolm Gladwell.
I'm sorry, but I placed it in the 80-100% category, the reason being that while I know of instances where there are exceptions to the rule, there is a difference between "technically minded people" (and their pursuits) and "socially minded people" (and their pursuits).
As Steve Jobs said (or so I read somewhere), genius is standing at the intersection of the sciences and the humanities.
I'd say Lewis suffers from availability heuristic representativeness bias schizophrenia, actually. "Moneyball" though mostly enjoyable reads like the imaginings of a casual baseball follower (watches WS or All-Star game some years). In his telling Oakland wasn't home of one of the most prominent teams of the late 80s but rather more a Kansas City By the Bay. What are the odds that a (globe-trotting) (media-celebrated) good old boy from New Orleans is the son of a (politically connected) (white-shoe lawyer) professional man and a (rich socialite) local woman? Approaching 100% methinks
Huh. Just spent half an hour trying to figure out how to post a comment, contact the author, or even contact the magazine at all for anything other than wanting to buy a subscription. Instead I found crap on the contact page like "[a href="mailto:subscriptions@vf.vom" target="_blank"]subscriptions@vf.com[/a]" in the source code.
(Angle brackets replaced with square brackets so code is readable.)
Yeah, I think I'm going to blame incompetence at the magazine, because Kahneman overall has been fairly solid elsewhere.
The journalist screwed up. The explanatory note in the article is simply wrong (it supposes that the description provides *no* information); but the point of examples of this kind is to underline the importance of 'base rates' ('priors'). You can work out the math from Bayes' Theorem.
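To make the base-rate point concrete, here's a minimal sketch of the Bayes' Theorem calculation. The likelihood-ratio values below are hypothetical, purely for illustration; the experiment never specifies them:

```python
def p_engineer(prior_eng, likelihood_ratio):
    """Posterior probability that Jack is an engineer, given his description.

    likelihood_ratio = P(description | engineer) / P(description | lawyer).
    """
    prior_law = 1.0 - prior_eng
    return (prior_eng * likelihood_ratio) / (
        prior_eng * likelihood_ratio + prior_law
    )

# If the description is equally likely for either profession (ratio = 1),
# the 30% base rate survives untouched:
print(round(p_engineer(0.30, 1.0), 4))  # 0.3

# If the details are, say, three times as likely for an engineer,
# the posterior climbs well above the base rate:
print(round(p_engineer(0.30, 3.0), 4))  # 0.5625
```

So "precisely 30 percent" is only correct if the description carries zero information about Jack's profession, which is exactly what the commenters below are disputing.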
The question is misleading because they don't tell you that the sample of Jack was selected randomly from the list of 100 professionals.
Some are born stupid, some achieve stupidity, and some have stupidity thrust upon them.
If you answered anything but A (the correct response being precisely 30 percent), you have fallen victim to the representativeness heuristic again, despite having just read about it.
The correct response would be precisely 30 percent if all you knew was that one sample out of a set of 100 samples including 30 engineers and 70 lawyers was selected.
Even if you knew just his name it could affect the correct response. For example, perhaps it's the case that people with the name "Jack" tend to become engineers more than lawyers.
Kahneman chose the descriptions to be roughly equally likely if you knew Jack was a lawyer and Jack was an engineer. (I personally suspect P(politics|lawyer)>P(politics|engineer), though, but that means it's less likely Jack is an engineer.) The details SEEM rich but aren't.
The examples you gave are actually rich, in that some are much more likely if Jack is an engineer than if Jack is a lawyer.
The main effect is "base rate neglect": using only the last piece of information they got. A better example is medical testing: suppose someone's symptoms suggest a 1 in 1000 chance they have disease X. You give them a 99% reliable test for disease X (as in, a 1% chance it gives the wrong result either way), and it comes back positive. What's the chance they have the disease?
The common answer (even among doctors) is 99%, neglecting the base rate in the population. The actual answer is 9%. (It's about ten times more likely that they don't have the disease and the test was wrong than that they have the disease and the test was right.)
Amusingly, if I remember correctly, racism is one of the few examples where people reliably take base rates into account (though their perceptions of base rates may be off due to limited experience).
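The commenter's medical-test numbers check out under Bayes' rule; a quick sketch, using the rates given in the comment above:

```python
prior = 0.001           # 1-in-1000 chance of disease X, from the symptoms
true_positive = 0.99    # P(test positive | disease)
false_positive = 0.01   # P(test positive | no disease)

# Total probability of a positive result, sick or not:
p_positive = true_positive * prior + false_positive * (1 - prior)

# Posterior probability of disease given a positive test:
posterior = true_positive * prior / p_positive
print(round(posterior, 3))  # 0.09, i.e. about 9%, not 99%
```

The false positives from the 999-in-1000 healthy majority swamp the true positives from the tiny sick minority, which is why ignoring the base rate is so costly here.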
Erik referenced 'abstraction intoxication,' which seems to be related to Scott Adams' business concept of the confusopoly.
Erik's excellent reference:
Abstraction Intoxication Disease
Overview: Privileges abstract logic and discussion over concrete details.
Reality: All abstractions either apply to concrete reality or are pointless. Often caused by exposure to socially-respected abstraction-intoxicated individuals.
Symptoms: Inability to or disinterest in relating thoughts or writings to actual experiences. When pressed, evades and deprecates the necessity. Inspires audience to name them sophomoric. Often absorbed with extreme idealistic discussions with little or no bearing on their actual behaviour. Becomes confused and uncertain when asked for clarification or details. For severe cases, consult an academic paper; look for extreme intricacy and an inability to write clearly, applied to describing little or nothing.
Source: Working memory limitations, see note.
Treatment: Uncertain.
Cure: Unknown, though prevention is likely easy.
Note: When the abstractions alone overflow the working memory it can give the impression one has appreciated all there is to appreciate about an issue, by contrast to day-to-day tasks, which fit entirely in working memory by dint of practice even if not by dint of simplicity. (E.g, driving is complex but well-practiced and the problem of getting from A to B fits easily in human memory.)
Confusopoly:
(neologism, economics) An economic condition whereby the market force of competition is evaded via intentional obfuscation.
The journalist is the one who screwed up. The Kahneman experiment showed that people ignore the a priori probabilities, i.e. they give the same chances of Jack being an engineer whether the group contained 30 engineers or 70. Of course the probability is not exactly 30% once you read the description; that's just ridiculous. Read for yourself here:
http://books.google.com/books?id=_0H8gwj4a1MC&pg=PA5&lpg=PA5&dq=In+sharp+violation+of+Bayes&source=bl&ots=YCc9dPQ-WG&sig=zrdKQdNFTBlnayt8K8pOs5y2408&hl=en&ei=r6u7TuzzC-Pl4QTnsryuCA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CB8Q6AEwAA#v=onepage&q=In%20sharp%20violation%20of%20Bayes&f=false
People do give the correct probability of 30% when not given any description.
The 'feminist bank teller' example is quite different from the lawyer/engineer one. It is a matter of logic that the probability that the woman is a feminist bank teller cannot be higher than that she is a bank teller. The fact that many people think the probability is higher does illustrate irrationality in their thinking. There is no such irrationality in the other example; if people are wrong in their probability estimates, it can only be shown by empirical data.
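The logical point about the conjunction can even be checked mechanically. Here's a toy simulation with made-up rates (5% bank tellers, 40% feminists, drawn independently, purely for illustration); the inequality it demonstrates holds for any joint distribution whatsoever:

```python
import random

random.seed(0)

# Toy population: each person is a (is_teller, is_feminist) pair.
people = [(random.random() < 0.05, random.random() < 0.40)
          for _ in range(100_000)]

p_teller = sum(1 for t, f in people if t) / len(people)
p_teller_and_feminist = sum(1 for t, f in people if t and f) / len(people)

# The conjunction can never be more probable than either conjunct,
# no matter what the description of Linda says:
assert p_teller_and_feminist <= p_teller
```

That's why the Linda question tests pure logic, while the Jack question turns on an empirical claim about how informative the description really is.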
Oh come on, the second example is actually pretty good. Draw yourself a Venn diagram, and you'll see that the probability of being a feminist bank teller has to be less than the probability of being a bank teller.
The first example sounds like it was made up by someone at the magazine who doesn't understand Bayes's Law.
-Chris
I guessed A because I figured out what point they were trying to make.
It seemed the obvious argument since it was the only thing we knew for sure about Jack (that the pool he was in consisted of 30% engineers).
My answer is sophisticated more than it is smart, which speaks poorly of the test design and the scores gleaned from it.
But I would think that the larger point that Kahneman/Lewis aim to make is that there is a large potential downside when making an "irrational" decision compared to only making a decision based on the facts at hand.
I think I see Kahneman/Lewis' larger point. Profiling of some sort can be very helpful until it is not helpful, at which point it can become very destructive. So, yeah, Malcolm Gladwell Tipping Point stuff.
Kahneman chose the descriptions to be roughly equally likely if you knew Jack was a lawyer and Jack was an engineer. (I personally suspect P(politics|lawyer)>P(politics|engineer), though, but that means it's less likely Jack is an engineer.) The details SEEM rich but aren't.
Engineers aren't more likely than lawyers to have mathematics as a hobby? Or carpentry? That's doubtful.
The main effect is "base rate neglect": using only the last piece of information they got. A better example is medical testing: suppose someone's symptoms suggest a 1 in 1000 chance they have disease X. You give them a 99% reliable test for disease X (as in, a 1% chance it gives the wrong result either way), and it comes back positive. What's the chance they have the disease?
The common answer (even among doctors) is 99%, neglecting the base rate in the population. The actual answer is 9%.
And the correct answer according to Lewis and Kahneman would be 0.1%, i.e. equal to the base rate, and ignoring all other information.
Truly bizarre. We can only assume a 30% probability if there is zero correlation between interest in mathematics as a hobby and a career in engineering vs. law. Added to that would be a zero correlation between an interest in law and an interest in politics. Both fly in the face of all experience and are highly unlikely to be true. Virtually all politicians are lawyers, and law-making, a very important part of politics, would seem to be of interest to lawyers, would it not? And mathematics as a hobby would seem to positively correlate with someone who chooses a math-intensive career, such as engineering. Not to mention that the fact that he is male alone points towards engineer, since most engineering fields are heavily skewed male.
With the Linda case, the actual problem is that most people assume that if option 2 is that Linda is a bank teller and active in the feminist movement, then option 1, that Linda is a bank teller, implies that Linda is NOT active in the feminist movement.
That is, if you give descriptions of two categories, one of which is a subset of the other, people will usually take the broader category and exclude the examples in the more specific category.
It's just like most people if you talk about a square and a rectangle, are going to interpret "rectangle" to mean "non-square rectangle," even though a square is technically a type of rectangle.
I was at an interdisciplinary colloquium where Amos Tversky presented the results of his "Linda" example. (I think it was Tversky's, not Tversky and Kahneman's. Tversky said he developed this particular theory during spring break [!] at Stanford. Maybe Tversky and Kahneman shared credit like Lennon and McCartney.) The Vanity Fair article doesn't give the theory behind the "Linda" experiment, so the article is extremely misleading.
The theory is that while probability measures are additive, and while some people (Dempster-Shafer) have developed sub-additive approaches, in real life people reason about degree of belief super-additively. Tversky called the phenomenon "unpacking": when people "unpack" an event they pay more attention to the details than to the overall event.
The "Linda" example was not about stereotypes or culture. (I'd also guess that the ridiculous "Jack" example was invented by some clueless, politically correct Vanity Fair writers or editors.) In fact, Tversky gave other examples of "unpacking" as well. One was an experiment where people were asked to estimate the probability that tomorrow's temperature in San Francisco would be in the following ranges: 50-60, 60-70, and 50-70. (I don't remember if each subject was asked to estimate for each range.) The "probabilities" for the first two ranges summed to significantly greater than that for the last range, whereas by the axioms of probability (additivity) the sum of the first two must of course exactly equal the third. But when people "unpacked" the interval they gave more weight to the subintervals.
Tversky's talk was fascinating, by the way, which is why I still remember it. He discussed another experiment where subjects knowingly but "irrationally" chose more pain over less pain. (Some people objected that the decision was rational, but Tversky said that he couldn't call any such decision rational.)
Anyway, Steve Sailer's mistake was to assume that the Vanity Fair article was seriously meant to inform rather than to entertain. That's what led him on that wild goose chase about the "Linda" story.
The second example is legitimate, it clearly is more likely that someone is a bank teller than that someone is a feminist bank teller. So 85% of the people asked are making a logical error. Don't SAT tests and the like sometimes do this, include a superficially attractive but wrong answer among multiple choices?
Vaniver, I'm still not getting it. If the test is "99% reliable", then the chance of a false positive is, by definition, 1%. The previous information is not taken into account because it is entirely irrelevant.
The only way out of this seems to be if you're not using the words "99% reliable" to mean what the rest of humanity takes them to mean.
Your comment on reading fiction in good faith reminded me of The Girl With the Dragon Tattoo. Which was awful in a car wreck kind of way (I was listening to it with my wife on a road trip). It is basically a leftist fantasy of critical theory archetypes: the crusading journalist, the BRILLIANT, exploited, rebellious woman, the tycoon who so thrives on the domination of others that he starts to kidnap, rape, torture, and ritually kill vulnerable women. Their stereotypes are reality based though, so it's all good.
ReplyDeleteI've learned it all from Michael Scott:
http://www.youtube.com/watch?v=cmzUyn0LV2U
Hilarious posting.
ReplyDeleteThe novel writing law you reference is known as Checkov's Gun:
ReplyDeletehttp://en.wikipedia.org/wiki/Chekov%27s_law
basically, if you mention a gun on the mantle in chapter 2, it had better go off by chapter 5, otherwire, it should not be there.
This reminds me of the Dave Barry line, something approximately like "Majoring in psychology is a useful way to learn what's on the minds of lab mice."
Like the "suspense" genre problem of knowing by credits billing who's going to not die before the end. We pick up what might prove useful info constantly and thus fail to exhibit "rational" responses expected of us by our betters.
Also people are generally good at sensing when they're being led on (present political situation excepted). R.I. had a referendum to reduce the legal name from "State of Rhode Island & Providence Plantations" to just "Rhode Island" which was resoundingly defeated. Salesmen of the proposal made a spurious argument for "efficiency and simplification" though many suspected queasiness about the P word behind it all (even though there was never slavery in upper R.I., as long as the white man's held it anyway). Circumstances were influential--voters didn't understand why to vote for/fund a cosmetic change "just because." After all is Rhode Island a theater marquee or broadsheet headline that would benefit from fewer characters? Lewis might deem that "stupidity of crowds" but mortals aren't in the habit of judging via cogitation, however susceptible they may be to coaching. There are nationwide examples of the same phenomenon (personally hate it we don't have a $1 coin as Canada does but hey). Apologies if this has nothing to do with your IQ/academia subject
Establishment libs should be wary of deriding this, they may just get what they wish for. The entire concept of "racial dogwhistles" is at stake. Remember Gingrich this year about "the food stamp President" etc.? He didn't say "it" but you know what he was really saying, right? Right? Right.
Wes,
but it's not like we are grabbing a bunch of balls in a bucket while blindfolded and being asked what the chance is that it is a blue ball (when, say, 30% of the balls are blue). In this case, the details matter. Or so it seems to non-technical me.
Of course the extra information matters.
Would Kahneman adhere to the strict mathematical interpretation if presented with the information "Jack is an engineering graduate"? Maybe his own particular heuristic bias is so strong that even if told "Jack is an engineer" he'd angrily pound his desk with his fist insisting that the probability is still only 30%!
The Linda example is silly because, as Steve says, it's designed to fool people. Even describing option 2 as "logically impossible" is misleading. If both options include "bank teller" then we "cancel it out" from the question and merely ask "Is it more probable that Linda is active in the feminist movement or not?" Then it's just a straight-up question of conditional probability. What proportion of women who majored in philosophy OR were ever deeply concerned about social issues OR ever protested in antinuclear demonstrations are ALSO today active in the feminist movement? If the answer is over 50%, then it's more probable that Linda is active in the feminist movement than not. Of course, no one is going to have such details on hand, so we'll have to rely on guesswork.
For me, I wouldn't wager money that Linda definitely is active in the feminist movement, but I'd regard it as rather likely that she at least has sympathies. In the real world such suspicions are far more likely to be helpful than telling yourself "no, I am absolutely not justified in assuming I know anything about Linda's feelings about feminism!!!"
Anyway, the silliness of these examples shouldn't take away from Kahneman's work. The insights of behavioral economics provided by people like Kahneman, Tversky, Richard Thaler, Thomas Gilovich and others are very interesting and a valuable addition to one's thinking toolkit.
Silver
T'quelle loves to smoke crack, has 5 kids with 5 different women, would have voted for Obama but was too lazy to do so, takes part in 'youth mobs' to loot stores, loves gangsta rap, and lives in section 8 housing.
What is the likelihood that he be black?
I'd say about 5%.
I immediately had the same reaction you did. But I meta-analyzed the question as follows: if actual real-world knowledge can be used in this question, then there is no one answer: any of the answers might be correct (although 80-100% seems improbable). Therefore, the author of the question thought that real-world knowledge is not applicable; probably just worded the question wrong. Therefore, he wants me to choose 30%; therefore A.
A clarification about Tversky's "weather" example that he presented along with the "Linda" example: I'm sure now that each subject estimated the probability for each interval, and that the sub-interval probabilities summed to more than that for the union. Otherwise it would not have involved what Tversky called "unpacking."
ReplyDeleteAlso, now that I read Lewis's description more carefully, it's clear that he completely misunderstood the "Linda" experiment. It was not about "stereotypes" or "vivid descriptions," any more than was Tversky's "weather" experiment I described above. Both were constructed to test the same hypothesis. Lewis is clearly to blame for misunderstanding the experiment and for unintentionally misleading everyone including even Steve Sailer, whose mistake was to assume that Lewis at least got the very basics right.
A group contains 100 men. One black and 99 whites.
One man has set a world record in the Olympic 100 meter sprint.
What are the odds he is black?
Your answer.... 1 of 100
Is my analogy wrong?
"Kirk Spock" is very funny. And you're right.
The "Linda" example is fair, but sort of pointless.
The other one really bugs me.
Kahneman would make a terrible bookie.
I don't get it either. If Jack is a fictional character, then how can one be wrong in guessing his profession?
This is just stupid. Kahneman is writing as if people are numbers and the details don't matter. As if what we know to be generally true about people is not simply randomly right but mostly false.
ReplyDeleteAlso, the questions are asked as if the answer they are looking for is not the numeric quality but for the person to discern more through the details.
Stupid conclusions from trick questions.
See here:
http://lesswrong.com/r/discussion/lw/8d5/michael_lewis_on_kahneman_and_tversky_link/
It's not the journalist who screwed up. Google "Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious" - it's in many psychology books. Because it is an exact quote from Kahneman & Tversky (1973)! Plenty of links to teaching materials, too.
Simply shows what kind of garbage passes for psychology research these days. I hope he got the Nobel for something not as stupid as this. Because this is insultingly stupid.
It seems that the study also ignores conditional probability (note, all probabilities are technically conditional). Sure, they had a sample with 30 engineers and 70 lawyers.
However, they didn't ask "what is the probability that this person, randomly selected, is an engineer". They did ask "given the description of the person and the sample information, what is the probability that this person is an engineer?".
I think they suffer from the 'fail at understanding probability' bias.
Suppose you assembled a sample of 70 actual lawyers and 30 actual engineers and provided subjects with exactly the kind of biographical details given for Jack. Subjects must sort the bios into 70 lawyers and 30 engineers. I hypothesize that the subjects would identify them measurably better than random assignment would, with or without accurate first names. With accurate first names, Katy bar the door.
Consider a group of 100 eighteen-year-old males. Thirty are black and seventy are white.
ReplyDeleteA. Jamail is a member of this group. He lives in a run-down area of a large Eastern city. He was raised by his grandmother because his mother was in jail. He has never met his father. He has had several unpleasant encounters with law enforcement agencies. He dropped out of school at age 16. He is unemployed and spends most of his time in gang activities, listening to rap music and playing basketball with his friends.
What is the probability that Jamail is black?
B. Brett is also a member of this group. He lives with his parents in a wealthy suburb of an upper Midwestern city. His father is a business executive and his mother is a professor of Scandinavian studies at a local college. They met when they were students at the University of Utah, where they were both championship members of the swimming teams. Brett has never been in any sort of trouble. He is graduating near the top of his class from a prominent suburban prep school and will attend Dartmouth. His hobbies include swimming, lacrosse and studying German.
What is the probability that Brett is white?
Answers:
A. 30%
B. 70%
Explanation:
Any other answers are racist.
"Kahneman chose the descriptions to be roughly equally likely if you knew Jack was a lawyer and Jack was an engineer."
I have a hard time believing that the same percentage of lawyers and engineers do math for fun.
Every other trait, I can accept, is roughly equally likely among lawyers and engineers.
"even though mathematically, there was only a 30-in-100 chance of that being true."
That's just not so, as Steve demonstrates.
"Subjects assumed good faith on the part of the professors"
Wouldn't that make them morons?
These sorts of trick questions intended to wrong-foot the quiz taker are part of a method; it's a way of getting a potential sucker to be open to suggestion. "You were wrong about the last five questions, what else are you getting wrong because of your prejudices?" Guys who sell Florida swampland to retirees likely have a name for this technique: "I started out with some of the ol' Minnesota Knockdown and had them eating out of my hand".
I suspect this is a case of Vanity Fair being idiotic rather than Kahneman. The question about Jack is an utter blunder and I doubt anyone with a PhD in anything quantitative would make it.
The one about the 31 year old lady is different. That one really does show people to be bad at answering probability questions, although perhaps not much more than that.
Everybody seems to be confusing the Conjunction Fallacy with the Availability Heuristic; they are not the same thing. In both the Jack and Linda examples, the Conjunction Fallacy is what matters. With Linda, by mathematical definition, the probability that she is both a bank teller and a feminist CAN'T BE greater than the probability of her being just a bank teller. That doesn't mean there is NO connection between her biographical details and the likelihood of her being a feminist. The fact that too many people take the mathematical truism of the Conjunction Fallacy and bungle it, as in the Jack example, is an all too clear example of the limits of improving people's thinking through education. If Kahneman has genuinely made this error too, as I heard him do in his Edge.org talk on the subject, that's appalling.
Seems to me the main problem here is that they're using "probability" and "chance" in the technical sense, while most people understand them to mean, "Which do you think I (the questioner) mean?" And we're a pretty innumerate society anyway; if people understood probabilities at all, there would be no lotteries.
ReplyDeleteThe second example is especially instructive. People picked B not because they think two things are more likely to be true than just one of them. As it says, that's logically impossible, but they aren't thinking about logic. They're trying to guess what the pollster or the person writing the question wants them to answer. What they think is more probable is that the questioner is trying to make a point, thus B is his preferred answer.
It's like on poorly-done multiple choice tests, sometimes there will be questions with three short answers and one long, detailed one, and the long one is usually the answer. The question will be something like, "How many miles would you have to drive to get from New York to Paris?"
A) 1000
B) 5000
C) New York and Paris are on different continents, so you can't drive from one to the other.
D) 10000
If you had just landed on earth and had never heard of those cities, your best "bet" would be selecting C every time, even though logically they're all 25% probable.
I don't really give a damn about awards, but don't judge a Nobel winner's career fully on a popular writer's puff piece summation.
The teller example always gets tossed in because it's easy and cute, but your criticism is valid.
I do wish someone had paid you to review this one.
"...even though No. 2 is logically impossible. (If No. 2 is true, so is No. 1.)"
So No. 2 isn't logically impossible, unless No. 1 is too.
Can we blame that one on an intern, too?
Sheesh.
It looks to me that the lawyer/engineer example was simply the product of a Vanity Fair journalist who managed to butcher Kahneman's point beyond recognition, and indeed all the way into absurd falsehood.
Who would have thought a journalist could misunderstand a point of logic?
But I don't get your problem with Kahneman's actual example of Linda the teller. Yes, of course the details were set up to be misleading. But the point of that example is that, insofar as one is careful to follow the logic of what was imparted, and what one should conclude, then one will NOT go down the path favored by sloppy thinking.
The example is misleading in much the same sense that optical illusions are misleading. They both expose incorrect inferences that are worthwhile studying precisely because they indicate mechanisms of thinking that may be otherwise unexpected.
Steve, over at Paul Graham's Hacker News there's a smart (as usual) conversation going on about exactly what you're saying. http://news.ycombinator.com/item?id=3219240
This clever sillies is good stuff.
Where do guys learn to craft such complex fallacies or dress up simple fallacies to fool or feed the confirmation biases of the indoctrinated masses?
* Vanity Fair sidebar/Kahneman
* Joseph Soares/Wake SAT
* Gladwell/much of his stuff
Is there a book I can buy or a class I can take to master this technique?
A man with a scowling face comes at you with a knife in an enclosed area.
How much danger are you in?
a. Some
b. A lot
c. None
The correct answer is C! He's a waiter bringing your steak knife! See how you misperceive danger? Now excuse me while I cash my large check for my brilliant journalism.
What Ron Mexico said: The correct posterior probability is derivable via Bayes' Rule, truly, but VF's "explanation" indicates that the likelihood provides no information, which can't be right---or if it does turn out to be true in this example, would show not so much that people neglect priors ("base rates"), but that they're confused in thinking that engineers are a lot more likely to do math for fun. Vaniver's (famous) medical-test example is better.
What Steve says is also true and was noted by the philosopher H. P. Grice: ordinary human conversation is made possible in part by assuming that when someone tells us a seemingly extraneous detail, it's to communicate something important---rather than, say, to trick us, or for no reason at all (the maxims of "quantity" and "relevance").
So a guy named Daniel Kahneman, dual Israeli-US citizen, veteran of the Israeli military, does not want the rest of us to notice group patterns (i.e. "stereotypes").
He is not only a Nobel laureate, he is a Kevin MacDonald laureate as well.
I wonder if this guy would want the security personnel at El Al to follow his line of thinking when he goes to visit his kin in Israel.
From a set of people consisting of 3 Muslims and 7 Christians, a person is randomly picked. His name is Mohammad Aziz. What is the probability that he is a Muslim?
I put it at 80-100%. I'm married to an engineer and Jack sounds a lot like my husband.
As Ron M already alluded to, I think the Vanity Fair article gave you the wrong idea about what Kahneman found and said.
The finding from the Jack study was that two different groups of subjects gave (on average) the SAME probability estimate that Jack was an engineer regardless of whether the group was told that the prior probabilities were 30/70 (as in the version you gave) or whether the group was told 70/30. Collectively, that is irrational, no matter how you slice it.
Far from denying that the sketch is an accurate stereotype of an engineer, Kahneman's design assumed the validity of that. His point was that when people have "representativeness" information handy, they discard the prior probabilities.
The correct ("Bayesian") way to analyze these problems is to consider BOTH prior probabilities AND representativeness. Bayesian reasoning does not deny the validity of reasoning from stereotypes--it actually formalizes how to do it right.
Regarding Linda, again the VF article has it wrong. The strong result was between groups: the group estimating p(feminist bank teller) gave a higher number than the group estimating p(bank teller). Again, collectively, it can't be so.
But neither Kahneman (nor Bayes :-) ) ever said anything that would deny that the sketches used in these problems do provide valid statistical evidence in favor of the group membership you have in mind. So if the VF writer makes it sound like K's work invalidates any reliance on stereotypes, it just means that he is in completely over his head.
I call it throwing out the baby (guarded prejudice) with the bathwater (bigotry).
And here's the irony: it engages in the same inductive-logic fallacy they blame everyone else for. Since some bigots are prejudiced, ergo all prejudices are bigoted.
...including, among so many others, suggesting that it's a bad idea to loan $400 large to a non-English-speaking strawberry picker in the Central Valley to buy a house.
I can say with 100% certainty that Kahneman is an idiot. That's what I take away from that exercise in pretzel logic.
We know that lawyers tend to be politically liberal, and that they have a keen interest in politics and social issues. We know that their hobbies do not typically include home carpentry, sailing, and mathematics.
It's as if Kahneman gave the general profile of a physicist, then asked what the probability is that Joe the Physicist is black. And insisted that the correct answer is 13%, because that's the percentage of blacks in the general population.
What is the probability that an engineer has mathematics as a hobby? Much much greater than the probability for a lawyer. Math as a hobby is really pretty rare, but it's merely slightly uncommon among engineers.
Sailing probably has roughly equal probability
4 children favors the engineer
Conservative favors the engineer
Non-political grossly favors the engineer
There being only 30% engineers in the sample favors the lawyer. Properly speaking this is a conditional probability problem---like the infamous urn problems. Unless you deliberately engineered the sample, like sensitivity training types ALWAYS do, I'd say the chances you've got an engineer are around 80%.
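The commenter's reasoning can be sketched as a Bayesian update in odds form: start with the 30:70 prior odds and multiply by a likelihood ratio for each trait. The specific ratios below are purely illustrative guesses, not data from any study; with these made-up values the posterior lands near the commenter's 80% figure.

```python
# Odds-form Bayesian update for the Jack problem.
# Prior odds of engineer:lawyer are 30:70. Every likelihood ratio below is
# an illustrative assumption about how much likelier the trait is for an
# engineer than for a lawyer -- not measured data.
prior_odds = 30 / 70

likelihood_ratios = {
    "math as a hobby": 4.0,   # assumed: far likelier for an engineer
    "sailing": 1.0,           # assumed: roughly neutral
    "four children": 1.2,     # assumed: mildly favors engineer
    "conservative": 1.3,      # assumed: mildly favors engineer
    "apolitical": 1.5,        # assumed: favors engineer
}

posterior_odds = prior_odds
for trait, lr in likelihood_ratios.items():
    posterior_odds *= lr

p_engineer = posterior_odds / (1 + posterior_odds)
print(f"P(engineer | description) ~ {p_engineer:.2f}")  # about 0.80 with these guesses
```

The point of the sketch is structural: the 30% base rate enters as the prior odds, and every diagnostic trait shifts those odds multiplicatively, so the answer is 30% only when every likelihood ratio is exactly 1.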
"The journalist screwed up."
I'll say. If all the subjects accurately report "I'm a lawyer" or "I'm an engineer," then you know with 100% probability what they are.
It's tricky, though. I can see how even a smart black/white journalist could miss this. And Steve's attempt to torture this example into anti-diversity thinking is a big FAIL.
A team of psychologists performed personality tests on 100 men, of which 50 were professional football players and 50 were mathematicians. Brief descriptions were written for each subject. The following is a sample of one of the resulting descriptions:
Jamaal Holmes is a 28-year-old 250 lb man. He is single and has no children, though he has an active sex life with several different women. His hobbies include working out in the gym and watching football on TV. He is not much of a reader. He attended college on an athletic scholarship.
What is the probability that Jamaal Holmes is one of the fifty mathematicians?
If your name is Kahneman you think the answer is 50%, because you believe the above profile could just as easily apply to some random mathematician as to a football player. You don't concern yourself with awkward and annoying questions such as "what percentage of mathematicians weigh 250 pounds?", because that would get in the way of the Big Important Point you are trying to make.
In the Linda example, how is this true?
"No. 2 is logically impossible"
Michael Lewis's recent article on Germany was some of the worst tripe I've ever read from him. Maybe he's letting his Hollywood success - or its nutty politics - go to his head.
I know it's the exception that proves the rule - so I'm still waiting to find the lawyer that enjoys or even practices home carpentry.
To the idiots who are commenting that Kahneman is an idiot (or some Jewish supremacist or whatever): You guys are the idiots, OK? Hope this helps.
Since my earlier comments didn't get through moderation, I'll repeat: I heard Amos Tversky explain the "Linda" experiment and similar experiments. The article does not accurately describe the motivation or conclusions of the experiment as described by Tversky himself.
Consider for just a moment that a renowned Nobel prize-winner might possibly be smarter than you.
To the idiots who are commenting that Kahneman is an idiot (or some Jewish supremacist or whatever): You guys are the idiots, OK? Hope this helps.
Since my earlier comments didn't get through moderation, I'll repeat: I heard Amos Tversky explain the "Linda" experiment and similar experiments. The article does not accurately describe the motivation or conclusions of the experiment as described by Tversky himself.
Assuming that is true, then the person you should be calling an idiot is the person who wrote the article and not the people commenting on it.
> I imagine that this sidebar wasn't made up by Kahneman or Lewis, but by some intern at Vanity Fair.
I agree, I'm 97% certain it's wrong.
"No. 2 is logically impossible" is bad shorthand for "It's logically impossible for No. 2 to be the right answer [to the question of which is more plausible]."
John got passed over in promotion to less qualified workers, has been denounced as a 'racist' for discussing racial differences, has been called a 'xenophobe' for expressing negative views on open immigration, has been denounced as a 'homophobe' for opposing gay marriage.
ReplyDeleteWhat is the chance that he's a white conservative?
To the idiots who are commenting that Kahneman is an idiot (or some Jewish supremacist or whatever)
The lofty pinnacle from which you call other people idiots is undermined by your bizarre paranoia. Jewish supremacist? Only one person here has as much as made note of the fact that Kahneman is Jewish - and that person is you.
Jack is generally liberal, favors 'gay marriage'/open borders/Zionism/Obama/, donates to organizations like ADL and SPLC, writes politically correct articles, is favored by institutions for his ethnicity and views. What is the chance that he's a Jewish academic?
Dear Anonymous: I already called the journalist and/or editor idiots, so I've got that base covered. But some of the commenters are idiots too for taking it at face value. I'm not talking about Steve Sailer and others here who tried to piece together the real story as best they could. I'm talking about the ones who dissed Kahneman for it. I mean, if an article about some Nobel prize-winning physicist quoted him as saying that E equals m c cubed, would you call the physicist an idiot, or would you consider that somebody at the magazine perhaps made a mistake?
Especially funny is the comment by Noah172, a Kevin MacDonald disciple who apparently believes every word he reads in a magazine published by Samuel Irving Newhouse, Jr. Ha!
The piece touches on this briefly when it talks about System 1 and System 2 decision making, but this is a flaw that runs through more or less all "people are irrational" type psych experiments.
Most people think that thinking hard is a bit of a chore, and they will do it as long as the occasion is important enough to stir them from sloth. More succinctly, people get a lot smarter in situations where they have time to think and the stakes are high than when the stakes are low. For a test subject, there is no lower-stakes environment than a psych prof's study, and expecting rationality from such people in such a setting is more than a bit ridiculous.
It's rather odd that PhDs in psych don't actually seem to get this. Perhaps we should do some sort of study on them.
Piggy,
I think I see Kahneman/Lewis' larger point. Profiling of some sort can be very helpful until it is not helpful, at which point it can become very destructive. So, yeah, Malcolm Gladwell Tipping Point stuff.
It's difficult to think of any examples of lives being ruined on an individual basis as a result of prejudicial thinking. "Oh boy, just imagine how much better Jim could have lived if only he hadn't been so dreadfully (and totally inaccurately) prejudiced about blacks' propensity for violent behavior!" Doesn't really work.
On the other hand, it's (pardon me) "probably" fair to say that society as a whole can suffer from politicians manipulating people's prejudices -- leading them into war, for example. So the "anti-prejudicial check" may not be all bad news, tiresome though it sometimes is.
Aaron #1,
And we're a pretty innumerate society anyway; if people understood probabilities at all, there would be no lotteries.
Not necessarily. Lottery wins are so unlikely that tickets effectively have negative expected value (during one's lifetime), but that doesn't mean playing is necessarily foolish.
If you spend $20/year (constant 2011 dollars) on powerball and add an unlikely large 5% risk-free real interest rate to cover interest foregone, after 50 years it totals a little over $4000, which is not a terribly large sum of money. When you consider that it affords you the opportunity to win a totally life-changing sum of money (nine figures) it's not money particularly badly spent.
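The arithmetic above checks out; here is a quick sketch of the future-value calculation, treating the $20 as an ordinary annuity compounding at the stated 5% real rate:

```python
# Future value of $20/year at a 5% real rate over 50 years
# (ordinary annuity: each deposit made at the end of its year).
rate, years, deposit = 0.05, 50, 20.0
future_value = deposit * ((1 + rate) ** years - 1) / rate
print(f"Total foregone after {years} years: ${future_value:,.0f}")  # about $4,187
```

So "a little over $4000" is right for the lifetime cost of the habit under these assumptions.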
Aaron #2,
Consider for just a moment that a renowned Nobel prize-winner might possibly be smarter than you.
Probably...
Silver
Doorknob, are you an idiot.
K&T's conclusions from the second example (bank teller) are just as flawed as from the first--though for different reasons. As anyone with some questionnaire construction or market research experience knows, when you pose questions to the public (or to undergraduate psych students as K&T so often did), respondents naturally try to be helpful, often to a fault. The answers you get back are not always in reference to the question or hypothesis test you intended. As Steve said, respondents assume a degree of good faith and intelligence on the part of the question creators. Respondents will respond to the task with that in mind.
It's quite likely that many respondents read the response options below:
(1) Linda is a bank teller.
(2) Linda is a bank teller and is active in the feminist movement
...And re-interpreted the original question to fit. That is, it's likely respondents re-interpreted the question as asking which is the more helpful description, not which is merely the "more [mathematically] probable". If not, why would my otherwise very smart full professor at Stanford (Tversky) or Harvard (Kahneman) pose me such obviously overlapping response options?!
At the most, it's a test of compliance, not logical inconsistency.
Great post. Can Steve now get his nobel prize too, please?
Alat @5:27
Vaniver, I'm still not getting it. If the test is "99% reliable", then the chance of a false positive is, by definition, 1%. The previous information is not taken into account because it is entirely irrelevant.
The only way out of this seems to be if you're not using the words "99% reliable" to mean what the rest of humanity takes them to mean.
If 1 in 1000 people have the disease, and the test is "99% reliable" according to the definition that Vaniver uses, then 1% of the people who have the disease will test negative, and 1% of the people who do not will test positive.
So for 1,000,000 people, on average 1,000 will have the disease and 999,000 will not. If all 1,000,000 are tested, then 990 of the people with the disease will test positive and 10 will test negative. Of the 999,000 who do not have the disease, 9,990 will test positive and 989,010 will test negative. So there are 9,990 false positives for every 990 true positives.
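These numbers drop straight out of Bayes' rule; a short sketch reproducing the commenter's arithmetic (base rate 1 in 1,000, sensitivity and specificity both 99%):

```python
# Rare disease, "99% reliable" test: sensitivity and specificity both 0.99.
population = 1_000_000
sick = population // 1000          # 1,000 people have the disease
healthy = population - sick        # 999,000 do not
sensitivity = specificity = 0.99

true_positives = sick * sensitivity            # 990 sick people test positive
false_positives = healthy * (1 - specificity)  # 9,990 healthy people test positive

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {p_sick_given_positive:.3f}")  # about 0.090
```

Despite the "99% reliable" label, a positive result means only about a 9% chance of disease, because the false positives from the huge healthy pool swamp the true positives.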
Steve, you don't get it.
The only conclusion you can logically deduce from the information supplied by K & T is that 30% are engineers. Any other conclusion is not necessarily illogical, but it's based on additional information not supplied by K & T. The point of K & T's experiments was that people read all sorts of stuff into scenarios (and decision making) that isn't there.
Flipping the script for a second, let's try the following scenario.
In a group, 70% of men are good to their wives and 30% are wife beaters. Mark belongs to this group and Mark is a Christian. What is the probability that Mark beats his wife?
You'd be surprised by the results that people give to this sort of thing.
Your average Lefty would answer 70% and your average Righty would answer 30%. The logical answer is that you can't tell from the information given, but logic never stops people from coming to their own conclusions.
Keith Stanovich's book, What Intelligence Tests Miss, lists a whole lot of similar cognitive biases. It's worth a read. He's a good guy. Here is a very short article, written by him in the Scientific American.
Which alternative is more probable?
(1) Linda is a bank teller.
(2) Linda is a bank teller and is active in the feminist movement.
If I were asked that question, I would say, "I can't answer the question without more information." It could be (1), or it could be both are equally probable, depending on whether or not there exists at least one bank teller fitting Linda's description who is not in the feminist movement.
Right.
Jack is a minor aristocrat in Russia in 1900. In the first act, Jack cleans his gun. What are the odds that his gun will go off in the final act?
Aaron in Israel,
"who apparently believes every word he reads in a magazine published by Samuel Irving Newhouse, Jr. Ha!"
I honestly don't understand this. Kahneman has a long bio entry, which he wrote himself, on the official Nobel Prize website. What magazine did I read?
I wrote a response to Aaron, which I do not see approved yet, which includes the name Gramsci. My mistake; I meant to write Marcuse.
(I hope that comment goes through.)
The only conclusion you can logically deduce from the information supplied by K & T is that 30% are engineers. Any other conclusion is not necessarily illogical, but it's based on additional information not supplied by K & T. The point of K & T's experiments was that people read all sorts of stuff into scenarios (and decision making) that isn't there.
Tests like this are a poor model of real life scenarios.
In tests like this, you're given some precise probabilities, and you're expected to assume absolutely nothing about the other probabilities unless they're explicitly revealed. In the real world there are often reasonable probabilities you can estimate.
Aaron in Israel,
Ok, I see you meant Vanity Fair. My oversight.
I tried to write you an initial response, but it did not go through, so here is take two:
Speaking for myself, I do not call Kahneman stupid. I would not even deny that he is brilliant. He is, however, using his brains to advance an agenda that is good for him and his, and, I believe, detrimental to the rest of us.
You see, Aaron, brains are what make the Tribe worth kvetching about. Marx, Freud, Trotsky, Boas, Marcuse, Alinsky, Podhoretz, Soros, and a thousand other MoT's one could name. All brilliant in their way, and all disastrous for humanity.
The Jack example is literally wrong. Let A be the event that Jack is an engineer. P(A) = .3
However, let X be the event that Jack has all of the described properties.
P(A | X) (the probability of A given X) is most definitely not 0.3, unless the events X and A are independent.
It's almost like journalists don't know anything about probability.
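The commenter's point can be made concrete with Bayes' rule. The two conditional probabilities below are pure assumptions for illustration (the description plausibly fits many engineers and few lawyers); only the 30/70 prior comes from the problem:

```python
# Hypothetical Bayes'-rule calculation for the Jack problem.
p_engineer = 0.30              # prior: 30 of the 100 are engineers
p_lawyer = 0.70
p_desc_given_engineer = 0.20   # assumed: description fits many engineers
p_desc_given_lawyer = 0.02     # assumed: description fits few lawyers

p_desc = p_engineer * p_desc_given_engineer + p_lawyer * p_desc_given_lawyer
p_engineer_given_desc = (p_engineer * p_desc_given_engineer) / p_desc
print(f"P(engineer | description) = {p_engineer_given_desc:.2f}")  # 0.81 here
```

P(A | X) collapses back to the 0.30 prior only in the special case where the description is equally likely for both professions, which is exactly the independence condition the commenter names.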
It appears that Kahneman doesn't understand Bayesian probability as well as Nate Silver and Steve Sailer do.
Here's an interesting blog post dealing with Bayesian probability
http://fivethirtyeight.blogs.nytimes.com/2010/12/15/a-bayesian-take-on-julian-assange/
-Risto
The only conclusion you can logically deduce from the information supplied by K & T is that 30% are engineers. Any other conclusion is not necessarily illogical, but it's based on additional information not supplied by K & T.
That's ridiculous. Apparently the only information in our heads is supposed to be that which K&T give us. Now that is illogical.
The only conclusion you can logically deduce from the information supplied by K & T is that 30% are engineers. Any other conclusion is not necessarily illogical, but it's based on additional information not supplied by K & T.
You're assuming that we are not intelligent people who already have access to a great deal of information - perhaps even more than K and T possess - but are instead rats in a maze of some omnipotent scientist's construction.
That is not the only conclusion one can logically come to, because the world is not a logical construct created by Kahneman and in which he is God.
So are Kahneman and Tversky like the anti-Sherlock Holmes?
The evidence therefore tells us - nothing, you stinkin' bigot! Stereotyping is a hate-crime.
In a group, 70% of men are good to their wives and 30% are wife beaters. Mark belongs to this group and Mark is a Christian. What is the probability that Mark beats his wife?
You'd be surprised by the results that people give to this sort of thing.
Your average Lefty would answer 70% and your average Righty would answer 30%. The logical answer is that you can't tell from the information given, but logic never stops people from coming to their own conclusions.
Again, this assumes that nobody is in the possession of any knowledge of the real world except that which we are "given" by some outside power.
If you know that the probability that a Christian man will beat his wife is X percent, then X percent is the correct answer. Regardless of what some ignorant person constructing a test may believe. You can't just tell people "pretend you don't already know all the things which you know". You especially can't make that command an unspoken part of your test.
It's like presenting people with a physics problem which requires the use of the speed of light to solve, and then saying: "Ah ha! You assumed that the speed of light is 299792458 meters/sec, but I never told you that it was! How illogical of you!"
I guess I'm not understanding the question. Doesn't it all really revolve around how you evaluate the description? If mathematics is a hobby only engineers engage in, and there exists one engineer in the sample, it's 100% likely Jack is an engineer.
Malcolm Gladwell is a big fan of Kahneman, which is reason enough to look critically and closely. My big problem with him is that he gets constantly credited by the media as a genius whenever people do things that are stupid, as if we needed his discoveries to know people aren't utility-maximizing robots. His heuristics are so vague that they can apply to many situations.
For instance, his "representative heuristic" or "base rate fallacy" and "law of small numbers" are claims that people generally focus too much on scant information about individuals and ignore prior information about group membership. (E.g., focus on attributes of Jack, ignore 70/30 split of lawyers and engineers.) Formally, people don't intuitively use Bayesian statistics because they underweight prior probabilities.
This could be a prediction that, for instance, if a Black job applicant gets 3/3 interview questions right and an Asian applicant only gets 2/3 right that the employer will irrationally hire the Black applicant because he's ignoring the prior group-level information that Asians have much higher IQs than Blacks.
But instead the psychological literature is awash with claims that employers massively irrationally discriminate (often unconsciously) against Blacks because they have a huge prior bias of judging people based on their racial group. And no one ever is rude enough to point out that Kahneman's very general heuristics seem to make the opposite prediction.
After Obama took over there was a lot of hype about Kahneman's work and "behavioral economics" sweeping through policy circles to correct the irrationality that fueled the financial crisis, often hyped by the same elite journalists who completely ignored the housing bubble. It hasn't exactly been a game changer.
Billy Willy said...
"As Ron M already alluded to, I think the Vanity Fair article gave you the wrong idea about what Kahneman found and said."
No. The entire text is a quote from Kahneman & Tversky.
"The finding from the Jack study was that two different groups of subjects gave (on average) the SAME probability estimate that Jack was an engineer regardless of whether the group was told that the prior probabilities were 30/70 (as in the version you gave) or whether the group was told 70/30. Collectively, that is irrational, no matter how you slice it."
No again. Because no group was aware of the inversion choice - so they had no reference to go by and the subjects simply picked some likely number. Completely rational and completely correct. You can be absolutely sure that a majority would properly weigh probabilities giver the "compare the 30/70 and 70/30 cases". But you can't expect correct answer to a question that was never posed. THAT is irrational (or stupid, or intentional bullshit).
Aaron in Israel said...
Consider for just a moment that a renowned Nobel prize-winner might possibly be smarter than you.
Here is the original paper in question:
http://www.econ.ucdavis.edu/faculty/nehring/teaching/econ106/readings/Extensional%20Versus%20Intuitive.pdf
It claims that "Linda" respondents flagrantly violated conjunction logic even though the authors themselves suspected that the actual problem was with the wording of their question (a majority assumed that #1 means teller AND NOT a feminist). The problem is that they refused to test this possibility in the most direct way. They did all kinds of things EXCEPT the right one.
Yes, the journalist butchered "Linda" section but Kahneman & Tversky version is not much better.
Here is for you to understand what's going on:
http://home.comcast.net/~erozycki/ConjunctiveFallacy.html
Do you think Michael Lewis suffers from Howell Raines syndrome?
@ anon - "John got passed over in promotion to less qualified workers, has been denounced as a 'racist' for discussing racial differences, has been called a 'xenophobe' for expressing negative views on open immigration, has been denounced as a 'homophobe' for opposing gay marriage.
What is the chance that he's a white conservative?"
Not 100%, that is for sure. :)
I know it's the exception that proves the rule - so I'm still waiting to find the lawyer that enjoys or even practices home carpentry.
Here's one. I enjoy it, but I don't do it because I enjoy it -- I do it because it needs to be done.
Boy, not many commenters seem to have gotten what's going on in these examples.
As Billy Willy points out, the lawyer/engineer example actually does ultimately derive from Kahneman (or K and Tversky). But the actual experiment they performed was decidedly different from what is suggested by the Vanity Fair sidebar.
Again, TWO groups of subjects were presented with very DIFFERENT prior probabilities as to the proportion of lawyers and engineers. Yet, apparently, they came up with essentially the SAME posterior probabilities. The only reasonable conclusion to draw is that, on average, as a group, these subjects simply dismissed the prior probabilities.
There may be some issues as to whether the subjects of this sort of question and others "really" understood what was being asked, and the exact meaning of what was being assumed, but in a way that's the point: people DON'T focus on these things in such a fashion as to pull out the essential ingredients relevant to logical inference. Instead, they simply think about subject matter in a sloppy, stereotyped way. If, upon thinking about certain issues, they effectively ignore prior probabilities, or the significance of conjuncts, then they are thinking in some way other than a logical one.
That is what is important to recognize in people, to explain, and to incorporate into one's theories about human behavior.
The only conclusion you can logically deduce from the information supplied by K & T is that 30% are engineers. Any other conclusion is not necessarily illogical, but it's based on additional information not supplied by K & T.
EVERY conclusion we draw relies "on additional information not supplied by K & T", like the meanings of the words they used.
This sums up the confusion - essentially probabilities properly understood measure one's ability to guess. So long as causality abides, probabilities do not exist in 'things' - the objective probability of me winning a lottery is 100% or 0%. The subjective probability is my ability to make an accurate prediction -which would be 1/tickets sold. If I know the lottery is rigged, the subjective probability changes regardless of tickets sold. Same with a die roll. The objective prob. of me rolling a 6 is 100% or 0% as it is determined by physics. My ability to make an accurate guess is 1/6. This changes with more information about the die. If I suspect it is loaded, my subjective prob will not be 1/6. Check out this article:
http://mises.org/daily/4979
I seem to remember that Pinker pretty thoroughly demolished this type of thinking, in "How The Mind Works". He basically said that these two psychologists were over-educated versions of smart-ass adolescents. He said that he tried to make a similar type of argument to his father when he was 13 regarding weather fronts, but ended the narrative by saying that his long-suffering dad was right and his know-it-all son was wrong. He basically said that Kahneman and Tversky were comparing the real world to a casino, whereas in actuality, the casino is the exception to the real world. The real world is probabilistically different.
My guess is this is a failure to understand probability. Whether Lewis made the error, or Kahneman, or the author of the sidebar, or some editor, I can't tell.
Someone in this chain is not a Bayesian.
Lots of people who work with statistics and probability haven't the faintest idea what a probability is saying--even phd level math folks. They get themselves all confused thinking about frequencies, outcomes, and having been told false things about independence and then pre knowledge and post knowledge.
A probability is a measure of YOUR IGNORANCE. What's the probability you toss a penny and it comes up heads? Given NO OTHER INFO, it's 1/2. Now, say you tossed that penny with a special penny tossing machine that provided just the right impulse, rotation, and adjusted for wind velocity, it might be 99%. Given you LOOK at the penny after the toss, you know the outcome--and so the probability of heads is either 0 or it's 1, but it definitely ain't 1/2.
That means that a probability is not "in the world", constant for a given problem as defined, waiting to be found. It depends entirely on what you know, and how well you know it.
People have been misled into thinking that knowing more doesn't change a probability by being mistakenly told that "The probability of having a boy baby on a couple's 6th pregnancy when the first 5 are boys is 1/2."
That's false--the 5 boys are telling us a great deal about how nonrandom the sex selection is in this case. But people have been told the answer's 1/2, so they apply other mistaken ideas about how evidence changes probabilities everywhere they look.
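The boy-baby point can be made concrete with a small Bayesian sketch, assuming (purely for illustration, since the commenter specifies no model) a uniform Beta(1, 1) prior over the couple's unknown boy-probability p:

```python
# Sketch: how observing 5 boys in a row shifts a Bayesian estimate of
# P(boy). The uniform Beta(1, 1) prior is an illustrative assumption,
# not anything from the original comment.
from fractions import Fraction

def posterior_predictive_boy(boys, girls, alpha=1, beta=1):
    """Beta-Binomial predictive probability that the next child is a boy."""
    return Fraction(alpha + boys, alpha + beta + boys + girls)

print(posterior_predictive_boy(0, 0))  # prior predictive: 1/2
print(posterior_predictive_boy(5, 0))  # after five boys in a row: 6/7
```

The predictive probability is 1/2 only if you insist p is known exactly; once p is uncertain, the five boys count as evidence and push the estimate up.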
Of course the probability of Jack being an engineer is higher than 30%. It was exactly 30% before you knew ANYTHING except that 30 of the 100 people were engineers. As soon as you were told any more info, the probability changed. Just hearing that it was a man should have changed your probability towards engineer. Certainly the other details should have too.
Candid observer has it right. This was a between-groups study and the point was base-rate neglect. This has been demonstrated umpteen times, using many different approaches. The original K&T is just an initial demonstration of the phenomenon. If both groups provide identical estimates, it's reasonable to infer that on average they are not being swayed by the percentages (as Bayes' theorem says they should be).
Nassim Nicholas Taleb calls this sort of use of probability (including and especially the "Linda" chestnut from Statistics 101) engaging in the "ludic fallacy". While real probability and statistics are a useful tool (and lucky are those who can wrap their minds around the counterintuitive way of thinking required), sometimes we need to be like NNT's streetwise "Fat Tony". Faced with two throws of six dice, one coming up all sixes, the other (in this order) 4, 2, 1, 6, 5, 3, and asked which is the more likely (given fair dice, under the laws of probability, both throws are equally probable), people not trained in probability say that the second throw is more likely, because they interpret that precise throw as meaning "random". Fat Tony would agree, but only because he would be skeptical of the fair-dice assumption. Call it "metaprobability", i.e. the probability of probability. So when asked about the odds of Linda being both a bank teller and a feminist, they are relying on the same tried-and-true evolved heuristics that allow me to conclude with confidence that someone who has black table napkins also has black bedsheets. And the reason this heuristic has evolved is that attributes tend to cluster, or, putting it another way: "people are more typical than you can possibly imagine".
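Fat Tony's skepticism can be put in numbers, as a sketch: under the fair-dice assumption both ordered sequences have exactly the same probability, but six sixes is strong evidence against the fairness assumption itself. The 1% prior on loaded dice is an illustrative assumption, not a figure from Taleb.

```python
# "Metaprobability" sketch: both ordered sequences are equally likely given
# fair dice, yet all sixes shifts belief toward the dice being loaded.
# The 1% prior and the always-sixes loading model are assumptions for
# illustration only.
fair_seq = (1 / 6) ** 6          # probability of ANY specific ordered sequence
p_loaded_prior = 0.01            # assumed prior that the dice always roll sixes
p_sixes_given_loaded = 1.0

posterior_loaded = (p_loaded_prior * p_sixes_given_loaded) / (
    p_loaded_prior * p_sixes_given_loaded + (1 - p_loaded_prior) * fair_seq
)
print(f"{fair_seq:.2e}")          # ~2.14e-05, same for 6,6,6,6,6,6 and 4,2,1,6,5,3
print(f"{posterior_loaded:.3f}")  # ~0.998: Fat Tony's skepticism, quantified
```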
Many of the objections to the "Linda" experiment seem reasonable, especially given the information provided in the article, but they don't apply to the similar experiment where people assigned probabilities to temperature intervals in San Francisco. That experiment did not involve fictional characters, stories, or yanking people's chain. It asked people to assign probabilities to basically value-neutral physical events. Those who object to the "Linda" experiment should address the "weather" experiment where the objectionable features are absent.
According to Tversky, both experiments supported the same hypothesis: that as people "unpack" a problem they assign "too much" weight to the components as compared to the whole (i.e., their degrees of belief are superadditive).
While Pinker is correct (as paraphrased by Anonymous) that life isn't a casino, in a straightforward problem like the weather prediction experiment one could on average win money by betting against people's reasoning.
Everybody (including Pinker's father) says, yeah, of course we don't think rationally, we all know that. Correct. But that meta-belief often doesn't match our actual beliefs and behavior, which often assume rationality. Our surprise at K and T's findings shows, among other things, how far our actual beliefs diverge from that meta-belief.
Even more than that: In the K and T "cold water" experiment where subjects voluntarily chose more pain over less pain (with no external incentive), some subjects even said at the time something like, "This is really weird that I'm choosing this option." They knew that they were acting "irrationally" at the time, contrary to how they'd expect themselves to act, and they were puzzled by it.
Pinker (paraphrased) also discounts K and T's work when he suggests that it's just "people are irrational" - hey, we already know that! But did we already know that our actual degree-of-belief function is often superadditive? That's what Tversky claimed to show with the "Linda" and other experiments. If he was correct (I have no idea whether he was), then wouldn't that be a useful contribution to cognitive science? K and T didn't just show that people are irrational (duh), they showed how people are irrational.
If you ask people to estimate what % of the American population belong to different ethnic groups, they'll come up with average guesses that add up to way more than 100%.
I'm more interested in anatomizing common elite misconceptions, since, by definition, they run the world. For example, here you see Kahneman -> Lewis -> Vanity Fair editor (in other words Nobelist to best all around magazine journalist to most expensive magazine) and we still end up with a self-evident mistake. Why? Because everybody knows Stereotypes Are Wrong. So any kind of input triggers that elite response.
"Lots of people who work with statistics and probability haven't the faintest idea what a probability is saying--even phd level math folks"
Every undergrad who has studied Math Stats knows what a probability is.
I have yet to meet PhD-level math people, who can grasp measure-theoretic probability, who can't grasp something this trivial. You're mistaking your inability to understand what they say for their ignorance.
"Given NO OTHER INFO, it's 1/2."
Really? There's no reason it can't be 0.99. Since it's an estimation problem, you could (say) find a likelihood estimate after observing a few realizations of the event. More generally, you could construct an approximate confidence interval using the fact that the maximum likelihood estimator is asymptotically efficient.
"Given you LOOK at the penny after the toss, you know the outcome--and so the probability of heads is either 0 or it's 1, but it definitely ain't 1/2."
Even undergrads use Laplace smoothing for small samples, but you do not. Nice! I hear in some parts of the universe an MLE does not have an associated confidence interval. Of course, I don't inhabit that part of the universe.
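The estimators being traded back and forth here fit in a few lines; the sample of 7 heads in 10 tosses is made up purely for illustration.

```python
# Sketch of the three objects the commenters mention: the Bernoulli MLE,
# Laplace (add-one) smoothing, and an approximate Wald confidence interval.
# The 7-heads-in-10-tosses data is an illustrative assumption.
import math

def bernoulli_estimates(heads, n, z=1.96):
    mle = heads / n                       # maximum likelihood estimate of p
    laplace = (heads + 1) / (n + 2)       # add-one smoothing: never exactly 0 or 1
    se = math.sqrt(mle * (1 - mle) / n)   # asymptotic standard error of the MLE
    return mle, laplace, (mle - z * se, mle + z * se)

mle, laplace, ci = bernoulli_estimates(7, 10)
print(mle)      # 0.7
print(laplace)  # 8/12 ≈ 0.667
print(ci)       # roughly (0.42, 0.98) -- wide, as small samples deserve
```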
"Consider for just a moment that a renowned Nobel prize-winner might possibly be smarter than you"
If the guy who pioneered lobotomies/leucotomies got a (medicine) Nobel then I'm sure this Kahneman earned his. We may be fumbling toward something but we're definitely fumbling. I think the aforementioned ludic fallacy covers most overclass social-science "research" these days.
I wouldn't read too much into the elite aspect here. Michael Lewis is in the infotainment elite, and Vanity Fair is in the gossip magazine elite. Their goal was to publish a catchy article that gets noticed (Arts and Letters Daily!).
Scientific American is every bit as elite and politically correct as Vanity Fair, but I don't think Scientific American would make a mistake like this. That would be noteworthy. Presumably, though, they care about describing scientific work accurately. I wouldn't expect Michael Lewis and Vanity Fair to stifle their prejudices and get the science right any more than I'd expect that of non-elite people.
Full disclosure: I am a math Ph.D. whose research focused on logic and conditional probability, and I work as a professional statistician.
Practically all the commenters here are wrong in some way, and nobody has laid a glove on Kahneman himself.
The "Jack" experiment involved two separate groups, and the correct conclusion from it is the process the people in the groups used was not sensitive to the prior probabilities in the range between 30/70 and 70/30 splits. However, and counterintuitively, those two sets of prior probabilities are really not very far apart in Bayesian practice, so that I would expect even groups of perfectly logical reasoners to show a difference in posterior probabilities that was small enough to be significantly different from zero only in large experiments; I just bought Kahneman's book, so maybe I will follow up on this and see if the experiments, meta-analytically, were large enough.
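The claim that the 30/70 and 70/30 splits are "really not very far apart in Bayesian practice" can be made checkable: posterior odds are prior odds times the likelihood ratio, so the gap between the two groups' posteriors shrinks as the description's evidence gets stronger. The likelihood-ratio values below are illustrative assumptions, not estimates from the experiment.

```python
# Sketch: how much the 30/70 vs 70/30 priors matter depends on the strength
# of the description. LR = P(description|engineer) / P(description|lawyer);
# the values 4 and 20 are assumptions for illustration.
def posterior(prior_engineer, lr):
    """P(engineer | description) via posterior odds = prior odds * LR."""
    odds = prior_engineer / (1 - prior_engineer) * lr
    return odds / (1 + odds)

for lr in (4, 20):
    p30, p70 = posterior(0.30, lr), posterior(0.70, lr)
    # gap between the two groups' ideal posteriors shrinks as LR grows
    print(lr, round(p30, 3), round(p70, 3), round(p70 - p30, 3))
```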
The "Linda" example, on the other hand, was indeed demonstrating not merely an inconsistency between groups, but an inconsistency in the responses of individuals, because as worded, answer 2 can never be correct and can only be the product of a mathematical, linguistic, or logical error. All the criticisms of Kahneman over this are off base, because they merely explain why people made the mistake, which does not change the importance of the main result, that this particular setup induces people to make mistakes (that really are mistakes, i.e. mental errors, and not simply suboptimal estimation procedures).
The kind of interpretive error that most commenters (as well as the idiot Vanity Fair intern, the semi-idiot Gladwell, the semi-brilliant Lewis, and the usually-brilliant Sailer) are making is, ironically, of the same kind being discussed -- the prior probability of Kahneman being correct, given his supreme eminence in the field, is underestimated or misapplied, so that people don't take enough care to check their reasoning when they think they are disagreeing with him.
Vanity Fair and/or Michael Lewis screwed up/made up the Jack example. There is a similar example in the appendix of "Thinking, Fast and Slow", but it reads like this:
"Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues."
As you can see, there are no particular facts or inferences that would influence your judgement regarding Dick's occupation.
I am reading the book, and the Vanity Fair excerpt that Steve quoted just sounded wrong. It is.
Forbes
Shouldn't Kahneman do a study to figure out why his research results so often get twisted to support conventional wisdom nuggets like Stereotypes Are Always Wrong?
>Shouldn't Kahneman do a study to figure out why his research results so often get twisted to support conventional wisdom nuggets like Stereotypes Are Always Wrong?
Before Kahneman, I read Shermer's "The Believing Brain", which goes pretty far toward answering your question: belief first, then explanation/rationalization.
For me, Stereotypes Are Always Wrong is not conventional wisdom--it is PC. Vanity Fair is a consummate PC magazine, which explains how they got Kahneman wrong--as Shermer describes, belief (PC) first, then rationalized by altering Kahneman's example. YMMV.
Forbes
Doesn't the answer to the Jack question depend on your beliefs about:
P(political|lawyer)
and
P(political|engineer)
So...
If
P(engineer) = 0.3
P(political|lawyer) = 0.9
P(political|engineer) = 0.1
then, since Jack shows no interest in politics, P(engineer|not political) = (1 − 0.1) × 0.3 / ((1 − 0.1) × 0.3 + (1 − 0.9) × 0.7) = 0.27 / 0.34 ≈ 0.8
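A quick Bayes'-rule check of the arithmetic above; since Jack shows no interest in politics, the conditioning event is "not political". The 0.9/0.1 conditionals are the commenter's assumptions, not data from K & T.

```python
# Verifying the comment's figure with Bayes' rule. The 0.9 and 0.1
# conditionals are the commenter's illustrative assumptions.
def bayes(prior, like_h, like_not_h):
    """P(H | E) from P(H), P(E|H), and P(E|not H)."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

p_engineer = 0.30
p_apolitical_given_engineer = 1 - 0.1  # commenter assumed P(political|engineer) = 0.1
p_apolitical_given_lawyer = 1 - 0.9    # commenter assumed P(political|lawyer) = 0.9

print(round(bayes(p_engineer, p_apolitical_given_engineer, p_apolitical_given_lawyer), 2))
# 0.27 / (0.27 + 0.07) ≈ 0.79
```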
The answer to the Linda question is always (A) because condition (B) is a subset of (A) and P(A) >= P(B).
More fight
ReplyDelete" (1) Linda is a bank teller.
(2) Linda is a bank teller and is active in the feminist movement.
The vast majority—roughly 85 percent—of the people they asked opted for No. 2, even though No. 2 is logically impossible. (If No. 2 is true, so is No. 1.) The human mind is so wedded to stereotypes and so distracted by vivid descriptions that it will seize upon them, even when they defy logic, rather than upon truly relevant facts. Kahneman and Tversky called this logical error the “conjunction fallacy.""
As part of pattern recognition the brain fills in the blanks.
People read #1 as "she is a bank teller and is NOT active in the feminist movement."
As an aside, everyone who didn't stereotype got eaten by sabertooths a long time ago; now if only they would have eaten these idiots as well.
Steve,
I just finished Kahneman's book, and someone pointed me at your discussion (I read it a week or two ago as well, but had forgotten).
If you read to the end of the chapter in which Kahneman uses this example, he closes it with the full explanation of (a) what's wrong with people's thinking here, and (b) what you should do instead.
The chapter opening had me a little suspicious, having read your review. The chapter's end cleared up all my questions, and demonstrated that they are as brilliant as polymath suggests.