
Expert Political Judgment: How Good Is It? How Can We Know?

by Philip E. Tetlock

Members: 309 · Reviews: 5 · Popularity: 86,568 · Average rating: 4.22 · Mentions: 5
The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts. Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat. Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.

There are no Talk discussion threads about this book.

» See also 5 mentions

Showing 5 of 5
An interesting book about the kind of biases "professional" pundits have in talking about political topics. Very data-driven, and shows that experts are better than the completely uninformed (college students or worse), but that experts are actually also good outside of their areas of expertise, if they rely on decent information sources. There's an even more exciting result -- fairly straightforward computer models can do even better than experts, even within their areas of expertise.
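The "fairly straightforward computer models" the reviewer mentions were reportedly simple extrapolation rules rather than sophisticated statistics. A minimal sketch of such a trend-extrapolation baseline (illustrative only; the function and numbers here are invented, not taken from the book):

```python
# Illustrative sketch: a crude "predict that the recent trend continues"
# baseline, the kind of simple model the review alludes to.
def extrapolate(series, steps=1):
    """Linear extrapolation from the last two observations."""
    if len(series) < 2:
        return series[-1]          # no trend information: predict no change
    trend = series[-1] - series[-2]
    return series[-1] + trend * steps

# Example: a quantity that has been growing by about 2 per period.
history = [10, 12, 14, 16]
print(extrapolate(history))        # next-period forecast: 18
```

Despite its crudeness, a rule like this encodes a disciplined default (persistence of trend) that an overconfident narrative can easily talk itself out of.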

One thing missing is that experts often are better at coming up with indicators -- i.e. "if X happens, then Y will result" -- which is more useful than just saying "I think Y will happen". There are also structured ways to get far better predictions from people (on par with the best machine models) by asking better questions (and using markets, etc.)

Irony: most of the "wrong" predictions cited in the book ended up coming true shortly thereafter ("Russia invades Ukraine", etc.)

Unfortunately, the audiobook narrator is gratingly annoying; I'd read the book instead.
  octal | Jan 1, 2021 |
This book is an empirical study of expert knowledge. The author questioned experts on political matters for more than a decade and followed up to see which predictions held up and which didn't. He presents most of his results with reference to two kinds of experts, "hedgehogs" and "foxes", those who stick to one big scheme and those who are more open to alternative interpretations.

The book is filled with correlation coefficients and other statistical data, so it certainly doesn't lack precision. But I was a bit disappointed with the narrow scope of "political judgment" in this study. The survey questions seem to have focused very much on international politics and other very large issues. No wonder the expert predictions were so shaky. It would have been interesting to see how expert opinion on small-scale, local issues would have fared in comparison. This book wasn't quite as informative as I expected but I still hope other researchers will be inspired to conduct similar studies.
  thcson | May 5, 2013 |
This is a critical book for anyone who depends on professional forecasters of "social" variables, and even more for anyone whose livelihood rests on making such forecasts. "Social" because Tetlock's book is focused on political forecasting, but I'm convinced that it applies to economic and social forecasting as well. (Having spent a professional career forecasting economic variables, I have some insight here.) Tetlock is not discussing forecasting in the hard sciences, where forecasting is based on much harder data.

His first critical conclusion is that, in forecasting complex political events, "we could do as well by tossing coins as by consulting experts". This is based on a massive set of surveys of expert opinion that were compared to outcomes in the real world over many years. The task was enormously complex to set up; defining an experiment in the social sciences presents the problems that constantly arise in making judgments in these sciences (what does one measure, and how? How can bias be measured and eliminated? etc.). Much of the book is devoted to the problems in constructing the study, and how they were resolved.

His second key conclusion is that, while that may be true of experts as an undifferentiated group, some experts do significantly better than other experts. This does not reflect the level of expertise involved, nor does it reflect political orientation. Rather, it reflects the way the experts think. Poorer performers tend to be what Tetlock characterizes as "hedgehogs" -- people who apply theoretical frameworks, who stick with a line of argument, and who believe strongly in their own forecasts. The better performers tend to be what he calls "foxes" -- those with an eclectic approach, who examine many hypotheses, and who are more inclined to think probabilistically, by grading the likelihood of their forecasts.
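Grading probabilistic forecasts of the kind the fox makes is typically done with probability (Brier) scores: the mean squared gap between the stated probability and the 0/1 outcome. The example below is an invented illustration of the mechanism, not data from the book; the forecasts and outcomes are made up:

```python
# Illustrative sketch: Brier scores reward hedged, calibrated probabilities
# over confident all-or-nothing calls when events are genuinely uncertain.
def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; a constant 50% "coin toss" forecaster scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 0, 1]            # what actually happened
hedgehog = [1.0, 1.0, 1.0, 0.0, 1.0]  # confident calls, one total miss
fox      = [0.7, 0.3, 0.8, 0.4, 0.6]  # hedged probability judgments

print(brier(hedgehog, outcomes))      # penalized heavily for the one miss
print(brier(fox, outcomes))           # lower (better) despite never being "sure"
```

The hedgehog's single confident miss costs more than all of the fox's small hedging penalties combined, which is the statistical sense in which graded likelihoods outperform formulaic certainty.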

But, as he notes, the forecasters who get the most media exposure tend to be the hedgehogs, those with a strong point of view that can be clearly expressed. This makes all the sense in the world; someone with a clear-cut and compelling story is much more fun to listen to (and much more memorable) than someone who presents a range of possible outcomes (as a former many-handed economist, I know this all too well).

What does that mean for those of us who use forecasts? We use them in making political decisions, personal financial decisions, and investment decisions. This book tells us that WHAT THE EXPERTS SAY IS NOT LIKELY TO ADD MUCH TO THE QUALITY OF YOUR OWN DECISION MAKING. And that says be careful how much you pay for expert advice, and how much you rely on it. That of course applies to experts in the social sciences, NOT to experts in the hard (aka real) sciences. Generally, it is a good idea to regard your doctor as a real expert.

Because it makes it impossible to avoid these conclusions, I gave this book five stars; this is very important stuff. I would not have given it five stars for the way in which it is written. For me, it read as if it had been written for other academics, rather than for the general reader. This is hard to avoid, but some other works in the field do manage -- for example, "Thinking, Fast and Slow". Don't skip the book because it is not exactly an enjoyable read, however: its merit far outweighs its manner.
  annbury | Sep 6, 2012 |
How good are political academics/think-tankers/pundits at predicting the outcome of political events? Tetlock studies their predictions over many years in an attempt to answer this question. It's an interesting question, and the research is solid, but I ended up drowning in the details of his analysis and addressing of the various threats to validity. The book feels too much like a PhD dissertation to be a compelling read.
1 vote · lorin | Mar 19, 2009 |
References to this work in external resources.

Wikipedia in English (1)


Rating

Average: 4.22 (30 ratings)
5★: 12 · 4.5★: 2 · 4★: 10 · 3.5★: 1 · 3★: 4 · 2★: 1
