Better learning through better betting

Jul 21, 2016 - Last updated at Jul 21, 2016

The course of public debate has become depressingly familiar. It often begins with a surprise — for example, Donald Trump, the real estate tycoon turned reality TV star, defies the odds to become the US Republican Party’s presumptive presidential nominee.

The pundits dive in. Why is it happening? What does it mean? What will come next?

After a time, the future that was being debated arrives, and the outcome becomes known.

In an ideal world, everyone would acknowledge which forecast proved correct. Lessons would be learned. People would change their thinking accordingly. Collectively, we would all be a little wiser.

But this is a less-than-ideal world. All too often, instead of learning lessons, the pundits just continue arguing.

They dispute what happened. They disagree about who predicted which outcome. No minds are changed. Collectively, we become no wiser.

For those who think we can do better, one solution is to make bets on our predictions.

In early 2014, economists Tyler Cowen and Bryan Caplan did just that.

Cowen was pessimistic about unemployment in the United States, so he bet Caplan that the rate would not fall below 5 per cent during the following 20 years.

Two years later, unemployment had dropped to 4.9 per cent. Caplan had won a clear victory.

It is exactly that sort of clarity that betting on predictions seeks to obtain.

Left unchallenged, pundits routinely use vague language, such as “unemployment will remain high for years” or “Trump’s support will slide”.

These sound good on television, but they do not produce an indisputable, testable result. (How long is “years”? How much of a decline qualifies as a “slide”?)

Bets force the two parties to agree to well-defined terms — making it obvious to everyone who was right and who was wrong.

The purpose of betting, of course, is not merely to declare winners and losers; it is to replace endless, pointless arguments with a clear determination of whose understanding of reality is closer to the truth.

The ultimate goal is to make us all a little wiser. And yet, unfortunately, when it comes to achieving that goal, betting has consistently failed.

Take Cowen’s response to losing his bet with Caplan.

He quickly acknowledged that he had lost, according to the terms of the bet. But he nonetheless insisted that this did not prove Caplan right.

Cowen noted that while the unemployment rate had fallen, the employment-to-population ratio had barely budged.

In his evaluation of the outcome, Cowen concluded: “I feel I’m the one who won the bet.”

Their wager had settled absolutely nothing.

Unfortunately, that is what typically happens when people make simple bets on complex debates.

In 1980, biologist Paul Ehrlich and economist Julian Simon made another famous wager — on the price of five metals 10 years later.

Their bet ended with a clear win for Simon, but Ehrlich shrugged off the outcome as meaningless. And he was not wrong.

If the two scientists had chosen a different starting year, Ehrlich might well have been the winner.

The problem with bets like these is that they are far too simple to settle the complex debates that underlie them.

A handful of metal prices cannot settle a sprawling Malthusian-versus-cornucopian argument, just as a single data point for unemployment cannot be the final word in the dispute between Cowen and Caplan.

And yet, it would be a shame to abandon bets altogether. Doing so would leave only the squalid food fights into which so many important debates devolve.

The solution is to take bets far more seriously, to expand them, and to design them to be capable of settling debates to the satisfaction of most reasonable observers.

Ideally, a bet would use a question as big as the debate it means to settle. But that will not work, because big questions — “Will population growth outstrip resources and threaten civilisation?” — do not produce easily measurable outcomes.

The key, instead, is to ask many small, precise questions.

Cowen and Caplan should not have relied on the unemployment rate alone; they should have included the employment-to-population ratio and other metrics they agreed would have diagnostic value.

Ehrlich and Simon should have made a wider array of predictions, on metal prices, food production, air quality and other factors.

This approach, using question clusters, could be applied to virtually any important debate.

Right now, for example, we are putting the hawks-versus-doves argument about the Iran nuclear deal to the forecasting test.

Naturally, using many questions could result in split decisions. But if our goal is to learn, that is a feature, not a bug.

A split decision would suggest that neither bettor’s understanding of reality is perfectly accurate and that the truth lies somewhere between.

That would be an enlightening result, particularly when public debates are dominated by extreme positions — the clash between Ehrlich and Simon being a classic illustration.

Of course, none of this is possible if those involved are not willing to think together about how to settle their disagreement. This is not always easy, to say the least.

After their famous bet, Simon and Ehrlich considered a second bet involving a large basket of measures; but it never came about, partly because of personal antipathy between the two men.

What is needed is a neutral party and arbiter — a role that think tanks, for example, are well positioned to play.

But whatever the difficulties, the quality of public debate stands to benefit enormously from well-structured wagers. When it comes to learning about the world, betting on outcomes beats arguments that settle nothing.

Philip E. Tetlock is professor in democracy and citizenship and professor of management at the University of Pennsylvania. Dan Gardner is a journalist. They are the co-authors of “Superforecasting: The Art and Science of Prediction”. ©Project Syndicate, 2016. www.project-syndicate.org
