Until recently, two big impediments limited what research economists could learn about the world with the powerful methods that mathematicians and statisticians, starting in the early nineteenth century, developed to recognize and interpret patterns in noisy data: Data sets were small and costly, and computers were slow and expensive. So it is natural that as gains in computing power have dramatically reduced these impediments, economists have rushed to use big data and artificial intelligence to help them spot patterns in all sorts of activities and outcomes.
Data summary and pattern recognition are big parts of the physical sciences as well. The physicist Richard Feynman once likened the natural world to a game played by the gods: “you don’t know the rules of the game, but you’re allowed to look at the board from time to time, in a little corner, perhaps. And from these observations, you try to figure out what the rules are.”
Feynman’s metaphor is a literal description of what many economists do. Like astrophysicists, we typically acquire non-experimental data generated by processes we want to understand. The mathematician John von Neumann defined a game as (1) a list of players; (2) a list of actions available to each player; (3) a list of how payoffs accruing to each player depend on the actions of all players; and (4) a timing protocol that tells who chooses what when. This elegant definition includes what we mean by a “constitution” or an “economic system”: a social understanding about who chooses what when.
Like Feynman’s metaphorical physicist, our task is to infer a “game” – which for economists is the structure of a market or system of markets – from observed data. But then we want to do something that physicists don’t: think about how different “games” might produce improved outcomes. That is, we want to conduct experiments to study how a hypothetical change in the rules of the game or in a pattern of observed behavior by some “players” (say, government regulators or a central bank) might affect patterns of behavior by the remaining players.
Thus, “structural model builders” in economics seek to infer from historical patterns of behavior a set of invariant parameters: quantities that would remain stable even in hypothetical (often historically unprecedented) situations in which a government or regulator follows a new set of rules. As a Chinese proverb puts it, “The government has strategies, and the people have counterstrategies.” Structural models rely on such invariant parameters to help regulators and market designers understand and predict patterns of behavior under rules that have never before been tried.
The challenging task of building structural models will benefit from rapidly developing branches of AI that involve more than just pattern recognition. A great example is AlphaGo. The team of computer scientists that created the algorithm to play the Chinese game Go cleverly combined a suite of tools that had been developed in the statistics, simulation, decision theory, and game theory communities. Many of the tools that, combined in just the right proportions, make an outstanding artificial Go player are also economists’ bread-and-butter tools for building structural models to study macroeconomics and industrial organization.
Of course, economics differs from physics in a crucial respect. Whereas Pierre-Simon Laplace regarded “the present state of the universe as the effect of its past and the cause of its future,” the reverse is true in economics: what we expect other people to do later causes what we do now. We typically use personal theories about what other people want in order to forecast what they will do. When we have good theories of other people, what they are likely to do determines what we expect them to do. This line of reasoning, sometimes called “rational expectations,” reflects a sense in which “the future causes the present” in economic systems. Taking this into account is at the core of building “structural” economic models.
For example, I will join a run on a bank if I expect that other people will. Without deposit insurance, customers have incentives to avoid banks vulnerable to runs. With deposit insurance, customers don’t care and won’t run. On the other hand, deposit insurance gives bank owners an incentive to make their assets as big and as risky as possible, precisely because depositors no longer care. There are similar tradeoffs with unemployment and disability insurance – insuring people against bad luck may weaken their incentive to provide for themselves – and with official bailouts of governments and firms.
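To make the logic of self-fulfilling runs concrete, here is a minimal sketch in Python of the depositor’s choice written as a game in von Neumann’s sense (players, actions, payoffs, and a timing protocol). The payoff numbers and function names are illustrative assumptions of mine, not anything from the article.

```python
# Hypothetical payoffs for two depositors deciding whether to run on a bank.
# All numbers below are made-up assumptions, chosen only to illustrate the
# coordination logic described in the text.

def payoff(my_action, other_action, insured):
    """Payoff to one depositor, given both actions and the insurance regime."""
    if insured:
        # Deposits are repaid either way; running merely costs a little effort.
        return 10 if my_action == "stay" else 9
    # Without insurance, my deposit survives only if neither of us runs.
    if my_action == "stay":
        return 10 if other_action == "stay" else 0  # the bank fails if the other runs
    return 5                                        # running early recovers part of the deposit

def best_response(other_action, insured):
    """The action that maximizes my payoff, taking the other's action as given."""
    return max(["stay", "run"], key=lambda a: payoff(a, other_action, insured))

for insured in (False, True):
    print("deposit insurance:", insured)
    for other in ("stay", "run"):
        print(f"  if the other depositor plays {other!r}, my best response is",
              best_response(other, insured))
```

Under these assumed numbers, both “everyone stays” and “everyone runs” are self-fulfilling outcomes without insurance, while with insurance staying is best no matter what others do; the remaining cost, as noted above, is the incentive insurance gives bank owners to take on risk.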
More broadly, my reputation is what others expect me to do. I face choices about whether to confirm or disappoint those expectations. Those choices will affect how others behave in the future. Central bankers think about that a lot.
Like physicists, we economists use models and data to learn. We don’t learn new things until we appreciate that our old models cannot explain new data. We then construct new models in light of how their predecessors failed. This explains how we have learned from past depressions and financial crises. And with big data, faster computers, and better algorithms, we might see patterns where once we heard only noise.
Thomas J. Sargent is Professor of Economics at New York University and a senior fellow at the Hoover Institution.
Read the original article on project-syndicate.org.