This January I met Alan Kirman at the Robinson Workshop on Rationality and Emotions. Over lunch we had a brief discussion about the difficulties of modern macroeconomics. I was therefore intrigued to see a new paper of his (co-authored with Peter Howitt, David Colander, Axel Leijonhufvud and Perry Mehrling) entitled Beyond DSGE Models: Towards an Empirically-Based Macroeconomics, which was presented in January at the AEA conference (and looks set to appear in the AER ‘Papers and Proceedings’).
The paper has much to say about the current state of macro, in particular the serious problems with DSGE (dynamic stochastic general equilibrium) models and where we should go from here. As the abstract puts it:
This paper argues that macro models should be as simple as possible, but not more so. Existing models are “more so” by far. It is time for the science of macro to step beyond representative agent, DSGE models and focus more on alternative heterogeneous agent macro models that take agent interaction, complexity, coordination problems and endogenous learning seriously. It further argues that as analytic work on these scientific models continues, policy-relevant models should be more empirically based; policy researchers should not approach the data with theoretical blinders on; instead, they should follow an engineering approach to policy analysis and let the data guide their choice of the relevant theory to apply.
It is worth quoting at some length from the paper in order to bring out the full ramifications of the story the authors tell:
Keynesianism Goes Wrong
With the development of macro econometric models in the 1950s, many of the Keynesian models were presented as having formal underpinnings of microeconomic theory and thus as providing a formal model of the macro economy. Specifically, IS/LM type models were too often presented as being “scientific” in this sense, rather than as the ad hoc engineering models that they were. Selective micro foundations were integrated into sectors of the models, which gave them the illusory appearance of being based on the axiomatic approach of General Equilibrium theory. This led to the economics of Keynes becoming separated from Keynesian economics.
The Reaction and a New Dawn (Rational Expectations and Neoclassical GE Models)
The exaggerated claims for the macro models of the 1960s led to a justifiable reaction by macroeconomists wanting to “do the science of macro right”, which meant bringing it up to the standards of rigor imposed by the General Equilibrium tradition. Thus, in the 1970s the formal modeling of macro in this spirit began, including work on the micro foundations of macroeconomics, construction of an explicit New Classical macroeconomic model, and the rational expectations approach. All of this work rightfully challenged the rigor of the previous work. The aim was to build a general equilibrium model of the macro economy based on explicit and fully formulated micro foundations.
But ‘Technical’ Difficulties Intervene
Given the difficulties inherent in such an approach, researchers started with a simple analytically tractable macro model which they hoped would be a stepping stone toward a more sensible macro model grounded in microfoundations. The problem is that the simple model was not susceptible to generalization, so the profession languished on the first step; and rational expectations representative agent models mysteriously became the only allowable modeling method. Moreover, such models were directly applied to policy even though they had little or no relevance. … [emphasis added]
But There Was a Reason For This: Other Stuff is Hard
The reason researchers clung to the rational expectations representative agent models for so long is not that they did not recognize their problems, but because of the analytical difficulties involved in moving beyond these models. Dropping the standard assumptions about agent rationality would complicate the already complicated models and abandoning the ad hoc representative agent assumption would leave them face to face with the difficulties raised by Sonnenschein, Mantel and Debreu. While the standard DSGE representative models may look daunting, it is the mathematical sophistication of the analysis, and not the models themselves, that is difficult. Conceptually, their technical difficulty pales in comparison to models with more realistic specifications: heterogeneous agents, statistical dynamics, multiple equilibria (or no equilibria), and endogenous learning. Yet, it is precisely such models that are needed if we are to start to capture the relevant intricacies of the macro economy.
Building more realistic models along these lines involves enormous work with little immediate payoff; one must either move beyond the extremely restrictive class of economic models to far more complicated analytic macro models, or one must replace the analytic modeling approach with virtual modeling. Happily, both changes are occurring; researchers are beginning to move on to models that attempt to deal with heterogeneous interacting agents, potential emergent macro properties, and behaviorally more varied and more realistic opportunistic agents. The papers in this session describe some of these new approaches. [emphasis added]
Some Closing Comments of My Own
So there you go: plenty of tough challenges and a big dose of humility. To some extent it seems things here run in 30-40 year cycles: Keynesianism from 1945-1975, Rational Expectations DSGE from 1975-2005, and now we’re into the era of complexity and ‘loose’ tools, with the emphasis on empirics and heuristics rather than formal models. Whether this new approach will deliver more than the old remains to be seen. After all, one reason so many physicists are getting interested in Economics and Finance is that the going is so hard in, e.g., condensed matter physics (superconductivity, anyone?). If the economy really is so complex, will we ever do any better at the macro scale than we do with the weather, and, if so, will that not rely on some conceptual breakthrough rather than just using more hard-core dynamical systems theory and running more agent-based simulations?
That said, as the authors argue, the ‘simple’ route isn’t working, and the hardness of the path is no reason not to attempt it: an argument in many ways the direct inverse of the traditional ‘drunkard-and-the-lamppost’ approach, in which we restrict our models, often beyond the point at which they remain relevant, in order to preserve analytical tractability. Thus, though I remain cautious about what more ‘complexity-oriented’ methods can deliver, I am in wholehearted agreement with the authors that they justify much greater exploration.
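Since agent-based simulation keeps coming up, it is worth making the idea concrete. Below is a minimal sketch in the spirit of Kirman’s well-known ant/recruitment model of herding; it is my own toy illustration, not taken from the paper, and the parameter values are purely illustrative assumptions. The point is the mechanism: simple interacting heterogeneous agents generate persistent aggregate swings that averaging into a representative agent would erase entirely.

```python
# Toy herding model in the spirit of Kirman's ant/recruitment model.
# Each agent holds one of two "opinions"; occasionally it switches on
# its own, but mostly it meets another agent at random and may be
# converted. All parameter values below are illustrative assumptions.
import random

random.seed(1)

N = 100          # number of agents
EPS = 0.002      # probability of an idiosyncratic opinion switch
DELTA = 0.8      # probability of conversion on meeting another agent
STEPS = 200_000  # number of interaction events to simulate

state = [0] * (N // 2) + [1] * (N // 2)  # start with an even split

shares = []
for t in range(STEPS):
    i = random.randrange(N)
    if random.random() < EPS:
        state[i] = 1 - state[i]          # spontaneous switch
    else:
        j = random.randrange(N)          # random pairwise meeting
        if j != i and random.random() < DELTA:
            state[i] = state[j]          # recruitment / herding
    if t % 1000 == 0:
        shares.append(sum(state) / N)    # track the aggregate share

# In this regime the aggregate share spends long spells near one
# extreme or the other, rather than hovering around the 0.5 one might
# naively expect from the symmetric micro-level rules.
print(" ".join(f"{s:.2f}" for s in shares[::20]))
```

Nothing here is calibrated to data, of course; the sketch only shows that once agents interact, the aggregate acquires dynamics of its own, which is exactly the territory the authors are urging macro to explore.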