Large-Scale Econometric Models: Do They Have a Future?

Here is an intriguing question: How is the Large Hadron Collider like the National Institute Global Economic Model? Read on!

29 March 2018

It was a great pleasure to organize a session on econometric models for the Royal Economic Society Conference at the University of Sussex. In my new role as Research Director at the National Institute of Economic and Social Research (NIESR), I have inherited responsibility for research and development of the National Institute Global Economic Model, NiGEM, the preeminent model of the world economy.

As you might expect, given my role at NIESR, the answer to the question posed in this session is a resounding yes!

For the session at Sussex, in addition to my own presentation, I assembled three outstanding speakers: Tony Garratt of Warwick Business School (WBS), Marco Del Negro of the Federal Reserve Bank of New York, and Garry Young, Director of Macroeconomic Modelling and Forecasting at NIESR.

Tony kicked off the session with a description of the work he’s been engaged in at WBS along with his co-authors Ana Galvao and James Mitchell. The University of Warwick is collaborating with the National Institute of Economic and Social Research in a partnership that gives Warwick graduate students access to the expertise of NIESR’s applied economists and gives NIESR the benefit of the academic expertise of Warwick’s economists. As part of that partnership, the WBS team have agreed to publish their forecasts each quarter in the National Institute Review as a benchmark against which to measure the performance of the NiGEM team. Tony gave us a fascinating account of what the WBS team does!

Their approach is reduced-form and eclectic. WBS maintains a stable of more than twenty-five models whose forecasts are averaged with weights that are updated in real time according to past forecast performance. Tony showed us how the WBS forecasts had performed in the past relative to the Bank of England and the Bank of England’s Survey of External Forecasters. He described different ways of evaluating forecasts, comparing both point forecasts and density forecasts for output growth and inflation. Perhaps the most interesting result, for me, was that judgmental forecasts often outperform econometric models at short horizons.
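For readers unfamiliar with forecast combination, the core idea is simple: weight each model by its recent accuracy, then average. The sketch below is a hypothetical Python illustration using inverse mean-squared-error weights and made-up numbers; it is not the WBS team’s actual scheme, which also evaluates full density forecasts and updates its weights in real time.

```python
import numpy as np

# Hypothetical illustration only: a minimal forecast-combination scheme in which
# each model's weight is proportional to the inverse of its recent mean squared
# forecast error. All names and numbers here are invented for exposition.

def combination_weights(past_errors: np.ndarray) -> np.ndarray:
    """past_errors: (n_models, n_periods) array of past forecast errors."""
    mse = (past_errors ** 2).mean(axis=1)   # recent accuracy of each model
    inverse = 1.0 / mse
    return inverse / inverse.sum()          # weights sum to one

def combined_forecast(forecasts: np.ndarray, past_errors: np.ndarray) -> float:
    """forecasts: (n_models,) current point forecasts, one per model."""
    return float(forecasts @ combination_weights(past_errors))

# Three toy models forecasting GDP growth, with invented track records:
# model 0 has been the most accurate, so it receives the largest weight.
rng = np.random.default_rng(0)
past_errors = rng.normal(0.0, [[0.2], [0.5], [1.0]], size=(3, 8))
forecasts = np.array([1.6, 1.9, 2.4])

print("weights:          ", np.round(combination_weights(past_errors), 2))
print("combined forecast:", round(combined_forecast(forecasts, past_errors), 2))
```

Performance-based averaging of this kind is one reason combined forecasts are hard to beat: no single model has to be right all the time.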

Tony’s talk was followed by Marco Del Negro from the New York Fed, who described the behaviour of a medium-scale Dynamic Stochastic General Equilibrium (DSGE) model that the NY Fed has been running since 2008. DSGE models have received quite a bit of bad press lately as a result of the failure of almost all of the experts to predict the 2008 financial crisis. Marco gave a spirited defence of DSGE models by showing us the forecast performance of the NY Fed’s DSGE model from 2008 to the present. The model is written in a relatively new computer language, Julia. The code is open source, blindingly fast and widely used in research publications in leading journals. For the MATLAB users out there: perhaps it’s time to switch?

In the third presentation of the day we were treated to an entertaining interlude when the projection facility malfunctioned and Garry Young ad-libbed for ten minutes with a cricketing anecdote. When he resumed, Garry gave us an account of the use of NiGEM to forecast the effects of Brexit. NiGEM has more than 5,000 equations, covers 60 countries and is widely used by central banks and national treasuries around the world for scenario analysis. NiGEM has a lot more in common with the NY Fed’s DSGE model than most people realize.

In the final presentation of the day, I tied the three presentations together by explaining the history of econometric modelling, beginning with Klein Model 1 in the 1940s and ending with the NY Fed’s DSGE model and NIESR’s NiGEM. For me, the main story is continuity. With the publication of Robert Lucas’ celebrated critique of econometric modelling in 1976, large-scale models disappeared from the halls of academia. But they never disappeared from central banks, treasuries and research institutes where, as Garry reminded us, they have been used as story-telling devices for more than fifty years.

The version of NiGEM we work with today has come a long way from the backward-looking equations of Klein Model 1. It has been lovingly tended and developed by distinguished teams of researchers who have passed through the National Institute over the years. Past NIESR researchers include among their number some of the leading applied economists and applied econometricians in the UK, and the model they developed embodies state-of-the-art assumptions, including the ability to add forward-looking elements and rational expectations in solution scenarios.
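To make the backward-looking versus forward-looking distinction concrete, here is a toy sketch of my own, not NiGEM code, solving the same inflation equation two ways: once with inflation driven by its own past, and once under model-consistent (rational) expectations, where the path is solved by backward recursion from a terminal condition.

```python
import numpy as np

# Toy illustration (not NiGEM): one inflation equation, treated two ways.
# Backward-looking:  pi_t = lam * pi_{t-1} + kappa * x_t
# Forward-looking:   pi_t = beta * E_t[pi_{t+1}] + kappa * x_t
# The parameter values and shock path are invented for exposition.

beta, kappa, lam = 0.99, 0.1, 0.7
T = 40
x = np.zeros(T)
x[:4] = 1.0                 # a temporary demand shock in the first four periods

# Backward-looking solution: inflation builds up only after the shock arrives.
pi_back = np.zeros(T)
for t in range(T):
    pi_back[t] = lam * (pi_back[t - 1] if t > 0 else 0.0) + kappa * x[t]

# Forward-looking solution under model-consistent expectations: solve backwards
# from a terminal condition (inflation returns to zero once the shock is over),
# so anticipated future shocks move inflation today.
pi_fwd = np.zeros(T + 1)    # pi_fwd[T] = 0 is the terminal condition
for t in reversed(range(T)):
    pi_fwd[t] = beta * pi_fwd[t + 1] + kappa * x[t]

print("impact response, backward-looking:", round(pi_back[0], 3))  # 0.1
print("impact response, forward-looking: ", round(pi_fwd[0], 3))   # about 0.39
```

The ability to solve scenarios in which agents anticipate future events, such as an announced policy change, is precisely what the forward-looking elements add.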

Large-scale econometric models are here to stay. Policy makers use models like NiGEM to analyze policy alternatives, and that is unlikely to change soon. In my presentation I argued for a closer dialogue between economic theorists and applied economists, similar to the dialogue that currently exists between theoretical physicists and applied physicists. I argued that NiGEM, located at NIESR, is to economics what the Large Hadron Collider (LHC), located at CERN, is to physics. Just as physicists use the LHC to test new theories of subatomic particles, so economists should use NiGEM to test new theories of macroeconomics. I hope to put that idea into practice in the future at the National Institute.

In a separate presentation at the Royal Economic Society Conference this year, I discussed work I am engaged in with a research team at UCLA where we have developed a new theory of belief formation. This is an example of one of the theories we hope to test using NiGEM as a laboratory.

According to Forbes, the operating budget of the Large Hadron Collider is approximately one billion US dollars a year. NiGEM is funded entirely from subscriptions, and its operating budget is well south of half a million US dollars. Funding agencies take note: we could make some pretty cool improvements for a billion a year.