Wednesday, July 18, 2012

The Optimal Rate Of Inflation

We’re on an optimisation binge today. After the last post on international reserves, here’s a piece on the optimal level of inflation (abstract; emphasis added):

How Inflation Affects Macroeconomic Performance: An Agent-Based Computational Investigation
Quamrul Ashraf, Boris Gershman, Peter Howitt

We use an agent-based computational approach to show how inflation can worsen macroeconomic performance by disrupting the mechanism of exchange in a decentralized market economy. We find that increasing the trend rate of inflation above 3 percent has a substantial deleterious effect, but lowering it below 3 percent has no significant macroeconomic consequences. Our finding remains qualitatively robust to changes in parameter values and to modifications to our model that partly address the Lucas critique. Finally, we contribute a novel explanation for why cross-country regressions may fail to detect a significant negative effect of trend inflation on output even when such an effect exists in reality.
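
The paper’s model is far too rich to reproduce here, but to give a flavour of what an “agent-based computational” approach involves, here’s a deliberately minimal toy sketch in Python – my own illustration, not the authors’ model – of the mechanism the abstract describes. The one (crude) friction is that incomes are fixed in nominal terms, so trend inflation steadily erodes buyers’ real balances and chokes off exchange:

import random

def simulate(trend_inflation, n_agents=1000, periods=200, seed=0):
    """Average completed trades per period in a toy matching economy."""
    rng = random.Random(seed)
    money = [1.0] * n_agents        # nominal money balances per agent
    price = 1.0                     # nominal price of the single good
    trades = 0
    for _ in range(periods):
        price *= 1.0 + trend_inflation    # trend inflation raises the price level
        order = list(range(n_agents))
        rng.shuffle(order)                # random pairwise matching each period
        for i in range(0, n_agents - 1, 2):
            buyer, seller = order[i], order[i + 1]
            if money[buyer] >= price:     # cash-in-advance: buy only if affordable
                money[buyer] -= price
                money[seller] += price
                trades += 1
        for j in range(n_agents):
            # incomes are fixed in nominal terms -- the (crude) stand-in
            # for the exchange friction that inflation exploits
            money[j] += 1.0
    return trades / periods

for pi in (0.00, 0.03, 0.10):
    print(f"trend inflation {pi:5.1%}: avg trades/period = {simulate(pi):7.1f}")

In this toy, completed trades fall as trend inflation rises, because unindexed nominal incomes buy less and less as the price level drifts up. The authors’ actual model is vastly richer, but this is the flavour of the exchange-disruption channel the abstract describes.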

Wednesday, November 16, 2011

Mark Thoma On Economists and The Future Of Economic Models

He’s responding to a Roger Martin essay (which you can read here; excerpt):

Should economists be “imagineers” of our future?
By Mark Thoma

...I agree that macroeconomists need to fix their models. But I don’t think that predicting the future based upon “a straight-line projection of the past” is the problem...

...This year’s Nobel prize award to Thomas Sargent and the previous award to Robert Lucas were partly in recognition of their development of the tools and techniques that economists need to go beyond simply trying to extrapolate the future from the past, a procedure that can lead forecasters astray…

…people change their behavior in response to changes in the conditions they face. And this is one of the things that separate what researchers in the hard sciences do from the work of economists…

Tuesday, October 4, 2011

Brad DeLong Explains IS-LM

Not much commentary on this one, just a good read for anyone interested in the subject (excerpt):

The Tribal Dislike of John Hicks and IS-LM: History of Economic Thought Edition

When you do economics and apply it to the real world, you start with the simplest possible model. Does that help you understand enough of the real world to satisfy you? If not, you complicate it by adding the most important thing that you had left out. Does that help you understand enough of the real world to satisfy you? If so, you use that model--and then when you want to go further you complicate it in its turn.

But at each stage in the process, you absorb the valid insights from your current model before you go on to complicate things further.

If you’re not familiar with the subject, or didn’t take macroeconomics at university, the IS-LM model is Sir John Hicks’ attempt at explaining Keynes. Unfortunately, the underlying basis he used for constructing the model was partially faulty (Keynes rejected the notion of a unique equilibrium), and Hicks in fact repudiated IS-LM later in his career. By then it was too late, as a full generation of American economists had taken it up and hijacked the intellectual leadership of Keynesian economics, even if what they actually practised was a bastardised amalgam of Keynes and neo-classical thought, commonly called the neo-classical synthesis.

Nevertheless, as DeLong points out, this doesn’t invalidate the model’s potential usefulness for examining policy issues, as long as it isn’t too badly wrong.
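
For reference, the generic textbook statement of the model (a standard formulation, not DeLong’s or Hicks’ own notation) is a pair of equilibrium conditions solved jointly for output Y and the interest rate r:

Y = C(Y - T) + I(r) + G     (IS: goods-market equilibrium)
M/P = L(r, Y)               (LM: money-market equilibrium)

The IS curve traces the (Y, r) combinations that clear the goods market, the LM curve those that clear the money market, and the model’s short-run equilibrium sits at their intersection.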

Uncertainty, Complexity, And The Problem With Economic Models

Nick Rowe has a headache (excerpt):

The Lucasian map is not the Hayekian territory

In defence of Lucas '72.

Take any macroeconomic model of a market economy with inefficient aggregate fluctuations. In fact, take any economic model where something bad might happen.

Assume that model is literally true.

The people in that model are idiots.

This conclusion follows immediately. If they weren't idiots, the people in the model would appoint the economist modelling the economy as central planner, who would tell them all what to do, and make them all better off.

The people in Lucas' '72 model are complete idiots for producing less because they don't realise there's a recession on.

The people in New Keynesian models are complete idiots for waiting for the Calvo fairy to give them permission to cut prices in a recession.

All models suffer this same problem. If the world really were as simple as the economic model of that world, people would figure it out, and wouldn't let bad things happen.

Of course, the discussion doesn’t go so far as to say that all economic analysis is useless – we know more about how people interact, individually and in aggregate, than we did before. But it’s always useful to keep in mind the inherent tension between a highly complex real world teeming with sometimes irrational individuals, and the oversimplified world of economic modelling. And you should always take the biases of the modeller into account as well.

But, to quote George Box, “Essentially, all models are wrong, but some are useful."

Tuesday, March 8, 2011

Economic Modelling: Status Quo Ante

While alternatives have been proposed (see for instance this post), large-scale structural and stochastic models are still the bread and butter of macro policy analysis. Yet the inability of virtually every statistical model to provide substantive guidance on policy issues remains a problem.

This article on VoxEU provides an insight as to why (excerpt; emphasis added):

Dynamic stochastic general equilibrium models and their forecasts
Rochelle M. Edge & Refet S. Gürkaynak

Dynamic stochastic general equilibrium (DSGE) models represent a major strand of the modern macroeconomics literature and are an important tool for policy analysis at central banks...

...The success of the DSGE model-based forecasts relative to other methods was viewed as evidence in favour of DSGE models’ reliably capturing the dynamics in the data…

...To see the absolute forecasting ability of the DSGE model, we run a series of standard forecast efficiency tests, where the realised inflation is regressed on forecasts made at different times in the past. A good forecast should have a zero intercept and unit slope as well as a high R-squared. Table 1 shows the efficiency tests for DSGE model forecasts of inflation at different maturities and demonstrates clearly that the forecasts are very poor. R-squareds at all horizons are essentially zero, implying no forecasting ability. All Figure 1 is therefore telling us is that all other forecasting methods perform just as poorly....
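
To make the test concrete, here’s a minimal sketch of that kind of forecast efficiency (Mincer-Zarnowitz) regression in Python – the data below are synthetic stand-ins, not the authors’ DSGE forecasts, and statsmodels is assumed to be available:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Synthetic stand-ins: "realised" inflation and a forecast that carries no
# information about it (so the test should flag it as a very poor forecast).
realised = 2.0 + rng.normal(0.0, 1.0, size=80)
forecast = 2.0 + rng.normal(0.0, 1.0, size=80)

# Forecast efficiency regression: realised = a + b * forecast + error.
# A good forecast should give a ~ 0, b ~ 1 and a high R-squared.
X = sm.add_constant(forecast)              # adds the intercept column 'const'
res = sm.OLS(realised, X).fit()

print(res.params)                          # intercept and slope estimates
print("R-squared:", res.rsquared)          # ~0 here: no forecasting ability
print(res.f_test("const = 0, x1 = 1"))     # joint test of the efficiency nulls

With an uninformative forecast like the synthetic one above, the slope estimate and the R-squared both collapse towards zero – exactly the pattern the authors report for the DSGE inflation forecasts.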

Wednesday, July 28, 2010

The Future of Economic Models

One of the key issues that has bedeviled the economics profession in the last three years has been the almost complete failure to predict the financial crisis and its contributing factors. Apart from a prescient few like White, Roubini, Shiller and Roach (among others), regulators, policymakers and pundits were caught off guard by the near collapse of an overleveraged financial sector in the Western nations, and by the speed and depth of the downturn.

Part of that failure has been the inability of standard econometric models, of both New Classical and New Keynesian varieties, to adequately explain what was going on. There’s a great critique of the current state of macro-models at the Macro Advisors Blog – well worth an investment of time, if you have some basic econometric knowledge.

An article in last week’s Economist magazine points to a potential way out of this mess (quoted in full):

Friday, March 6, 2009

The Pitfalls of Econometrics

de minimis has an interesting post advocating the use of more econometric models to guide policymaking in Malaysia. While I tend to lean that way myself (there's too much unsubstantiated rhetoric flung around the news and blogosphere for my taste), I don't want to be blind to the potential pitfalls and shortcomings of an applied econometric approach to policy. So this post is both to clarify some of the issues, as well as serve as a reminder to myself not to be too "assertive", as my wife puts it.

First, econometric modelling (as etheorist remarked the other day) is really an art, not a science. There are many, many ways of looking at an economy and generating forecasts, from simple time series techniques to hideously complex dynamic general equilibrium models. So model choice and specification (along with the accompanying underlying assumptions), not to mention the ideological bent of the modellers, can lead to very different conclusions about the state of the economy at any given time. The issue is compounded by Malaysia being such an open economy, which means that, ideally, you'd have to incorporate all the major trade partners into your model as well.

Secondly, the evolving economic structure of a developing country means that even if you do come up with a model close to reality at some point in time, it can go out of date very quickly, and you won’t know it until something goes wrong. This is one point where I would be critical of DOS: the Malaysian input-output tables haven’t been updated in years, and you need these to model intra-industry dynamics.

Thirdly, any econometric model necessarily uses historical data, which means there will always be an unobservable error component in any forecast in the presence of a current shock. A corollary is that, almost by definition, a trade shock such as the one we just suffered cannot be predicted on the basis of concurrent data. Models are more useful as a predictive guide to inventory-driven recessions and business cycle downturns. You can of course use models to predict what happens when a shock occurs, but not when or whether a shock will occur.
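
In symbols, the point is simply that a forecast built on period-t information cannot contain the period-t+1 surprise:

y_{t+1} = E[y_{t+1} | I_t] + ε_{t+1}

where I_t is the information set available at time t, and the shock ε_{t+1} is by construction orthogonal to everything in I_t – which is why no model estimated on concurrent data can call its timing.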

Fourth, data accuracy is inversely proportional to the speed at which data is published. In other words, the faster you publish it, the larger the error rate. Where I think DOS can improve on that score is to follow the general practice in the EU, US and, yes, Singapore, i.e. issue advance, preliminary, and final estimates of major statistical series. The current practice of a 6- to 8-week lag and quiet revisions of the historical series isn't transparent (the loose hair around my workplace is testament to that). Data revisions should always be made clear, especially for national accounts data, which has to be revised even 2-3 years down the road.

One exception to this observation is financial sector data, which is available very quickly. (Side note: I've visited BNM to study their data gathering process, and I was a member of one of the teams responsible for implementing CCRIS in one of our banks – I am very impressed with BNM's operation in this instance. The disaggregated trial balance of the entire banking system is available at about t+4 after every month end; in other words, don't be fooled by the monthly publishing schedule.) (Side note to the side note: this is one reason why monetary policy is generally the first recourse in any crisis – you have better data much faster than for the real economy.)

However, I should point out that a 2- to 3-month lag hits the sweet spot between accuracy and timeliness, and is fairly typical worldwide. China, for instance, tends to issue data on a 1-month lag, but subsequent revisions tend to be very large. Some Canadian series have no revisions at all, but you have to wait 6 months(!) to get them. I cannot fault DOS on that score, though they have made some absolute boo-boos before (compare the 2004-2005 trade data before and after revision, for instance).

Fifth, some of the most critical variables required for a predictive model are unobservable. For example, consumer and investor expectations have a big impact on private consumption and investment, but can’t be quantified directly. It’s possible to use proxies, such as consumer confidence or business expectations surveys, but these are subject to error as well.

Take all the factors above together, and you shouldn’t be surprised that most whole-economy econometric models have very little out-of-sample predictive power more than a quarter or two ahead. I’d be very surprised if the government doesn’t already have some whole-economy econometric models on hand, especially for trade and tax policy. Could more use be made of modelling? Absolutely! Just don’t fall for the promise that they’ll be a panacea and a perfect guide to policy.