The Bank of England cuts its policy rate: rather little and rather late

The Bank of England cut its official policy rate (Bank Rate) by 25 basis points to 5.50 percent today (December 6, 2007). A month ago, on November 7, the Bank released an inflation forecast conditioned on market expectations of future interest rates. At the time, at least 50 basis points of cuts were incorporated in market expectations. Even with those cuts built into the forecast, the most likely outcome for inflation at the two-year horizon was below two percent – the official target. At the MPC meeting of November 8, Bank Rate was nevertheless kept constant.

To my way of thinking, it makes no sense to produce a forecast that (1) is supposed to reflect the views of the MPC, (2) incorporates an assumption of at least 50 basis points worth of rate cuts, and (3) undershoots the inflation target at the horizon when the impact of current interest rate decisions is strongest – and then not to cut rates at the earliest opportunity, that is, on November 8. Based on the (admittedly incomplete) information available to me, if I had signed up to the forecast of November 7, I would have voted for a 50 basis point rate cut on November 8. Instead we got nothing on November 8 and a 25 basis point cut on December 6. Small mercies….

What accounts for this apparent addiction to gradualism (or to reactive rather than pre-emptive decision making) by the MPC?

The science (or lack of it) of uncertainty, gradualism, prudence and caution
Much of macroeconomic thinking still proceeds as if the economy could be represented by a system of linear equations with known coefficients. The only uncertainty allowed for is ‘additive noise’ – random disturbances that don’t affect the transmission mechanism from monetary and fiscal policy instruments, or from exogenous shocks and developments, to variables of interest such as output, inflation, employment or asset prices. Instead they just change the realisations of these variables of interest by some random amount that is independent of the rest of the transmission mechanism. If in addition the objectives of the policy maker can be represented by a quadratic function (e.g. a weighted average of the squared deviation of inflation from target and of the squared output gap), then optimal policy satisfies the ‘principle of certainty equivalence’ (PCE).

According to the PCE, policy under uncertainty should be formulated as follows: first, set all random variables equal to their expected values; second, optimise the resulting non-stochastic or deterministic system. To put it crudely: according to the PCE, the way to handle uncertainty is to replace uncertain, random variables by their expected values and then to ignore the uncertainty. Under the conditions stated above, such a policy is actually optimal.
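For readers who want to see why, here is the textbook one-instrument, one-target case (the notation is mine, purely illustrative):

```latex
% Certainty equivalence in the linear-quadratic case, in my notation.
% Inflation: \pi = a - b\,i + \varepsilon, with a, b known coefficients,
% i the policy rate and E[\varepsilon] = 0. Loss: E[(\pi - \pi^*)^2].
\[
E\!\left[(\pi - \pi^{*})^{2}\right]
  = \left(a - b\,i - \pi^{*}\right)^{2} + \operatorname{Var}(\varepsilon) .
\]
% The variance term does not involve the instrument, so the optimum,
\[
i^{\mathrm{CE}} = \frac{a - \pi^{*}}{b} ,
\]
% is exactly what the deterministic problem (\varepsilon set to its
% expected value, zero) delivers: additive noise raises the expected
% loss but changes nothing about the decision.
```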

Back to the real world. Any useful ‘model’ of the macro economy will be both non-linear and stochastic. Uncertainty will not just enter in the form of additive disturbances. Ignorance and uncertainty will affect all the key components of the transmission mechanism. If this is what the ‘real world’ (or our best empirical approximation to it) looks like, what will optimal policy look like? The honest answer is: we don’t have a clue. 

There is a massive and once again rapidly growing literature on optimal or robust policy design when there is ‘model uncertainty’ – pervasive uncertainty about the nature of the relationships between key variables; uncertainty about which variables should be included and excluded; ‘parameter uncertainty’ and even our trusty friend additive uncertainty. Except for a few hilariously simplistic examples that can be solved on the back of a rather large envelope, however, there are no general results, no robust policy-relevant conclusions. 

When there is pervasive model uncertainty, but the economy is known to be a non-linear stochastic system, policy makers have to specify and estimate a model, use it to make predictions of the future, and derive optimal policy from it. In general, specification, estimation, prediction and optimisation should not be done separately or sequentially. In practice, that is exactly what is done, because the right approach is not implementable. Old warhorses are re-activated, such as the use of maximin or minimax strategies (policy rules that minimise the loss under the worst possible outcome).
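To make concrete what such a rule amounts to, here is a toy sketch – every ‘model’ and number in it is invented for illustration – of a minimax rate decision over a handful of rival models of the economy:

```python
# Toy minimax policy choice: each candidate model is summarised by the rate
# move it would ideally like (in basis points) and by how strongly deviations
# from that move hurt. All models and numbers are invented for illustration.

def loss(move, multiplier, ideal_move):
    """Quadratic loss if this model is the true one and we choose `move`."""
    return (multiplier * (move - ideal_move)) ** 2

# (sensitivity of the economy to a policy mistake, ideal move in bps)
candidate_models = [(0.5, -50), (1.0, -25), (2.0, +25)]
candidate_moves = [-50, -25, 0, +25]

# Minimax: pick the move whose worst-case loss across the models is smallest.
best_move = min(
    candidate_moves,
    key=lambda move: max(loss(move, m, ideal) for m, ideal in candidate_models),
)
print(f"minimax move: {best_move} bps")
```

The answer is hostage to whichever model predicts the worst outcome – a well-known weakness of minimax rules, and one reason they are more often invoked than used.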

Martin Feldstein, in his Jackson Hole Conference speech of August 2007, called for ‘risk-based decision theory’ to be applied to the fall-out from the housing market collapse and sub-prime crisis. That is unhelpful advice, however, unless you specify the objective function of the policy makers a little more precisely than Martin Feldstein did, and unless you specify the transmission mechanism more clearly. Clearly, Feldstein’s call for a 100 basis point cut in the Federal Funds target rate must put rather more weight on the real economy than on inflation, and must attribute to rate cuts a quick and powerful impact on the real economy – a debatable assumption. With different assumptions, I could have risk-based decision theory calling for a 50 basis point increase in the Federal Funds target rate. It won’t be long before we see the ‘precautionary principle’ transplanted from the eco-sciences and eco-ethics into optimal macro policy design. (The precautionary principle is an ethical principle according to which action should be taken to prevent serious or irreversible harm to public health or the environment, despite the lack of definitive scientific certainty as to the likelihood, magnitude or causation of that harm. What it means in practice will of course depend on whether it is short-run output and employment or inflation that is valued most, and on the likelihood of ‘irreversible’ harm being done to either.)

The economic models that are useful for policy analysis are all shot through with private sector expectations of future asset prices, rates of return, income streams and future policy behaviour. For a while, the massive problem of modelling private expectations was finessed through the assumption of rational or model-consistent expectations: private sector expectations were assumed to correspond to the optimal forecasts that would be made using the (correct) model of which those private expectations are a building block. Except in classrooms, rational expectations no longer play a meaningful role. In their place has come an undisciplined explosion of ad-hoc learning models. They range from under-educated price and wage setters using ordinary least squares estimation methods, to post-graduate price and wage setters using some version of the Kalman filter (a recursive estimation method), to Nobel-calibre price and wage setters using optimal Bayesian estimation and prediction methods. But in any realistic setting, the exercise becomes utterly ad-hoc and seat-of-the-pants. There is no theory of ‘learning’ that has been shown to be useful for estimation, prediction and optimisation in non-linear stochastic macro models. Those who are groping around for a theory of rational learning don’t realise that they are blind men looking in a pitch-dark room for a black cat named Oxymoron which isn’t there in the first place.
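For a flavour of the bottom rung of this learning ladder, here is a minimal sketch – the inflation process and every parameter are invented for illustration – of a price setter who re-estimates a one-parameter forecasting rule by recursive least squares each period:

```python
import numpy as np

# Toy sketch of adaptive learning: a price setter believes inflation follows
# pi_{t+1} = beta * pi_t + noise, and re-estimates beta by recursive least
# squares (the 'ordinary least squares' end of the learning literature).
# The true process and all parameters here are invented for illustration.

rng = np.random.default_rng(0)
true_beta = 0.8     # the (unknown to the agent) persistence of inflation
pi = 2.0            # initial inflation
beta_hat = 0.0      # the agent's initial estimate
precision = 0.01    # accumulated sum of squared regressors

for t in range(200):
    pi_next = true_beta * pi + rng.normal(scale=0.5)
    # Recursive least squares update on the new observation (pi, pi_next).
    precision += pi * pi
    beta_hat += (pi / precision) * (pi_next - beta_hat * pi)
    pi = pi_next

print(f"estimated persistence after 200 periods: {beta_hat:.2f}")
```

A Kalman-filter or Bayesian learner replaces the update line with something more elaborate; the ad-hockery of the choice is precisely the point.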

Central bank gradualism as a manifestation of timidity and indecision
So, since we know nothing, why the common central bank presumption in favour of gradualism – what is often called caution or prudence?

The only explanation I can think of is that this presumption reflects a massive misunderstanding of the implications of one among the very few analytically tractable papers on decision making under uncertainty. The paper is by my former teacher and colleague, William C. Brainard – the fastest mind in the west. He looks at a world in which a single instrument (the interest rate, say) affects a single target (inflation, say). The effect is linear, but in addition to additive uncertainty, the policy impact (the effect of the interest rate on inflation) is itself random. This case of instrument uncertainty implies that, if you care, say, about the expected squared deviation of inflation from its target level, you will end up acting cautiously compared to someone who follows a certainty-equivalent strategy. You will do less. 
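The result can be stated in two lines. The notation below is mine, but the attenuation formula is Brainard’s (from his 1967 paper):

```latex
% Brainard attenuation in my notation. Inflation responds to the policy
% rate i via \pi = b\,i + \varepsilon, where the multiplier b is random
% with mean \bar{b} and variance \sigma_b^2, and E[\varepsilon] = 0.
% Minimising the expected squared deviation E[(\pi - \pi^*)^2] gives
\[
i^{*} \;=\; \frac{\bar{b}\,\pi^{*}}{\bar{b}^{2} + \sigma_b^{2}}
\;<\; \frac{\pi^{*}}{\bar{b}} \;=\; i^{\mathrm{CE}} .
\]
% The optimal move is scaled down relative to the certainty-equivalent
% move i^{CE}: the noisier the policy multiplier, the less you do.
```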

And off they go. Model uncertainty, which includes instrument uncertainty, is a feature of the real world. In Brainard’s model, instrument uncertainty motivates cautious behaviour. Cautious behaviour means doing less. In a monetary policy setting, doing less means, obviously, achieving a given required change in interest rates in a number of little steps rather than in one big step. Not!

Consider the following policy making under uncertainty exercise. I stand before a 5 feet wide and 500 feet deep precipice. It is broad daylight. I want to get to the other side. I can target my leaps accurately, but jumping farther is more costly to me. I will make a jump somewhere over 5 feet long. Now it is pitch dark. I know where the near edge of the precipice is, but not where the far edge is. I know the precipice is either 4 feet wide or 6 feet wide (and still 500 feet deep). I will indeed be cautious, even prudent. But cautious here means that I will take a bigger leap than when there is no uncertainty. Under most plausible specifications of the cost of jumping and my valuation of life, I will jump at least 6 feet under uncertainty. Caution means doing more.
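For those who want the arithmetic spelled out (the jumping cost c(d) and the cost of falling L are left abstract; the widths and probabilities are those of the example):

```latex
% The precipice arithmetic. The width is 4 or 6 feet, each with
% probability 1/2; jumping d feet costs c(d), increasing in d; falling
% 500 feet costs L, a very large number. A 5-foot jump clears the
% 4-foot precipice but not the 6-foot one, so
\[
E[\text{loss} \mid d = 5] = c(5) + \tfrac{1}{2}\,L ,
\qquad
E[\text{loss} \mid d = 6] = c(6) .
\]
% The 6-foot jump dominates whenever c(6) - c(5) < L/2, which holds for
% any plausible valuation of life: here uncertainty lengthens the
% optimal jump. Caution means doing more.
```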

I don’t know whether the British economy, and specifically the transmission mechanism from Bank Rate to inflation and to the real economy, is more like the Brainard example or my leap over the precipice. I don’t think the MPC know either. The choice between gradualism and ‘cold turkey’/big bang/whole hoggism may therefore well be more a matter of temperament and instinct than of science.

Against that, most central bankers are born procrastinators. They have an almost pathological ‘fear of reversals’: raising rates one month and having to reverse this increase at the next meeting or soon after. That fear of reversals is completely irrational. If you are anywhere near the optimum, there is always a good chance of a rate change in either direction at the next meeting, regardless of the direction of your last move. Anyone asking me why I favour a cut this month when I favoured an increase last month will get the answer: “that was then, this is now”, or “when the facts change, I change my mind – what do you do, sir?”. Fear of reversals is likely to lead to central banks being systematically “behind the curve”, always looking for that elusive additional bit of information and higher degree of certainty that is bound to be just around the corner – if we but wait. The ECB is a prime example of a central bank that is always running to catch up with the facts. The Bank of England was the central bank least subject to this procrastination bias in its first decade, but it appears to be slipping into the same quagmire that so many other central banks have as their preferred habitat.

There are few central bankers who, when faced with the need for a significant increase or cut in interest rates, don’t prefer to do it in six steps of 25 basis points each rather than in one 150 basis point step. The most extreme example of such a policy was the Fed’s 17 successive 25 basis point increases from June 2004 until June 2006. If ever there was an example of rampant, out-of-control gradualism, this is it. It meant that rates were far too low for far too long and that an inordinate amount of unnecessary liquidity was injected into the US financial markets and, through the exchange rate pegs of Bretton Woods II, also into much of the rest of the world.

Conclusion
Things could have been worse today. The Bank of England could have kept rates unchanged. The language of the statement suggests deep disagreement among the members. It is quite possible that the Governor opposed today’s rate cut and was in the minority. I believe that the real economy in the UK is likely to turn down more sharply than in the MPC’s central forecast, and that, over the horizon at which the MPC can influence inflation, the inflation rate is more likely to undershoot the target than to overshoot it. The financial turmoil that erupted on August 9 is one of the drivers of this slowdown, but by no means the only one. Further cuts are likely to be required to meet the inflation target, so why not have them now?
