Wednesday, July 8, 2015

Trading on Leaked Macroeconomic Data

The official release times of U.S. macroeconomic data are big deals in financial markets. A new paper finds evidence of substantial informed trading before the official release time of certain macroeconomic variables, suggesting that information is often leaked. Alexander Kurov, Alessio Sancetta, Georg H. Strasser, and Marketa Halova Wolfe examine high-frequency stock index and Treasury futures markets data around releases of U.S. macroeconomic announcements:
These announcements are quintessential updates to public information on the economy and fundamental inputs to asset pricing. More than a half of the cumulative annual equity risk premium is earned on announcement days (Savor & Wilson, 2013) and the information is almost instantaneously reflected in prices once released (Hu, Pan, & Wang, 2013). To ensure fairness, no market participant should have access to this information until the official release time. Yet, in this paper we find strong evidence of informed trading before several key macroeconomic news announcements....Prices start to move about 30 minutes before the official release time and the price move during this pre-announcement window accounts on average for about a half of the total price adjustment.
They consider the 30 macroeconomic announcements that other authors have shown tend to move markets, and find evidence of:

  • Significant pre-announcement price drift for: CB consumer confidence index, existing home sales, GDP preliminary, industrial production, ISM manufacturing index, ISM non-manufacturing index, and pending home sales.
  • Some pre-announcement drift for: advance retail sales, consumer price index, GDP advance, housing starts, and initial jobless claims.
  • No pre-announcement drift for: ADP employment, durable goods orders, new home sales, non-farm employment, producer price index, and UM consumer sentiment.
The figure below shows cumulative average returns in the E-mini S&P 500 Futures market from 60 minutes before to 60 minutes after the official release time for the series with significant evidence of pre-announcement drift.

Source: Kurov et al. 2015, Figure A1, panel c. Cumulative average returns in the E-mini S&P 500 Futures market.
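To make the construction of a figure like this concrete, here is a minimal sketch of how cumulative average returns around release times could be computed, assuming a pandas Series of one-minute log returns and a list of official release timestamps (the names here are hypothetical, not from the authors' code):

```python
import pandas as pd

def cumulative_average_returns(returns, release_times, window=60):
    """Average cumulative log returns in a window around announcement times.

    returns:       pd.Series of one-minute log returns, DatetimeIndex
    release_times: iterable of pd.Timestamp official release times
    window:        minutes before/after the release to include
    """
    paths = []
    for t in release_times:
        # Slice the event window: `window` minutes before to after the release
        win = returns.loc[t - pd.Timedelta(minutes=window):
                          t + pd.Timedelta(minutes=window)]
        if len(win) == 2 * window + 1:       # skip windows with missing bars
            path = win.cumsum()
            path.index = range(-window, window + 1)  # event time in minutes
            paths.append(path)
    # Mean across announcements at each event-time minute
    return pd.concat(paths, axis=1).mean(axis=1)
```

Plotting the resulting series against event time, with a vertical line at minute zero, gives the layout of the figure above; drift shows up as a sustained move before zero.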
Why do prices start to move before release time? It could be that some traders are superior forecasters, making better use of publicly-available information, and waiting until a few minutes before the announcement to make their trades. Alternatively, information might be leaked before the official release. Kurov et al. note that, while the first possibility cannot be ruled out entirely, the leaked information explanation appears highly likely. The authors conducted a phone and email survey of the organizations responsible for the macroeconomic data in their study to find out about data release procedures:
The release procedures fall into one of three categories. The first category involves posting the announcement on the organization’s website at the official release time, so that all market participants can access the information at the same time. The second category involves pre-releasing the information to selected journalists in “lock-up rooms” adding a risk of leakage if the lock-up is imperfectly guarded. The third category, previously not documented in academic literature, involves an unusual pre-release procedure used in three announcements: Instead of being pre-released in lock-up rooms, these announcements are electronically transmitted to journalists who are asked not to share the information with others. These three announcements are among the seven announcements with strong drift.
I wish I had a better sense of who was obtaining the leaked information and how much they were making from it.

Wednesday, June 24, 2015

Forecasting in Unstable Environments

I recently returned from the International Symposium on Forecasting "Frontiers in Forecasting" conference in Riverside. I presented some of my work on inflation uncertainty in a session devoted to uncertainty and the real economy. A highlight was the talk by Barbara Rossi, a featured presenter from Universitat Pompeu Fabra, on "Forecasting in Unstable Environments: What Works and What Doesn't." (This post will be a bit more technical than my usual.)

Rossi spoke about instabilities in reduced form models and gave an overview of the evidence on what works and what doesn't in guarding against these instabilities. The basic issue is that the predictive ability of different models and variables changes over time. For example, the term spread was a pretty good predictor of GDP growth until the 1990s, and the credit spread was not. But in the 90s the situation reversed, and the credit spread became a better predictor of GDP growth while the term spread got worse.

Rossi noted that break tests and time varying parameter models, two common ways to protect against instabilities in forecasting relationships, do involve tradeoffs. For example, it is common to test for a break in an empirical relationship, then estimate a model in which the coefficients before and after the break differ. Including a break point reduces the bias of your estimates, but also reduces the precision. The more break points you add, the shorter are the time samples you use to estimate the coefficients. This is similar to what happens if you start adding tons of control variables to a regression when your number of observations is small.
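As a simple illustration of that tradeoff (a sketch on assumed data, not Rossi's procedure), you can fit separate regressions before and after a candidate break date: the split reduces the bias from pooling two regimes, but each slope is now estimated from fewer observations, so its standard error grows.

```python
import numpy as np
import statsmodels.api as sm

def split_sample_fit(y, x, break_idx):
    """Fit separate OLS slopes before and after a candidate break date.

    Splitting reduces bias from parameter change, but each subsample's
    slope is estimated on less data, inflating its standard error.
    """
    results = []
    for lo, hi in [(0, break_idx), (break_idx, len(y))]:
        X = sm.add_constant(x[lo:hi])
        fit = sm.OLS(y[lo:hi], X).fit()
        results.append((fit.params[1], fit.bse[1]))  # slope, std. error
    return results
```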

Rossi also discussed rolling window estimation. Choosing the optimal window size is a challenge, with a similar bias/precision trade-off. The standard practice of reporting results from only a single window size is problematic, because the window size may have been selected based on "data snooping" to obtain the most desirable results. In work with Atsushi Inoue, Rossi develops out-of-sample forecast tests that are robust to window size. Many of the basic tools and tests from macroeconomic forecasting--Granger causality tests, forecast comparison tests, and forecast optimality tests--can be made more robust to instabilities. For details, see Raffaella Giacomini and Rossi's chapter in the Handbook of Research Methods and Applications on Empirical Macroeconomics and references therein.
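A rough way to see the window-size sensitivity (just an illustration, not the Inoue-Rossi test itself) is to compute out-of-sample forecast errors over a whole grid of window sizes rather than a single hand-picked one:

```python
import numpy as np

def rolling_window_mse(y, x, window_sizes):
    """Out-of-sample MSE of one-step forecasts for each window size.

    x is assumed to already hold the appropriately lagged predictor.
    Reporting the whole grid, rather than one hand-picked window,
    guards against data snooping over the window size.
    """
    mses = {}
    for w in window_sizes:
        errors = []
        for t in range(w, len(y)):
            # Estimate the forecasting regression on the most recent
            # w observations only, then forecast the next period
            beta = np.polyfit(x[t - w:t], y[t - w:t], deg=1)
            forecast = np.polyval(beta, x[t])
            errors.append(y[t] - forecast)
        mses[w] = np.mean(np.square(errors))
    return mses
```

If the MSEs vary a lot across the grid, a result reported for a single window should be treated with suspicion.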

A bit of practical advice from Rossi was to maintain large-dimensional datasets as a guard against instability. In unstable environments, variables that are not useful now may be useful later, and it is increasingly computationally feasible to store and work with big datasets.

Wednesday, June 17, 2015

Another Four Percent

When Jeb Bush announced his presidential candidacy on Monday, he made a bold claim. "There's not a reason in the world we can't grow at 4 percent a year," he said, "and that will be my goal as president."

You can pretty much guarantee that whenever a politician claims "there's not a reason in the world," plenty of people will be happy to provide one, and this case is no exception. Those reasons aside, for now, where did this 4 percent target come from? Jordan Weissmann explains that "the figure apparently originated during a conference call several years ago, during which Bush and several other advisers were brainstorming potential economic programs for the George W. Bush Institute...Jeb casually tossed out the idea of 4 percent growth, which everybody loved, even though it was kind of arbitrary." Jeb Bush himself calls 4 percent "a nice round number. It's double the growth that we are growing at." (To which Jon Perr snippily adds, "It's also an even number and the square of two.")

Let's face it, we have a thing for nice, round, kind of arbitrary numbers. The 2 percent inflation target, for example, was not chosen as the precise solution to some optimization problem, but more as a "rough guess [that] acquired force as a focal point." Psychology research shows that people put in extra effort to reach round number goals, like a batting average of .300 rather than .299. A 4 percent growth target reduces something multidimensional and hard to define--economic success--to a single, salient number. An explicit numerical target provides an easy guide for accountability. This can be very useful, but it can also backfire.

As an analogy, imagine that citizens of some country have a vague, noble goal for their education system, like "improving student learning." They want to encourage school administrators and teachers to pursue this goal and hold them accountable. But with so many dimensions of student learning, it is difficult to gauge effort or success. They could introduce a mandatory, standardized math test for all students, and rate a teacher as "highly successful" if his or her students' scores improve by at least 10% over the course of the year. A nice round number. This would provide a simple, salient way to judge success, and it would certainly change what goes on in the classroom, with obvious upsides and downsides. Many teachers would put in more effort to ensure that students learned math--at least, the math covered on the test--but might neglect literature, art, or gym. Administrators might have an incentive to engage in deceptive accounting practices, finding reasons why a particular student's score should not be counted or why a group of students should switch classrooms. Even outright cheating, though likely rare, is possible, especially if jobs hinge on the difference between 9.9% improvement and 10%. What's the harm in changing one or two answers?

Ceteris paribus, more math skills would bring a variety of benefits, just like more growth would, as the George W. Bush Institute's 4% Growth Project likes to point out. But making 4 percent growth the standard for success could also change policymakers' incentives and behaviors in some perverse ways. A potential policy's ability to boost growth would be overemphasized, and its other merits or flaws (e.g., for the environment or the income distribution) underemphasized. The purported goal is sustained 4 percent growth over long time periods, which implies making the kind of long-run-minded reforms that boost both actual and potential GDP--not just running the economy above capacity for as long as possible until the music stops. But realistically, a president would worry more about achieving 4 percent while in office and less about what happens afterward, encouraging short-termism at best, or more unsavory practices at worst.

Even with all of these caveats, if the idea of a 4 percent solution still sounds appealing, it is worth opening up the discussion to what other 4 percent solutions might be better. Laurence Ball, Brad DeLong, and Paul Krugman have made the case for a 4 percent inflation target. I see their points but am not fully convinced. But what about 4 percent unemployment? Or 4 percent nominal wage growth? Are they more or less attainable than 4 percent GDP growth, and how would the benefits compare? If we do decide to buy into a 4 percent target, it is worth at least pausing to think about which 4 percent.

Tuesday, June 16, 2015

Wage Increases Do Not Signal Impending Inflation

When the FOMC meets over the next two days, they will surely be looking for signs of impending inflation. Even though actual inflation is below target, any hint that pressure is building will be seized upon by more hawkish committee members as impetus for an earlier rate rise. The relatively strong May jobs report and uptick in nominal wage inflation are likely to draw attention in this respect.

Hopefully the FOMC members are aware of new research by two of the Fed's own economists, Ekaterina Peneva and Jeremy Rudd, on the passthrough (or lack thereof) of labor costs to price inflation. The research, which fails to find an important role for labor costs in driving inflation movements, casts doubt on wage-based explanations of inflation dynamics in recent years. They conclude that "price inflation now responds less persistently to changes in real activity or costs; at the same time, the joint dynamics of inflation and compensation no longer manifest the type of wage–price spiral that was evident in earlier decades."

Peneva and Rudd use a time-varying parameter/stochastic volatility VAR framework which lets them see how core inflation responds to a shock to the growth rate of labor costs at different times. The figure below shows how the response has varied over the past few decades. In 1975 and 1985, a rise in labor cost growth was followed by a rise in core inflation, but in recent decades, both before and after the Great Recession, there is no such response:

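A full time-varying parameter/stochastic volatility VAR is beyond a quick sketch, but a crude approximation of the exercise--comparing fixed-coefficient VAR impulse responses across subsamples--can be written as follows (column and subsample names are hypothetical, and this is a stand-in for, not a reproduction of, the authors' model):

```python
from statsmodels.tsa.api import VAR

def subsample_irfs(data, subsamples, horizon=12):
    """Impulse response of core inflation to a labor cost shock, by subsample.

    data:       DataFrame with columns ['labor_cost_growth', 'core_inflation']
                (hypothetical names), quarterly observations, DatetimeIndex.
    subsamples: dict mapping a label, e.g. '1975', to a (start, end) pair.
    Fitting a fixed-coefficient VAR per subsample is a crude stand-in for
    a time-varying parameter / stochastic volatility model.
    """
    responses = {}
    for label, (start, end) in subsamples.items():
        fit = VAR(data.loc[start:end]).fit(maxlags=4)
        irf = fit.irf(horizon)
        # Orthogonalized response of core inflation (column 1)
        # to a shock in labor cost growth (column 0)
        responses[label] = irf.orth_irfs[:, 1, 0]
    return responses
```

Comparing the response paths across subsamples is the spirit of the figure: positive and persistent in the mid-1970s and 1980s, roughly flat in recent decades.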
Peneva and Rudd do not take a strong stance on why wage-price dynamics appear to have changed. But their findings do complement research from Yash Mehra in 2000, who suggests that "One problem with this popular 'cost-push' view of the inflation process is that it does not recognize the influences of Federal Reserve policy and the resulting inflation environment on determining the causal influence of wage growth on inflation. If the Fed follows a non-accommodative monetary policy and keeps inflation low, then firms may not be able to pass along excessive wage gains in the form of higher product prices." Mehra finds that "Wage growth no longer helps predict inflation if we consider subperiods that begin in the early 1980s...The period since the early 1980s is the period during which the Fed has concentrated on keeping inflation low. What is new here is the finding that even in the pre-1980 period there is another subperiod, 1953Q1 to 1965Q4, during which wage growth does not help predict inflation. This is also the subperiod during which inflation remained mostly low, mainly due to monetary policy pursued by the Fed."

Monday, May 25, 2015

The Limited Political Implications of Behavioral Economics

A recent post on Marginal Revolution contends that progressives use findings from behavioral economics to support the economic policies they favor, while ignoring the implications that support conservative policies. The short post, originally a comment by blogger and computational biologist Luis Pedro Coelho, is perhaps intentionally controversial, arguing that loss aversion is a case against redistributive policies and social mobility:
"Taking from the higher-incomes to give it to the lower incomes may be negative utility as the higher incomes are valuing their loss at an exaggerated rate (it’s a loss), while the lower income recipients under value it... 
...if your utility function is heavily rank-based (a standard left-wing view) and you accept loss-aversion from the behavioral literature, then social mobility is suspect from an utility point-of-view."
Tyler Cowen made a similar point a few years ago, arguing that "For a given level of income, if some are moving up others are moving down... More upward — and thus downward — relative mobility probably means less aggregate happiness, due to habit formation and frame of reference effects."

I don't think loss aversion, habit formation, and the like make a strong case against (or for) redistribution or social mobility, but I do think Coelho has a point that economists need to watch out for our own confirmation bias when we go pointing out other behavioral biases to support our favorite policies. Simply appealing to behavioral economics, in general, or to loss aversion or any number of documented decision-making biases, rarely makes a strong case for or against broad policy aims or strategies. The reason is best summarized by Wolfgang Pesendorfer in "Behavioral Economics Comes of Age":
Behavioral economics argues that economists ignore important variables that affect behavior. The new variables are typically shown to affect decisions in experimental settings. For economists, the difficulty is that these new variables may be unobservable or even difficult to define in economic settings with economic data. From the perspective of an economist, the unobservable variable amounts to a free parameter in the utility function. Having too many such parameters already, the economist finds it difficult to utilize the experimental finding.
All economic models require making drastic simplifications of reality. Whether they can say anything useful depends on how well they can capture those aspects of reality that are relevant to the question at hand and leave out those that aren't. Behavioral economics has done a good job of pointing out some aspects of reality that standard models leave out, but not always of telling us exactly when these are more relevant than dozens of other aspects of reality we also leave out without second thought. For example, "default bias" seems to be a hugely important factor in retirement savings, so it should definitely be a consideration in the design of very narrow policies regarding 401(k) plan participation, but that does not mean we need to also include it in every macroeconomic model.

Monday, May 11, 2015

Release of "Rewriting the Rules"

I have been working with the Roosevelt Institute and Joseph Stiglitz on a report called "Rewriting the Rules of the American Economy: An Agenda for Growth and Shared Prosperity":
In this new report, the Roosevelt Institute exposes the link between the rapidly rising fortunes of America’s wealthiest citizens and increasing economic insecurity for everyone else. The conclusion is clear: piecemeal policy change will not do. To improve economic performance and create shared prosperity, we must rewrite the rules of our economy.
The report will be released tomorrow morning in DC, with remarks by Senator Elizabeth Warren and Mayor Bill de Blasio. You can watch the livestream beginning at 9 a.m. Eastern tomorrow (May 12). There will be an excellent panel of speakers including Rana Foroohar, Heather Boushey, Stan Greenberg, Simon Johnson, Bob Solow, and Lynn Stout. You can also follow along on Twitter with the hashtag #RewriteTheRules.

Monday, May 4, 2015

Firm Balance Sheets and Unemployment in the Great Recession

The balance sheets of households and financial firms have received a lot of emphasis in research on the Great Recession. The balance sheets of non-financial firms, in contrast, have received less attention. At first glance, this is perfectly reasonable; households and financial firms had high and rising leverage in the years leading up to the Great Recession, while non-financial firms' leverage remained constant (Figure 1, below).

New research by Xavier Giroud and Holger M. Mueller argues that the flat trendline for non-financial firms' leverage obscures substantial variation across firms, which proves important to understanding employment in the recession. Some firms saw large increases in leverage prior to the recession and others large declines. Using an establishment-level dataset with more than a quarter million observations, Giroud and Mueller find that "firms that tightened their debt capacity in the run-up ('high-leverage firms') exhibit a significantly larger decline in employment in response to household demand shocks than firms that freed up debt capacity ('low-leverage firms')."
The authors emphasize that "we do not mean to argue that household balance sheets or those of financial intermediaries are unimportant. On the contrary, our results are consistent with the view that falling house prices lead to a drop in consumer demand by households (Mian, Rao, and Sufi (2013)), with important consequences for employment (Mian and Sufi (2014)). But households do not lay off workers. Firms do. Thus, the extent to which demand shocks by households translate into employment losses depends on how firms respond to these shocks."

Firms' responses to household demand shocks depend largely on their balance sheets. Low-leverage firms were able to increase their borrowing during the recession to avoid reducing employment, while high-leverage firms were financially constrained and could not raise external funds to avoid reducing employment and cutting back investment:
"In fact, all of the job losses associated with falling house prices are concentrated among establishments of high-leverage firms. By contrast, there is no significant association between changes in house prices and changes in employment during the Great Recession among establishments of low-leverage firms."