Thursday, August 6, 2015

Macroeconomics Research at Liberal Arts Colleges

I spent the last two days at the 11th annual Workshop on Macroeconomics Research at Liberal Arts Colleges at Union College. The workshop reflects the growing emphasis that liberal arts colleges place on faculty research. There were four two-hour sessions of research presentations--international, banking, information and expectations, and theory--in addition to breakout sessions on pedagogy. I presented my research in the information and expectations session.

I definitely recommend this workshop to other liberal arts macro professors. The end-of-summer timing was great. I got to think about how to prioritize my research goals before the semester starts and to hear advice on teaching and course planning from a lot of really passionate teachers. It was very encouraging to witness how many liberal arts college professors at all stages of their careers have maintained very active research agendas while also continually improving in their roles as teachers and advisors.

After dinner on the first day of the workshop, there was a panel discussion about publishing with undergraduates. I also attended a pedagogy session on advising undergraduate research. Many of the liberal arts colleges represented at the workshop have some form of a senior thesis requirement. A big part of the discussion was how to balance the emphasis on "product vs. process" for undergraduate research. In other words, how active a role should a faculty member take in trying to ensure a high-quality final product for a senior thesis project versus ensuring that various learning goals are met? What should those learning goals be? Some possibilities include helping students decide whether they want to go to grad school, teaching independence, writing skills, econometric techniques, and the ability to form an economic argument. Relatedly, how should grades or honors designations reflect the final product and the learning goals that are emphasized?

We also discussed the relative merits of helping students publish their research, either in an undergraduate journal or a professional journal. There was little clarity about how very low-ranked publications with undergraduate coauthors affect an assistant professor's tenure case, and a general desire for more explicit guidelines about whether such work is considered a valuable contribution.

These discussions of research by or with undergraduates left me really curious to hear about others' experiences doing or supervising undergraduate research. I'd be very happy to feature some examples of research with or by undergraduates as guest posts. Send me an email if you're interested.

At least two other conference participants have blogs, and they are definitely worth checking out. Joseph Joyce of Wellesley blogs about international finance at "Capital Ebbs and Flows." Bill Craighead of Wesleyan blogs at "Twenty-Cent Paradigms." Both have recent thoughtful commentary on Greece.

Friday, July 31, 2015

Surveys in Crisis

In "Household Surveys in Crisis," Bruce D. Meyer, Wallace K.C. Mok, and James X. Sullivan describe household surveys as "one of the main innovations in social science research of the last century." Large, nationally representative household surveys are the source of official rates of unemployment, poverty, and health insurance coverage, and are used to allocate government funds. But the quality of survey data is declining on at least three counts.

The first and most commonly studied problem is the rise in unit nonresponse, meaning fewer people are willing to take a survey when asked. Two other growing problems are item nonresponse--when someone agrees to take the survey but refuses to answer particular questions--and inaccurate responses. Of course, the three problems can be related. For example, attempts to reduce unit nonresponse by persuading reluctant households to take a survey could raise item nonresponse and inaccurate responses if these reluctant participants rush through a survey they didn't really want to take in the first place.

Unit nonresponse, item nonresponse, and inaccurate responses would not be too troublesome if they were random enough that survey statistics were unbiased, but that is unlikely to be the case. Nonresponse and misreporting may be systematically correlated with relevant characteristics such as income or receipt of government funds. Meyer, Mok, and Sullivan look at survey data about government transfer programs for which corresponding administrative data is also available, so they can compare survey results to presumably more accurate administrative data. In this case, the survey data understates incomes at the bottom of the distribution, understates the rate of program receipt and the poverty-reducing effects of government programs, and overstates measures of poverty and of inequality. For other surveys that cannot be linked to administrative data, it is difficult to say which direction biases will go.
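To see how non-random nonresponse biases a statistic, here is a toy simulation. Everything in it--the income process, the receipt probabilities, the response rates--is invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented population: skewed incomes, with transfer receipt
# concentrated at the bottom of the income distribution.
income = rng.lognormal(mean=10.5, sigma=0.8, size=n)
receives_transfer = rng.random(n) < np.clip(30_000 / income, 0.0, 0.9)

# Invented response behavior: transfer recipients respond less often.
responds = rng.random(n) < np.where(receives_transfer, 0.55, 0.80)

print(f"true receipt rate:     {receives_transfer.mean():.3f}")
print(f"surveyed receipt rate: {receives_transfer[responds].mean():.3f}")
```

The surveyed rate comes out below the true rate--the same direction as the understatement of program receipt that Meyer, Mok, and Sullivan document.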

Why has survey quality declined? The authors discuss many of the traditional explanations:
"Among the traditional reasons proposed include increasing urbanization, a decline in public spirit, increasing time pressure, rising crime (this pattern reversed long ago), increasing concerns about privacy and confidentiality, and declining cooperation due to 'over-surveyed' households (Groves and Couper 1998; Presser and McCullogh 2011; Brick and Williams 2013). The continuing increase in survey nonresponse as urbanization has slowed and crime has fallen make these less likely explanations for present trends. Tests of the remaining hypotheses are weak, based largely on national time-series analyses with a handful of observations. Several of the hypotheses require measuring societal conditions that can be difficult to capture: the degree of public spirit, concern about confidentiality, and time pressure...We are unaware of strong evidence to support or refute a steady decline in public spirit or a rise in confidentiality concerns as a cause for declines in survey quality."
They find it most likely that the sharp rise in the number of government surveys administered in the US since 1984 has resulted in declining cooperation by "over-surveyed" households. "We suspect that talking with an interviewer, which once was a rare chance to tell someone about your life, now is crowded out by an annoying press of telemarketers and commercial surveyors."

Personally, I have not received any requests to participate in government surveys and rarely receive commercial survey requests. Is this just because I moved around so much as a student? Am I about to be flooded with requests? I think I would actually find it fun to take some surveys after working with the data so much. Please leave a comment about your experience with taking (or declining to take) surveys.

The authors also note that since there is a trend toward greater leisure time, it is unlikely that increased time pressure is resulting in declining survey quality. However, while people have more leisure time, they may also have more things to do with their leisure time (I'm looking at you, Internet) that they prefer to taking surveys. Intuitively I would guess that as people have grown more accustomed to doing everything online, they are less comfortable talking to an interviewer in person or on the phone. Since I almost never have occasion to go to the post office, I can imagine forgetting to mail in a paper survey. Switching surveys to online format could result in a new set of biases, but may eventually be the way to go.

I would also guess that the Internet has changed people's relationship with information, even information about themselves. When you can look up anything easily, that can change what you decide to remember and what facts you feel comfortable reporting off the top of your head to an interviewer.

Wednesday, July 8, 2015

Trading on Leaked Macroeconomic Data

The official release times of U.S. macroeconomic data are big deals in financial markets. A new paper finds evidence of substantial informed trading before the official release time of certain macroeconomic variables, suggesting that information is often leaked. Alexander Kurov, Alessio Sancetta, Georg H. Strasser, and Marketa Halova Wolfe examine high-frequency stock index and Treasury futures markets data around releases of U.S. macroeconomic announcements:
These announcements are quintessential updates to public information on the economy and fundamental inputs to asset pricing. More than a half of the cumulative annual equity risk premium is earned on announcement days (Savor & Wilson, 2013) and the information is almost instantaneously reflected in prices once released (Hu, Pan, & Wang, 2013). To ensure fairness, no market participant should have access to this information until the official release time. Yet, in this paper we find strong evidence of informed trading before several key macroeconomic news announcements....Prices start to move about 30 minutes before the official release time and the price move during this pre-announcement window accounts on average for about a half of the total price adjustment.
They consider the 30 macroeconomic announcements that other authors have shown tend to move markets, and find evidence of:

  • Significant pre-announcement price drift for: CB consumer confidence index, existing home sales, GDP preliminary, industrial production, ISM manufacturing index, ISM non-manufacturing index, and pending home sales.
  • Some pre-announcement drift for: advance retail sales, consumer price index, GDP advance, housing starts, and initial jobless claims.
  • No pre-announcement drift for: ADP employment, durable goods orders, new home sales, non-farm employment, producer price index, and UM consumer sentiment.
The figure below shows mean cumulative average returns in the E-mini S&P 500 Futures market from 60 minutes before the release time to 60 minutes after the release time for the series with significant evidence of pre-announcement drift.

Source: Kurov et al. 2015, Figure A1, panel c. Cumulative average returns in the E-mini S&P 500 Futures market.
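For the curious, here is a minimal sketch of how this kind of event-time statistic might be computed from a minute-frequency return series. The function and its inputs are my own stand-ins, not the authors' code:

```python
import pandas as pd

def cumulative_average_returns(returns: pd.Series,
                               release_times: list,
                               window: int = 60) -> pd.Series:
    """Mean cumulative return in event time (minutes) around releases.

    `returns` is a minute-frequency log-return series indexed by
    timestamp; `release_times` is a list of pd.Timestamp release times.
    """
    events = []
    for t in release_times:
        win = returns.loc[t - pd.Timedelta(minutes=window):
                          t + pd.Timedelta(minutes=window)]
        car = win.cumsum()
        # Re-index to minutes relative to the release time.
        car.index = ((win.index - t).total_seconds() // 60).astype(int)
        events.append(car)
    # Average across announcements at each event-time minute.
    return pd.concat(events, axis=1).mean(axis=1)
```

Pre-announcement drift would show up as the mean cumulative return pulling away from zero at negative event times.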
Why do prices start to move before release time? It could be that some traders are superior forecasters, making better use of publicly available information, and waiting until a few minutes before the announcement to make their trades. Alternatively, information might be leaked before the official release. Kurov et al. note that, while the first possibility cannot be ruled out entirely, the leaked information explanation appears highly likely. The authors conducted a phone and email survey of the organizations responsible for the macroeconomic data in their study to find out about data release procedures:
The release procedures fall into one of three categories. The first category involves posting the announcement on the organization’s website at the official release time, so that all market participants can access the information at the same time. The second category involves pre-releasing the information to selected journalists in “lock-up rooms” adding a risk of leakage if the lock-up is imperfectly guarded. The third category, previously not documented in academic literature, involves an unusual pre-release procedure used in three announcements: Instead of being pre-released in lock-up rooms, these announcements are electronically transmitted to journalists who are asked not to share the information with others. These three announcements are among the seven announcements with strong drift.
I wish I had a better sense of who was obtaining the leaked information and how much they were making from it.

Wednesday, June 24, 2015

Forecasting in Unstable Environments

I recently returned from the International Symposium on Forecasting "Frontiers in Forecasting" conference in Riverside. I presented some of my work on inflation uncertainty in a session devoted to uncertainty and the real economy. A highlight was the talk by Barbara Rossi, a featured presenter from Universitat Pompeu Fabra, on "Forecasting in Unstable Environments: What Works and What Doesn't." (This post will be a bit more technical than usual.)

Rossi spoke about instabilities in reduced form models and gave an overview of the evidence on what works and what doesn't in guarding against these instabilities. The basic issue is that the predictive ability of different models and variables changes over time. For example, the term spread was a pretty good predictor of GDP growth until the 1990s, and the credit spread was not. But in the 90s the situation reversed, and the credit spread became a better predictor of GDP growth while the term spread got worse.

Rossi noted that break tests and time varying parameter models, two common ways to protect against instabilities in forecasting relationships, do involve tradeoffs. For example, it is common to test for a break in an empirical relationship, then estimate a model in which the coefficients before and after the break differ. Including a break point reduces the bias of your estimates, but also reduces the precision. The more break points you add, the shorter are the time samples you use to estimate the coefficients. This is similar to what happens if you start adding tons of control variables to a regression when your number of observations is small.
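A toy simulation makes the tradeoff concrete. Everything here--the break date, which I treat as known, and the coefficient values--is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, tb = 120, 60                               # sample length, break date
beta = np.where(np.arange(T) < tb, 0.5, 1.5)  # slope shifts at the break
x = rng.normal(size=T)
y = beta * x + rng.normal(size=T)

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(slope(x, y))            # full sample: biased toward the average, ~1.0
print(slope(x[:tb], y[:tb]))  # pre-break: unbiased but noisier
print(slope(x[tb:], y[tb:]))  # post-break: unbiased but noisier
```

Splitting at the break removes the bias, but each estimate now rests on half as much data, so its variance roughly doubles.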

Rossi also discussed rolling window estimation. Choosing the optimal window size is a challenge, with a similar bias/precision trade-off. The standard practice of reporting results from only a single window size is problematic, because the window size may have been selected based on "data snooping" to obtain the most desirable results. In work with Atsushi Inoue, Rossi develops out-of-sample forecast tests that are robust to window size. Many of the basic tools and tests from macroeconomic forecasting-- Granger causality tests, forecast comparison tests, and forecast optimality tests-- can be made more robust to instabilities. For details, see Raffaella Giacomini and Rossi's chapter in the Handbook of Research Methods and Applications on Empirical Macroeconomics and references therein.
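For a flavor of what a rolling-window exercise looks like in practice, here is a bare-bones numpy sketch of one-step-ahead pseudo-out-of-sample forecasting. It is a minimal stand-in under my own simplifications, not the Inoue-Rossi robust procedure:

```python
import numpy as np

def rolling_oos_errors(y, x, window):
    """One-step-ahead forecast errors from the predictive regression
    y[t+1] = a + b*x[t], re-estimated on a rolling window of length
    `window`."""
    errs = []
    for t in range(window, len(y) - 1):
        # Regress y[s+1] on x[s] over the most recent window.
        X = np.column_stack([np.ones(window), x[t - window:t]])
        beta, *_ = np.linalg.lstsq(X, y[t - window + 1:t + 1], rcond=None)
        # Forecast y[t+1] from x[t] and record the error.
        errs.append(y[t + 1] - (beta[0] + beta[1] * x[t]))
    return np.array(errs)
```

Comparing mean squared errors across window sizes, or across two candidate predictors like the term spread and the credit spread, shows how sensitive the ranking of models can be to these choices.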

A bit of practical advice from Rossi was to maintain large-dimensional datasets as a guard against instability. In unstable environments, variables that are not useful now may be useful later, and it is increasingly computationally feasible to store and work with big datasets.

Wednesday, June 17, 2015

Another Four Percent

When Jeb Bush announced his presidential candidacy on Monday, he made a bold claim. "There's not a reason in the world we can't grow at 4 percent a year," he said, "and that will be my goal as president."

You can pretty much guarantee that whenever a politician claims "there's not a reason in the world," plenty of people will be happy to provide one, and this case is no exception. Those reasons aside, for now, where did this 4 percent target come from? Jordan Weissmann explains that "the figure apparently originated during a conference call several years ago, during which Bush and several other advisers were brainstorming potential economic programs for the George W. Bush Institute...Jeb casually tossed out the idea of 4 percent growth, which everybody loved, even though it was kind of arbitrary." Jeb Bush himself calls 4 percent "a nice round number. It's double the growth that we are growing at." (To which Jon Perr snippily adds, "It's also an even number and the square of two.")

Let's face it, we have a thing for nice, round, kind of arbitrary numbers. The 2 percent inflation target, for example, was not chosen as the precise solution to some optimization problem, but more as a "rough guess [that] acquired force as a focal point." Psychology research shows that people put in extra effort to reach round number goals, like a batting average of .300 rather than .299. A 4 percent growth target reduces something multidimensional and hard to define--economic success--to a single, salient number. An explicit numerical target provides an easy guide for accountability. This can be very useful, but it can also backfire.

As an analogy, imagine that citizens of some country have a vague, noble goal for their education system, like "improving student learning." They want to encourage school administrators and teachers to pursue this goal and hold them accountable. But with so many dimensions of student learning, it is difficult to gauge effort or success. They could introduce a mandatory, standardized math test for all students, and rate a teacher as "highly successful" if his or her students' scores improve by at least 10% over the course of the year. A nice round number. This would provide a simple, salient way to judge success, and it would certainly change what goes on in the classroom, with obvious upsides and downsides. Many teachers would put in more effort to ensure that students learned math--at least, the math covered on the test--but might neglect literature, art, or gym. Administrators might have an incentive to engage in deceptive accounting practices, finding reasons why a particular student's score should not be counted or why a group of students should switch classrooms. Even outright cheating, though likely rare, is possible, especially if jobs hinge on the difference between 9.9% improvement and 10%. What's the harm in changing one or two answers?

Ceteris paribus, more math skills would bring a variety of benefits, just like more growth would, as the George W. Bush Institute's 4% Growth Project likes to point out. But making 4 percent growth the standard for success could also change policymakers' incentives and behaviors in some perverse ways. Potential policies' ability to boost growth will be overemphasized, and other merits or flaws (e.g. for the environment or the income distribution) underemphasized. The purported goal is sustained 4 percent growth over long time periods, which implies making the kind of long-run-minded reforms that boost both actual and potential GDP--not just running the economy above capacity for as long as possible until the music stops. But realistically, a president would worry more about achieving 4 percent while in office and less about afterwards, encouraging short-termism at best, or more unsavory practices at worst.

Even with all of these caveats, if the idea of a 4 percent solution still sounds appealing, it is worth opening up the discussion to what other 4 percent solutions might be better. Laurence Ball, Brad DeLong, and Paul Krugman have made the case for a 4 percent inflation target. I see their points but am not fully convinced. But what about 4 percent unemployment? Or 4 percent nominal wage growth? Are they more or less attainable than 4 percent GDP growth, and how would the benefits compare? If we do decide to buy into a 4 percent target, it is worth at least pausing to think about which 4 percent.

Tuesday, June 16, 2015

Wage Increases Do Not Signal Impending Inflation

When the FOMC meets over the next two days, its members will surely be looking for signs of impending inflation. Even though actual inflation is below target, any hint that pressure is building will be seized upon by more hawkish committee members as impetus for an earlier rate rise. The relatively strong May jobs report and the uptick in nominal wage inflation are likely to draw attention in this respect.

Hopefully the FOMC members are aware of new research by two of the Fed's own economists, Ekaterina Peneva and Jeremy Rudd, on the passthrough (or lack thereof) of labor costs to price inflation. The research, which fails to find an important role for labor costs in driving inflation movements, casts doubts on wage-based explanations of inflation dynamics in recent years. They conclude that "price inflation now responds less persistently to changes in real activity or costs; at the same time, the joint dynamics of inflation and compensation no longer manifest the type of wage–price spiral that was evident in earlier decades."

Peneva and Rudd use a time-varying parameter/stochastic volatility VAR framework that lets them see how core inflation responds to a shock to the growth rate of labor costs at different points in time. The figure below shows how the response has varied over the past few decades. In 1975 and 1985, a rise in labor cost growth was followed by a rise in core inflation, but in recent decades, both before and after the Great Recession, there is no such response.
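Their TVP-SV VAR is well beyond the scope of a blog post, but a crude way to look for the same pattern is to fit an ordinary constant-parameter VAR on different subsamples and compare impulse responses. Here is a sketch using statsmodels; the data frame and its column names are hypothetical, and this is an analog to, not a replication of, the authors' method:

```python
import pandas as pd
from statsmodels.tsa.api import VAR

def inflation_response(df: pd.DataFrame, start, end, horizon=12):
    """Orthogonalized response of core inflation to a labor-cost shock,
    from a constant-parameter VAR fit on the subsample [start, end].

    `df` is assumed to hold quarterly columns 'labor_cost_growth' and
    'core_inflation' with a DatetimeIndex.
    """
    res = VAR(df.loc[start:end]).fit(maxlags=4, ic='aic')
    irf = res.irf(horizon)
    cols = list(df.columns)
    return irf.orth_irfs[:, cols.index('core_inflation'),
                         cols.index('labor_cost_growth')]

# e.g. compare inflation_response(df, '1960', '1984') with
# inflation_response(df, '1985', '2007') to see whether the
# response has weakened across subsamples.
```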

Peneva and Rudd do not take a strong stance on why wage-price dynamics appear to have changed. But their findings do complement research from Yash Mehra in 2000, who suggests that "One problem with this popular 'cost-push' view of the inflation process is that it does not recognize the influences of Federal Reserve policy and the resulting inflation environment on determining the causal influence of wage growth on inflation. If the Fed follows a non-accommodative monetary policy and keeps inflation low, then firms may not be able to pass along excessive wage gains in the form of higher product prices." Mehra finds that "Wage growth no longer helps predict inflation if we consider subperiods that begin in the early 1980s...The period since the early 1980s is the period during which the Fed has concentrated on keeping inflation low. What is new here is the finding that even in the pre-1980 period there is another subperiod, 1953Q1 to 1965Q4, during which wage growth does not help predict inflation. This is also the subperiod during which inflation remained mostly low, mainly due to monetary policy pursued by the Fed."

Monday, May 25, 2015

The Limited Political Implications of Behavioral Economics

A recent post on Marginal Revolution contends that progressives use findings from behavioral economics to support the economic policies they favor, while ignoring the implications that support conservative policies. The short post, originally a comment by blogger and computational biologist Luis Pedro Coelho, is perhaps intentionally controversial, arguing that loss aversion is a case against redistributive policies and social mobility:
"Taking from the higher-incomes to give it to the lower incomes may be negative utility as the higher incomes are valuing their loss at an exaggerated rate (it’s a loss), while the lower income recipients under value it... 
...if your utility function is heavily rank-based (a standard left-wing view) and you accept loss-aversion from the behavioral literature, then social mobility is suspect from an utility point-of-view."
Tyler Cowen made a similar point a few years ago, arguing that "For a given level of income, if some are moving up others are moving down... More upward — and thus downward — relative mobility probably means less aggregate happiness, due to habit formation and frame of reference effects."

I don't think loss aversion, habit formation, and the like make a strong case against (or for) redistribution or social mobility, but I do think Coelho has a point that we economists need to watch out for our own confirmation bias when we go pointing out other behavioral biases to support our favorite policies. Simply appealing to behavioral economics in general, or to loss aversion or any number of documented decision-making biases, rarely makes a strong case for or against broad policy aims or strategies. The reason is best summarized by Wolfgang Pesendorfer in "Behavioral Economics Comes of Age":
Behavioral economics argues that economists ignore important variables that affect behavior. The new variables are typically shown to affect decisions in experimental settings. For economists, the difficulty is that these new variables may be unobservable or even difficult to define in economic settings with economic data. From the perspective of an economist, the unobservable variable amounts to a free parameter in the utility function. Having too many such parameters already, the economist finds it difficult to utilize the experimental finding.
All economic models require making drastic simplifications of reality. Whether they can say anything useful depends on how well they capture those aspects of reality that are relevant to the question at hand and leave out those that aren't. Behavioral economics has done a good job of pointing out some aspects of reality that standard models leave out, but not always of telling us exactly when these are more relevant than the dozens of other aspects of reality we also leave out without a second thought. For example, "default bias" seems to be a hugely important factor in retirement savings, so it should definitely be a consideration in the design of very narrow policies regarding 401(k) plan participation, but that does not mean we need to include it in every macroeconomic model.