Friday, August 18, 2017

The Low Misery Dilemma

The other day, Tim Duy tweeted:

It took me a moment--and I'd guess I'm not alone--to even recognize how remarkable this is. The New York Times ran an article with the headline "Fed Officials Confront New Reality: Low Inflation and Low Unemployment." Confront, not embrace, not celebrate.

The misery index is the sum of unemployment and inflation. Arthur Okun proposed it in the 1960s as a crude gauge of the economy, based on the fact that high inflation and high unemployment are both miserable (so high values of the index are bad). The misery index was pretty low in the 60s, in the 6% to 8% range, similar to where it has been since around 2014. Now it is around 6%. Great, right?
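For concreteness (my notation), with u_t the unemployment rate and pi_t the inflation rate, typically year-over-year CPI inflation, the index is just

\[ \text{Misery}_t = u_t + \pi_t . \]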

The NYT article notes that we are in an opposite situation to the stagflation of the 1970s and early 80s, when both high inflation and high unemployment were concerns. The misery index reached a high of 21% in 1980. (The unemployment data is only available since 1948).

Very high inflation and high unemployment are each individually troubling for the social welfare costs they impose (which are more obvious for unemployment). But observed together, they also troubled economists for seeming to run contrary to the Phillips curve-based models of the time. The tradeoff between inflation and unemployment wasn't what economists and policymakers had believed, and their misunderstanding probably contributed to the misery.

Though economic theory has evolved, the basic Phillips curve tradeoff idea is still an important part of central bankers' models. By models, I mean both the formal quantitative models used by their staffs and the way they think about how the world works. General idea: if the economy is above full employment, that should put upward pressure on wages, which should put upward pressure on prices.
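In textbook form (a generic expectations-augmented Phillips curve, not any particular Fed staff specification), that chain runs

\[ \pi_t = \pi_t^e - \kappa\,(u_t - u_t^*) + \varepsilon_t, \qquad \kappa > 0, \]

so unemployment below its natural rate u* should, all else equal, push inflation above expected inflation.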

So low unemployment combined with low inflation seem like a nice problem to have, but if they are indeed a new reality-- that is, something that will last--then there is something amiss in that chain of logic. Maybe we are not at full employment, because the natural rate of unemployment is a lot lower than we thought, or we are looking at the wrong labor market indicators. Maybe full employment does not put upward pressure on wages, for some reason, or maybe we are looking at the wrong wage measures. For example, San Francisco Fed researchers argue that wage growth measures should be adjusted in light of retiring Baby Boomers. Or maybe the link between wage and price inflation has weakened.

Until policymakers feel confident that they understand why we are experiencing both low inflation and low unemployment, they can't simply embrace the low misery. It is natural that they will worry that they are missing something, and that the consequences of whatever that is could be disastrous. The question is what to do in the meanwhile.

There are two camps for Fed policy. One camp favors a wait-and-see approach: hold rates steady until we actually observe inflation rising above 2%. Maybe even let it stay above 2% for a while, to make up for the lengthy period of below-2% inflation. The other camp favors raising rates preemptively, just in case we are missing some sign that inflation is about to spiral out of control. This latter possibility strikes me as unlikely, but I'm admittedly oversimplifying the concerns, and also haven't personally experienced high inflation.


Thursday, August 10, 2017

Macro in the Econ Major and at Liberal Arts Colleges

Last week, I attended the 13th annual Conference of Macroeconomists from Liberal Arts Colleges, hosted this year by Davidson College. I also attended the conference two years ago at Union College. I can't recommend this conference strongly enough!

The conference is a response to the increasing expectation of high quality research at many liberal arts colleges. Many of us are the only macroeconomist at our college, and can't regularly attend macro seminars, so the conference is a much-needed opportunity to receive feedback on work in progress. (The paper I presented last time just came out in the Journal of Monetary Economics!)
This time, I presented "Inflation Expectations and the Price at the Pump" and discussed Erin Wolcott's paper, "Impact of Foreign Official Purchases of U.S. Treasuries on the Yield Curve."

There was a wide range of interesting work. For example, Gina Pieters presented “Bitcoin Reveals Unofficial Exchange Rates and Detects Capital Controls.” M. Saif Mehkari's work on “Repatriation Taxes” is highly relevant to today's policy discussions. Most of the presenters and attendees were junior faculty members, but three more senior scholars held a panel discussion at dinner. Next year, the conference will be held at Wake Forest.

I also attended a session on "Macro in the Econ Major" led by PJ Glandon. A link to his slides is here. One slide presented the image below, prompting an interesting discussion about whether and how we should tailor what is taught in macro courses to our perception of the students' interests and career goals.





Monday, August 7, 2017

Labor Market Conditions Index Discontinued

A few years ago, I blogged about the Fed's new Labor Market Conditions Index (LMCI). The index attempts to summarize the state of the labor market using a statistical technique that captures the primary common variation from 19 labor market indicators. I was skeptical about the usefulness of the LMCI for a few reasons. As it turns out, the LMCI was discontinued as of August 3.
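The published LMCI came from a dynamic factor model estimated by Board staff. As a rough illustration of the "common variation" idea only, not the Fed's actual procedure, one could extract the first principal component from a panel of standardized indicators:

```python
# Illustrative common-factor index from a panel of labor market indicators.
# This is a crude static PCA sketch, NOT the Fed's dynamic factor model.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def common_factor_index(indicators: pd.DataFrame) -> pd.Series:
    """indicators: months (rows) x labor market series (columns)."""
    standardized = (indicators - indicators.mean()) / indicators.std()
    factor = PCA(n_components=1).fit_transform(standardized.fillna(0.0))
    # The sign of a principal component is arbitrary; flip it if needed so
    # that higher values mean a stronger labor market.
    return pd.Series(factor[:, 0], index=indicators.index, name="labor_index")

# Simulated stand-in for the 19 indicators: a shared cycle plus idiosyncratic noise.
rng = np.random.default_rng(0)
dates = pd.date_range("2007-01-01", periods=120, freq="MS")
cycle = np.cumsum(rng.normal(size=120))
panel = pd.DataFrame(
    {f"indicator_{i}": cycle + rng.normal(scale=2.0, size=120) for i in range(19)},
    index=dates,
)
print(common_factor_index(panel).head())
```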

The discontinuation is newsworthy because the LMCI was cited in policy discussions at the Fed, even by Janet Yellen. The index became high-profile enough that I was even interviewed about it on NPR's Marketplace.

One issue that I noted with the index in my blog was the following:
A minor quibble with the index is its inclusion of wages in the list of indicators. This introduces endogeneity that makes it unsuitable for use in Phillips Curve-type estimations of the relationship between labor market conditions and wages or inflation. In other words, we can't attempt to estimate how wages depend on labor market tightness if our measure of labor market tightness already depends on wages by construction.
This corresponds to one reason that is provided for the discontinuation of the index: "including average hourly earnings as an indicator did not provide a meaningful link between labor market conditions and wage growth."

The other reasons provided for discontinuation are that "model estimates turned out to be more sensitive to the detrending procedure than we had expected" and "the measurement of some indicators in recent years has changed in ways that significantly degraded their signal content."

I also noted in my blog post and on NPR that the index is almost perfectly correlated with the unemployment rate, meaning it provides very little additional information about labor market conditions. (Or, interpreted differently, meaning that the unemployment rate provides a lot of information about labor market conditions.) The development of the LMCI was part of a worthy effort to develop alternative informative measures of labor market conditions that can help policymakers gauge where we are relative to full employment and predict what is likely to happen to prices and wages. Since resources and attention are limited, I think it is wise to direct them toward developing and evaluating other measures.

Thursday, July 27, 2017

The Obesity Code and Economists as General Practitioners

"The past generation, like several generations before it, has indeed been one of greater and greater specialization…This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems.”
The above quote may sound like a recent criticism of economics, but it actually comes from a 1936 article, "The Need for 'Generalists,'" by A. G. Black, Chief of the Bureau of Agricultural Economics, in the Journal of Farm Economics (p. 657). Just 12 years prior, John Maynard Keynes penned his much-quoted description of economists for a 1924 obituary of Alfred Marshall:
“The study of economics does not seem to require any specialized gifts of an unusually high order. Is it not, intellectually regarded, a very easy subject compared with the higher branches of philosophy or pure science? An easy subject at which few excel! The paradox finds its explanation, perhaps, in that the master-economist must possess a rare combination of gifts. He must be mathematician, historian, statesman, philosopher—in some degree. He must understand symbols and speak in words. He must contemplate the particular in terms of the general and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must lie entirely outside his regard” (p. 321-322).
While Keynes celebrated the economist-as-generalist, complaints like Black's, about overspecialization coupled with excessive "mathiness" and insularity, continue today. This often-fair criticism frequently comes from within the profession, because who loves thinking about economists more than economists? But there is also a trend in the opposite direction, toward taking an ever-wider scope of nature and institutions into regard. Nowhere is this more obvious than among the blogging, tweeting, punditing economists with whom I associate.

The other day, for example, Miles Kimball wrote a bunch of tweets about the causes of obesity. When I liked one of his tweets (because it suggested cheese is not bad for you, and how could I not like that?) he asked if I would read "The Obesity Code" by Jason Fung, which he blogged about earlier, and respond to the evidence.

I was quick to agree. Only later did I pause to consider the irony that I feel more confident in my ability to evaluate scholarship from a medical field in which I have zero training than I often feel when asked to review scholarship or policies in my field of supposed expertise, monetary economics. Am I being a master-economist à la Keynes, or simply irresponsible?

Kenneth Arrow's 1963 "Uncertainty and the Welfare Economics of Medical Care" is credited with the birth of health economics. In this article, Arrow notes that he is concerned only with the market and non-market institutions of medical services, and not health itself. But since then, health economics has broadened in scope and incorporates the study of actual health outcomes (like obesity), to both acclaim and criticism. See my earlier post about economics research on depression, or consider the extremely polarized ratings of economist Emily Oster's book on pregnancy (which I like very much).

Jason Fung himself is not an obesity researcher, but rather a physician who specializes in end-stage kidney disease requiring dialysis. The foreword to his book remarks, "His credentials do not obviously explain why he should author a book titled The Obesity Code," before going on to justify his decision. So, while duly aware of my limited credentials, I feel willing to at least comment on the book and point out many of its parallels with macroeconomic research.

The Trouble with Accounting Identities (and Counting Calories)

Fung's book takes issue with the dominant "Calories In/Calories Out" paradigm for weight loss. This idea-- that to lose weight, you need to consume fewer calories than you burn-- is based on the First Law of Thermodynamics: energy can neither be created nor destroyed in an isolated system. Fung obviously doesn't dispute the Law, but he disputes its application to weight loss, premised on "Assumption 1: Calories In and Calories Out are independent of each other."

Fung argues that in response to a reduction in Calories In, the body will reduce Calories Out, citing a number of studies in which underfed participants started burning far fewer calories per day, becoming cold and unable to concentrate as the body expended fewer resources on heating itself and on brain functioning.

In economics, an accounting identity is an equality that by definition or construction must be true. Every introductory macro course teaches national income accounting: GDP=C+I+G+NX. What happens, we may ask, if government spending (G) increases by $1000? Most students would probably guess that GDP would also increase by $1000, but this relies on a ceteris paribus assumption. If consumption (C), investment (I), and net exports (NX) stay the same when G rises, then yes, GDP must rise by $1000. But if C, I, or NX is not independent of G, the response of GDP could very well be quite different. For example, in the extreme case that the government spending completely crowds out private investment, so I falls by $1000 (with no change in C or NX), then GDP will not change at all.
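Writing the identity in changes makes the ceteris paribus assumption explicit (the $1000 is just the illustrative number above):

\[ \Delta GDP = \Delta C + \Delta I + \Delta G + \Delta NX . \]

With \(\Delta G = +1000\) and the other components unchanged, \(\Delta GDP = +1000\); with full crowding out, \(\Delta I = -1000\) and \(\Delta GDP = 0\).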

The First Law of Thermodynamics is also an accounting identity. It is true that if Calories In exceed Calories Out, we will gain weight, and vice versa. But it is not true that reducing Calories In leaves Calories Out unchanged. And according to Fung, Calories Out may respond so strongly to Calories In, almost one-for-one, that sustained weight loss will not occur.

Proximate and Ultimate Causes

A caloric deficit (Calories In < Calories Out) is a proximate cause of weight loss, and a caloric surplus a proximate cause of weight gain. But proximate causes are not useful for policy prescriptions. Think again about the GDP=C+I+G+NX accounting identity. This tells us that we can increase GDP by increasing consumption. Great! But how can we increase consumption? We need to know the deeper determinants of the components of GDP. Telling a patient to increase her caloric deficit to lose weight is as practical as advising a government to boost consumption to achieve higher GDP, and neither effort is likely to be very sustainable. So most of Fung's book is devoted to exploring what he claims are the ultimate causes of body weight and the policy implications that follow.

Set Points 

One of the most important concepts in Fung's book is the "set point" for body weight. This is the weight at which the body "wants" to be; efforts to sustain weight loss are unlikely to succeed, as the body returns to its set weight by reducing basal energy expenditure.

An important question is what determines the set point. A second set of questions surrounds what happens away from the set point. In other words, what are the mechanisms by which homeostasis occurs? In macro models, too, we may focus on finding the equilibrium and on the off-equilibrium dynamics. The very idea that the body has a set point is controversial, or at least counterintuitive, as is the existence of certain set points in macro, especially the natural rate of unemployment.

The set point, according to Fung, is all about insulin. Reducing insulin levels and insulin resistance allows fat burning (lipolysis) so the body has plenty of energy coming in without the need to lower basal metabolism; this is the only way to reduce the set point. The whole premise of his hormonal obesity theory rests on this. It is the starting point for his explanation of why obesity occurs and what to do about it. Obesity occurs when the body's set point gradually increases over time, as insulin levels and insulin resistance rise in a "vicious cycle." So we need to understand the mechanisms behind the rise and the dynamics of this cycle.

Mechanisms and Models

Fung goes into great detail about the workings of insulin and related hormones and their interactions with glucose and fructose. This background all aims to support his proposals about the causes of obesity. The causes are multifactorial, but mostly have to do with the composition and timing of the modern diet (what and when we eat). Culprits include refined and processed foods, added sugar, the emphasis on low-fat (and therefore high-carb) diets, high cortisol levels from stress and sleep deprivation, and frequent snacking. Fung cites dozens of empirical studies, some observational and others controlled trials, to support his hormonal obesity theory.

Here I am not entirely sure how closely to draw a parallel to economics. Macroeconomists also rely on models that lay out a series of mechanisms, and use (mostly) observational and (rarely) experimental data to test them, and like epidemiological researchers face challenges of endogeneity and omitted variable bias. But are biological mechanisms inherently different than economic ones because they are more observable, stable, and falsifiable? My intuition says yes, but I don't know enough about medical and biological research to be sure. Fung does not discuss the research behind scientists' knowledge of how hormones work, but only the research on health and weight outcomes associated with various nutritional strategies and drugs.

At the beginning of the book, Fung announces his refusal to even consider animal studies. This somewhat surprises me, as I thought that finding a result consistently across species could strengthen our conclusions, and mechanisms are likely to be similar, but he seems to view animal studies as totally uninformative for humans. If that is true, then why do we use animals in medical research at all?

Persistence Creates Resistance

So how does the body's set point rise enough that a person becomes obese? Fung claims that the Calories In/Calories Out model neglects the time-dependence of obesity, noting that it is much easier for a person who has been overweight for only a short while to lose weight. If someone has been overweight a long time, it is much harder, because they have developed insulin resistance. Insulin levels normally rise and fall over the course of the day without causing any problem. But persistently high levels of insulin, a hormonal imbalance, result in insulin resistance, leading to yet higher levels of insulin, and yet greater insulin resistance (and weight gain). Fung uses the cycle of antibiotic resistance as an analogy for the development of insulin resistance:
Exposure causes resistance...As we use an antibiotic more and more, organisms resistant to it are naturally selected to survive and reproduce. Eventually, these resistant organisms dominate, and the antibiotic becomes useless (p. 110).
He also uses the example of drug resistance: a cocaine user needs ever greater doses. "Drugs cause drug resistance" (p. 111). Macroeconomics provides its own metaphors. In the early conception of the Phillips Curve, it was believed that the inverse relationship between unemployment and inflation could be exploited by policymakers. Just as a doctor who wants to cure a bacterial infection may prescribe antibiotics, a policymaker who wants lower unemployment must just tolerate a little higher inflation. But the trouble with following such a policy is that as that higher inflation persists, people's expectations adapt. They come to expect higher inflation in the future, and that expectation is self-fulfilling, so it takes yet higher inflation to keep unemployment at or below its "set point."
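A minimal simulation of that metaphor, assuming purely adaptive expectations (this year's expected inflation equals last year's actual inflation) and made-up parameter values:

```python
# Accelerationist Phillips curve sketch: pi_t = pi_{t-1} - kappa * (u_t - u_star).
# Holding unemployment below its natural rate ("set point") requires ever-higher
# inflation, much like resistance building to a persistently administered dose.
kappa, u_star = 0.5, 5.0   # illustrative slope and natural rate (percent)
u_held = 4.0               # unemployment held below the natural rate
pi = 2.0                   # initial inflation (percent)
for year in range(1, 11):
    pi = pi - kappa * (u_held - u_star)   # expectations adapt to last year's inflation
    print(f"year {year:2d}: inflation = {pi:.1f}%")
```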

Institutional Interest and Influence

How a field evaluates evidence and puts it into practice--and even what research is funded and publicized--depends on the powerful institutions in the field and their vested interests, even if the interest is merely in saving face. According to Fung, the American Heart Association (AHA), snack food companies, and doctors repeatedly ignored evidence against the low-fat, high-carb diet and the Calories In/Calories Out model to make money or save face. His criticisms of the AHA, in particular, are reminiscent of those against the IMF for the policies it has imposed through the conditions of its loans.

Of course, the critics themselves may have biases or vested interests. Fung himself quite likely neglected to mention a number of studies that did not fit his theory. In an effort to sell books and promote his website and reputation, he very likely is oversimplifying and projecting more-than-warranted confidence. So how do I evaluate the book overall, and will I follow its recommendations for myself and my family?

First, while the book's title emphasizes obesity, it doesn't seem to be written only for readers who are obese or becoming so. It is not clear whether the book's recommendations are useful for people who are already maintaining a healthy weight, though he never suggests they are not. And for a book so focused on hormones, he makes shockingly little distinction between male and female dietary needs and responses. Since I am breastfeeding twins and am a still-active former college athlete, my hormonal balance and dietary needs must be far from average, and I'm not looking to lose weight. He also doesn't make much distinction between the needs of adults and kids (like my toddler).

Still, despite the fact that he presents himself as destroying the conventional wisdom on weight loss, most of his advice is unlikely to be controversial: eat whole foods, reduce added sugar, don't fear healthy fats. Before reading this book, I already knew I should try to do that, though sometimes chose not to. I was especially focused on nutrition during my twin pregnancy, and most of the advice was basically equivalent. After reading this book, I'm slightly more motivated, as I have somewhat more evidence as to why it is beneficial, and I still don't see how it could hurt. Other advice is less supported, but at least not likely to be harmful: avoid artificial sweeteners (even Stevia), eat vinegar.

His support of fasting and snack avoidance, and his views on insulin provision to diabetics, seem the least supported and most likely to be harmful. He says that fasting and skipping snacks and breakfast provide recurrent periods of very low insulin levels, reducing insulin resistance, but I don't see any concrete evidence on how long you must wait between meals to reap the benefits. He cites ancient Greeks, like Hippocrates of Kos, and a few case studies, as "evidence" of the benefits of fasting. Maybe it is my own proclivity for "grazing," and my observations of my two-year-old when we skip a snack, that make me skeptical. This may work for some, but I'm in no rush to try it.


Friday, July 7, 2017

New Publication: Measuring Uncertainty Based on Rounding

For the next few weeks, you can download my new paper in the Journal of Monetary Economics for free here. The title is "Measuring uncertainty based on rounding: New method and application to inflation expectations." It became available online the same day that my twins were born (!!) but was much longer in the making, as it was my job market paper at Berkeley.

Here is the abstract:
The literature on cognition and communication documents that people use round numbers to convey uncertainty. This paper introduces a method of quantifying the uncertainty associated with round responses in pre-existing survey data. I construct micro-level and time series measures of inflation uncertainty since 1978. Inflation uncertainty is countercyclical and correlated with inflation disagreement, volatility, and the Economic Policy Uncertainty index. Inflation uncertainty is lowest among high-income consumers, college graduates, males, and stock market investors. More uncertain consumers are more reluctant to spend on durables, cars, and homes. Round responses are common on many surveys, suggesting numerous applications of this method.

Wednesday, May 31, 2017

Low Inflation at "Essentially Full Employment"

Yesterday, Brad Delong took issue with Charles Evans' recent claim that "Today, we have essentially returned to full employment in the U.S." Evans, President of the Federal Reserve Bank of Chicago and a member of the FOMC, was speaking before the Bank of Japan Institute for Monetary and Economic Studies in Tokyo on "lessons learned and challenges ahead" in monetary policy. Delong points out that the U.S. employment-to-population ratio for ages 25-54, at 78.5%, is low by historical standards and given social and demographic trends.

Evans' claim that the U.S. has returned to full employment is followed by his comment that "Unfortunately, low inflation has been more stubborn, being slower to return to our objective. From 2009 to the present, core PCE inflation, which strips out the volatile food and energy components, has underrun 2% and often by substantial amounts." Delong asks,
And why the puzzlement at the failure of core inflation to rise to 2%? That is a puzzle only if you assume that you know with certainty that the unemployment rate is the right variable to put on the right hand side of the Phillips Curve. If you say that the right variable is equal to some combination with weight λ on prime-age employment-to-population and weight 1-λ on the unemployment rate, then there is no puzzle—there is simply information about what the current value of λ is.
It is not totally obvious why prime-age employment-to-population should drive inflation distinctly from unemployment--that is, why Delong's λ should not be zero, as in the standard Phillips Curve. Note that the employment-to-population ratio grows with the labor force participation rate (LFPR) and declines with the unemployment rate. Typically, labor force participation is mostly acyclical: its longer run trends dwarf any movements at the business cycle frequency (see graph below). So in a normal recession, the decline in the employment-to-population ratio is mostly attributable to the rise in the unemployment rate, not the fall in LFPR (so it shouldn't really matter if you simply impose λ=0).
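The mechanical link is an identity: with LFPR the labor force participation rate and u the unemployment rate,

\[ \frac{E}{P} = \text{LFPR} \times (1 - u), \]

so a low employment-to-population ratio can reflect elevated unemployment, depressed participation, or both.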

https://fred.stlouisfed.org/series/LNS11300060
As Christopher Erceg and Andrew Levin explain, a recession of moderate size and severity does not prompt many departures from the labor market, but long recessions can produce quite pronounced declines in labor force participation. In their model, this gradual response of labor force participation to the unemployment rate arises from high adjustment costs of moving in and out of the formal labor market. But the Great Recession was protracted enough to lead people to leave the labor force despite the adjustment costs. According to their analysis:
cyclical factors can fully account for the post-2007 decline of 1.5 percentage points in the LFPR for prime-age adults (i.e., 25–54 years old). We define the labor force participation gap as the deviation of the LFPR from its potential path implied by demographic and structural considerations, and we find that as of mid-2013 this gap stood at around 2%. Indeed, our analysis suggests that the labor force gap and the unemployment gap each accounts for roughly half of the current employment gap, that is, the shortfall of the employment-to-population rate from its precrisis trend.
Erceg and Levin discuss their results in the context of the Phillips Curve, noting that "a large negative participation gap induces labor force participants to reduce their wage demands, although our calibration implies that the participation gap has less influence than the unemployment rate quantitatively." This means that both unemployment and labor force participation enter the right hand side of the Phillips Curve (and Delong's λ is nonzero), so if a deep recession leaves the LFPR (and, accordingly, the employment-to-population ratio) low even as unemployment returns to its natural rate, inflation will still remain low.

Erceg and Levin also discuss implications for monetary policy design, considering the consequences of responding to the cyclical component of the LFPR in addition to the unemployment rate.
We use our model to analyze the implications of alternative monetary policy strategies against the backdrop of a deep recession that leaves the LFPR well below its longer run potential level. Specifically, we compare a noninertial Taylor rule, which responds to inflation and the unemployment gap, to an augmented rule that also responds to the participation gap. In the simulations, the zero lower bound precludes the central bank from lowering policy rates enough to offset the aggregate demand shock for some time, producing a deep recession; once the shock dies away sufficiently, policy responds according to the Taylor rule. A key result of our analysis is that monetary policy can induce a more rapid closure of the participation gap through allowing the unemployment rate to fall below its long-run natural rate. Quite intuitively, keeping unemployment persistently low draws cyclical nonparticipants back into the labor force more quickly. Given that the cyclical nonparticipants exert some downward pressure on inflation, some undershooting of the long-run natural rate actually turns out to be consistent with keeping inflation stable in our model.
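In stylized form (my notation, not necessarily Erceg and Levin's exact rule), the comparison is between a standard noninertial rule and one augmented with the participation gap:

\[ i_t = r^* + \pi_t + \phi_\pi(\pi_t - \pi^*) - \phi_u(u_t - u_t^*) \]
\[ i_t = r^* + \pi_t + \phi_\pi(\pi_t - \pi^*) - \phi_u(u_t - u_t^*) + \phi_\ell(\text{LFPR}_t - \text{LFPR}_t^*), \]

where the extra term keeps policy accommodative while the participation gap remains negative, even after the unemployment gap has closed.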
While the authors don't explicitly use the phrase "full employment," their paper does provide a rationale for the low core inflation we're experiencing despite low unemployment. Erceg and Levin's paper was published in the Journal of Money, Credit, and Banking in 2014; ungated working paper versions from 2013 are available here.

Tuesday, May 2, 2017

Do Socially Responsible Investors Have It All Wrong?

Fossil fuels divestment is a widely debated topic at many college campuses, including my own. The push, often led by students, to divest from fossil fuels companies is an example of the socially responsible investing (SRI) movement. SRI strategies seek to promote goals like environmental stewardship, diversity, and human rights through portfolio management, including the screening of companies involved with objectionable products or behaviors.

It seems intuitive that the endowment of a foundation or educational institution should not invest in a firm whose activities oppose its mission. Why would a charity that fights lung cancer invest in tobacco, for example? But in a recent Federal Reserve Board working paper, "Divest, Disregard, or Double Down?", Brigitte Roth Tran suggests that intuition may be exactly backwards. She explains that "if firm returns increase with activities the endowment combats, doubling down on the investment increases expected utility by aligning funding availability with need. I call this 'mission hedging.'"

Returning to the example of the lung-cancer-fighting charity, suppose that the charity is heavily invested in tobacco. If the tobacco industry does unexpectedly well, then the charity will get large returns on its investments precisely when its funding needs are greatest (because presumably tobacco use and lung cancer rates will be up).

Roth Tran uses the Capital Asset Pricing Model to show that this mission hedging strategy "increases expected utility when endowment managers boost portfolio weights on firms whose returns correlate with activities the foundation seeks to reduce." More specifically,
"foundations that do not account for covariance between idiosyncratic risk and marginal utility of assets will generally under-invest in high covariance assets. Because objectionable firms are more likely to have such covariance, firewall foundations will underinvest in these firms by disregarding the mission in the investment process. SRI foundations will tend to underinvest in these firms even more by avoiding them altogether."
Roth Tran acknowledges that there are a number of reasons that mission hedging is not the norm. First, the foundation may experience direct negative utility from investing in a firm it considers reprehensible-- or experience a "warm glow" from divesting from such a firm. Second, the foundation may worry that investing in an objectionable firm will hurt its fundraising efforts or reputation (if donors do not understand the benefits of mission hedging). Third, the foundation may believe that divestment will directly lower the levels of the objectionable activity, though this effect is likely to be very small. Roth Tran points out that student leaders of the Harvard fossil fuel divestment campaign acknowledged that the financial impact on fossil fuel companies would be negligible.


Monday, April 17, 2017

EconTalk on the Economics of Pope Francis

Russ Roberts recently interviewed Robert Whaples on the EconTalk podcast, which I have listened to regularly for years. I was especially interested when I saw the title of this episode, The Economics of Pope Francis, both because I am a Catholic and because I generally find Roberts' discussions of religion (from his Jewish perspective) interesting and so articulate that they help me clarify my own thinking, even if my views diverge from his. 

In this episode, Roberts and Whaples, an economics professor at Wake Forest and convert to Catholicism, discuss the Pope's 2015 encyclical Laudato Si, which focuses on environmental issues and issues of markets, capitalism, and inequality more broadly. Given Roberts' strong support of free markets in most circumstances, I was pleased and impressed that he did not simply dismiss the Pope's work as anti-market, as many have. Near the end of the episode, Roberts says:
when I think about people who are hostile to capitalism, per se, I would argue that capitalism is not the problem. It's us. Capitalism is, what it's really good at, is giving us what we want--more or less...And so, if you want to change capitalism, you've got to change us. And that's--I really see that--I like the Pope doing that. I'm all for that.
I agree with Roberts' point that one place where religious leaders have an important role to play in the economy is in guiding the religious toward changing, or at least managing, their desires. Whaples discusses this too, summarizing the encyclical as being mainly about people's excessive focus on consumption:
It's mainly on the--the point we were talking about before, consuming too much. It's exhortation. He is basically saying what has been said by the Church for the last 2000 years...Look, you don't need all this stuff. It's pulling you away from the ultimate ends of your life. You are just pursuing it and not what you are meant, what you were created by God to pursue. You were created by God to pursue God, not to pursue this Mammon stuff.
Roberts and Whaples both agree that a lot of problems that are typically blamed on market capitalism could be improved if people's desires changed, and that religion can play a role in this (though they acknowledge that some non-religious people also turn away from materialism for various reasons.) Roberts' main criticism of the encyclical, however, is that: "The problem is the document has got too much other stuff there...it comes across as an institutional indictment, and much less an indictment of human frailty."

I would add, though, that just as markets reflect human wants, so do institutions, whether deliberately designed or developed and evolved more organically. So it is not totally clear to me that we can separate "indictment of human frailty" and "institutional indictment." No economic institutions are totally value-neutral, even free markets. Institutions and preferences co-evolve, and institutions can even shape preferences. And the Pope is of course the head of one of the oldest and largest institutions in the world, so it does not seem beyond his role to comment on institutions as an integral part of his exhortation to his flock.

Friday, April 7, 2017

Happiness as a Macroeconomic Policy Objective

Economists have mixed opinions about the degree to which subjective wellbeing and happiness should guide policymaking. Wouter den Haan, Martin Ellison, Ethan Ilzetzki, Michael McMahon, and Ricardo Reis summarize a recent survey of European economists by the Centre for Macroeconomics and CEPR. They note that the survey "finds a reasonable amount of openness to wellbeing measures among European macroeconomists. On balance, though, there remains a strong sense that while these measures merit further research, we are a long way off reaching a point where they are widely accepted and sufficiently reliable for macroeconomic analysis and policymaking."

As the authors note, the idea that happiness should be a primary focus of economic policy is central to Jeremy Bentham's "maximum happiness principle." Bentham is considered the founder of utilitarianism. Though the incorporation of survey-based quantitative measures of subjective wellbeing and life satisfaction is a relatively recent development in economics, utilitarianism, of course, is not. Notably, John Stuart Mill and many classical economists including William Stanley Jevons, Alfred Marshall, and Francis Edgeworth were deeply influenced by Bentham.

These classical economists might have been perplexed to see the results of Question 2 of the recent survey, which asked whether quantitative wellbeing analysis should play an important role in guiding policymakers in determining macroeconomic policies. The responses, shown below, were slightly more negative than positive. And yet, what macroeconomists and macroeconomic policymakers do today descends directly from the strategies for "quantitative wellbeing analysis" developed by classical economists.

Source: http://voxeu.org/article/views-happiness-and-wellbeing-objectives-macroeconomic-policy
In The Theory of Political Economy (1871), for example, Jevons wrote:
"A unit of pleasure or pain is difficult even to conceive; but it is the amount of these feelings which is continually prompting us to buying and selling, borrowing and lending, labouring and resting, producing and consuming; and it is from the quantitative effects of the feelings that we must estimate their comparative amounts."
Hence generations of economists have been trained in welfare economics based on utility theory, in which utility is an increasing function of consumption, u(c). Under neoclassical assumptions-- cardinal utility, stable preferences, diminishing marginal utility, and interpersonally comparable utility functions-- trying to maximize a social welfare function that is just the sum of all individual utility functions is totally Benthamite. And a focus on GDP growth is very natural, as more income should mean more consumption.
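In the usual notation, that Benthamite social welfare function over N individuals is simply

\[ W = \sum_{i=1}^{N} u(c_i), \qquad u' > 0, \; u'' < 0, \]

which is why, under these assumptions, raising aggregate consumption (and hence GDP) translates so directly into higher measured welfare.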

I think the focus on happiness survey data stems from recognition of some of the problems with the assumptions that allow us to link GDP to consumption to utility: for example, stable and exogenous preferences and interpersonally comparable utility functions that depend exclusively on one's own consumption. One approach is to relax these assumptions (and introduce others), for example by using more complicated utility functions with additional arguments and/or changing preferences. So we see models with habit formation and "keeping up with the Joneses" effects.
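In generic textbook form (illustrative, not any particular paper's specification), those modifications look like

\[ u(c_t - \theta c_{t-1}) \quad \text{(internal habit formation)}, \qquad u\!\left(c_t / \bar{C}_t^{\gamma}\right) \quad \text{(keeping up with the Joneses)}, \]

where \(\bar{C}_t\) is average consumption in the economy.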

Another approach is to ask people directly about their happiness. This, of course, introduces its own issues of methodology and interpretation, as many of the economist panelists point out. Michael Wickens, for example, notes that the “original happiness literature was in reality a measure of unhappiness: envy over income differentials, illness, divorce, being unmarried etc” and that “none of these is a natural macro policy objective.”

I think that responses to subjective happiness questions also include some backward-looking and some forward-looking components; happiness depends on what has happened to you and what you expect to happen in the future. This makes it hard for me to imagine how to design macroeconomic policy to formally target these indicators, and makes me tend to agree with Reis' opinion that they should be used as “complements to GDP though, not substitutes.”

Thursday, March 16, 2017

Cautious Optimism about Fed Independence

Unsurprisingly, the FOMC decided to raise its federal funds rate target by 25 basis points at its March 15 meeting. The move was widely anticipated, especially following the strong recent employment report. The day before the meeting, the New York Times wrote that the "Fed's challenge, after raising rates, may be existential," anticipating greater-than-ever threats to the Fed's independence from the President and Congress.

In the past few years, Republicans have been more frequent critics of low interest rates than Democrats. During the campaign, in September, Trump accused Yellen of keeping interest rates artificially low to boost support for Obama and the Democrats. However, a few months prior, in May, he expressed support for the Fed's low interest rate policy, primarily because of its favorable impact on the U.S. trade position with China and on government borrowing costs.

And since the election, it can again be presumed that he favors the maintenance of low rates. Why? For the same reason that most Presidents favor lower rates--to boost employment and growth (at least in the short run), and, in turn, Presidential approval. This is exactly why most central banks are granted independence: if Presidents set monetary policy, it would tend to be too loose, and therefore inflationary.

With Trump, this desire to boost employment and growth as a means to gain personal approval is especially acute. Trump is brazen in his desire to pass off job growth as a marker of his personal success:

So the worry reflected in the New York Times piece and elsewhere is that by raising rates, Yellen and the Fed are opposing the President's desires and risking retribution. The retribution could come in the form of legislation that would reduce the Fed's power or discretion, or in the form of Presidential appointments to the Board of Governors who would be more susceptible to Presidential persuasion. Trump will also have the opportunity to reappoint or replace Yellen as Chair next year. It seems, at first blush, that Federal Reserve independence is in serious danger.

While I don't want to undersell that danger, I do want to point out reasons to maintain at least a drop of optimism. First, fear of Presidential or Congressional retribution did not lead the FOMC to avoid this rate hike. Yellen is not willing to play by Trump's rules to ensure her reappointment. That was probably already obvious to anyone familiar with her career and reputation, but it is still noteworthy enough to reflect on. While the Fed's de jure independence is intact for now, its de facto independence seems to be as well.

Second, with a President this polarizing and with approval ratings so low, Presidential attacks on the Fed could actually be just what the Fed needs. So far, Trump has avoided tweeting about the FOMC decision. But suppose Trump does criticize the Fed on Twitter soon. This might not be all bad--it could be a case of "all press is good press." The biggest obstacle to the Fed's communication strategy, and in turn to accountability, seems to be its lack of a broad audience, and Trump--with 27 million Twitter followers to the Fed's 402,000--could inadvertently provide the Fed with the communication platform it has been so sorely lacking. The key will be for the Fed to use any newly gained pulpit to convincingly argue why independence is worth protecting. This will be more effective if combined with discussions of how the Fed intends to hold itself more accountable to the public in the future.

Monday, February 13, 2017

Thoughts on Angrist and Pischke's "Undergraduate Econometrics Instruction"

Joshua Angrist and Jörn-Steffen Pischke, coauthors of "Mastering 'Metrics," have just released a new NBER working paper called "Undergraduate Econometrics Instruction: Through Our Classes, Darkly." They argue that pedagogy has not kept pace with trends in economic research in the past few decades:
In the 1960s and 1970s, an empirical economist’s typical mission was to “explain” economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for “true models” and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift.
The "pedagogical paradigm shift" they call for would include three main components:
One is a focus on causal questions and empirical examples, rather than models and math. Another is a revision of the anachronistic classical regression framework, away from explaining economic processes and towards controlled statistical comparisons. The third is an emphasis on modern quasiexperimental tools.  
Since I am relatively new to both teaching and economics-- I didn't major in economics as an undergraduate, and did my Ph.D. from 2010 to 2015-- the first economics course that I designed and taught at Haverford quite naturally adhered to many of Angrist and Pischke's recommendations. The course, which I taught in Fall 2015 and Fall 2016, is called Advanced Macroeconomics, but is essentially an applied econometrics course on empirical macroeconomic policy analysis. The students in the course are typically juniors and seniors who have already taken econometrics.

On the first day of class, we read excerpts from the 1968 paper "Monetary and Fiscal Actions: A Test of Their Relative Importance in Economic Stabilization" by Andersen and Jordan. The authors want to test whether "the response of economic activity to fiscal actions relative to that of monetary actions is (1) greater, (2) more predictable, and (3) faster." They use very simple regression analysis, essentially regressing changes in GNP on changes in measures of monetary and fiscal actions. This type of regression is now called a "St. Louis Equation," since Andersen and Jordan were at the St. Louis Fed. I ask my students to interpret the regression results and evaluate the validity of the authors' conclusions about policy effectiveness. With some prodding, the students come up with some ideas about potential omitted variable bias and data concerns. But they don't think about reverse causality or the idea of a "controlled statistical comparison." I introduce the reverse causality issue, and much of the rest of the course focuses on quasiexperimental tools.
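For readers who haven't seen one, here is a minimal sketch of what a St. Louis-style regression looks like, using simulated data in place of Andersen and Jordan's series (variable names are mine):

```python
# Illustrative "St. Louis equation": regress changes in GNP on changes in monetary
# and fiscal policy measures. Simulated data -- the point is the specification.
# Note that nothing here addresses reverse causality (policy responding to GNP).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 80  # quarters
d_money = rng.normal(size=n)     # change in a monetary policy measure
d_fiscal = rng.normal(size=n)    # change in a fiscal policy measure
d_gnp = 1.2 * d_money + 0.3 * d_fiscal + rng.normal(scale=0.5, size=n)

X = sm.add_constant(pd.DataFrame({"d_money": d_money, "d_fiscal": d_fiscal}))
print(sm.OLS(d_gnp, X).fit().summary())
```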

The course has no textbook, but we use "Natural Experiments in Macroeconomics" by Nicola Fuchs-Schundeln and Tarek Hassan as the main reference. The course has four units: consumption, monetary policy, fiscal policy, and growth and distribution. In each unit, I assign natural experiment or quasiexperimental papers as well as other papers that attempt to achieve identification via other means, to varying degrees of success. The reading list was influenced by Christina Romer and David Romer's graduate course on Macroeconomic History at Berkeley, which introduced me to the notion of identification and ignited my interest in macroeconomics.

Angrist and Pischke also argue that "Regression should be taught the way it’s now most often used: as a tool to control for confounding factors" in contrast to "the traditional regression framework in which all regressors are treated equally." In other words, the coefficient of interest is on one of the regressors, while the other regressors serve as "control variables needed to insure that the regression-estimated effect of the variable of interest has a causal interpretation."

This advice on teaching regression resonates with my experience co-teaching the economics senior thesis seminar at Haverford for the past two years. Over the summer, my research assistant Alex Rodrigue read through several years' worth of senior theses in the archives and documented the research question in each thesis. We noticed that many students use research questions of the form "What are the factors that affect Y?" and run a regression of Y on all the variables they can think of, treating all regressors equally and not attempting to investigate any particular causal relationship from one variable X to Y. The more successful theses posit a causal relationship from X to Y driven by specific economic mechanisms, then use regression analysis and other methods to estimate and interpret the effect. The latter type of thesis has more pedagogical benefits, whether or not the student can ultimately achieve convincing identification, because it leads the student to think more seriously about economic mechanisms.

Sunday, January 8, 2017

Post-Election Political Divergence in Economic Expectations

"Note that among Democrats, year-ahead income expectations fell and year-ahead inflation expectations rose, and among Republicans, income expectations rose and inflation expectations fell. Perhaps the most drastic shifts were in unemployment expectations:rising unemployment was anticipated by 46% of Democrats in December, up from just 17% in June, but for Republicans, rising unemployment was anticipated by just 3% in December, down from 41% in June. The initial response of both Republicans and Democrats to Trump’s election is as clear as it is unsustainable: one side anticipates an economic downturn, and the other expects very robust economic growth."
This is from Richard Curtin, Director of the Michigan Survey of Consumers. He is comparing the economic sentiments and expectations of Democrats, Independents, and Republicans who took the survey in June and December 2016. A subset of survey respondents take the survey twice, with a six-month gap, so these are respondents who took the survey both before and after the election. The results are summarized in the table below, and really are striking, especially with regard to unemployment. Inflation expectations also rose for Democrats and fell for Republicans (and the way I interpret the survey data is that most consumers see inflation as a bad thing, so lower inflation expectations mean greater optimism).

Notice, too, that self-declared Independents are more optimistic after the election than before. More of them are expecting lower unemployment and fewer are expecting higher unemployment. Inflation expectations also fell from 3% to 2.3%, and income expectations rose. Of course, this is likely based on a very small sample size.
Source: Richard Curtin, Michigan Survey of Consumers