Tuesday, August 16, 2016

More Support for a Higher Inflation Target

Ever since the FOMC announcement in 2012 that 2% PCE inflation is consistent with the Fed's price stability mandate, economists have questioned whether the 2% target is optimal. In 2013, for example, Laurence Ball made the case for a 4% target. Two new NBER working papers out this week each approach the topic of the optimal inflation target from different angles. Both, I think, can be interpreted as supportive of a somewhat higher target-- or at least of the idea that moderately higher inflation has greater benefits and smaller costs than conventionally believed.

The first, by Marc Dordal-i-Carreras, Olivier Coibion, Yuriy Gorodnichenko, and Johannes Wieland, is called "Infrequent but Long-Lived Zero-Bound Episodes and the Optimal Rate of Inflation." One benefit of a higher inflation target is to reduce the occurrence of zero lower bound (ZLB) episodes, so understanding the welfare costs of these episodes is important for calculating an optimal inflation target. The authors explain that in standard models with a ZLB, normally distributed shocks result in short-lived ZLB episodes. This is in contrast with the reality of infrequent but long-lived ZLB episodes. They build models that can generate long-lived ZLB episodes and show that the welfare costs of ZLB episodes increase steeply with duration; eight successive quarters at the ZLB are costlier than two separate four-quarter episodes.
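To see why Gaussian shocks in a standard setup tend to produce short ZLB spells, here is a minimal simulation sketch of my own (not the authors' model): an AR(1) "shadow rate" with normally distributed shocks, where the ZLB binds whenever the shadow rate is negative. The persistence, volatility, and mean below are arbitrary assumptions chosen only for illustration; under calibrations like this one, spells of eight or more consecutive quarters turn out to be quite rare.

```python
import numpy as np

# Illustrative simulation (not the paper's model): an AR(1) "shadow rate" with
# normally distributed shocks. The ZLB binds whenever the shadow rate is
# negative; we tabulate how long those spells last.
rng = np.random.default_rng(0)

rho, sigma, mean_rate = 0.8, 1.0, 2.0   # assumed persistence, shock s.d., mean rate (percent)
T = 200_000                             # number of simulated quarters
rate = np.empty(T)
rate[0] = mean_rate
for t in range(1, T):
    rate[t] = mean_rate + rho * (rate[t - 1] - mean_rate) + sigma * rng.normal()

# Collect the durations of consecutive quarters with the shadow rate below zero
at_zlb = rate < 0
durations, run = [], 0
for binding in at_zlb:
    if binding:
        run += 1
    elif run > 0:
        durations.append(run)
        run = 0
durations = np.array(durations)

print(f"share of quarters at the ZLB:         {at_zlb.mean():.3f}")
print(f"mean spell length (quarters):         {durations.mean():.2f}")
print(f"share of spells lasting 8+ quarters:  {(durations >= 8).mean():.3f}")
```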

If ZLB episodes are costlier than standard models suggest, it makes sense to have a higher inflation target to reduce their frequency. The authors note, however, that the estimates of the optimal target implied by their models are very sensitive to modeling assumptions and calibration:
"We find that depending on our calibration of the average duration and the unconditional frequency of ZLB episodes, the optimal inflation rate can range from 1.5% to 4%. This uncertainty stems ultimately from the paucity of historical experience with ZLB episodes, which makes pinning down these parameters with any degree of confidence very difficult. A key conclusion of the paper is therefore that much humility is called for when making recommendations about the optimal rate of inflation since this fundamental data constraint is unlikely to be relaxed anytime soon."
The second paper, by Emi Nakamura, Jón Steinsson, Patrick Sun, and Daniel Villar, is called "The Elusive Costs of Inflation: Price Dispersion during the U.S. Great Inflation." This paper notes that in standard New Keynesian models with Calvo pricing, one of the main welfare costs of inflation comes from inefficient price dispersion. When inflation is high, prices get further from optimal between price resets. This distorts the allocative role of prices, as relative prices no longer accurately reflect relative costs of production. In a standard New Keynesian model, the implied cost of this reduction in production efficiency is about 10% if you move from 0% inflation to 12% inflation. This is huge-- an order of magnitude greater than the welfare costs of business cycle fluctuations in output. This is why standard models recommend a very low inflation target.
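For reference, the price-dispersion channel can be written in generic textbook notation (a Calvo/Yun-style setup; this is the standard formulation, not necessarily the exact model in Nakamura et al.):

```latex
% Standard Calvo-model price-dispersion term (textbook notation, e.g. Yun 1996):
\[
\Delta_t \;\equiv\; \int_0^1 \left(\frac{P_t(i)}{P_t}\right)^{-\varepsilon} di \;\ge\; 1,
\qquad
Y_t \;=\; \frac{A_t N_t}{\Delta_t}.
\]
% With positive trend inflation, prices that have not been reset drift away from
% the average price, pushing the dispersion term $\Delta_t$ above one and lowering
% the output obtained from a given labor input $N_t$ -- the efficiency loss
% described in the paragraph above.
```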

Empirical evidence on inefficient price dispersion is sparse, because inflation has fluctuated relatively little over the past few decades, the period for which BLS microdata on consumer prices are available. Nakamura et al. undertook the arduous task of extending the BLS microdata back to 1977, encompassing higher-inflation episodes. Calculating price dispersion within a category of goods can be problematic, because dispersion may arise from differences in the quality or features of the goods. The authors instead look at the absolute size of price changes, explaining, "Intuitively, if inflation leads prices to drift further away from their optimal level, we should see prices adjusting by larger amounts when they adjust. The absolute size of price adjustments should reveal how far away from optimal the adjusting prices had become before they were adjusted. The absolute size of price adjustment should therefore be highly informative about inefficient price dispersion."
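As a rough sketch of the statistic they focus on, the mean absolute size of nonzero log price changes by month could be computed from a price panel along the following lines. The column names are placeholders I made up, not the BLS microdata's actual variable names:

```python
import numpy as np
import pandas as pd

# Sketch: mean absolute size of (nonzero) log price changes, by month.
# Input: a hypothetical panel with columns ["item_id", "month", "price"].
def mean_abs_price_change(prices: pd.DataFrame) -> pd.Series:
    df = prices.sort_values(["item_id", "month"]).copy()
    # Log price change within each item
    df["dlogp"] = df.groupby("item_id")["price"].transform(lambda p: np.log(p).diff())
    # Condition on an actual price change (drop zero changes and first observations)
    changed = df[df["dlogp"].abs() > 1e-9].copy()
    changed["abs_dlogp"] = changed["dlogp"].abs()
    return changed.groupby("month")["abs_dlogp"].mean()
```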

They find that the mean absolute size of price changes is fairly constant from 1977 to the present, and conclude that "There is, thus, no evidence that prices deviated more from their optimal level during the Great Inflation period when inflation was running at higher than 10% per year than during the more recent period when inflation has been close to 2% per year. We conclude from this that the main costs of inflation in the New Keynesian model are completely elusive in the data. This implies that the strong conclusions about optimality of low inflation rates reached by researchers using models of this kind need to be reassessed."

Wednesday, July 27, 2016

Guest Post by Alex Rodrigue: The Fed and Lehman

The following is a guest contribution by Alex Rodrigue, a math and economics major at Haverford College and my fantastic summer research assistant. This post, like many others I have written, discusses an NBER working paper, this one by Laurence Ball. Some controversy arose out of the media coverage of Roland Fryer's recent NBER working paper on racial differences in police use of force, which I also covered on my blog, since that working paper has not yet undergone peer review. I feel comfortable discussing working papers since I am not a professional journalist and am capable of discussing methodological and other limitations of research. The working paper Alex will discuss was, like the Fryer paper, covered in the New York Times. I don't think there is a clear-cut criterion for whether a newspaper should report on a working paper or not--certainly the criteria should be more stringent for the NYT than for a blog--but in the case of the Ball paper, there is no question that the coverage was merited.

In his recently released NBER working paper, The Fed and Lehman Brothers: Introduction and Summary, Professor Laurence Ball of Johns Hopkins University summarizes his longer work concerning the actions taken by the Federal Reserve when Lehman Brothers experienced financial difficulties in 2008. The primary questions Professor Ball seeks to answer are why the Federal Reserve let Lehman Brothers fail, and whether the explanations for this decision given by Federal Reserve officials, specifically those provided by Chairman Ben Bernanke, hold up to scrutiny. I was fortunate enough to speak with Professor Ball about this research, along with a number of other Haverford students and economics professors, including the author of this blog, Professor Carola Binder.

Professor Ball’s commitment to unearthing the truth about the Lehman Brothers bankruptcy and the Fed’s response is evidenced by the thoroughness of his research, including his analysis of the convoluted balance sheets of Lehman Brothers and his investigation of all statements and testimonies of Fed officials and Lehman Brothers executives. Professor Ball even filed a Freedom of Information Act lawsuit against the Board of Governors of the Federal Reserve in an attempt to acquire all available documents related to his work. Although the suit was unsuccessful, his commitment to exhaustive research allowed for a comprehensive, compelling argument against the Federal Reserve’s justification for its actions in the wake of Lehman Brothers’ financial distress.

Among other investigations into the circumstances of Lehman Brothers’ failure, Ball analyzes the legitimacy of claims that Lehman Brothers lacked sufficient collateral for a legal loan from the Federal Reserve. By studying the balance sheets of Lehman Brothers from the period prior to its bankruptcy, Ball finds that “Lehman’s available collateral exceeds its maximum liquidity needs by $115 billion, or about 25%”, meaning that the Fed could have offered the firm a legal, secured loan. This finding directly contradicts Chairman Ben Bernanke’s explanations for the Fed’s decision, calling into question the legitimacy of the Fed’s treatment of the firm.

If the given explanation for the Fed’s refusal to help Lehman Brothers is invalid, then what explanation is correct? Ball suggests Treasury Secretary Henry Paulson’s involvement in the negotiations over Lehman at the Federal Reserve Bank of New York, and his hesitance to be known as “Mr. Bailout,” as a possible reason for the Fed’s behavior. Paulson’s involvement in the case seems unusual to Professor Ball, especially because his position as Treasury Secretary gave him “no legal authority over the Fed’s lending decisions.” He also cites the failure of Paulson and Fed officials to anticipate the destructive effects of Lehman’s failure as another explanation for the Fed’s actions.

When asked about the future of Lehman Brothers had the Fed offered the loans necessary for its survival, Ball claims that the firm may have survived a bit longer, or at least long enough to have wound down in a less destructive manner. He believes the Fed’s treatment of Lehman had less to do with the specific financial circumstances of the firm, and more to do with the timing of its collapse. In fact, Professor Ball finds that “in lending to Bear Stearns and AIG, the Fed took on more risk than it would have if it rescued Lehman.” Around the time Lehman Brothers reached out for assistance, Paulson had been stung by criticism of the Bear Stearns rescue and the government takeovers of Fannie Mae and Freddie Mac. If Lehman had failed before Fannie Mae and Freddie Mac or AIG, then maybe the firm would have received the loans it needed to survive.


The failure of Lehman Brothers was not without consequence. In discussion, Professor Ball cited a recent NYT article about his work, specifically mentioning his agreement with its assertion that the Fed’s decision to let Lehman Brothers fail worsened the Great Recession, contributed to public disillusionment with the government’s involvement in the financial sector, and potentially led to the rise of “Trumpism” today.

Thursday, July 21, 2016

Inflation Uncertainty Update and Rise in Below-Target Inflation Expectations

In my working paper "Measuring Uncertainty Based on Rounding: New Method and Application to Inflation Expectations," I develop a new measure of consumers' uncertainty about future inflation. The measure is based on a well-documented tendency of people to use round numbers to convey uncertainty or imprecision across a wide variety of contexts. As I detail in the paper, a strikingly large share of respondents on the Michigan Survey of Consumers report inflation expectations that are a multiple of 5%. I exploit variation over time in the distribution of survey responses (in particular, the amount of "response heaping" around multiples of 5) to create inflation uncertainty indices for the one-year and five-to-ten-year horizons.
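The intuition behind the index can be illustrated with a crude proxy: the share of responses in each survey month that heap at multiples of 5. The index in the paper is built from a richer mixture model over "round" and "non-round" response types, so the sketch below (with hypothetical column names) captures only the raw heaping share:

```python
import pandas as pd

# Crude heaping proxy, not the paper's mixture-model index.
# Input: survey microdata with hypothetical columns
# ["survey_month", "expected_inflation"] (integer percentage-point responses).
def heaping_share(survey: pd.DataFrame) -> pd.Series:
    df = survey.dropna(subset=["expected_inflation"]).copy()
    resp = df["expected_inflation"].round().astype(int)
    # Treat multiples of 5 (0, ±5, ±10, ...) as "round" responses; the paper
    # handles zeros and the round/non-round mixture more carefully.
    df["is_round"] = (resp % 5 == 0)
    return df.groupby("survey_month")["is_round"].mean()
```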

As new Michigan Survey data becomes available, I have been updating the indices and posting them here. I previously blogged about the update through November 2015. Now that a few more months of data are publicly available, I have updated the indices through June 2016. Figure 1, below, shows the updated indices. Figure 2 zooms in on more recent years and smooths with a moving average filter. You can see that short-horizon uncertainty has been falling since its historical high point in the Great Recession, and long-horizon uncertainty has been at an historical low.

Figure 1: Consumer inflation uncertainty index developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/

Figure 2: Consumer inflation uncertainty index (centered 3-month moving average) developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/

The change in response patterns from 2015 to 2016 is quite interesting. Figure 3 shows histograms of the short-horizon inflation expectation responses given in 2015 and in the first half of 2016. The brown bars show the share of respondents in 2015 who gave each response, and the black lines show the share in 2016. For both years, heaping at multiples of 5 is apparent when you observe the spikes at 5 (but not 4 or 6) and at 10 (but not 9 or 11). However, it is less sharp than in other years when the uncertainty index was higher. But also notice that in 2016, the share of 0% and 1% responses rose and the share of 2, 3, 4, 5, and 10% responses fell relative to 2015.

Some respondents take the survey twice with a six-month gap, so we can see how people switch their responses. Of the respondents who chose a 2% forecast in the second half of 2015 (those who were possibly aware of the 2% target), 18% switched to a 0% forecast and 24% switched to a 1% forecast when they took the survey again in 2016. The rise in 1% responses seems most noteworthy to me-- are people finally starting to notice slightly-below-target inflation and incorporate it into their expectations? I think it's too early to say, but it is worth tracking.
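For anyone who wants to reproduce this kind of re-interview comparison, a minimal sketch on a hypothetical two-wave panel (one row per respondent interviewed in both waves; the column names are illustrative, not the Michigan Survey's actual variable names) would look like this:

```python
import pandas as pd

# Distribution of second-interview forecasts among respondents who answered 2%
# in the second half of 2015.
def switch_shares_from_2pct(panel: pd.DataFrame) -> pd.Series:
    started_at_2 = panel[panel["forecast_2015h2"] == 2]
    return started_at_2["forecast_2016"].value_counts(normalize=True).sort_index()
```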

Figure 3: Created by Binder with data from University of Michigan Survey of Consumers




Monday, July 11, 2016

Racial Differences in Police Use of Force

In an NBER working paper released today, Roland Fryer, Jr. uses the NYPD Stop, Question and Frisk database and the Public Police Contact Survey to conduct "An Empirical Analysis of Racial Differences in Police Use of Force." The paper also uses data collected by Fryer and his students, coded from police reports in Houston, Austin, Dallas, Los Angeles, and several parts of Florida. The paper is worth reading in its entirety, and is also the subject of a New York Times article, which summarizes the main findings more thoroughly than I will do here.

Fryer estimates odds ratios to measure racial disparities in various types of outcomes. An odds ratio of 1 would mean that whites and blacks faced the same odds, while an odds ratio greater than 1 for blacks would mean that blacks were more likely than whites to receive that outcome. These odds ratios can be estimated with or without controlling for other variables. One outcome of interest is whether the police used any amount of force at the time of interaction. Panel A of the figure below shows the odds ratio by hour of the day. The point estimate is always above 1, and the 95% confidence interval is almost always above 1, meaning blacks are more likely to have force used against them than whites (and so are Hispanics). This disparity increases during daytime hours, with point estimates nearing 1.4 around 10 a.m.
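As a reminder of what the statistic measures, an unadjusted odds ratio can be computed from a 2-by-2 table of counts as below. The counts are hypothetical and only illustrate the calculation; Fryer's estimates also come in versions that add controls (e.g., via logistic regression):

```python
# Unadjusted odds ratio from a 2x2 table of counts (hypothetical numbers,
# not Fryer's data).
def odds_ratio(force_black, no_force_black, force_white, no_force_white):
    odds_black = force_black / no_force_black
    odds_white = force_white / no_force_white
    return odds_black / odds_white

# Example: with these made-up counts the odds ratio is about 1.8, i.e. the odds
# of force being used are roughly 1.8 times higher for the first group.
print(odds_ratio(force_black=170, no_force_black=830,
                 force_white=100, no_force_white=900))
```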

Panel B shows that the average use of force against both blacks and whites peaks at around 4 a.m. and is lowest around 8 a.m. The racial gap is present at all hours, but largest in the morning and early afternoon.
Fryer builds a model to help interpret whether the disparities evident in the data represent "statistical" or "taste-based" discrimination. Statistical discrimination would result if police used race as a signal for likelihood of compliance or likelihood of having a weapon, whereas taste-based discrimination would be ingrained in officers' preferences. The data are inconsistent with solely statistical discrimination: "the marginal returns to compliant behavior are the same for blacks and whites, but the average return to compliance is lower for blacks – suggestive of a taste-based, rather than statistical, discrimination."

Fryer notes that his paper enters "treacherous terrain" including, but not limited to, data reliability. The oversimplifications and cold calculations that necessarily accompany economic models never tell the whole story, but they can nonetheless promote useful debate. For example, since Fryer finds racial disparities in police use of violence but not in shootings, he notes that "To date, very few police departments across the country either collect data on lower level uses of force or explicitly punish officers for misuse of these tactics...Many arguments about police reform fall victim to the 'my life versus theirs, us versus them' mantra. Holding officers accountable for the misuse of hands or pushing individuals to the ground is not likely a life or death situation and, as such, may be more amenable to policy change."

Wednesday, July 6, 2016

Estimation of Historical Inflation Expectations

The final version of my paper "Estimation of Historical Inflation Expectations" is now available online in the journal Explorations in Economic History. (Ungated version here.)
Abstract: Expected inflation is a central variable in economic theory. Economic historians have estimated historical inflation expectations for a variety of purposes, including studies of the Fisher effect, the debt deflation hypothesis, central bank credibility, and expectations formation. I survey the statistical, narrative, and market-based approaches that have been used to estimate inflation expectations in historical eras, including the classical gold standard era, the hyperinflations of the 1920s, and the Great Depression, highlighting key methodological considerations and identifying areas that warrant further research. A meta-analysis of inflation expectations at the onset of the Great Depression reveals that the deflation of the early 1930s was mostly unanticipated, supporting the debt deflation hypothesis, and shows how these results are sensitive to estimation methodology.
This paper is part of a new "Surveys and Speculations" feature in Explorations in Economic History. Recent volumes of the journal open with a Surveys and Speculations article, where "The idea is to combine the style of JEL [Journal of Economic Literature] articles with the more speculative ideas that one might put in a book – producing surveys that can help to guide future research. The emphasis can either be on the survey or the speculation part." Other examples include "What we can learn from the early history of sovereign debt" by David Stasavage, "Urbanization without growth in historical perspective" by Remi Jedwab and Dietrich Vollrath, and "Surnames: A new source for the history of social mobility" by Gregory Clark, Neil Cummins, Yu Hao, and Dan Diaz Vidal. The referee and editorial reports were extremely helpful, so I really recommend this if you're looking for an outlet for a JEL-style paper with economic history relevance.

My paper grew out of a chapter in my dissertation. I became interested in inflation expectations in the Great Depression after serving as a discussant for a paper by Andy Jalil and Gisela Rua on "Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record." I also remember being struck by Christina Romer and David Romer's (2013, p. 68) remark that a whole “cottage industry” of research in the 1990s was devoted to the question of whether the deflation of 1930-32 was anticipated.
I found it interesting to think about why different papers came to different estimates of inflation expectations in the Great Depression by examining the methodological issues around estimating expectations when direct survey or market measures are not available. I later broadened the paper to consider the range of estimates of inflation expectations in the classical gold standard era and the hyperinflations of the 1920s.

A lot of my research focuses on contemporary inflation expectations, mostly using survey-based measures. Some of the issues that arise in characterizing historical expectations are still relevant even when survey or market-based measures of inflation expectations are readily available--issues of noise, heterogeneity, uncertainty, time-varying risk premia, etc. I hope this piece will also be useful to people interested in current inflation expectations in parts of the world where survey data is unreliable or nonexistent, or where markets in inflation-linked assets are underdeveloped.

What I enjoyed most about writing this paper was trying to determine and formalize the assumptions that various authors used to form their estimates, even when these assumptions weren't laid out explicitly. I also enjoyed conducting my first meta-analysis (thanks to the recommendation of the referee and editor). I found T. D. Stanley's JEL article on meta-analysis to be a useful guide.
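For readers unfamiliar with the mechanics, the workhorse fixed-effect (inverse-variance-weighted) estimator that appears throughout the meta-analysis literature is the following; this is the generic textbook formula, not necessarily the exact specification used in my paper:

```latex
% Fixed-effect meta-analytic estimate of a common effect from N study-level
% estimates b_i with standard errors s_i (generic textbook formula).
\[
\hat{\beta}_{\mathrm{FE}}
  = \frac{\sum_{i=1}^{N} w_i\, b_i}{\sum_{i=1}^{N} w_i},
\qquad
w_i = \frac{1}{s_i^{2}},
\qquad
\mathrm{SE}\!\left(\hat{\beta}_{\mathrm{FE}}\right)
  = \left(\sum_{i=1}^{N} w_i\right)^{-1/2}.
\]
```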




Friday, June 17, 2016

The St. Louis Fed's Regime-Based Approach

St. Louis Federal Reserve President James Bullard today presented “The St. Louis Fed’s New Characterization of the Outlook for the U.S. Economy.” This is a change in how the St. Louis Fed thinks about medium- and longer-term macroeconomic outcomes and makes recommendations for the policy path.
“The hallmark of the new narrative is to think of medium- and longer-term macroeconomic outcomes in terms of regimes. The concept of a single, long-run steady state to which the economy is converging is abandoned, and is replaced by a set of possible regimes that the economy may visit. Regimes are generally viewed as persistent, and optimal monetary policy is viewed as regime dependent. Switches between regimes are viewed as not forecastable.”
Bullard describes three “fundamentals” that characterize which regime the economy is in: productivity growth (high or low), the real return on short-term government debt (high or low), and the state of the business cycle (recession or not). We are currently in a low-productivity-growth, low-real-rate, no-recession regime. The St. Louis Fed’s forecasts for the next 2.5 years are made under the assumption that we will stay in such a regime over the forecast horizon. They forecast real output growth of 2%, 4.7% unemployment, and 2% trimmed-mean PCE inflation.

As an example of why using this regime-based forecasting approach matters, imagine that the economy is in a low productivity growth regime in which the long-run growth rate is 2%, and that under a high productivity growth regime, the long-run growth rate would be 4%. Suppose you are trying to forecast the growth rate a year from now. One approach would be to come up with an estimate of the probability P that the economy will have switched to the high productivity regime, then estimate a growth rate of G1=(1-P)*2%+P*4%. An alternative is to assume that the economy will stay in its current regime, in which case your estimate is G2=2%<G1, and the chance that the economy switches regime is an “upside risk” to your forecast. This second approach is more like what the St. Louis Fed is doing. Think of it as taking the mode instead of the expected value (mean) of the probability distribution over future growth. They are making their forecasts based on the most likely outcome, not the weighted average of the different outcomes. 
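The distinction is easy to see in a couple of lines of arithmetic. The switching probability below is an arbitrary illustrative number, not anything from the St. Louis Fed:

```python
# Mean (probability-weighted) forecast versus a "stay in the current regime"
# (modal) forecast, using the stylized numbers from the example above and an
# assumed 10% chance of switching to the high-productivity regime.
p_switch = 0.10
g_low, g_high = 0.02, 0.04            # long-run growth in the low and high regimes

mean_forecast = (1 - p_switch) * g_low + p_switch * g_high   # G1 = 2.2% here
modal_forecast = g_low                                       # G2 = 2%; the switch is an "upside risk"

print(f"mean forecast:  {mean_forecast:.2%}")
print(f"modal forecast: {modal_forecast:.2%}")
```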

Bullard claims that P is quite small, i.e. regimes are persistent, and that regime changes are unforecastable. He therefore argues that "the best that we can do today is to forecast that the current regime will persist and set policy appropriately for this regime."  The policy takeaway is that “In light of this new approach and the associated forecast, the appropriate regime-dependent policy rate path is 63 basis points over the forecast horizon.”

The approach of forecasting that the current regime will persist and setting policy appropriately for the current regime is an interesting contrast to the "robust control" literature. As Richard Dennis summarizes, "rather than focusing on the 'most likely' outcome or on the average outcome, robust control argues that policymakers should focus on and defend against the worst-case outcome." He adds:
"In an interesting application of robust control methods, Sargent (1999) studies a simple macro-policy model and shows that robustness, in the “robust control” sense, does not necessarily lead to policy attenuation. Instead, the robust policy rule may respond more aggressively to shocks. The intuition for this result is that, by pursuing a more aggressive policy, the central bank can prevent the economy from encountering situations where model misspecification might be especially damaging."
In Bullard's Figure 1 (below), we see the baseline forecast corresponding to continuation of the no-recession, low-real-rate, low-productivity-growth regime, along with some of the upside risks to the policy rate path corresponding to switches to high-real-rate and/or high-productivity-growth regimes. The diagram also includes an arrow pointing to recession, but the four possible outcomes associated with that switch are omitted. Bullard writes that "We are currently in a no recession state, but it is possible that we could switch to a recession state. If such a switch occurred, all variables would be affected but most notably, the unemployment rate would rise substantially. Again, the possibility of such a switch does not enter directly into the forecast because we have no reason to forecast a recession given the data available today. The possibility of recession is instead a risk to the forecast."

In a robust-control-inspired approach, the possibility of switching into a recession state (and hitting the zero lower bound again) would be weighted heavily in determining the policy path, because even if it is unlikely, it would be really bad. What would that look like? This gets a little tricky because the "fundamentals" characterizing these regimes are not all fundamental in the sense of being exogenous to monetary policy. In particular, whether and when the economy enters another recession depends on the path of the policy rate. So too, indirectly, does the real rate of return on short-term government debt, both through liquidity premia and through expected inflation if we get back to the ZLB.
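One stylized way to see the contrast is to compare a policy rate chosen to minimize the probability-weighted (expected) loss across regimes with one chosen to minimize the worst-case loss. The regimes, loss functions, and probabilities below are made up purely for illustration; they are not from Sargent (1999), Dennis, or Bullard's framework. The point is only that the minimax criterion pays no attention to how unlikely the bad regime is:

```python
import numpy as np

# Stylized comparison of expected-loss and worst-case (robust-control-flavored)
# policy choice across regimes. All numbers are assumptions for illustration.
policy_rates = np.linspace(0.0, 3.0, 301)     # candidate policy rates (percent)

targets = {"baseline": 0.63, "high_growth": 2.0, "recession": 0.0}   # "appropriate" rate per regime
weights = {"baseline": 1.0, "high_growth": 1.0, "recession": 16.0}   # recession/ZLB losses weighted heavily
probs   = {"baseline": 0.94, "high_growth": 0.05, "recession": 0.01}

def loss(rate, regime):
    return weights[regime] * (rate - targets[regime]) ** 2

expected_loss   = [sum(probs[k] * loss(r, k) for k in probs) for r in policy_rates]
worst_case_loss = [max(loss(r, k) for k in probs) for r in policy_rates]

print(f"expected-loss-minimizing rate:  {policy_rates[int(np.argmin(expected_loss))]:.2f}")
print(f"worst-case-minimizing rate:     {policy_rates[int(np.argmin(worst_case_loss))]:.2f}")
# With these made-up numbers, the minimax rule chooses a noticeably lower rate,
# pulled toward what would be appropriate in the (unlikely but costly) recession regime.
```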

Wednesday, May 25, 2016

Behavioral Economics Then and Now

"Although it has never been clear whether the consumer needs to be protected from his own folly or from the rapaciousness of those who feed on him, consumer protection is a topic of intense current interest in the courts, in the legislatures, and in the law schools." So write James J. White and Frank W. Munger Jr. in a 1971 article from the Michigan Law Review.

Today, it is not uncommon for behavioral economists to weigh in on financial regulatory policy and consumer protection. White and Munger, not economists but lawyers, played the role of behavioral economists before the phrase was even coined. They managed to anticipate many of the hypotheses and themes that would later dominate behavioral economics-- but with more informal and colorful language. A number of new legislative and judicial acts in the late 1960s provided the impetus for their study: 
"Congress has passed the Truth-in-Lending Act; the National Conference of Commissioners on Uniform State Laws has proposed the Uniform Consumer Credit Code; and many states have enacted retail installment sales acts to update and supplement their long-standing usury laws. These legislative and judicial acts have always relied, at best, on anecdotal knowledge of consumer behavior. In this Article we offer the results of an empirical study of a small slice of consumer behavior in the use of installment credit. 
In their recent efforts, the legislatures, by imposing new interest rate disclosure requirements on installment lenders,  have sought to protect the consumer against pressures to borrow money at a higher rate of interest than he can afford or need pay. The hope, if not the expectation, of the drafters of such disclosure legislation is that the consumer who is made aware of interest rates will seek the lowest-priced lender or will decide not to borrow. This migration of the consumers to the lowest-priced lender will, so the argument goes, require the higher-priced lender to reduce his rate in order to retain his business. These hopes and expectations are founded on the proposition that the consumer is largely ignorant of the interest rate that he pays; this ignorance presumably keeps him from going to a lender with cheaper rates. Knowledge of interest rates, it is believed, will rectify this defect…”
Here comes their "squatting baboon" metaphor:
“Presumably, consumers in a perfect market will behave like water in a pond, which gravitates to the lowest point-i.e., consumer borrowers should all turn to the lender that gives the cheapest loan. We began this project with a strong suspicion-based on the observations of others-that the consumer credit market is far from perfect and that water governed by the force of gravity is a poor metaphor with which to describe the behavior of consumer debtors. The consumer debtor's choice of creditor clearly involves consideration of many factors besides interest rate. Therefore, a metaphor that better describes our suspicions about the borrower's behavior in a market in which rate differences appear involves a group of monkeys in a cage with a new baboon of unknown temperament. The baboon squats in one corner of the cage near some choice, ripe bananas. In the far corner of the cage is a supply of wilted greens and spoiled bananas, the monkeys' usual fare. Some of the monkeys continue eating their usual fare because they are unaware of the new bananas and the visitor. Other monkeys observe the new bananas but do not approach them. Still others, more daring or intelligent than the rest, seek ways of snatching an occasional banana from the baboon's stock. The baboon strikes at all the brown monkeys but he permits black monkeys to eat without interference. Yet many of the black monkeys make no attempt to eat. One suspects that a social scientist who interviewed the members of the monkey tribe about their experience would find that many of those who saw and appreciated the choice bananas would be unable to articulate the reasons for their failure to eat any of them. The social scientist might also discover that a few who looked at the baboon in obvious fright would nevertheless deny that they were afraid. In addition, he might find that some were so busy picking fleas or nursing that they did not observe the choice bananas at all. We suspected that consumer borrowers had similarly diverse reasons for their behavior.

We presumed that some paid high interest rates only because of ignorance of lower rates and that others correctly concluded that they could not qualify for a cheaper loan than they received. Others, we suspected, were merely too lazy or too fearful of bankers to seek lower rates.”

Suggesting that people are just lazy, or comparing them to monkeys, has (understandably) fallen out of fashion. But pointing out that consumers are not Homo economicus has not. The authors interview people in Washtenaw County, Michigan, who had purchased a new car in 1967. Most of the lenders in the area loaned money at the legal maximum add-on rate, while Huron Valley National Bank (HVNB) loaned at a significantly lower rate. They interview an HVNB loan officer to determine whether different borrowers would have received a loan from HVNB in 1967 and on what terms. They find that most of the consumers in their sample could have borrowed at a lower rate.

A majority of the sample did not know the rate at which they had borrowed. Most had allowed the auto dealer to arrange the loan rather than shopping for the lowest rate. Even if they knew that lower rates were available elsewhere, they declined to shop around. The authors find differences in financial sophistication, education, and job characteristics between consumers who shopped around for lower rates and those who did not. They conclude:
“The results of our study suggest that, at least with regard to auto loans, the disclosure provisions of the Truth-in-Lending Act will be largely ineffective in changing consumer behavior patterns. Certainly the Act will not improve the status of those who already know that lower rates are available elsewhere. And we discovered no evidence that knowledge of the interest rate-which, even under the Act will usually come after a tentative agreement to purchase a specified car has been reached-will stimulate a substantial percentage of consumers to shop for a lower rate elsewhere.”
The authors come down as pessimistic about the Truth-in-Lending Act, but make no new policy recommendations of their own. If they were writing today, instead of just predicting that a policy would be ineffective, they might suggest ways to design the policy to "nudge" consumers to make different decisions. The Truth-in-Lending Act has been amended numerous times over the years, and was placed under the authority of the Consumer Financial Protection Bureau (CFPB) by the Dodd-Frank Act. Behavioral economics has played a central role in the work of the CFPB. But the active application of behavioral law and economics to regulatory policy is not universally accepted. For example, Todd Zywicki writes:
"We argue that even if the findings of behavioral economics are sound and robust, and the recommendations of behavioral law and economics are coherent (two heroic assumptions, especially the latter), there still remain vexing problems of actually implementing behavioral economics in a real-world political context. In particular, the realities of the political and regulatory process suggests that the trip from the laboratory to the real-world crafting of regulations that will improve consumers’ lives is a long and rocky one."
Zywicki expands this argument in a paper coauthored with Adam Christopher Smith. While I'm not convinced that their rather negative portrayal of the CFPB is warranted, I do think the paper presents some provocative cautions about how behavioral economics is applied to policy--especially the warning against "selective modeling of behavioral bias," a concern I have heard even top behavioral economists raise.