What if Country Bonds Were Linked to GDP Growth?

What if countries could have some built-in flexibility in repaying their debts: specifically, what if the repayment of the debt was linked to whether the domestic economy was growing? The burden of debt payments would then fall in a recession, which is exactly when government sees tax revenues fall and social expenditures rise. Imagine, for example, how the situation of Greece with government debt would have been different if the country’s lousy economic performance had automatically restructured its debt burden in a way that reduced current payments. Of course, the tradeoff is that when the economy is going well, debt payments are higher–but presumably also easier to bear.
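To make the mechanism concrete, here is a minimal sketch of one way a GDP-linked coupon could be computed. The linkage formula and all of the numbers are hypothetical illustrations of the general idea, not the terms of any actual or proposed instrument:

```python
# Toy illustration of a GDP-linked coupon (hypothetical formula and numbers).
# The coupon scales a base rate by realized GDP relative to a baseline path,
# so debt service falls in a slump and rises in a boom.

def gdp_linked_coupon(principal, base_rate, gdp_actual, gdp_baseline):
    """Coupon payment that moves one-for-one with GDP relative to baseline."""
    return principal * base_rate * (gdp_actual / gdp_baseline)

principal = 1000.0   # face value of the bond
base_rate = 0.04     # 4% coupon if GDP is exactly on its baseline path

# Three scenarios for GDP against a baseline level of 100:
for label, gdp in [("recession", 92.0), ("baseline", 100.0), ("boom", 108.0)]:
    payment = gdp_linked_coupon(principal, base_rate, gdp, 100.0)
    print(f"{label:>9}: coupon = {payment:.2f}")

# recession: coupon = 36.80  -- payments shrink when the economy shrinks
#  baseline: coupon = 40.00
#      boom: coupon = 43.20  -- payments rise, but are easier to bear
```

Real proposals differ on exactly this design choice, such as whether the link should run through the level of GDP, the growth rate, or threshold-triggered payments, which is part of what the volume discussed below sorts out.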

There have been some experiments along these lines in recent decades, and the idea is now gaining substantial interest. James Benford, Jonathan D. Ostry, and Robert Shiller have edited a 14-paper collection, Sovereign GDP-Linked Bonds: Rationale and Design (March 2018, Centre for Economic Policy Research, available with free registration here).

For a taste of the arguments, here are a few thoughts from the opening essay: “Overcoming the obstacles to adoption of GDP-linked debt,” by Eduardo Borensztein, Maurice Obstfeld, and Jonathan D. Ostry.  They provide an overview of issues like: Would borrowers have to pay higher interest rates for GDP-linked borrowing? Or would the reduced risk of default counterbalance other risks? What measure of GDP would be used as part of such a debt contract? They write:

“Elevated sovereign debt levels have become a cause for concern for countries across the world. From 2007 to 2016, gross debt levels shot up in advanced economies – from 24 to 89% of GDP in Ireland, from 35 to 99% of GDP in Spain, and from 68 to 128% of GDP in Portugal, for example. The increase was generally more moderate in emerging economies, from 36 to 47% of GDP on average, but the upward trend continues. …

“GDP-linked bonds tie the value of debt service to the evolution of GDP and thus keep it better aligned with the overall health of the economy. As public sector revenues are closely related to economic performance, linking debt service to economic growth acts as an automatic stabiliser for debt sustainability. … While most efforts to reform the international financial architecture over the past 15 years have aimed at facilitating defaults, for example through a sovereign debt restructuring mechanism (SDRM), the design of a sovereign debt structure that is less prone in the first place to defaults and their associated costs would be a more straightforward policy initiative. GDP-linked debt is an attractive instrument for this purpose because it can ensure that debt stays in step with the growth of the economy in the long run and can create fiscal space for countercyclical policies during recessions. …

“The first lesson is to ensure that the payout structure of the instrument reflects the state of the economy and is free from complexities or delays that can make payments stray from their link to the economic situation. To date, GDP-linked debt has been issued primarily in the context of debt restructuring operations, from the Brady bond exchanges that began in 1989 to the more recent cases of Greece and Ukraine. … This feature, however, gave rise to structures that were not ideal from the point of view of debt risk management. For example, some specifications provided for large payments if GDP crossed certain arbitrary thresholds or were a function of GDP’s distance from those thresholds. In addition, some payout formulas were sensitive to the exchange rate, failed to take inflation into account, or were affected by revisions of population or national account statistics. All these mechanisms resulted in payments that were disconnected from the business cycle and the state of public finances, detracting from the value of these GDP-linked instruments for risk management (see Borensztein 2016).

“The second lesson is that the specification of the payout formula can strengthen the integrity of the instruments. GDP statistics are supplied by the sovereign, and there is no realistic alternative to this arrangement. This fact is often held up as an obstacle to wide market acceptance of the instruments. However, the misgivings seem to have been exaggerated, as under-reporting of GDP growth is not a politically attractive idea for a policymaker whose success will be judged on the strength of economic performance. … 

“[T]he main source of reluctance regarding the use of GDP-linked debt, or insurance instruments more generally, may not stem from markets but from policymakers. Politicians tend to have relatively short horizons, and would not find debt instruments attractive that offer insurance benefits in the medium to long run but are costlier in the short run, as they include an insurance premium driven by the domestic economy’s correlation with the global business cycle. In addition, if the instruments are not well understood, they may be perceived as a bad choice if the economy does well for some time. The value of insurance may come to be appreciated only years later, when the country hits a slowdown or a recession, but by then the politician may be out of office. While this problem is not ever likely to go away completely, multilateral institutions might be able to help by providing studies on the desirability of instruments for managing country risk, and how to support their market development, in analogy to work done earlier in the millennium promoting emerging markets’ domestic-currency sovereign debt markets.”

Back in 2015, the Ad Hoc London Term Sheet Working Group decided to produce a model example of how a specific contract for a GDP-linked government bond might work, with the idea that the framework could then be adapted and applied more broadly. This volume has a short and readable overview of the results by two members of the working group, in “A Term Sheet for GDP-linked bonds,” by Yannis Manuelides and Peter Crossan. I’ll just add that in the introduction to the book, Robert Shiller characterizes the London Term Sheet approach in this way:

“The kind of index-linked bond described in the London Term Sheet in this volume is close to a conventional bond, in that it has a fixed maturity date and a balloon payment at the end. The complexities described in the Term Sheet are all about inevitable details and questions, such as how the coupon payments should be calculated for a GDP-linked bond that is issued on a specific date within the quarter, when the GDP data are issued only quarterly. The term sheet is focused on a conceptually simple concept for a GDP-linked bond, as it should be. It includes, as a special case, the even simpler concept – advocated recently by me and my Canadian colleague Mark Kamstra – of a perpetual GDP-linked bond, if one sets the time to maturity to infinity. Perpetual GDP-linked bonds are an analogue of shares in corporations, but with GDP replacing corporate earnings as a source of dividends. However, it seems there are obstacles to perpetual bonds and these obstacles might slow the acceptance of GDP-linkage. The term sheet here gets the job done with finite maturity, shows how a GDP-linkage can be done in a direct and simple way, and should readily be seen as appealing.

“The London Term Sheet highlighted in this volume describes a bond which is simple and attractive, and the chapters in this volume that spell out other considerations and details of implementation, have the potential to reduce the human impact of risks of economic crisis, both real crises caused by changes in technology and environment, and events better described as financial crises. The time has come for sovereign GDP-linked bonds. With this volume they are ready to go.”

Rising Interest Rates, but Easier Financial Conditions

The Federal Reserve has been gradually raising its target interest rate (the “federal funds interest rate”) for about two years, since early 2016. This increase has been accompanied by a controversy that I think of as a battle of metaphors. By raising interest rates, is the Fed stepping on the brakes of the economy? Or is it just easing off on the accelerator pedal?

To shed light on this controversy, it would be useful to have a measure of financial conditions in the US economy that doesn’t involve one specific interest rate, but instead looks at actual factors like whether credit is relatively available or not, whether leverage is high or low, and whether those who provide loans are able to raise money with relatively low risk. Fortunately, the Federal Reserve Bank of Chicago has been putting together a National Financial Conditions Index based on exactly these components. Here’s a figure of the data going back to the 1970s.

This figure needs a little interpreting. Zero is when financial conditions are average. Positive numbers mean that financial conditions are tight or difficult. For example, you can see that in the middle of the Great Recession, there is an upward spike showing that financial conditions were a mess and it was hard to raise capital or get a loan at that time. Several previous recessions show a similar spike. On the other side, negative numbers mean that financial conditions are easy by historical standards, and it is relatively simple to raise capital and receive loans.

As the Chicago Fed explains: “The National Financial Conditions Index (NFCI) and adjusted NFCI (ANFCI) are each constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1971. Positive values of the NFCI have been historically associated with tighter-than-average financial conditions, while negative values have been historically associated with looser-than-average financial conditions.”
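As a rough illustration of that construction, a standardized index combines several indicators after scaling each one to mean zero and standard deviation one. The sketch below is a generic z-score composite on made-up data; the actual NFCI is estimated from a much larger set of indicators with a weighting scheme described in the Chicago Fed’s documentation:

```python
import numpy as np

# Generic sketch of a standardized financial-conditions index: z-score each
# indicator, average them, and rescale so the composite has mean zero and
# standard deviation one. Zero then means "average conditions" and positive
# values mean tighter than average. Placeholder data and equal weights --
# the real NFCI uses many more indicators and an estimated weighting scheme.

rng = np.random.default_rng(0)
indicators = rng.normal(size=(200, 3))   # stand-in: 200 periods, 3 raw series

z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
composite = z.mean(axis=1)               # equal-weighted combination
index = composite / composite.std()      # rescale to unit standard deviation

print(round(index.mean(), 4), round(index.std(), 4))   # ~0.0 and 1.0
```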

The interesting thing about our present time is that although the Fed has been raising its target interest rate since early 2016, financial conditions haven’t gotten tighter. Instead the National Financial Conditions Index is lower now than it was back in early 2016; indeed, this measure is at its lowest level in about 25 years. At least for the last two years, any concerns that a higher federal funds interest rate would choke off finance and lending have been misplaced. Instead, having the Fed move the federal funds rate back close to its historically typical levels seems to have helped in convincing financial  markets that the crisis was past and normality was returning, so it was a good time to provide finance or to borrow.

The National Financial Conditions Index can also be broken down into three parts: leverage, risk, and credit. The Chicago Fed explains: “The three subindexes of the NFCI (risk, credit and leverage) allow for a more detailed examination of the movements in the NFCI. Like the NFCI, each is constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1973. The risk subindex captures volatility and funding risk in the financial sector; the credit subindex is composed of measures of credit conditions; and the leverage subindex consists of debt and equity measures. Increasing risk, tighter credit conditions and declining leverage are consistent with increases in the NFCI. Therefore, positive values for each subindex have been historically associated with a tighter-than-average corresponding aspect of financial conditions, while negative values indicate the opposite.”

Here’s a figure showing the breakdown of the three components. Although the three lines do tend to rise and fall together, it seems clear that the blue line–showing the extent of leverage or borrowing–plays an especially large role in the fluctuations over the last 25 years. But right now, all three parts of the index are comfortably down in the negative numbers.

Patterns can turn, of course. Perhaps if the Federal Reserve increases the federal funds rate at its next scheduled meeting (March 20-21), financial conditions will worsen in some substantial way. But at least for now, the Federal Reserve raising interest rates back from the near-zero rates that had prevailed for seven years is having the (somewhat paradoxical) effect of being accompanied by  looser financial conditions. And concerns over raising those rates at least a little further seem overblown.

Network Effects, Big Data, and Antitrust Issues For Big Tech

You don’t need to be a weatherman to see that the antitrust winds are blowing toward the big tech companies like Amazon, Facebook, Google, Apple, and others. But an immediate problem arises. At least under modern US law, being a monopoly (or a near-monopoly) is not illegal. Nor is making high profits illegal, especially when it is accomplished by providing services that are free to consumers and making money through advertising. Antitrust kicks in when anticompetitive behavior is involved: that is, a situation in which a firm takes actions which have the effect of blocking actual or potential competitors.

For example, the antitrust case against Microsoft that was settled back in 2001 wasn’t about the firm being big or successful, but rather about the firm engaging in an anticompetitive practice of “tying” together separate products, and in this way trying to use its near-monopoly position in the operating systems that run personal computers to gain a similar monopoly position for its internet browser–and thereby to drive off potential competitors.

In the case of big tech companies, a common theory is that they hold a monopoly position because of what economists call “network effects.” The economic theory of network effects started with the observation that certain products are only valuable if other people also own the same product–think of a telephone or fax machine. Moreover, the product becomes more valuable as the network gets bigger. When “platform” companies like Amazon or Facebook came along, network effects got a new twist. The idea became that if a website managed to gain a leadership position in attracting buyers and sellers (like Amazon, OpenTable, or Uber), or users and providers of content (like Facebook, YouTube, or Twitter), then others would be attracted to the website as well. Any potentially competing website might have a hard time building up its own critical mass of users, in which case network effects are acting as an anticompetitive barrier.

Of course, the idea that an already-popular meeting place has an advantage isn’t limited to the virtual world: many shopping malls and downtown areas rely on a version of network effects, too, as do stock markets, flea markets, and bazaars.

But while it’s easy to sketch an argument about network effects in the air, the question of how network effects work in reality isn’t a simple one. David S. Evans and Richard Schmalensee offer a short essay on “Debunking the ‘Network Effects’ Bogeyman: Policymakers need to march to the evidence, not to slogans,” in Regulation magazine (Winter 2017-18, pp. 36-39).

As they point out, lots of companies that at the time seemed to have an advantage of “network effects” have faltered: for example, eBay looked like the network Goliath back in 2001, but it was soon overtaken by Amazon. They write:

“The flaw in that reasoning is that people can use multiple online communications platforms, what economists call `multihoming.’   A few people in a social network try a new platform. If enough do so and like it, then eventually all network members could use it and even drop their initial platform. This process has happened repeatedly. AOL, MSN Messenger, Friendster, MySpace, and Orkut all rose to great heights and then rapidly declined, while Facebook, Snap, WhatsApp, Line, and others quickly rose. …

“Systematic research on online platforms by several authors, including one of us, shows considerable churn in leadership for online platforms over periods shorter than a decade. Then there is the collection of dead or withered platforms that dot this sector, including Blackberry and Windows in smartphone operating systems, AOL in messaging, Orkut in social networking, and Yahoo in mass online media … 

“The winner-take-all slogan also ignores the fact that many online platforms make their money from advertising. As many of the firms that died in the dot-com crash learned, winning the opportunity to provide services for free doesn’t pay the bills. When it comes to micro-blogging, Twitter has apparently won it all. But it is still losing money because it hasn’t been very successful at attracting advertisers, which are its main source of income. Ignoring the advertising side of these platforms is a mistake. Google is still the leading platform for conducting searches for free, but when it comes to product searches – which is where Google makes all its money – it faces serious competition from Amazon. Consumers are roughly as likely to start product searches on Amazon.com, the leading e-commerce firm, as on Google, the leading search-engine firm.”

It should also be noted that if network effects are large and block new competition, they pose a problem for antitrust enforcement, too. Imagine that Amazon or Facebook was required by law to split into multiple pieces, with the idea that the pieces would compete with each other. But if network effects really are large, then one or another of the pieces will grow to critical mass and crowd out the others–until the status quo re-emerges.
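That re-concentration logic is easy to see in a toy simulation. In the sketch below, each new user picks a platform with probability proportional to its current user count raised to a power gamma, where gamma greater than one is a crude stand-in for strong network effects. All parameter values are invented for illustration, and the model deliberately ignores multihoming and quality differences – precisely the forces Evans and Schmalensee emphasize:

```python
import random

# Toy tipping model: each new user joins a platform with probability
# proportional to (current users)**gamma. With gamma > 1 (a stand-in for
# strong network effects), an even three-way split -- as after a forced
# breakup -- tends to re-concentrate into near-monopoly. Illustrative only.

def simulate(gamma, n_platforms=3, start_users=100, new_users=50_000, seed=1):
    random.seed(seed)
    users = [start_users] * n_platforms       # even split after the "breakup"
    for _ in range(new_users):
        weights = [u ** gamma for u in users]
        winner = random.choices(range(n_platforms), weights=weights)[0]
        users[winner] += 1
    return [round(u / sum(users), 2) for u in users]

print(simulate(gamma=1.5))   # strong network effects: one platform dominates
print(simulate(gamma=1.0))   # proportional growth: the split largely persists
```

Letting users belong to more than one platform at a time – the multihoming in the quotation above – is exactly the kind of addition that undermines the winner-take-all conclusion in richer models.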

A related argument is that big tech firms have access to Big Data from many players in a given market, which gives them an advantage. Evans and Schmalensee are skeptical of this point, too. They write:

“Like the simple theory of network effects, the ‘big data is bad’ theory, which is often asserted in competition policy circles as well as the media, is falsified by not one, but many counterexamples. AOL, Friendster, MySpace, Orkut, Yahoo, and many other attention platforms had data on their many users. So did Blackberry and Microsoft in mobile. As did numerous search engines, including AltaVista, Infoseek, and Lycos. Microsoft did in browsers. Yet in these and other categories, data didn’t give the incumbents the power to prevent competition. Nor is there any evidence that their data increased the network effects for these firms in any way that gave them a substantial advantage over challengers.

“In fact, firms that at their inception had no data whatsoever sometimes displaced the leaders. When Facebook launched its social network in India in 2006 in competition with Orkut, it had no data on Indian users since it didn’t have any Indian users. That same year Orkut was the most popular social network in India, with millions of users and detailed data on them. Four years later, Facebook was the leading social network in India. Spotify provides a similar counterexample. When Spotify entered the United States in 2011, Apple had more than 50 million iTunes users and was selling downloaded music at a rate of one billion songs every four months. It had data on all those people and what they downloaded. Spotify had no users and no data when it started. Yet it has been able to grow to become the leading source of digital music in the world. In all these and many other cases the entrants provided a compelling product, got users, obtained data on those users, and grew.

“The point isn’t that big data couldn’t provide a barrier to entry or even grease network effects. As far as we know, there is no way to rule that out entirely. But at this point there is no empirical support that this is anything more than a possibility, which one might explore in particular cases.”

Evans and Schmalensee are careful to note that they are not suggesting that online platform companies should be exempt from antitrust scrutiny, and perhaps in some cases the network and data arguments might carry weight. As they write:

“Nothing we’ve said here is intended to endorse a ‘go-easy’ policy toward online platforms when it comes to antitrust enforcement. … There’s no particular reason to believe these firms are going to behave like angels. Whether they benefit from network effects or not, competition authorities ought to scrutinize dominant firms when it looks like they are breaking the rules and harming consumers. As always, the authorities should use evidence-based analysis grounded in sound economics. The new economics of multisided platforms provides insights into strategies these firms may engage in as well as cautioning against the rote application of antitrust analysis designed for single-sided firms to multisided ones.

“It is time to retire the simple network effects theory – which is older than the fax machine – in place of deeper theories, with empirical support, of platform competition. And it is not too soon to ask for supporting evidence before accepting any version of the ‘big data is bad’ theory. Competition policy should march to the evidence, not to the slogans.”

For an introduction to the economics of multi-sided “platform” markets, a useful starting point is Marc Rysman’s “The Economics of Two-Sided Markets” in the Summer 2009 issue of the Journal of Economic Perspectives (23:3, 125-43). 

For an economic analysis of policy, the underlying reasons matter a lot, because they set a precedent that will affect future actions by regulators and firms. Thus, it’s not enough to rail against the size of Big Tech. It’s necessary to get specific: for example, about how public policy should view network effects or online buyer-and-seller platforms, and about the collection, use, sharing, and privacy protections for data. We certainly don’t want the current big tech companies to stifle new competition or abuse consumers. But in pushing back against the existing firms, we don’t want regulators to set rules that could close off new competitors, either.

Behind the Declining Labor Share of Income

Total income earned can be divided into what is earned by labor in wages, salaries, and benefits, and what is earned by capital in profits and interest payments. The line between these categories can be blurry: for example, should the income received by someone running their own business be counted as “labor” income received for their hours worked, or as “capital” income received from their ownership of the business, or some mixture of both?
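A small worked example shows how much this classification choice can matter. The three conventions sketched below – count all proprietors’ income as labor, count it all as capital, or split it in the same proportion as the rest of the economy – are standard options in this literature, but the numbers themselves are invented for illustration:

```python
# How the treatment of proprietors' (self-employment) income moves the
# measured labor share. All numbers are made up for illustration.

employee_comp = 60.0     # wages, salaries, and benefits
capital_income = 25.0    # profits, interest, rents
proprietors = 15.0       # mixed labor/capital income of the self-employed
total = employee_comp + capital_income + proprietors

all_to_labor = (employee_comp + proprietors) / total
all_to_capital = employee_comp / total
# Split proprietors' income in the same labor/capital proportion as the
# rest of the economy:
labor_frac = employee_comp / (employee_comp + capital_income)
proportional = (employee_comp + labor_frac * proprietors) / total

print(f"all to labor:   {all_to_labor:.1%}")    # 75.0%
print(f"all to capital: {all_to_capital:.1%}")  # 60.0%
print(f"proportional:   {proportional:.1%}")    # 70.6%
```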

However, the US Bureau of Labor Statistics has been doing this calculation for decades using a standardized methodology over time. The US labor share of income was in the range of 61-65% from the 1950s up through the 1990s. Indeed, for purposes of basic long-run economic models, the share was sometimes treated as a constant. But in the early 2000s, the labor share started dropping and fell to the historically low range of 56-58%. Loukas Karabarbounis and Brent Neiman provide some perspective on what has happened, citing a lot of the recent research, in “Trends in Factor Shares: Facts and Implications,” appearing in the NBER Reporter (2017, Number 4).


They built up a data set for a range of countries, and found that many of them had experienced a decline in labor share. Thus, the underlying economic explanation is unlikely to be a purely US factor, but instead needs to be something that reaches across many economies. They write: “The decline has been broad-based. As shown in Figure 1, it occurred in seven of the eight largest economies of the world. It occurred in all Scandinavian countries, where labor unions have traditionally been strong. It occurred in emerging markets such as China, India, and Mexico that have opened up to international trade and received outsourcing from developed countries such as the United States.”

They argue that one major factor behind this shift is cheaper information technology, which encouraged firms to substitute capital for labor. They write:

“There was a decline in the price of investment relative to consumption that accelerated globally around the same time that the global labor share began its decline. A key hypothesis that we put forward is that the decline in the relative price of investment, often attributed to advances in information technology, automation, and the computer age, caused a decline in the cost of capital and induced firms to produce with greater capital intensity. If the elasticity of substitution between capital and labor – the percentage change in the capital-labor ratio in response to a percentage change in the relative cost of labor and capital – is greater than one, the lowering of the cost of capital results in a decline in the labor share…. [O]ur estimates imply that this form of technological change accounts for roughly half of the decline in the global labor share. …

“If technology explains half of the global labor share decline, what might explain the other half? We use investment flows data to separate residual payments into payments to capital and economic profits, and find that the capital share did not rise as it should if capital-labor substitution entirely accounted for the decline in the labor share. Rather, we note that increases in markups and the share of economic profits also played an important role in the labor share decline.” 
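The elasticity condition in that first quotation can be made concrete with a constant-elasticity-of-substitution (CES) production function. In the sketch below, cost minimization pins down the split of costs between capital and labor, and the labor share falls as capital gets cheaper precisely when the elasticity sigma exceeds one. The parameter values (alpha, sigma, the path of capital prices) are illustrative choices, not Karabarbounis and Neiman’s estimates:

```python
# Labor share under CES production, as the rental price of capital r falls
# relative to the wage w. Cost minimization implies
#     rK / wL = (alpha/(1-alpha))**sigma * (r/w)**(1 - sigma),
# so with sigma > 1, cheaper capital raises capital's cost share and lowers
# labor's. Parameter values below are illustrative, not estimates.

def labor_share(r, w=1.0, alpha=0.35, sigma=1.25):
    capital_to_labor_costs = (alpha / (1 - alpha)) ** sigma * (r / w) ** (1 - sigma)
    return 1.0 / (1.0 + capital_to_labor_costs)

for r in [1.0, 0.8, 0.6]:   # relative price of capital falling over time
    print(f"r = {r:.1f}: labor share = {labor_share(r):.3f}")

# r = 1.0: labor share = 0.684
# r = 0.8: labor share = 0.672
# r = 0.6: labor share = 0.656
# Rerunning with sigma < 1 makes the labor share *rise* as capital cheapens.
```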

The fall in the labor share of income has consequences that ripple through the rest of the global economy. For example, it contributes to the rise in inequality. Another change from a few decades ago is that corporations used to raise money from household savers, by issuing bonds, taking out loans, or selling stock. But with the rise in the capital share and corporate profits, about two-thirds of global investment is now financed by firms themselves. Indeed, it used to be that there were net flows of financial capital into the corporate sector; now, there are net flows of financial capital out of the corporate sector (through stock buy-backs, the rise in corporate cash holdings, and other mechanisms). When comparing current stock prices and price-earnings ratios to historical values, it’s worth remembering that when the capital share of income is higher, stock prices represent a different value proposition than they did several decades ago.


Could Driverless Trucks Create More Trucking Jobs? Uber Says "Maybe"

Could driverless trucks create more trucking jobs? It sounds logically impossible. But remember that automatic teller machines did not reduce the number of jobs for bank tellers, and may even have increased it slightly, because they altered the range of tasks typically done by a bank teller. In general, new technology doesn’t just alter a single dimension of an industry, but can lead to complementary changes as well. Uber Advanced Technologies Group (!) spells out a scenario in which driverless trucks lead to more trucking jobs in “The Future of Trucking: Mixed Fleets, Transfer Hubs, and More Opportunity for Truck Drivers” (Medium, February 1, 2018).

Imagine that with the arrival of driverless trucks, the trucking industry splits into two parts: long-distance driverless trucks, which operate almost entirely on highways and large roads between a network of “transfer hubs,” and short-distance trucks with human drivers, which carry loads from the transfer hubs to local addresses. As the Uber authors point out:

“The biggest technical hurdles for self-driving trucks are driving on tight and crowded city streets, backing into complex loading docks, and navigating through busy facilities. At each of the local haul pick ups and drop offs, there will need to be loading and unloading. These maneuvers require skills that will be hard for self-driving trucks to match for a long time. By taking on the long haul portion of driving, self-driving trucks can ease some of the burden of increasing demand, while also creating an opportunity for drivers to shift into local haul jobs that keep them closer to home.”

The crucial part of the scenario is that most trucks, given their human drivers, are now on the road for only about one-third of every day. However, the long-distance driverless trucks could be on the road two-thirds or more of every day. As a result, the costs of long-distance shipping would drop substantially, which in turn would give firms and consumers an incentive to expand the quantity of what they ship by truck. In one simulation that has 1 million driverless long-distance trucks on the road, the result is an additional 1.4 million drivers needed for shorter-haul local trucking.
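The economics of that simulation can be sketched in a back-of-the-envelope way. Every number below (the fixed-cost share, the demand elasticity, the autonomous utilization rate) is an assumption chosen for illustration, not a parameter from Uber’s actual model:

```python
# Toy version of the transfer-hub logic: driverless long-haul trucks roughly
# double daily utilization, cutting long-haul cost per mile; cheaper shipping
# expands total freight, which expands short-haul (human-driven) work.
# All parameters are illustrative assumptions, not Uber's model inputs.

utilization_human = 1 / 3   # share of the day a crewed truck is moving
utilization_auto = 2 / 3    # assumed share for a driverless long-haul truck
fixed_cost_share = 0.5      # assumed share of per-mile cost spread by utilization

# Cost per mile falls as fixed costs are spread over twice the miles:
cost_ratio = (1 - fixed_cost_share) + fixed_cost_share * (
    utilization_human / utilization_auto)
price_change = cost_ratio - 1            # -25% in this illustration

demand_elasticity = -1.2                 # assumed elasticity of freight demand
freight_change = demand_elasticity * price_change

print(f"long-haul cost per mile: {price_change:+.0%}")    # -25%
print(f"total freight demanded:  {freight_change:+.0%}")  # +30%

# More total freight means more local pickups and deliveries at the hubs,
# which is where the additional short-haul driving jobs would come from.
```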

For a great many truckers, short-hauling offers a better lifestyle, in part because you can sleep in your own bed every night. It’s not clear how wages might adjust in response to these kinds of changes. Wages for long-haul truckers might fall,  because competing with driverless technology on those routes would be tough, but the shift in wages for short-haul truckers depends on other ways in which the industry might evolve. The Uber folks are trying to crowd-source the economic analysis here by putting their models and data up on a GitHub site, so if this kind of analysis floats your boat (or you want to assign it as a student project), you have an option here.

Just to be clear, I’m not endorsing the scenario that 10 years from now, there will be a million autonomous trucks on US highways and even more truckers in the short-haul business. But I am endorsing the broader point that a simple “technology replaces jobs” story–even one as seemingly straightforward as how autonomous trucks will affect the number of truck drivers–is always more complex than it first appears, and sometimes even counterintuitive.

Winter 2018 Journal of Economic Perspectives

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. Here, I’ll start with Table of Contents for the just-released Winter 2018 issue, which in the Taylor household is known as issue #123. Below that are abstracts and direct links for all of the papers. I will blog more specifically about some of the papers in the next week or two, as well.

______________
Symposium: Housing
“The Economic Implications of Housing Supply,” by Edward Glaeser and Joseph Gyourko
In this essay, we review the basic economics of housing supply and the functioning of US housing markets to better understand the distribution of home prices, household wealth, and the spatial distribution of people across markets. We employ a cost-based approach to gauge whether a housing market is delivering appropriately priced units. Specifically, we investigate whether market prices (roughly) equal the costs of producing the housing unit. If so, the market is well-functioning in the sense that it efficiently delivers housing units at their production cost. The gap between price and production cost can be understood as a regulatory tax. The available evidence suggests, but does not definitively prove, that the implicit tax on development created by housing regulations is higher in many areas than any reasonable negative externalities associated with new construction. We discuss two main effects of developments in housing prices: on patterns of household wealth and on the incentives for relocation to high-wage, high-productivity areas. Finally, we turn to policy implications.
Full-Text Access | Supplementary Materials

“Homeownership and the American Dream,” by Laurie S. Goodman and Christopher Mayer
For decades, it was taken as a given that an increased homeownership rate was a desirable goal. But after the financial crises and Great Recession, in which roughly eight million homes were foreclosed on and about $7 trillion in home equity was erased, economists and policymakers are re-evaluating the role of homeownership in the American Dream. Many question whether the American Dream should really include homeownership or instead focus more on other aspects of upward mobility, and most acknowledge that homeownership is not for everyone. We take a detailed look at US homeownership from three different perspectives: 1) an international perspective, comparing US homeownership rates with those of other nations; 2) a demographic perspective, examining the correlation between changes in the US homeownership rate between 1985 and 2015 and factors like age, race/ethnicity, education, family status, and income; 3) and, a financial benefits perspective, using national data since 2002 to calculate the internal rate of return to homeownership compared to alternative investments. Our overall conclusion: homeownership is a valuable institution. While two decades of policies in the 1990s and early 2000s may have put too much faith in the benefits of homeownership, the pendulum seems to have swung too far the other way, and many now may have too little faith in homeownership as part of the American Dream.
Full-Text Access | Supplementary Materials

“Sand Castles before the Tide? Affordable Housing in Expensive Cities,” Gabriel Metcalf
This article focuses on cities with unprecedented economic success and a seemingly permanent crisis of affordable housing. In the expensive cities, policymakers expend great amounts of energy trying to bring down housing costs with subsidies for affordable housing and sometimes with rent control. But these efforts are undermined by planning decisions that make housing for most people vastly more expensive than it has to be by restricting the supply of new units even in the face of growing demand. I begin by describing current housing policy in the expensive metro areas of the United States. I then show how this combination of policies affecting housing, despite internal contradictions, makes sense from the perspective of the political coalitions that can form in a setting of fragmented local jurisdictions, local control over land use policies, and homeowner control over local government. Finally, I propose some more effective approaches to housing policy. My view is that the effects of the formal affordable housing policies of expensive cities are quite small in their impact when compared to the size of the problem – like sand castles before the tide. I will argue that we can do more, potentially much more, to create subsidized affordable housing in high-cost American cities. But more fundamentally, we will need to rethink the broader set of exclusionary land use policies that are the primary reason that housing in these cities has become so expensive. We cannot solve the problem unless we fix the housing market itself.
Full-Text Access | Supplementary Materials

Symposium: Friedman’s Natural Rate Hypothesis after 50 Years

“Friedman’s Presidential Address in the Evolution of Macroeconomic Thought,” by N. Gregory Mankiw and Ricardo Reis
Milton Friedman’s presidential address, “The Role of Monetary Policy,” which was delivered 50 years ago in December 1967 and published in the March 1968 issue of the American Economic Review, is unusual in the outsized role it has played. What explains the huge influence of this work, merely 17 pages in length? One factor is that Friedman addresses an important topic. Another is that it is written in simple, clear prose, making it an ideal addition to the reading lists of many courses. But what distinguishes Friedman’s address is that it invites readers to reorient their thinking in a fundamental way. It was an invitation that, after hearing the arguments, many readers chose to accept. Indeed, it is no exaggeration to view Friedman’s 1967 AEA presidential address as marking a turning point in the history of macroeconomic research. Our goal here is to assess this contribution, with the benefit of a half-century of hindsight. We discuss where macroeconomics was before the address, what insights Friedman offered, where researchers and central bankers stand today on these issues, and (most speculatively) where we may be heading in the future.
Full-Text Access | Supplementary Materials

“Should We Reject the Natural Rate Hypothesis?” by Olivier Blanchard
Fifty years ago, Milton Friedman articulated the natural rate hypothesis. It was composed of two sub-hypotheses: First, the natural rate of unemployment is independent of monetary policy. Second, there is no long-run trade-off between the deviation of unemployment from the natural rate and inflation. Both propositions have been challenged. The paper reviews the arguments and the macro and micro evidence against each. It concludes that, in each case, the evidence is suggestive, but not conclusive. Policymakers should keep the natural rate hypothesis as their null hypothesis, but keep an open mind and put some weight on the alternatives.
Full-Text Access | Supplementary Materials

“Short-Run and Long-Run Effects of Milton Friedman’s Presidential Address,” by Robert E. Hall and Thomas J. Sargent
The centerpiece of Milton Friedman’s (1968) presidential address to the American Economic Association, delivered in Washington, DC, on December 29, 1967, was the striking proposition that monetary policy has no longer-run effects on the real economy. Friedman focused on two real measures, the unemployment rate and the real interest rate, but the message was broader – in the longer run, monetary policy controls only the price level. We call this the monetary-policy invariance hypothesis. By 1968, macroeconomics had adopted the basic Phillips curve as the favored model of correlations between inflation and unemployment, and Friedman used the Phillips curve in the exposition of the invariance hypothesis. Friedman’s presidential address was commonly interpreted as a recommendation to add a previously omitted variable, the rate of inflation anticipated by the public, to the right-hand side of what then became an augmented Phillips curve. We believe that Friedman’s main message, the invariance hypothesis about long-term outcomes, has prevailed over the last half-century based on the broad sweep of evidence from many economies over many years. Subsequent research has not been kind to the Phillips curve, but we will argue that Friedman’s exposition of the invariance hypothesis in terms of a 1960s-style Phillips curve is incidental to his main message.
Full-Text Access | Supplementary Materials

Articles

“Exchange-Traded Funds 101 for Economists,” by Martin Lettau and Ananth Madhavan
Exchange-traded funds (ETFs) represent one of the most important financial innovations in decades. An ETF is an investment vehicle, with a specific architecture that typically seeks to track the performance of a specific index. The first US-listed ETF, the SPDR, was launched by State Street in January 1993 and seeks to track the S&P 500 index. It is still today the largest ETF by far, with assets of $178 billion. Following the introduction of the SPDR, new ETFs were launched tracking broad domestic and international indices, and more specialized sector, region, or country indexes. In recent years, ETFs have grown substantially in assets, diversity, and market significance, including substantial increases in assets in bond ETFs and so-called “smart beta” funds that track certain investment strategies often used by actively traded mutual funds and hedge funds. In this paper, we begin by describing the structure and organization of exchange-traded funds, contrasting them with mutual funds, which are close relatives of exchange-traded funds, describing the differences in how ETFs operate and their potential advantages in terms of liquidity, lower expenses, tax efficiency, and transparency. We then turn to concerns over whether the rise in ETFs may raise unexpected risks for investors or greater instability in financial markets. While concerns over financial fragility are worth serious consideration, some of the common concerns are overstated, and for others, a number of rules and practices are already in place that offer a substantial margin of safety.
Full-Text Access | Supplementary Materials

“Frictions or Mental Gaps: What’s Behind the Information We (Don’t) Use and When Do We Care?” by Benjamin Handel and Joshua Schwartzstein
Consumers suffer significant losses from not acting on available information. These losses stem from frictions such as search costs, switching costs, and rational inattention, as well as what we call mental gaps resulting from wrong priors/worldviews, or relevant features of a problem not being top of mind. Most research studying such losses does not empirically distinguish between these mechanisms. Instead, we show that most highly cited papers in this area presume one mechanism underlies consumer choices and assume away other potential explanations, or collapse many mechanisms together. We discuss the empirical difficulties that arise in distinguishing between different mechanisms, and some promising approaches for making progress in doing so. We also assess when it is more or less important for researchers to distinguish between these mechanisms. Approaches that seek to identify true value from demand, without specifying mechanisms behind this wedge, are most useful when researchers are interested in evaluating allocation policies that strongly steer consumers towards better options with regulation, traditional policy instruments, and defaults. On the other hand, understanding the precise mechanisms underlying consumer losses is essential to predicting the impact of mechanism policies aimed primarily at reducing specific frictions or mental gaps without otherwise steering consumers. We make the case that papers engaging with these questions empirically should be clear about whether their analyses distinguish between mechanisms behind poorly informed choices, and what that implies for the questions they can answer. We present examples from several empirical contexts to highlight these distinctions.
Full-Text Access | Supplementary Materials

“Do Economists Swing for the Fences after Tenure?” by Jonathan Brogaard, Joseph Engelberg and Edward Van Wesep
Using a sample of all academics who pass through top 50 economics and finance departments from 1996 through 2014, we study whether the granting of tenure leads faculty to pursue riskier ideas. We use the extreme tails of ex-post citations as our measure of risk and find that both the number of publications and the portion consisting of “home runs” peak at tenure and fall steadily for a decade thereafter. Similar patterns hold for faculty at elite (top 10) institutions and for faculty who take differing time to tenure. We find the opposite pattern among poorly cited publications: their numbers rise post-tenure.
Full-Text Access | Supplementary Materials

“Retrospectives: Cost-Push and Demand-Pull Inflation: Milton Friedman and the ‘Cruel Dilemma,’” by Johannes A. Schwarzer
This paper addresses two conflicting views in the 1950s and 1960s about the inflation-unemployment tradeoff as given by the Phillips curve. Many economists at this time emphasized the issue of a seemingly unavoidable inflationary pressure at or even below full employment. In contrast, Milton Friedman was convinced that full employment and price stability are not conflicting policy objectives. This dividing line between the two camps ultimately rested on fundamentally different views about the inflationary process: For economists of the 1950s and 1960s cost-push forces are responsible for the apparent conflict between price stability and full employment. On the other hand, Friedman, who regarded inflation to be an exclusively monetary phenomenon, rejected the notion of ongoing inflationary cost-push pressures at full employment. Besides his emphasis on the full adjustment of inflation expectations, this rejection of cost-push theories of inflation, which implied a decoupling of the two previously perceived incompatible policy objectives, was the other important element in Friedman’s attack on the Phillips curve tradeoff in his 1967 presidential address to the American Economic Association.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

“Using JEP Articles as Course Readings? Tell Us About It!”
Full-Text Access | Supplementary Materials


The Rising Importance of Soft Skills

What skills are most important for an employee to succeed at Google? Back in 2013, the company undertook Project Oxygen to answer that question. Cathy N. Davidson described the result in the Washington Post last month (“The surprising thing Google learned about its employees – and what it means for today’s students,” December 20, 2017). She writes:

“Sergey Brin and Larry Page, both brilliant computer scientists, founded their company on the conviction that only technologists can understand technology. Google originally set its hiring algorithms to sort for computer science students with top grades from elite science universities. In 2013, Google decided to test its hiring hypothesis by crunching every bit and byte of hiring, firing, and promotion data accumulated since the company’s incorporation in 1998. Project Oxygen shocked everyone by concluding that, among the eight most important qualities of Google’s top employees, STEM expertise comes in dead last. The seven top characteristics of success at Google are all soft skills: being a good coach; communicating and listening well; possessing insights into others (including others’ different values and points of view); having empathy toward and being supportive of one’s colleagues; being a good critical thinker and problem solver; and being able to make connections across complex ideas.”

Well, Google is a big company. Perhaps the soft skills matter for a lot of its employees. But for the A-level invention teams, surely the technical skills count for more? Last spring, Google tested that hypothesis with Project Aristotle. Davidson reports the results:

“Project Aristotle, a study released by Google this past spring, further supports the importance of soft skills even in high-tech environments. Project Aristotle analyzes data on inventive and productive teams. Google takes pride in its A-teams, assembled with top scientists, each with the most specialized knowledge and able to throw down one cutting-edge idea after another. Its data analysis revealed, however, that the company’s most important and productive new ideas come from B-teams comprised of employees who don’t always have to be the smartest people in the room. Project Aristotle shows that the best teams at Google exhibit a range of soft skills: equality, generosity, curiosity toward the ideas of your teammates, empathy, and emotional intelligence. And topping the list: emotional safety. No bullying. To succeed, each and every team member must feel confident speaking up and making mistakes. They must know they are being heard.”

Well, maybe the importance of soft skills is for some reason more pronounced at Google, or at a certain kind of high-tech company, than for the economy as a whole? Davidson notes: “A recent survey of 260 employers by the nonprofit National Association of Colleges and Employers, which includes both small firms and behemoths like Chevron and IBM, also ranks communication skills in the top three most sought-after qualities by job recruiters. They prize both an ability to communicate with one’s workers and an aptitude for conveying the company’s product and mission outside the organization.”

The evidence for the rising importance of soft skills goes beyond the anecdotal. David J. Deming provides an overview of economic research on this topic in “The Value of Soft Skills in the Labor Market” (NBER Reporter, 2017 Number 4). Deming cites evidence that for the US economy as a whole, the number of STEM (science, technology, engineering, mathematics) jobs rose rapidly from 1980 to 2000, but has declined since then. Moreover, the labor market returns to higher levels of cognitive skill have declined, too. Deming writes (footnotes omitted):

“While cognitive skills are still important predictors of labor market success, their importance has declined since 2000. An important recent paper finds significantly smaller labor market returns to cognitive skills in the early and mid-2000s, compared with the late 1980s and early 1990s. It compares the returns to cognitive skills across the 1979 and 1997 waves of the National Longitudinal Survey of Youth (NLSY) – the same survey that was used to document the importance of cognitive skills in several influential early papers. In a 2017 study, I replicate this finding and also show that returns to soft skills increased between the 1979 and 1997 NLSY waves. Moreover, recent findings suggest that employment and wage growth for managerial, professional, and technical occupations stalled considerably after 2000, which the researchers argue represents a ‘great reversal’ in the demand for cognitive skills.

“The slow overall growth of high-skilled jobs in the 2000s is driven by a decline in science, technology, engineering, and math (STEM) occupations. STEM jobs shrank as a share of all U.S. employment between 2000 and 2012, after growing strongly between 1980 and 2000. This relative decline of STEM jobs preceded the Great Recession. In contrast, between 2000 and 2012 non-STEM professional occupations such as managers, nurses, physicians, and finance and business support occupations grew at a faster rate than during the previous decade. The common thread among these non-STEM professional jobs is that they require strong analytical skills and significant interpersonal interaction. We are not witnessing an end to the importance of cognitive skills – rather, strong cognitive skills are increasingly a necessary, but not a sufficient, condition for obtaining a good, high-paying job. You also need to have social skills.

“Between 1980 and 2012, social skill-intensive occupations grew by nearly 12 percentage points as a share of all U.S. jobs. Wages also grew more rapidly for social skill-intensive occupations than for other occupations over this period.” 

Here’s a figure from one of Deming’s papers. The pattern is that wages for jobs that are “high-social, low-math” rose at about the same rate as jobs that are “high-social, high-math.” The jobs with slower wage growth are those that are “low-social, high-math” or “low-social, low-math.”

Again, the underlying message here is not that tech skills don’t matter. Any high-income economy needs a substantial number of workers out on the bleeding edge of technology. But most workers in any economy are going to be involved in using and applying technology. When it comes to use and application across a variety of contexts, soft skills and social skills become quite important.


A Puzzle: Why Do Retail Chains Charge Uniform Prices Across Stores?

Imagine yourself as the profit-seeking owner of a chain of retail stores. Would you charge the same (or nearly the same) price across all the stores? Or would you vary prices according to the average income level of consumers who use each store, or according to whether the local economy was robust or shaky, or according to whether the store had geographically nearby competitors?

In their working paper on “Uniform Pricing in US Retail Chains,” Stefano DellaVigna and Matthew Gentzkow argue that most retail chains do in fact charge the same (or nearly the same) prices across stores, but that profits would be higher if they varied prices instead (Stanford Institute for Economic Policy Research Working Paper No. 17-042, November 14, 2017). Obviously, this finding poses a puzzle. They describe the data on 73 retail chains in this way:

“In this paper, we show that most large US food, drugstore, and mass merchandise chains in fact set uniform or nearly-uniform prices across their stores. … Our analysis is based on store-level scanner data for 9,415 food stores, 9,977 drugstores, and 3,288 mass merchandise stores from the Nielsen-Kilts retail panel. … Our first set of results documents the extent of uniform pricing. While we observe no cases in which the measured prices are the same for all products across stores, the variation in prices within chains is small in absolute terms and far smaller than the variation between stores in different chains. This is true despite the fact that consumer demographics and levels of competition vary significantly within chains: consumer income per capita ranges from $22,700 at the average 10th percentile store to $40,900 at the average 90th-percentile store, and the number of competing stores within 10 kilometers varies from 0.6 at the 10th-percentile store to 8.3 at the 90th-percentile store. Prices are highly similar within chains even if we focus on store pairs that face very different income levels, or that are in geographically separated markets.”

In one calculation, if chains raised prices in stores where the average buyer had a higher income, they could increase profits by 7%. Why don’t stores do this? Indeed, the authors point to other evidence on European retailers, and on the sale of certain brand-name products in US markets, which suggests that this pattern of strangely uniform prices is widespread. In the style of any good detective story, there is a list of suspects and clues.
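Before running through the suspects, a toy example shows why uniform pricing leaves money on the table. Two stores face linear demand with different intercepts (a stand-in for income differences); the demand parameters are invented for illustration, and the profit gap that results is smaller than the paper’s 7% figure:

```python
# Two stores, linear demand q = a - b*p, zero marginal cost for simplicity.
# The high-income store (larger intercept a) supports a higher optimal price,
# so a single chain-wide price sacrifices profit at both stores.
# Illustrative numbers only -- not DellaVigna and Gentzkow's calculation.

stores = {"low-income area": (100, 10), "high-income area": (140, 10)}  # (a, b)

def profit(p, a, b):
    return p * max(a - b * p, 0)

# Store-specific optimal prices: p* = a / (2b) under linear demand, zero cost.
varied = sum(profit(a / (2 * b), a, b) for a, b in stores.values())

# Best single uniform price, found by grid search:
uniform = max(
    sum(profit(p, a, b) for a, b in stores.values())
    for p in (x / 100 for x in range(1, 1500))
)

print(f"profit with store-specific prices: {varied:.0f}")   # 740
print(f"profit with one uniform price:     {uniform:.0f}")  # 720
print(f"profit given up by uniform pricing: {1 - uniform / varied:.1%}")  # 2.7%
```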

For example, advertising might create a situation where a certain price is publicized to a wide area. However, the chains in this study advertise mostly on a city-by-city basis, and it would not be difficult for them to vary their advertising across cities.

Another possible explanation is that chain stores don’t want to get into a price war with their competitors. With uniform prices, they are signalling to their competitors that they won’t be locally flexible in their price choices. However, the evidence shows the same uniformity both when chains are facing lots of competitors, and when they are not, which suggests that worries about avoiding a price war aren’t the main issue.

Perhaps varying prices across stores would appear unfair to consumers, and thus damage the brand name of the store. But not many consumers are going to comparison-shop across stores in widely separated areas. And if prices are higher for stores in high-income areas than in low-income areas, it’s hard to imagine that lots of consumers would view this as a violation of some ethical rule.

The explanation these authors find most likely involves managerial decision-making costs: that is, figuring out how to set varying prices across stores, and how to adjust those prices over time, is a substantial task. Unless the payoff in terms of higher profits is large, the inertia of uniform pricing becomes attractive. The authors find some suggestive evidence that store-level price flexibility does seem higher, and seems to be increasing, in settings where the profit potential is larger, which they ascribe in part to the ability of improved information technology to keep track of varying prices across stores.

The authors forthrightly note that “none of this evidence is definitive,” which means that the phenomenon of what seem to be overly uniform prices is both a good discussion topic for courses in microeconomics and business schools, and an interesting research topic.

In addition, the pattern of overly uniform prices is more than just an intellectual puzzle, as the authors point out. For example, less-uniform prices might mean lower prices in areas with low-income consumers, while “redistributing” the higher prices to higher-income consumers, and in this way reduce inequality. Less-uniform prices would mean bigger price cuts when a local economy hits a shaky patch, which in turn could help that local economy recover. On the other side, less-uniform prices would presumably mean higher prices for those in remote areas, with less geographic competition.

    What’s Wrong with Macro? A Symposium from the Oxford Review of Economic Policy

    Macroeconomists were notorious for their disagreements before 2007. Such wrangling only increased with the carnage of the Great Financial Crisis and its aftermath. The Oxford Review of Economic Policy has now devoted a special double issue (Spring-Summer 2018) to a symposium on the topic of “Rebuilding macroeconomic theory.” Lots of big names (to economists!) are featured, and at least for now, all the papers are freely available and ungated.

    In their introductory essay, David Vines and Samuel Wills isolate some of the common themes: “Four main changes to the core model are recommended: to emphasize financial frictions, to place a limit on the operation of rational expectations, to include heterogeneous agents, and to devise more appropriate microfoundations.” However, I found myself most struck by an essay by Ricardo Reis, which dares to pose the question “Is something really wrong with macroeconomics?”
    Here are a couple of the points that stuck with me from that essay.

    One common complaint about macroeconomics is that most models did not forecast the Great Recession, and indeed, that macroeconomic forecasts in general are often incorrect. Reis points out that most macroeconomic research is not about forecasting, which he illustrates by referring to recent issues of leading research journals in macroeconomics and to the PhD topics of young researchers in the field. With regard to the specific sub-topic of forecasting, Reis offers a provocative analogy to the state of medical knowledge:

    “Imagine going to your doctor and asking her to forecast whether you will be alive 2 years from now. That would sound like a preposterous request to the physician, but perhaps having some actuarial mortality tables in her head, she would tell you the probability of death for someone of your age. For all but the older readers of this article, this will be well below 50 per cent. Yet, 1 year later, you have a heart attack and die. Should there be outrage at the state of medicine for missing the forecast, with such deadly consequences?

    “One defence by the medical profession would be to say that their job is not to predict time of death. They are driven to understand what causes diseases, how to prevent them, how to treat them, and altogether how to lower the chances of mortality while trading this off against life quality and satisfaction. Shocks are by definition unexpected, they cannot be predicted. In fact, in practice, most doctors would refuse to answer the question in the first place, or they would shield any forecast with a blank statement that anything can happen. This argument applies, word for word, to economics once the word ‘disease’ is replaced by the words ‘financial crisis’. …

    “Too many people all over the world are today being unexpectedly diagnosed with cancer, undergo enormously painful treatment, and recover to live for many more years. This is rightly hailed as a triumph of modern oncology, even if so much more remains to be done. After suffering the worst shock in many decades, the global economy’s problems were diagnosed by economists, who designed policies to respond to them, and in the end we had a painful recession but no melt-down. Some, somehow, conclude that economics is at fault. …

    “Currently, the major and almost single public funder for economic research in the United States is the National Science Foundation. Its 2015 budget for the whole of social, behavioural, and economic sciences was $276m. The part attributed to its social and economic sciences group was $98m. The main public funder of health studies in the United States is the National Institutes of Health (NIH), but there are many more, including several substantial private funders. The NIH’s budget for 2015 was $29 billion. Its National Institute of Allergy and Infectious Diseases alone received $4.2 billion in funding. A very conservative estimate is that society invests at least 40 times more trying to study infectious diseases, including forecasting the next flu season or the next viral outbreak, than it does in economics. More likely, the ratio of public investment in science devoted to predicting and preventing the next disease is two or even three orders of magnitude larger than the budget of science dedicated to predicting and preventing economic crises. There is no simple way to compare the output per unit of funding across different fields, but relative to its meagre funding, the performance of economics forecasting is perhaps not so bad.”
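    As a quick check on Reis’s arithmetic, here is the back-of-the-envelope calculation, using only the figures quoted above:

```python
# Back-of-the-envelope check of the funding ratios Reis cites
# (2015 figures as quoted above).
nsf_social_econ = 98e6   # NSF social and economic sciences group
niaid = 4.2e9            # NIH infectious-disease institute (NIAID) alone
nih_total = 29e9         # NIH budget overall

print(f"NIAID vs NSF econ group: {niaid / nsf_social_econ:.0f}x")          # ~43x
print(f"NIH total vs NSF econ group: {nih_total / nsf_social_econ:.0f}x")  # ~296x
```

    The single infectious-disease institute alone outspends the NSF’s social and economic sciences group by a factor of about 43, consistent with “at least 40 times”; the full NIH budget does so by roughly 300, consistent with “two or even three orders of magnitude.”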

    Another complaint about modern macroeconomics is that it doesn’t seem to offer clear guidance for policy. Reis points out that macroeconomics is not the only area of economics with this issue: for example, there is considerable dispute among economists about topics like minimum wages or what tax rates to levy on those with high incomes. But perhaps even more to the point, economists often have little control over economic policy–except, in recent years, at central banks. Reis observes:

    “In deciding the size of the budget deficit, or whether a fiscal stimulus or austerity package is adopted, macroeconomists will often be heard by the press or policy-makers, but almost never play a decisive role in any of the decisions that are made. Most macroeconomists support countercyclical fiscal policy, where public deficits rise in recessions, both in order to smooth tax rates over time and to provide some stimulus to aggregate demand. Looking at fiscal policy across the OECD countries over the last 30 years, it is hard to see too much of this advice being taken. Rather, policy is best described as deficits almost all the time, which does not match normative macroeconomics. Moreover, in popular decisions, like the vote in the United Kingdom to leave the European Union, macroeconomic considerations seemed to play a very small role in the choices of voters. Critics that blame the underperformance of the economy on economists vastly overstate the influence that economists actually have on economic policy.

    “One area where macroeconomists have perhaps more of an influence is in monetary policy. Central banks hire more PhD economists than any other policy institution, and in the United States, the current and past chair of the Federal Reserve are distinguished academic macroeconomists, as have been several members of the Federal Open Market Committee (FOMC) over the years. … Looking at the major changes in the monetary policy landscape of the last few decades – central bank independence, inflation targeting, financial stability – they all followed long academic literatures. Even individual policies, like increasing transparency, the saturation of the market for reserves, forward guidance, and balance-sheet policy, were adopted following academic arguments and debates.”

    Reis points out that central banks around the world were tasked with the job of keeping inflation low, and they have largely done so. Moreover, the response of central banks to the Great Financial Crisis was heavily shaped by macroeconomic research:

    “Macroeconomists did not prevent the crises, but following the collapse of Lehman or the Greek default, news reports were dominated by non-economists claiming that capitalism was about to end and all that we knew was no longer valid, while economists used their analytical tools to make sense of events and suggest policies. In the United States in 2007–8, the Federal Reserve, led by the certified academic macroeconomist Ben Bernanke, acted swiftly and decisively. In terms of its conventional instruments, the Federal Reserve cut interest rates as far as it could and announced it would keep them low for a very long time. Moreover, it saturated the market for reserves by paying interest on reserves, and it expanded its balance sheet in order to affect interest rates at many horizons. Finally, it adopted a series of unconventional policies, intervening in financial markets to prevent shortages of liquidity. Some of these decisions are more controversial than others, and some were more grounded in macroeconomic research than others. But overall, facing an adverse shock that seems to have been as serious as the one behind the Great Depression, monetary policy responded, and the economy recovered. While the recession was deep, it was nowhere as devastating as a depression. The economic profession had spent decades studying the Great Depression, and documenting the policy mistakes that contributed to its severity; these mistakes were all avoided in 2008–10.”

    Macroeconomics is a juicy target for controversy, and many of the essays in this volume hit their mark. But it’s also true that, shocking though this may sound, macroeconomics isn’t magic, either. Economics is a developed analytical structure for thinking about issues and potential tradeoffs, not a cookbook full of easy answers. Reis makes a strong case that macroeconomics has its fair share of success stories, and also its fair share of open questions–just like a lot of other policy-relevant academic research.

    Here’s a listing of the articles in the special issue of the Oxford Review of Economic Policy, with links to each one. Again, all of the articles appear to be ungated and freely available, at least for now.
