Bitcoin and Illegal Activity

One of the main attractions of bitcoin is its anonymity, which is worth the most to those who are carrying out questionable or illegal transactions. But how much bitcoin use is tied to illegal activity? Sean Foley, Jonathan R. Karlsen, and Talis J. Putniņš tackle this question in their January 2018 working paper, “Sex, drugs, and bitcoin: How much illegal activity is financed through cryptocurrencies?” available at the SSRN website.

I’ve tended in the past to view bitcoin and other digital cryptocurrencies as a fascinating sideshow: deeply interesting blockchain technology, but operating at a relatively small scale. The authors point out that the scale has been rising substantially (footnotes omitted).

“Cryptocurrencies have grown rapidly in price, popularity, and mainstream adoption. The total market capitalization of bitcoin alone exceeds $250 billion as at January 2018, with a further $400 billion in over 1,000 other cryptocurrencies. The numerous online cryptocurrency exchanges and markets have daily dollar volume of around $50 billion. Over 170 ‘cryptofunds’ have emerged (hedge funds that invest solely in cryptocurrencies), attracting around $2.3 billion in assets under management. Recently, bitcoin futures have commenced trading on the CME [Chicago Mercantile Exchange] and CBOE [Chicago Board Options Exchange], catering to institutional demand for trading and hedging bitcoin. What was once a fringe asset is quickly maturing.”

I’m not sure where the dividing line is, but as cryptocurrencies seem headed toward exceeding $1 trillion in total value, greater attention will need to be paid. The focus of these authors is on links from bitcoin to illegal activity. Their research uses a key fact about the technology of bitcoin: the transactions carried out by bitcoin are all publicly available and observable, but the names of the participants are not. For example, you can observe that accounts A, B, and C all made a simultaneous payment to accounts D and E–but you don’t know who those parties are. The authors write:

“We extract the complete record of bitcoin transactions from the public bitcoin blockchain, from the first block on January 3, 2009, to the end of April 2017. For each transaction, we collect the transaction ID, sender and recipient address, timestamp, block ID, transaction fee, and transaction amount. … The data that make up the bitcoin blockchain reveal ‘addresses’ (identifiers for parcels of bitcoin) but not the ‘users’ (individuals) that control those addresses. A user typically controls several addresses. … Our sample has a total of approximately 106 million bitcoin users, who collectively conduct approximately 606 million transactions, transferring around $1.9 trillion.”

However, there are cases where the anonymity of blockchain has been broken, or at least dented. For example, law enforcement may expose who actually controls a certain address. Or certain addresses may be escrow accounts for firms operating on the “dark web.” Or one can look at darknet forums, where anonymous parties may in some cases reveal their bitcoin address–for example, because they are complaining that they never received what they paid for.
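Mechanically, once a few addresses are exposed this way, an analyst can walk the public transaction graph outward from them. Here is a minimal sketch in Python, with entirely hypothetical addresses and a naive breadth-first expansion; the authors use considerably more sophisticated network methods, so treat this only as an illustration of the idea:

```python
from collections import deque

# Hypothetical directed transaction graph: address -> counterparty addresses.
# In the paper's setting, these edges come from observed blockchain payments.
tx_graph = {
    "addr_A": {"addr_D"},
    "addr_B": {"addr_D", "addr_E"},
    "addr_D": {"addr_F"},
    "addr_E": {"addr_F"},
    "addr_F": set(),
}

def expand_cluster(seeds, graph):
    """Breadth-first walk outward from known 'seed' addresses
    (e.g., a darknet-market escrow account exposed by law enforcement),
    collecting every address reachable through observed transactions."""
    cluster, frontier = set(seeds), deque(seeds)
    while frontier:
        addr = frontier.popleft()
        for counterparty in graph.get(addr, ()):
            if counterparty not in cluster:
                cluster.add(counterparty)
                frontier.append(counterparty)
    return cluster

# Seed with one exposed address and expand.
cluster = expand_cluster({"addr_D"}, tx_graph)
```

In practice the edges would carry amounts and timestamps, addresses would first be grouped into users via co-spending heuristics, and membership in the illegal cluster would be estimated probabilistically rather than by all-or-nothing reachability.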

Once you have a list of bitcoin addresses linked to illegal activity, you can then track the transactions to and from those addresses. By looking at the patterns that emerge, you can build up a “cluster” of accounts and transactions that seem likely to be illegal. The authors compare the size of this cluster to the total number of bitcoin transactions. They write:

“We find that illegal activity accounts for a substantial proportion of the users and trading activity in bitcoin. For example, approximately one-quarter of all users (25%) and close to one-half of bitcoin transactions (44%) are associated with illegal activity. Furthermore, approximately one-fifth (20%) of the total dollar value of transactions and approximately one-half of bitcoin holdings (51%) through time are associated with illegal activity. Our estimates suggest that in the most recent part of our sample (April 2017), there are an estimated 24 million bitcoin market participants that use bitcoin primarily for illegal purposes. These users annually conduct around 36 million transactions, with a value of around $72 billion, and collectively hold around $8 billion worth of bitcoin.

“To give these numbers some context, a report to the US White House Office of National Drug Control Policy estimates that drug users in the United States in 2010 spend in the order of $100 billion annually on illicit drugs. Using different methods, the size of the European market for illegal drugs is estimated to be at least €24 billion per year. While comparisons between such estimates and ours are imprecise for a number of reasons (and the illegal activity captured by our estimates is broader than just illegal drugs), they do provide a sense that the scale of the illegal activity involving bitcoin is not only meaningful as a proportion of bitcoin activity, but also in absolute dollar terms.”

As the authors note, these amounts are large enough to suggest that cryptocurrencies have the potential to shift how black markets operate. Many bitcoin accounts make only a single transaction, and then are never active again. And unsurprisingly, we are also seeing “the emergence of alternative cryptocurrencies that are more opaque and better at concealing a user’s activity (e.g., Dash, Monero, and ZCash).” In the past, I have tended to believe that if law enforcement really wanted to break the anonymity of a cryptocurrency account, and devoted sufficient time and energy to a combination of old-fashioned and cyber-police work, it could do so. But the technology for anonymity keeps moving ahead.

For a recent, readable, and fairly short overview of bitcoin and the underlying blockchain technology, see “A Short Introduction to the World of Cryptocurrencies,” by Aleksander Berentsen and Fabian Schär, in the Federal Reserve Bank of St. Louis Review (First Quarter 2018, pp. 1-16). For previous discussions of Bitcoin and blockchain technology on this blog, see:

Textiles: Your Clothes are Pollutants

“[T]he way we design, produce, and use clothes has drawbacks that are becoming increasingly clear. The textiles system operates in an almost completely linear way: large amounts of non-renewable resources are extracted to produce clothes that are often used for only a short time, after which the materials are mostly sent to landfill or incinerated. More than USD 500 billion of value is lost every year due to clothing underutilisation and the lack of recycling. Furthermore, this take-make-dispose model has numerous negative environmental and societal impacts. For instance, total greenhouse gas emissions from textiles production, at 1.2 billion tonnes annually, are more than those of all international flights and maritime shipping combined. Hazardous substances affect the health of both textile workers and wearers of clothes, and they escape into the environment. When washed, some garments release plastic microfibres, of which around half a million tonnes every year contribute to ocean pollution – 16 times more than plastic microbeads from cosmetics. Trends point to these negative impacts rising inexorably …”

This, from the Executive Summary, is one of many jolting statements from A new textiles economy: Redesigning fashion’s future, published by the Ellen MacArthur Foundation in November 2017. The report seeks to envision a “circular” model of textile production: “In a new textiles economy, clothes, textiles, and fibres are kept at their highest value during use and re-enter the economy afterwards, never ending up as waste.”

Sales in the textile industry are growing rapidly as world incomes rise, and as people expand their wardrobes and wear each item fewer times.

A few comments from the report on the environmental consequences, which are rising, too (footnotes omitted throughout):

“Large amounts of nonrenewable resources are extracted to produce clothes that are often used for only a short period, after which the materials are largely lost to landfill or incineration. It is estimated that more than half of fast fashion produced is disposed of in under a year. …”

“Worldwide, clothing utilisation – the average number of times a garment is worn before it ceases to be used – has decreased by 36% compared to 15 years ago. While many low-income countries have a relatively high rate of clothing utilisation, elsewhere rates are much lower. In the US, for example, clothes are only worn for around a quarter of the global average. The same pattern is emerging in China, where clothing utilisation has decreased by 70% over the last 15 years. Globally, customers miss out on USD 460 billion of value each year by throwing away clothes that they could continue to wear, and some garments are estimated to be discarded after just seven to ten wears. …”

“Less than 1% of material used to produce clothing is recycled into new clothing, representing a loss of more than USD 100 billion worth of materials each year. As well as significant value losses, high costs are associated with disposal: for example, the estimated cost to the UK economy of landfilling clothing and household textiles each year is approximately GBP 82 million (USD 108 million). Across the industry, only 13% of the total material input is in some way recycled after clothing use …”

“The textiles industry relies mostly on non-renewable resources – 98 million tonnes in total per year – including oil to produce synthetic fibres, fertilisers to grow cotton, and chemicals to produce, dye, and finish fibres and textiles. Textiles production (including cotton farming) also uses around 93 billion cubic metres of water annually, contributing to problems in some water-scarce regions. … [I]t is recognised that textile production discharges high volumes of water containing hazardous chemicals into the environment. As an example, 20% of industrial water pollution globally is attributable to the dyeing and treatment of textiles. …”

Much of the report is given over to discussion of four broad areas in which changes could be made, with numerous examples of what is happening in each area: “1. Phase out substances of concern and microfibre release; 2. Transform the way clothes are designed, sold, and used to break free from their increasingly disposable nature; 3. Radically improve recycling by transforming clothing design, collection, and reprocessing; 4. Make effective use of resources and move to renewable inputs.”

I found especially interesting the vision in which clothes are designed for greater durability, combined with business models in which consumers rent a far greater share of their clothing. Once you start thinking along these lines, market segments where this approach might work well (aided by the ability of clothing providers to know your measurements in advance) become apparent. Small children? Maternity wear? That ski outfit you only wear a few times a year? As the report notes:

“Subscription models allow customers to pay a flat monthly service fee to have a fixed number of garments on loan at any one time. These models can provide an attractive offering for customers desiring frequent changes of outfit, as well as an appealing business case for retailers. … Subscription models are already disrupting the market, with brands such as Le Tote, Gwynnie Bee, Kleiderei, and YCloset. This demonstrates that there is a willingness to pay monthly subscriptions for clothing, with YCloset in China securing a USD 20 million investment to scale up in March 2017. Another successful model is Rent the Runway, initially set up for online short-term rental of clothing for occasion wear and high-end luxury garments, which expanded to include a monthly rental subscription model in 2016. … YCloset is riding the wave of popularity for sharing economy services in China, gaining customers in over 100 Chinese cities since their app launched in 2015. They target mid-market urban customers who want to access variety and a fresh look, but who lack the budget to buy midrange or luxury clothing. …”

“The Danish company Vigga, established in 2014, allows parents to access high-quality baby clothing for a fraction of the cost of buying new, with bundles of 20 appropriately sized baby clothing items provided at a time through a subscription service. By increasing durability, centralising washing and quality control, and streamlining operations through RFID (Radio Frequency Identification) tagging, on average Vigga circulates their baby clothes to five families before they are visibly used and go into recycling, and they are working on increasing this number. Similar services have emerged in other countries, for example Tale Me in Belgium. Subscription services have also been introduced for pregnant women through companies such as Borrow For Your Bump, attempting to better address a woman’s needs for maternity wear. …”

“Houdini Sportswear has offered customers the option to rent their outdoor sports shells since 2013. This creates an attractive financial model for both the brand and the customer, who can afford high-quality performance sportswear for one weekend or week for 10–25% of its retail price, rather than buying a cheaper, low-quality version or needing to store the garment for the rest of the year. At the same time, Houdini achieves higher overall margins by combining rental and resale. …”

The report especially appealed to me for a couple of reasons. One is that I tend to think of textiles as an important but rather sleepy industry, gradually being transformed by robots and automated production. The report persuaded me that the opportunities for innovation in textiles are far greater than I had imagined. Also, it seems to me that even among those who drive hybrid cars and recycle religiously, the environmental effects of clothing choices are often not much considered. This report should help to open up a new dimension of environmental awareness.

For a different angle on these issues, see “Quandaries of Global Trade in Secondhand Clothing” (May 22, 2015).

The Problem of Questionable Patents

The theoretical case for patents is clear enough: if you want people and companies to have an incentive for investing money and time in seeking innovations, you need to offer them some assurance that others won’t immediately copy any successful discoveries. But with the power of patents comes the risk of gaming the patent system and of patents being granted when the proffered invention is either not new, or obvious, or both. Michael D. Frakes and Melissa F. Wasserman tackle these issues in “Decreasing the Patent Office’s Incentives to Grant Invalid Patents” (Hamilton Project Policy Proposal 2017-17, December 2017). Also, Jay Shambaugh, Ryan Nunn, and Becca Portman offer some useful background information in “Eleven Facts about Innovation and Patents” (Hamilton Project, December 2017).

The Shambaugh, Nunn, and Portman paper offers a few background figures on patents that, as you look at them, can raise your eyebrows a bit. The background here is that the three main patent-granting agencies in the world–the US Patent and Trademark Office, the Japanese Patent Office, and the European Patent Office–are sometimes referred to as the Trilateral Patent Offices. The usual belief is that “compared to the USPTO, the JPO and EPO are believed to apply stricter scrutiny to applications.” Getting a patent from all three of these offices is called a “triadic” patent, and the number of triadic patents is sometimes used as a measure of quality. Now consider a couple of comparisons.

The number of patent applications in the US has more-or-less doubled since 2000. Over that time, the number of patent applications in Japan has dropped by one-quarter, while the number in Europe has risen by about 50%. One possible interpretation of this pattern is that the US economy is in the grip of a massive wave of innovation far outstripping Japan and Europe, which may foretell a productivity boom for the US economy. An alternative interpretation is that it’s so much easier to apply for a patent in the US, and to have a patent granted, that the US Patent Office is attracting lots of low-quality and invalid patent applications, and some of those are sneaking through the system to receive actual patents.

Here’s a figure that poses a similar question. This graph shows the share of GDP spent on research and development on the horizontal axis. The vertical axis is a measure of the number of “high-quality” patents, which in this figure refers to an innovation that is patented in at least two of the three Trilateral Patent Offices. The US level of R&D spending is a bit below that of Germany and Japan, but similar. However, when measured in terms of high-quality patents filed, the US lags well behind. Again, this could result from US firms not bothering to apply for European and Japanese protection for all their great patents. Or it could be a signal that the rise in US patents includes a greater share of low-quality or even invalid patents than in Japan and Europe.

Frakes and Wasserman lay out how the US Patent Office works in greater detail, in a way that for me sharpens these concerns. For example, they write (citations omitted):

“There is an abundance of anecdotal evidence that patent examiners are given insufficient time to adequately review patent applications. On average, a U.S. patent examiner spends only 19 hours reviewing an application, including reading the application, searching for prior art, comparing the prior art with the application, and (in the case of a rejection) writing a rejection, responding to the patent applicant’s arguments, and often conducting an interview with the applicant’s attorney. Because patent applications are legally presumed to comply with the statutory patentability requirements when filed, the burden of proving unpatentability rests with the Agency. That is, a patent examiner who does not explicitly set forth reasons why the application fails to meet the patentability standards must then grant the patent.”

The US Patent Office is funded by the fees it collects, which fall into several categories, as Frakes and Wasserman explain:

“The overwhelming majority of Patent Office costs are attributed to reviewing and examining applications. To help cover these expenses, the Agency charges examination fees to applicants. These fees fail to cover even half of the Agency’s examination costs, however. To make up for this deficiency, the Agency relies heavily on two additional fees that are collected only in the event that a patent is granted: (1) issuance fees, paid at the time a patent is granted; and (2) renewal fees, paid periodically over the lifetime of an issued patent as a condition of the patent remaining enforceable. Combined with examination fees, these fees account for nearly all of the Patent Office’s revenue. … In fiscal year 2016 the Patent Office estimated that the average cost of examining a patent application was about $4,200. The examination fee that year was set at only $1,600 for large for-profit corporations; at $800 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for small-entity status; and at $400 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for micro-entity status.”

An obvious concern is that if the US Patent Office relies heavily on fees that are collected only after a patent is granted, then there is a built-in incentive to grant more patents. Indeed, they cite studies showing that when the Patent Office is facing financial troubles, it tends to grant more patents.
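To see the distortion in the fee structure, it helps to lay out the FY2016 arithmetic quoted above. The post-grant fee amount below is a hypothetical round number for illustration only, since actual issuance and renewal fees vary by entity size and over the patent’s life:

```python
# Figures quoted from Frakes and Wasserman for FY2016.
EXAM_COST = 4200       # average cost to examine one application
EXAM_FEE_LARGE = 1600  # examination fee, large for-profit entity

# Every application, granted or not, leaves this much uncovered:
shortfall_per_application = EXAM_COST - EXAM_FEE_LARGE  # 2600

def net_revenue(granted, post_grant_fees):
    """Office revenue on one large-entity application, net of exam cost.
    Issuance and renewal fees arrive only if the patent is granted --
    which is the distorted incentive the authors flag."""
    revenue = EXAM_FEE_LARGE + (post_grant_fees if granted else 0)
    return revenue - EXAM_COST

deny = net_revenue(False, 0)     # -2600: every final rejection is a loss
grant = net_revenue(True, 3000)  # +400 with hypothetical post-grant fees
```

Under any plausible post-grant fee schedule, rejections are pure losses while grants at least partially recoup costs, so a fee-funded office facing a budget squeeze has a structural reason to lean toward granting.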

An additional concern is that the US Patent Office doesn’t really reject patents, at least not permanently, because applicants can apply repeatedly. “Considering that about 40 percent of the applications filed in fiscal year 2016 are repeat applications (up from 11 percent in 1980), a substantial percentage of the Patent Office’s backlog can be attributed to its inability to definitively reject applications.” To put it another way, an applicant can just keep refiling until the application lands with a less-experienced examiner during a budget crunch, improving the odds that it will eventually be granted.
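A stylized sketch shows why unlimited refiling matters. If each filing independently had some fixed probability of being granted (a simplifying assumption for illustration, not an empirical estimate), the chance that a weak application eventually slips through compounds geometrically:

```python
def prob_eventual_grant(p, n_filings):
    """Probability that at least one of n independent filings is granted,
    assuming (hypothetically) a fixed per-filing grant probability p."""
    return 1 - (1 - p) ** n_filings

# Even a weak application with only a 10% chance per round is more likely
# granted than not after 7 refilings: 1 - 0.9**7 is about 0.52.
p7 = prob_eventual_grant(0.10, 7)
```

Real filings are not independent draws, of course, but the direction of the effect is the point: with no definitive rejection, persistence alone raises the grant rate for invalid applications.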

With these thoughts in mind, Frakes and Wasserman offer some practical solutions, which include: 1) increase patent examination fees and abolish “issuance” fees, to reduce the financial incentive to grant patents; 2) limit repeat applications, perhaps by charging higher fees; 3) give patent examiners more time (and charge higher fees to support that additional time as needed).

But the key economic insight behind these proposals and others is that in an economy whose future is based on innovation and technology, granting a substantial number of patents that should not have been allowed carries important costs. As Frakes and Wasserman write:

“Although patents encourage innovation by helping inventors to recoup their research and development expenses, this comes at a cost – consumers pay higher prices and have less access to the patented invention. Although society can accept such consequences for a properly issued patent, an invalid patent imposes these costs on society without providing the commensurate benefits from additional innovation because, by definition, an invalid patent is one issued for an existing technology or an obvious technological advancement. Invalid patents provide no innovative benefit to society because the public already possessed the patented inventions.

“In addition to this harm, erroneously issued patents can stunt innovation and competition. Competitors might forgo research and development in areas covered by improperly issued patents to minimize the risk of expensive and time-consuming litigation. There is growing empirical evidence that invalid patents can increase so-called patent thickets – dense webs of overlapping patent rights – that in turn raise the cost of licensing and complicate business planning. Because a firm needs a license to all of the patents that cover its products, other firms can use questionable patents to opportunistically extract licensing fees. There is mounting evidence that nonpracticing entities – commonly known as patent trolls – use patents of questionable validity to assert frivolous lawsuits and extract licensing revenue from innovative firms. Invalid patents can also undermine the business relations of market entrants because customers might be deterred from transacting with a company out of fear of a contributory patent infringement suit. Finally, erroneously issued patents can inhibit the ability of start-ups to obtain venture capital, especially if a dominant player in the market holds the patent in question.”

For some other thoughts on the economics of patents, the interested reader might check:

Kill the Zombie Firms

I first ran into the idea of zombie firms–and the need to kill them–back in the late 1980s, when the US savings and loan industry was melting down. Here’s an explanation from that time from Edward Kane (“The High Cost of Incompletely Funding the FSLIC Shortage of Explicit Capital.” Journal of Economic Perspectives, 1989, 3:4, 31-47).

“The events of the early 1980s broke the savings and loan industry into two divergent parts: the living and the living dead. This terminology portrays firms whose enterprise-contributed capital has been lost as soulless “zombie” institutions. … Zombie firms now constitute roughly 25 percent of the FSLIC-insured thrift industry. As in a George Romero zombie movie, capital forbearance brings dead firms back to a malefic form of quasi-life in which they attack the living, turning the prey they feed on into zombies, too. In a kind of Gresham’s Law scenario (an analogy suggested by Joseph Stiglitz), “bad” zombie thrifts tend to drive out healthy competition. Zombie institutions do this by sucking deposits away from their competitors by offering high interest rates and by bidding down loan rates on high-risk projects. This squeezes profit margins and the proliferation of weak competitors and risky positions ultimately raises deposit-insurance premiums for everyone.”

The key insight is that when governments show restraint in killing the zombies, those zombie firms soak up capital and slash prices in ways that make it hard for healthy firms to compete, thus creating more zombies. Frank Borman, an astronaut who commanded Apollo 8 and later ran Eastern Air Lines, liked to say: “Capitalism without bankruptcy is like Christianity without hell” (for example, see Time magazine, “The Growing Bankruptcy Brigade,” October 18, 1982, p. 104).

Zombie firms were also sighted in Japan after its economic meltdown in the early 1990s. For example, Takeo Hoshi and Anil K. Kashyap wrote (“Japan’s Financial Crisis and Economic Stagnation.” Journal of Economic Perspectives, 2004, 18:1, 3-26):

“Caballero, Hoshi and Kashyap (2003) explore the consequences of these subsidies for macro performance in Japan. They find that subsidies have not only kept many money-losing ‘zombie’ firms in business, but also have depressed the creation of new businesses in the sectors where the subsidized firms are most prevalent. For instance, they show that in the construction industry, job creation has dropped sharply, while job destruction has remained relatively low. Thus, because of a lack of restructuring, the mix of firms in the economy has been distorted with inefficient firms crowding out new, more productive firms. Not only does the rise of the zombies help explain the overall slowdown in productivity, Caballero, Hoshi and Kashyap show that zombie-infested sectors have seen sharper declines in productivity growth than the sectors with fewer zombies. … For instance, the lack of lending by the healthy banks makes sense because these banks see no point in lending to firms that will have to compete against the zombies that are kept on life support by the sick banks.”

But as anyone who watches television after midnight knows, despite all the warnings, zombies never actually die. Now they have been spotted in China. W. Raphael Lam, Alfred Schipke, Yuyan Tan, and Zhibo Tan have written “Resolving China’s Zombies: Tackling Debt and Raising Productivity” (IMF Working Paper WP/17/266, November 27, 2017).

“Nonviable ‘zombie’ firms have become a key concern in China. … [T]his paper illustrates the central role of zombies and their strong linkages with state-owned enterprises (SOEs) in contributing to debt vulnerabilities and low productivity. As a group, zombie firms and SOEs account for an outsized share of corporate debt, contribute to much of the rise in debt, and face weak fundamentals. Empirical results also show that resolving these weak firms can generate significant gains of 0.7–1.2 percentage points in long-term growth per year. … While the government has introduced various reforms to facilitate deleveraging and resolve weak companies, progress has been limited. The empirical results in this paper would support the arguments that accelerating that progress requires a more holistic and coordinated strategy, which should include debt restructuring to recognize losses, fostering operational restructuring, reducing implicit support, and liquidating zombies.”

By their measures, the number of zombie firms in China and their share of debt had been declining, but are now on the rise again. (This happens in every zombie movie.)

But it’s not just China. Dan Andrews, Müge Adalet McGowan, and Valentine Millot confirm the worldwide threat in “Confronting the zombies: policies for productivity revival” (OECD Economic Policy Paper #21, December 2017), as well as in underlying research papers like Müge Adalet McGowan, Dan Andrews, and Valentine Millot, “The Walking Dead? Zombie Firms and Productivity Performance in OECD Countries” (OECD Economics Department Working Papers, No. 1372, January 10, 2017). In the policy paper, they write (citations omitted):

“There is growing recognition, however, that the productivity slowdown experienced over the past two decades is partly rooted in a rise of adjustment frictions that rein in the creative destruction process. One important dimension of this phenomenon is that firms that would typically exit or be forced to restructure in a competitive market – i.e. ‘zombie’ firms – are increasingly lingering in a precarious state to the detriment of aggregate productivity. In this view, reviving productivity growth will partly depend on the policies that effectively facilitate the exit or restructuring of weak firms, while simultaneously coping with any social costs that arise from a heightened churning of firms and jobs. To this end, policies need to be reformed and packaged to enhance productivity growth in an inclusive fashion.

“Against this background, this paper summarises the policy messages emerging from a large amount of cross-country research on Exit Policies and Productivity Growth. Main findings are reported under two main headings. First, the paper provides evidence for the conjecture that weak firms are stifling productivity growth and highlights the considerable scope for raising growth by spurring the orderly exit or restructuring of such firms. Second, it explores the potential for insolvency, financial and other reforms to revive productivity growth by addressing three inter-related sources of structural weakness in labour productivity: the survival of ‘zombie’ firms, capital misallocation and stalling technological diffusion. Overall, the results suggest that there is much scope to revive productivity growth via reforms focused on improving the design of insolvency regimes, financial sector health and other dimensions of policy that spur corporate restructuring.”

For example, one working definition of “zombie” firms is that they are older firms, at least 10 years of age, that cannot cover their interest payments out of profits for three consecutive years. But tinkering with this definition doesn’t alter the main conclusion: “The propensity for high productivity firms to expand and low productivity firms to downsize has declined over time.” The prevalence of zombie firms across OECD countries has risen, and the share of capital they absorb has risen.
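That working definition translates directly into a simple classification rule. The sketch below is a hypothetical rendering of it, not the OECD authors’ actual code, and the data shapes are invented:

```python
def is_zombie(firm_age, profits, interest_payments, years=3):
    """OECD-style working definition: a firm at least 10 years old whose
    profits failed to cover its interest payments in each of the last
    `years` consecutive years. `profits` and `interest_payments` are
    hypothetical per-year sequences, most recent year last."""
    if firm_age < 10 or len(profits) < years:
        return False
    recent = zip(profits[-years:], interest_payments[-years:])
    return all(p < i for p, i in recent)

# A 12-year-old firm whose interest bill exceeded profits three years
# running counts as a zombie under the rule:
zombie = is_zombie(12, [5, 2, 1, 0.5], [3, 3, 3, 3])
```

The OECD papers’ robustness point is that varying `years` or the age cutoff in a rule like this shifts the measured prevalence of zombies without changing the qualitative conclusions.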

For one more recent example, see the remarks by Claudio Borio of the Bank for International Settlements, “A blind spot in today’s macroeconomics?” (at a BIS-IMF-OECD Joint Conference on “Weak productivity: the role of financial factors and policies,” January 10–11, 2018), who discusses “the interaction between interest rates and the financial cycle and will also present some intriguing empirical regularities between the growing incidence of ‘zombie’ firms in an economy and declining interest rates.”

A dynamic economy needs to be continually shape-shifting. Recognizing that zombie firms should not be nourished at the expense of other firms in the economy is a useful step in that direction.

Is the Euro Out of Danger?

The euro was officially adopted in 1999, although it took a few more years to be phased in for everyday use. Over the years, my feelings about the new currency have see-sawed from one extreme to the other: I wasn’t sure it would be adopted in the first place, then wasn’t sure it would work if adopted; it seemed to work pretty well at first, then led to large trade imbalances within Europe and a financial crisis, and now seems to be functioning smoothly again. Is the euro now out of danger, or do certain underlying risks remain substantial?

The most recent issue of the Milken Institute Review (First Quarter 2018) has a couple of useful articles for getting up to speed on where the euro has been and where it might be headed next. Barry Eichengreen contributes “Euro Malaise: From Remission to Cure,” while Jean Tirole discusses the future of Europe after the euro crisis in an excerpt from his recent book, Economics for the Common Good. Eichengreen diagnoses five main issues of the euro in this way:

“First, Europe has a financial-stability problem. As a result of bad management, bad supervision and badly designed regulation, euro-area banks became deeply entangled in the global financial crisis. On the cusp of the meltdown, they were undercapitalized, overleveraged and blithely unaware of the risks of investing in U.S. securities backed by subprime mortgages. European regulators were then slow to clean up the post-meltdown mess, which goes a long way toward explaining why Europe’s recovery has been so sluggish.

“Second, the euro area has a debt problem. Government debt as a share of GDP in the area as a whole is not noticeably higher than in the United States, but it is spread unevenly across countries. It is a problem for Belgium, Cyprus, Italy and Portugal with debt-to-GDP ratios well above 100 percent. And it is a monster problem for Greece, with an eye-watering ratio approaching 180 percent. Servicing these heavy debts is a drain on public finances that will become even more burdensome when interest rates rise from current, historically low levels. …

“Third (and relatedly), fiscal policy is a problem. The euro area has an elaborate set of fiscal rules that are honored mainly in the breach. When Greece flaunted those rules at the end of the last decade, it was only following in the footsteps of France and Germany, which had broken the rules some five years earlier. Although the rules in question specify sanctions and fines for violators, those fines have never once been levied in the eurozone’s almost two decades of existence.

“Fourth, the euro area lacks an adequate financial fire brigade, a regional equivalent of the International Monetary Fund. …

“Fifth, the euro area lacks the flexibility to adjust to what the economist Robert Mundell, the intellectual father of the euro, referred to as ‘asymmetric disturbances.’ There is no mechanism for eliminating the imbalances that arise when some member-states are booming while others are depressed, or when some members increase productivity more rapidly than others. It has no way of eliminating the chronic trade surpluses of some members and chronic deficits of others.”

Eichengreen discusses what is happening in each of these areas, with particular attention to the negotiations between Angela Merkel in Germany and Emmanuel Macron in France. Deals could be cut to address at least some weaknesses of the euro, but it’s not at all clear that they will be. He concludes: “Marine Le Pen, the hard-right French politician who opposed Macron in the second round of the French election, called the euro ‘the corpse that still moves.’ Merkel and Macron now have a narrow window in time to breathe new life into its body.”

Jean Tirole offers a reminder of what the euro was intended to accomplish, and how it has gone some distance in that direction:

“Even so, the euro represented an extraordinary symbol of European integration. Far more than a simple convenience for travelers, the single currency eliminated exchange rate uncertainty. Trade among euro area countries increased by around 50 percent between 1999 (the launch of the euro) and 2011. The euro was also intended to contribute to the stability of national economies by facilitating the diversification of savings across European countries: households and companies could invest abroad at lower cost, and their wealth was therefore less dependent on local conditions. Finally, the euro was intended to facilitate the circulation of capital in southern Europe, strengthening the financial credibility of those states and thus allowing them to finance their development.”

Tirole also walks through some of the major difficulties the euro created. One issue was a divergence in wages and productivity levels that led to large trade imbalances:

“Germany has consistently practiced wage moderation (in a relatively consensual way, because the labor unions in the sectors exposed to international competition have supported it), while wages in the southern countries exploded. In the countries of southern Europe plus Ireland, wages increased by 40 percent while labor productivity increased by only 7 percent. This divergence generated low prices for German products and high ones for those from southern Europe. Unsurprisingly, intra-European trade became massively unbalanced, with Germany exporting far more than it imported, and the southern countries doing the opposite.”
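The competitiveness arithmetic implicit in Tirole’s numbers can be made explicit: unit labor cost moves with the ratio of wages to productivity, so a 40 percent wage rise against a 7 percent productivity gain implies a rise in unit labor costs of roughly 31 percent. A quick sketch:

```python
# Back-of-the-envelope unit labor cost (ULC) arithmetic for the passage
# above: ULC = wages / productivity, so its growth is the ratio of the
# two growth factors.
wage_growth = 0.40          # southern Europe plus Ireland, per Tirole
productivity_growth = 0.07

ulc_growth = (1 + wage_growth) / (1 + productivity_growth) - 1
print(f"Unit labor costs rose roughly {ulc_growth:.0%}")
```

With German wage growth held close to German productivity growth, a gap of this size in unit labor costs is what makes German goods relatively cheap and southern European goods relatively expensive.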

For discussions of these issues in the Journal of Economic Perspectives, where I work as Managing Editor, readers might want to check Christian Dustmann, Bernd Fitzenberger, Uta Schönberg, and Alexandra Spitz-Oener, “From Sick Man of Europe to Economic Superstar: Germany’s Resurgent Economy,” in the Winter 2014 issue, and Christian Thimann, “The Microeconomic Dimensions of the Eurozone Crisis and Why European Politics Cannot Solve Them,” from the Summer 2015 issue.

Another issue is that as the euro facilitated capital movements to countries that had traditionally paid higher interest rates to borrow, borrowing got out of control in some countries. Here’s Tirole:

“More broadly, the confidence created by the poorer countries’ joining the eurozone substantially lowered the interest rates paid by borrowers in these countries. The easier access to funds generated capital inflows. These inflows, sometimes combined with weak regulation of banks’ risk-taking, fueled asset price increases and created financial bubbles, particularly in real estate.

“Massive levels of debt, both public and private, are implicated in the origins of the crisis that threatens the existence of the eurozone today. Excessive borrowing was sometimes the fault of a spendthrift public sector or a failure to collect taxes (as in Greece), and sometimes the fault of the private financial sector (as in Spain and Ireland). When the Irish government budget deficit ballooned from 12 to 32 percent of GDP in 2010, it was because the banks had to be bailed out.”

The Maastricht Treaty back in 1992 anticipated the possible problem that countries could be motivated to overborrow, and among other conditions set a rule that no country would have a public debt/GDP ratio over 60%. Tirole reminds us of the current levels, with the red vertical line marking the earlier promised 60% limit:

As Tirole writes: “The Greek debt of 180 percent of GNP (characterized by a high rate of foreign holdings) is gigantic for a country with limited fiscal capacity, and has a long maturity (about twice as long as that of other national debts) and a low interest rate following the restructurings of 2010 and 2012. Payments are due to become large only after 2022, and then will be made over many years.”

Again, readers interested in these dynamics may wish to check some earlier JEP articles. In the Summer 2012 issue, Philip R. Lane wrote “The European Sovereign Debt Crisis.” The Summer 2013 issue included four papers on euro-related issues: Enrico Spolaore, “What Is European Integration Really About? A Political Guide for Economists”; Jesús Fernández-Villaverde, Luis Garicano, and Tano Santos, “Political Credit Cycles: The Case of the Eurozone”; Kevin H. O’Rourke and Alan M. Taylor, “Cross of Euros”; and Stephanie Schmitt-Grohé and Martín Uribe, “Downward Nominal Wage Rigidity and the Case for Temporary Inflation in the Eurozone.”

Tirole offers an even-handed discussion of the possible directions for the next set of reforms to solidify the euro, while admitting that at present, all possible directions are problematic.

One set of options involves a greater degree of unification across Europe, which Tirole calls the “Maastricht approach.” For example, there could be a European Fiscal Council that would track borrowing in different countries and sound the alarm if it seemed to be getting out of control. But as Tirole writes: “This fiscal council would have to truly represent Europe as a whole and have the authority to require prompt corrective action. In addition, since financial sanctions are not a good idea if a country is already in financial difficulty, other measures must be used – although these would only sharpen concerns about legitimacy and sovereignty. As things stand, the current impulse toward national sovereignty works against such improvement of the Maastricht approach.”

The other broad set of options, which Tirole calls the “federalist approach,” instead starts from the assumption that the EU countries might look for certain limited opportunities to share risks and coordinate in limited ways. For example, one could imagine a system in which each country chooses its own pension contributions and benefits, but the pension funds themselves are run by a common European entity that would apply a common methodology so that scheduled payments into the system and promised benefits from the system remained in alignment. Similarly, one can imagine a cross-European plan for at least some minimum level of unemployment insurance, or a plan that provides for common standards of bank supervision and regulation, together with deposit insurance. But as Tirole points out, European countries have different political preferences, and so mixing countries with high and low pension levels, or high and low unemployment levels, or high and low levels of deposit insurance, is a tricky business.

The euro situation is in a lull just now, which means there is some time and space for advance planning to reduce the risks of a future crisis. The question is whether European countries and institutions are going to squander their respite. 

Why Has US Regional Convergence Declined?

In the decades after World War II and up into the 1980s, the US economy experienced regional convergence: that is, the economies and incomes in poorer regions (like the US South) tended to grow more quickly than the economies of richer regions (like the US North). But in the 1980s, this pattern of regional convergence slowed down.

A couple of recent research papers have investigated the shift. Peter Ganong and Daniel W. Shoag have published “Why Has Regional Income Convergence in the U.S. Declined?” in the Journal of Urban Economics (November 2017, pp. 76-90). The paper isn’t freely available online, but some readers will have access through library subscriptions, and there is a July 2016 version freely available as a Hutchins Center working paper. Elisa Giannone adds some additional pieces to the puzzle in “Skilled-Biased Technical Change and Regional Convergence” (January 4, 2017), written as part of her doctoral dissertation.

 Both papers are tackling the same basic fact pattern, although with different data sources. Thus, Ganong and Shoag write:

“The convergence of per-capita incomes across US states from 1880 to 1980 is one of the most striking patterns in macroeconomics. For over a century, incomes across states converged at a rate of 1.8% per year. Over the past thirty years, this relationship has weakened dramatically, as shown in Figure 1. The convergence rate from 1990 to 2010 was less than half the historical norm, and in the period leading up to the Great Recession there was virtually no convergence at all.”

Here’s a figure to illustrate the pattern. In the left-hand panel, the horizontal axis measures the per capita income of states in 1940. The vertical axis shows the growth rate of state per capita income from 1940-1960. The downward-sloping line shows that the lower-income states in 1940 tended to have faster growth in the two decades that followed. The right-hand panel does the same exercise, but this time starting in 1990 and running through 2010. The downward-sloping but much flatter line in the right-hand panel shows that while the lower-income states in 1990 did grow a bit more quickly in the next two decades, the rate of convergence had become much slower.
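The pattern in these figures amounts to a simple cross-state regression: each state’s subsequent annual growth rate regressed on the log of its initial per capita income, with a more negative slope meaning faster convergence. A minimal sketch, using made-up state data:

```python
# A minimal sketch of the convergence regression behind the figures
# described above: regress each state's subsequent annual growth rate on
# the log of its initial per capita income. A negative slope indicates
# convergence (poorer states growing faster). The state incomes and
# growth rates below are made up for illustration.
import math

initial_income = [8000, 12000, 20000, 30000, 45000]   # hypothetical
annual_growth  = [0.030, 0.026, 0.021, 0.017, 0.013]  # hypothetical

x = [math.log(y) for y in initial_income]
y = annual_growth
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n

# Ordinary least squares slope: cov(x, y) / var(x)
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
print(f"convergence coefficient: {slope:.4f}")  # negative => convergence
```

Running the same regression on 1940 and 1990 starting points would reproduce the contrast in the two panels: a steep negative slope in the earlier period, a much flatter one later.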

A similar pattern shows up with what the authors call “directed migration,” or movement of people from lower-income to higher income states: that is, the amount of such movement has declined substantially in the last few decades. 

In her paper, Giannone is using city-level data, rather than state-level data, and she writes: “Between 1940 and 1980 the wage gap between poorer U.S. cities and richer ones was shrinking at an annual rate of roughly 1.4%. After 1980, however, there was no further regional convergence overall.”

A number of economic models predict convergence between regions. After all, people from lower-wage regions have an incentive to move to higher-wage regions, and some of them will do so. Conversely, firms have some incentive to invest in plant and equipment where the labor force and land are cheaper, and some of them will do so. Over time, these patterns should lead to a degree of convergence. What has changed?

To explain the slower convergence between regions, Ganong and Shoag offer an explanation based in patterns of migration and housing prices. They argue that the rise of housing prices in a number of high-income areas of the US has discouraged migration by lower-wage workers who don’t already live there. In their view, the US economy has shifted from a converging labor market across states, with higher levels of migration between states, to a market where we are sorting into two groups: those with higher incomes who live in areas with higher housing prices, and those with lower incomes who live in areas with lower housing prices.

Giannone offers a different explanation based on “skill-biased technical change,” which is the lingo for when a certain kind of technological progress tends to help those with higher skills (and wages) more than those with lower skills (and wages). Her evidence suggests that the decline in convergence happened only for college-educated workers, while wages for workers with less education continued to converge. Moreover, her evidence shows that in a certain number of cities, the premium paid for high-skill workers increased dramatically–even as more high-skill workers migrated to those cities. Her argument draws on “agglomeration effects,” the notion that many skilled workers in close proximity may generate an extremely high-wage urban economy (think, for example, of Silicon Valley).

These explanations based on technological change and housing markets are potentially complementary, and they do not exhaust the possibilities for why mobility across regions, and thus economic convergence, has declined. For example, last month I posted about David Schleicher’s argument in “Why More Americans Seem Stuck in Place” (December 7, 2017). He attributes much of the issue to state and local laws affecting housing, jobs, and personal finance. For example, he wrote:

“[S]tate and local (and a few federal) laws and policies have created substantial barriers to interstate mobility, particularly for lower-income Americans. Land-use laws and occupational licensing regimes limit entry into local and state labor markets. Differing eligibility standards for public benefits, public employee pensions, homeownership tax subsidies, state and local tax laws, and even basic property law doctrines inhibit exit from low-opportunity states and cities. Building codes, mobile home bans, location-based subsidies, legal constraints on knocking down houses, and the problematic structure of Chapter 9 municipal bankruptcy all limit the capacity of failing cities to shrink gracefully, directly reducing exit among some populations and increasing the economic and social costs of entry limits elsewhere.”  

Many countries have longstanding divergences in income between certain areas or regions: north and south, east and west, coastal and inland, rural and urban. The power of economic convergence can reduce such differences over time, but only on rare occasions does it eliminate or overturn them. However, mobility across regions isn’t just about economic convergence. It’s also about whether people in all areas have a sense of opportunity; whether people in lower-income areas know a few people who moved elsewhere and enjoyed it; whether people from higher-income areas have friends with family and personal connections in lower-income areas. Those who move across regions are a kind of social bungee-cord, so that the distance between geographic areas isn’t just measured by mileage on a map, but is bridged by human connections as well.

Attributing Economic Outcomes to Presidents: Year One of Trump

The US economy has performed well on a wide variety of measures in the year since President Trump was inaugurated on January 20, 2017.  The unemployment rate was 4.8% in January 2017 and 4.1% in December 2017. The unemployment rate among black Americans has fallen to its lowest level in the 45 years that regular statistics have been kept. The most recent estimates of GDP growth (which are preliminary and subject to later revision) show GDP growth of 3.1% in the second quarter of 2017 and 3.2% in the third quarter. Stock market indexes like the S&P 500 have risen dramatically.

The new year has brought a wave of economic good-news stories in the business press. Business investment spending seems to be on the rise. By one measure, US manufacturing in 2017 had its best year since 2004. News sources not known to be overly friendly to the Trump administration, like the New York Times, are reporting stories like “The Trump Effect: Business, Anticipating Less Regulation, Loosens Purse Strings.” US carbon emissions declined in 2017, which the US Energy Information Administration attributes in substantial part to fewer days of hot weather than in 2016–thus reducing the need for heavy use of air conditioning.

How much credit does President Trump deserve for this showing? Trump’s critics quickly point out that an enormous economy like the United States has considerable momentum. For example, Paul Krugman says that Trump gets “essentially zero” credit for US economic performance in 2017. I agree that the effect of presidents–and especially newly elected presidents–on the economy is often overrated. But the observation that a US president has only a modest effect on the economy during a first year of office often includes a heavy dose of partisan bias.

Maybe I missed it, but when Trump had been elected and was headed toward taking office in January 2017, I didn’t hear a lot of his critics say: “Well, the US economy is really set up for a strong year in 2017, but when it happens, Trump won’t deserve any credit.” Instead, there were predictions of grave difficulties ahead. Moreover, if the US economy had headed south early in 2017, with rising unemployment, sluggish output, stagnant investment, and a falling stock market, I strongly suspect that Trump critics like Krugman would place the blame on Trump’s election. And in that case, it would be Republicans and Trump sympathizers arguing that the new president had inherited an unexpectedly poor situation and should receive essentially zero blame.

It’s interesting to reflect back on previous presidencies, and apply the standard of “for the first year a president is in office, what happens (for good or bad) is largely what they inherited.”  For example, the Great Recession ended in June 2009, six months into President Obama’s first term. By this standard, the exit from the Great Recession should be credited to the economy and policies inherited from the previous Bush administration. The 2001 recession arrived in March of that year, just two months after President George W. Bush had assumed office. By this standard, that recession should be attributed to the economy and policies inherited from the Clinton administration. The incoming Clinton administration in 1993 inherited an economy with a falling unemployment rate, which by this standard should be attributed to the outgoing Bush administration.

I’m sympathetic to the argument that the first year of any presidential administration is essentially inherited, but this only sharpens the question of why the US economy performed so well in 2017. I’d suggest three possibilities.

First, the election campaign of 2016 seemed to involve all the candidates talking down the economy. But the national unemployment rate had fallen to 5.0% in September 2015, and has stayed at or below that level since then. Here are the quarterly rates of real GDP growth since 2000. The annualized growth rates of slightly more than 3% in Q2 and Q3 of 2017 are just fine, but they don’t really stand out from a number of previous quarters since about 2010. Overall, the US economy has considerable forward momentum at this point. It has continued to grow despite a succession of interest rate increases, and despite some terrible weather-related events in 2017. Growth across the main sectors of the economy has been quite balanced, rather than tilted toward a sector like housing or high-tech in a way that can lead to instability.

Second, although Americans like to think of our economy as an outpost remote from the rest of the world, we are in fact tied into the global economy in a number of ways. The World Bank argues that 10 years after the Great Recession, the global economy is at last again producing at its full potential. When an economic pattern happens across a number of nations at the same time, it’s wise to suspect that there is a common underlying force that goes beyond national policies. For example, the fact that income inequality has been rising across many nations of the world, not just the US, suggests that the reasons for that increase are deeper economic patterns that affect many countries, not specific national actions. Similarly, with unemployment rates falling and stock markets rising in many countries around the world during 2017, the causes are likely patterns that cross many countries, not specific national actions. My own sense is that many firms and banks had been holding their breath for a few years, waiting to be sure that the carnage of the Great Recession was behind them–and now they are stepping up.

Third, even with the US domestic momentum and full-potential output of the world economy taken into account, it does feel to me as if the Trump presidency was in some way an inflection point. The Trump administration’s two main economic policy changes in 2017 involve a much more hands-off regulatory environment and the recently released tax bill. On the merits of these policies, I’ve expressed some concerns about both. Regulatory reform can be a positive step for an economy, and the UK and Canada have shown some ways to carry it out, but reform that does more to sort out regulations that justify their costs from those that don’t is one thing, while just blocking and negating regulations willy-nilly is something else. The recent tax reform bill has many moving parts, but to me, the crucial question is the extent to which lower tax rates and other changes that benefit corporations pay off in a demonstrable surge in investment and wages. There is a bevy of recent anecdotes of companies announcing such changes, but in the next year or two, it will be interesting to check the follow-through on those promises–or whether most firms just cash the tax breaks, pay big bonuses to executives, and continue their investment and wage-paying along much the same trajectory.

But whatever the merits of these changes, it’s not possible that their effects would have been directly felt early in 2017. Instead, firms would need to be reacting to the expectation of an improved business climate in the future. Like a lot of economists, I mistrust using “business climate” or “business confidence” as an explanation. I’d prefer to be able to trace back “business confidence” to specific measurable parts of the economy, and to focus on those instead. But just because something is hard to measure doesn’t mean it isn’t real. It seems at least plausible that firms in a number of industries felt that the US business climate was not supportive, and interpreted Trump’s election as a sign that policies more likely to support profit-seeking firms were on their way.

In thinking about business confidence, it may also be that some of Trump’s important policy steps were the ones not taken. There were concerns that Trump might trigger a trade war. But while he has been hostile to additional trade agreements (as were the main Democratic party contenders for President in 2016), he did little to add impediments to trade in 2017.  Similarly, there was concern that Trump might replace Federal Reserve chair Janet Yellen with someone who didn’t have the necessary trust and connections in financial markets, but the selection of Jerome Powell seemed to calm those concerns.

There’s an old line commonly attributed to John Naisbitt (I don’t have a citation) that “leadership involves finding a parade and getting in front of it.” Politicians often excel at this kind of leadership, and in that spirit,  I don’t blame President Trump for claiming excessive credit for the good news of the US economy in 2017. When it comes to economic outcomes, presidents are a bit like the coaches of professional sports teams–that is, they often get an outsized share of the credit for success and the blame for failure.

Does Retirement Raise the Risk of Death?

That question is taken up in a recent research paper; the authors summarize their findings this way:

“Social Security eligibility begins at age 62, and approximately one third of Americans immediately claim at that age. We examine whether age 62 is associated with a discontinuous change in aggregate mortality, a key measure of population health. Using mortality data that covers the entire U.S. population and includes exact dates of birth and death, we document a robust two percent increase in male mortality immediately after age 62. The change in female mortality is smaller and imprecisely estimated. Additional analysis suggests that the increase in male mortality is connected to retirement from the labor force and associated lifestyle changes.”

This is a technical research paper that will only be accessible to the initiate, but you can get a good flavor of the results from a couple of figures. A first figure shows the patterns of claiming Social Security. There are various rules about the age at which different kinds of benefits can be claimed. For example, Social Security disability benefits can be claimed earlier, but access to what most people think of as the usual Social Security benefits starts at 62. The figure shows the step-change or discontinuity in the number of people claiming at 62, followed by a slower rise in claiming benefits and a smaller jump at age 65.

There’s also a step-change discontinuity in the death rate at age 62–but only for men, not for women. Mortality rates increase with age. As this figure shows, the mortality rate for women before and after age 62 is close to a smooth line. But for men, there’s a jump. 
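
The jump the authors document can be illustrated with a toy version of their discontinuity comparison: extrapolate the pre-62 trend in mortality to the cutoff, and measure how far the actual rate at 62 sits above it. The mortality figures below are invented:

```python
# Toy discontinuity estimate in the spirit of the figure described
# above: extrapolate the pre-62 trend in the mortality rate to age 62
# and compare it with the observed rate there. All rates are invented.
ages = [61.25, 61.50, 61.75, 62.00, 62.25, 62.50, 62.75]
mortality = [1.00, 1.01, 1.02, 1.05, 1.06, 1.07, 1.08]  # deaths per 100 person-years

below = [m for a, m in zip(ages, mortality) if a < 62]

# Linear trend before the cutoff: rise per quarter-year step.
slope_per_step = (below[-1] - below[0]) / (len(below) - 1)
predicted_at_62 = below[-1] + slope_per_step

observed_at_62 = mortality[ages.index(62.00)]
jump = observed_at_62 - predicted_at_62
print(f"jump at 62: {jump:.2f} deaths per 100 "
      f"(about {jump / predicted_at_62:.0%} above trend)")
```

The paper’s actual estimation is far more careful (exact dates of birth and death, population-wide data), but the logic is the same: mortality just above 62 sits above where the smooth age trend says it should be, for men.
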

The authors dig into data on behavioral and economic patterns to see if they can find an underlying reason for this difference. Proving cause-and-effect here is very difficult, as the authors admit, but some patterns do emerge in the data. For example, there doesn’t seem to be a gender gap in how income or health insurance coverage shifts at age 62. However, one difference is that men are more likely than women to stop working for pay when they start claiming Social Security. Men are more likely to start or increase smoking (even if they have never smoked before) and to become more sedentary. Men at age 62 also see a rise in deaths due to chronic obstructive pulmonary disease, lung cancer, and traffic accidents.

In short, if you are retiring, take up a habit other than smoking. And if you know someone who is retiring, invite them for a walk and give them a hug.

Measuring the "Free" Digital Economy

The digital economy provides a number of services for which the marginal price (given an internet connection) is zero: games like Candy Crush, email, web searches, access to information and entertainment, and many more. Because users are not paying an additional price for using these services, this form of economic output doesn’t seem to be captured by conventional economic statistics. Leonard Nakamura, Jon Samuels, and Rachel Soloveichik offer some ways of thinking about the question in “Measuring the ‘Free’ Digital Economy within the GDP and Productivity Accounts,” written for the Economic Statistics Centre of Excellence, an independent UK research center funded by Britain’s Office for National Statistics (December 2017, ESCoE Discussion Paper 2017-3).

Essentially, they propose that the economic value of “free” content can be measured by the marketing and advertising revenue that it generates. In other words, you “pay” for “free” content not with money, but by selling a slice of your attention to advertising. Thus, their approach is a practical application of the saying: “If you’re not paying for it, you’re the product.” They write:

“Free” digital content is pervasive. Yet, unlike the majority of output produced by the private business sector, many facets of the digital economy (e.g., Google, Facebook, Candy Crush) are provided without a market transaction between the final user of the content and the producer of the content. … Furthermore, because these technologies are so pervasive and have induced large changes in consumer behavior and business practice, these open questions have evolved into arguments that the exclusion of these technologies from the national accounts leads to a significant downward bias in official estimates of growth and productivity.

The first contribution of this paper is to provide an argument that, yes, it is possible to measure many aspects of the “free” digital economy via the lens of a production account. … To be clear at the outset, this approach does not provide a willingness to pay or welfare valuation of the “free” content. But this approach does provide an estimate of the value of the content that is consistent with national accounting estimates of production.

We model the provision of “free” content as a barter transaction. Consumers and businesses receive content in exchange for exposure to advertising or marketing. Our approach reduces to treating the provision of the “free” digital content as payment in kind for viewership services produced by households and businesses. Put differently, the national accounts currently ignore the role of households in the production of advertising and marketing. In our methodology, households are active producers of viewership services that they barter for consumer entertainment. …

We focus on two types of “free” content: advertising-supported media and marketing-supported information. Advertising-supported media includes digital content like Google search, but also more traditional content like print media and broadcast television. Marketing-supported information includes digital content like so-called freemium games for smartphones or recipes from BettyCrocker.com, but also more traditional content like print newsletters and audiovisual marketing. Conceptually, the barter transaction between the producer and user of “free” information is nearly identical to that with advertising-supported media. The main difference is that advertising viewership is almost exclusively “purchased” by media companies from the general public and then resold to outside companies. In contrast, the marketing viewership that is exchanged for “free” information is generally “purchased” by nonmedia companies from potential customers and used in-house.
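The barter treatment can be illustrated with a stylized accounting entry: households are credited with producing “viewership services” valued at the advertising revenue their attention generates, and with consuming “free” content of the same value. The full accounting in the paper is more involved; all figures here are invented:

```python
# Stylized version of the barter imputation described above: households
# produce "viewership services" valued at the advertising revenue their
# attention generates, and consume "free" content of equal value. The
# full accounting in the paper is more involved; all figures here are
# invented.
ad_revenue = 120.0          # billions: advertisers' payments for household attention
gdp_conventional = 20000.0  # billions: GDP before the imputation

# Household output (viewership services) and household consumption
# (free content) both rise by the same imputed value, so measured GDP
# rises once by that amount.
gdp_adjusted = gdp_conventional + ad_revenue
print(f"imputation raises measured GDP by {ad_revenue / gdp_conventional:.2%}")
```

Because advertising revenue is small relative to the whole economy, this already hints at the authors’ bottom line: the imputation nudges measured GDP, but only modestly.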

A number of interesting insights emerge from this approach. Here’s a figure showing total US advertising spending over time as a share of GDP. Total advertising revenue has been fairly stable over time, with the abrupt fall in print advertising being mostly offset by a rise in digital advertising. 
This figure shows total expenditures on marketing over time as a share of GDP. In this case, spending on print marketing has declined, but because of rising expenditures on digital marketing, total spending on marketing has risen by more than 1% of GDP in the last 20 years. 
Overall, measuring the value of the “free” digital economy has relatively little effect on output or trends in total factor productivity (TFP). The authors write (citations and footnotes omitted):

“We are particularly interested in the analysis of “free” digital content beginning in 1995 because that year has been previously identified as an inflection point in the production of information technology (IT) equipment. Moreover, that is when the Internet emerged as a significant source of “free” content. We calculate that, from 1995 to 2014, our experimental methodology applied to digital content annually raises nominal GDP growth by 0.036 percentage point, real GDP growth by 0.089 percentage point, and TFP growth by 0.048 percentage point. The growth of digital content is partially offset by a decrease in “free” print content like newspapers. From 1995 to 2014, all “free” content categories together annually raise nominal GDP growth by 0.033 percentage point, raise real GDP growth by 0.080 percentage point, and raise TFP growth by 0.073 percentage point. … These revised numbers slightly ameliorate the recent slowdown in economic growth – but not nearly enough to reverse the slowdown.”
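To get a feel for how small these adjustments are, here is a back-of-the-envelope calculation (my own arithmetic, not the authors'): compounding the +0.080 percentage point annual adjustment to real GDP growth over the 1995–2014 window, assuming 20 annual growth observations, gives the cumulative effect on the level of real GDP.

```python
# Back-of-the-envelope: cumulative effect on the level of real GDP of
# adding 0.080 percentage point to annual growth over 1995-2014.
# (My illustration; the paper reports only the annual growth adjustments.)
years = 20                   # assumed: 20 annual growth observations, 1995-2014
extra_growth = 0.080 / 100   # +0.080 percentage point per year, as a decimal

cumulative = (1 + extra_growth) ** years - 1
print(f"Real GDP level in 2014 about {cumulative:.1%} higher")  # roughly 1.6%
```

Even compounded over two decades, the revision raises the measured level of real GDP by only about one and a half percent, which is why the authors conclude it does little to reverse the measured growth slowdown.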

This analysis seems broadly sensible and correct to me: for a previous argument along similar lines, see “How Well Does GDP Measure the Digital Economy?” (July 19, 2016). But it comes with a warning that applies to all discussions of economic output, and is recognized repeatedly by the authors here.

GDP is measured by the monetary value of what is bought and sold, but it doesn’t measure consumer welfare (or “happiness” or “utility”) in a direct way. Thus, it’s possible that even if the gains to GDP from including “free” digital services are relatively small, those small gains are increasing consumer welfare and happiness by a much larger amount. Of course, one can make a similar argument that the monetary value of certain other outputs, from broadcast television back in the 1960s and 1970s to the availability of aspirin, is a lot less than the consumer welfare generated by those products. Measuring “the economy” is an exercise in adding up sales receipts, while thinking about the benefits and costs of economic patterns (as has long been recognized) is a much broader exercise.

The State of Play with Carbon Capture and Storage

Carbon capture and storage technology isn’t likely to be the silver bullet that slays climate change by itself. But it may well be a necessary and meaningful part of the package of policy responses. Akshat Rathi has written a series of readable articles for Quartz magazine (listed here) that give a useful sense of the state of the technology in this area, and its real-but-limited potential.

For example, in an overview article on December 4, 2017, “Humanity’s fight against climate change is failing. One technology can change that,” Rathi notes that before starting a year of research and writing on the topic, he was skeptical that carbon capture and storage could be cost-effective. However, he has come to argue that the technology may be both necessary and possible.

On the issue of necessity, Rathi writes: “The foremost authority on the matter, the Intergovernmental Panel on Climate Change, has modeled hundreds of possible futures to find economically optimal paths to achieving these goals, which require the world to bring emissions down to zero by around 2060. In virtually every IPCC model, carbon capture is absolutely essential – no matter what else we do to mitigate climate change.” (Rathi and David Yanovsky offer an interactive game to drive home this point here.)

On the issue of possibility, the evidence is scattered, and still closer to the proof-of-concept stage than to a full-fledged, ongoing industry. But some of these fledgling projects are intriguing. For example, Rathi discusses an operation in Iceland (discussed in more detail in an earlier article) which uses geothermal heat to capture carbon dioxide and inject it underground in a location where it combines with minerals to form solid rock – an operation which is an overall net subtraction of carbon from the atmosphere. Rathi notes:

“Since 2014, the plant has been extracting heat from underground, capturing the carbon dioxide released in the process, mixing it with water, and injecting it back down beneath the earth, about 700 meters (2,300 ft) deep. The carbon dioxide in the water reacts with the minerals at that depth to form rock, where it stays trapped. … In other words, Hellisheidi is now a zero-emissions plant that turns a greenhouse gas to stone. … Critics laughed at those pursuing a moonshot in “direct-air capture” only a decade ago. Now Climeworks is one of three startups – along with Carbon Engineering in Canada and Global Thermostat in the US – to have shown the technology is feasible. The Hellisheidi carbon-sucking machine is the second Climeworks has installed in 2017. If it continues to find the money, the startup hopes its installations will capture as much as 1% of annual global emissions by 2025, sequestering about 400 million metric tons of carbon dioxide per year.”

In another article, “A radical startup has invented the world’s first zero-emissions fossil-fuel power plant” (December 5, 2017), Rathi discusses a plant near Houston that generates electricity from natural gas. The process involves using “supercritical” carbon dioxide, under high temperatures and pressures. He writes:

“In the end, the Allam cycle is only slightly more efficient than typical combined-cycle systems. But it has the major added benefit of capturing all potential carbon dioxide emissions essentially for free. … Beyond the greenhouse-gas effect, carbon dioxide has some fascinating properties. At high pressure and temperature, for instance, it enters a state of matter where it’s neither a gas nor a liquid but has properties of both. It’s called a “supercritical fluid.” If you’ve ever had decaf coffee, you’ve likely been an unwitting customer of supercritical carbon dioxide, which is often used to extract caffeine from coffee beans with minimal changes to the taste.”

China, which leads the countries of the world in carbon emissions, has been experimenting with carbon capture and storage, without yet making a strong commitment to the technology, as Rathi explains here (and a 2015 report from the Asia Development Bank discusses here).

There are a variety of new projects and possible innovations either to capture carbon from emissions at lower cost, or to turn carbon dioxide into solids like soda ash, and other approaches. Carbon capture and storage isn’t yet a proven large-scale technology, but it’s a promising one.

For some previous posts on this topic, with links to various reports and articles, see:
