The Global Rise of Internet Access and Digital Government

What happens if you mix government and the digital revolution? The answer is Chapter 2 of the April 2018 IMF publication Fiscal Monitor, called “Digital Government.” The report offers some striking insights about access to digital technology in the global economy and how government may use this technology.

Access to digital services is rising fast in developing countries, especially in the form of mobile phones; access to mobile phones appears to be on its way to outstripping access to water, electricity, and secondary schools.

Of course, there are substantial portions of the world population not connected as yet, especially in Asia and Africa.

The focus of the IMF chapter is on how digital access might improve the basic functions of government: taxes and spending. On the tax side, for example, taxes levied at the border on international trade, or value-added taxes, can function much more simply as records become digitized. Income tax returns can be submitted electronically. The government can use electronic records to search for evidence of tax evasion and fraud.

On the spending side, many developing countries experience a situation in which those with the lowest income levels don’t receive government benefits to which they are entitled by law, either because they are disconnected from the government or because there is a “leakage” of government spending to others.  The report cites evidence along these lines:

“[D]igitalizing government payments in developing countries could save roughly 1 percent of GDP, or about $220 billion to $320 billion in value each year. This is equivalent to 1.5 percent of the value of all government payment transactions. Of this total, roughly half would accrue directly to governments and help improve fiscal balances, reduce debt, or finance priority expenditures, and the remainder would benefit individuals and firms as government spending would reach its intended targets (Figure 2.3.1). These estimates may underestimate the value of going from cash to digital because they exclude potentially significant benefits from improvements in public service delivery, including more widespread use of digital finance in the private sector and the reduction of the informal sector.”

I’ll also add that the IMF is focused on potential gains from digitalization, which is  fair enough. But this chapter doesn’t have much to say about potential dangers of overregulation, over-intervention, over-taxation, and even outright confiscation that can arise when certain governments gain extremely detailed access to information on sales and transactions. 

State and Local Spending on Higher Education

“Everyone” knows that the future of the US economy depends on a well-educated workforce, and on a growing share of students achieving higher levels of education. But state spending patterns on higher education aren’t backing up this belief. Here are some figures from the SHEF 2017: State Higher Education Finance report published last month by the State Higher Education Executive Officers Association.

The bars in this figure show per-student spending on public higher education by state and local governments from all sources of funding, with the lower blue part of each bar showing government spending and the upper green part showing spending financed by tuition revenue from students. The red line shows enrollments in public colleges, which have gone flat or even declined a little since the Great Recession.

This figure clarifies a pattern that is apparent from the green bars in the above figure: the share of spending on public higher education that comes from tuition has been rising. It was around 29-31% of total spending in the 1990s, up to about 35-36% in the middle of the first decade of the 2000s, and in recent years has been pushing 46-47%. That’s a big shift in a couple of decades.

The reliance on tuition for state public education varies wildly across states, with less than 15% of total spending on public higher ed coming from tuition in Wyoming and California, and 70% or more of total spending on public higher education coming from tuition in Michigan, Colorado, Pennsylvania, Delaware, New Hampshire, and Vermont.
There are lots of issues in play here: competing priorities for state and local spending, rising costs of higher education, the returns from higher education that encourage students (and their families) to pay for it, and so on. For the moment, I’ll just say that it doesn’t seem like a coincidence that the tuition share of public higher education costs is rising at the same time that enrollment levels are flat or declining. 

US Mergers and Antitrust in 2017

Each year the Federal Trade Commission and the Department of Justice Antitrust Division publish the Hart-Scott-Rodino Annual Report, which offers an overview of merger and acquisition activity and antitrust enforcement during the previous year. The Hart-Scott-Rodino legislation requires that all mergers and acquisitions above a certain size–now set at $80.8 million–be reported to the antitrust authorities before they occur. The report thus offers an overview of recent merger and antitrust activity in the United States.

For example, here’s a figure showing the total number of mergers and acquisitions reported. The total has been generally rising since the end of the Great Recession in 2009, and there was a substantial increase from 1,832 transactions in 2016 to 2,052 transactions in 2017. Just before the Great Recession, the number of merger transactions peaked at 2,201, so the current level is high but not unprecedented.

The report also provides a breakdown on the size of mergers. Here’s what it looked like in 2017. As the figure shows, there were 255 mergers and acquisitions of more than $1 billion. 

After a proposed merger is reported, the FTC or the US Department of Justice can issue a “second request” if it perceives that the merger might raise some anticompetitive issues. In the last few years, about 3-4% of reported mergers have received this “second request.”
This percentage may seem low, but it’s not clear what level is appropriate. After all, the US government isn’t second-guessing whether mergers and acquisitions make sense from a business point of view. It’s only asking whether the merger might reduce competition in a substantial way. If two companies that aren’t directly competing with each other combine, or if two companies combine in a market with a number of other competitors, the merger/acquisition may turn out well or poorly from a business point of view, but it is less likely to raise competition issues.
Teachers of economics may find the report a useful place to come up with some recent examples of antitrust cases, and there are also links to some of the underlying case documents and analysis (which students can be assigned to read). Here are a few examples from 2017 cases of the Antitrust Division at the US Department of Justice and the Federal Trade Commission. In the first one, a merger was blocked because it would have reduced competition for disposal of low-level radioactive waste. In the second, a merger between two movie theater chains was allowed only after a number of conditions aimed at preserving competition in local markets were met. The third case involved a proposed merger between the two largest providers of paid daily fantasy sports contests, and the two firms decided to drop the merger after it was challenged.

In United States v. Energy Solutions, Inc., Rockwell Holdco, Inc., Andrews County Holdings, Inc. and Waste Control Specialists, LLC, the Division filed suit to enjoin Energy Solutions, Inc. (ES), a wholly-owned subsidiary of Rockwell Holdco, Inc., from acquiring Waste Control Specialists LLC (WCS), a wholly-owned subsidiary of Andrews County Holdings, Inc. The complaint alleged that the transaction would have combined the only two licensed commercial low-level radioactive waste (LLRW) disposal facilities for 36 states, Puerto Rico and the District of Columbia. There are only four licensed LLRW disposal facilities in the United States. Two of these facilities, however, did not accept LLRW from the relevant states. The complaint alleged that ES’s Clive facility in Utah and WCS’s Andrews facility in Texas were the only two significant disposal alternatives available in the relevant states for the commercial disposal of higher-activity and lower-activity LLRW. At trial, one of the defenses asserted by the defendants was that WCS was a failing firm and, absent the transaction, its assets would imminently exit the market. The Division argued that the defendants did not show that WCS’s assets would in fact imminently exit the market given its failure to make good-faith efforts to elicit reasonable alternative offers that might be less anticompetitive than its transaction with ES. On June 21, 2017, after a 10-day trial, the U.S. District Court for the District of Delaware ruled in favor of the Division. …

In United States v. AMC Entertainment Holdings, Inc. and Carmike Cinemas, Inc., the Division challenged AMC Entertainment Holdings, Inc.’s proposed acquisition of Carmike Cinemas, Inc. AMC and Carmike were the second-largest and fourth-largest movie theatre chains, respectively, in the United States. Additionally, AMC owned significant equity in National CineMedia, LLC (NCM) and Carmike owned significant equity in SV Holdco, LLC, a holding company that owns and operates Screenvision Exhibition, Inc. NCM and Screenvision are the country’s predominant preshow cinema advertising networks, covering over 80 percent of movie theatre screens in the United States. The complaint alleged that the proposed acquisition would have provided AMC with direct control of one of its most significant movie theatre competitors, and in some cases, its only competitor, in 15 local markets in nine states. As a result, moviegoers likely would have experienced higher ticket and concession prices and lower quality services in these local markets. The complaint further alleged that the acquisition would have allowed AMC to hold sizable interests in both NCM and Screenvision post-transaction, resulting in increased prices and reduced services for advertisers and theatre exhibitors seeking preshow services. On December 20, 2016, a proposed final judgment was filed simultaneously with the complaint settling the lawsuit. Under the terms of the decree, AMC agreed to (1) divest theatres in the 15 local markets; (2) reduce its equity stake in NCM to 4.99 percent; (3) relinquish its seats on NCM’s Board of Directors and all of its other governance rights in NCM; (4) transfer 24 theatres with a total of 384 screens to the Screenvision cinema advertising network; and (5) implement and maintain “firewalls” to inhibit the flow of competitively sensitive information between NCM and Screenvision. The court entered the final judgment on March 7, 2017. …

In DraftKings/FanDuel, the Commission filed an administrative complaint challenging the merger of DraftKings and FanDuel, two providers of paid daily fantasy sports contests. The Commission’s complaint alleged that the transaction would be anticompetitive because the merger would have combined the two largest daily fantasy sports websites, which controlled more than 90 percent of the U.S. market for paid daily fantasy sports contests. The Commission alleged that consumers of paid daily fantasy sports were unlikely to view season-long fantasy sports contests as a meaningful substitute for paid daily fantasy sports, due to the length of season-long contests, the limitations on number of entrants, and several other issues. Shortly after the Commission filed its complaint, the parties abandoned the merger on July 13, 2017, and the Commission dismissed its administrative complaint.

Should the 5% Convention for Statistical Significance be Dramatically Lower?

For the uninitiated, the idea of “statistical significance” may seem drier than desert sand. But it’s how research in the social sciences and medicine decides what findings are worth paying attention to as plausibly true–or not. For that reason, it matters quite a bit. Here, I’ll sketch a quick overview for beginners of what statistical significance means, and why there is controversy among statisticians and researchers over what research results should be regarded as meaningful or new.
To gain some intuition, consider an experiment to decide whether a coin is equally balanced, or whether it is weighted toward coming up “heads.” You toss the coin once, and it comes up heads. Does this result prove, in a statistical sense, that the coin is unfair? Obviously not. Even a fair coin will come up heads half the time, after all.
You toss the coin again, and it comes up “heads” again. Do two heads in a row prove that the coin is unfair? Not really. After all, if you toss a fair coin twice in a row, there are four possibilities: HH, HT, TH, TT. Thus, two heads will happen one-fourth of the time with a fair coin, just by chance.

What about three heads in a row? Or four or five or six or more? You can never completely rule out the possibility that a string of heads, even a long string of heads, could happen entirely by chance. But as you get more and more heads in a row, a finding that is all heads, or mostly heads, becomes increasingly unlikely. At some point, it becomes very unlikely indeed.  

Thus, a researcher must make a decision. At what point are the results sufficiently unlikely to have happened by chance, so that we can declare that the results are meaningful? The conventional answer is that if the observed result had a 5% probability or less of happening by chance, then it is judged to be “statistically significant.” Of course, real-world questions of whether a certain intervention in a school will raise test scores, or whether a certain drug will help treat a medical condition, are a lot more complicated to analyze than coin flips. Thus, practical researchers spend a lot of time trying to figure out whether a given result is “statistically significant” or not.
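To make the coin-flip arithmetic concrete, here is a minimal sketch of my own (not taken from the articles discussed below) showing how the probability of a surprising run of heads compares with the conventional 5% cutoff. The choice of 10 tosses and 9 heads is purely an illustrative assumption.

    from math import comb

    def prob_at_least_k_heads(n, k, p=0.5):
        # Probability of k or more heads in n tosses of a coin that lands heads with probability p.
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    # Five heads in a row from a fair coin: (1/2)^5, about 3.1%, already below the 5% cutoff.
    print(prob_at_least_k_heads(5, 5))      # 0.03125

    # Nine or more heads in 10 tosses of a fair coin: about 1.1%.
    p_value = prob_at_least_k_heads(10, 9)
    print(p_value, p_value < 0.05)          # roughly 0.0107, True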

Several questions arise here.

1) Why 5%? Why not 10%? Or 1%? The short answer is “tradition.” A couple of years ago, the American Statistical Association put together a panel to reconsider the 5% standard.

Ronald L. Wasserstein and Nicole A. Lazar wrote a short article, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” in The American Statistician (2016, 70:2, pp. 129-132). (A p-value is the probability that a result at least as extreme as the one observed would arise by chance alone, if there were no real effect; it is the usual yardstick for statistical significance.) They started with this anecdote:

“In February 2014, George Cobb, Professor Emeritus of Mathematics and Statistics at Mount Holyoke College, posed these questions to an ASA discussion forum:

Q: Why do so many colleges and grad schools teach p = 0.05?
A: Because that’s still what the scientific community and journal editors use.
Q: Why do so many people still use p = 0.05?
A: Because that’s what they were taught in college or grad school.

Cobb’s concern was a long-worrisome circularity in the sociology of science based on the use of bright lines such as p < 0.05: “We teach it because it’s what we do; we do it because it’s what we teach.”

But that said, there’s nothing magic about the 5% threshold. It’s fairly common for academic papers to report results that are statistically significant using a threshold of 10%, or 1%. Confidence in a statistical result isn’t a binary, yes-or-no situation, but rather a continuum.

2) There’s a difference between statistical confidence in a result, and the size of the effect in the study.  As a hypothetical example, imagine a study which says that if math teachers used a certain curriculum, learning in math would rise by 40%. However, the study included only 20 students.

In a strict statistical sense, the result may not be statistically significant, in the sense that with a fairly small number of students, and the complexities of looking at other factors that might have affected the results, it could have happened by chance. (This is similar to the problem that if you flip a coin only two or three times, you don’t have enough information to state with statistical confidence whether it is a fair coin or not.) But it would seem peculiar to ignore a result that shows a large effect. A more natural response might be to design a bigger study with more students, and see if the large effects hold up and are statistically significant in a bigger study.

Conversely, one can imagine a hypothetical study which uses results from 100,000 students, and finds that if math teachers use a certain curriculum, learning in math would rise by 4%. Let’s say that the researcher can show that the effect is statistically significant at the 5% level–that is, there is less than a 5% chance that this rise in math performance happened by chance. It’s still true that the rise is fairly small in size. 

In other words, it can sometimes be more encouraging to discover a large result in which you do not have full statistical confidence than to discover a small result in which you do have statistical confidence.
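To see this tension in miniature, here is a rough back-of-the-envelope sketch of my own (not from any study cited in this post) comparing an estimated effect to its standard error, which shrinks with the square root of the sample size. The standard deviation of individual outcomes is assumed to be 1.0 in relative terms purely for illustration.

    from math import sqrt

    def z_statistic(effect, sd, n):
        # Estimated effect divided by its standard error, sd / sqrt(n).
        return effect / (sd / sqrt(n))

    # Hypothetical: a 40% gain measured on only 20 students.
    print(z_statistic(0.40, 1.0, 20))        # about 1.79, short of the usual 1.96 cutoff for 5% significance

    # Hypothetical: a 4% gain measured on 100,000 students.
    print(z_statistic(0.04, 1.0, 100_000))   # about 12.6, far past the cutoff despite the much smaller effect

The point of the sketch is only that statistical confidence and the practical size of an effect are different dimensions: the second result is far more “significant” even though the effect is one-tenth as large.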

3) When a researcher knows that 5% is going to be the dividing line between a result being treated as meaningful or not meaningful, it becomes very tempting to fiddle around with the calculations (whether explicitly or implicitly) until you get a result that seems to be statistically significant.

As an example, imagine a study that considers whether early childhood education has positive effects on outcomes later in life. Any researcher doing such a study will be faced with a number of choices. Not all early childhood education programs are the same, so one may want to adjust for factors like the teacher-student ratio, training received by students, amount spent per student, whether the program included meals, home visits, and other factors. Not all children are the same, so one may want to look at factors like family structure, health,  gender, siblings, neighborhood, and other factors. Not all later life outcomes are the same, so one may want to look at test scores, grades, high school graduation rates, college attendance, criminal behavior, teen pregnancy, and employment and wages later in life.

But a problem arises here. If a researcher hunts through all the possible factors, and all the possible combinations of all the possible factors, there are literally scores or hundreds of possible connections. Just by blind chance, some of these connections will appear to be statistically significant. It’s similar to the situation where you do 1,000 repetitions of flipping a coin 10 times. In those 1,000 repetitions, heads is likely to come up 8 or 9 times out of 10 tosses at least a few times. But that doesn’t prove the coin is unfair! It just proves you tried over and over until you got a specific result.
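A small simulation, again my own illustration rather than anything from the studies discussed here, shows how easily “significant-looking” runs of heads appear once you make enough attempts.

    import random

    random.seed(0)  # fix the seed so the illustration is reproducible

    # 1,000 repetitions of flipping a fair coin 10 times; count how often a
    # single repetition produces 8 or more heads purely by chance.
    lucky_runs = 0
    for _ in range(1000):
        heads = sum(random.random() < 0.5 for _ in range(10))
        if heads >= 8:
            lucky_runs += 1

    print(lucky_runs)  # typically around 55, since the chance of 8+ heads in 10 tosses is about 5.5%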

Modern researchers are very aware of the danger that when you hunt through lots of possibilities, then just by chance, a random scattering of the results will appear to be statistically significant. Nonetheless, there are some tell-tale signs that this research strategy of hunting for a result that looks statistically meaningful may be all too common. For example, one warning sign is when other researchers try to replicate the result using different data or statistical methods, but fail to do so. If a result only appeared statistically significant by random chance in the first place, it’s likely not to appear at all in follow-up research.

Another warning sign is when you look at a bunch of published studies in a certain area (like how to improve test scores, how a minimum wage affects employment, or whether a drug helps with a certain medical condition) and you keep seeing that the finding is statistically significant at almost exactly the 5% level, or just a little less. In a large group of unbiased studies, one would expect to see the statistical significance of the results scattered all over the place: some 1%, 2-3%, 5-6%, 7-8%, and higher levels. When all the published results are bunched right around 5%, it makes one suspicious that the researchers have put their thumb on the scales in some way to get a result that magically meets the conventional 5% threshold.

The problem that arises is that research results are being reported as meaningful in the sense that they had a 5% or less probability of happening by chance, when in reality, that standard is being evaded by researchers. This problem is severe and common enough that a group of 72 researchers recently wrote: “Redefine statistical significance: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries,” which appeared in Nature Human Behaviour (Daniel J. Benjamin et al., January 2018, pp. 6-10). One of the signatories, John P.A. Ioannidis, provides a readable overview in “Viewpoint: The Proposal to Lower P Value Thresholds to .005” (Journal of the American Medical Association, March 22, 2018, pp. E1-E2). Ioannidis writes:

“P values and accompanying methods of statistical significance testing are creating challenges in biomedical science and other disciplines. The vast majority (96%) of articles that report P values in the abstract, full text, or both include some values of .05 or less. However, many of the claims that these reports highlight are likely false. Recognizing the major importance of the statistical significance conundrum, the American Statistical Association (ASA) published a statement on P values in 2016. The status quo is widely believed to be problematic, but how exactly to fix the problem is far more contentious. … Another large coalition of 72 methodologists recently proposed a specific, simple move: lowering the routine P value threshold for claiming statistical significance from .05 to .005 for new discoveries. The proposal met with strong endorsement in some circles and concerns in others. P values are misinterpreted, overtrusted, and misused. … Moving the P value threshold from .05 to .005 will shift about one-third of the statistically significant results of past biomedical literature to the category of just ‘suggestive.’”

This essay is published in a medical journal, and is thus focused on biomedical research. The theme is that a result with 5% significance can be treated as “suggestive,” but for a new idea to be accepted, the threshold level of statistical significance should be 0.5%–that is, the probability of the outcome happening by random chance should be 0.5% or less.

The hope of this proposal is that researchers will design their studies more carefully and use larger sample sizes. Ioannidis writes: “Adopting lower P value thresholds may help promote a reformed research agenda with fewer, larger, and more carefully conceived and designed studies with sufficient power to pass these more demanding thresholds.” Ioannidis is quick to admit that this proposal is imperfect, but argues that it is practical and straightforward–and better than many of the alternatives.

The official “ASA Statement on Statistical Significance and P-Values” which appears with the Wasserstein and Lazar article includes a number of principles worth considering. Here are three of them:

Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold. …
A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. …
By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

Whether you are doing the statistics yourself, or just a consumer of statistical studies produced by others, it’s worth being hyper-aware of what “statistical significance” means, and doesn’t mean.

For those who would like to dig a little deeper, some useful starting points might be the six-paper symposium on “Con out of Economics” in the Spring 2010 issue of the Journal of Economic Perspectives, or the six-paper symposium on “Recent Ideas in Econometrics” in the Spring 2017 issue. 

US Lagging in Labor Force Participation

Not all that long ago, in 1990, the share of “prime-age” workers, ages 25-54, who were participating in the labor force was basically the same in the United States, Germany, Canada, and Japan. But since then, labor force participation in this group has fallen in the United States while rising in the other countries. Mary C. Daly lays out the pattern in “Raising the Speed Limit on Future Growth” (Federal Reserve Bank of San Francisco Economic Letter, April 2, 2018).

Here’s a figure showing the evolution of labor force participation in the 25-54 age bracket in these four economies. Economists often like to focus on this group because it avoids differences in rates of college attendance (which can strongly influence labor force participation at younger ages) and differences in old-age pension systems and retirement patterns (which can strongly influence labor force participation at older ages).

U.S. labor participation diverging from international trends

Daly writes (citations omitted):

“Which raises the question–why aren’t American workers working?

“The answer is not simple, and numerous factors have been offered to explain the decline in labor force participation. Research by a colleague from the San Francisco Fed and others suggests that some of the drop owes to wealthier families choosing to have only one person engaging in the paid labor market …

“Another factor behind the decline is ongoing job polarization that favors workers at the high and low ends of the skill distribution but not those in the middle. … Our economy is automating thousands of jobs in the middle-skill range, from call center workers, to paralegals, to grocery checkers. A growing body of research finds that these pressures on middle-skilled jobs leave a big swath of workers on the sidelines, wanting work but not having the skills to keep pace with the ever-changing economy.

“The final and perhaps most critical issue I want to highlight also relates to skills: We’re not adequately preparing a large fraction of our young people for the jobs of the future. Like in most advanced economies, job creation in the United States is being tilted toward jobs that require a college degree. Even if high school-educated workers can find jobs today, their future job security is in jeopardy. Indeed by 2020, for the first time in our history, more jobs will require a bachelor’s degree than a high school diploma.

“These statistics contrast with the trends for college completion. Although the share of young people with four-year college degrees is rising, in 2016 only 37% of 25- to 29-year-olds had a college diploma. This falls short of the progress in many of our international competitors, but also means that many of our young people are underprepared for the jobs in our economy.”

On this last point, my own emphasis would differ from Daly’s. Yes, steps that aim to increase college attendance over time are often worthwhile. But as she notes, only 37% of American 25-29 year-olds have a college degree. A dramatic rise in this number would take an extraordinary social effort. Among other things, it would require a dramatic expansion in the number of those leaving high school who are willing and ready to benefit from a college degree, together with a vast expansion of enrollments across the higher education sector. Even if the share of college graduates could be increased by one-third or one-half–which would be a very dramatic change–a very large share of the population would still not have a college degree.

It seems to me important to separate the ideas of “college” and “additional job-related training.” It’s true that the secure and decently-paid jobs of the future will typically require additional training past high school. At least in theory, that training could be provided in many ways: on-the-job training, apprenticeships, focused short courses certifying competence in a certain area, and so on. For some students, a conventional college degree will be a way to build up these skills. However, there are also a substantial number of students who are unlikely to flourish in a conventional classroom-based college environment, and a substantial number of jobs where a traditional college classroom doesn’t offer the right preparation. College isn’t the right job prep for everyone. We need to build up other avenues for US workers to acquire the job-related skills they need, too.

US Travel Ban Forces Atiku To Sell Off Mansion



Atiku Abubakar, former vice-president of Nigeria, has allegedly sold off his mansion located in Maryland, U.S. Atiku reportedly co-owned the mansion with his American wife, Jennifer Douglas, who was mentioned in a 2015 corruption case involving her husband and former U.S. congressman, William Jefferson. The house was searched by the FBI at the time. Atiku denied wrongdoing in the case, but has not visited the U.S. since then.


The seven-bedroom cream-coloured single family brick house on 9731 Sorrel Ave, Potomac, Maryland, was reportedly disposed of for $2.95 million.

It “was originally listed for $3.25 million on Zillow and other online real estate websites on January 25, 2018 but was eventually sold for a pending offer of $2.95 million on February 26, 2018 after an online auction”, a report claimed.

Atiku and Douglas bought the 7,131-square-foot house in December 1999 for $1.75 million.

The mansion is described as “a colonial-style building that sits in the middle of a 2.3 acres premises of lush green trees”. It is said to have “a total of 21 rooms, multiple terraces which are said to be ideal for outdoor parties, a pool sauna, a gazebo, a gourmet kitchen and an outdoor swimming pool”.

The house has long been abandoned, at least since Abubakar left office as vice president in 2007. A source told the newspaper that after 2007, house service staff lived in the mansion for some years, but the place was later locked up, and had remained unoccupied.

Atiku’s wife, Douglas, is said to owe $27,913.77 in tax on the property.

Atiku’s spokesperson, Paul Ibe, confirmed the sale of the mansion.

“Atiku Abubakar is a successful businessman who has a long history of real estate investments,” he told PREMIUM TIMES.

“The U.S. home was simply one of such numerous investments. The home was no longer serving the purpose for which it was bought. Consequently, it has to be put up for sale via open auction, a growing and preferred method of selling high end properties. The proceeds thereof will be deployed to business aimed at creating jobs,” he added.

Abubakar has consistently denied being barred from the U.S.


Rebalancing the Economy Toward Workers and Wages

Economists have recognized for a long time that in negotiations between employers and workers, the employer has a built-in advantage. John Bates Clark, probably the most eminent American economist of his time, put it this way in his 1907 book, Essentials of Economic Theory:

“In the making of the wages contract the individual laborer is at a disadvantage. He has something which he must sell and which his employer is not obliged to take, since he [that is, the employer] can reject single men with impunity. … A period of idleness may increase this disability to any extent. The vender of anything which must be sold at once is like a starving man pawning his coat–he must take whatever is offered.”

Are there some ways to tip the balance a bit more toward workers? Jay Shambaugh and Ryan Nunn have edited an ebook, Revitalizing Wage Growth: Policies to Get American Workers a Raise, with nine chapters on causes of wage stagnation and policy proposals to address it (published by the Hamilton Project at the Brookings Institution, February 2018, full Table of Contents is appended below). Given that the US unemployment rate has now been 5% or less for more than two years, since September 2016, the question of wage growth is rightfully assuming high importance.

In an overview essay, “How Declining Dynamism Affects Wages,” Jay Shambaugh, Ryan Nunn, and Patrick Liu point out that labor markets have become less dynamic. A dynamic market is always experiencing gross creation of jobs and gross losses of jobs, which together make up net job creation. But gross job creation is down. So is employer-to-employer movement. So are the levels of start-up firms, which are often a source of productivity growth and new job hires.

I found several of the chapters about improving worker bargaining power to be especially interesting.

For example, Matt Marx argues for “Reforming Non-Competes to Support Workers.”

“Today, non-competes are widely used in a variety of occupations, especially among knowledge workers and executives. Prescott, Bishara, and Starr (2016) estimate that 18 percent of respondents to an online survey across a broad set of occupations had signed a non-compete for their current job. Looking specifically at engineers, Marx (2011) finds that 43 percent of workers had signed a non-compete in the past 10 years. Executives were even more likely to have signed: Garmaise (2011) finds that at least 70 percent of senior executives in public companies were bound by a non-compete. …

“Perhaps the most well-established effect in the non-compete literature is that such employment agreements discourage workers from changing jobs. … If non-compete agreements discourage workers from changing jobs, this restriction circumscribes the effective market for their skills. With fewer firms to bid for their labor, they might receive fewer and less-attractive job offers. … To date, the only published paper to investigate the impact of non-compete agreements on wages is Garmaise (2011). He finds that executives are paid less in states that have adopted stricter noncompete policies. … [T]alent flows less within states with tighter non-compete laws. Researchers have also examined labor flows across states. Marx, Singh, and Fleming (2015) find that Michigan’s rule change providing for enforcement of non-compete agreements resulted in a brain drain of talent out of the state. Specifically, technical workers left for other states with less-strict enforcement of non-competes. Worse, this brain drain due to non-compete agreements is greater for the most highly skilled workers. … Non-competes act as a brake on entrepreneurial activity, both by blocking the emergence of new companies and by making it harder for them to grow. … Non-competes not only make it more difficult to start a company, but also make it harder to grow a start-up.”

In “A Proposal for Protecting Low-Income Workers from Monopsony and Collusion,” Alan B. Krueger and Eric A. Posner argue:

“New evidence that labor markets are being rendered uncompetitive by large employers suggests that the time has come to strengthen legal protections for workers. Labor market collusion or monopsonization–the exercise of employer market power in labor markets–may contribute to wage stagnation, rising inequality, and declining productivity in the American economy, trends which have hit low-income workers especially hard. To address these problems, we propose three reforms. First, the federal government should enhance scrutiny of mergers for adverse labor market effects. Second, state governments should ban non-compete covenants that bind low-wage workers. Third, no-poaching arrangements among establishments that belong to a single franchise company should be prohibited.”

Benjamin Harris writes on the theme “Information Is Power: Fostering Labor Market Competition through Transparent Wages”:

“In the U.S. labor market, information on wages and compensation is decidedly asymmetric. Employees frequently do not know how their pay compares to comparable workers, either within or outside their firm, and are reluctant to seek this knowledge out of fear of retaliation, social norms, or general inertia. In stark contrast, many employers use compensation surveys to know precisely where their workers fall in the distribution of wages. In other markets characterized by asymmetric information, the entity with more complete information maintains a distinct advantage (Hart and Holmström 1987); the U.S. labor market is likely no different. …

“This paper puts forth an aggressive agenda to promote better wage transparency through a five-part proposal. … The first pillar of the proposal advocates for states to adopt comprehensive laws, such as those found in Michigan, both to protect workers from employer retaliation for discussing wages, and to discourage employers from asking workers to waive their right to disclose pay. … The second pillar of the proposal addresses the interrupted progress of a 2016 action by the EEOC that would require large companies to more comprehensively report their compensation data. The action … would have required companies with more than 100 workers to report aggregated wage data by demographic characteristics. … The third pillar … would reform the safe harbor guidelines, which protect firms from claims of wage collusion, to require that companies share any commissioned compensation survey data with workers. … The fourth pillar explicitly prohibits employers from asking about prior pay levels during the hiring process unless they provide data on the pay of comparable workers. … The fifth pillar … calls for Congress to appropriate a small amount of funds for the U.S. Department of Labor (DOL) to study the impact of wage transparency on compensation levels.”

___________________________
Here’s the full Table of Contents:

Introduction
by Jay Shambaugh, Ryan Nunn, and Becca Portman

Section I: Understanding Wage Stagnation and Its Policy Solutions

Chapter 1: How Declining Dynamism Affects Wages
by Jay Shambaugh, Ryan Nunn, and Patrick Liu

Chapter 2: Returning to Education: The Hamilton Project on Human Capital and Wages
by Jay Shambaugh, Lauren Bauer, and Audrey Breitwieser


Section II: Policies to Boost Wages through Enhanced Productivity

Chapter 3: Stagnation in Lifetime Incomes: An Overview of Trends and Potential Causes
by Fatih Guvenen

Chapter 4: Coming and Going: Encouraging Geographic Mobility at College Entry and Exit to Lift Wages
by Abigail Wozniak

Chapter 5: The Importance of Strong Labor Demand
by Jared Bernstein

Section III: Policies to Boost Wages through Strengthened Worker Bargaining Power

Chapter 6: Reforming Non-Competes to Support Workers
by Matt Marx

Chapter 7: A Proposal for Protecting Low-Income Workers from Monopsony and Collusion
by Alan Krueger and Eric Posner

Chapter 8: Information is Power: Fostering Labor Market Competition through Transparent Wages
by Benjamin Harris

Chapter 9: Strengthening Labor Market Standards and Institutions to Promote Wage Growth
by Heidi Shierholz

Bernanke Interviews Yellen: Fed Chair as Interior Design Consultant, When Mozilo Switched Regulators, Deficit-Cutting Stimulus, and More

Earlier this week, the Hutchins Center on Fiscal and Monetary Policy at the Brookings Institution hosted “A Fed duet: Janet Yellen in conversation with Ben Bernanke” (February 27, 2018). Video, audio and a transcript are all available here.  I’ll focus here on a few of  Yellen’s comments that caught my eye.

For those not familiar with her career, Janet Yellen was a well-known UC-Berkeley economist in the 1980s and into the 1990s, when her career took a turn toward government roles. She was a member of the Federal Reserve Board of Governors from 1994-97; Chair of Clinton’s Council of Economic Advisers from 1997-99; President of the Federal Reserve Bank of San Francisco from 2004-2010; Vice-Chair of the Fed from 2010-2014, and then Chair of the Fed from 2014-2018. In short, she’s had a front-row seat for US economic policy-making for most of the last quarter-century.

Building Consensus as the chair of the Fed. The Federal Open Market Committee, the policy-making part of the Federal Reserve, doesn’t literally operate by consensus. But there has traditionally been an effort to try to build at least a rough consensus, and members have often been willing to coalesce behind a policy option that they found acceptable, even if it wasn’t necessarily their first choice. Yellen describes her process of managing these meetings in this way:

“And initially, at meetings we would have a lot of options on the table and there would be go-arounds and people would express their views. The options–there were people who would favor options that didn’t get a lot of support and they would tend to see that. You know, I love Option Number 9, but I was pretty much alone in doing that. And what I found was it was great. Over time people who favored options for which there wasn’t a lot of support tended to shift their support to options where there was greater support. And gradually, we narrowed things down to one and got complete agreement. 

“So I guess what I do is I often compare the job of managing the committee to the issue a designer would have to face who is trying to decide what’s the right color to paint a room. You have 19 people around the table, and you want to come up with a decision we can all live with on what color to paint the room. And we’d go around the table. Ben, what would you like? You think baby blue is just absolutely ideal. David, what do you think? Chartreuse you think is a lovely color. (Laughter) And we go around the room like that. And the question is, are we ever going to converge? 

“I would feel my job is get everybody to see that off-white is not a bad alternative. (Laughter) As brilliant as your choice was, maybe you could live with off-white, and it’s not so bad. And we can converge on that and it’s going to function just fine and maybe we can agree. So I felt I was often trying to get the committee to coalesce and decide. We’d come up with a good option that we could all agree on.”

The zero lower bound is likely to be a repeated problem in the future. The current policy of the Fed is to aim at an inflation rate of 2%. The current projection for the federal funds interest rate in the long run is that it will be 2.75%. Thus, the next time the Fed wants to cut interest rates, it is going to reach negative real rates very quickly, and run into the zero lower bound quite soon.

“[T]here is a problem and it’s a problem that I think I didn’t recognize when we chose 2 percent as a target [for inflation], how serious it would be. There had been only one country at that time, Japan, that hit the zero lower bound. That seemed like a rare circumstance. And since then, many advanced countries have faced the zero lower bound. There’s now growing agreement that somehow the new normal going forward is a world where productivity growth has been low. Perhaps we’ll be lucky and it’ll rise, but it has been low. We have aging populations and a strong demand for safe assets. It looks like interest rates, long and short, had generally been trending down among advanced countries even before the financial crisis. And I think there is now reason to believe that the new normal for the U.S. and many advanced economies will be on a lower average level of short-term rates. 

“The FOMC in their December projections projected the longer-run normal level of the funds rate at 2.75, which is just three-quarters of a percent in real terms. And if that’s right and there are estimates of the equilibrium real rate that are even lower than that, zero bound episodes can be much more frequent. This means that for monetary policy, at least short-term rates have much less scope to be used to stabilize the economy. And I think the first thing to recognize is that this really is a problem. It behooves policymakers and researchers more generally to think about are there changes we can make to the monetary policy framework that would be helpful in dealing with that?”

When Angelo Mozilo Switched Regulators. Angelo Mozilo started the mortgage lender Countrywide, which was heavily involved in subprime lending. In 2010, after being booted from the company, he signed an agreement with the Securities and Exchange Commission where he did not need to admit wrongdoing, but did pay fines of $67.5 million while agreeing to a lifetime ban “from ever again serving as an officer or director of a publicly traded company.”  Yellen tells the story of dealing with Mozilo when she was at the San Francisco Fed–and learning that Mozilo had decided to switch regulators.

“Our supervisory folks that I met with were alerting me to underwriting practices that were a huge concern. They were telling me about low-doc and no-doc loans, about the rising prevalence of ninja loans, no income/no jobs/no asset-type loans. We supervised Countrywide for a while and looked at their mortgage business which was growing enormously. I met pretty regularly with Angelo Mozilo. And the San Francisco Fed was quite concerned about what was going on. We tried to insist on tighter risk controls. 

“And one day Angelo came up and we had our regular quarterly meeting and he said to me, Janet, I have to tell you, it’s been terrific to be supervised by you. You guys are really on top of your game and we really appreciate all of the valuable advice that you’ve given us. But, you know, we’ve realized that we don’t actually need to be a bank holding company. We realized it would be okay to be a thrift holding company. And so we’re changing our charter. And indeed they did so and decided it would be nice to be supervised by the Office of Thrift Supervision that is no more. So that kind of gave me a sense of what was happening. …
“I think what I failed to appreciate was, what if housing prices began to fall? I just really did not understand how vulnerable the financial system and particularly the shadow banking system was, how leveraged it was, how much maturity transformation there was, how much of this risk that we thought was being disbursed through the economy was really remaining on the books of these institutions. So I wrongly thought if housing prices fell a medium amount it would do damage to the economy and the outlook, but it would not destroy the core of the financial system. And I think that was a failure to appreciate the weaknesses.”

Fiscal Austerity as a Stimulus Program. One of President Clinton’s first steps after taking office in 1993 was a deficit reduction act. It involved raising taxes, and literally every Republican in Congress voted against it. But the US economy did well in the rest of the 1990s, and Yellen gives the deficit reduction plan a portion of the credit. From the transcript:

“One is that Clinton’s first steps, first economic policies, put in place a plan that would lower budget deficits. There had been great concern about out-of-control budget deficits, and it was reflected in high long-term interest rates. But the Clinton administration was, rightly I think, very concerned that tightening fiscal policy when we had an economy that was just recovering. Unemployment remained high, and they were worried about the negative impacts of fiscal tightening on the economy. 

“So let me just say at the outset: in general, the view that tight fiscal policy tends to depress employment and economic activity–I believe to be correct, and I’m not questioning that. But the Clinton policy was one that phased in very slowly over time a tightening of fiscal policy, so it wasn’t a tightening in day one or year one that was dramatic. I believe it was a very credible multiyear commitment, which served to quickly bring down long-term interest rates dramatically. So in point of fact, I think for at least some several years this was a fiscal tightening that actually was expansionary because the decline in spending or increase in taxes didn’t occur immediately and long-term rates came down very quickly. The economy continued to recover. So the notion that a very well-designed fiscal tightening policy need not have adverse impact on economic activity was one lesson we took away.”

Time to Rein in Government Borrowing: The Case for a Spending-First Approach

In the aftermath of the Great Recession, government debt is higher in most of the world’s high-income economies, including the United States. This year, the world economy is producing at close to its potential GDP. If there is ever going to be a time for thinking about long-term fiscal issues, this is it. The March 2018 issue of Finance & Development, published by the IMF, includes a symposium called “Balancing Act: Managing the Public Purse.” Here, I’ll quote from the discussion of debt in advanced economies by Alberto Alesina, Carlo A. Favero, and Francesco Giavazzi, “Climbing Out of Debt” (pp. 6-11). They write:

“Almost a decade after the onset of the global financial crisis, national debt in advanced economies remains near its highest level since World War II, averaging 104 percent of GDP. In Japan, the ratio is 240 percent and in Greece almost 185 percent. In Italy and Portugal, debt exceeds 120 percent of GDP. Without measures either to cut spending or increase revenue, the situation will only get worse. As central banks abandon the extraordinary monetary measures they adopted to battle the crisis, interest rates will inevitably rise from historic lows. That means interest payments will eat up a growing share of government spending, leaving less money to deliver public services or take steps to ensure long-term economic growth, such as investing in infrastructure and education. …

“Which policies are more likely to result in a lower ratio of debt to GDP? A number of papers have addressed this question since at least the early 1990s (Alesina and Ardagna 2013 summarizes the early literature). We decided to take another look at the issue using new methodology and a much richer set of data covering 16 of the 35 countries belonging to the Organisation for Economic Co-operation and Development between 1981 and 2014, including Canada, Japan, the United States, and most of Europe, excluding postcommunist nations. Our analysis focused on some 3,500 policy changes geared toward reducing deficits either by raising taxes or by cutting spending. …

“More specifically, we found that on average, expenditure-based plans were associated with very small downturns in growth: a plan worth 1 percent of GDP implied a loss of about half a percentage point relative to the average GDP growth of the country. The loss in output typically lasted less than two years. Moreover, if an expenditure-based plan was launched during a period of economic growth, the output costs were zero, on average. This means that some expenditure-based fiscal plans were associated with small downturns, while others were associated with almost immediate surges in growth, a phenomenon sometimes known as “expansionary austerity” that was first identified by Giavazzi and Pagano (1990). By contrast, tax-based fiscal corrections were associated with large and long-lasting recessions. A tax-based plan amounting to 1 percent of GDP was followed, on average, by a 2 percent decline in GDP relative to its pre-austerity path. This large recessionary effect tends to last several years.

“Our second finding is that reductions in entitlement programs and other government transfers were less harmful to growth than tax increases. Such cuts were accompanied by mild and short-lived economic downturns, probably because taxpayers perceived them as permanent and so expected that the taxes needed to fund the programs would be lower in the future. Thus, the data suggest that reforms of social security rules aimed at reducing government spending are more like normal spending cuts than tax increases. Because social security reforms tend to be persistent, especially in countries with aging populations, they entail some of the smallest costs in terms of lost output.”

In more detailed analysis, the authors consider various explanations and compare them to the data. For example, the advantages of cutting debt/GDP ratios with spending, rather than taxes, don’t seem to be associated with corresponding changes in monetary policy, exchange rates, or simultaneous packages of other economic reforms. The big difference seems to be that tackling the debt/GDP ratio with spending-based tools is associated with a rise in private investment, while tackling it with tax increases is not.

I’m not someone who agonizes over finding short-term ways to cut budget deficits. But it does seem to me that the US economy has evolved in an uncomfortable direction of making future promises without providing financing for them, including not just government programs like Social Security and Medicare, but a number of private pensions as well. I’d like to see discussion of reforms that would either explicitly scale back on these future promises, or identify a stream of funds to finance them, or some combination of both.

Two Central Bankers Walk Into a Restaurant, and the Pawnbroker for All Seasons

Here’s the set-up line for the story: Two central bankers walk into a London restaurant …

Mervyn King tells the tale at the start of his lecture “Lessons from the Global Financial Crisis,” in a speech given upon receipt of the Paul A. Volcker Lifetime Achievement Award from the National Association for Business Economics (February 27, 2018). I’m quoting from the prepared text for the talk. Of course, King was Governor of the Bank of England from 2003-2013, while Paul Volcker was Chair of the Federal Reserve from 1979-1987. This is how King tells the story of their first meeting back in 1991:

“I first met Paul in 1991 just after I joined the Bank of England. He came to London and asked Marjorie Deane of the Economist magazine to arrange a dinner with the new central banker. The story of that dinner has never been told in public before. We dined in what was then Princess Diana’s favourite restaurant, and at the end of the evening Paul attempted to pay the bill. Paul carried neither cash nor credit cards, but only a cheque book, and a dollar cheque at that. Unfortunately, the restaurant would not accept it. So I paid with a sterling credit card and Paul gave me a US dollar cheque. This suited us both because I had just opened an account at the Bank of England and been asked, rather sniffily, how I intended to open my account. What better response than to say that I would open the account by depositing a cheque from the recently retired Chairman of the Federal Reserve. I basked in this reflected glory for two weeks. Then I received a letter from the Chief Cashier’s office saying that most unfortunately the cheque had bounced. Consternation! It turned out that Paul had forgotten to date the cheque. What to do? Do you really write to a former Chairman pointing out that his cheque had bounced? Do you simply accept the financial loss? After some thought, I hit upon the perfect solution. I dated the cheque myself and returned it to the Bank of England. They accepted it without question. I am hopeful that the statute of limitations is well past. But the episode taught me a lifelong lesson: to be effective, regulation should focus on substance not form.”

Maybe you need to be an economist to see the humor in the story. But consider:  One central bank chair trying to pay with US dollars in a London restaurant, and the other adding dates to someone else’s check. It’s a story that should vanquish any doubts about whether central bankers are fallible human beings.

The rest of the lecture has some good nuggets, too. King points out that 10 years ago at this time, in 2008, we were about two weeks before the failure of Bear Stearns. He says:

“During the crisis we were vividly reminded of three old lessons. First, our system of banking is fragile, and reflects what I call financial alchemy. Banks that appear to be well-capitalised one day are not the next. Solvency is in the eye of the lender. Second, banks that borrow short and lend long are, as we saw in 2008, subject to runs that threaten the payments system and hence the wider economy. Third, regulatory reform is well-intentioned but has fallen into the trap of excessive detail. The complexity of the current regulation of financial services is damaging and unsustainable.”

One of King’s proposals is that central banks should become what he calls the Pawnbroker for All Seasons. The idea is that rather than putting central banks in a position where, in an emergency, they face a choice between uncontrolled lending and letting the financial system melt down, we need a plan in advance. King suggests that banks work with a central bank to think about the value of the bank’s collateral–say, the mortgages and other loans held by the bank. The bank and the central bank together would agree that if a bank finds itself caught in a financial crisis, it would give this collateral to the central bank in exchange for a loan of a predetermined amount. King says:

“My proposal replaces the traditional lender of last resort function, and the provision of deposit insurance, by making the central bank a Pawnbroker for all Seasons. … In normal times, each bank would decide how much of its assets it would preposition at the central bank allowing plenty of time for the collateral to be assessed. … Adding up over all assets that had been pre-positioned, it would then be clear how much central bank money the bank would be entitled to borrow – with no questions asked – at any instant. The pawnbroker rule would be that the credit line of the bank would have to be sufficient to cover all liabilities that could run within a pre-determined period of, say, one month or possibly longer. … The scheme is not a pipedream. US banks have pre-positioned collateral with the Federal Reserve sufficient to produce a total lendable value of just under one trillion dollars. Together with deposits at the Federal Reserve, the cash credit line of banks is around one-quarter of total bank deposits.”
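As a rough sketch of the arithmetic behind the pawnbroker rule (my own illustration, not from King’s lecture; the haircuts and balance-sheet figures below are invented purely for the example), the pre-agreed credit line is the haircut value of the pre-positioned collateral plus the bank’s deposits at the central bank, and it must cover the liabilities that could run within the pre-determined period.

    def lendable_value(prepositioned_assets, haircuts):
        # Collateral value after applying the haircuts agreed with the central bank in normal times.
        return sum(value * (1 - haircuts[asset]) for asset, value in prepositioned_assets.items())

    def passes_pawnbroker_rule(prepositioned_assets, haircuts, reserves, runnable_liabilities):
        # The cash credit line (lendable collateral plus central bank deposits) must be at least
        # as large as the liabilities that could run within, say, one month.
        credit_line = lendable_value(prepositioned_assets, haircuts) + reserves
        return credit_line >= runnable_liabilities

    # Hypothetical bank balance sheet, in billions:
    assets = {"mortgages": 60, "corporate_loans": 30, "government_bonds": 20}
    haircuts = {"mortgages": 0.30, "corporate_loans": 0.40, "government_bonds": 0.05}
    reserves = 10   # deposits already held at the central bank
    runnable = 75   # deposits and short-term funding that could run within a month

    # Credit line = 60*0.70 + 30*0.60 + 20*0.95 + 10 = 89, which covers the 75 of runnable liabilities.
    print(passes_pawnbroker_rule(assets, haircuts, reserves, runnable))   # True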

King’s point is that a bunch of complex financial regulations–where their very complexity means they can be gamed–isn’t a real answer. Solemnly swearing that the central bank won’t lend in a crisis, and will just let the financial system melt down, isn’t an answer, either. But thinking in advance about how much a central bank would lend to a bank, based on the collateral that bank has available, could actually help cushion the next (and there will be a next) financial crisis.
