The Need for Generalists

One can make a reasonable argument that the concept of an economy and the study of economics begin with the idea of specialization, in the sense that those who function within an economy specialize in one kind of production, but then trade with others to consume a broader array of goods. Along these lines, the first chapter of Adam Smith’s 1776 Wealth of Nations is titled, “Of the Division of Labour.” But in the push for specialization, there can be a danger of neglecting the virtues of generalists. Even when it comes to assembly lines, specialization of tasks can be pushed so far that it becomes unproductive (as Smith recognized). In a broad array of jobs, including managers, journalists, legislators and politicians, and even editors of academic journals, there is a need for generalists who can synthesize and put in context the work of a range of specialists.

The need for generalists is not at all new, of course. Here’s a commentary from 80 years ago on “The Need for Generalists” from A.G. Black, who was Chief of the Bureau of Agricultural Economics at the US Department of Agriculture, published in the Journal of Farm Economics (November 1936, 18:4, pp. 657-661, and available via JSTOR). Black is writing in particular about specialization within what is already a specialty of agricultural economics, but his point applies more broadly.

“The past generation, like several generations before it, has indeed been one of greater and greater specialization. This has resulted in great advances in agricultural economics. Our specialists have developed new technics of analysis, they have discovered new relationships, they have been able to give close attention to important facts and factors that might otherwise have escaped attention and by such escape might have led to wrong conclusions. Without this specialized attention our discipline in agricultural economics could not have attained the position it has reached today.

“This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems. Our specialists are splendidly equipped to solve a problem concerning the price of wheat, or of corn, or of cotton, or a technical question in cooperative marketing, farm management, taxation or international trade. But the more important problems almost never present themselves in those narrow terms; rather they may involve elements of all the above and perhaps several more. …

“Increased specialization itself tends to raise barriers between fields. It tends to create a system of professional jealousies that is not conducive to the development of generalists. The specialist who burrows deeper and deeper into a narrower and narrower hole becomes convinced that no one who has been sapping in a neighboring tunnel can possibly know as much about the peculiar twists and turns of his burrow as he, himself. And he is right. He knows that he can readily confound and confuse a neighboring specialist if the latter strays from his own confines, and what is more, he will. One of the greatest joys of the specialist is to make an associate appear infantile and ridiculous on the occasions when the latter appears to be getting out of his field.

“The specialist stakes out his claim and guards it as jealously as ever did a prospector of the ’40s, and woe to the unwary trespasser. As the specialist knows how he looks upon intruders, he knows how he would be treated if he had the temerity to wander outside his main field. Consequently he is usually quite willing to leave outside fields to other specialists.

“The development of the whole field takes on a honeycomb appearance with series upon series of well-marked and almost wholly isolated cells. These cell walls need to be broken down. There is need of men who can correlate and coordinate the specialized knowledge in the separate cells–men who can bring to bear on the larger problems the findings of the different specialists and who have sufficient perspective and sense of proportion to apply just the correct shade of emphasis to the contribution of each particular specialist. …

“Our whole organization has developed on the assumption that the generalizing function is not important, that it does not require quite the ability and training of the specialist, that it can be satisfactorily done by almost anyone and that certainly there is nothing about it that demands the attention of really first class men. If generalizing be done at all, it can safely be committed to the specialist who can play with it as relaxation from the really serious and important demands of his specialty, or to the administrator who can give it all of the attention it requires between telephone calls and committee meetings.

“All of this, I suppose, leads to the conclusion that in agricultural economics we need another specialist, that is a “specialist” who is a “generalist.” We need to make a place for the trained economist of highest ability who will be free from administrative demands as well as free from the tyranny of specialization, who will have the job of keeping abreast of the results of the various specialists and who can spend a good deal of time in analyzing findings having a bearing upon the ultimate solution of these same problems. … In other words, students need training in analysis and in synthesis. Today the ability to synthesize facts, research results and partial solutions into a well rounded whole, is too infrequently available.”

One of the many political clichés that make me die a little bit inside is the claim that all we need to address a certain problem (health care, poverty, transportation, the Middle East, whatever) is to bring together a group of experts who will provide the common-sense solution that we have all been ignoring. Bringing together a group of specialist experts can provide a great deal of information and insight, but such a group is often not especially good at melding its specific insights into a general policy.

Homage: I ran across a mention of this article at Carola Binder’s always-useful “Quantitative Ease” website last summer, and left myself a note to track it down. But given my time constraints and organizational skills, it took a while for me to do so.

"Half the Money I Spend on Advertising is Wasted, and the Trouble is I Don’t Know Which Half"

There’s an old rueful line from firms that advertise: “We know that half of all we spend on advertising is wasted, but we don’t know which half.” It’s not clear who originally coined the phrase. But we do know that the effects of advertising have changed dramatically in a digital age. Half of all advertising spending may still be wasted, but now it’s for a very different reason.

I was raised with the folklore that John Wanamaker, founder of the eponymous department stores, was the originator of the phrase at hand. But the attribution gets pretty shaky, pretty fast. David Ogilvy, the head of the famous Ogilvy & Mather advertising agency, wrote in his 1963 book Confessions of an Advertising Man (pp. 86-87): “As Lord Leverhulme (and John Wanamaker after him) complained, ‘Half the money I spend on advertising is wasted, and the trouble is I don’t know which half.’”

So how about William Lever, Lord Leverhulme, who built a fortune in the soap business (with Sunlight Soap, and eventually Unilever)? Career advertising executive Jeremy Bullmore has looked into it, and wrote in the 2013 annual report of the British advertising and public relations firm WPP:

“There are at least a dozen minor variations of this sentiment that are confidently quoted and variously attributed but they all have in common the words ‘advertising’, ‘half’ and ‘waste’. Google the line and you’ll get about nine million results. … As it happens, there’s little hard evidence that either William Lever or John Wanamaker (or indeed Ford or Penney) ever made such a remark. Certainly, neither the Wanamaker nor the Unilever archives contains any such reference. Yet for a hundred years or so, with no accredited source and no data to support it, this piece of folklore has survived and prospered.”

Bullmore makes some compelling points. One is that even 100 years ago, it was widely believed that advertising could be usefully shaped and targeted. He writes:

“Retail advertising in the days of John Wanamaker was mostly placed in local newspapers and was mainly used to shift specific stock. An ad for neckties read, ‘They’re not as good as they look, but they’re good enough. 25 cents.’ The neckties sold out by closing time and so weren’t advertised again. Waste, zero. Experiment was commonplace. Every element of an advertisement – size, headline, position in paper – was tested for efficacy and discarded if found wanting. Waste, if not eliminated, was ruthlessly hounded.

“Claude Hopkins published Scientific Advertising in 1923. In it, he writes, ‘Advertising, once a gamble, has … become … one of the safest of business ventures. Certainly no other enterprise with comparable possibilities need involve so little risk.’ Even allowing for adman’s exuberance, it strongly suggests that, within Wanamaker’s lifetime, there were very few advertisers who would have agreed that half their advertising money was wasted.”

Further, Bullmore points out that people are more comfortable buying certain products because “everyone knows” about them, and “everyone knows” because even those who don’t purchase the product have seen the ads.

“A common attribute of all successful, mass-market, repeat-purchase consumer brands is a kind of fame. And the kind of fame they enjoy is not targeted, circumscribed fame but a curiously indiscriminate fame that transcends its particular market sector. Coca-Cola is not just a famous soft drink. Dove is not just a famous soap. Ford is not just a famous car manufacturer. In all these cases, their fame depends on their being known to just about everyone in the world: even if they neither buy nor use. Show-biz publicists have understood this for ever. When The Beatles invaded America in 1964, their manager Brian Epstein didn’t arrange a series of targeted interviews in fan magazines; he brokered three appearances on the Ed Sullivan Show with an audience for each estimated at 70 million. Far fewer than half of that 70 million will have subsequently bought a Beatles record or a Beatles ticket; but it seems unlikely that Epstein thought this extended exposure in any way wasted.”

And of course, if large amounts of advertising are literally wasted, it seems as if we should be able to observe a substantial number of companies who cut their advertising budget in half and suffered no measurable decline in sales. (In fact, if half of advertising is always wasted, shouldn’t the firm then keep cutting the advertising budget by half, and half again, and half again, and so on down to zero? Seems as if there must be a flaw in this logic!)

Of course, one of the major changes in advertising during the last decade or two is that print advertising has plummeted, while digital advertising has soared. More generally, digital technology has made it much more straightforward to create systematic variations in the quantity and qualities of advertising, and to track the results. Bullmore writes: “And given modern measurements and the growth of digital channels, it’s easier than ever for advertising to be held accountable; to be seen to be more investment than cost.”

But Bullmore is probably too optimistic here about how easy it is to hold advertising accountable, for a couple of reasons.

One problem is that the idea of targeting specific audiences for digital advertising is a lot more complicated in practice than it may seem at first. Judy Unger Franks of the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University explained the issues in a short essay late last summer. She wrote:

“Programmatic Advertising enables marketers to make advertising investments to select individuals in a media audience as opposed to having to buy the entire audience. Advertisers use a wealth of Big Data to learn about each audience member to then determine whether that audience member should be served with an advertisement and at what price. This all happens in near real-time and advertisers can therefore make near real-time adjustments to their approach to optimize the return-on-investment of its advertising expenditures.

“In theory, Programmatic Advertising should solve the issue of waste. However, in our attempt to eliminate waste from the advertising value chain, we may have made things worse. We have unleashed a dark side to Programmatic Advertising that comes at a significant cost. Now, we know exactly which half of the money spent on advertising is wasted: it’s the half that marketers must now spend on third parties who have inserted themselves into the Programmatic Advertising ecosystem just to keep our investments clean. … 

“How bad is it? How much money are advertisers spending on this murky supply chain? The IAB (Interactive Advertising Bureau) answered this for us when they released their White Paper, ‘The Programmatic Supply Chain: Deconstructing the Anatomy of a Programmatic CPM’ in March of 2016. The IAB identified ten different value layers in the Programmatic ecosystem. I believe they are being overly generous by calling each a ‘value’ layer. When you need an ad blocking service to avoid buying questionable content and a separate verification service to make sure that the ad was viewable by a human, how is this valuable? When you add up all the costs associated with the ten different layers, they account for 55% of the CPM (cost-per-thousand) that an advertiser pays for a programmatic ad. This means that for every dollar an advertiser spends in Programmatic Advertising over half (55%) of that dollar never reaches the publisher. It falls into the hands of all the third parties that are required to feed the beast that is the overly complex Programmatic Advertising ecosystem. We now know which half of an advertising investment is wasted. It’s wasted on infrastructure to prop up all those opportunities to buy individual audiences across the entire Programmatic Advertising supply chain.”
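Franks’s arithmetic is simple enough to sketch. The 55% intermediary share is the IAB estimate quoted above; the $10 CPM is my own hypothetical figure for illustration:

```python
# Sketch of the programmatic supply-chain arithmetic. The 55% intermediary
# share is from the IAB white paper as quoted; the CPM is an assumed example.
cpm_paid = 10.00            # hypothetical price an advertiser pays per 1,000 impressions
intermediary_share = 0.55   # share absorbed by the ten "value" layers (IAB estimate)

to_intermediaries = cpm_paid * intermediary_share
to_publisher = cpm_paid - to_intermediaries

print(f"Of a ${cpm_paid:.2f} CPM, ${to_intermediaries:.2f} goes to intermediaries "
      f"and only ${to_publisher:.2f} reaches the publisher.")
```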

In other words, by the time an advertiser has spent the money to do the targeting, and to make sure that the mechanisms to do the targeting work, and to follow up on the targeting, the costs can be so high that the reason for targeting in the first place is in danger of being lost.

The other interesting problem is that academic studies that have tried to measure the returns to targeted online advertising have run into severe problems. For a discussion, see “The Unfavorable Economics of Measuring the Returns to Advertising,” by Randall A. Lewis and Justin M. Rao (Quarterly Journal of Economics, 130:4, November 2015, pp. 1941-1973, available here). They describe the old “half of what I spend in advertising is wasted” slogan in these terms (citations omitted):

“In the United States, firms annually spend about $500 per person on advertising. To break even, this expenditure implies that the universe of advertisers needs to causally affect $1,500–2,200 in annual sales per person, or about $3,500–5,500 per household. A question that has remained open over the years is whether advertising affects purchasing behavior to the degree implied by prevailing advertising prices and firms’ gross margins …”
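Lewis and Rao’s break-even claim can be reproduced with back-of-the-envelope arithmetic. The $500-per-person spend is from the paper; the gross margins below are my own assumed values, chosen to show how margins in roughly the 23–33% range map into the $1,500–2,200 figure they cite:

```python
# Back-of-the-envelope version of the break-even claim: to cover the ad
# spend, (incremental sales) * (gross margin) must equal the spend, so
# required sales = spend / margin. Margins here are illustrative assumptions.
ad_spend_per_person = 500.0

for gross_margin in (0.33, 0.23):  # assumed plausible gross margins
    required_sales = ad_spend_per_person / gross_margin
    print(f"margin {gross_margin:.0%}: advertising must causally drive "
          f"about ${required_sales:,.0f} in annual sales per person")
```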

The authors look at 25 studies of digital advertising. They find that the variations in what people buy and how much they spend are very large. Thus, it’s theoretically possible that if advertising causes even a small number of people to “tip” from spending only a little on a product to being big spenders on it, the advertising can pay off for the advertiser. But in a statistical sense, given that people vary so much in their spending on products and change so much anyway, it’s really hard to disentangle the effects of advertising from the changes in buying patterns that happen anyway. As the authors write: “[W]e are making the admittedly strong claim that most advertisers do not, and indeed some cannot, know the effectiveness of their advertising spend …”
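To see why the variance matters so much, consider the standard sample-size formula for comparing means across two groups. The spending figures here are illustrative assumptions of my own, not numbers from the paper, but they convey the scale of the problem: when the noise in individual spending dwarfs the plausible advertising lift, a credible experiment needs enormous samples.

```python
# Standard two-sample sample-size calculation, with assumed numbers:
# individual spending has a large standard deviation relative to the
# hypothesized advertising lift, so detecting the lift takes huge samples.
sigma = 100.0        # assumed std. dev. of individual spending ($)
delta = 0.35         # assumed true advertising lift per person ($)
z_alpha = 1.96       # 5% two-sided significance
z_beta = 0.84        # 80% power

# Required subjects per arm for a difference-in-means test:
n_per_arm = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"subjects needed per arm: {n_per_arm:,.0f}")
```

With these assumptions the answer is over a million subjects per arm, which is exactly the sense in which the economics of measurement are “unfavorable.”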

Thus, the economics of spending on advertising remain largely unresolved, even in the digital age. Those interested in more on the economics of advertising might want to check my post on “The Case For and Against Advertising” (November 15, 2012).

Early Examples of Randomization in the Social Sciences

Randomization is one of the most persuasive techniques for determining cause and effect. Half of a certain group gets a treatment; half doesn’t. Compare. If the groups were truly chosen at random, if the treatment was truly the only difference between them, and if the differences in outcomes are meaningful and the samples large enough for drawing statistically meaningful conclusions, then the differences can tell you something about causes.
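The logic can be sketched in a few lines of simulation. All the numbers here (baseline mean, noise, a true effect of 2.0) are made up for illustration; the point is that because assignment is random, the difference in group means recovers the true effect:

```python
import random
import statistics

random.seed(42)

# Made-up setup: each subject has a noisy baseline outcome, and the
# treatment adds a true effect of 2.0 on top of it.
TRUE_EFFECT = 2.0
subjects = [random.gauss(10.0, 3.0) for _ in range(10_000)]

# Random assignment: shuffle, then split into two equal groups.
random.shuffle(subjects)
half = len(subjects) // 2
treated = [y + TRUE_EFFECT for y in subjects[:half]]   # treatment group
control = subjects[half:]                              # control group

# The simple difference in means estimates the causal effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```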

Economists have surged into randomized experimental work in the last few decades. From the 1970s, up into the 1990s, such work was often focused on social policies, with experiments on different kinds of health insurance, job training and job search, changes in welfare rules, early childhood education, and others. More recently, such work has become very prominent in the development economics literature, as well as on a variety of focused economics topics like how incentive pay affects work or how charitable contributions could be increased. Running experiments is now part of the common tool-kit (for a small taste, see here, here, here, and the three-paper symposium in the Fall 2017 issue of the Journal of Economic Perspectives on “From Experiments to Economic Policy”).

Thus, economists and other social scientists may find it useful to keep some historical examples of randomization near at hand. Julian C. Jamison provides a trove of such examples in “The Entry of Randomized Assignment into the Social Sciences”  (World Bank Policy Research Working Paper 8062, May 2017).

Jamison has a run-through of the classic examples of randomization over the centuries, which were often in a medical context. For example, he quotes the correspondence between poet/writers Petrarch and Boccaccio in 1364, in which Petrarch wrote:

“I solemnly affirm and believe, if a hundred or a thousand men of the same age, same temperament and habits, together with the same surroundings, were attacked at the same time by the same disease, that if one half followed the prescriptions of the doctors of the variety of those practicing at the present day, and that the other half took no medicine but relied on Nature’s instincts, I have no doubt as to which half would escape.” 

Or consider Jan Baptist van Helmont, a doctor writing in the first half of the 1600s, who proposed that the two “cures” of the day – bloodletting vs. induced vomiting/defecation – be tested in this way:

“Let us take out of the Hospitals, out of the Camps, or from elsewhere, 200 or 500 poor People, that have Fevers, Pleurisies, etc. Let us divide them in halfes, let us cast lots, that one half of them may fall to my share, and the other to yours; I will cure them without bloodletting … we shall see how many Funerals both of us shall have.”

There are cases from the 1600s, 1700s, and 1800s of randomization as a way of testing the effectiveness of treatments for scurvy or smallpox, or of salt-based homeopathic treatments. Perhaps one of the best-known experiments was done by Louis Pasteur in 1881 to test his vaccine for sheep anthrax. Jamison writes:

“He was attempting to publicly prove that he had developed an animal anthrax vaccine (which may not have been his to begin with), so he asked for 60 sheep and split them into three groups: 10 would be left entirely alone; 25 would be given his vaccine and then exposed to a deadly strain of the disease; and 25 would be untreated but also exposed to the virus …. [A]ll of the exposed but untreated sheep died, while all of the vaccinated sheep survived healthily.”

There are LOTS of examples. But in Jamison’s telling, the earliest example of randomization in an experiment within a subject conventionally thought of as economics was actually done by two non-economists working on game theory, psychologist Richard Atkinson and polymath Patrick Suppes, in work published in 1957:

“Atkinson and Suppes (1957), also not economists by training, analyzed different learning models in two-person zero-sum games, and they explicitly ‘randomly assigned’ pairs of subjects into one of three different treatment groups. This is the earliest instance of random assignment in experimental economics, for purposes of comparing treatments, that has been found to date.”

As to broader social experiments about the effects of policy interventions, the first one goes back to the 1940s:

“The first clearly and individually randomized social experiment was the Cambridge-Somerville youth study. This was devised by Richard Clarke Cabot, a physician and pioneer in advancing the field of social work. Running from 1942-45, the study randomized approximately 500 young boys who were at risk for delinquency into either a control group or a treatment group, the latter receiving counseling, medical treatment, and tutoring. Results (Powers and Witmer 1951) were highly disappointing, with no differences reported.” 

Back in high school, we had to design and carry out our own experiment in a psychology class. I wrote up the same message (a request for some saccharine and meaningless information) on two sets of postcards. One of the sets of postcards was typed; the other was handwritten. I chose the first 60 households in the local phone directory, and sent the two kinds of postcards out at random. My working hypothesis was that the typewritten notes would get a higher response (perhaps because they would look more “professional”), but actually the handwritten notes got a much higher response (probably because they reeked of high school student). Even at the time, it felt like a silly little experiment to me. But the result felt powerful, nonetheless.

The Modern Shape-Up Labor Market

I’m taking some family vacation the next 10 days or so. The lake country of northern Minnesota calls. My wife says that I get a distinctively blissful expression when I’m sitting in the back of a canoe with a paddle in my hand. While I’m gone, I’ve prescheduled a string of posts that look at various things I’ve been reading or have run across in the last few months that are at least loosely connected to my usual themes of economics and academia.

It’s not unusual to hear predictions that in the future, we will all have opportunities to run our own companies, or that jobs will become a series of freelance contracts. Here’s a representative comment from business school professor Arun Sundararajan (“The Future of Work,” Finance & Development, June 2017, pp. 7-11):

“To avoid further increases in the income and wealth inequality that stem from the sustained concentration of capital over the past 50 years, we must aim for a future of crowd-based capitalism in which most of the workforce shifts from a full-time job as a talent or labor provider to running a business of one – in effect a microentrepreneur who owns a tiny slice of society’s capital.”

To me, this description is reminiscent of what used to be called the “shape-up” system of hiring, described by journalist Malcolm Johnson in his Pulitzer Prize-winning articles about crime on the docks of New York City in the late 1940s (Crime on the Labor Front, quotation from pp. 133-35), which is perhaps best remembered today for how it was depicted in the 1954 movie “On the Waterfront.” Johnson described the process for a longshoreman of seeking and getting a job in this way:
“The scene is any pier along New York’s waterfront. At a designated hour, the longshoremen gather in a semicircle at the entrance to the pier. They are the men who load and unload the ships. They are looking for jobs and as they stand there in a semicircle their eyes are fixed on one man. He is the hiring stevedore and he stands alone, surveying the waiting men. At this crucial moment he possesses the crucial power of economic life or death over them and the men know it. Their faces betray that knowledge in tense anxiety, eagerness, and fear. They know that the hiring boss, a union man like themselves, can accept them or reject them at will. He can hire them or ignore them, for any reason or for no reason at all. Now the hiring boss moves among them, choosing the man he wants, passing over others. He nods or points to the favored ones or calls out their names, indicating that they are hired. For those accepted, relief and joy. The pinched faces of the others reflect bleak disappointment, despair. …

“Under the shape-up, the longshoreman never knows from one day to the next whether he has a job or not. Under the shape-up, he may be hired today and rejected tomorrow, or hired in the morning and turned away in the afternoon. There is no security, no dignity, and no justice in the shape-up. … The shape-up fosters fear. Fear of not working. Fear of incurring the displeasure of the hiring boss.”

You can call it “crowd-based capitalism,” but to a lot of people, the idea of “running a business of one” does not sound attractive. Many people don’t want to apply for a new job every day, or every week, or every month. They don’t want to be a “microentrepreneur who owns a tiny slice of society’s capital.” They don’t want to be treated as interchangeable cogs, subject to the discretionary power of a modern hiring boss. All workers know that others have the power of economic life and death over them, but many prefer not to have that fact rubbed in their faces every day.


It seems to me that a lot of the concern about the modern labor market isn’t over whether wage rates are going up a percentage point or two faster each year. It’s about a sense that careers which build skills are harder to find, and that the labor market for many people feels like a modern version of the shape-up. 

The Chicken Paper Conundrum

Harald Uhlig delivered a talk on “Money and Banking: Some DSGE Challenges” (video here, slides here) at the Nobel Symposium on Money and Banking recently held in Stockholm. He introduces the “Chicken Paper Conundrum,” which he attributes to Ed Prescott.

I’ve definitely read academic papers, as well as listened to policy discussions, which follow this pattern.

Homage: I ran across this in the middle of two long blog posts by John Cochrane at his Grumpy Economist blog (here and here), which summarize and give links to many papers given at this conference by leading macroeconomists. Many have links to video, slides, and sometimes full papers. If you are interested in topics on the cutting edge of macroeconomics, it’s well worth your time.

Early Childhood Education Fails Another Randomized Trial

Public programs for pre-K education have a worthy goal: reducing the gaps in educational achievement that manifest themselves in early grades. I find myself rooting for such programs to succeed. But there are now two major randomized control trial studies looking at the results of publicly provided pre-K programs, and neither one finds lasting success. Mark W. Lipsey, Dale C. Farran, and Kelley Durkin report the results of the most recent study in “Effects of the Tennessee Prekindergarten Program on children’s achievement and behavior through third grade” (Early Childhood Research Quarterly, 2018, online version April 21, 2018). 

The Tennessee Voluntary Pre-K Program offers support for families with low income levels. In 2008-2009, the program only had sufficient funding to cover 42% of applicants. Admission to the program was thus decided by a lottery. This selection procedure makes the eyes of academic researchers light up, because it means that there was random selection of those who were in the program (the “treatment” group) and those who were not (the “control” group). As these students continue into Tennessee public schools, there is follow-up information on how the two groups performed. The result was that students in the pre-K program had a short-term boost in performance, which quickly faded. The abstract of the study says:

“Low-income children (N = 2990) applying to oversubscribed programs were randomly assigned to receive offers of admission or remain on a waiting list. Data from pre-k through 3rd grade were obtained from state education records; additional data were collected for a subset of children with parental consent (N = 1076). At the end of pre-k, pre-k participants in the consented subsample performed better than control children on a battery of achievement tests, with non-native English speakers and children scoring lowest at baseline showing the greatest gains. During the kindergarten year and thereafter, the control children caught up with the pre-k participants on those tests and generally surpassed them. Similar results appeared on the 3rd grade state achievement tests for the full randomized sample – pre-k participants did not perform as well as the control children. Teacher ratings of classroom behavior did not favor either group overall, though some negative treatment effects were seen in 1st and 2nd grade.”

The previous major randomized control trial study of an early childhood education program was done on the federal Head Start program. I discussed it here. Lipsey, Farran, and Durkin summarize the finding in this way:

“This study [of Head Start] began in 2002 with a national sample of 5000 children who applied to 84 programs expected to have more applicants than spaces. Children were randomly selected for offers of admission with those not selected providing the control group. The 4-year-old children admitted to Head Start made greater gains across the pre-k year than nonparticipating children on measures of language and literacy, although not on math. However, by the end of kindergarten the control children had caught up on most achievement outcomes; subsequent positive effects for Head Start participants were found on only one achievement measure at the end of 1st grade and another at the end of 3rd grade. There were no statistically significant effects on social-emotional measures at the end of the pre-k or kindergarten years. A few positive effects appeared in parent reports at the end of the 1st and 3rd grade years, but teacher and child reports in those years showed either null or negative effects.”

Of course, the fact that the two major studies of publicly provided pre-K find near-zero results by third grade doesn’t prove that such programs can’t ever work. There are many studies of early childhood education programs. The results of the Tennessee and Head Start studies don’t rule out the possibility of benefits from narrower programs specifically targeted at children with certain needs. Also, some studies with longer-term follow-up found that although measures of educational performance didn’t move much, pre-K programs had longer-term effects on outcomes like high school graduation rates.

But the case for believing that publicly provided pre-K programs will boost long-term educational outcomes for the disadvantaged is not very strong. If positive results were clear-cut and of meaningful size, it seems as if they should have shown up in these major studies.

Some researchers in this area have suggested that interventions earlier than pre-K might be needed to close achievement gaps. For example, some evidence suggests that the black-white educational achievement gap is apparent by age 2. It may be, although the evidence on this point isn’t clear, that some of the funding now being spent on pre-K programs would have a bigger payoff if spent on home visits to the parents of very young children. Greater attention to the health status of pregnant women–including both personal health and exposure to environmental risks–might have a substantial payoff for the eventual educational performance of their children, too.

Conflict Minerals and Unexpected Tradeoffs

The cause seemed worthy, and the policy mild. Militia groups in the Democratic Republic of the Congo (DRC) were taxing and extorting revenue from those who mined minerals like tin, tungsten, and tantalum. Thus, Section 1502 of the Dodd-Frank Act of 2010 required companies to disclose the source of their purchases of such minerals. The hope was to reduce funding for the militias, and thus to benefit people in the area. Human rights advocacy groups supported the idea. The Good Intentions Paving Company was up and running.

But tradeoffs are no respecters of good intentions. Dominic Parker describes some research on the tradeoffs that occurred in “Conflict Minerals or Conflict Policies? New research on the unintended consequences of conflict-mineral regulation” (PERC Reports, Summer 2018, 37:1, pp. 36-40). Parker writes:

“First, Section 1502 initially caused a widespread, de facto boycott on minerals from the eastern DRC. Rather than engaging in costly due diligence to identify the sources of minerals–and risking being considered a supporter of rebel violence–some U.S. companies simply stopped buying minerals from the region. This de facto boycott had the intended effect of reducing funding to militias, but its unintended effect was to undercut families who depended on mining for income and access to health care. The decreases in mineral production rocked an artisanal mining sector that had supported an estimated 785,000 miners prior to Dodd-Frank, with spillovers from their economic activity thought to affect millions.

“Second, the legislation changed the relative value of controlling certain mining areas from the perspective of militias, who changed their tactics accordingly. Before the boycott, the militias could maximize revenue by taxing tin, tungsten, and tantalum at or near mining sites. They therefore had an interest in keeping mining areas productive and relatively safe for miners. After the legislation, the militias sought to make up for reduced revenue in other ways. According to the evidence, they started to loot civilians who were not necessarily involved in mining. They also started to fight for control over other commodities, including gold, which was in effect exempt from the regulation.”

One result of the economic losses in the area was a sharp rise in infant mortality rates: “The combined evidence suggests that Dodd-Frank increased the probability of infant deaths (that is, babies who died before reaching their first birthday) from 2010 to 2013 for children who lived part of or all of their first year in villages targeted by the legislation and mining ban. The most conservative estimate is that the legislation increased infant mortality from a baseline average of 60 deaths per 1,000 births to 146 deaths per 1,000 births over this period–a 143 percent increase.”
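The quoted percentage is consistent with the two mortality figures; as a quick arithmetic check:

```latex
\frac{146 - 60}{60} \approx 1.43 \quad \text{(roughly a 143 percent increase over the baseline of 60 deaths per 1,000 births)}
```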

The level of violent conflict affecting civilians actually seemed to rise, rather than fall: “At the end of 2010, after the passage of Dodd-Frank, looting in the territories targeted by the mining policies became more common and remained that way through much of 2011 and 2012, when our study period ended. … The incidence of violence against civilians also increased in the policy regions after the legislation …”

One economic insight here is the “stationary bandit” theory: when a bandit remains in one location, the bandit has an incentive to keep local workers and companies safe and productive.

The political insights are fuzzier. One can’t rule out that if the Dodd-Frank provisions had been better thought-out or better targeted, maybe the effects would have been better, too. Or maybe this is a case where long-run benefits of these provisions will outweigh short-run costs. But it’s also possible that an alternative strategy for bolstering the economy and human rights in the area might have worked better. And it’s quite clear that those who supported this particular conflict mineral policy did not predict or acknowledge that their good intentions could have these adverse consequences.

On Preferring A to B, While Also Preferring B to A

“In the last quarter-century, one of the most intriguing findings in behavioral science goes under the unlovely name of `preference reversals between joint and separate evaluations of options.’ The basic idea is that when people evaluate options A and B separately, they prefer A to B, but when they evaluate the two jointly, they prefer B to A.” Thus, Cass R. Sunstein begins his interesting and readable paper “On preferring A to B, while also preferring B to A” (Rationality and Society, 2018, first published July 11, 2018, subscription required).

Here is one such problem that has been studied: 

Dictionary A: 20,000 entries, torn cover but otherwise like new
Dictionary B: 10,000 entries, like new

“When the two options are assessed separately, people are willing to pay more for B; when they are assessed jointly, they are willing to pay more for A.” A common explanation is that when assessed separately, people have no basis for knowing if 10,000 or 20,000 words is a medium or large number for a dictionary, so they tend to focus on “new” or “torn cover.” But when comparing the two, people focus on the number of words.

Here’s another example, which (as Sunstein notes) involves “an admittedly outdated technology”:

CD Changer A: Can hold 5 CDs; Total Harmonic Distortion = 0.003%
CD Changer B: Can hold 20 CDs; Total Harmonic Distortion = 0.01%

“Subjects were informed that the smaller the Total Harmonic Distortion, the better the sound quality. In separate evaluation, they were willing to pay more for CD Changer B. In joint evaluation, they were willing to pay more for CD Changer A.” When looking at them separately, holding 20 CDs seems more important. When comparing them, the sound quality measured by Total Harmonic Distortion seems more important–although most people have no basis for knowing if this difference in sound quality would be meaningful to their ears or not.

And one more example:

Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
Baseball Card Package B: 10 valuable baseball cards

“In separate evaluation, inexperienced baseball card traders would pay more for Package B than for Package A. In joint evaluation, they would pay more for Package A (naturally enough). Intriguingly, experienced traders also show a reversal, though it is less stark.” When comparing them, choosing A is obvious. But without comparing them, there is something about getting all valuable cards, with no less valuable cards mixed in, which seems attractive.

And yet another example:

Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
Congressional Candidate B: Would create 1000 jobs; has no criminal convictions

“In separate evaluation, people rated Candidate B more favorably, but in joint evaluation they preferred candidate A.” When looking at them separately, the focus is on criminal history; when looking at them together, the focus is on jobs.

And one more:

Cause A: Program to improve detection of skin cancer in farm workers
Cause B: Fund to clean up and protect dolphin breeding locations

“When people see the two in isolation, they show a higher satisfaction rating from giving to Cause B, and they are willing to pay about the same. But when they evaluate them jointly, they show a much higher satisfaction rating from A, and they want to pay far more for it.” The explanation here seems to be a form of category-bound thinking, where just thinking about the dolphins generates a stronger visceral response, but when comparing directly, the human cause weighs more heavily.
One temptation in these and many other examples given by Sunstein is to conclude that joint evaluation must be more meaningful, because there is more context for comparison. But he argues strongly that this conclusion is unwarranted. He writes:

“In cases subject to preference reversals, the problem is that in separate evaluation, some characteristic of an option is difficult or impossible to evaluate–which means that it will not receive the attention that it may deserve. The risk, then, is that a characteristic that is important to welfare or actual experience will be ignored. In joint evaluation, the problem is that the characteristic that is evaluable may receive undue attention. The risk, then, is that a characteristic that is unimportant to welfare or to actual experience will be given excessive weight.”

In addition, life does not usually give us a random selection of choices and characteristics for our limited attention spans to consider. Instead, choices are defined and described by sellers of products, or by politicians selling policies. They choose how to frame issues. Sunstein writes: 

“Sellers can manipulate choosers in either separate evaluation or joint evaluation, and the design of the manipulation should now be clear. In separate evaluation, the challenge is to show choosers a characteristic that they can evaluate, if it is good (intact cover), and to show them a characteristic that they cannot evaluate, if it is not so good (0.01 Total Harmonic Distortion). In joint evaluation, the challenge is to allow an easy comparison along a dimension that seems self-evidently important, whether or not the difference along that dimension matters to experience or to people’s lives. … Sellers (and others) can choose to display a range of easily evaluable characteristics (appealing ones) and also display a range of others that are difficult or impossible to assess (not so appealing ones). It is well known that some product attributes are “shrouded,” in the sense that they are hidden from view, either because of selective attention on the part of choosers or because of deliberative action on the part of sellers.”

We often think of ourselves as having a set of personal preferences that are fundamental to who we are–part of our personality and self. But in many contexts, people (including me and you) can be influenced by the framing and presentation of choices. Whether the choice is between products or politicians, beware.

When Growth of US Education Attainment Went Flat

Human capital in general, and educational background in particular, is one of the key ingredients for economic growth. But the US had a period of about 20 years, for those born through most of the 1950s and 1960s, where educational attainment barely budged. Urvi Neelakantan and Jessie Romero provide an overview in “Slowing Growth in Educational Attainment,” an Economic Brief written for the Federal Reserve Bank of Richmond (July 2018, EB18-07).

Here’s a figure showing years of schooling for Americans going back to the 1870s. You can see the steady rise for both men and women up until the birth cohorts of the 1950s, when the educational gains for women slow down and those for men go flat.

In their essay, Neelakantan and Romero argue that this strengthens the case for improving K-12 education, and offer some thoughts. Here are a few related points I would emphasize.

1) Lots of factors affect productivity growth for an economy. But rapid US education growth starting back in the 19th century has been tied to later US economic growth. And it’s probably not just a coincidence that when those born around 1950 were entering the workforce in the 1970s, there was a sustained slump in productivity that lasted about 20 years–into the early 1990s.

2) One reason for the rise in inequality of incomes that started in the late 1970s is that the demand for high-skilled workers was growing faster than the supply. For example, the wage gap between college-educated workers and workers with no more than a high school education increased substantially. As Neelakantan and Romero write: “This slowdown in skill acquisition, combined with growing demand for high-skill workers, contributed to a large increase in the `college premium’–the higher wages and earnings of college graduates relative to workers with only high school degrees.” When educational attainment went flat, it also helped to create the conditions for US inequality to rise.

3) When a society has a period of a couple of decades where educational attainment doesn’t rise, there’s no way to go back later and “fix” it. The consequences like slower growth and higher inequality just march on through time. Similarly, the current generation of students–all of them, K-12, college and university–will be the next generation of US workers.

"The Seeds of the Declaration of Independence Are Yet Maturing"

John Quincy Adams, the sixth President of  the United States (and son of the second president John Adams and his wife Abigail) started a diary when he was 12 in 1779, and added to it continuously for almost 70 years. Some days the long entries were more than 5,000 words. There’s one stretch of 25 years where he didn’t miss a day. It sums up to more than 14,000 handwritten pages, and the magic of the Internet lets you see images of the 51 volumes here.

In thinking about the July 4 holiday, here’s a comment from Adams’s diary on December 27, 1819, about one of the basic questions confronting Americans of that time–and our own time. Thomas Jefferson both wrote the Declaration of Independence, and also was a slave-owner. As Adams writes: “With the Declaration of Independence on their lips, and the merciless scourge of slavery in their hands, a more flagrant image of human inconsistency can scarcely be conceived …” Of course, modern Americans and their leaders can sometimes have the Declaration on their lips and injustice in their hands, too.

Facing that contradiction, Adams responded in a way that was both ominous and, at least in my reading, tinged with hope. He wrote: “The seeds of the Declaration of Independence are yet maturing,” and that the result would be the “terrible sublime.” Here’s the passage, taken from John Quincy Adams, The Memoirs of John Quincy Adams (volume 4, pp. 492-493) available through the magic of the internet at the HathiTrust Digital library.

“[Thomas] Jefferson is one of the great men whom this country has produced, one of the men who has contributed largely to the formation of our national character–to much that is good and to not a little that is evil in our sentiments and manners. His Declaration of Independence is an abridged Alcoran of political doctrine, laying open the first foundations of civil society; but he does not appear to have been aware that it also laid open a precipice into which the slave-holding planters of his country sooner or later must fall. With the Declaration of Independence on their lips, and the merciless scourge of slavery in their hands, a more flagrant image of human inconsistency can scarcely be conceived than one of our Southern slave-holding republicans. Jefferson has been himself all his life a slave-holder, but he has published opinions so blasting to the very existence of slavery, that, however creditable they may be to his candor and humanity, they speak not much for his prudence or his forecast as a Virginian planter. The seeds of the Declaration of Independence are yet maturing. The harvest will be what West, the painter, calls the terrible sublime.”

A couple of the references here might bear a bit more explanation. “Alcoran” was a common contemporary spelling of what the AP Stylebook now spells as “Quran.” It’s interesting to see a future US president in 1819 referring to the Quran as a parallel for the Declaration of Independence.

The final sentence refers to the painter Benjamin West, and presumably to paintings like his 1796 “Death on a Pale Horse.”

On July 4, I like to think that “the seeds of the Declaration of Independence are yet maturing,” with a full recognition that this process will not always provide its positive results through a happy cheerful parade of good feeling, but instead will sometimes be confrontational, wrenching, and difficult.
