PfEMP1 variants identified in children with severe malaria!

I don’t have time to spend on this but I can’t wait to read it! 

You can’t build drugs to target malaria in large part because of the presence of “var” genes. The “var” stands for variable and it’s exactly what you’d think. Messed up, mutated, switched around and hard to target because of this.

This new paper provides a pipeline that integrates sequencing of the parasite with proteomics of the variants actually expressed, potentially opening the door to focused and targeted therapies. Maybe that’s a stretch (if you trust me on the biology stuff, you don’t know me very well) but — at the very least it is a pipeline in the right direction and it works in real infected human samples!

Bioinformazing — YPIC 2018 Results

The results of YPIC 2018 (the Young Proteomics Investigators Challenge) are out and — this is just amazing all around. I rambled about last year’s a little here.

2018 was a step up in complexity from all sides —

How crazy is that? One — that this challenge is possible to set up at all!?! And two — that some scary talented young investigators solved it?

You can find out more about YPIC and join it here.

Selenoproteins identified and quantified with TMT proteomics!

I sometimes think that my most positive attribute is that I have no problem accepting that I’m a moron. I got one sentence into this new-ish JASMS paper and was like — “yup, I’m still totally dumb.”

I quote: “The essential trace element selenium (Se) functions through its incorporation as the 21st amino acid, selenocysteine (Sec; U), in proteins.”

If I ever knew this fact, I think I should investigate the frequency of head impacts I suffer and maybe consider just wearing a helmet all the time.

There are dozens of selenoproteins in humans alone and they are super important. Here is a nice 3-page open review on them (and a paragraph stolen from said review).

….head impacts…..

While I’m at it and since I have never once in my entire life considered looking for this — yeah — it’s in UniMod.

This is the backdrop for this cool paper where these authors use TMT to study these proteins like this:

1) They raise mice on a purposely selenium deficient diet.
2) They take out the mouse macrophages  (I know how to do this and still have nightmares about doing it late at night in a creepy Hopkins sub-basement — Aha! Proof I do remember something before 2013!)
3) They culture macrophages with and without Selenium
4) They activate the macrophages with LPS (the generic bacterial immune activator thing)
5) They extract the proteins, reduce, alkylate (with 2-chloroacetamide — possibly important here) and TMT label as normal.

They use an Orbitrap Fusion with the TMT MS3 method under what looks like typical conditions. I don’t want to read back through — I should totally do something other than read about these today — but I think they did 3 biological replicates.

Observations?
1) Selenium depletion totally screwed up the abundance of the selenoproteins in the response to LPS activation!
2) My favorite part — they never actually identify (and since it’s TMT, thereby quantify) any peptides from the selenium region. (The image at the very top of this blog post shows — no coverage of the 21st amino acid range!)

#1 is cool because — holy cow — if just depleting/supplementing an element can alter one of the most basic and critically important responses of the general immune system  — what else does it do??!?

#2 I mean, obviously, right? If you’re getting solid coverage of the rest of these proteins — what on earth is going on at the Sec amino acid? How does the reduction, alkylation and labeling affect the Selenium? How well does a selenopeptide ionize? I’m sure one of the >27,000(!) references Google Scholar drags up for this has all these answers but still — that’s totally cool, right?
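I can’t answer the chemistry questions, but the mass arithmetic is easy enough to sanity check. Here’s a back-of-the-envelope sketch — the values are standard monoisotopic masses, not numbers from the paper — of the delta mass an open search would need to catch a Sec sitting where the FASTA says Cys:

```python
# Cysteine vs. selenocysteine: the residues differ only by swapping
# sulfur for selenium, so the open-search delta mass is just Se-80 minus S-32.
# Standard monoisotopic masses, not numbers from the paper.
S_32 = 31.9720707    # monoisotopic mass of sulfur-32
SE_80 = 79.9165218   # monoisotopic mass of selenium-80

CYS_RESIDUE = 103.009185                  # Cys residue (C3H5NOS)
SEC_RESIDUE = CYS_RESIDUE - S_32 + SE_80  # swap the S for Se

delta = SEC_RESIDUE - CYS_RESIDUE
print(f"Sec residue: {SEC_RESIDUE:.4f} Da; Cys -> Sec shift: +{delta:.4f} Da")
```

And that ~+47.94 Da shift is only part of the story — selenium’s spread-out isotope envelope (Se-80 is only about half of natural selenium) means a selenopeptide precursor doesn’t even look like a normal peptide to the instrument, which can’t help selection or identification.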

I’d probably be wasting a lot more time on my Sunday afternoon trying to figure this out, but the files aren’t publicly uploaded…..

Okay — I’m not sure why I’m so curious about this, but if you are too — here is a great study where you can get the data files (found it through ProteomeXchange!).

The data for this study is from an Orbitrap Elite and all the files I downloaded have MS/MS in the ion trap, so open searching and verification will take a lot longer than I have left free today. WHOA! The ETD MS/MS spectra are beautiful. This paper and corresponding dataset deserve some investigation.

Big question: HOW MANY MORE THINGS LIKE THIS (Selenocysteine!?!?) ARE THERE!?!??!  Hundreds? Thousands? Millions? More? All those unidentified spectra are seeming a lot less strange these days and might be bordering on surprising that unmodified peptide regions from any proteoforms exist at all….

Proteome Discovererer 2.4 Scripting nodes!!

Yes — Perseus has basically always been able to interface with external code and programs and has ugly little icons to link all this stuff together — but Proteome Discoverer never has without being in the ultra special developer’s club!

TADAAAAA!!!!  The Proteome Discovererererererererer  2.4 Scripting nodes!

It’s SO EASY THAT I CAN DO IT! (KIND OF!)

I had to follow this poster that isn’t quite exactly what was commercially released, but close enough to be helpful.

I still can’t make it do all that stuff.  But I made it do something!

First of all — there are two scripting nodes — one in the consensus workflow and one in the processing workflow.

You can do simple things with it like have it send whatever data you want to different executables.

Here is the super easy example (click image below to expand)

In this case I just had it send the protein descriptions to NotePad. TADAAA! Totally dumb, right? But could you do that before? I couldn’t.
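NotePad is just standing in for “any executable that takes a file path.” To make that concrete, here’s a hypothetical stand-in receiver — the script and its behavior are my invention, not anything that ships with PD — that you could point the node at instead of NotePad:

```python
import sys

def count_descriptions(path):
    """Count the non-empty lines in the file the caller hands us."""
    with open(path, encoding="utf-8") as handle:
        return sum(1 for line in handle if line.strip())

if __name__ == "__main__" and len(sys.argv) > 1:
    # The calling program (Proteome Discoverer, or anything else)
    # passes the exported file's path as the first argument.
    print(f"received {count_descriptions(sys.argv[1])} protein descriptions")
```

Totally dumb too, but it shows the shape of the hand-off: PD writes a file, your executable gets the path.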

But you can send stuff from Proteome Discoverer to R or Python or other things if you want. For R, you’re going to need a few things first.

You’re going to need R Studio (which will require R)
You’re going to need an interpreter to convert JSON files into R (the RJSONIO package can be found here)
It appears to require some other tools, but it will tell you when you go to install it.

Actually install.packages("RJSONIO") seems to do it, but it complains about something and now I forgot what that actually was.

Okay — so sending stuff out of PD is cool — but what is really really cool (and beyond me at this point) is the ability to do a calculation and bring it back in.

The example I’m working on — I’d like to know my peak width in every file. If you use Minora you get your Apex RT, your Left RT and your Right RT. In minutes. I’d like to take the absolute difference of the left and right RTs, multiply it by 60, and bring in a new column that has my peak width in seconds.

I’d never ever have been able to do it before, but in Proteome Discoverer 2.4 — I’ll still probably never be able to do it, but the tools are there!
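For what it’s worth, the calculation itself is trivial — the hard part is the plumbing in and out of PD. A sketch in Python (the example numbers are made up, and the column names are my shorthand, not PD’s actual export headers):

```python
def peak_width_seconds(left_rt_min, right_rt_min):
    """Peak width in seconds from Left/Right retention times in minutes."""
    return abs(right_rt_min - left_rt_min) * 60.0

# Hypothetical feature rows: (Left RT, Apex RT, Right RT), all in minutes.
features = [(42.10, 42.25, 42.40), (55.00, 55.10, 55.18)]
for left, apex, right in features:
    print(f"apex {apex:.2f} min -> width {peak_width_seconds(left, right):.1f} s")
```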

Maximizing Ion Trap MS/MS Acquisitions! (Get 12% more IDs out of any system?!?!)

I took a semester off from school in undergrad. Up to that point my priorities were running around in a costume (I was my school’s mascot), dancing and tumbling poorly in said costume at basketball games I got into for free and had the best seat possible for (! no seat!) and going to a lot of dumb parties. I lost a scholarship and ran a gas line trimmer through one summer and the following semester before sobering up and swearing I’d never get anything but an A for the rest of my life.

Then I took Dr. Iulia Lazar’s mass spectrometry class in grad school (Bioanalytical instrumentation) and got the first B since I made that oath (and last one of my life so far).

I bring this up because I’m an oversharer — BUT holy cow — that B is my excuse for never even thinking of distorting this very basic tenet of tandem mass spectrometry we’ve all been taking for granted for a decade or two as this group describes in this awesome paper. After reading the abstract you will probably subconsciously start working on your excuse.

You don’t live in Madison? …probably valid….

For the newer generation in our field who have been exclusively using high-resolution things like an Orbitrap — it doesn’t matter what your mass range is there. All the ions get in and the frequency is read while they’re in there. A 1.4 Da SIM scan at 35,000 resolution takes the same amount of time as a 35,000 resolution full scan.

Okay — concept #1 — so a scanning device like a quadrupole OR AN ION TRAP has a certain speed per mass range it covers. In an ion trap all the ions it can hold are there and then it sequentially ejects them. (In a quad it sequentially lets them through, but that doesn’t matter here — but if you try to do a full scan on a triple quad you’ll be shocked by how slow it is). Ejection is super fast, but it isn’t instantaneous. It takes longer to eject a full scan than it does a SIM scan (overheads and all sorts of other things do play a role because they’re not instantaneous either, but let’s keep going)

#2 — there is some stuff in every MS/MS scan that we don’t reeeeaaaaaaaly care about. Probably ions that are on the high and low end of the m/z. Do people in DIA typically let their scan windows go all the way to the bottom? Do you care about that arginine? Really? Probably not! DIA scan ranges are, as far as I can tell, always static.

#3 — We’re basically always letting the instrument determine on the fly what scan range to use. For the “high-low” stuff where you’re doing, for example, Orbitrap MS1 followed by (or synchronous with, probably a combination of the two) MS/MS in an ion trap, the automatic settings are just taking your center point and the “1/3 (3/8?) cutoff rule” as the low mass.
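To put toy numbers on concept #1 — the scan rate below is an illustrative assumption of mine, not a spec from the paper or any particular instrument:

```python
# Toy model: on a scanning analyzer, ejection time grows with the m/z
# range scanned. The rate below is an illustrative assumption only.
SCAN_RATE_DA_PER_S = 66000.0  # assumed ion trap ejection scan rate

def eject_ms(low_mz, high_mz, rate=SCAN_RATE_DA_PER_S):
    """Milliseconds spent scanning ions out across the given m/z range."""
    return (high_mz - low_mz) / rate * 1000.0

full = eject_ms(120, 2000)     # scan everything "just in case"
trimmed = eject_ms(200, 1250)  # only the range you actually care about
print(f"full: {full:.1f} ms, trimmed: {trimmed:.1f} ms, "
      f"saved: {full - trimmed:.1f} ms per MS/MS scan")
```

A few milliseconds per scan doesn’t sound like much until you multiply by the tens of thousands of MS/MS scans in a run — which is where extra identifications can come from.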

What this group did was sit back and say — wait a minute — are we possibly hurting our results by letting an arbitrary rule of physics (that whole stability diagram thing) decide how we do our biological measurements?

YES. Yes we are. How does 12% strike you? A 12% gain in unique identifications!!! People will buy entirely new systems for a 12% increase in unique identifications.

Unless I’m missing something, this should be applicable to every ion trap system doing proteomics out there and should apply to anything where data dependent acquisition is used in complex mixtures.

If you’re using an ion trap to identify anything complex you should definitely check this fantastic new study out.

Stalking and beatings of his girlfriend: carabinieri arrest the dancer Andy Milo

The woman was rescued at home and treated at the Sant’Anna hospital, then reported that for several months she had been the victim of abuse by the Paduan singer who had previously ended up on ‘Striscia’

Copparo. Physical and psychological violence and threats against a girlfriend who wanted to leave him. It allegedly went on for months, until the night between September 10 and 11 when, after being beaten yet again, she decided to call the carabinieri. The man now in jail was already known to national news from a Striscia la Notizia segment about an alleged sexually motivated extortion. He is Andrea Franzon, a 34-year-old aspiring dancer and latin-pop singer from Padua, known by the stage name Andy Milo.

He was initially reported for domestic abuse, personal injury and threats, but for the moment, since the two did not live together, prosecutor Alberto Savino is charging him with stalking — consisting of numerous calls and abusive messages, and defacing the walls of her house with offensive graffiti — and aggravated assault.

The carabinieri of the Ambrogio station, assisted by colleagues from Jolanda di Savoia and the Norm unit of Copparo, intervened late in the evening of September 10 in the Copparo hamlet following a call to 112 from a frightened woman who had been beaten by her boyfriend.

When the officers arrived on the scene they found the woman severely agitated from the blows she had received. Franzon was still in the home.

The fight reportedly broke out over the victim’s decision to leave her boyfriend because he was violent and excessively jealous. The woman had abrasions on the back of her head and on her arms and was taken by ambulance to the emergency room of the Arcispedale Sant’Anna in Ferrara, where she was diagnosed with multiple injuries expected to heal in 5 days.

Once discharged, the woman filed a formal complaint, stating that for several months she had been the victim of physical and psychological violence as well as serious threats by Franzon, adding that she had tried repeatedly to end the relationship but, out of fear of the threats she received, never found the courage to separate definitively from the man, who had allegedly also beaten her on various previous occasions.

Franzon made national news in May of this year after a Striscia la Notizia segment cornered him in a pastry shop in the Padua area over an alleged extortion and harassment campaign against a young man he had met through an online ad: for months he reportedly bombarded the victim with phone messages, demanding money in exchange for not spreading sexual rumors and compromising photos.

In June, local reports say, he was sentenced to 4 years for extortion for threatening a man who had posted an ad on a gay dating site, claiming he would report him unless he paid 1,300 euros, which were then credited to a rechargeable PostePay card. Milo allegedly passed himself off as an agent of the site, making the victim believe the ad required payment.

According to Il Mattino di Padova, the singer-dancer has been on trial in Rovigo since June on charges of attempted extortion against the public company AS2 Servizi Strumentali, which had impounded his car; to get it back he allegedly threatened employees, posed as a lawyer, accused the company of having extorted Franzon, and demanded compensation within 12 hours.

In 2015, according to La Nuova Venezia, Milo had already been sentenced to 2 years and 6 months in prison for similar offenses, and in 2018 he accused the model Nina Moric of defamation; she was later acquitted.

‘Operation Illogic’ takes down alleged multimillion-dollar counterfeit e-cigarette ring

3 charged in million-dollar fake vape scheme on Long Island - ABC7 New York

A father and his two sons have been charged in an alleged multimillion-dollar counterfeit vaping product ring.


Investigators say a lengthy investigation led to the arrests of Asgar Ali, and his sons Moosa and Zar, of East Meadow. They say more than 10,000 pieces of counterfeit electronic cigarettes, e-cig refills and vape pens were seized by authorities, with an overall value around $1.5 million.


The three were charged with trademark counterfeiting. They pleaded not guilty.

Prosecutors say the family was having the products shipped from China. They say they’ve made more than $4 million over the past two years.

Nassau County Police Commissioner Patrick Ryder says the family used social media, Amazon, eBay and other methods to sell the products. Investigators say they were also selling the products at five different stores they owned, including a Card Smart in East Meadow.

“I know one thing the DA’s office is going to have to prove — that they actually knew that they were selling counterfeit trademark goods,” says Kevin O’Donnell, attorney for the Ali family. “That’s one hurdle the DA is going to have to jump over.”

Officials say it’s hard to tell if a product is counterfeit. They also say there are health concerns involved.

“One puff equals one cigarette, think about that,” says Nassau County Executive Laura Curran. “It’s highly addictive, so you have that aspect, and now when you’ve got them coming in illegally, you don’t know what the heck is in here.”

All three men are expected to post bail on the condition that their passports be seized. If convicted, they each face up to 15 years in prison.

Time for Fiscal Rules?

Should governments set rules to constrain the size of government borrowing on an annual basis or government debt accumulated over time? Pierre Yared discusses the question in “Rising Government Debt: Causes and Solutions for a Decades-Old Trend,” in the Spring 2019 issue of the Journal of Economic Perspectives.

There’s really no economic case to be made for the plain-vanilla rule that national governments should balance their budget every year. During a recession, for example, tax revenues will fall as income falls, and government spending on  programs like unemployment insurance, Medicaid, and food stamps will rise. If in the face of these forces the government wanted to keep a balanced budget during a recession, it would thus need to find ways to raise its tax revenues and cut other spending even while the economy is weak. A more sensible strategy is to find ways for these fiscal “automatic stabilizers” to function more strongly.

But the foolishness of a simplistic rule to balance the budget every year doesn’t mean that no rules at all can work. As Yared writes (citations omitted): “Thus, governments across the world have adopted fiscal rules—such as mandated deficit, spending, or revenue limits—to curtail future increases in government debt. In 2015, 92 countries had fiscal rules in place, a dramatic increase from 1990, when only seven countries had them.”

The form of these rules varies across countries. A basic lesson seems to be that all fiscal rules are imperfect, and can be gamed or avoided if a government wishes to do so, but also that well-designed rules–even with looseness and imperfections–do offer some constraints and limits that can hold down the amount of government borrowing.

Yared mentions an IMF study by Luc Eyraud, Xavier Debrun, Andrew Hodge, Victor Duarte Lledo, and Catherine A Pattillo called “Second-Generation Fiscal Rules : Balancing Simplicity, Flexibility, and Enforceability” (IMF Staff Discussion Note, SDN/18/04,  April 13, 2018).  They sum up the situation with fiscal rules in this way:

By improving fiscal performance, well-designed rules help build and preserve fiscal space while allowing its sensible use. Good rules encourage building buffers in good times and allow fiscal policy to support the economy in bad times. This implies letting automatic stabilizers operate symmetrically over the cycle and including escape clauses that allow discretionary fiscal support when needed. By supporting a credible commitment to fiscal sustainability, rules can also create space in the budget for financing growth-enhancing reforms and inclusive policies. 

To be effective, fiscal rules should have three main properties—simplicity, flexibility, and enforceability. These three properties are very difficult to achieve simultaneously, and past reforms have struggled to find the right balance. In the past decade, “second-generation” reforms have expanded the flexibility provisions (for example, with new escape clauses) and improved enforceability (by introducing independent fiscal councils, broader sanctions, and correction mechanisms). However, these innovations as well as the incremental nature of the reforms have made the systems of rules more complicated to operate, while compliance has not improved. … 

This paper presents new evidence that well-designed rules are indeed effective in constraining excessive deficits. Country experiences show that successful rules generally have broad institutional coverage, are tightly linked to fiscal sustainability objectives, are easy to understand and monitor, and support countercyclical fiscal policy. Supporting institutions, like fiscal councils, are also important. In contrast, rules that are poorly designed and do not align well with country circumstances can be counterproductive. Novel empirical research finds that fiscal rules can reduce the deficit bias even when they are not complied with.

In his essay in JEP, Yared offers some more detailed insights. In some ways, the key issue isn’t the fiscal rule you set, but rather what consequences will arise if the rule is broken. Here’s Yared:

There are several issues to take into account when considering punishments for breaking fiscal rules. First, whether or not rules have been broken might be unclear. There are numerous examples of how governments can use creative accounting to circumvent rules. Frankel and Schreger (2013) describe how euro-area governments use overoptimistic growth forecasts to comply with fiscal rules. Many US states compensate government employees with future pension payments, which increases off-balance-sheet entitlement liabilities not subject to fiscal rules (Bouton, Lizzeri, and Persico 2016). In 2016, President Dilma Rousseff of Brazil was impeached for illegally using state-run banks to pay government expenses and bypass the fiscal responsibility law (Leahy 2016). Given this transparency problem, many countries have established independent fiscal councils to assess and monitor compliance with fiscal rules (Debrun et al. 2013).

A second issue to consider is the credibility of punishments. As an example, the Excessive Deficit Procedure against France and Germany in 2003 was stalled by disagreement between the European Commission and the European Council; consequently, French and German deficits persisted without penalty  …

A third issue is the response of the private sector to the violation of rules, which can also serve as a form of punishment. For example, Eyraud, Debrun, Hodge, Lledó, and Pattillo (2018) [in the IMF study mentioned above] find that the violation of fiscal rules is associated with a significant increase in interest rate spreads for sovereign borrowing. Such an increase in financing costs immediately penalizes a government for breaching a rule. …

Many governments’ fiscal rules feature an escape clause that allows violating the rule under exceptional circumstances (Lledó et al. 2017). Triggering an escape clause typically involves a review process, which culminates in a final decision by an independent fiscal council, a legislature, or citizens via a referendum. In Switzerland, for example, the government can deviate from a fiscal rule with a legislative supermajority in the cases of natural disaster, severe recession, or changes in accounting method. The cost of triggering an escape clause deters governments from using them too frequently. Moreover, because these costs largely involve a facilitation of information gathering to promote efficient fiscal policy, escape clauses are useful even in the presence of perfect rule enforcement.

Again, a theme that emerges is that a government which is serious about a fiscal rule will want to set up procedures to be followed when that rule is being broken. In turn, those procedures should be high-profile at least in a publicity sense, so that the decision to break the fiscal rule must be explained, justified, and evaluated by an independent commission.

Another issue Yared mentions is that a fiscal rule can be designed with different categories: instrument-based rules that focus on specific categories of  spending or taxes, or overall target-based rules. He writes:

In practice, fiscal rules can constrain different instruments of policy, such as specific categories of government spending or tax rates. Different instruments may call for different thresholds … For instance, due to volatile geopolitical conditions, military spending needs may be less forecastable than other spending needs, and may thus demand more flexibility. Capital spending is another category where allowing increased flexibility may be optimal, as the benefits of capital spending accrue well into the future and are thus subject to a less-severe present bias. Thus, many countries have “golden rules,” which limit spending net of a government’s capital expenditure. … Overall, the evidence [suggests] that rules that distinguish across categories are indeed associated with better fiscal and macroeconomic outcomes (for discussion, see Eyraud, Lledó, Dudine, and Peralta 2018). Moreover, it can be optimal to set multiple layers of rules, for example specifying a fiscal threshold for individual categories of taxes and spending as well as on the total level of taxes and spending in the form of a (forecasted) deficit rule.  

Ultimately, Yared argued for the benefits of a hybrid rule, “which allows for an instrument threshold that is relaxed whenever a target threshold is satisfied.” 

In short, practical fiscal rules are quite possible, at least according to the 90-plus countries that have them. And research suggests that such rules do constrain government borrowing, even given that they are going to be broken from time to time. But simple-minded fiscal rules like the US government “debt ceiling” will be essentially pointless, except for connoisseurs of short-term political dramas. Meaningful fiscal rules will not be simple, and will need to pay detailed attention not just to the overall goal, but to the practical issues of how much flexibility should surround the goal and what consequences will result when government borrowing breaks through even a flexible rule. 

Origins of "Microeconomics" and "Macroeconomics"

Economists have written about topics that we would now classify under the headings of “microeconomics” or “macroeconomics” for centuries. But the terms themselves are much more recent, emerging only in the early 1940s. For background, I turn to the entry on “Microeconomics” by Hal R. Varian published in The New Palgrave: A Dictionary of Economics, dating back to the first edition in 1987.

The use of “micro-” and “macro-” seems to date back to the work of Ragnar Frisch in 1933, but he referred to micro-dynamics and macro-dynamics. As Varian writes:

Frisch used the words “micro-dynamic” and “macro-dynamic”, albeit in a way closely related to the current usage of the terms “microeconomic” and “macroeconomic”: 

“The micro-dynamic analysis is an analysis by which we try to explain in some detail the behaviour of a certain section of the huge economic mechanism, taking for granted that certain general parameters are given … The macrodynamic analysis, on the other hand, tries to give an account of the whole economic system taken in its entirety (Frisch 1933).” 

Elsewhere Frisch gives a more explicit definition of these terms that is closely akin to the modern usage of micro and macroeconomics: “Microdynamics is concerned with particular markets, enterprises, etc., while macro-dynamics relate to the economic system as a whole” …. 

John Maynard Keynes does not seem to have used the micro- and macro- language. But Varian quotes a passage from the General Theory in 1936 to show that Keynes was quite aware of the distinction. Keynes wrote:

The division of Economics between the Theory of Value and Distribution on the one hand and the Theory of Money on the other hand is, I think, a false division. The right dichotomy is, I suggest, between the Theory of the Individual Industry or Firm and of the rewards and the distribution of a given quantity of resources on the one hand, and the Theory of Output and Employment as a whole on the other hand [emphasis in the original]. 

Varian points to a somewhat obscure economist P. de Wolff as the first to use “microeconomic” and “macroeconomic” in 1941. Varian writes:

The earliest published reference that explicitly uses the term “microeconomics” that I have been able to locate is de Wolff (1941). De Wolff, a colleague of Tinbergen at the Netherlands Statistical Institute, was well aware of the macrodynamic modelling efforts of Frisch, and may have been inspired to extend Frisch’s use of “micro-dynamics” to the more general expression of “microeconomics”. De Wolff’s note is concerned with what we now call the “aggregation problem” — how to move from the theory of the individual consuming unit to the behaviour of aggregate consumption. … He [de Wolff] is quite clear about the distinction between micro- and macroeconomics: 

“The concept of income elasticity of demand has been used with two entirely different meanings: a micro- and macro-economic one. The micro-economic interpretation refers to the relation between income and outlay on a certain commodity for a single person or family. The macro-economic interpretation is derived from the corresponding relation between total income and total outlay for a large group of persons or families (social strata, nations, etc.) [emphasis in original].”

In Varian’s telling, the terms of macroeconomics start popping up in academic journals and even some lesser-used textbooks in the 1940s, are in widespread use by the mid-1950s, and first appear in Paul Samuelson’s canonical intro economics textbook in the 1958 edition.

Strengthening Automatic Stabilizers

For economists, “automatic stabilizers” refers to how tax and spending policies adjust without any additional legislative policy or change during economic upturns and downturns–and do so in a way that tends to stabilize the economy. For example, in an economic downturn, a standard macroeconomic prescription is to stimulate the economy with lower taxes and higher spending. But in an economic downturn, taxes fall to some extent automatically, as a result of lower incomes. Government spending rises to some extent automatically, as a result of more people becoming eligible for unemployment insurance, Medicaid, food stamps, and so on. Thus, even before the government undertakes additional discretionary stimulus legislation, the automatic stabilizers are kicking in.

Might it be possible to redesign the automatic stabilizers of tax and spending policy in advance so that they would offer a quicker and stronger counterbalance when (not if) the next recession comes? The question is especially important because in past recessions, the Federal Reserve often cut the policy interest rate (the “federal funds” interest rate) by about five percentage points. But interest rates are lower around the world for a variety of reasons, and the federal funds interest rate is now at 2.5%. So when the next recession comes, monetary policy will be limited in how much it can reduce interest rates before those rates hit zero percent, and will instead need to rely on nontraditional monetary policy tools like quantitative easing, forward guidance, and perhaps even experiments with a negative policy interest rate.

Heather Boushey, Ryan Nunn, and Jay Shambaugh have edited a collection of eight essays under the title Recession Ready: Fiscal Policies to Stabilize the American Economy (May 2019, Hamilton Project at the Brookings Institution and Washington Center for Equitable Growth).

In one of the essays, Louise Sheiner and Michael Ng look at US experience with fiscal policy during recessions in recent decades, and find that it has consistently had the effect of counterbalancing economic fluctuations. They write: “Fiscal policy has been strongly countercyclical over the past four decades, with the degree of cyclicality somewhat stronger in the past 20 years than the previous 20. Automatic stabilizers, mostly through the tax system and unemployment insurance, provide roughly half the stabilization, with discretionary fiscal policy in the form of enacted tax cuts and increased spending accounting for the other half.”

Automatic stabilizers are important in part because the adjustments can happen fairly quickly. In contrast, when the discretionary Obama stimulus package–American Recovery and Reinvestment Act of 2009–was signed into law in February 2009, the Great Recession had started 14 months earlier.

In another essay, Claudia Sahm proposes that along with the already-existing built-in shifts in taxes and spending, fiscal stabilizers could be designed to kick in automatically when a recession starts. In particular, she proposes that the trigger for such actions could be when “the three-month moving average of the national unemployment rate has exceeded its minimum during the preceding 12 months by at least 0.5 percentage points. … The Sahm rule calls each of the last five recessions within 4 to 5 months  of its actual start. … The Sahm rule would not have generated any incorrect signals in the last 50 years.”
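Sahm’s trigger is simple enough to state in a few lines of code. A sketch — the unemployment series below is invented purely to show the trigger firing, and the edge-case handling is my reading of the rule, not her implementation:

```python
def sahm_trigger(u, threshold=0.5):
    """Recession signal per month: True when the 3-month moving average of
    the unemployment rate is at least `threshold` points above the minimum
    of that average over the preceding 12 months."""
    # ma3[j] is the 3-month average ending in month j + 2
    ma3 = [sum(u[i - 2:i + 1]) / 3.0 for i in range(2, len(u))]
    signals = [False] * len(u)
    for t in range(14, len(u)):  # need 12 prior months of 3-month averages
        current = ma3[t - 2]          # average ending in month t
        low = min(ma3[t - 14:t - 2])  # averages ending in months t-12 .. t-1
        signals[t] = current - low >= threshold
    return signals

# Hypothetical series: flat at 3.6%, then a sharp rise in the last 3 months.
u = [3.6] * 15 + [4.0, 4.3, 4.5]
print([month for month, fired in enumerate(sahm_trigger(u)) if fired])  # [17]
```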

Sahm argues that when this trigger is hit, the federal government should have legislation in place that would immediately make a direct payment–which could be repeated a year later if the recession persists. She makes the case for a total payment of about 0.7% of GDP (given current GDP of around $20 trillion, this would be $140 billion). She writes: “All adults would receive the same base payment, and in addition, parents of minor dependents would receive one half the base payment per dependent.” This isn’t cheap! But a lasting and persistent recession is considerably more expensive. 
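The arithmetic behind the per-adult base payment is worth making explicit. In this sketch the GDP figure and the 0.7% total come from the text, but the adult and dependent counts are round assumptions of mine, not Sahm’s figures:

```python
# Back out the per-adult base payment from the totals in the text.
GDP = 20e12           # ~$20 trillion, from the text
total = 0.007 * GDP   # ~0.7% of GDP, i.e. ~$140 billion
adults = 250e6        # assumed count of adult recipients
dependents = 73e6     # assumed count of minor dependents

# Each adult gets the base payment b; each dependent adds half a base:
#   b * adults + 0.5 * b * dependents = total
base = total / (adults + 0.5 * dependents)
print(f"total: ${total / 1e9:.0f}B -> base payment: ${base:.0f} per adult")
```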
Other chapters of the book focus on a number of other proposals, which include: 
  • “[T]ransfer federal funds to state governments during periods of economic weakness by automatically increasing the federal share of expenditures under Medicaid and the Children’s Health Insurance Program”
  • “[C]reating a transportation infrastructure spending plan that would be automatically triggered during a recession”
  • Publicize availability of unemployment benefits when the unemployment rate starts rising, and extend the length of unemployment insurance payments at this time
  • Expand Temporary Assistance for Needy Families to include subsidized jobs in recessions
  • An automatic rise of 15% in Supplemental Nutrition Assistance Program (SNAP) benefits during recessions

The list isn’t exhaustive, of course. For example, one policy used during the Great Recession was to have a temporary cut in the payroll taxes that workers pay to support Social Security and Medicare. For most workers, these taxes are larger than their income taxes. And there is a quick and easy way to get this money to people, just by reducing what is withheld from paychecks. 

The broader issue here, of course, is not about the details of specific actions, some of which are more attractive to me than others. It’s whether we seize the opportunity now to reduce the sting of the next recession.

For estimates of automatic stabilizers in the past, see “The Size of Automatic Stabilizers in the US Budget” (November 23, 2015).

Here’s a table of contents for the book edited by Boushey, Nunn, and Shambaugh:
