Policy Institutes

North Korea’s Kim Jong Un is doing everything in his power to ensure that he remains atop the United States’ enemies list. For months, his government has been test-launching missiles and issuing threats. This week the rhetoric got even hotter. President Trump pledged to rain “fire and fury like the world has never seen” on North Korea. The North Koreans responded with a promise to attack the U.S. base at Guam.

Secretary of State Rex Tillerson said last week, as he did in April, that the United States does not seek regime change in Pyongyang, but other tin-pot dictators have heard similar assurances before. If Kim Jong Un doesn’t want to go the way of Slobodan Milosevic, Saddam Hussein, and Muammar Gaddafi, he’ll hold onto his nukes.

Unsurprisingly, hawks in Washington – who don’t like being so deterred – are urging President Trump to launch a preventive war and denude the latest Crazy Kim of his dangerous toys.

For example, John Bolton explained last week that, since diplomacy is unlikely to be successful, Trump has only three options: “pre-emptively strike at Pyongyang’s known nuclear facilities, ballistic-missile factories and launch sites, and submarine bases”; “wait until a missile is poised for launch toward America, and then destroy it”; or launch “airstrikes or [deploy] special forces to decapitate North Korea’s national command authority, sowing chaos, and then sweep in on the ground from South Korea to seize Pyongyang, nuclear assets, key military sites and other territory.”

To summarize: small war now, small war later, or big war now. And, of the middle option, Bolton warns that a preemptive strike would “provide more time but at the cost of increased risk” and that “Intelligence is never perfect” – so that leaves war now (or soon).

Bolton grudgingly admitted that “All these scenarios pose dangers for South Korea, especially civilians in Seoul,” and that “The U.S. should obviously seek South Korea’s agreement (and Japan’s) before using force, but no foreign government, even a close ally, can veto an action to protect Americans from Kim Jong Un’s nuclear weapons.”

Along similar lines, Lindsey Graham explained: “Japan, South Korea, China would all be in the crosshairs of a war if we started one with North Korea. But if [North Korea gets] a missile they can hit California, maybe other parts of America.”

“If there’s going to be a war to stop [Kim Jong Un],” Graham continued, “it will be over there. If thousands die, they’re going to die over there. They’re not going to die here.”

This leaves aside the rather obvious fact that the American troops carrying out a war with North Korea would be risking death. That factor should also weigh heavily on the president’s mind. The American people, burned by other wars that Bolton and Graham championed, are highly averse to new ones – especially those likely to result in large numbers of Americans getting killed.

As I point out in a new article at The Skeptics:

A recent paper finds that in the 2016 election, Donald Trump performed well in those communities that paid the heaviest price during America’s post–9/11 wars in Afghanistan and Iraq. Voters in these communities may even have provided the margin he needed to win the presidency.

“Trump’s ability to connect with voters in communities exhausted by more than fifteen years of war,” write Douglas Kriner and Francis Shen, “may have been critically important to his narrow election victory.”

“If Trump wants to win again in 2020, his electoral fate may well rest on the administration’s approach to the human costs of war. Trump should remain highly sensitive to American combat casualties.”

Could public sentiment really constrain a president convinced that military action is the last best course of action? Maybe, argue Kriner and Shen. “The significant inroads,” they write, “that Trump made among constituencies exhausted by fifteen years of war—coupled with his razor thin electoral margin (which approached negative three million votes in the national popular vote tally)—should make Trump even more cautious in pursuing ground wars.”

I’m skeptical.

“Trump” and “cautious” are two words that rarely go together. And not all U.S. wars are ground wars.

The human cost of war should factor into any president’s decision to start one. But Donald Trump’s limited understanding of modern warfare and international politics might convince him that he can pick a few cheap and easy fights to boost his popularity and secure a few quick wins. Though he might be disinclined to initiate a major conflict, that doesn’t mean that Trump is reluctant to use force. And those superficially limited military engagements have a nasty tendency to morph into honest-to-goodness full-blown wars.

You can read the whole thing here.

Maranda O’Donnell was arrested for driving with a suspended license and bail was set at the prescheduled amount of $2,500, which she could not pay. Ms. O’Donnell was not alone in having bail set at an amount that could not be paid. Robert Ford’s bail was set at $5,000 for misdemeanor theft of property and Loetha McGruder’s bail was set at $5,000 for the misdemeanor of giving a false name to a police officer. There are many other such examples; all of these bail amounts were set according to a predetermined schedule based on the offense. None of the defendants could afford the bail and so were forced to stay in jail.

According to one report, 81% of misdemeanor arrestees in Harris County (Houston), Texas, were unable to post bail at booking, and 40% were never able to post bail. Ms. O’Donnell sued Harris County and various government officials, on behalf of herself and all others similarly situated, for violating the Fourteenth Amendment’s Due Process and Equal Protection Clauses by setting bail at amounts higher than defendants could pay, which left indigent defendants detained far longer than those financially able to pay.

The federal district court found that the predetermined bail schedule functioned as a “nearly irrebuttable presumption in favor of applying secured money bail at the prescheduled amount.” Further, the court found that Harris County did not even provide defendants “timely hearings” at which they could prove their inability to pay or learn why they were being denied a bail they could pay. The court issued a preliminary injunction ordering the county to release misdemeanor defendants on personal bond—not secured by cash in advance—within 24 hours of arrest.

The county appealed to the U.S. Court of Appeals for the Fifth Circuit, where Cato has now filed an amicus brief supporting the injunction based on the history of bail.

Bail has ancient roots going back to before Magna Carta. Even before the United States existed, English courts required that bail be set on an individualized basis based on the financial ability of the defendant. When the king’s sheriffs issued bail that was too high to be paid, the prohibition on excessive bail was created in the English Bill of Rights, which was then incorporated into the U.S. Constitution in the Eighth Amendment. As both the Supreme Court and D.C. Circuit held in 1835, “to require larger bail than the prisoner could give would be to require excessive bail, and to deny bail” in violation of the Constitution. This was the understanding in America for more than 100 years after the Founding.

The modern Supreme Court has continued to recognize the protection that these ancient requirements for bail provide: “In our society, liberty is the norm, and detention prior to trial or without trial is the carefully limited exception.” United States v. Salerno (1987). These long-standing bail customs require an individualized determination of the bail amount, which was not provided by Harris County, violating the defendants’ right to the due process of law as guaranteed by the Fourteenth Amendment. The Fifth Circuit should maintain the preliminary injunction in O’Donnell v. Harris County.

In a recent column, AEI scholar Abby McCloskey claims that “[m]ost people on the right and on the left” want government-sponsored paid family leave. McCloskey links to an admiring summary of a 2016 public opinion poll as evidence.

The summary does not provide the associated poll topline (questionnaire), but Morning Consult kindly provided some questions upon request. They included “Do you support or oppose requiring employers to offer paid parental leave for new parents?” and “if the federal government required employers to offer paid parental leave for new parents, how long should that leave be?”

Unfortunately, the poll’s questions are not sound from the perspective of the psychology of survey response. As analysts know, a question’s wording makes an enormous difference in poll results. When people are asked whether they would like a particular benefit, with no mention of the process or cost involved, many will respond affirmatively.

But if costs are mentioned, public opinion transforms (see polling on healthcare for example). As a result, polling can be confusing at best and calculated to elicit certain responses at worst. In the first question, the Morning Consult poll does not describe who will be requiring employers to provide paid family leave or how they will do so. It does not mention tradeoffs. In the second question it asks respondents to accept that government is providing paid leave and then pick the length.

Usually it would be hard to know how much the absence of the “whos” or “hows” mattered for the results. But fortunately, Pew Research asked the same questions and made the details explicit.

Specifically, Pew asked whether A) the federal government should require employers to provide paid family leave or B) employers should be able to decide for themselves whether to provide paid family leave. 

Pew found “there is no consensus,” and public opinion was split evenly. Contrary to McCloskey’s summary article, which claims the political parties unanimously approve of a government mandate for paid family leave, Pew described public opinion as divided along political lines. Democrats were the only group in which a majority strongly favored even the lightest of paid family leave measures: using government tax credits as an incentive for employers to provide paid leave.

Nor is paid leave a top policy priority: another recent Pew poll finds that “expanding access to paid family and medical leave ranks at the bottom of a list of 21 policy items.” Seven in ten workers (69%) are at least somewhat satisfied with the benefits their employer already provides, and Americans prioritize other issues over creating an entitlement to a benefit most already receive: 63% of workers who took parental, family, or medical leave say their employer paid for part or all of it.

It seems that the public is divided after all. That said, do Americans like or want paid family leave benefits? Of course, and when asked they say so. But the right question is not whether they like or desire leave, but whether they desire federal involvement. The answer is often “no.”

One fortunate aspect of President Trump’s bill to reduce legal immigration by 50 percent is that it has started the conversation on how to reform the nation’s legal immigration system—even if it started it on the wrong foot. Members of Congress now have an opportunity to respond with legislation that would increase legal immigration and fix the various problems with the system, which are numerous.

1) Employment-based quotas haven’t changed since 1990, even as the economy doubled in size. Unlike in many other countries, the legislative branch establishes hard ceilings on immigration, rather than flexible targets or administratively determined limits. The Immigration Act of 1990 set the current limit of 140,000 visas for immigrants whom employers sponsor for legal permanent residency. Since then, U.S. real gross domestic product has grown from $8.9 trillion to $17 trillion. At the same time, the computer and Internet revolutions transformed the economy, yet the quota remained the same.

  • Congress should double the 1990 employment-based quota to at least 280,000 and index the quota to GDP growth. Senators Ron Johnson and John McCain incorporate GDP indexing in their State-Sponsored Pilot Program Act, which would allow states to sponsor temporary workers (see p. 24).

2) Half the quota for immigrant workers is filled by family members. Of the 140,000 employer-sponsored visas, sponsored employees actually use less than half. That’s because in 1991 the George H.W. Bush administration adopted an interpretation of the law under which spouses and children of the immigrants—who are entitled to a visa along with the primary applicant—count against the quota. As I’ve written before, it is far from clear that this is the correct interpretation of the law, but it makes little sense in any case. The quota targets the number of workers the economy needs. Why should married workers take away slots from other applicants? If the quota is hit after a worker receives his visa but before his family does, why should we separate them? For these reasons, all temporary worker categories exempt spouses and children from their caps.

  • Congress should clarify that spouses and children of immigrant workers do not count against the green card limits. This would require amending 8 U.S.C. 1153(d) with a statement that the visas or status issued under that subsection don’t reduce the number of visas available to primary applicants.

3) America discriminates against applicants from more populous countries. The law states that the total number of permanent residency visas made available to nationals of any single country may not exceed 7 percent of the total. This means that countries with few applicants, like Iceland and Moldova, receive priority over countries with many applicants, like China and India, creating massive wait times for certain nationalities. The wait for Indian employer-sponsored immigrants is so long that many applicants will die before they see their visas. Nativists, however, openly admit that they prefer the per-country limits precisely because they make the system so frustrating that immigrant workers from certain countries want to give up.

  • Congress should remove the per-country limits. The Fairness for High Skilled Immigrants Act (H.R. 392), which has 238 cosponsors, would phase out the per-country limits for employment-based immigrants and double the limits for family-based.

4) Foreign workers can work here legally for 10 years and still not receive the right to live here permanently. As I’ve said before, America treats its high-skilled immigrants worse than it treats its lowest skilled refugees, who receive permanent residency after one year. Employers can sponsor H-1B high-skilled temporary workers for permanent residency, and the law allows those workers to extend their status indefinitely. But despite living and working in the country for many years—even a decade—they cannot enjoy their full rights. They cannot work for whomever they want. They cannot start businesses. They must apply for extensions every single year. Workers in other visa categories, such as the E-2 temporary visa for entrepreneurs and investors and the O-1 visa for entrepreneurs and other outstanding achievers, have no permanent residency category for which they can apply at all, because they have no employer to sponsor them. They can work for decades and receive no path to permanent residency.

  • Congress should create a path to permanent residency for any worker who has worked legally in the United States for an aggregate period of more than 10 years. It should similarly allow anyone who is waiting abroad to enter if they have waited for 10 years. That would place a hard limit on backlogs and create an incentive for legal immigrants to stay and not abandon the American dream for the Canadian dream.

5) Children of legal foreign workers grow up in the United States but are deported at adulthood—even if they were already waiting in line for permanent residency. This “aging out” problem is one of the crueler aspects of America’s immigration system. H-1B foreign workers and their spouses and minor children receive a temporary visa that is renewable indefinitely if the worker’s employer sponsors him for permanent residency. The worker may add his spouse and minor children to the permanent residency application, and the whole family waits together in the U.S. Thus, many children of H-1B workers grow up in the United States, graduate from U.S. high schools, and attend U.S. colleges. You can read here about how accomplished these young people are. Yet because only minor children are eligible, they receive a removal order as soon as they reach age 21 if their parent has yet to receive permanent residency. As I’ve written before, they are essentially young immigrant “Dreamers.”

  • Congress should end “aging out.” If a person is already waiting in line for permanent residency when they hit 21, they should remain in the green card queue and in legal status in the United States. This could be done in a number of ways, but perhaps the best is section 3(c) of the Johnson-McCain state-sponsored visa bill (pp. 36–37). The Johnson-McCain bill language would also solve a number of other problems for high-skilled workers, including the inability to change jobs and the prohibition on spousal work (which President Obama partially ended).

6) If the administration fails to issue the required permanent residency visas in a given year, immigrants and employers are out of luck. This fact is unbelievable, but true. Every year from 1992 to 2009, with the exception of 2008, the government simply failed to issue the full allotment of visas. According to the 2010 U.S. Citizenship and Immigration Services Ombudsman report, nearly 750,000 visas went unused during this period. Immigrants who are beneficiaries of an approved immigrant petition from a U.S. employer cannot apply for permanent residency until their “priority date” comes up. The State Department estimates the priority date, but it cannot know the date for certain. It depends on how many people are waiting and how many of those who are waiting actually apply once their number comes up. Both of these factors are unknown, so the State Department must guess. If it guesses wrong, not enough immigrants apply, and visa slots are lost.

  • Congress should recapture all of the lost visas since 1992 and create a provision that increases the quota in the following year by the number of visas that went unused in the prior year. These provisions were included in Section 2304 of the Senate-passed 2013 immigration bill (p. 371).

7) The quota for immigrant workers without a bachelor’s degree is just 5,000. This figure is laughably low in light of the more than 11 million unauthorized immigrants in the United States—85 percent of whom have no college degree. It’s also absurd given that even in 2020, only 35 percent of job openings will require a four-year degree, while 36 percent will require no education at all after high school. These positions are not all “low-skilled” either. Dozens of occupations, like these, require no bachelor’s degree, but pay over $70,000, which is close to the threshold for getting “points” under the Trump immigration bill. Opponents of low-skilled immigration claim that these workers are a detriment to U.S. workers, but the empirical evidence indicates that this is false.

  • Congress should make available 100,000 visas for workers without a college degree. This is where a points system actually makes more sense. For college grads, the degree is a decent predictor of labor market success. For those with less than a college degree, there is significantly more variability in outcomes, so a points system could be a better predictor. The Senate bill’s Merit-Based Track 1, Tier 2 (pp. 354–356) provides a model for this type of point system.

8) The U.S. educates and trains a million foreign students and then sends them home to compete with us. This must rank highly among America’s worst economic policies. According to the National Academies of Sciences 2016 report on the fiscal effects of immigration, each foreign bachelor’s degree holder contributes, in net present value terms, between $210,000 and $330,000 more in taxes than he or she receives in benefits over a lifetime. For those with advanced degrees, it’s between $427,000 and $635,000 (p. 341). As my colleague Alex Nowrasteh has detailed, immigrants contribute massively to innovation, entrepreneurship, and economic growth. Yet if the U.S. continues its current policy, they will do those things in other countries.

  • Congress should exempt from the immigration quotas foreign graduates of U.S. universities, at least for all science, technology, engineering, and math fields. The Senate bill’s Section 2307 would have exempted foreign physicians, doctorate degree holders from U.S. universities, and all advanced degree holders in science, technology, engineering, and math (pp. 407–409). This would be a good start.

9) The U.S. has a limit on the number of “extraordinary” immigrants that it will admit. The EB-1 visa category is for immigrants with “extraordinary ability,” “outstanding professors and researchers,” and multinational executives. These include Nobel Prize winners and those with “original scientific, scholarly, artistic, athletic, or business-related contributions of major significance to the field.” Yet bafflingly, we subject these immigrants to the same quota as other immigrants.

  • Congress should exempt all employment-based first preference immigrants from the quota system. The Senate bill’s Section 2307 would have implemented this change (pp. 404–407). Congress should immediately adopt these changes.

10) America has no entrepreneurship visa. Immigrants are roughly twice as likely to start a business in the United States as native-born Americans. Immigrants founded more than half of all new businesses in Silicon Valley from 1995 to 2005. In 2011, nearly 70,000 New York City immigrants owned more than 60 percent of the city’s small businesses. Almost all of the city’s dry cleaning and laundry services and taxi and limo services were immigrant owned. Yet somehow there is no permanent residency category for entrepreneurs. It goes without saying that this is exceptionally counterproductive. It’s important to emphasize that most immigrant entrepreneurs will not start the next Google, but even small business owners play an important role in keeping America’s economy competitive and innovative.

  • Congress should create a visa category for business owners and entrepreneurs. Sen. Jerry Moran’s Startup Act is the best available option to do so.

Note that these are just the reforms related to the process for permanent residency. Just as many reforms are needed in the temporary work visa system. Moreover, the RAISE Act, the president’s preferred legal immigration reform, contains only one of these reforms (#3). It also makes #7 worse by completely eliminating all permanent residency visas for non-college grads. Surprisingly, the situation in #9 would be worse as well, because the RAISE Act does not increase skilled visas at all. Instead, it would completely eliminate the EB-1 extraordinary ability category and replace it with a point system so convoluted that Nobel Prize winners may do worse than certain bachelor’s degree holders, as I’ve explained before.

CSBA’s Katherine Blakeley has published a brief but highly informative analysis of the prospects for a major military spending boost.

Bottom line up front: The combination of “procedural and political hurdles” in Congress make an increase along the lines of what the Trump administration requested (approx. $54 billion) unlikely. The substantially larger increases passed out of the House and Senate Armed Services Committees (roughly $30–33 billion more than the president’s request) seem even more fanciful.

Blakeley concludes:

The wide gulfs between the political parties, and between the defense hawks and the fiscal hawks, will not be closed soon. Additionally, the full legislative calendar of the Congress before September 30, 2017, including Obamacare repeal, FY 2018 appropriations, and an impending debt ceiling debate, increase the likelihood that FY 2018 will begin with a several-month-long continuing resolution, rather than a substantial increase in defense spending.  

This aligns with what I’ve suspected all along – but Blakeley provides critical details to back up her conclusions.

For years now, we’ve heard defense hawks say that adequately funding the defense budget shouldn’t be a struggle for a country as wealthy as the United States. A mere 4 percent of GDP, for example, should be a piece of cake. And, at one level, that is absolutely correct. It should be easy. But when you dig into it, as Blakeley has done, you discover that even 3 percent is a real struggle. After all, $50 billion – a rounding error in a $19 trillion economy – threatened to bring the entire budget process to a screeching halt in late June, and may do so again.

If and when a final budget deal is hammered out, the Pentagon’s Overseas Contingency Operations (OCO) account may provide at least some of the additional billions that the HASC and the SASC want. Because OCO is exempted from the bipartisan Budget Control Act’s spending caps, additional defense dollars do not have to come at the expense of non-defense discretionary spending, as President Trump’s budget proposed.

But many billions from the Pentagon’s base budget (i.e. non-war spending) have been shoved into the OCO for years now, and the gimmick is starting to wear thin – after all, the wars in Iraq and Afghanistan peaked years ago. The voices in Congress and beyond who pushed the BCA in the first place, and who remain committed to reducing the deficit (e.g. current OMB chief Mick Mulvaney), are likely to feel that they’re being played.

The defense vs. non-defense spending debate is, and always has been, about politics, not math. And it isn’t obvious that the Pentagon will win this political battle. Given this uncertainty, we should adapt our military’s objectives to the means available to achieve them. We should prioritize U.S. security and defending vital national interests, and approach foreign adventures that don’t advance these interests with great caution. Expecting our soldiers, sailors, airmen and Marines to do the same – or more – with less money isn’t fair to them, and isn’t likely to work.

The rising opioid overdose death rate is a serious problem and deserves serious attention. Yesterday, during his working vacation, President Trump convened a group of experts to brief him on the issue and suggest further action. Some, like New Jersey Governor Chris Christie, who heads the White House Drug Addiction Task Force, are calling for him to declare a “national public health emergency.” But calling it a “national emergency” is not helpful. It only fosters an air of panic, which all too often leads to hastily conceived policy decisions that are not evidence-based and have deleterious unintended consequences.

While most states have made the opioid overdose antidote naloxone more readily available to patients and first responders, policies have mainly focused on restricting health care practitioners trying to help patients suffering from genuine pain, as well as on cutting back the legal manufacture of opioid drugs.

For example, 49 states have established Prescription Drug Monitoring Programs (PDMPs) that track the prescriptions written by providers and filled by patients. These programs are aimed at getting physicians to reduce their prescribing rates so that they are not “outliers” in comparison with their peers, and they alert prescribers to patients who have filled multiple prescriptions within a given timeframe. In some states, opioid prescriptions for most conditions are limited to a 7-day supply.

The Drug Enforcement Administration continues to seek ways to reduce the number of opioids produced legally, hoping to negatively impact the supply to the illegal market.

Meanwhile, as patients suffer needlessly, many in desperation seek relief in the illegal market where they are exposed to dangerous, often adulterated or tainted drugs, and oftentimes to heroin.

The CDC has reported that opioid prescriptions are consistently coming down, while the overdose rate keeps climbing and the drug predominantly responsible is now heroin. But the proposals we hear are more of the same.

We need a calmer, more deliberate and thoughtful reassessment of our policy towards the use of both licit and illicit drugs. Calling it a “national emergency” is not the way to do that.

Last week, the Trump Justice Department announced that it would scrutinize colleges’ consideration of applicants’ race in their admissions decisions. The announcement suggests the DOJ’s current leadership believes school policies intended to boost enrollments of some minority groups violate anti-discrimination laws and improperly reduce admissions for other groups.

Over the weekend, Washington Post columnist Christine Emba responded that “Black People Aren’t Keeping White Americans Out of College. Rich People Are.” She argues that some wealthy parents “buy” their kids’ way into selective colleges when those kids don’t have strong applications. As a result, fewer seats are available for non-wealthy kids with stronger applications.

Regardless of what one might think of the consideration of race in the application process, one should understand that Emba’s analysis is incorrect. “Rich kid admissions” help non-rich kids to attend college, and reducing the number of enrolled rich kids would reduce the enrollment of other students, whatever their demographics.

Last year, Regulation published a pair of articles debating the Bennett hypothesis, the idea that colleges raise their tuition and fees whenever government increases college aid to students. One of the articles, by William & Mary economists Robert Archibald and David Feldman, includes an insightful discussion of the economics of college admissions and price setting (i.e., scholarship decisions).

Selective colleges practice what economists call price discrimination, in which admissions and prices are set with an eye to a student’s willingness (and ability) to pay – what schools politely call “need aware” admissions. Applicants with limited admission prospects but wealthy parents may be admitted, but they will be charged a high price. These are the kids and parents who pay the staggering $50,000+ a year “list price” that selective private schools are quick to say few of their students pay. Most other enrollees, on the other hand, submitted applications that admissions officers considered more desirable, but the students had less willingness to pay, so they were awarded scholarships, i.e., large price discounts. The discounts, in turn, are financed in part by the high prices paid by the rich kids and their parents.

Archibald and Feldman explain:

In order to meet revenue and enrollment goals, almost all selective programs admit and enroll students with lower admission ratings [than their ideal applicants]. Knowing the odds of enrolling students with successively lower admission ratings, schools can eventually craft a class with the highest possible average admission rating that satisfies the tuition revenue requirement while filling the seats in the entering class. In its enrollment decisions, a school may find that many of its [mid-tier applicants] have a higher willingness to pay than many or most of the [top tier]. These lower-ranked applicants have fewer opportunities to earn merit scholarships at more selective schools, and many come from high-income families that do not qualify for need-based aid. For some schools this means that a student from the [mid tier] with a very high willingness to pay may get preference over a student from [an upper tier] with a very low willingness to pay.

If the rich kids were denied admission, fewer non-rich kids would gain admission, because schools would have less money to subsidize them. And the students who did attend would have to pay higher prices because, again, there would be less scholarship money to go around.
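The class-crafting process Archibald and Feldman describe (admit the strongest possible class that still meets the tuition-revenue requirement) can be sketched as a simple greedy trade-off. Everything below is hypothetical: the names, numbers, and the swap rule are mine, not any school's actual procedure.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    rating: float  # admissions rating (higher = more desirable)
    pays: float    # expected net tuition (willingness/ability to pay)

def craft_class(applicants, seats, revenue_target):
    """Greedy sketch: start with the highest-rated class, then swap in
    higher-paying applicants, losing as little rating as possible per
    dollar gained, until the revenue target is met."""
    pool = sorted(applicants, key=lambda a: a.rating, reverse=True)
    admitted, waitlist = pool[:seats], pool[seats:]
    while sum(a.pays for a in admitted) < revenue_target:
        # candidate swaps: drop a lower payer, add a higher payer
        swaps = [(out, inn) for out in admitted for inn in waitlist
                 if inn.pays > out.pays]
        if not swaps:
            return None  # target unreachable with this applicant pool
        # prefer the swap that costs the least rating per extra dollar
        out, inn = min(swaps, key=lambda s: (s[0].rating - s[1].rating) /
                                            (s[1].pays - s[0].pays))
        admitted.remove(out); waitlist.remove(inn)
        admitted.append(inn); waitlist.append(out)
    return admitted

# Hypothetical pool: "Drew" is the mid-tier, full-pay applicant who
# displaces the higher-rated but low-paying "Blake".
pool = [Applicant("Avery", 95, 5_000), Applicant("Blake", 90, 10_000),
        Applicant("Casey", 80, 50_000), Applicant("Drew", 60, 50_000)]
admitted = craft_class(pool, seats=3, revenue_target=70_000)
```

The point of the sketch is the one Archibald and Feldman make: once the revenue constraint binds, a lower-rated applicant with high willingness to pay gets preference over a higher-rated one with low willingness to pay.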

It may be frustrating that rich parents buy their kids' way into college. But it would be far more frustrating if many of the non-rich kids who benefit from those payments lost their way into selective schools. So, contra Emba, rich kids aren't taking seats away from non-rich kids; they're helping to put non-rich kids, black and white, through college.

So far, throughout this primer, I’ve claimed that central banks have one overarching task to perform:  their job, I said, is to “regulate the overall availability of liquid assets, and through it the general course of spending, prices, and employment, in the economies they oversee.” I’ve also shown how, prior to the recent crisis, the Fed pursued this task, sometimes competently, and sometimes ineptly, by means of “open-market operations,” meaning routine purchases (and occasional sales) of short-term Treasury securities.

But this picture isn’t complete, because it says nothing about central banks’ role as “lenders of last resort.” It overlooks, in other words, the part they play as institutions to which particular private-market firms, and banks especially, can turn for support when they find themselves short of cash, and can’t borrow it from private sources.

For many, the “lender of last resort” role of central banks is an indispensable complement to their task of regulating the overall course of spending. Unless central banks play that distinct role, it is said, financial panics will occasionally play havoc with nations’ monetary systems.

Eventually I plan to challenge this way of thinking. But first we must consider the reasoning behind it.

The Conventional Theory of Panics

The conventional view rests on the belief that fractional-reserve banking systems are inherently fragile. That’s so, the argument goes, because, unless it’s squelched at once, disquiet about any one bank or small group of banks will spread rapidly and indiscriminately to others. The tendency is only natural, since most people don’t know exactly what  their banks have been up to. For that reason, upon hearing that any bank is in trouble, people have reason to fear that their own banks may also be in hot water.

Because it's better to be safe than sorry, worried depositors will try to get their money back, and — since banks have only limited cash reserves — the sooner the better. So fear of bank failures leads to widespread bank runs. Unless besieged banks can borrow enough cash to cover panicking customers' withdrawals, the runs will ruin them. Yet the more widespread the panic, the harder it is for affected banks to secure private-market credit; if it spreads widely enough, the whole banking system can end up going belly-up.

An alert lender of last resort can avoid that catastrophic outcome, while also keeping sound banks afloat, by throwing a lifeline, consisting of a standing offer of emergency support, to any solvent bank that’s threatened by a run. Ideally, the standing offer alone should suffice to bolster depositors’ confidence, so that in practice there needn’t be all that much actual emergency central bank lending after all.[1]

It’s a Wonderful Theory

A striking feature of this common understanding is its depiction of a gossamer-like banking system, so frail that the merest whiff of trouble is enough to bring it crashing down. At the very least, the depiction suggests that any banking system lacking a trustworthy lender of last resort, or its equivalent, is bound to be periodically ravaged by financial panics.

And therein lies a problem. For however much it may appeal to one’s intuition, the conventional theory of banking panics is simply not consistent with the historical record.  Among other things, that record shows

  • that banks seldom fail simply because panicking depositors rush to get their money out. Instead, runs are almost always “information based,” with depositors rushing to get money out of banks that got themselves in hot water beforehand;
  • that individual bank runs and failures generally aren’t “contagious.”  Although trouble at one bank can lead to runs on banks that are affiliated with the first bank, or ones that are known to be among that bank’s important creditors, panic seldom if ever spreads to other banks that would otherwise be unscathed by the first bank’s failure;
  • that, while isolated bank failures, including failures of important banks, have occurred in all historical banking systems, system-wide banking crises have generally been relatively rare events, though they have been much more common in some banking systems than in others;
  • that the lack of a central bank or other lender of last resort is not a good predictor of whether a  banking system will be especially crisis-prone; and
  • that the lack of heavy-handed banking regulations is also a poor predictor of the frequency of banking crises. Instead, some heavily-regulated banking systems have endured crisis after crisis, while some of the least regulated systems have been famously crisis-free.

That the conventional theory of banking panics is not easily reconciled with historical experience may help to explain why its proponents often illustrate it, as Ben Bernanke did in the first of a series of lectures he gave on the subprime crisis, not by instancing some real-world bank run, but by referring to the run on the Bailey Bros. Building & Loan in “It’s a Wonderful Life”!  In the movie, although George Bailey’s bank is fundamentally sound, it suffers a run when word gets out that Bailey’s absent-minded Uncle Billy mislaid $8000 of the otherwise solvent bank’s cash.

The Richmond Fed’s Tim Sablik likewise treats Frank Capra’s Christmas-movie bank run as exhibit A in his account of what transpired during the 2007-8 financial crisis:

George Bailey is en route to his honeymoon when he sees a crowd gathered outside his family business …. He finds that the people are depositors looking to pull their money out because they fear that the Building and Loan might fail before they get the chance. His bank is in the midst of a run.

Bailey tries, unsuccessfully, to explain to the members of the crowd that their deposits aren’t all sitting in a vault at the bank — they have been loaned out to other individuals and businesses in town. If they are just patient, they will get their money back in time. In financial terms, he’s telling them that the Building and Loan is solvent but temporarily illiquid. The crowd is not convinced, however, and Bailey ends up using the money he had saved for his honeymoon to supplement the Building and Loan’s cash holdings and meet depositor demand…

As the movie hints, the liquidity risk that banks face arises, at least to some extent, from the services they provide. At their core, banks serve as intermediaries between savers and borrowers. Banks take on short-maturity, liquid liabilities like deposits to make loans, which have a longer maturity and are less liquid. This maturity and liquidity transformation allows banks to take advantage of the interest rate spread between their short-term liabilities and their long-term assets to earn a profit. But it means banks cannot quickly convert their assets into something liquid like cash to meet a sudden increase in demand on their liability side. Banks typically hold some cash in reserve in order to meet small fluctuations in demand, but not enough to fulfill all obligations at once.

There you have it: banks by their very nature are vulnerable to runs. Hence banking systems are inherently vulnerable to crises. Hence crises like that of 2007-8. Hence the need for a lender of last resort (or something equivalent, like government deposit insurance) to keep perfectly sound banks from being gutted by panic-stricken clients.

But is that really all there is to it? Were the runs of 2007-8 triggered by nothing more than some minor banking peccadilloes, if not by mere unfounded fears? Not by any stretch! For starters, the most destructive runs that took place during the 2007-8 crisis were runs, not on ordinary (commercial) banks or thrifts (like George Bailey's outfit), but on non-bank financial intermediaries, a.k.a. "shadow banks," including big investment banks such as Bear Stearns and Lehman Brothers, and money-market mutual funds, such as the Reserve Primary Fund.

Far from having been random or inspired by sheer panic, all of these runs were clearly information based: Bear and Lehman were both highly leveraged and heavily exposed to subprime mortgage losses when the market for such mortgages collapsed, while Reserve Primary — the money market fund that suffered most in the crisis — was heavily invested in Lehman Brothers' commercial paper.

As for genuine bank runs, there were just five of them in all, and every one was triggered by well-founded news that the banks involved — Countrywide, IndyMac, Washington Mutual, Wachovia, and Citigroup — had suffered heavy losses in connection with risky mortgage lending. Indeed, with the possible exception of Wachovia, the banks were almost certainly insolvent when the runs on them began. To suggest that these banks were as innocent of improprieties, and as little deserving of failure, as the fictitious Bailey Bros. Building and Loan, is worse than misleading: it’s grotesque.

Not having been random, the runs of 2007-8 also weren't contagious. The short-term funds siphoned from failing investment banks and money market funds went elsewhere. Relatively safe "Treasuries only" money market funds, for example, gained at riskier funds' expense. The same thing happened in banking: for every bank that was perceived to be in trouble, many others were understood to be sound. Instead of being removed, as paper currency, from the banking system, deposits migrated from weaker to stronger banks, such as JP Morgan, Wells Fargo, and BB&T. While a few bad apples tried to fend off runs, in part by seeking public support, other banks struggled to cope with unexpected cash inflows.

Yet because the runs were front-page news, and the corresponding redeposits weren’t, it was easy for many to believe that a general panic had taken hold. That sound and unsound banks alike were forced to accept TARP bailout money only reinforced this wrong impression. Evidently we have traveled far from the quaint hamlet of Bedford Falls, where George Bailey’s bank nearly went belly-up.

Nor were we ever really there. During the Great Depression, for example, most of the banks that failed, including those that faced runs, were rural banks that suffered heavy losses as fallen crop prices and land values caused farmers to default on their loans. Few if any unquestionably solvent banks failed, and bank run contagions, with panic spreading from unsound to sound banks, were far less common than is often supposed. Even the widespread cash withdrawals of February and early March, 1933, which led FDR to proclaim a national bank holiday, weren’t proof of any general loss of confidence in banks. Instead, they reflected growing fears that FDR planned to devalue the dollar upon taking office. Those fears in turn led bank depositors to cash in their deposits for Federal Reserve notes, in order to convert those notes into gold. What looked like distrust of commercial banks’ ability to keep their promises was really distrust of the U.S. government’s ability to keep its promises.

Regulate, Have Crisis, Repeat

If bank runs are mainly a threat to under-diversified or badly-managed banks, it's no less the case that banking crises, in which relatively large numbers of banks all find themselves in hot water at the same time, are mainly a problem in badly-regulated banking systems. To find proof of this claim, one only has to compare the records, both recent and historical, of different banking systems. Do that and you'll see that, while some systems have been especially vulnerable to crises, others have been relatively crisis free. Any theory of banking crises that can't account for these varying experiences is one that ought not to be trusted.

But just how can one account for the different experiences? The conventional theory of panics implies that the more crisis-prone systems must have lacked a lender of last resort or deposit insurance (which also serves to discourage runs) or both. It may also be tempting to assume that they lacked substantial restrictions upon the activities banks could engage in, the interest rates they could charge and offer, the places where they could do business, and other aspects of the banking business.

Wrong; and wrong again. Central banks, deposit insurance, and relatively heavy-handed prudential regulations aren’t the things that distinguished history’s relatively robust banking systems from their crisis-prone counterparts. On the contrary: central bank misconduct, the perverse incentives created by both explicit and implicit deposit guarantees, and misguided restrictions on banking activities including barriers to branch banking, portfolio restrictions, and mandated business structures, have been among the more important causes of banking-system instability. Some of the most famously stable banking systems of the past, on the other hand, lacked either central banks or deposit insurance, and placed relatively few limits on what banks were allowed to do.

Northern Exposures

It would take a treatise to review the whole, gruesome history of financial crises for the sake of revealing how unnecessary and ill-considered, if not corrupt, regulations of all sorts contributed to  every one of them.[2] For our little primer we must instead settle for four especially revealing case studies: those of the U.S. and Canada on the one hand and of England and Scotland on the other. The banking systems of Scotland between 1772 and 1845 and Canada from 1870 to 1914 and again from 1919  until 1935 were remarkably free of both crises and government interference. In comparison, the neighboring banking systems of England and the United States were both more heavily regulated and more frequently stricken by crises.

To return to the Great Depression: in the U.S. between 1930 and 1933, some 9,000 mostly rural banks failed. That impressive record of failure could never have occurred had it not been for laws that prevented almost all U.S. banks from opening branch offices, either in their home states or elsewhere. The result was a tremendous number of mostly tiny and severely under-diversified banks.

Canada’s banks, in contrast, were allowed to establish nationwide branch networks. Consequently, not a single Canadian bank failed during the 1930s, despite the fact that Canada had no central bank until 1935, and no deposit insurance until 1967, and also despite the fact that Canada’s depression was especially severe. The few U.S. states that allowed branch banking also had significantly lower bank failure rates.

Comparing the performance of the Canadian and U.S. banking  systems between 1870 and 1914 tells a similar story. Although the U.S. didn’t yet have a central bank, and so was at least free of that particular source of financial instability (yes, you read that last clause correctly), thanks to other kinds of government intervention in banking, and especially to barriers to branch banking and to banks’ ability to issue circulating notes put in place during the Civil War, the U.S. system was shaken by one financial crisis after another. Yet during the same period Canada, which also had no central bank, but which didn’t subject its commercial banks to such restrictions,  avoided serious banking crises.

Although naturally different in its details, the Scotland-vs.-England story is remarkably similar in its broadest brushstrokes. Scotland’s banks, like Canada’s, were generally left alone, while in England privileges were heaped upon the Bank of England, leaving other banks enfeebled and at its mercy. In particular, between 1709 and 1826, the so-called “six partner rule” allowed only small partnerships to issue banknotes, effectively granting the Bank of England a monopoly of public or “joint stock” banking. In an 1826 Parliamentary speech Robert Jenkinson, the 2nd Lord Liverpool, described the system as one having “not one recommendation to stand on.” It was, he continued, a system

of the fullest liberty as to what was rotten and bad; but of the most complete restriction, as to all that was good. By it, a cobbler or a cheesemonger, without any proof of his ability to meet them, might issue his notes, unrestricted by any check whatever; while, on the other hand, more than six persons, however respectable, were not permitted to become partners in a bank, with whose notes the whole business of a country might be transacted. Altogether, this system was one so absurd, both in theory and practice, that it would not appear to deserve the slightest support, if it was attentively considered, even for a single moment.

Liverpool made these remarks in the wake of the financial panic that struck Great Britain in 1825, putting roughly 10 percent of the note-issuing cobblers and cheesemongers of England and Wales out of business. Yet in Scotland, where the six-partner rule didn’t apply, that same panic caused nary a ripple.

Although Scotland and Canada offer the most well-known instances of relatively unregulated and stable banking systems, other free banking experiences also lend at least some support to the thesis that those governments that governed their banks least often governed them pretty darn well.

Bagehot Bowdlerized

The aforementioned Panic of 1825 was one of the first instances, if not the first instance, in which the Bank of England served as a “lender of last resort,” albeit too late to avert the crisis. It was that intervention by the Bank, as well as the lending it did during the Overend-Gurney crisis of 1866, that inspired Walter Bagehot to formulate, in his 1873 book Lombard Street, his now-famous “classical” rule of last-resort lending, to wit: that when faced with a crisis, the Bank of England should lend freely, while taking care to charge a relatively high rate for its loans, and to secure them by pledging “good banking securities.”

Nowadays central bankers like to credit Bagehot for the modern understanding that every nation, or perhaps every group of nations, must have a central bank that serves as a lender of last resort to rescue it from crises. Were that actually Bagehot’s view, he might be grateful for the recognition if only he could hear it.  In fact he’s more likely to be spinning in his grave.

How come? Because far from having been a fan of the Bank of England, or (by implication) of central banks more generally, Bagehot, like Lord Liverpool, considered the Bank of England’s monopoly privileges the fundamental cause of British financial instability. Contrasting England’s “one reserve” system, which was a byproduct of the Bank of England’s privileged status, with a “natural,” “many-reserve” system, like the Scottish system (especially before the Bank Act of 1845 thoughtlessly placed English-style limits on Scottish banks’ freedom to issue notes), Bagehot unequivocally preferred the latter. That is, he preferred a system in which no bank was so exalted as to be capable of serving as a lender of last resort, because he was quite certain that such a system had no need for a lender of last resort!

Why, then, did Bagehot bother to offer his famous formula for last-resort lending? Simply because he saw no hope, in 1873, of having the Bank of England stripped of its destructive privileges. "I know it will be said," he wrote in the concluding passages of Lombard Street,

that in this work I have pointed out a deep malady, and only suggested a superficial remedy. I have tediously insisted that the natural system of banking is that of many banks keeping their own cash reserve, with the penalty of failure before them if they neglect it. I have shown that our system is that of a single bank keeping the whole reserve under no effectual penalty of failure. And yet I propose to retain that system, and only attempt to mend and palliate it.

I can only reply that I propose to retain this system because I am quite sure that it is of no manner of use proposing to alter it… . You might as well, or better, try to alter the English monarchy and substitute a republic.

Perhaps today’s Bagehot-loving central bankers didn’t read those last pages. Or perhaps they read them, but preferred to forget them.

The Flexible Open-Market Alternative

If Great Britain was stuck with the Bank of England by 1873, as Bagehot believed, then we are no less stuck with the Fed, at least for the foreseeable future. And unless we can tame it properly, we may also be stuck with its "unnatural" capacity to destabilize the U.S. financial system, in part by being all-too-willing to rescue banks and other financial firms that have behaved recklessly, even to the point of becoming insolvent.

Consequently, getting the Fed to follow Bagehot's classical last-resort lending rules may, for the time being, be our best hope for securing financial stability. But doing that is a lot easier said than done. For despite all the lip-service central bankers pay to Bagehot's rules, they tend to honor those rules more in the breach than in the observance. One need only consider the relatively recent history of the Fed's last-resort lending operations, especially before 2003 (when it finally began setting a "penalty" discount rate) and during the subprime crisis, to uncover one flagrant violation of Bagehot's basic principles after another.

There is, I believe, a better way to make the Fed abide by Bagehot's rules for last-resort lending. Paradoxically, it would do away altogether with conventional central bank lending to troubled banks, and also with the conventional distinction between a central bank's monetary policy operations and its emergency lending. Instead, it would make emergency lending an incidental and automatic extension of the Fed's routine monetary policy operations, and specifically of what I call "flexible" open-market operations, or "Flexible OMOs," for short.

The basic idea is simple. Under the Fed's conventional pre-crisis set-up, only a score or so of so-called "primary dealers" took direct part in its routine open-market operations aimed at regulating the total supply of money and credit in the economy. Also, those operations were — again, traditionally — limited to purchases and sales of short-term U.S. Treasury securities. Consequently, access to the Fed's routine liquidity auctions was very strictly limited. A bank in need of last-resort liquidity that was not a primary dealer, or even a primary dealer lacking short-term Treasury securities, would have to go elsewhere, meaning either to some private-market lender or to the Fed's discount window, where to borrow was to risk being "stigmatized" as a bank that might well be in trouble.

"Flexible" open-market operations would instead allow any bank that might qualify for a Fed discount-window loan to take part, along with non-bank primary dealers, in its open-market credit auctions. It would also allow the Fed's expanded set of counterparties to bid for credit at those auctions using not just Treasury securities but any of the marketable securities that presently qualify as collateral for discount-window loans, with the same margins or "haircuts" applied to relatively risky collateral as would apply were the securities pledged for discount-window loans. A "product-mix" auction, such as the one the Bank of England has been using in its "Indexed Long-Term Repo Operations," would allow multiple bids made using different kinds of securities to be dealt with efficiently, so that credit goes to the parties willing to pay the most for it.
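The auction mechanics can be illustrated with a deliberately simplified sketch: a single uniform-price credit auction in which each counterparty's borrowing is capped by the haircut-adjusted value of the collateral it posts. This is a toy, not the Bank of England's actual product-mix design; the names, rates, and haircut values below are invented for illustration.

```python
def run_auction(total_funds, bids, haircuts):
    """Toy uniform-price liquidity auction. Each bid is a tuple
    (bidder, rate, amount, collateral_value, collateral_type). A bidder
    can borrow at most the haircut-adjusted value of its collateral;
    funds go to the highest bidders, and all pay the stop-out rate
    (the rate of the last bid filled)."""
    allocations, remaining, stop_out = {}, total_funds, None
    # highest bid rates get funds first
    for bidder, rate, amount, coll_value, ctype in sorted(
            bids, key=lambda b: b[1], reverse=True):
        if remaining <= 0:
            break
        # cap the bid by what the posted collateral supports
        cap = min(amount, coll_value * (1 - haircuts[ctype]))
        filled = min(cap, remaining)
        if filled > 0:
            allocations[bidder] = allocations.get(bidder, 0) + filled
            remaining -= filled
            stop_out = rate  # uniform price: last accepted bid's rate
    return allocations, stop_out

# Illustrative haircuts: safer collateral keeps more of its face value.
haircuts = {"treasury": 0.02, "mbs": 0.20}
bids = [("AlphaBank", 2.5, 60, 100, "treasury"),
        ("BetaFund", 2.0, 80, 100, "mbs"),
        ("GammaBank", 1.5, 50, 100, "treasury")]
allocations, stop_out = run_auction(100, bids, haircuts)
```

Note how the mechanism needs no separate "emergency" channel: an illiquid but well-collateralized bidder simply outbids the rest, and the auction routes the liquidity to it automatically.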

So, instead of having a discount window for emergency loans, not to mention various ad-hoc lending programs, in addition to routine liquidity auctions for the conduct of "ordinary" monetary policy, the Fed would supply liquid funds through routine auctions only, while making the auctions sufficiently "flexible" to allow any illiquid financial institution with "good banking securities" to bid successfully for such funds. Thanks to this set-up, the Fed would no longer have to concern itself with emergency lending as such. Its job would simply be to get the total amount of liquidity right, while leaving it to the competitive auction process to put that liquidity where it commands the highest value. In other words, it really would have no other duty save that of regulating "the overall availability of liquid assets, and through it the general course of spending, prices, and employment."

[1] Deposit insurance can serve a function similar to that of having an alert lender of last resort. Most banking systems today rely on a combination of insurance and last-resort lending.

Diamond and Dybvig's famous 1983 model — one of the most influential works among modern economic writings — is in essence a clever, formal presentation of the conventional wisdom, in which deposit insurance is treated as a solution to the problem of banking panics. For critical appraisals of Diamond and Dybvig see Kevin Dowd, "Models of Banking Instability," and chapter 6 of Lawrence White's The Theory of Monetary Institutions.

[2] Although that badly-needed treatise has yet to be written, considerable chunks of the relevant record are covered in Charles W. Calomiris and Stephen Haber's excellent 2014 work, Fragile by Design: The Political Origins of Banking Crises and Scarce Credit. I offer a much briefer survey of relevant evidence, including evidence of the harmful consequences of governments' involvement in the regulation and monopolization of paper currency, in chapter 3 of Money: Free and Unfree. I express my (mostly minor) differences with Calomiris and Haber here.

[Cross-posted from Alt-M.org]

Illegal immigration is at its lowest point since the Great Depression. President Trump has claimed success, but nearly all of the decrease occurred under prior administrations. The president's campaign rhetoric does appear to have caused a small increase in illegal immigration before he assumed office: because immigrants moved up their arrival dates by a few months, the typical spring surge in illegal entries failed to materialize. But these recent changes are small in the big picture: 98.2 percent of the reduction in illegal immigration from 1986 to 2017 occurred before Trump assumed office.

Naturally, illegal border crossings are difficult to measure. The only consistently reported data are the number of immigrants that Border Patrol catches attempting to cross. Border Patrol has concluded that the number of people who make it across is proportional to the number of people it catches. All else being equal, more apprehensions mean more total crossers. Of course, the agency could catch more people because it has deployed more agents. But we can control for the level of enforcement by focusing on the number of people each agent catches.

Figure 1 shows the number of people each Border Patrol agent took into custody in each of the last 50 years. As it shows, illegal immigration peaked in the mid-1980s. From 1977 to 1986, each border agent apprehended almost 400 people per year. After the 1986 amnesty legislation that authorized new agents and walls, the flow fell at a fairly steady rate. Following the burst of the housing bubble, the 2009 recession, and the concomitant border buildup, the flow has essentially flatlined. In 2016, each border agent nabbed fewer than 17 people over the course of the entire year. That's roughly one apprehension every three weeks of work. The "crisis" is over and has been for a decade.
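The enforcement control described above is just a ratio, but it is worth making explicit. The figures below are illustrative magnitudes chosen to echo the eras in Figure 1; they are not the official Border Patrol series.

```python
def apprehensions_per_agent(total_apprehensions, agents):
    """Normalize apprehensions by enforcement effort: more agents catch
    more people even when the underlying flow of crossers is unchanged,
    so the per-agent rate is a better proxy for the flow itself."""
    return total_apprehensions / agents

# Illustrative magnitudes only, echoing the two eras in Figure 1.
peak_era = apprehensions_per_agent(1_600_000, 4_000)   # mid-1980s scale
recent = apprehensions_per_agent(330_000, 19_800)      # 2016 scale
```

On these stylized numbers the per-agent rate falls from roughly 400 per year to under 17, which is the collapse the figure depicts.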

Figure 1: Apprehensions Per Border Patrol Agent, FY 1957-2017

Sources: Apprehensions FY 1957-2016: Border Patrol; Apprehensions FY 2017 (projected from October-June data): Border Patrol; Border Patrol Staffing: Border Patrol, INS Statistical Yearbooks, and INS Annual Reports

Following Trump's election, the flow did fall further, but this was mostly a continuation of the existing trend. Before Trump assumed office, there was a slight departure from the trend (represented as a dotted line in Figure 2), but this June's apprehension figures are roughly where we would expect based on the last decade and a half of data. I interpret this to mean that we saw a Trump effect before he assumed office, when some additional asylum seekers and immigrants came to the border a few months ahead of schedule out of fear of the changes he might bring. But the effect dissipated after he assumed office.

Figure 2: Monthly Apprehensions Per Border Agent and Exponential Trendline, October 1999 to June 2017

Sources: Apprehensions FY 2000-16: Border Patrol; Apprehensions FY 2017: Border Patrol; Border Patrol Staffing: Border Patrol

Zooming in on the Obama and Trump administration months only reinforces the interpretation of a pre-election Trump effect (Figure 3). In every pre-Trump year, illegal flows spiked during the month of May or earlier (a pattern that goes back to at least 2000). Donald Trump launched his campaign in June 2015. Instead of waiting until the spring, immigrants started coming to the border during the winter months for the first time, peaking in December. In 2016, there was the typical spike in the spring, but after Trump won the Republican nomination, apprehensions rose quickly, peaking in November at well above the spring numbers.

Figure 3: Monthly Apprehensions Per Agent and Exponential Trendline, January 2009 to June 2017

Sources: See Figure 2

There were 90,000 more apprehensions from August 2016 to January 2017 than in the pre-Trump window of August 2014 to January 2015. Assuming that this is a sign of a Trump effect—that immigrants moved their travel plans up roughly six months earlier than they otherwise would have—then, absent the shift, each month from February to June under the Trump administration would have seen 15,000 or so more arrivals, and each month from August to January 15,000 fewer. This would place the first months of the new president's tenure right about on the trend line from the Obama administration (orange line in Figure 3).
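The pull-forward arithmetic is easy to check with the article's own round numbers:

```python
# The article's round figures: 90,000 "extra" apprehensions appeared in
# the August 2016 - January 2017 window versus August 2014 - January 2015.
extra_apprehensions = 90_000
months_in_window = 6

# If those crossers simply moved their trips forward roughly six months,
# the implied shift per month is:
shift_per_month = extra_apprehensions / months_in_window
```

That per-month figure is what gets subtracted from the pre-election months and added back to the February-June 2017 months when reconstructing the counterfactual trend line.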

In this case, Trump is benefiting not so much from his current rhetoric or policies, but from his rhetoric on the campaign trail. Immigrants chose to come earlier than they would have, and so the normal spring rush failed to materialize. If this is the case, then it’s possible that apprehensions will return to the normal trend next year. Of course, the administration’s new policies may have started to make an impact by that point. Only time will tell.

Even if the normal trend returns, large scale illegal immigration is over. Whoever deserves credit, the job is done. Congress should move on and start talking about real issues.

“Europe’s Taxes Aren’t as Progressive as Its Leaders Like to Think,” wrote the Wall Street Journal’s Joseph C. Sternberg yesterday. Citing tax expert Stefan Bach from the German Institute for Economic Research, Sternberg shows how Germany’s tax system is only mildly progressive overall. Sternberg therefore states that politicians need to “tackle” indirect taxation if they want to have a major impact on the economy.

Now, Sternberg is undoubtedly right that broad-based tax systems which incorporate social contributions and VATs tend to be less progressive than those which rely more heavily on progressive income taxes. That is, if we narrowly look at the effects of taxes alone, rather than government spending. But does it make any economic sense to look at a tax system in isolation?

Good economic theory would suggest that to the extent we care about progressivity and redistribution, revenues should be collected in the least distortionary way possible, with redistribution done via cash transfers. So judging the desirability of a tax system by its degree of progressivity is not a good starting point. From an economic perspective, the assessment should be how distortionary different taxation systems across the world are. European tax systems have huge problems in this regard, but their progressivity or otherwise should not be a major consideration.

The second, and more important, related point is that an assessment of progressivity should not separate taxes from transfers. To judge progressivity, one must look at the position of households across the income spectrum after both, not least because one person’s taxes are (now or later) another person’s cash transfer.

I cannot find figures to do this for Germany, but am familiar with some headline UK and US stats.

Every year, when the UK Office for National Statistics (ONS) releases its publication The effects of taxes and benefits on household income, a similar lament to Sternberg’s arises. Calculating total taxes paid as a proportion of gross income (market income plus government cash transfers), critics of the tax system assert that the poorest quintile pay 35.0% of their gross income in taxes, on average, which is almost identical to the 34.1% average for the top quintile (2015/16 figures). Like Sternberg, many conclude that the tax system is not progressive enough.

Yet a few seconds’ thought about what these figures show highlights how misleading this is. Gross income (the denominator in the calculation) includes cash transfers, which are transfers from one group to another. That a household uses money redistributed to it to spend, in turn paying what the ONS describes as indirect taxes (things like VAT, beer duty, tobacco duty, the TV license and fuel duty), can hardly be described as “regressive”. This is akin to taking from Peter to pay Paul and then saying that, because Paul spends a large proportion of this money, the tax system is unfair.
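The Peter-and-Paul point can be made concrete with a quick sketch; the household below is entirely hypothetical, not ONS data:

```python
# A made-up household that receives large cash transfers and pays
# indirect taxes out of them. Measured against gross income, it looks
# heavily taxed; measured on net, it is a large beneficiary.
market_income = 5_000     # earned income
cash_transfers = 10_000   # benefits received from other taxpayers
indirect_taxes = 5_000    # VAT, duties, etc. paid when the money is spent

gross_income = market_income + cash_transfers
tax_share_of_gross = indirect_taxes / gross_income
net_position = cash_transfers - indirect_taxes

print(f"{tax_share_of_gross:.0%}")  # 33% -- "pays a third of income in tax"
print(net_position)                 # 5000 -- yet gains 5,000 net overall
```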

Put simply, benefits don’t fall like manna from heaven. One person’s taxes are someone else’s cash transfers. That the tax system is not ultra-progressive then is not what matters – it’s what the overall tax AND transfers system does that counts.

Thought of in this way, we can calculate effective tax rates, which measure the net contribution of the average household in each income quintile as a proportion of its market income. This is the key question: how much of the income that you earn is being taxed away and given to others? That is, how progressive is the taxpayer-funded welfare state?

Table 1 shows that the poorest fifth of households in the UK on average actually face an effective tax rate (all taxes minus cash benefits, divided by earned income) of -34.1 per cent, while the richest fifth face an average rate of 31.8 per cent. This means that, for every £1 earned in market income, the average household in the poorest quintile is transferred another 34.1p in cash benefits, while the average household in the top quintile pays 31.8p in tax. The tax and cash transfers system, in other words, is very progressive.
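The effective-tax-rate measure used here is easy to state as a formula; in the sketch below, the household’s tax and benefit amounts are hypothetical, chosen only to reproduce the -34.1 per cent figure for the poorest quintile:

```python
def effective_tax_rate(taxes: float, cash_benefits: float,
                       market_income: float) -> float:
    """Net contribution as a share of earned (market) income:
    (all taxes paid - cash benefits received) / market income."""
    return (taxes - cash_benefits) / market_income

# Hypothetical poor household: 10,000 earned, 2,000 paid in taxes,
# 5,410 received in cash benefits -> a net transfer of 34.1p per pound earned.
print(effective_tax_rate(taxes=2_000, cash_benefits=5_410,
                         market_income=10_000))  # -0.341
```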

But even this excludes so-called “benefits in kind,” which the UK state provides lots of, and which disproportionately benefit the poor. Once benefits in kind (education, healthcare, and subsidies for housing, rail and buses) are considered, the effective tax rates for the average household in the poorest and richest fifths become -140.2 per cent and 25.3 per cent respectively (see Table 2).

Now, the UK figures are not completely comprehensive. Unlike the US figures below, they do not seek to assign the impact of corporate income taxes on workers across the income distribution. They also exclude the cash-value benefits of other public goods, such as defense, law and order, and the courts which uphold property rights, where there is an argument that the rich benefit disproportionately (though development work stresses the importance of property rights for the poor too). But overall, it’s clear that welfare states are hugely redistributive.

Can similar figures be found for the US? The best comparators I can find come from the CBO’s June 2016 report The Distribution of Household Income and Federal Taxes, 2013. There are three important differences in methodology from the UK figures which mean they are likely to look more progressive on the surface: the quintiles are assigned by market income, rather than disposable income; the transfers include transfers from state and local programs but only federal taxes (and sales taxes tend to be more regressive); and the figures presented include in-kind assistance, meaning they are closer in methodology to Table 2 than Table 1. But the results show the same trend.

The average household in the bottom quintile receives around $2.90 in cash transfers or in-kind benefits for every $1 earned, while the average household in the top quintile faces an effective tax rate of 24.8%. Interestingly, as Greg Mankiw has noted before, middle-income Americans have shifted since 1979 from being, on average, net contributors to net beneficiaries under this measure.

Of course, averages hide a lot of information. Government programs redistribute heavily to those with children and to old people. But if we are going to assess crude measures of progressivity by looking across the income spectrum, it makes sense to include transfers too.

In conclusion, there are many problems with tax systems here and in Europe. But aiming to make them more progressive should not be an underlying economic aim. To the extent that redistribution is considered a valid goal, it should be undertaken through spending, and the stats above show that countries such as the US and UK are already hugely redistributive or “progressive” in this regard.

AEI scholar Abby McCloskey’s recent column on paid family leave argues that just “12 percent of private-sector employees have access to paid family leave from their employer.” For McCloskey, this is one of many reasons that the federal government should create a paid family leave entitlement program.

The 12 percent figure surely sounds appallingly low. In fact, it is so low that it seems suspect: it doesn’t match well with real-life experience or casual observation. Nor does it match data from nationally representative surveys. For example, in one national study 63 percent of employed mothers said their employer provided paid maternity leave benefits, a difference of roughly 50 percentage points from the most recent BLS figure.[1]

So what gives? It seems many U.S. women take paid parental leave, but the Bureau of Labor Statistics (BLS) doesn’t count it. The BLS requires paid family leave be provided “in addition to any sick leave, vacation, personal leave, or short-term disability leave that is available to the employee.” This means that when employees take paid leave for family purposes, it doesn’t count if it could have been used for another purpose. 

In the real world, parents with conventional benefit programs often save and pool paid personal leave, vacation, sick leave, and short-term disability in the event of a birth or adoption. On average, employees with five years of service are provided 22 days of sick and vacation leave. A majority of private-sector employees can carry over unused sick days from previous years, which adds to the tally. Meanwhile, the median short-term disability benefit is 26 weeks for private-sector workers; six to eight weeks can be used toward paid maternity leave.

These benefits do exactly the same thing as paid family leave. As Human Resources Inc. puts it, “family-leave is usually created from a variety of benefits that include sick leave, vacation, holiday time, personal days, short-term disability…” And although not all employers, especially small businesses, have official paid family leave policies, “Many employers are flexible and can work out an agreement with you.” Benefits that aren’t spelled out in the company manual are surely undercounted by BLS figures, too.

Paid leave doesn’t always fit neatly into the BLS’s survey categories for other reasons. Unconventional benefit packages, like consolidated paid leave (or PTO banks), allow employees to use paid leave for any reason, family or otherwise. Consolidated paid leave is on the rise; the BLS reports that 35 percent of private-sector employees receive it. In some industries, more than half of employees receive this flexible benefit.

Unlimited paid leave plans are also growing in certain industries. These plans allow employees to take as much leave as they want, whenever they want, assuming they meet performance expectations. But because unlimited and consolidated paid leave plans don’t provide paid family leave separately, neither counts.

As a result, BLS figures seem to grossly underestimate paid family leave availability. BLS methods penalize employers that provide flexible benefits, by pretending their benefits don’t exist.

This helps to explain why BLS figures differ dramatically from other surveys. In spite of that, don’t expect government-sponsored paid leave advocates to update their figures any time soon.

[1] Note that the Listening to Mothers III study focused on employed mothers; BLS focuses on private-sector employees.

“Is Amazon getting too big?” asks Washington Post columnist Steven Pearlstein, in a 4,000-word column seeking justification for the Democratic Party’s quixotic pledge to “break up big companies” in its recent “Better Deal.” “Just this week,” notes Pearlstein, “Democrats cited stepped-up antitrust enforcement as a centerpiece of their plan to deliver ‘a better deal’ for Americans should they regain control of Congress and the White House.” He concludes by saying “it sometimes takes a little public power to keep private power in check.” But maybe it takes a lot of public power to write antitrust lawyers some big checks.

Politics aside, the question “Is Amazon getting too big?” should have nothing to do with antitrust, which is supposedly about preventing monopolies from charging high prices. Surely no sane person would dare accuse Amazon of monopoly or high prices.

Even Mr. Pearlstein has doubts: “Is Amazon so successful, is it getting so big, that it poses a threat to consumers or competition? By current antitrust standards, certainly not… Here is a company, after all, known for disrupting and turbocharging competition in every market it enters, lowering prices and forcing rivals to match the relentless efficiency of its operations and the quality of its service. That is, after all, usually how firms come to dominate an industry…”

That should have ended this story “by current antitrust standards.” But if we simply lower those standards, then “Better Deal” antitrust shakedown threats could become far more numerous, unpredictable, and lucrative for politically generous antitrust law firms.

Among the 19 largest law firm contributions to political parties in 2015-2016, according to Open Secrets, all but one (Jones Day) went overwhelmingly to Democrats. More to the point, all of the law firms contributing most generously to the Democratic Party are specialists in antitrust and mergers: they appear on the U.S. News list of top antitrust attorneys. And the Trial Lawyers Association (now disguised as the “American Association for Justice”) contributed over $2.1 million to Democrats, over $1 million to liberal organizations, and $67,500 to Republicans.

Antitrust law is a very big, profitable and concentrated industry. Antitrust lawyers have a special interest in greatly expanding the reach and grip of antitrust law. They were surely delighted by Pearlstein’s prominent endorsement of a law journal paper by Lina Khan, a 28-year-old student and fellow at the “liberal-leaning” think tank New America.

Ms. Khan believes it self-evident that low operating profits must prove Amazon is “choosing to price below cost.” That’s uninformed accounting. What low profits actually show is that Amazon has been plowing its rapidly expanding cash flow back into capital expenditures, such as cloud computing, a movie studio, and unique consumer electronics (the Kindle and Echo).

If Amazon is not a monopolist, Khan asks, why are financial markets pricing its stock as if it is going to be? That’s uninformed finance theory. Investors rightly see Amazon’s current and future growth of cash flow (the result of expensive investments) as the source of future dividends and/or capital gains (more net assets per share).

Khan believes antitrust has been unduly constrained by “The Chicago School approach to antitrust, which gained mainstream prominence and credibility in the 1970s and 1980s.” She thinks Chicago’s “undue focus on consumer welfare is misguided. It betrays legislative history, which reveals that Congress passed antitrust laws to promote a host of political economic ends.”

The trouble with grounding policy on legislative history is that Congress passed many laws to promote the special interests of producers at the expense of consumers—including the Interstate Commerce Commission (1887), the National Industrial Recovery Act (1933), the Civil Aeronautics Board (1938), and numerous tariffs and regulations designed to benefit interest groups and the politicians who represent them.

The well-named chapter “Antitrust Pork Barrel” in The Causes and Consequences of Antitrust quotes Judge Richard Posner noting that antitrust investigations are usually initiated “at the behest of corporations, trade associations, and trade unions whose motivation is at best to shift the cost of their private litigation to the taxpayer and at worst to harass competitors.”

To grasp how and why antitrust is easily abused as a rent-seeking device, it helps to relearn the wisdom of Frederic Bastiat: “The seller wants the goods on the market to be scarce, in short supply, and expensive. The [buyer] wants them abundant, in plentiful supply, and cheap. Our laws should at least be neutral, not take the side of the seller against the buyer, of the producer against the consumer, of high prices against low prices, of scarcity against abundance [emphasis added].”

Contrary to Bastiat, however, Ms. Khan claims to have found “growing evidence” showing “that the consumer welfare frame has led to higher prices and few efficiencies.”

“Growing evidence” turns out to mean three papers, only one of which seems to say what she says it does (and only about mergers, not concentration): “Research by John Kwoka of Northeastern University,” Pearlstein writes, “has found that three-quarters of mergers have resulted in [were followed by?] price increases without any offsetting benefits. Kwoka cited industries such as airlines, hotels, car rentals, cable television and eyeglasses.”

If you believe that, mergers left consumers overcharged by the Marriott hotel and Enterprise Rent-A-Car ‘monopolies.’ Even if that sounds plausible, Kwoka’s evidence does not. Two-thirds of his sample covers just three industries (petroleum, airlines, and professional journal publishing), the price estimates are unweighted and reported without standard errors, and several of the mergers date back to 1976-82. As Federal Trade Commission economists Vita and Osinski charitably noted, “Kwoka has drawn inferences and reached conclusions … that are unjustified by his data and his methods.”

Pearlstein turns to another paper in Khan’s trio: “There is little debate that this cramped [Chicago] view of antitrust law has resulted in an economy where two-thirds of all industries are more concentrated than they were 20 years ago, according to a study by President Barack Obama’s Council of Economic Advisers, and many are dominated by three or four firms.”

Nothing in Pearlstein’s statement is even approximately correct. The Obama CEA looked at the shares of revenue earned by two different lists of Top 50 firms (not “three or four”) in just 13 industries (not “all industries”) in 1997 and 2012. Pearlstein’s “two-thirds of all” really means 10 out of 13, though the U.S. has considerably more than 13 industries. In transportation, retailing, finance, real estate, utilities, and education, for example, the Top 50 had only a slightly larger share of sales in 2012 than in 1997. So what?

Should we fear monopoly price gouging simply because 50 firms account for a larger share of the nation’s very large number of retail stores, real estate brokers, or finance companies? Of course not. “An increase in revenue concentration at the national level,” the Obama CEA concedes, “is neither a necessary nor sufficient condition for market power.”

The Obama CEA did add that “in a few industries… there is some evidence of increasing market concentration.” How few? Just three: hospitals, railroads, and wireless providers. Those industries are heavily regulated, as is banking.

The CEA notes that the 10 largest banks had a larger share of bank loans in 2010 than in 1980, which is hardly a surprise. Hundreds of banks that existed before the 1981-82 stagflation and the 2008-09 Great Recession had closed by 2010. More lending now flows through nonbanks and securities. And the Internet (e.g., LendingTree) makes shopping for loans or credit cards more competitive than ever.

Did the Obama CEA present any evidence that its extraneous data about industry-level or market concentration “has led to higher prices and few efficiencies”? Certainly not. They made no such claim because so many previous efforts have failed. “The Market Concentration Doctrine” could not explain higher prices when Harold Demsetz examined it in 1973, and it still can’t.


Generally speaking, the Washington Post editorial board does a great job on trade issues. They are pro-trade and they see trade agreements as a way to liberalize trade. However, I want to offer a response to something in a recent Post editorial about one particular technical aspect of the NAFTA renegotiation. Here’s the passage:

Alas, the administration also specified that the trade deficit with Mexico and the (smaller) one with Canada be reduced as a result of the talks, which isn’t possible and wouldn’t necessarily be desirable even if it were. Possibly even more counterproductive, Mr. Trump’s goals include the elimination of the so-called Chapter 19 dispute-resolution mechanism, which creates a special NAFTA-based forum to challenge a member country’s claims that another is selling exports below cost (“dumping”). This check against potentially protectionist litigation brought by U.S. industries in U.S. forums was Canada’s precondition for joining the U.S.-Canada free-trade agreement, upon which NAFTA was built; and it’s one reason that exports from Canada and Mexico are far less likely than those of other nations to face penalties in the United States.

Eliminating Chapter 19 probably would be a dealbreaker for Canada. And why would Mr. Trump seek its elimination? After all, as he said in that call with Mr. Peña Nieto, “Canada is no problem . . . we have had a very fair relationship with Canada. It has been much more balanced and much more fair.” Perhaps he means the proposal as a bargaining chip, to be traded for some other, more valuable concession. Or perhaps he will be willing to finesse it behind closed doors, just as he pleaded with Mr. Peña Nieto to help him wiggle out of his unwise promise to make Mexico pay for a border wall. We certainly hope the administration can be pragmatic on this point, lest it trigger the trade war with our neighbors that Mr. Trump once promised but so far has sidestepped.

Starting with some technical points, let me note that dumping is defined as more than just sales below cost, as it could also mean export sales that are below the price in the home market or a third country market. (It’s a fundamentally arbitrary calculation, which you can read more about here.) Also, Chapter 19 covers countervailing duties (extra tariffs imposed on imports of subsidized goods) as well.

But the main issue is with the Post’s substantive defense of Chapter 19. It’s important to understand how the process works. U.S. agencies—the Department of Commerce (DOC) and the International Trade Commission (ITC)—make decisions about whether anti-dumping and countervailing duties are necessary in particular cases. These decisions, like other administrative decisions, can then be appealed to U.S. federal courts, in this case the Court of International Trade (CIT) in New York. Under NAFTA Chapter 19, however, Canadians and Mexicans have the option to appeal the agency decision to a special NAFTA panel (they can also go to the Court of International Trade, and sometimes do that instead of Chapter 19).

So, when someone says that NAFTA Chapter 19 panels protect against the abuse of antidumping/countervailing duties, in essence this means they think the U.S. courts are not up to the job of reviewing these agency decisions.

Is it possible that U.S. courts are insufficient here? In my opinion, we don’t have enough data on this question yet. What we would need to see in this regard is evidence of the impartiality (or partiality) of U.S. courts. For example, a basic piece of evidence would be how U.S. courts have ruled on these agency decisions. My colleague Dan Ikenson looked at some data on this a while back:

Between January 2004 and June 2005, IA [the Import Administration of the DOC] published 26 redeterminations of antidumping proceedings pursuant to remand orders from the courts. As Table 4 indicates, seven of the remand orders required IA to explain how its decisions were consistent with the law and did not expressly mandate that IA make any changes. But of the 19 remands that did require methodological changes, 14 produced lower antidumping duty rates upon recalculation for at least one of the foreign companies involved.

At first glance, this looks like a functioning judicial review of agency decisions, but we need to expand on this data, and then compare it to the results from NAFTA Chapter 19 panels (that is, compare the results of NAFTA panel decisions to those of CIT decisions; and compare the DOC and ITC responses to each kind of ruling). We have a Cato Trade Policy Center project to gather this data going on right now.
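As a quick tally of the remand figures quoted from Ikenson above:

```python
# Figures from the quoted passage (January 2004 - June 2005).
total_redeterminations = 26
explanation_only = 7   # remands asking IA only to explain, not change, anything

methodological_changes = total_redeterminations - explanation_only  # 19
lower_rates = 14       # recalculations that cut duties for at least one firm

share_lowered = lower_rates / methodological_changes
print(f"{share_lowered:.0%}")  # 74% of substantive remands lowered duty rates
```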

As for the point that “exports from Canada and Mexico are far less likely than those of other nations to face penalties in the United States,” it’s true in a sense. If you compare Canada and Mexico to the rest of the world, fewer of their imports are covered by these special duties. On the other hand, when you compare countries that are similarly situated in terms of development levels, you get a different result. For example, imports from Canada are more likely to be subject to duties than are imports from the United Kingdom and Germany.

And there’s also the question of whether NAFTA Chapter 19 is constitutional—it’s not clear whether this kind of agency decision review can be carried out by anyone other than a U.S. court.

Chapter 19 looks like it might play an outsized role in the NAFTA renegotiation, as people on both sides have latched on to it as a symbol of their cause. The reality is more nuanced, and it’s important to gather some hard evidence in order to assess the real value of the provision.

Vox’s Dylan Scott offers “An oral history of Obamacare’s 7 near-death experiences.” It’s a well-balanced take on how close the Affordable Care Act and ObamaCare—they are different animals—have come to oblivion.

Scott includes an excerpt of an email I sent him on his original, planned theme of “ObamaCare’s nine lives.” But I thought it might be worthwhile to include my entire response to him. I have lightly edited my email for clarity, added two illegal acts I had forgotten (#5 and #7), and added hyperlinks to useful references.

Just nine? The premise is inapposite, though. 

Jonathan Gruber was right: had the public known what the Affordable Care Act does, it never would have passed, because even more Democrats would have voted to kill it. ObamaCare keeps surviving not because it has nine lives, but because the executive and judicial branches keep rewriting the Affordable Care Act, outside the legislative process, to save it from constitutional and political accountability.

These unconstitutional and illegal actions began before the ink was dry on Obama’s signature. They include:

  1. Allowing Congress to remain in the FEHBP from 2010 until 2014;
  2. The bevy of exemptions Sebelius issued unions and other firms from various regulations;
  3. The threats Sebelius made to insurers who spoke publicly and truthfully about the cost of those regulations; 
  4. Sebelius soliciting funds for Enroll America from companies she regulates;
  5. Sebelius raiding the Prevention and Public Health Fund to the tune of $454 million to fund federal Exchanges; 
  6. The Supreme Court rewriting the individual mandate in 2012; 
  7. Sebelius gutting the Supreme Court’s Medicaid ruling by coercing states to implement parts of the Medicaid expansion the Court made optional;
  8. Obama’s illegal “if you like your health plan” fix/grandmothered-plans exemptions;
  9. The IRS issuing subsidies through federal exchanges;
  10. The Supreme Court upholding ObamaCare’s subsidies and penalties in federal-Exchange states;
  11. The Obama administration making illegal CSR payments;
  12. The Obama administration illegally diverting reinsurance payments from the treasury to insurance companies;
  13. The Obama administration declaring Congress to be a small business; 
  14. The Obama administration giving members of Congress an illegal $12,000 premium contribution to their SHOP premiums;
  15. The Trump administration continuing to make illegal CSR payments;
  16. The Trump administration continuing to give Congress an illegal exemption from the ACA…

Etc., etc.

It’s not that ObamaCare has nine lives. It’s that ObamaCare has 90 or 900 or 9,000 committed ideologues who are willing to violate the law to protect it from the voters.

What we have left is no longer the law Congress enacted. The ACA was a legitimate law, duly passed by Congress. ObamaCare is an illegitimate law that no Congress ever passed or ever could have passed. 

The ACA is dead. Long live ObamaCare.

Absent these unlawful actions, Republicans’ recent attempt to repeal ObamaCare would have succeeded.

Years ago.

Trying to tamp down impeachment talk earlier this year, House minority leader Nancy Pelosi (D-CA) insisted that President Donald Trump’s erratic behavior didn’t justify that remedy: “When and if he breaks the law, that is when something like that would come up.” 

Normally, there isn’t much that Pelosi and Tea Party populist Rep. Dave Brat (R-VA) agree on, but they’re on the same page here. In a recent appearance on Trump’s favorite morning show, “Fox & Friends,” Brat hammered Democrats calling for the president’s impeachment. “There’s no statute that’s been violated,” Brat kept insisting: “They cannot name the statute!”

Actually, they did: it’s “Obstruction of Justice, as defined in 18 U.S.C. § 1512 (b)(3),” according to Rep. Brad Sherman (D-CA), who introduced an article of impeachment against Trump on July 12. Did Trump break that law when he fired FBI director James Comey over “this Russia thing”? Maybe; maybe not. But even if “no reasonable prosecutor” would bring a charge of obstruction on the available evidence, that wouldn’t mean impeachment is off-limits. Impeachable offenses aren’t limited to crimes.

That’s a settled point among constitutional scholars: even those, like Cass Sunstein, who take a restrictive view of the scope of “high Crimes and Misdemeanors” recognize that “an impeachable offense, to qualify as such, need not be a crime.” University of North Carolina law professor Michael Gerhardt sums up the academic consensus: “The major disagreement is not over whether impeachable offenses should be strictly limited to indictable crimes, but rather over the range of nonindictable offenses on which an impeachment may be based.” 

In some ways, popular confusion on this point is understandable. Impeachment’s structure echoes criminal procedure: “indictment” in the House, trial in the Senate—and the constitutional text, to modern ears, sounds something like “grave felonies, and maybe lesser criminal offenses too.”

But “high crimes and misdemeanors,” a term of art in British impeachment proceedings for four centuries before the Framers adopted it, was understood to reach a wide range of offenses that, whether or not criminal in nature, indicated behavior incompatible with the nature of the office. For James Madison, impeachment was the “indispensable” remedy for “Incapacity, negligence, or perfidy” on the part of the president—categories of conduct dangerous to the republic, only some of which will also constitute crimes. 

The criminal law is designed to punish and deter, but those goals are secondary to impeachment, which aims at removing federal officers unfit for continued service. And where the criminal law deprives the convicted party of liberty, the constitutional penalties for impeachable offenses “shall not extend further than to removal from Office,” and possible disqualification from future officeholding. As Justice Joseph Story explained, the remedy “is not so much designed to punish an offender, as to secure the state against gross official misdemeanors. It touches neither his person, nor his property; but simply divests him of his political capacity.”

No doubt being ejected from a position of power on the grounds that you’re no longer worthy of the public’s trust can feel like a punishment. But the mere fact that removal is stigmatizing doesn’t suggest that criminal law standards apply. Raoul Berger once illustrated that point with an analogy Donald Trump would probably find insulting: “to the extent that impeachment retains a residual punitive aura, it may be compared to deportation, which is attended by very painful consequences, but which, the Supreme Court held, ‘is not a punishment for a crime.’”

Had the Framers restricted impeachment to statutory offenses, they’d have rendered the power a “nullity” from the start. In the early Republic, there were very few federal crimes, and certainly not enough to cover the range of misdeeds that would rightly disqualify public officials from continued service.

Criminality wasn’t an issue in the first impeachment to result in the removal of a federal officer: the 1804 case of district court judge John Pickering. Pickering’s offense was showing up to work drunk and ranting like a maniac in court. He’d committed no crime; instead, he’d revealed himself to be a man “of loose morals and intemperate habits,” guilty of “high misdemeanors, disgraceful to his own character as a judge.”

As Justice Story noted in 1833, in the impeachment cases since ratification, “no one of the charges has rested upon any statutable misdemeanours.” In fact, over our entire constitutional history, fewer than a third of the impeachments approved by the House “have specifically invoked a criminal statute.” What’s been far more common, according to a comprehensive report by the Nixon-era House Judiciary Committee, are “allegations that the officer has violated his duties or his oath or seriously undermined public confidence in his ability to perform his official functions.”

The president’s violation of a particular criminal statute can serve as evidence of unfitness, but not all such violations do. That’s obvious when one considers the enormous growth of the federal criminal code in recent decades. Overcriminalization may have reached the point where Donald Trump, like everyone else, is potentially guilty of “Three Felonies a Day,” but even in Laurence Tribe’s wildest imaginings, that wouldn’t translate to three impeachable offenses daily. If Trump were to import crocodile feet in opaque containers, fill an (expansively defined) wetland on one of his golf courses, or misappropriate the likeness of “Smokey Bear,” he’d have broken the law, but would not have committed an impeachable offense.

It’s also easy enough to imagine a president behaving in a fashion that violates no law, but nonetheless justifies his removal. To borrow an example from the legal scholar Charles Black, if the president proposed to do his job remotely so he could “move to Saudi Arabia [and] have four wives” (as well as his very own glowing orb), he couldn’t be prosecuted for it. Still, Black asks: “is it possible that such gross and wanton neglect of duty could not be grounds for impeachment”?

A more plausible impeachment scenario presented itself recently, with reports that President Trump had “asked his advisers about his power to pardon aides, family members and even himself” in connection with the special counsel’s Russia investigation. The president’s power to self-pardon is an open question, but his power to pardon others has few limits. There’s little doubt Trump could issue broad prospective pardons for Don Jr., Jared Kushner, Paul Manafort, Mike Flynn, and anyone else who might end up in Mueller’s crosshairs—and it would be perfectly legal. It would also be impeachable, as James Madison suggested at the Virginia Ratifying Convention: “if the President be connected, in any suspicious manner, with any person, and there be grounds to believe he will shelter him, the House of Representatives can impeach him; [and he can be removed] if found guilty.”

Some years ago, I put together a collection of essays on the expansion of the criminal sanction into areas of American life where it doesn’t belong—published under the title, Go Directly to Jail: The Criminalization of Almost Everything. The idea that criminal law concepts had infected and weakened the constitutional remedy of impeachment wasn’t quite what I had in mind with that subtitle, but it seems to fit.

Congress has made the problem worse by outsourcing its investigative responsibilities to the executive branch. As Princeton’s Keith Whittington observes in a recent essay for the Niskanen Center, “relying so heavily on prosecutors to develop the underlying charges supporting impeachment has come at a high cost…it has created the widespread impression that the impeachment power can only appropriately be used when criminal offenses have been proven.”

It’s important to get this straight, because confusing impeachment with a criminal process can be harmful to our political health. It may lead us to stretch the criminal law to “get” the president or his associates, warping its future application to ordinary citizens. And it can leave the country saddled with a dangerously unfit president whose contempt for the rule of law is apparent, even if he hasn’t yet committed a crime.

E-Verify is the federal government’s national identification system that some employers currently use to verify the employment authorization of their new hires either voluntarily or under a requirement by state law. The Legal Workforce Act (LWA) would make this program mandatory for all employers in all states. Its proponents contend that E-Verify is simpler than the current I-9 form process and that it will protect employers from government raids and I-9 audits. But these talking points are false, and seemingly for this reason, employers refuse to use it voluntarily.

In 2017, 729,595 employers participated in E-Verify. Using a calculation from a USCIS-commissioned study, this figure corresponds to 9.5 percent of the 7.7 million private sector employers, which is somewhat higher than the actual level because some E-Verify users are public employers. It also greatly inflates the level of voluntary compliance. Only 10 states and D.C. have greater than 10 percent participation in E-Verify—all of them have expansive E-Verify mandates of some kind. To achieve this level of compliance, states need to require E-Verify for some private sector employers—either by fining non-users or rescinding subsidies from them. President Bush’s 2008 executive order mandating E-Verify for federal contractors drives the relatively high level of compliance in D.C.
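The participation figure above is simple arithmetic. A quick sketch, using the employer counts from the text (recall that the result overstates true private-sector participation, since some enrollees are public employers):

```python
# Share of private-sector employers enrolled in E-Verify, 2017.
# Both inputs come from the figures cited in the text; some enrolled
# employers are public-sector, so the true share is somewhat lower.
everify_employers = 729_595
private_employers = 7_700_000

participation = everify_employers / private_employers
print(f"{participation:.1%}")  # roughly 9.5%
```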

Nearly half of all employers who use E-Verify operate in the top 10 E-Verify-using states, even though only 17 percent of U.S. businesses operate there. Only 6 percent of employers in the 40 states without more expansive mandates use E-Verify. As the Figure below shows, the true level of “voluntary” compliance is undoubtedly much lower than 6 percent: another dozen states have various E-Verify requirements for public employers or contractors, and federal contractors exist in every state, making it impossible to determine the precise level of voluntary participation.

Figure: E-Verify Participation Rates, 2017

Sources: Author’s calculations based on USCIS; SBA; BLS (Top 10 states: Alabama, Arizona, Georgia, Mississippi, Missouri, Nebraska, North Carolina, Rhode Island, South Carolina, Tennessee, and Utah, plus D.C.)

Unfortunately, USCIS has not released data on the size of the businesses using E-Verify since 2013, but assuming the 2013 proportions held, 98 percent of businesses with fewer than 10 employees were not using E-Verify in 2017. The numbers also reveal that even when the law requires businesses to use E-Verify, they are reluctant to do so. Even in Alabama, which has achieved the highest E-Verify participation rate with a universal E-Verify requirement, less than half of the state’s businesses actually use the program. Arizona, which has the oldest universal E-Verify requirement and where two intentional violations can result in the “business death penalty,” fares even worse: only 39 percent of businesses use the program.

These facts demonstrate that most businesses—especially small businesses—do not consider E-Verify “business friendly.” 

E-Verify is a government-run national identification system that some U.S. employers currently use to verify the employment status of their new hires on a voluntary basis or a compulsory basis if they are federal contractors or operate in states with an E-Verify mandate. The Legal Workforce Act (LWA), which has passed the House Judiciary Committee on three occasions since 2012, would mandate that all employers use the program. In scope, LWA would surpass all other regulations in U.S. history, applying to every single employer and every single worker—illegal and legal—with deleterious consequences for both.

Proponents see E-Verify as an inexpensive silver bullet to end illegal immigration. But naturally, this technocratic dream fails to fit reality. As my colleagues’ recent study shows, E-Verify does slightly reduce unauthorized immigrant wages, but not nearly enough to “turn off the jobs magnet.” Unsurprisingly, the market finds a way to connect willing workers with willing employers. However, while E-Verify fails to separate illegal workers from their jobs, it does manage to do exactly that for many legal workers—U.S. citizens and work-authorized immigrants.

E-Verify salesmen neglect to mention that the program applies to all workers, not just those here illegally, and that U.S. citizens and legal workers can end up caught in the system. I have previously explained how, from 2006 to 2016, E-Verify errors held up 580,000 jobs for legal workers, and how 130,000 of those workers lost their jobs completely. These shocking numbers would grow worse under mandatory E-Verify. Under the most conservative estimate, if applied to all employers, E-Verify would delay at least 1.7 million jobs for legal workers and eliminate nearly half a million jobs over 10 years.

How E-Verify already harms U.S. workers

LWA requires employers to submit the information employees provide them on the I-9 forms to E-Verify. If the information fails to match the records of the Department of Homeland Security or Social Security Administration, E-Verify issues a “tentative nonconfirmation” (TNC). Under LWA, people who receive a TNC would need to challenge it within 2 weeks or it would become a “final nonconfirmation” (FNC), which requires an employer to immediately terminate their employment or face major fines or jail time.
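The TNC-to-FNC process described above can be summarized in a small sketch. This is my own illustrative model of the flow, not the bill’s text; the function name, fields, and the assumption that an extension doubles the window are all hypothetical:

```python
# Hypothetical sketch of the LWA confirmation flow described above.
# Names, structure, and the extension length are illustrative,
# not taken from the bill.
def lwa_outcome(records_match: bool, challenged: bool,
                business_days_to_resolve: int,
                extension_granted: bool = False) -> str:
    if records_match:
        return "confirmed"
    # Mismatch produces a tentative nonconfirmation (TNC).
    if not challenged:
        return "FNC"  # unchallenged TNCs become final nonconfirmations
    # Roughly a 10-business-day window, with a single one-time extension
    # (assumed here to add another 10 days).
    deadline = 10 + (10 if extension_granted else 0)
    return "confirmed" if business_days_to_resolve <= deadline else "FNC"

print(lwa_outcome(False, True, 12))                          # FNC
print(lwa_outcome(False, True, 12, extension_granted=True))  # confirmed
```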

Errors can occur when employers enter a name incorrectly. This mistake is particularly common for people with multiple or hyphenated last names or names with difficult spellings. Errors also happen when bureaucrats incorrectly enter information into their databases or when employees fail to fully update their information after a name change.

To sort out the problem, employees then have to visit in person the Social Security Administration or U.S. Citizenship and Immigration Services (USCIS)—the new DMVs for employment. Employees and employers have to stumble through this process in the dark because E-Verify is unable to tell them the origin of the problem. Workers may need to file Privacy Act requests to access their records and fix the issue. In these cases, it can take more than three months to even obtain a response. LWA allows employers to delay hiring a worker until they clear this bureaucracy.

Even worse, E-Verify can cause legal workers to lose their jobs entirely. Authorized job seekers can receive an FNC if they fail to challenge the TNC or if their employer fails to notify them. According to a USCIS-commissioned study, 17 percent of FNC errors were the fault of the employee not following the regulations. The other 83 percent were the result of employers not informing the worker about the TNC, so they could challenge it.

E-Verify’s boosters tout its 99.8 percent accuracy rate, implying that U.S. workers have little to fear. But even a low error rate applied to a population as large as the U.S. workforce results in hundreds of thousands of errors. Indeed, in 2016, under purely voluntary use of the system, E-Verify caught 63,000 legal workers in its regulatory scheme. The problem will become much worse if Congress mandates the program for all employers.
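The scale argument is easy to check. Even the touted accuracy rate leaves a 0.2 percent error rate, and applied to roughly 100 million annual hires, that produces hundreds of thousands of errors:

```python
# Back-of-envelope: a small error rate against a large base.
accuracy = 0.998            # touted E-Verify accuracy rate
annual_hires = 100_000_000  # approximate new hires per year (Census, 2015)

erroneous_checks = annual_hires * (1 - accuracy)
print(f"{erroneous_checks:,.0f}")  # 200,000 errors per year
```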

Legal Workforce Act will harm even more U.S. workers

In order to project the number of errors under LWA, we need to know the number of hires that employers will make through the system and the rate at which E-Verify will wrongly not confirm a job applicant.

The Census Bureau reports the number of new hires, almost 100 million in 2015, but LWA would also allow employers for the first time to voluntarily check their existing workers, a practice that current regulations prohibit. This means that the number of E-Verify checks will exceed the number of annual hires that the Census records. Unfortunately, we cannot know by how much, because it depends on employers’ appetite for this procedure, but it could raise the estimated number of new checks by tens of millions. For this estimate, I take the most conservative position and assume no company will check any of its existing employees.

The future error rate is more difficult to assess. E-Verify’s accuracy has improved over time, and this trend will likely continue as the bureaucracy, employers, and employees figure out the system. However, there is likely a natural floor below which the error rate cannot fall: perfect accuracy is almost certainly impossible, especially given the causes of the errors described above. For this reason, error rates are unlikely to keep improving at the current pace indefinitely.

On the other hand, the rates could grow much worse, especially during the initial rollout, because LWA would flood the system with new employers who have no desire or ability to use the program. For the last 10 years, the system has mostly incorporated larger employers: about 90 percent of all employers have fewer than 15 employees, compared to only 8 percent of E-Verify businesses, as another USCIS-commissioned study found. Smaller employers have fewer human resources staff to implement these types of regulations, and there will certainly be a learning curve regardless.

LWA would also sweep in a large new population of employees who are more likely to be the victims of E-Verify errors. According to the USCIS-commissioned study, this group includes legal immigrants and Hispanics. Because many Hispanics and legal immigrants live in states where E-Verify is not currently mandatory at the state level, LWA’s national mandate would likely increase errors for them. Given the uncertainties, this estimate takes the conservative assumption that E-Verify’s improvement would continue at the same pace as it did under the Obama administration (2009-2016), an 8.5 percent annual reduction in the error rate, even though this pace is unlikely to continue.
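A projection of this shape is straightforward to sketch. The starting error rate below is an illustrative value I chose to be of the right order of magnitude, not the author’s actual input; only the 8.5 percent annual improvement and the roughly 100 million annual hires come from the text:

```python
# Illustrative 10-year projection of erroneous TNCs under a mandate,
# assuming the error rate falls 8.5 percent per year.
annual_hires = 100_000_000  # approx. new hires per year (Census, 2015)
base_error_rate = 0.0022    # hypothetical year-1 erroneous-TNC rate
improvement = 0.085         # annual reduction in the error rate (from text)

total_tncs = sum(annual_hires * base_error_rate * (1 - improvement) ** t
                 for t in range(10))
print(f"{total_tncs:,.0f}")  # on the order of 1.5 million over 10 years
```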

Finally, LWA would definitely increase the rate of job loss due to E-Verify. FNCs currently happen to legal workers only if they fail to challenge a TNC or their employer fails to notify them about one. LWA would introduce a new way to receive an FNC: inability to resolve the issue in time. LWA requires that a person prove their right to work within 10 working days or be fired. The bill allows a single one-time extension at the discretion of the Secretary of Homeland Security (p. 36). Yet as explained above, it often takes much longer than that to resolve a TNC error.

Unfortunately, USCIS only provides case resolution details in groups: less than 3 business days, 3 to 8 days, or more than 8 days. In 2012, the last year for which we have data, 36 percent of all erroneous TNCs took 9 or more days to resolve. If we assume that the full 36 percent cannot obtain an extension of the 10-day limit, then roughly 57,000 legal workers would lose their jobs due to this one provision of LWA alone.

However, for this estimate, I will assume that all of these workers will receive the one-time extension. If we further assume that the daily number of TNC resolutions in the 3 to 8 day period continued at the same rate thereafter, then 13.3 percent of all TNCs currently take more than 20 business days to resolve. The true share is likely higher than this because if someone cannot sort the error out in 8 days, it likely has a more complicated origin than those resolved in the first week. Nonetheless, this projection assumes that this share will continue into the future.
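The extrapolation works like this: 36 percent of erroneous TNCs take 9 or more business days; if resolutions continue at a constant daily pace beyond day 8, the share still unresolved after day 20 follows directly. The per-day pace below is an illustrative value consistent with the figures in the text, not a published number:

```python
# Extrapolating TNC resolution times beyond USCIS's reported buckets.
share_over_8_days = 0.36   # erroneous TNCs taking 9+ business days (2012)
daily_resolution = 0.0189  # assumed constant share resolved per day
                           # (illustrative, implied by the 3-to-8-day pace)

# Days 9 through 20 inclusive add 12 more business days of resolutions.
share_over_20_days = share_over_8_days - 12 * daily_resolution
print(f"{share_over_20_days:.1%}")  # about 13.3%
```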

Estimate of the number of job delays and losses due to E-Verify

The Legal Workforce Act imposes the mandate on all U.S. employers in stages, phased in by firm size over a 2-year period. The table below begins with the year in which E-Verify becomes fully mandatory for all employers; Year 1 uses the 2016 error rate as the starting point. Under these assumptions, nearly 1.5 million legal workers would receive erroneous TNCs over 10 years, and of these, nearly 430,000 would lose their jobs completely.

Projection of E-Verify Errors Under Mandatory E-Verify

[Table data unavailable in this version. Columns: Total Hires; TNC Job Delays; FNC Error Job Loss; LWA Job Loss From TNC Delays; Total Job Losses; Total Errors. Rows: Year 1 through Year 10.]
Sources: Author’s calculations based on: Total hires: Census Bureau; TNC share overcome (2011-2016): U.S. Citizenship and Immigration Services (USCIS) archived pages; E-Verify erroneous TNC and FNC rates (2006-2010): Westat; LWA job loss from TNC delays: Cato Institute. (Note: the FNC error rate in Westat is expressed as a share of FNCs. See here for how the FNC rate was calculated for years without data.)

The TNC numbers reported above come from public USCIS figures, but non-public numbers that USCIS provided in response to Cato’s 2013 Freedom of Information Act request show 50,000 more TNCs for the years 2008 to 2012. Again, this means that this estimate is the lowest possible outcome for mandatory E-Verify.

LWA also increases the consequences of a TNC for a legal worker relative to current law. The legislation would allow the employer to delay hiring the job applicant until after they clear this process, which means that people would lose wages throughout the delay. Moreover, anyone seeking short-term employment could lose the job completely, even if they ultimately cleared E-Verify (p. 20). This provision highlights how E-Verify purports to be pro-U.S. worker legislation but is actually anti-U.S. worker.

Congress should reject mandatory E-Verify. It’s a big government waste of resources. It won’t accomplish its intended goal, but it will punish Americans seeking jobs.

Leaders at all levels of government and civil society are alarmed at the continued rise, year after year, in the death rate from opioid overdose. The latest numbers for 2015 report a record 33,000 deaths, the majority of which are now from heroin. Health insurers are not a disinterested party in this matter.

Cigna, America’s fifth-largest insurer, recently announced it has made good progress toward its goal of reducing opioid use among its patients by 25% by mid-2019. To that end, Cigna is limiting the quantities of opioids that can be dispensed to patients and requiring authorizations for most long-acting opioid prescriptions. Cigna is encouraging its providers to curtail their use of opioid prescriptions for pain patients and is providing them with data on the opioid use patterns of their patients (prescription drug monitoring programs) with an aim toward reducing abuse.

In a Washington Post report on this announcement Cigna CEO David Cordani is quoted as saying, “We determined that despite no profit rationale—in fact it’s contrary to that—that societally we needed to step into the void and we stepped in pretty aggressively.”

No profit rationale?

Paying for fewer opioids saves the insurer money in the short run. And opioids have become costlier as “tamper-resistant” reformulations, encouraged by the FDA, have led to new patents allowing manufacturers to demand higher prices.

There is growing evidence that, as doctors curtail their opioid prescriptions for genuine pain patients, many in desperation seek relief in the illegal market, where they are exposed to adulterated opioids as well as heroin. For the same reason, recent studies on the effect of state-based Prescription Drug Monitoring Programs (PDMPs) suggest they have not led to reductions in opioid overdose rates and may actually be contributing to the increase. It is reasonable to be skeptical that Cigna’s internal prescription drug monitoring program will work any differently.

All of this intersects with a problem generated by the community rating regulations of the Affordable Care Act. The ACA requires insurance companies to sell their policies to people who have very expensive health conditions for the same premiums they charge healthy people. In addition, the ACA’s “risk-adjustment” programs, aimed at reimbursing insurers for losses due to a disproportionate share of the sickest patients, systematically underpay insurers for many of these enrollees. This penalizes insurers whose networks and drug formularies are desirable to those who are sick. Insurers respond to this disincentive by designing policies with provider networks, drug formularies, and prescription co-payment schedules that are unattractive to such patients, hoping they will seek their coverage elsewhere. This “race to the bottom” between the health plans results in decreased access and suboptimal health care for many of the sickest patients.

Researchers at the University of Texas and Harvard University, in a National Bureau of Economic Research working paper, show “some consumers are unprofitable in a way that is predictable by their prescription drug demand,” and “…Exchange insurers design formularies as screening devices that are differentially unattractive to unprofitable consumer types,” resulting in lower levels of coverage for patients in those categories. They rank drug classes by net loss to the insurer (per capita enrollee spending minus per capita enrollee revenue). Opioid antagonists, used to treat opioid addiction, exact the third-highest penalty on insurers, about $6,000 for every opioid antagonist user. (See Table 2.)

This suggests that patients suffering from opioid dependency and/or addiction (there is a difference) are victims of the race to the bottom spawned by the ACA’s community rating mandate.

Thus, the opioid overdose crisis and the ACA mandates—especially community rating—combine to make the “perfect storm.” Insurers team up with state and federal regulators to curtail the prescription of opioids for chronic pain patients, leading many to suffer needlessly and driving some, in desperation, to the illegal drug market and the risk of death from overdose. Meanwhile, those seeking rescue from the torment of dependency and addiction must access a health insurance system that is penalized for providing help.