Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The New York Times has a special investigative report about the militaristic drug raids that are now happening every day in the United States. 

Here is an excerpt:

As policing has militarized to fight a faltering war on drugs, few tactics have proved as dangerous as the use of forcible-entry raids to serve narcotics search warrants, which regularly introduce staggering levels of violence into missions that might be accomplished through patient stakeouts or simple knocks at the door.

Thousands of times a year, these “dynamic entry” raids exploit the element of surprise to effect seizures and arrests of neighborhood drug dealers. But they have also led time and again to avoidable deaths, gruesome injuries, demolished property, enduring trauma, blackened reputations and multimillion-dollar legal settlements at taxpayer expense, an investigation by The New York Times found.

For the most part, governments at all levels have chosen not to quantify the toll by requiring reporting on SWAT operations. But The Times’s investigation, which relied on dozens of open-record requests and thousands of pages from police and court files, found that at least 81 civilians and 13 law enforcement officers died in such raids from 2010 through 2016. Scores of others were maimed or wounded.

It’s terrific reporting that covers so many of the problems: the unnecessary violence, the dilution of constitutional safeguards, the flimsy police investigative work, the cover-ups when things go bad, and the lawsuits that will ultimately burden taxpayers.

Cato has been sounding the alarm on this trend since 1999, with the publication of “Warrior Cops.” That was followed by Radley Balko’s study, “Overkill,” and there have been countless events, media appearances, opinion articles, and book chapters since. Indeed, one of the NYT’s own reporters, Matt Apuzzo, acknowledged a few years ago that “the criticism of the so-called militarization of police has largely come from libertarian quarters for several years. They have kind of been the lone voice on this, folks like the Cato institute.” 

For related Cato scholarship, go here.



The moment has arrived: this week, we finally have Supreme Court confirmation hearings before the Senate Judiciary Committee. This is the culmination of a series of unusual political events that took place after Justice Antonin Scalia’s untimely death in February 2016.

Indeed, when Scalia died, President Barack Obama had almost a year left in office, so it seemed likely that he would get to select the Court’s next justice. But it was an election year—and the last time that a Senate controlled by the party not in the White House confirmed a Supreme Court nominee to a vacancy that arose during a presidential election year was 1888. Accordingly, Republicans vowed not to consider any high-court nominee until after the election. In a politically polarized nation that had reelected a Democrat to the presidency in 2012 and then given Senate control to the GOP in 2014, they were determined to let the people have another say regarding who would get to appoint the next justice.

Nevertheless, Obama nominated Judge Merrick Garland, a seemingly uncontroversial pick designed to pressure Senate Republicans to cave. As Donald Trump became the Republican nominee and the electoral winds blew harder against the GOP, Senate Majority Leader Mitch McConnell’s #NoHearingsNoVotes gambit (which I supported) seemed increasingly ill-advised. But the unlikely happened: Trump not only won the presidency, but he picked his nominee from a gold-plated list of 21 candidates that he had issued during his campaign.

Since Judge Neil Gorsuch of the Denver-based U.S. Court of Appeals for the Tenth Circuit was nominated on January 31, his chances of joining the high court have only improved. A recent survey showed that 91 percent of Democratic congressional staffers expect him to be confirmed, as Democratic senators have failed to find any salient items that would merit disqualification. Sure, activists will attempt to tar Gorsuch as anti-women, anti-worker, anti-this-that-and-the-other, but the mild-mannered originalist is anything but the cartoon Monopoly Man this caricature tries to paint. And the argument about how this is a #StolenSeat isn’t going anywhere because that was litigated at the election.

So this may all be anti-climactic. As I wrote in The Federalist on Friday:

To be sure, such hearings have become kabuki theater. Senators from the president’s party toss softballs that let the nominee display his or her erudition, while opposing senators ask “gotcha” questions that anybody skilled enough to be nominated can evade with ease. Indeed, the nominee in the supposed hot seat has been trained for weeks to talk a lot while revealing very little, literally running out the clock allotted for each senator’s questions while executing what’s been called the (Ruth Bader) Ginsburg “pincer movement”: refusing to analyze hypothetical cases because those issues might come before the court and then declining to discuss broader doctrinal issues because judges should only deal in specifics.

As one observer put it: “When the Senate ceases to engage nominees in meaningful discussion of legal issues, the confirmation process takes on an air of vacuity and farce, and the Senate becomes incapable of either properly evaluating nominees or appropriately educating the public.” Untenured law professor Elena Kagan was not wrong in writing that back in 1995, even if the would-be justice recanted her emperor-has-no-clothes logic when she herself became a nominee.

But it doesn’t have to be that way. This is a singular chance to educate the American people about constitutionalism and the legal process. Senators could ask real questions, about the meaning of different constitutional or statutory provisions divorced from any pending or hypothetical cases. They can try to gauge whether the nominee’s commitment to stare decisis (not overturning incorrect but longstanding precedent) is relatively strong (like Scalia) or less so (like Justice Clarence Thomas). Especially given Gorsuch’s Oxford doctorate in legal philosophy, they can get at some deeper jurisprudential or philosophical issues without asking the nominee to either comment on pending cases (like the immigration executive orders) or generate out-of-context fodder for the evening news (anything about Roe v. Wade).

For possible questions, read the rest of my piece, or George Will’s latest column, or a more detailed document that I prepared; I wonder if any senator will hit these detailed questions. And in last fall’s National Affairs, Randy Barnett and Josh Blackman have a longer essay about how to make confirmation hearings great again.

Finally, here’s a sketch of the logistics. On Monday, starting at 11 am, we’ll get opening statements from the senators—this will likely show what lines of attack the Democrats plan to pursue—plus Gorsuch’s swearing-in and opening statement. Tuesday and Wednesday will be the senators’ actual questioning of the nominee: two, maybe three rounds. Thursday will feature panels of witnesses testifying for and against Gorsuch. (I won’t be there because I will be speaking on this very subject at Michigan State Law School, but catch my views Tuesday and Wednesday evenings on PBS NewsHour and Friday morning on Fox News, among other media.)

It’s game time!

Last year, I put forward a statutory argument that President Trump’s proposal to ban immigrants from several majority Muslim countries was illegal because it violated a 1965 law that specifically banned discrimination against immigrants based on race, gender, nationality or place of residence or birth. On the night that the original executive order was released, I wrote an op-ed in the New York Times laying out the case again.

Now, finally, a ruling from a federal district court judge in Maryland has addressed the issue, agreed with me in part, and partially stayed the executive order on this basis. This afternoon, the Trump administration appealed the ruling to the Fourth Circuit. The portion of the ruling relevant to the statutory argument states:

Plaintiffs argue that by generally barring the entry of citizens of the Designated Countries, the Second Order violates Section 202(a) of the INA, codified at 8 U.S.C. 1152(a) (“1152(a)”), which provides that, with certain exceptions:

No person shall receive any preference or priority or be discriminated against in the issuance of an immigrant visa because of his race, sex, nationality, place of birth, or place of residence.

Section 1152(a) was enacted as part of the Immigration and Nationality Act of 1965, which was adopted expressly to abolish the “national origins system” imposed by the Immigration Act of 1924, which keyed yearly immigration quotas for particular nations to the percentage of foreign-born individuals of that nationality who were living in the continental United States, based on the 1920 census, in order to “maintain, to some degree, the ethnic composition of the American people.” H. Rep. No. 89-745, at 9 (1965). President Johnson sought this reform because the national origins system was at odds with “our basic American tradition” that we “ask not where a person comes from but what are his personal qualities.”

…although the Second Executive Order speaks only of barring entry, it would have the specific effect of halting the issuance of visas to nationals of the Designated Countries. Under the plain language of the statute, the barring of immigrant visas on that basis would run contrary to 1152(a).

This ruling is a huge win and is the first to directly deal with the statutes at play. In another case in Washington (Ali v. Trump), the plaintiffs made the same arguments to Judge James Robart (who earlier in Washington v. Trump suspended the implementation of the first order). Judge Robart also appeared to agree with the plaintiffs on this point during oral arguments. The plain language of the statute forbids discrimination based on nationality in the issuance of immigrant visas (for people coming to the United States to live permanently).

Rejecting the Government’s Arguments

The court dispensed with some of the arguments from the government that I have previously addressed as well. The administration’s primary argument is that it has the statutory authority under 8 U.S.C. 1182(f) to “suspend entry” of “any class of aliens” that the president deems a “detriment to the United States.” Of this argument, the judge writes:

Section 1152(a) requires a particular result, namely non-discrimination in the issuance of immigrant visas on specific, enumerated bases. Section 1182(f), by contrast, mandates no particular action, but instead sets out general parameters for the President’s power to bar entry. Thus, to the extent that sections 1152(a) and 1182(f) may conflict on the question whether the President can bar the issuance of immigrant visas based on nationality, section 1152(a), as the more specific provision, controls the more general section 1182(f). See Edmond v. United States, 520 U.S. 651, 657 (1997) (“Ordinarily, where a specific provision conflicts with a general one, the specific governs.”); United States v. Smith, 812 F.2d 161, 166 (4th Cir. 1987).

I made this point earlier: the more specific statute should be seen as limiting the more general statute. Section 1152(a) not only requires a certain result but requires it for a single group of “aliens” (i.e., immigrant visa applicants). The ruling continues:

Moreover, section 1152(a) explicitly excludes certain sections of the INA from its scope, specifically sections 1101(a)(27), 1151(b)(2)(A)(i), and 1153. 8 U.S.C. 1152(a)(1)(A). Section 1182(f) is not among the exceptions. Because the enumerated exceptions illustrate that Congress “knows how to expand ‘the jurisdictional reach of a statute,’” the absence of any reference to section 1182(f) among these exceptions provides strong evidence that Congress did not intend for section 1182(f) to be exempt from the anti-discrimination provision of section 1152(a).

I made this point here, noting that not only does it list exceptions, it lists them with unnecessary added emphasis: “except as specifically provided”. Nonetheless, the government argued that it could discriminate based on nationality by virtue of an exception to the non-discrimination provision in subparagraph (B) of section 1152(a)(1) that states, “Nothing in this paragraph shall be construed to limit the authority of the Secretary of State to determine the procedures for the processing of immigrant visa applications or the locations where such applications will be processed.” The government argued that its ban was a “procedure.” On this, the ruling states:

Even if the Court were to construe Plaintiffs’ claim to be that the State Department’s anticipated denial of immigrant visas based on nationality for a period of 90 days would run contrary to section 1152(a), the text of section 1152(a)(1)(B) does not comfortably establish that such a delay falls within this exception. Although section 1152(a)(1)(B) specifically allows the Secretary to vary “locations” and “procedures” without running afoul of the non-discrimination provision, it does not include within the exception any authority to make temporal adjustments. Because time, place, and manner are different concepts, and section 1152(a)(1)(B) addresses only place and manner, the Court cannot readily conclude that section 1152(a)(1)(B) permits the imminent 90-day ban on immigrant visas based on nationality despite its apparent violation of the non-discrimination provision of section 1152(a)(1)(A).

Even the government initially had trouble advancing this argument with any certitude. Its brief in Washington v. Trump merely stated that the language of subparagraph (B) of section 1152(a)(1) “suggests that maybe” the ban could be viewed as a “procedure.” It was hesitant with good reason.

I have previously addressed this argument by noting that a “procedure for processing” cannot mean “no procedure for no processing,” as in the case of an outright ban. Moreover, if section 1152(a)(1)(A) does not apply to the decision to issue or deny a visa, then it would apply to nothing at all. Section 1152(a)(1)(B) was added to address a 1995 D.C. Circuit decision that used the non-discrimination rule to stop a requirement that Vietnamese in Hong Kong return to Vietnam to apply for immigrant visas. This legislative history reflects a very narrow purpose. Thus, when Congress added the exception for procedures or locations in subparagraph (B), it specifically left subparagraph (A) in place, demonstrating its intention that it still carry weight.

To these facts, the court adds that Congress specifically listed changes in the manner and location of processing but left out time. If “manner” (procedures) were meant to include all three, the location language would be unnecessary. A basic canon of interpretation holds that courts should not read statutes to include “surplusage,” or words without effect. This interpretation is reinforced by the fact that section 202(a)(1)(A) bans discrimination in visa issuance, which is exactly what the executive order controls, not procedures or places.

Finally, the Government asserts that the President has the authority to bar the issuance of visas based on nationality pursuant to Section 215(a) of the INA, codified at 8 U.S.C. 1185(a) (“section 1185(a)”), which provides that:

Unless otherwise ordered by the President, it shall be unlawful for an alien to depart from or enter or attempt to depart from or enter the United States except under such reasonable rules, regulations, and orders, and subject to such limitations and exceptions as the President may prescribe.

8 U.S.C. 1185(a)(1). As support for this interpretation, the Government cites President Carter’s invocation of 8 U.S.C. 1185(a)(1) to bar entry of Iranian nationals during the Iran Hostage Crisis in 1979. Crucially, however, President Carter used section 1185(a)(1) to “prescribe limitations and exceptions on the rules and regulations” governing “Iranians holding nonimmigrant visas,” a category that is outside the ambit of section 1152(a). 44 Fed. Reg. 67947, 67947 (1979). The Government has identified no instance in which section 1185(a) has been used to control the immigrant visa issuance process.

This is exactly the point that I made in my piece in the New York Times about the Carter administration’s actions. I further noted in my first post that the Carter administration did not apply this policy to people that it believed carried valid visas. It applied the policy only to those whose visas it could not verify as valid, because the Iranian revolutionaries had taken over the embassy in Tehran that housed the visa printing machine.

The government has also urged the courts to see this executive order as a brief “delay in visa decision-making.” But there is nothing in the categorical prohibition of subparagraph (A) to indicate that a decision not to issue based on nationality could be made on a temporary basis. More importantly, the executive order makes absolutely clear that the ban is indefinite and that the 90 days is just the beginning. The government has already restarted the 90-day clock once, and each day that passes further undermines this claim.

Only a Partial Victory (But One That Could Still Allow Total Victory)

However, the ruling is only a partial victory for the plaintiffs because the court implausibly ruled that section 1182(f) affects only the “entry of aliens” into the country, not “visa issuance.” Neither the government nor the plaintiffs agrees with this view. Both also argued against it during oral arguments in the Ali v. Trump case, when Judge Robart appeared to want to go in this direction. The U.S. attorney explained, “The visa process is one aspect of the entire entry process.”

Not to argue the government’s case, but section 1201 does apply section 1182 to the visa issuance process without limiting it to certain subsections. Every single administration—including this one—has interpreted 1182(f) to restrict both visa issuance and entry. That is why such orders were all printed in the Foreign Affairs Manual for consular officers. The ruling implies that all prior orders barring visas to people covered by section 1182(f)—including war criminals, members of military juntas, and others—were improper because it determined that the section did not apply to visa issuance. This is a much greater departure from precedent and practice than anything the plaintiffs urged. It also implies that it would be improper for the executive order to restrict visas to nonimmigrants under section 1182(f).

Moreover, this view of section 202 implies that Congress was very concerned with unbiased issuance of documentation but not with its immigration consequences. This is totally at odds with the statutory scheme and the legislative history, which clearly point to an intention to prevent the distribution of immigrants from being skewed based on nationality beyond what Congress specifically allowed. There is no need to go into the lengthy and definitive legislative history on this. The rest of section 202, which contains the per-country visa caps, is clearly not intended to create an equal distribution of visa documents among the nations. It is intended to create an equal distribution of immigration.

This view is reinforced by the fact that the court’s interpretation implies that the government is free to discriminate in the issuance of status—which can occur either at the border at entry or inside the United States—and that Congress was okay with such discrimination. If you are already in the United States, you are protected from discrimination, but outside, you are not. Section 1255 clearly instructs the Secretary of State to count status determinations against the visa cap under section 1152, demonstrating that Congress wanted those determinations—whether they happen at the border or inside the United States—to be treated the same as visa issuance determinations.

In any case, this split-the-baby approach creates an absurd result of the kind that Justice Scalia, among others, urged courts to avoid. Under this decision, the government would be required to allow tens of thousands of immigrants to board planes and arrive at U.S. airports, creating exactly the type of chaos that the first order did. Hopefully someone will point out to these judges that this result is entirely at odds with the legislative scheme so that they can prevent it from occurring. Neither the government nor the plaintiffs believes that this outcome would be just.

Fortunately, because the court also found that the executive order violates the First Amendment of the Constitution, we are protected from this situation. If it did arise, habeas corpus might be used to free the individuals, as it was in January, and provide a loophole to jump past the entry restrictions. This would be a pleasant, if strange, way to arrive at victory.

Martin Feldstein has a new short paper out with some thoughts on a relatively under-researched subject: Why is Growth Better in the United States than in other Industrial Countries?

He begins:

In 2015, real GDP per capita was $56,000 in the United States. On a purchasing power basis, the real GDP per capita in the same year was only $47,000 in Germany, $41,000 in France and the United Kingdom, and just $36,000 in Italy. So the official measures of real GDP clearly point to the cumulative result of higher sustained real growth rates in the United States than in the major industrial countries of Europe and Asia.

Over the very long term, this is a truism. In order for the U.S. to be that much richer, it must have experienced faster real GDP per capita growth than comparator countries. We know from figures collated by the Maddison Project that the U.S. had around half the level of GDP per capita of the UK in the early 18th century, but by 1900 it was overtaking the UK as the richest country by income per head, and has remained in that leading position for almost all the period since.

But showing higher levels of income does not necessarily mean that the U.S. growth of GDP per capita was higher than other countries over more recent periods.

Figure 1 outlines the growth performance of G7 countries since 1970. As can be seen, U.S. average annual real GDP growth has indeed been higher than the rest, but average annual real GDP per capita growth was actually stronger in Japan, the UK and Germany over that period. In other words, the U.S. relative real GDP growth strength over that period is primarily about demographics.

Figure 1: Average annual real GDP and real GDP per capita growth, 1970-2015.


Source: World Bank Databank

The U.S. is still richer than the other countries, of course. That is, the level of real GDP per capita is still higher, but that gap already existed in 1970. Some other countries have grown more quickly since then from a lower base. In 1970, Japan’s real GDP per capita was 79 percent of the U.S., whereas in 2015 it was 91 percent. For the UK, the figure has increased from 77 percent to 80 percent. But no country appears to be fully converging.
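To make the demographics and convergence arithmetic concrete, here is a minimal sketch. The growth rates and the 45-year horizon are illustrative assumptions chosen to mimic the pattern described above, not the actual World Bank series behind Figure 1.

```python
# Illustrative sketch with hypothetical rates (NOT the World Bank data in
# Figure 1): total GDP growth decomposes into per-capita growth and
# population growth, so a country can lead on total growth through
# demographics alone.

def total_growth(per_capita, population):
    """Exact multiplicative decomposition: (1 + g_total) = (1 + g_pc) * (1 + g_pop)."""
    return (1 + per_capita) * (1 + population) - 1

# Hypothetical annual rates: the "U.S." has slower per-capita growth but
# faster population growth than "Japan".
us_total = total_growth(per_capita=0.017, population=0.010)
japan_total = total_growth(per_capita=0.020, population=0.003)

# Higher total GDP growth for the U.S. despite lower per-capita growth.
assert us_total > japan_total

# Convergence arithmetic: a country starting at 79% of the U.S. per-capita
# level and growing about 0.3 points faster per year closes only part of
# the gap over 45 years (1970-2015), consistent with partial convergence.
ratio_2015 = 0.79 * (1.003 ** 45)
```

The decomposition is why the headline "U.S. grows fastest" claim and the per-capita ranking in Figure 1 can both be true at once.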

The real question then appears to be this: though individual countries have converged somewhat towards the U.S. during periods since 1945, why does there seem to be a permanent gap in the level of GDP between the U.S. and other major economies?

In other words, what structural features in the U.S. help to make it permanently richer than other major economies?

Feldstein posits 10 possible explanations:

  1. An entrepreneurial culture
  2. A developed system of equity finance and local banks
  3. World class research universities
  4. Relatively free labour markets
  5. A growing population
  6. Culture and policy that encourages hard work and long hours
  7. Abundant energy combined with private mineral rights
  8. A favorable regulatory environment
  9. A smaller size of government than in other industrial countries
  10. A decentralized political system in which states compete

Whole theses could be written about each. But I’ll limit myself to seven quick comments here:

  1. It’s incredibly difficult to measure the individual effects of these things on growth, not least because they are interconnected in complex ways. That America was founded by self-selecting migrants might have brought with it a more entrepreneurial and harder-working culture, and also one that leads to greater acceptance of fairly liberal migration in the future. A smaller government, with a lower tax burden, increases the return to entrepreneurial activity. And so on.
  2. A lot of these explanations invite further questions, the most common of which is, “yes, but why?” What is the cultural, institutional, or policy reason for the U.S. having 15 of the 25 highest-ranked universities in the world, for example?
  3. There are potentially omitted explanations too, such as the sheer size of the internal American market under common language and customs (compare this to heavy national and cultural barriers in Europe even within the single market), and having a system of common law.
  4. Some of Feldstein’s explanations need not necessarily be “good things” from a libertarian perspective. For example, if people in the U.S. simply have a relative preference for working longer hours over leisure compared to the French, then the fact that Americans work longer hours is not “better.” (Of course, if the difference is down to damaging policies, that is another matter.)
  5. Most of the explanations at some point come back to, as Feldstein puts it, “the general intellectual and political climate of the country” and how this affects the economy both directly and indirectly. Policies and systems of government do not fall manna from heaven, but tend to change over long periods to reflect ideas.
  6. Though the U.S. still has relative advantages over other major economies, it clearly faces significant challenges in many of these areas. Tyler Cowen’s new book highlights how the U.S. is becoming less entrepreneurial on many measures. Opposition to liberal migration seems to be hardening. Younger people seem to be more open to the idea of socialism, which could in the future affect explanations 4, 8, and 9 negatively.
  7. The experience of my own country ceding the forefront of the technological frontier suggests these relative advantages need not always hold. Over the past 15 years, the fact that the U.S. has done relatively well largely reflects a more significant relative deterioration in other countries than its own success.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our “Current Wisdom.”

A study making the rounds in the climate-media complex this week finds that natural variability is responsible for perhaps as much as 50% of the summertime decrease in Arctic sea ice that has taken place over the past 30 years or so (anthropogenic climate change is the presumed cause of the remainder).

This isn’t new. The last (2013) science report from the UN’s Intergovernmental Panel on Climate Change said:

Using climate model simulations from the NCAR CCSM4…inferred that approximately half (56%) of the observed rate of decline from 1979 to 2005 was externally (anthropogenically) forced, with the other half associated with natural internal variability.

Ten years ago, a team led by Julienne Stroeve compared the observed rate of Arctic sea ice loss to climate model expectations. [A side note here: the loss of Arctic sea ice, which is floating ice, does not lead to sea level rise, just as melting ice in your cocktail doesn’t make your glass overflow.] What Stroeve and colleagues found was that Arctic sea ice was being lost at a far brisker pace than climate models had predicted (Figure 1).

Figure 1. Arctic sea ice extent from observations (red thick line) and climate models (colored spaghetti), from Stroeve et al. (2007).


Is this an instance where human-caused climate change is progressing at a pace that is “worse than expected”?


In the 10 years since the Stroeve study, most research has blamed natural variability for the faster-than-modeled pace of Arctic ice loss. The new study, led by University of California Santa Barbara’s Qinghua Ding, announced the same this week. Co-author Axel Schweiger explained to the Christian Science Monitor that there were two possible reasons for the existing model/observation discrepancy:

“1) [the models] are not sensitive to greenhouse gases because they oversimplify physical processes [i.e., climate change is worse than expected] or 2) natural variability has added to the observed trend and the models are fine, [but] because by design of the experiment…they cannot match the observed trend.”

He then added:

“Our results suggest that [natural variability] is a good explanation for the discrepancy”

The new study is notable in that the research team attempted to quantify the role of natural variability (acting primarily through circulation changes that both directly ushered ice out of the Arctic and introduced warm water from more southerly latitudes into it) versus that of anthropogenic climate change. As to what they found, the authors write:

“Internal variability dominates the Arctic summer circulation trend and may be responsible for about 30–50% of the overall decline in September sea ice since 1979.”

It is this result that the press reports tended to focus on. But in doing so, they missed an important implication of this finding—one that we first touched on back in 2011. It is this: Arctic ice loss is a positive feedback on temperature, as the loss of ice (which both reduces the area of a highly reflective surface and exposes a larger area of warm water) leads to rising temperatures, which lead to more ice loss, and so on. This is the primary reason why warming in the Arctic is expected to exceed the global average warming rate.
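The feedback arithmetic can be sketched as a toy iteration. This is a minimal illustration under invented parameters; the feedback fraction here is an assumption for demonstration, not a number from the Ding et al. study or a claim about actual Arctic sensitivity.

```python
# Toy positive-feedback sketch (parameters invented for illustration): each
# unit of initial warming melts some ice, and the darker exposed surface
# returns a fraction `feedback` of that warming, which melts more ice, etc.

def amplified_warming(initial, feedback, steps=200):
    """Iterate w <- initial + feedback * w.

    For 0 <= feedback < 1 this converges to initial / (1 - feedback),
    the geometric-series limit of the feedback loop.
    """
    w = 0.0
    for _ in range(steps):
        w = initial + feedback * w
    return w

no_feedback = amplified_warming(1.0, 0.0)   # just the initial warming
ice_albedo = amplified_warming(1.0, 0.3)    # amplified by the feedback loop

# The feedback amplifies warming above the no-feedback case.
assert ice_albedo > no_feedback
```

The point of the sketch is simply that whatever drives the ice loss, natural or anthropogenic, gets amplified through the same loop, which is why attributing part of the ice loss to natural variability also reattributes part of the warming.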

But if a sizeable proportion of the ice loss is being caused by natural variability (and not greenhouse gas emissions), then some proportion of the warming observed over the past 30 years must be caused by the same forces of natural variability. 

This means that when comparing the rates of observed warming with the rates of warming expected by climate models, natural variability acting on Arctic sea ice has been making the models seem closer to reality than they actually are. In other words, this form of natural variability is (fortuitously) acting to improve the apparent model/observation agreement.

And considering that the climate models are already performing poorly as it is, the new finding means that they are actually faring even worse than has been generally realized. And accounting for this strengthens the case for a lukewarming future from greenhouse gas emissions.

Ring up another strike against the climate models, and another reason why basing government policy on their output is a bad idea.



Ding, Q., et al., 2017. Influence of high-latitude atmospheric circulation changes on summertime Arctic sea ice. Nature Climate Change, doi:10.1038/nclimate3241.

Stroeve, J., et al., 2007. Arctic sea ice decline: Faster than forecast. Geophysical Research Letters, 34, L09501, doi:10.1029/2007GL029703.

The Trump administration’s North Korea policy started taking shape this week. On his first official trip to East Asia, Secretary of State Rex Tillerson declared an end to the Obama administration’s policy of strategic patience, ruled out negotiations with North Korea unless the North gave up nuclear weapons, and said that the United States would not rule out military action. Despite Tillerson’s attempt to put daylight between the Obama administration and the Trump administration, insisting on denuclearization and not ruling out the use of military force have been features of U.S. policy toward North Korea under both Bush and Obama. And this is precisely why the Trump administration’s approach, as it stands now, has little chance of succeeding.

Reining in North Korea has rapidly risen to the top of the Trump administration’s list of international challenges. In February, Pyongyang tested a solid-fueled ballistic missile with a tracked transporter erector launcher, which will make the missile very difficult for the United States to locate and target in a preemptive attack. The following month, during the annual U.S.-South Korean Foal Eagle military exercises, which North Korea views as a dress rehearsal for an invasion, the North launched at least four ballistic missiles into the Sea of Japan, simulating a nuclear attack against the U.S. Marine Corps air station at Iwakuni. Jeffrey Lewis of the Middlebury Institute of International Studies at Monterey succinctly described the Foal Eagle/missile launch interaction: “If we are practicing an invasion, they are practicing nuking us to repel that invasion.”

The (Limited) Role of China

The amount of pressure that the Trump administration may place on China could distinguish its approach from previous administrations. However, the assumption that China is the only roadblock to solving the North Korea problem is seriously flawed. Threatening to increase regional ballistic missile defenses if Beijing doesn’t pressure Pyongyang or sanctioning Chinese financial institutions that do business with North Korea create costs for China to contend with. But these costs are not likely to outweigh the strategic value that China places on North Korea as a buffer state.

Moreover, even if Beijing’s strategic calculus did change and it wanted to exert more pressure on Pyongyang’s behavior, its ability to do so is limited for two reasons.

First, China’s ability to influence Kim Jong Un’s behavior via diplomacy or some other form of soft power is very constrained. Chinese-North Korean relations have deteriorated under Kim Jong Un. He has eliminated several officials who were pro-China, most notably his uncle Jang Song Thaek, who was executed in December 2013. According to a February 2017 article in the New York Times, “some Chinese analysts say [China-North Korea relations] are at their lowest point since the founding of the North as a separate country after World War II.”

Second, North Korea’s nuclear weapons give it an insurance policy against regime change. If China is unable to influence Kim Jong Un, then removing him and installing a new leader may be the next “best” option for defusing the current spiral of tensions. Removing Kim in a targeted strike could succeed, but if a leadership vacuum leads to domestic instability then China could face other problems like a refugee crisis or the danger of “loose nukes.” A larger-scale regime change attempt, similar to the U.S. invasion of Iraq in 2003, would be vulnerable to nuclear attacks. While China is developing ballistic missile defense capabilities, they are probably not sophisticated enough to offer an acceptable degree of protection against North Korean missiles.

Influencing Kim’s behavior or removing him from power are not the only coercive options available to China, but they have the best chance of producing the United States’ political objective of denuclearization. These options are also unrealistic given the high costs of implementing them for uncertain benefits. If U.S. policymakers see pressuring China as the means to an end, then the Trump administration would have to lower its expectations of what China is willing and able to do vis-à-vis North Korea. However, Tillerson’s recent statements suggest that Trump will not be willing to back down from the goal of North Korean denuclearization.

Hoping for a Hail Mary

The North Korea problem is intractable because all of the players involved are unwilling to make the big political moves necessary to break the status quo. The United States maintains the unrealistic goal of denuclearization and advertises its willingness to engage in preemptive strikes against North Korea. This encourages the North to see nuclear weapons, and a willingness to use them first in a conflict, as the key to its survival. China makes peace offers that it knows are unacceptable to the United States and its allies, while the United States pressures China to take steps against North Korea that Beijing is unable and unwilling to take. Finally, South Korea and Japan, without nuclear deterrents of their own, develop military technology and doctrines that would allow them to engage in preemptive strikes in order to limit the damage they would face from a North Korean attack.

This is a recipe for crisis instability and diplomatic deadlock that could result in disaster. One of the parties involved needs to take the initiative and throw a diplomatic Hail Mary to try to break the impasse and get the region to step back from the brink. Bilateral negotiations between the United States and North Korea, a U.S.-China agreement to withdraw all U.S. troops from South Korea in exchange for Chinese-led regime change and stabilization of the North, and South Korea abandoning its plans for preemptive strikes are potential examples.

Of course, a diplomatic Hail Mary would be a massive risk. Breaking the status quo would require one or more parties to accept high costs, and any plan could fail. But if the parties involved stay the course it is difficult to see how the North Korea problem is resolved without catastrophe. 

Ad hominem has always been a feature of politics, but Senator John McCain (R-AZ) elevated it to a new level earlier this week. The incident occurred when McCain came to the Senate floor to ask for unanimous consent to move forward on a vote formally bringing Montenegro, a small country in the Balkans, into the NATO alliance. Senator Rand Paul (R-KY) objected. McCain responded by suggesting Paul was a traitor to his country and accusing him of “working for Vladimir Putin.”

McCain seemed particularly incensed that Paul objected without explaining his reasons. As reported at the Daily Beast:

“I note the senator from Kentucky leaving the floor without justification or any rationale for the action he has just taken. That is really remarkable, that a senator blocking a treaty that is supported by the overwhelming number—perhaps 98, at least, of his colleagues—would come to the floor and object and walk away.”

He then directly connected Paul to the Russian government: “The only conclusion you can draw when he walks away is he has no justification for his objection to having a small nation be part of NATO that is under assault from the Russians.

“So I repeat again, the senator from Kentucky is now working for Vladimir Putin.”

Paul later issued a statement in response:

“Currently, the United States has troops in dozens of countries and is actively fighting in Iraq, Syria, Libya, and Yemen (with the occasional drone strike in Pakistan)…In addition, the United States is pledged to defend 28 countries in NATO. It is unwise to expand the monetary and military obligations of the United States given the burden of our $20 trillion debt.”

That seems like a reasonable position to hold, and certainly not one that requires Paul to be a Russian stooge.

Indeed, many of America’s most reputable officials and academics have opposed post-Cold War NATO expansion for substantive reasons. George Kennan, perhaps our most famous Cold War diplomat and widely considered to be the father of the United States’ containment strategy, famously opposed NATO expansion in the 1990s, writing in the New York Times that expanding NATO would be a “fateful error” that would “inflame the nationalistic, anti-Western and militaristic tendencies in Russian opinion” and “restore the atmosphere of the cold war to East-West relations.” Like Senator Paul, Kennan also worried about the problems of credibility and overextension. Would McCain accuse Kennan of treason?

In 1995, a group of almost two dozen retired Foreign Service, State Department, and Department of Defense officers who served during the Cold War signed an open letter opposing NATO expansion on grounds similar to Paul and Kennan. They argued it risked exacerbating instability and “convincing most Russians that the United States and the West are attempting to isolate, encircle, and subordinate them.” The signatories included Paul H. Nitze, former Secretary of the Navy and Deputy Secretary of Defense, as well as Jack F. Matlock, Jr., former Ambassador to the USSR, and John A. Armitage, former Deputy Assistant Secretary of State for European Affairs. Were these gentlemen also secret Russian moles?

Within the academic international relations literature, there are a host of reasons to be skeptical of the wisdom of NATO expansion. Contrary to the claims of its advocates, there is little reason to believe NATO expansion spreads democracy. Furthermore, scholars widely acknowledge that NATO expansion, though intended to expand Western security cooperation and deter Russian assertiveness in Eastern Europe, feeds Russian insecurities and provokes Moscow to take actions to preserve its sphere of influence in its near abroad. As Jonathan Masters of the Council on Foreign Relations explains, Putin cited NATO expansion to justify Russia’s military interventions in Georgia and Ukraine, a “clear signal of Moscow’s intentions to protect what it sees as its sphere of influence.”

Beyond the classic security dilemma, NATO expansion does not serve U.S. interests. Expanding security commitments to more European states puts U.S. credibility on the line with little to no strategic benefits in return. What happens in Montenegro does not affect our security as a nation, except that by taking on additional responsibilities to defend other countries, we risk being sucked into unnecessary conflicts that could otherwise be avoided. As MIT’s Barry Posen explains, “Once committed to defend allies everywhere, a state becomes obsessed with its political and military prestige, and vulnerable to the claim that ‘small’ wars must be fought in the hope of deterring large ones. This is especially true when the actual strategic value of these allies is modest.”

Notwithstanding Russia’s reprehensible actions in places like Georgia and Ukraine, Europe is a relatively benign security environment that doesn’t need to be under the American security umbrella. To the extent that Europe needs military capabilities to deter Russian aggression, the region is rich and powerful enough to provide for its own security.

It would be a shame if sensible, earnest, and well-informed perspectives continue to be shut out of the debate by slanderous accusations that opposition to NATO expansion is a signpost for treason.

Of the many questions reporters asked Janet Yellen on Wednesday, at her press conference following the FOMC’s decision to raise the Fed’s policy rates, my favorite was the very first, posed by the Financial Times’ U.S. Economics Editor, Sam Fleming.

Here is Mr. Fleming’s question:

[You’ve stated that the Fed wants to delay*] balance sheet normalization until [interest rate*] normalization is well under way. Could you give us some sense about “what well under way” means, at least in your mind — what kind of hurdles are you setting, what kind of economic conditions would you like to see, is it a matter of the level of the short term federal funds rate as being the main issue? What kind of role do you see the role of the balance sheet playing in the mobilization process over longer term? Is it an active tool or passive tool? Thanks.

And here is Chair Yellen’s response:

Let me start with the second question first. We have emphasized for quite some time that the committee wishes to use variations in the fed funds rate target or short term interest rate target as our key active tool of policy. We think it’s much easier, in using that tool, to communicate the stance of policy. We have much more experience with it, and have a better idea of its impact on the economy. So, while the balance sheet asset purchases are a tool that we could conceivably resort to if we found ourselves in a serious downturn where we were again up against the zero bound, and faced with substantial weakness in the economy, it’s not a tool that we would want to use as a routine tool of policy.

Mr. Fleming didn’t ask a follow-up question, so naturally I had no way of knowing what he thought of Yellen’s answer. I did, however, know just what I myself thought of it, which was, not much.

In fact, I thought so little of it that I wrote Mr. Fleming to say so. Since many of you may have heard the same exchange, I thought I would share my remarks to him publicly, so here they are, minus a paragraph that repeated Yellen’s statement:

Dear Mr. Fleming,

Of the questions posed to Janet Yellen at today’s press conference, I thought yours especially worth asking, and thank you for having asked it. However, I also thought Yellen’s answer unsatisfactory and misleading.

Before the crisis and the Fed’s large-scale asset purchases (LSAPs), the Fed didn’t use rate changes instead of balance sheet changes for monetary control. It relied on balance sheet changes, a.k.a. open-market operations, to achieve whatever fed funds rate target it set. In other words, it had decades, dating back to the 1930s, of experience using balance sheet asset purchases (or sales) as, not only “a” policy tool, but as its principal policy tool! Rate change announcements, on the other hand, though they did indeed serve to “communicate the [Fed’s planned] stance” to the public, were incapable by themselves of implementing that stance.

In suggesting that balance sheet changes are an option the Fed might “conceivably resort to…were we again up against the zero bound,”  Yellen seems to equate balance sheet changes in general with “quantitative easing,” where the last refers to the special case, never resorted to until 2008, in which the Fed no longer treats asset purchases as a means to achieve some (positive) fed funds rate target. That confusion, if I’m correct in seeing it as such, represents a pretty elementary error on Yellen’s part. Once again, balance sheet changes were the normal means for the Fed to implement policy before 2008, though they were changes aimed at hitting an announced single-value (and market determined) effective fed funds rate.

Today, with interest on excess reserves (IOER) and the overnight reverse repo (ON-RRP) rate between them defining the Fed’s fed funds rate target “range” — a “target” it can’t possibly miss — the Fed’s administered rate changes ARE the policy changes, since no balance sheet adjustments are required to make them happen. But this approach has only been in effect since the crisis, and has only been employed three times so far. So for Yellen to pretend that it’s one that the Fed has “more experience with” is really rather disingenuous.

Once again, thank you for asking the question.



The public needs to see the Fed’s arguments for maintaining its present, bloated balance sheet for what they are: mere excuses. It also needs to see the delay for what IT really is: mere stalling while the Fed and its apologists rally support, in Congress and beyond, for a permanently enlarged Fed footprint on the U.S. credit system.

(For more on the balance sheet question, see here, here, and here.)


*The words in brackets were partly inaudible, so I reproduce them according to my own understanding. Yellen’s statement itself leaves little room for doubt concerning what Mr. Fleming meant to ask.

[Cross-posted from]

President Trump’s 2018 budget takes a meat cleaver to many federal programs. In my issue areas of transportation, housing, and public lands, it would end the Federal Transit Administration’s New Starts program; end funding for Amtrak’s long-distance trains; eliminate HUD community development block grants; and reduce funding for public land acquisition.

Trump calls this the “America First” budget. In reality, it is a “Federal Funding Last” budget, as Trump proposes to devolve to state and local governments and private parties a number of programs now funded by the feds. In theory, the result should be greater efficiency and less regulation. However, in most of the areas I know about, Trump could have gone further and produced even better results.

Transportation: I applaud the elimination of New Starts but am disappointed that Trump would continue to fund projects with full-funding grant agreements. There are several insanely expensive projects, including the Maryland Purple Line and the Minneapolis Southwest Line, that have such agreements but haven’t started construction and should be eliminated. A number of streetcar and bus-rapid transit projects also fall into this category. If Congress is willing to live with no more full-funding grant agreements, it should allow the administration to also review and eliminate projects that haven’t yet begun construction or have made only token construction efforts.

The proposal to eliminate Amtrak long-distance trains is politically problematic. Since Amtrak’s other trains reach just 22 states, while the long-distance trains add 26 more, this proposal will look like it is favoring some states over others. As an alternative, I would have suggested that the federal government offer to cover 25 percent (or less) of the fully allocated costs (including depreciation) of each train or route, including the Northeast Corridor. If fares don’t cover the other 75 percent, then state support would be required or the trains would be cut. This approach is much fairer, especially because some state-supported trains actually require subsidies per passenger mile that are much larger than those of many of the long-distance trains.
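The arithmetic behind that alternative is simple. Here is a minimal sketch of the cost-sharing rule; the route figures below are purely hypothetical, and only the 25/75 split comes from the proposal above:

```python
def required_state_support(fully_allocated_cost, fare_revenue, federal_share=0.25):
    """State support a route would need after the federal share and fares.

    Under the proposal sketched above, Washington covers at most 25 percent
    of fully allocated costs; fares must cover the rest, with any shortfall
    falling to the states (or the route is cut).
    """
    non_federal_share = (1 - federal_share) * fully_allocated_cost
    return max(0.0, non_federal_share - fare_revenue)

# Hypothetical route: $100 million in fully allocated costs, $60 million in fares.
# Fares fall $15 million short of the 75 percent non-federal share.
print(required_state_support(100e6, 60e6))  # 15000000.0
```

A route whose fares cover at least 75 percent of its fully allocated costs would need no state support at all under this rule.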

Three other transportation proposals look good. One would transfer air traffic control to an independent, non-governmental organization, which could quickly install new equipment and increase capacity and safety. A second would eliminate the so-called Essential Air Service program, which subsidizes airports in smaller communities. Trump also proposes to eliminate the TIGER grant program, a relic of the 2009 stimulus bill, which has funded streetcars and other ridiculous projects.

Housing: Eliminating community development block grants would save $3 billion, which is a lot of money. This should be cut, says the proposal, because “the program is not well-targeted to the poorest populations and has not demonstrated results.” The budget would also cut $35 million for affordable housing. I support both of those cuts because I think such federal funding allows cities to do crazy things that make housing more expensive, knowing they can use federal dollars to mitigate the problem by building a few units of so-called affordable housing. The budget keeps the mortgage interest deduction and most other subsidies to homeownership; these are clearly unnecessary, as many other nations have higher homeownership rates than we do without these programs.

Public lands: Some of Trump’s public land proposals are vague, especially one that “streamlines operations while providing the necessary resources” to manage the lands. Less vaguely, it reduces land acquisition by $120 million (though I’m not sure what is left); fully funds fire protection at the ten-year average (which is too much money); and provides funding for national park maintenance that has been deferred. At least some of that deferred maintenance is for employee housing, which I think the Park Service should eliminate. In general, these proposals tinker at the edges of public land management. Some more radical programs, such as more user fees and contracting fire suppression out to the states by having the federal government pay the states the same as private landowners pay, could have greatly improved management and saved billions of dollars.

In short, most of Trump’s proposals on issues that I am familiar with are good. But they could have been great if the administration had taken one more step and saved federal dollars in ways that improve incentives for other parties to work more efficiently.

As I note in a post at Overlawyered, the House of Representatives has been moving quickly on litigation reform, both on perennial measures long stymied by Democratic opposition and on others of newer vintage (more). Of particular interest, two measures track recommendations Cato scholars have been making for years, while a third has been scaled back in a way that at least nods to concerns Cato scholars have expressed.

The new 8th edition Cato Handbook for Policymakers contains a chapter on tort and class action law prepared by Robert Levy, Mark Moller, and me. Its first federal-level recommendation is that “Congress should restore meaningful sanctions for meritless litigation in federal court.” On March 10, by a largely party-line vote of 230-188, the House passed the Lawsuit Abuse Reduction Act (LARA), H.R. 720, which would restore the regime of strong Rule 11 sanctions in federal litigation that were gutted in 1993 (committee report here). LARA has been proposed in one form or another for many Congresses and has passed the House more than once before stalling in the Senate; more on it here.

Our handbook chapter also recommends that Congress “implement further reforms for class actions that cross state lines,” a type of suit that often enables state courts to assert their power over transactions and parties in other states. While our recommendations are multi-faceted, many of them overlap with provisions in the pending H.R. 985, the Fairness in Class Action Litigation Act (committee report; passed the House March 9, 220-201). FICALA in turn adds other provisions of its own; attorney Andrew Trask, author of multiple essays on class action law for the Cato Supreme Court Review, takes a relatively favorable view of its overall impact.

Finally, there has been a development worth noting on H.R. 1215, the Protecting Access To Care Act, which passed committee by an 18-17 vote on Feb. 28. I and others have repeatedly criticized federal medical liability bills on the grounds that they run into serious problems of federalism and enumerated powers, seeking to justify federal involvement by way of loose New Deal doctrines of impact on interstate commerce, and overriding the workings of state courts even as to the large mass of medical malpractice disputes in which both parties to the lawsuit are local to the state and the costs of error are apt to be local as well. As I argued in this space:

That doesn’t mean federal policymakers are to be left with no role at all. For example, if Washington is paying for a large share of hospital stays, it may make sense as a cost containment measure for it to steer beneficiaries into lower-cost ways of resolving disputes over care quality, or even to ask beneficiaries as a condition of treatment to agree not to file certain suits at all. But that would require stepping back toward a more careful—and more Constitutionally appropriate—view of the federal role.

This year, PACA includes a new limiting provision. To quote Rep. Bob Goodlatte, on the bill’s latest version:

Unlike past iterations, this bill only applies to claims concerning the provision of goods or services for which coverage is provided in whole or in part via a Federal program, subsidy, or tax benefit, giving it a clear federal nexus. Wherever federal policy affects the distribution of health care, there is a clear federal interest in reducing the costs of such federal policies.

Whether the provision in question is drafted in such a way as to pass federalist muster is a question for another day — but it does at least seem that someone on Capitol Hill may have been listening to our past critiques.

The White House released President Donald Trump’s first budget today. In the opening message, the president says, “Our Budget Blueprint insists on $54 billion in reductions to non-defense programs. We are going to do more with less, and make the government lean and accountable to the people.”

I would rather that the government do less with a lot less, but I appreciate that Trump is proposing to fully offset his defense spending increases. His Republican predecessor in office pushed for large increases in defense, education, health care, and other spending without sufficient offsets, thus putting us on our current path of endless deficits and rising debt.

Budget director Mick Mulvaney has assembled a thoughtful array of cuts, including:

  • Ending the Community Development Block Grant.
  • Ending the Economic Development Administration.
  • Ending the Minority Business Development Agency.
  • Ending Essential Air Service (EAS) subsidies.
  • Cutting an array of K-12 subsidies.
  • Moving air traffic control to the private sector.
  • Cutting the EPA budget by almost one-third.
  • Cutting subsidies for local water and sewer.
  • Cutting rural subsidies.
  • Ending various energy subsidy programs.
  • Ending the Low Income Energy Assistance Program (LIHEAP).
  • Ending the Community Services Block Grant.
  • Cutting pork-barrel transportation grants.
  • Cutting Amtrak subsidies.
  • Cutting job-training subsidies.
  • Ending the Weatherization Assistance Program.
  • Cutting funding for the United Nations and the World Bank.
  • Ending funding for the National Endowment for the Arts, the National Endowment for the Humanities, and the Corporation for Public Broadcasting.

Nearly all of these cuts have been recommended by Cato scholars.

Many of them would remove the federal government from activities that are properly state, local, and private. So while liberals and lobby groups will complain, individual states can fund programs such as LIHEAP and EAS themselves. If EAS really is “essential,” then local businesses and governments in affected communities will have a strong incentive to raise their own funding for it.

Kudos to Mulvaney and his team for proposing such a broad range of reforms.

Prevailing wisdom holds that this is a time of stagnating incomes and economic struggle for American families. That is indeed a reality in many homes. But as economist and advisory board member Mark Perry recently pointed out, most American families are doing better than the prevailing wisdom might have them believe. 

After adjusting for inflation, it turns out that median income for families reached a record high in 2015, the last year for which the U.S. Census Bureau has data. Families that include married couples, particularly those in which both spouses participate in the labor force, did even better, with their incomes also breaking records in 2015. The Census Bureau defines a family as “a group of two people or more … related by birth, marriage, or adoption and residing together.”


Please note that median family income is not the same thing as median household income, as the latter includes non-family households. Median household income has been more stagnant: it was $56,516 in 2015, around $14,000 less than median income for family households. Interestingly, 65 percent of U.S. households were family households in 2016, the most recent year of data.

All family types saw a notable uptick in median income in 2015, allowing each type to outperform its pre-Great Recession income levels. Of course, some families have done better than others. Families headed by single women (simplified to “single mothers” in the above graph) have seen their incomes rise only slowly, while families headed by single men (“single fathers” in the graph) have seen their incomes essentially stagnate since the 1970s.

However, most families fall into the categories that have made impressive real income gains. Fully 73 percent of family households include married couples, while 19 percent are headed by single women and only 8 percent by single men. Moreover, both spouses now work in over 60 percent of married-couple families, placing them in the highest-earning category of family.

The median income for all U.S. families was only $28,144 in 1947, compared to $70,697 in 2015. That is an increase of 151 percent. Again, that is after adjusting for inflation.
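As a quick sanity check of that figure, the percent-change arithmetic works out as follows (the two Census medians quoted above are the only inputs):

```python
# Inflation-adjusted median family income (Census Bureau figures quoted above)
income_1947 = 28_144
income_2015 = 70_697

# Standard percent-change formula: (new - old) / old * 100
percent_increase = (income_2015 - income_1947) / income_1947 * 100
print(f"{percent_increase:.0f} percent")  # 151 percent
```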

So despite the popular narrative of economic decline pushed by some politicians and newspeople, the American family is earning more than ever before recorded.

I’ve referred often in these pages to the virtues of Canada’s late-19th century currency system, with its heavy reliance upon circulating notes issued by several dozen commercial banks, most of which commanded extensive nationwide branch networks. I’ve also lamented the fact that so few monetary economists today, let alone members of the general public, seem aware of that arrangement, the superiority of which, both absolutely and compared to its U.S. counterpart, was once widely celebrated. For I’m certain that, if more people were aware of it, the scales would drop from their eyes, plainly revealing the gigantic blunder our nation (and most others) committed by entrusting the management of paper currency to a government-sponsored monopoly managed by bureaucrats.

So you might expect me to be jumping for joy after seeing this new Bank of Canada Staff Working Paper by Ben Fung, Scott Hendry, and Warren E. Weber, on “Canadian Bank Notes and Dominion Notes: Lessons for Digital Currencies.”  But no such luck: instead, after reading it, I’ve been in a blue funk.

How come? Because, instead of drawing badly-needed attention to the substantial merits of Canada’s private currency system, Messrs. Fung, Hendry, and Weber focus on its shortcomings, claiming that it suffered from serious flaws that only the government could fix. They then go on to argue that government intervention may also be needed to keep today’s private digital currencies from displaying similar flaws. In short, according to them, Canada’s experience, instead of casting doubt on the desirability of special government regulation of private currencies, supplies grist for regulators’ mill.

Is their perspective compelling? I don’t think so. As I plan to show, and as even a cautious reading of Fung et al.’s own assessment will suggest to persons familiar with other nations’ experiences, the imperfections of Canada’s private banknote currency were minor ones, especially in comparison to those of the concurrent U.S. arrangement. Nor is it even clear that they were genuine flaws, in the sense that implies market failure. The reforms that eventually eliminated the imperfections were, in any case, not imposed on Canada’s commercial bankers against their wishes, but instigated by those bankers themselves. Finally, the suggested analogy between Canada’s 19th-century banknotes and modern digital currencies, far from supplying solid grounds for supposing that unregulated digital currencies are likely to exhibit the same (real or presumed) shortcomings as their 19th-century Canadian counterparts, is so forced as to be utterly unconvincing. For all these reasons, those seeking to draw useful lessons from Canada’s private currency experience will be well-advised to look for them elsewhere.

Because there’s so much I feel compelled to say about Fung et al.’s paper, I’ve decided to devote several posts to it. In this one, I’ll assess that paper’s claims regarding the supposed shortcomings of Canada’s private banknote currency. In the follow-ups, I’ll address their claim that it took government regulations to perfect that currency, and their claim that Canada’s experience with private banknotes points to the likely need for government intervention to correct inherent shortcomings of today’s digital currencies. Finally, I’ll share my thoughts regarding the real lessons to be learned from Canada’s 19th-century currency system.

Supposed Shortcomings of Canada’s Private Currency

Following Canada’s Confederation in 1867, that country’s paper currency consisted mainly of the circulating notes (or “bills,” as the Canadians called them) of a couple dozen commercial banks, plus some government-issued paper money known as “Dominion” notes. Although Dominion notes were made legal tender, both they and bank notes were payable on demand in specie. Unlike the notes of U.S. national banks, which had to be secured by certain U.S. government bonds, Canadian bank notes were backed by their issuers’ general assets. Canada’s banks were also free, unlike their U.S. counterparts, to establish note-issuing branches anywhere in that country, and even beyond it (several had New York City branches). Nor were the banks required to maintain any specific amount of cash reserves. After 1871, banknotes were limited to denominations of $4 or more; and in 1880 that minimum was raised to $5. The banks’ charters also limited their circulation to their paid-in capital; however that restriction didn’t become binding until the outbreak of the U.S. Panic of 1907. In short, the supply of Canadian banknote currency came very close to being completely unregulated.

That lack of regulation, according to Fung et al., caused Canada’s private banknote currency to go awry in several ways. It was, they say, subject to “considerable” counterfeiting. And prior to the passage of the Bank Act of 1890, it was also neither perfectly safe nor perfectly uniform. Bank failures sometimes exposed note holders to long delays in payment, if not to outright losses; and banknotes sometimes traded at a discount from their face values.


How serious were these imperfections? Although Fung et al. speak of “considerable” counterfeiting, the adjective merely means that, at one time or another, attempts were made to counterfeit the notes of most banks, and that now and then substantial amounts of counterfeits were produced. It doesn’t follow that the counterfeits in question were capable of fooling experienced bank tellers (Fung et al. themselves recognize that many were of “poor quality”), much less that they were a serious menace to legitimate banks of issue (if they were, the record is silent about it). Still, counterfeits had their victims, and as such were a blemish on the Canadian system’s record.


Regarding banknotes’ safety, Fung et al. note that, of 55 Canadian banks that operated at some time between 1867 and 1895, only three wound up without paying their note holders in full. The Bank of Acadia, which failed in 1873, left most of its outstanding notes unpaid, whereas the Mechanics Bank of Montreal, which failed in 1879, and the Bank of Prince Edward Island, which failed in 1881, paid 57½ and 59½ cents on the dollar, respectively.

Considering the size of the banks that failed, as measured by their total note circulation, these already very modest losses appear even less significant. At the time of its failure, the Mechanics Bank of Montreal had only $168,132 in notes outstanding. The circulation of the Bank of Prince Edward Island, at $264,000, wasn’t all that much greater. The Bank of Acadia, finally, was an outright fraud, for which no actual circulation figures exist. Assigning to its circulation the almost certainly too-generous value of $50,000, and allowing for a total circulation of all Canadian banks of about $25 million, the notes of the three failed banks made up less than 2 percent of the total. Not perfect, to be sure; but not bad at all.
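The arithmetic behind that “less than 2 percent” figure is easy to check. Here is a minimal sketch using only the figures quoted above; note that the Bank of Acadia’s $50,000 is the deliberately generous guess described in the text, not a recorded number:

```python
# Share of total Canadian banknote circulation represented by the three
# failed banks, using the figures quoted in the text.
failed_circulation = {
    "Mechanics Bank of Montreal (1879)": 168_132,
    "Bank of Prince Edward Island (1881)": 264_000,
    "Bank of Acadia (1873, generous guess)": 50_000,
}
total_circulation = 25_000_000  # approximate total for all Canadian banks

share = sum(failed_circulation.values()) / total_circulation
print(f"Failed banks' share of circulation: {share:.1%}")  # roughly 1.9%
```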

Stepping back to take in a still bigger view, the Canadian banks’ record looks even better: in all, between 1867 and the end of the century, the grand total of losses of all creditors of failed Canadian banks amounted to $2,000,000, which was less than 1 percent of the banks’ obligations.

But to really appreciate how safe Canada’s banks were, one needs to compare their performance to that of banks elsewhere — something Fung et al. don’t bother to do. The contrast between Canadian banks’ safety and that of their contemporary U.S. counterparts — the notes of which were, remember, fully backed by U.S. government bonds — is especially striking. According to Andrew Frame, between 1863 and 1896, 330 national banks failed. Of $98,322,170 in accumulated claims against them, less than 64 percent had been paid by the end of the period, leaving $35,556,026 still due to creditors. Another 1,234 state banks also failed, leaving $120,541,262 out of $220,629,988 in debts unpaid. In other words, the record of recoveries from U.S. bank failures taken as a whole was not much better than that of two of the Canadian system’s three worst deadbeats!
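Frame’s figures imply recovery rates that can be computed directly; a quick sketch, again using only the numbers quoted above:

```python
# Recovery rates on U.S. bank failures, 1863-1896, from Frame's figures.
national_claims, national_unpaid = 98_322_170, 35_556_026
state_claims, state_unpaid = 220_629_988, 120_541_262

national_rate = (national_claims - national_unpaid) / national_claims
state_rate = (state_claims - state_unpaid) / state_claims
overall = (national_claims - national_unpaid + state_claims - state_unpaid) / (
    national_claims + state_claims)

# Canada's two partially paying failures returned 57.5 and 59.5 cents
# on the dollar; the overall U.S. rate falls short of both.
print(f"National banks: {national_rate:.1%}")  # just under 64%
print(f"State banks:    {state_rate:.1%}")
print(f"All failures:   {overall:.1%}")
```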

There was, however, as Fung et al. also point out, more to the imperfect safety of Canadian banknotes than these recovery statistics suggest, for in some cases holders of failed banks’ notes who were eventually paid in full had to wait for months, and in one instance more than two years, for their money, or else had to settle for less by selling their notes at a discount. But here again, the extent of the losses involved mustn’t be exaggerated. Of the eight failed banks other than the three that never paid their notes in full, five had fewer than $50,000 in outstanding notes, and the circulation of one — the Bank of Liverpool — was just $3,368! Furthermore, according to George Hague (1825-1915), a long-time Canadian banker and author of several highly-regarded works on the history and workings of the Canadian system, the notes of failed banks “have generally maintained their value, or fallen only to a slight discount until finally paid.” Payment also appears to have been made, in most cases, in a matter of a few months at most.

Fung et al. point, on the other hand, to two notorious cases — those of the Consolidated Bank of Canada (circulation $423,819), which suspended in 1879, and of the Maritime Bank of the Dominion of Canada (circulation $314,288), which did so in 1887. Some holders of the Consolidated’s notes, they observe, submitted to discounts of 10-25 percent rather than wait for payment, while it took those who held notes of the Maritime Bank more than two years to be paid in full.

Concerning these exceptions, in one case — that of the Maritime Bank — the bank’s liquidators were sued by the Receiver-General of the Province of New Brunswick, which was among its depositors. The Receiver-General claimed that, because it was a representative of the Imperial Government, the royal prerogative gave it priority over the bank’s other creditors, including note holders. (This was, it bears noting, notwithstanding the 1880 statute giving bank note holders a first lien.) The suit set in motion a protracted sequence of trials and appeals, culminating in a Supreme Court verdict in the provincial government’s favor. It’s therefore not inaccurate to say that, if the Maritime Bank’s note holders, who were eventually paid in full, suffered in the meantime, the fault was neither the bank’s nor its liquidators’, but that of Canadian government authorities themselves, who were more interested in pressing their own claims than in satisfying those of Canada’s citizens at large.

The case of the Consolidated Bank of Canada was, on the other hand, a genuine, if singular, blot on the Canadian system’s record, which led to the indictment, and nearly to the conviction, of several of the bank’s officers for making a “willfully false and deceptive statement” regarding the bank’s condition prior to its failure. (The gory details can be found here.) Eventually a broker paid $260,000 for the Consolidated’s assets, which it reported as being worth over $3 million at the time of its suspension!


Finally, note discounts. It’s true that, before 1890, the notes of perfectly solvent Canadian banks sometimes commanded less than their full face values at places remote from their sources. Unlike information concerning discounts on antebellum U.S. bank notes, which can be had from numerous “banknote reporters” published at the time, details concerning Canadian banknote discounts are relatively few and far between — a fact that itself suggests that the problem was not so severe. What few details there are also suggest that note discounts were modest. According to L. Carroll Root (p. 323), before 1890 notes from Nova Scotia and New Brunswick tended to pass at a “slight” discount in both Toronto and Montreal, while those of Toronto and Montreal banks were discounted — again, slightly — only in the Northwest.

The Nirvana Fallacy

There’s no denying that, whatever its merits, Canada’s private currency was less than perfect. But so what? Imperfection alone is no proof of market failure. To assume otherwise is to subscribe to what Harold Demsetz famously named “the Nirvana fallacy”:  the view that “implicitly presents the relevant policy choice as between an ideal norm and an existing ‘imperfect’ institutional arrangement,” instead of recognizing that the relevant choice must be one between alternative realizable arrangements. More concretely and precisely, it’s necessary to ask, not whether Canada’s private banknote currency was “imperfect,” but whether it was inefficient. Alas, that is something Fung et al. never do.


Yet the imperfections of Canada’s commercial banknote currency were certainly not such as might supply prima facie grounds for supposing that it was inefficient. Take counterfeiting. Yes, Canadian banknotes were counterfeited. But the same may be said for virtually every paper currency that has ever been issued, including every sort of official (that is, government or central-bank issued) paper currency. Experience shows, moreover, that even the most elaborate — and expensive — schemes for thwarting counterfeiters are incapable of deterring them. One need only consider the Fed’s frequent, futile efforts to render its currency counterfeit-proof. Nor has the Bank of Canada been much luckier.

Indeed, if you’re looking for troublesome high-tech counterfeits, the best place to look for them is, not among Canada’s private banknotes, but among those paper currencies, including Canada’s 19th-century Dominion notes and today’s fiat monies, that qualify as legal tender. Counterfeits are usually detected by expert tellers working for legitimate currency issuers, rather than by ordinary members of the public. Counterfeit detection rates therefore depend on how often an issuer’s currency returns to it for processing. Unlike commercial banknotes, which tend to circulate only for relatively short periods before being returned to their supposed source, legal tender currencies tend to circulate until they wear out, that is, for years rather than a few days. Consequently the risk of fakes, and good ones especially, being quickly detected is relatively low. And the more widely a legal tender currency circulates, the safer it is to imitate, other things equal.

It was partly for that reason (but also because their designs were no better than those of commercial banknotes) that circulating Dominion notes were no strangers to the forgers’ wiles. Fung et al. themselves point out that $1 Toronto (Dominion) notes of 1870, and both $1 and $2 Montreal and Toronto notes of 1878, were “extensively counterfeited.” (According to one numismatic reference work, counterfeit specimens of the $2 Dominion notes of 1878 actually outnumber genuine ones!) The 1887 $2 notes were also counterfeited, though sources do not say how extensively. All this may not sound so bad, until you realize that, apart from some 25¢ shinplasters, $1 and $2 bills were the only Dominion note denominations that actually circulated, larger ones having been used only as bank reserves. Nor is that all: the 1870 and 1878 $1 notes were the only $1 notes supplied before 1897, while the 1870, 1878, and 1887 $2 notes were the only $2 notes available until 1897. In short, to observe, as Fung et al. do, that the appearance of Dominion notes “did not improve the situation with regard to counterfeiting,” is to put it mildly. The truth is that all pre-1897 Dominion notes were counterfeited, and most were counterfeited “extensively.”

To treat the fact that Canada’s private banknotes were counterfeited as a flaw, despite the even more aggressive and troublesome counterfeiting of their closest substitutes, Dominion notes, is a perfect example of the Nirvana fallacy at work.


The same may be said of Fung et al.’s claim that Canadian banknote currency was flawed because persons who held it sometimes suffered losses when banks whose notes they held failed. Although we tend today to take for granted that currency should be free of default risk, we do so in part because we’re used to irredeemable fiat monies, which of course aren’t IOUs at all; alternatively, they are, in the immortal words of former Federal Reserve Bank of New York vice president John Exter, “IOU nothings.” Still more precisely, fiat currencies, or most of them at any rate (including Federal Reserve Notes and modern Bank of Canada Notes), are free of default risk because their issuers have already defaulted. It’s hard to break an already broken promise!

Things were, of course, different in the days of the gold standard. To their credit, Fung et al. recognize this when, in discussing the characteristics of Dominion notes, they observe (p. 23) that “no fiduciary currency is 100 per cent safe.” The question, then, is whether Canada’s commercial banknotes were excessively risky compared to other fiduciary alternatives. Since no holder of Dominion notes suffered any loss until Canada went off the gold standard during the Great Depression, it’s easy enough to conclude that commercial banknotes were riskier. But it doesn’t follow that they were excessively so, because the extra safety of Dominion notes came at a price, consisting of their exceptionally high specie backing.

Would the extra cost have been a price worth paying to spare Canadian banknote holders their relatively modest losses? I doubt it: according to George Hague, although the Canadian system exposed note holders to some risk of loss, it also “rendered the small amount of active capital possessed in a partially developed country available to the utmost extent possible.” As anyone knows who has read Book II, Chapter II of The Wealth of Nations, or my own Cliff Notes version, available here and here, or Rondo Cameron’s Banking in the Early Stages of Industrialization, maintaining such heavy specie reserves meant devoting fewer funds toward productive investment.

Still more do I doubt that it would have made sense to sacrifice the famous “elasticity” of Canada’s commercial banknote currency — a feature that helped Canada to avoid U.S.-style currency panics — for the sake of giving banknote holders a little more security. Yet such a sacrifice is exactly what would have been required had Canada stuck to its original plan to have Dominion notes, with their 100-percent marginal specie reserve requirement, supplant banknotes. I cannot be certain of that judgment; what I do know is that Fung et al. never bother to demonstrate that available alternatives to Canada’s imperfect banknotes would actually have been better, let alone perfect.


And how about those “slight” discounts to which notes of solvent banks were sometimes subject? Were they proof positive of market failure? Hardly. Just as banks that deal in foreign currency notes today must cover the costs involved in shipping those notes to and from their sources, early banks and banknote brokers had to be compensated for the cost of returning domestic banknotes to their (sometimes far-away) sources for redemption. In Canada, those costs were anything but trivial. Though Canada’s combined provinces are geographically larger than the U.S., at the time of its first census, in 1871, Canada had only 2,779 miles of railroad to the United States’ 45,000 miles. Nor was the first trans-Canada railroad completed until 1885, some 16 years after the driving of the golden spike at Promontory Summit. Canada also had only about one-tenth as many people, most of whom lived far from its cities. And “cities” is being generous: only nine held more than 10,000 people, and only one (Montreal) had a population exceeding 100,000. Small wonder that banknotes found far from their sources were likely to be discounted!

Moreover the tendency, independent of any legislative interference, was for those discounts to decline over time, and often to vanish altogether, as banks expanded their branch networks (and also, in time, as they found it worthwhile to form clearinghouses). For according to Roeliff Morton Breckenridge, upon whom Fung et al. rely for much of their information concerning Canadian banknotes’ lack of uniformity, it was only “the notes of a bank without a branch in the neighborhood [that] did not circulate at their par value in localities remote from where they were payable” (my emphasis).

That Canada’s established banks at first hesitated to establish branches in any but the most settled and thriving communities was only reasonable. As George Hague explains, in the early days “restless, and even reckless, persons” outnumbered other sorts in Canada’s less populous towns and villages, so that any branch located in them had to exercise great care to keep its customers’ savings from “being lost in foolish projects and hastily considered enterprises.” Even so, according to Breckenridge (p. 354), the banks “rendered yeoman service…extending their field of operations as fast, probably, as the growth of the country warranted,” so that, by the early 1890s, Canada’s banks collectively had branches, or their headquarters, “in almost every community where there is accumulation, commerce, and credit.”

If Canadian banks’ branch networks didn’t grow, and note discounts therefore didn’t disappear, more rapidly than they did, the parsimonious explanation is, not that there was a market failure, but that a faster rate of expansion wouldn’t have been economically worthwhile. Better to let people bear a discount on “foreign” banknotes now and then than waste resources by placing bank branches (let alone clearinghouses) where the risks are too great, much less where wolves still outnumbered people.

The gains from other schemes for keeping banknotes current might likewise fall short of the schemes’ cost. Consider what transpired in the U.S. There, a “uniform” currency was first established in 1864. But by what means, and at what price? The deed was done, first, by stamping out the notes of state-authorized banks (a step that itself almost certainly did more harm than good), and, second, by including a clause (section 32) in the 1864 National Bank Act stipulating that every national bank must “take and receive at par, for any debt or liability to it” the notes of all other national banks.[1] That of course meant that the banks could no longer afford to sort, mail, and return notes of rival banks to their sources for payment, as they would normally have been inclined to do. But, hey! The Draconian measure did after all give us a “uniform” currency, so what difference did it make whether the banks liked it or not?

Plenty, actually. Remarking on the routine redemption of Canada’s commercial banknotes, Sir Byron Walker, the General Manager of the Canadian Bank of Commerce and a V.P. of the Canadian Bankers’ Association, speaking at a bankers’ gathering in Chicago in 1893, observed that

This great feature in our system as compared with the National Banking System, is generally overlooked, but it is because of this daily actual redemption that we have never had any serious inflation of our currency, if indeed there has ever been any inflation at all (p. 14).

You see, a currency that was scarcely ever sent back to its sources for redemption bore the characteristics, not of ordinary, low-powered banknotes, but of a high-powered fiat money. The national banks might therefore have been tempted to issue them without restraint, had they not been simultaneously burdened by bond-backing requirements and, until 1875, a limit on their aggregate note issues.

U.S. authorities, who wished to remove that aggregate limit, were fully aware of the problem, and sought to correct it by establishing, in 1874, a national banknote redemption bureau in Washington. Unfortunately that solution never worked very well: for the most part, banks only sent the bureau tattered, worn, and soiled notes that they couldn’t get rid of otherwise. It was, as Larry White and I have explained elsewhere, partly owing to the bureau’s inadequacy, which informed legislators’ reluctance to relax the bond-collateral constraints limiting national banks’ capacity to issue more notes, that the U.S. found itself saddled, for the rest of the century and beyond, with an utterly inelastic currency system and consequent, recurring crises.

Might U.S. citizens have been better-off, after all, putting up with the occasional banknote discount?[2] To anyone conversant with the sad history of late 19th-century U.S. panics, the question answers itself. Once again, imperfect doesn’t always mean inefficient.

(To be continued.)


[1] Fung et al. (p. 30) mistakenly claim that it was only after the establishment of the Washington note redemption bureau in 1874 that “notes of the various national banks exchanged at par.” For the real purpose — and inadequacy — of that bureau, keep reading.

[2]  When I say “occasional,” I mean it: as shown in my paper on state banknotes also linked to above, by the autumn of 1863, or not long before the reforms that were to establish a uniform U.S. currency, the aggregate discount on state banknotes (excluding notes of banks in the Confederacy, which were then no longer trading in Northern markets) was trifling. Had one purchased all the notes in question at face value, and then sold them to brokers in either Chicago or New York for what the brokers were paying in those markets, the loss would have amounted to less than one percent, even reckoning any note that brokers listed as unknown or uncertain as worthless.

[Cross-posted from]

The multi-faceted controversy over Donald Trump’s taxes has been rejuvenated by a partial leak of his 2005 tax return.

Interestingly, it appears that Trump pays a lot of tax. At least for that one year. Which is contrary to what a lot of people have suspected—including me in the column I wrote on this topic last year for Time.

Some Trump supporters are even highlighting the fact that Trump’s effective tax rate that year was higher than what’s been paid by other political figures in more recent years.

But I’m not impressed. First, we have no idea what Trump’s tax rate was in other years. So the people defending Trump on that basis may wind up with egg on their face if tax returns from other years ever get published.

Second, why is it a good thing that Trump paid so much tax? I realize I’m a curmudgeonly libertarian, but I was one of the people who applauded Trump for saying that he does everything possible to minimize the amount of money he turns over to the IRS. As far as I’m concerned, he failed in 2005.

But let’s set politics aside and focus on the fact that Trump coughed up $38 million to the IRS in 2005. If that’s representative of what he pays every year (and I realize that’s a big “if”), my main thought is that he should move to Italy.

Yes, I realize that sounds crazy given Italy’s awful fiscal system and grim outlook. But there’s actually a new special tax regime to lure wealthy foreigners. Regardless of their income, rich people who move to Italy from other nations can pay a flat amount of €100,000 every year. Note that we’re talking about a flat amount, not a flat rate.

Here’s how the reform was characterized by an Asian news outlet.

Italy on Wednesday (Mar 8) introduced a flat tax for wealthy foreigners in a bid to compete with similar incentives offered in Britain and Spain, which have successfully attracted a slew of rich footballers and entertainers. The new flat rate tax of €100,000 (US$105,000) a year will apply to all worldwide income for foreigners who declare Italy to be their residency for tax purposes.

Here’s how Bloomberg/BNA described the new initiative.

Italy unveiled a plan to allow the ultra-wealthy willing to take up residency in the country to pay an annual “flat tax” of 100,000 euros ($105,000) regardless of their level of income. A former Italian tax official told Bloomberg BNA the initiative is an attempt to entice high-net-worth individuals based in the U.K. to set up residency in Italy… Individuals paying the flat tax can add family members for an additional 25,000 euros ($26,250) each. The local media speculated that the measure would attract at least 1,000 high-income individuals.

Think about this from Donald Trump’s perspective. Would he rather pay $38 million to the charming people at the IRS, or would he rather make an annual payment of €100,000 (plus another €50,000 for his wife and youngest son) to the Agenzia Entrate?
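To put that choice in concrete terms, here is a back-of-the-envelope sketch; the exchange rate is an assumption taken from the US$105,000 figure the quoted articles give for €100,000:

```python
# Hypothetical comparison: Trump's reported 2005 federal tax bill versus
# Italy's flat levy for a wealthy foreign resident plus two family members.
us_tax_2005 = 38_000_000               # dollars, as reported
italy_flat_eur = 100_000 + 2 * 25_000  # taxpayer plus wife and son
eur_to_usd = 1.05                      # assumed, per the quoted articles

italy_flat_usd = italy_flat_eur * eur_to_usd
print(f"Italy's levy: ${italy_flat_usd:,.0f} vs. ${us_tax_2005:,} to the IRS")
```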

Seems like a no-brainer to me, especially since Italy is one of the most beautiful nations in the world. Like France, it’s not a place where it’s easy to become rich, but it’s a great place to live if you already have money.

But if Trump prefers cold rain over Mediterranean sunshine, he could also pick the Isle of Man for his new home.

There are no capital gains, inheritance tax or stamp duty, and personal income tax has a 10% standard rate and 20% higher rate.  In addition there is a tax cap on total income payable of £125,000 per person, which has encouraged a steady flow of wealthy individuals and families to settle on the Island.

Though there are other options, as David Schrieberg explained for Forbes.

Italy is not exactly breaking new ground here. Various countries including Portugal, Malta, Cyprus and Ireland have been chasing high net worth individuals with various incentives. In 2014, some 60% of Swiss voters rejected a Socialist Party bid to end a 152-year-old tax break through which an estimated 5,600 wealthy foreigners pay a single lump sum similar to the new Italian regime.

Though all of these options are inferior to Monaco, where rich people (and everyone else) don’t pay any income tax. Same with the Cayman Islands and Bermuda. And don’t forget Vanuatu.

If you think all of this sounds too good to be true, you’re right. At least for Donald Trump and other Americans. The United States has a very onerous worldwide tax system based on citizenship.

In other words, unlike folks in the rest of the world, Americans have to give up their passports in order to benefit from these attractive options. And the IRS insists that such people pay a Soviet-style exit tax on their way out the door.

President Donald Trump signed an executive order to create a “Comprehensive Plan for Reorganizing the Executive Branch.” The order requires his budget director, Mick Mulvaney, to complete a plan recommending specific spending cuts based on input from federal agencies and outside scholars.

This is a promising initiative. It will be up to Congress to enact the administration’s plan into law, but Mulvaney is a serious reformer who will likely use this opportunity to push for substantial terminations.

The executive order does not just ask for modest efficiency gains, but for major cuts:

The proposed plan shall include, as appropriate, recommendations to eliminate unnecessary agencies, components of agencies, and agency programs, and to merge functions.

The plan contemplates a revival of federalism:

In developing the proposed plan … the Director shall consider … whether some or all of the functions of an agency, a component, or a program are appropriate for the Federal Government or would be better left to State or local governments or to the private sector through free enterprise.

As it turns out, the federal budget includes 1,100 aid-to-state programs costing almost $700 billion a year that “would be better left to state and local governments.” As for free enterprise, we could start by weaning farmers off welfare and allowing them to earn a living in the marketplace like the rest of us do.

The executive order asks Mulvaney to consider, “whether the costs of continuing to operate an agency, a component, or a program are justified by the public benefits it provides.” This is a call for Mulvaney to initiate detailed cost-benefit analyses of spending programs. Federal law currently requires cost-benefit analyses of regulations, but there is no similar accountability for spending programs.

Consider, for example, that Congress spends $8 billion a year on farm insurance subsidies. Taxpayers are supposed to take it on faith that this is a good use of their money. Sorry, but that is just not good enough anymore in an era of $600 billion budget deficits.

So, as a first step, Mulvaney should identify a few dozen major programs that outside experts have pointed to as dubious (such as farm insurance subsidies) and subject them to a rigorous cost-benefit analysis. Such analyses would include the deadweight losses imposed by each program’s needed tax funding, as well as other sorts of damage to society.

Prior presidents have “reorganized” the government in harmful and expansive ways. George W. Bush compounded bureaucracy, wasted money, and reduced efficiency by creating a Homeland Security superstructure on top of 22 existing federal agencies.

The language of Trump’s executive order suggests that he will move in the opposite direction, and Mulvaney is the right man to lead this effort.

To understand the failings of federal bureaucracies, see here.

For a menu of high-priority cuts, see here.

The Cato Institute has long been unique in Washington, D.C.’s foreign policy debate. For years, our scholars have argued that there is essentially no debate over grand strategy here in the nation’s capital. Vigorous political battles about U.S. foreign policy tend to happen only within a very narrow range of opinion, usually centering on tactics rather than competing strategic visions. These surface-level disagreements mask a bipartisan consensus in favor of a grand strategy of primacy (alternatively termed “liberal hegemony” or “deep engagement”), which is further buttressed by an extensive network of foreign policy professionals within the national security bureaucracies. The consensus sees the United States as the indispensable nation - the policeman of the world - that must maintain military preponderance and extensive security commitments in Europe, the Middle East, and Asia in the name of upholding the international order.

The election of Donald Trump to the presidency has, in an odd way, created incentives for a debate about grand strategy. The president’s erratic and often contradictory utterances on alliances, free trade, and interventionism - what the Brookings Institution’s Thomas Wright describes as Trump’s Jekyll and Hyde foreign policy - have occasionally called into question the core foundations of U.S. grand strategy in the post-WWII era. Unfortunately, Trump is just about the worst vessel for ushering in such a debate, for reasons that are too numerous to count but include his economic protectionism, chauvinistic nationalism, habitual threat inflation, and worrying illiberal tendencies. Nevertheless, the shock to the status quo that is Trump’s rise has elicited a number of well-publicized defenses of primacy by people in the Washington foreign policy community.

And that’s what makes an upcoming Cato Institute event so timely and important. The debate over grand strategy in academia has always been comparatively robust, and two leading scholars who advocate the continuation of America’s deep engagement, Stephen G. Brooks and William C. Wohlforth, both professors at Dartmouth College, will be here on March 21 to discuss their newest book, America Abroad: The United States’ Global Role in the 21st Century. Our two discussants are at the other end of the spectrum on grand strategy: Cato’s own Benjamin H. Friedman and Eugene Gholz, professor at the University of Texas at Austin and Cato adjunct scholar. 

Please join us for this vital discussion of America’s role in the world. Register to attend the event here.

A new Congressional Budget Office report projecting the effects of the House Republican leadership’s American Health Care Act weakens the case for the bill’s ObamaCare-lite approach, and strengthens the case for full repeal. The CBO projects that over the next two years, the AHCA would cause average premiums to rise 15 percent to 20 percent above ObamaCare’s already high premium levels. The report raises the prospect that insurance markets may collapse under the AHCA, just as they are collapsing under ObamaCare. It makes unreasonable assumptions about Medicaid spending; more reasonable assumptions could completely eliminate the bill’s projected deficit reduction. Finally, the CBO projects more people will lose coverage under the AHCA than under full repeal.

ObamaCare-Lite, ObamaCare-Forever

The AHCA purports to repeal and replace ObamaCare. In reality, it would do no such thing.

In a previous post, I wrote:

This bill is a train wreck waiting to happen.

The House leadership bill isn’t even a repeal bill. Not by a long shot. It would repeal far less of ObamaCare than the bill Republicans sent to President Obama one year ago…

[It] merely applies a new coat of paint to a building that Republicans themselves have already condemned…If this is the choice, it would be better if Congress simply did nothing.

The AHCA retains all the powers ObamaCare gives the federal government over private insurance, gives those powers a bipartisan imprimatur, and therefore gives them immortality. Its repeal of ObamaCare’s Medicaid expansion would likely never take effect. It fails to create real block grants in Medicaid, and preserves perverse incentives from both the “old” Medicaid program and the expansion. It would create an ongoing series of crises in the individual market, for which Republicans would take the blame and suffer at the polls, at the same time it would create pressure for more taxes and government spending. It’s hard to imagine what House Republicans were thinking.

Premiums and Market Stability

Full repeal, in particular repeal of ObamaCare’s health-insurance regulations, would cause premiums to fall for the vast majority of consumers in the individual market.

In contrast, the AHCA would increase premiums from their already high ObamaCare levels. “In 2018 and 2019…average premiums for single policyholders in the nongroup market would be 15 percent to 20 percent higher than under current law,” the CBO reported.

Premium increases of that magnitude could further destabilize ObamaCare’s health-insurance Exchanges. Adverse selection has already led to an exodus of insurers from the individual market. ObamaCare has driven every last insurer from the Exchange in 16 counties in Tennessee, leaving 43,000 residents with no health insurance options for 2018. In a thousand other counties around the country, the law has driven all but one insurer from the Exchange. Nearly 3 million people in those counties are just one carrier exit from being in the same position as those 43,000 Tennesseans.

The CBO posits that, nonetheless, “the nongroup market would probably be stable in most areas under either current law or the legislation.”

In most areas. Probably.

Supporters of the legislation note that the CBO projects the average premiums would then begin to fall after 2019. One reason is that the AHCA would end one of ObamaCare’s health-insurance regulations (actuarial-value requirements). Another is that the CBO predicts states would use the AHCA’s new Patient and State Stability Fund to subsidize high-cost enrollees.

There are reasons to doubt this prediction. First, it assumes the Exchanges survive the ensuing adverse selection and make it to 2020. Second, the Patient and State Stability Fund would not reduce premiums. Like ObamaCare’s reinsurance program, it would hide a portion of the full premium by shifting it to taxpayers. So even though the CBO reports that the portion of the premium that consumers see would fall 10 percent by 2026, it is not accurate to say premiums would fall. We don’t know if the full premium would fall or rise after 2019, because the CBO isn’t telling us.

Taxes, Spending, and Deficits
On paper the AHCA cuts taxes and government spending. But it also sets forces in motion that could undo those gains.

The CBO projects the AHCA would reduce federal spending by $1.2 trillion over ten years and reduce tax revenues by $883 billion, for a total reduction in the deficit of $337 billion. That certainly makes the bill appear attractive. Until you look at the details.

Take the bill’s Medicaid provisions. The CBO projects the bill would reduce Medicaid spending by $880 billion. The reduction would come both from phasing out ObamaCare’s Medicaid expansion, and from changing how the federal government pays for each state’s Medicaid program.

I doubt these savings will materialize. In my previous post, I wrote:

When eventually we see a Congressional Budget Office score of the bill (House leadership has numbers, but they’re not sharing them), it may show a reduction in federal spending on the Medicaid expansion after 2020. I would not bet on that happening.

True enough, the CBO bases those projected spending reductions on assumptions I do not find reasonable.

For instance, the CBO assumes that under current law, some number of the 19 states that have refused to implement ObamaCare’s Medicaid expansion would eventually do so. The AHCA reduces the cost to states of implementing the expansion. Yet rather than assume even more states would implement the expansion under the AHCA, the CBO assumes no states would. That makes no sense.

The AHCA would reduce the risks to states of implementing the expansion. Prior to or absent the AHCA, states face the risk that Congress might reduce the enhanced federal funding ObamaCare provides states for Medicaid-expansion enrollees. Such a change would mean states would go from paying 10 percent of the cost of the expansion to paying 50 percent of the cost. A five-fold increase. The AHCA eliminates that risk by holding expansion states completely harmless with respect to Medicaid-expansion enrollees who enroll prior to 2020. It would guarantee states would continue to pay only 10 percent of the cost for every Medicaid expansion enrollee, even after the bill would “repeal” the expansion by barring new enrollments starting in 2020.

The cost of expanding Medicaid would go down, yet fewer states would do it. And here I thought demand curves slope downward.

If I’m correct that more states would expand Medicaid and go on an enrollment binge prior to 2020—and especially if those decisions pressured Congress to scrap “repeal” of the expansion—the CBO’s projected savings from the AHCA would prove too optimistic. If just half of the projected Medicaid savings fail to materialize, that would wipe out all of the AHCA’s presumed budget savings.

If states game the new per-enrollee matching grant system of federal Medicaid funding, even more of those presumed spending reductions would evaporate.

Likewise, if the AHCA were to create even more instability in the individual market, it would create even more pressure for additional taxes and government spending to stabilize the market. Even more of the AHCA’s projected savings would disappear.

Coverage Levels

In January, the CBO projected that completely repealing ObamaCare, without a replacement, would increase the uninsured by 23 million people by 2026. The agency projects the AHCA’s non-repeal approach would increase the uninsured by even more—24 million people. As my colleague Josh Blackman notes, there is ample reason to believe the CBO models overstate the coverage gains achieved by ObamaCare’s individual mandate, and the coverage losses the agency projects would follow its repeal.

Even so, the CBO score confirms the folly of the House Republicans’ approach, and that there is no reason not to repeal ObamaCare in full. Like it or not, the CBO’s estimates of coverage impacts are the ones ObamaCare’s defenders and the media will cite. If Republicans are going to take the same amount of heat either way, they might as well do the right thing and do a full repeal.

Republicans could then repurpose the $361 billion they planned to spend on tax credits, using it instead to expand tax-free health savings accounts—a reform that would drive down health care prices for the poor, that Congress can enact via reconciliation, and that does not divide ObamaCare opponents the way tax credits do, not least because HSAs do not subsidize abortion as tax credits do. They could convert Medicaid into an actual system of block grants, giving states the flexibility to target Medicaid funds to those who still could not afford the care they need.

Four decades ago, the United States began a dramatic change in domestic policy, repealing swaths of economic regulation and abolishing whole agencies charged with managing sectors of the U.S. economy.

If you mention this “deregulation” today, most people think it refers to wild Reagan administration efforts to undo environmental, health, and safety protections. In fact, the deregulation movement predated Ronald Reagan’s presidency, had broad bipartisan support, and had little to do with health, safety, or environmental policy. Rather, deregulation targeted regulations that directed business operations in different sectors of the American economy: which airlines could service which routes, what railroads could charge what amounts for their services, how telephone service would be billed and what technologies would be used, how the power industry was organized, and much more.

For decades, policy researchers had compiled evidence that those regulations harmed consumers and stunted economic growth by suppressing competition and innovation. With America mired in the stagflation of the 1970s, policymakers decided to stop sheltering (some) U.S. businesses from the demands of consumers and the competition of upstart and foreign rivals.

That policy change now seems obviously virtuous, but at the time some commentators predicted it would unleash mayhem and disaster: a crippled economy, spiraling prices, “ruinous” competition, frightened consumers, plane crashes, hobbled communications, and other horribles. Fortunately, those frightful predictions did not obstruct reform. Today, the 1970s–1990s deregulations are broadly recognized as having yielded great benefits to consumers and contributed to the two decades of American prosperity that ended the 20th century. (For more on deregulation, see the soon-to-be-released spring issue of Regulation, celebrating the magazine’s 40th anniversary. Links forthcoming.)

Which brings us to current criticisms of Trump administration efforts to launch a new wave of deregulation. Like yesteryear, the critics are predicting mayhem and disaster. But their arguments aren’t convincing.

Consider, for instance, Northwestern University law professor Andrew Koppelman’s warning that “Trump’s ‘Libertarianism’ Endangers the Public.” (Credit Koppelman for using scare quotes to indicate that President Trump isn’t a libertarian.) Specifically, he worries about Trump’s recent order on regulation, which instructs agencies to (temporarily) keep the nation’s aggregate cost of regulatory compliance at its current level and to repeal two regulations for every new one adopted.

Writes Koppelman:

When he was President, [Barack Obama] demanded (following a principle laid down by Ronald Reagan!) that any new regulations survive rigorous cost–benefit analysis. … Trump, on the other hand, has replaced cost–benefit analysis with cost analysis. Benefits are ignored. … Consumer fraud, tainted food, pollution, unsafe airplanes and trains, epidemic disease all have to be put up with, if stopping them would increase the costs of regulation.

Koppelman properly praises cost–benefit analysis, the idea that proposed regulations should be scrutinized to ensure that they do not produce more harm (cost) on net than good (benefit). But even if we assume that all federal regulations were covered by Obama’s order (they weren’t) and all of the cost–benefit analyses were accurate (ditto), there is still a serious problem with Koppelman’s argument.

He assumes that passing a cost–benefit test should be sufficient for a new rule to be implemented. Yet, human resources are limited, and resources devoted to complying with regulations cannot be devoted to producing other benefits. Put another way, all rules—even terrific ones—have opportunity costs, and ignoring those costs is bad public policy. As President Jimmy Carter explained back in 1979:

Our society’s resources are vast, but they are not infinite. Americans are willing to spend a fair share of those resources to achieve social goals through regulation. Their support falls away, however, when they see needless rules, excessive costs, and duplicative paperwork.

(H/T Richard Williams; link forthcoming.)

Federal cost–benefit analysis is supposed to take opportunity costs into account, but that accounting is dicey to say the least. So it’s sensible to place a limit—which in essence is what Trump’s order does—on the United States’ total regulatory compliance cost in order to ensure that plenty of resources can be devoted to other benefits.

Admittedly, the Trump order is a crude way to do this. The limit de facto assumes the current level of U.S. spending on regulatory compliance is the right amount, whereas Koppelman apparently believes there is no limit to what the nation should spend on new rules so long as they pass a cost–benefit test. On the other hand, many Americans say that they are already overburdened with the costs of regulatory compliance.

But crude initiatives can be useful. In theory, capping compliance costs should prompt regulators to prioritize current and prospective rules, embracing those with high net benefits and dispensing with those with low (and even negative) net benefits. It’s outlandish to think that good regulations protecting against “consumer fraud, tainted food, pollution, unsafe airplanes and trains, epidemic disease” would lose out to, say, low-value but costly regulations.

Of course, the devil is in the details, and the Trump administration’s performance so far gives little confidence about its ability to manage details. For instance, how reliable will his agencies be at estimating the costs of regulations implemented and repealed? What does it mean to repeal “two” regulations—repeal two small provisions or two whole rules? Doesn’t repeal of a regulation require the writing of a new regulation striking the old one? Will, say, the Occupational Safety and Health Administration give up on a low-value worker safety rule in order for a high-value Environmental Protection Agency rule to advance?

Still, the cost limit and the one-in, two-out requirement (versions of which have been tried in other countries) could be useful exercises to cull poor federal regulations. Contra Professor Koppelman, they shouldn’t be dismissed out of hand.

Even when it comes to protecting children, good intentions are not enough. is

a website that grew rich on classified ads for services like escorts, body rubs and exotic dancers. Far from being a marketplace for consensual exchanges, Backpage, the authorities said, often used teasers like “Amber Alert” and “Lolita” to signal that children were for sale.

In the midst of a Senate investigation, a federal grand jury inquiry in Arizona, two federal lawsuits and criminal charges in California accusing Backpage’s operators of pimping children, the website abruptly bowed to pressure in January and replaced its sex ads with the word “Censored” in red.

And the consequences? 

Tiffany — a street name — did not stop using the site, she said. Instead, her ads moved to Backpage’s dating section. “New in town,” read a recent one, using words that have become code for selling sex. “Looking for someone to hang out with.” Other recent dating ads listed one female as “100% young” and suggested that “oh daddy can i be your candy.”


For Tiffany, 18, the demise of Backpage’s adult listings has made things far more unpredictable — and dangerous, she said. The old ads allowed her to try to vet customers by contacting them before meetings, via phone or text message. With far fewer inquiries from the dating ads, she said, her first encounters with men now take place more often on the street as she gets into cars in red light districts around the Bay Area.

For an earlier Cato discussion of the relevant First Amendment issues, see here.

For months, we’ve been following the saga of a misguided agency regulation that would have deprived some of the most vulnerable Americans of their basic due process rights. In May of last year, the Obama administration proposed a rule designating everyone who uses a “representative payee” (usually a friend or relative) to aid in filing Social Security disability forms as “mentally defective.” The practical consequence of such a change is that those deemed “mentally defective” (itself a vague and insulting term from a bygone legal era) will automatically fail their federal background check if they attempt to buy a gun. This presumption of unfitness can only be overcome after a lengthy, years-long bureaucratic process to prove one’s own competency.

We’ve written extensively on why this rule is prejudicial and unfair. During the rule’s “notice and comment” period, Cato’s Center for Constitutional Studies submitted its first-ever public regulatory comment, objecting to the rule on 10 different grounds. We pointed out that the rule is vastly overbroad, since those filers who use a “representative payee” include anyone the Social Security Administration believes “would be served thereby … regardless of the legal competency or incompetency of the individual.” Moreover, the rule is counterproductive even when applied to those who do suffer from a psychiatric disability, because those people are more likely to be the victims of violent crime than its perpetrators. Finally, we explained that the rule violates constitutional due process; the burden of proof must fall on the government before it can deprive an individual of a constitutional right.

But despite these efforts, the Obama administration forged ahead, finalizing the rule two days before President Trump took office. This seemed to be the final chapter of the story. Now, however, we can report a much happier ending, thanks to a vital law called the Congressional Review Act (CRA).

The CRA was enacted in 1996 to preserve the legislature’s role in American policy-making when agencies try to unilaterally create sweeping national rules. The Act requires agencies to submit every newly promulgated rule to Congress for review. Once a new rule has both been submitted to Congress and published in the Federal Register, Congress has a period of 60 legislative days—about six months of real time in practice—in which both houses can pass a disapproval resolution by simple majority vote (no Senate filibusters or parliamentary stall tactics are allowed). If such a resolution is passed by both houses and signed by the president, the rule in question is abolished, and no similar rule can be enacted in the future except by statute.

Soon after the “representative payee” rule was finalized, a movement began urging Congress to use the CRA to overturn it. The arguments were bipartisan; one of us (Blackman) joined with authors from the Autistic Self Advocacy Network and the National Disability Rights Network to explain why the rule was terrible for both gun rights and disability rights. Whatever one’s views are on the gun debate in America, both sides could agree that “individuals with a disability should not be scapegoated to advance gun control.”

This campaign caught on. Many of the arguments that we and others had made to agency regulators—to no avail at the time—were echoed by the people’s elected representatives. House Majority Leader Kevin McCarthy, for example, wrote that the rule “would elevate the Social Security Administration to the position of an illegitimate arbiter of the Second Amendment.”

The disapproval resolution passed both houses and has now been signed by the president, putting an end to the rule once and for all.

Elections have consequences. In this instance, it’s satisfying that one such consequence has been the end of a stigmatizing rule that never should have been proposed in the first place. As this case has demonstrated, the CRA has the potential to be an enormously important tool in the fight against misconceived regulations. The “mentally defective” rule is one of three regulations that have already been revoked using the CRA during the Trump Administration, and 11 more could be on the chopping block soon, with disapproval resolutions having passed in at least one house of Congress.

Even if common-sense arguments for the protection of individual rights fall on deaf ears in the federal bureaucracy, the people’s representatives still retain the ultimate power to create federal policy and vindicate those rights. That is the system the Framers designed, and that is the system the CRA helps preserve. For more on the ambitious project to use the CRA to reverse harmful regulations, see Pacific Legal Foundation’s Red Tape Rollback project.

We thank Cato legal associate Tommy Berry for his help with this blog post.