Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Even after twenty years, the North American Free Trade Agreement remains highly controversial.  Donald Trump claims that NAFTA has “destroyed this country economically,” apparently unaware that the U.S. economy is still pretty fantastic.  He has promised to pull out of the landmark free trade agreement between the United States, Mexico, and Canada unless he can renegotiate it. 

Hillary Clinton has also promised to renegotiate NAFTA.  Trump has erroneously claimed that Clinton only came out against NAFTA after he made an issue of it.  She made the same promise during her 2008 presidential campaign.  Trump also claims (much less erroneously) that Clinton will probably back out of any promise to renegotiate NAFTA after she’s elected.

If you ask the Obama administration, however, they’ll say they’ve already renegotiated NAFTA by creating the Trans-Pacific Partnership, which includes Canada and Mexico.  The idea behind that claim is that the TPP includes stronger and more enforceable labor and environment rules than NAFTA.  NAFTA’s critics on the Left have complained since its inception that NAFTA’s rules are inadequate.  Obama’s claim to have fixed that “problem” through the TPP is a bit comical, even if nominally true, since those same critics also despise the TPP for largely the same reasons.

Clinton has raised the labor and environment complaint in her own condemnations of NAFTA, along with concern over investment rules and dispute settlement procedures.

Trump’s criticisms of NAFTA have been characteristically vague.  He seems to be caught up in an economically misguided concern for bilateral trade deficits.  Presumably, Trump would want to raise tariffs on goods from Mexico and somehow have Mexico agree to that within NAFTA.

The reason no one is going to renegotiate NAFTA is that Mexico and Canada are not interested.  Most likely, their trade officials understand that U.S. elections bring out a lot of protectionist saber-rattling that gets lost in the shuffle once new Presidents get to work crafting and implementing their international economic policy, which presumably doesn’t involve impoverishing the United States and its neighbors. 

Two-way trade between the United States and Mexico rounds out to about $1.4 billion per day.  That figure includes well-established cross-border supply chains that enable integrated North American industries.  The economic consequences of disrupting that trade are serious, and pulling out of NAFTA might actually accomplish all the horrible things Trump has blamed NAFTA for doing. 

Last week, the massive backlog in cases in the federal immigration courts crossed the half a million threshold. Immigrants currently being processed will have waited an average of almost two years for a judge to adjudicate their cases. The backlog has grown at a time when illegal immigration has fallen dramatically and the unauthorized population has shrunk. 

Summary of Findings

  • The immigration courts’ lower productivity accounts for all of the increase in pending cases since 2009.
  • Immigration courts are finishing far fewer cases in recent years, completing just 58 percent as many cases in 2015 as they did in 2005.   
  • Nearly 100 percent of the decline in productivity from 2005 to 2016 occurred before the surge in unaccompanied children in 2014.
  • The number of immigration judges, which has increased 18.5 percent from 2005 to the first quarter of 2016, does not explain the backlog.
  • Each immigration judge completed just 60 percent as many cases in 2015 as they did in 2005.
  • The complexity of the cases fails to explain the decreased efficiency. Immigration judges are on pace to complete just 44 percent as many merits decisions in 2016 as they did in 2006.
  • In October 2012, the Department of Justice’s Inspector General called procedural continuances a “primary factor” in the court’s inefficiency.

More Immigration Judges Are Completing Fewer Cases

The most common explanation for the growing number of cases pending before the immigration courts is that the number of immigration judges has not kept up with the number of new cases. But as Figure 1 shows, the number of immigration judges has increased sharply since the early 2000s, yet the backlog continues to grow.

Figure 1: Immigration Court Judges and New Cases (FY 2001 to First Quarter of FY 2016)


Sources: Judges: Office of Personnel Management via TRAC Immigration of Syracuse University (1998-2009), Director of the Executive Office for Immigration Review (2010-2016); Cases: TRAC Immigration of Syracuse University

The number of pending cases only grows when the immigration courts complete fewer cases than they receive. As can be seen in Figure 2, the courts were completing almost as many cases as they received from 2000 to 2008, and the backlog remained roughly constant. The courts became more productive from 2001 to 2006 before the trend reversed. They steadily finished fewer cases from 2007 to 2014. The lower productivity has persisted even years after the hiring surge from 2009 to 2011.

Figure 2: Completed Cases, New Cases, and Immigration Judges (2000-2016)

Sources: Judges: See Figure 1; Completions: TRAC Immigration of Syracuse University (completions sum all removals, voluntary departures, grants of relief, terminations, and closures)

Since fewer cases are being completed while more immigration judges are being added to the courts, the number of completions per judge has fallen to its lowest level ever. As early as October 2012, the Inspector General (IG) for the Department of Justice (DOJ) reported:

During this same 5-year period [from 2006 to 2010] that the completion rate was declining, the number of immigration judges was increasing…. Despite the increase in judges, the overall efficiency of the courts did not improve. 

Figure 3 gives the number of cases that the average adjudicating immigration judge completed from 1998 to 2016.

Figure 3: Cases Completed Per Immigration Judge

Source: See sources for Figures 1 and 2

If immigration judges had completed the same number of cases per year from 2009 to 2016 that they averaged from 1998 to 2008, they would have finished 345,000 more cases than they actually did. Figure 4 shows the actual backlog compared to the projected backlog if judges had continued at their previous average. As can be seen, judges were completing above their historical average in 2009 before slowing down thereafter.
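The projection behind Figure 4 follows a simple rule: multiply each year’s judge count by the historical per-judge completion average and compare the result with actual completions. A minimal sketch of that method, using purely illustrative placeholder numbers (the real series come from the TRAC and EOIR sources cited for Figures 1 and 2):

```python
# Sketch of the Figure 4 projection method.  The judge counts and
# completion totals below are hypothetical placeholders, NOT the
# actual TRAC/EOIR data used in the report.
avg_completions_per_judge = 1_000            # hypothetical 1998-2008 average
judges = {2009: 230, 2010: 245, 2011: 260}   # hypothetical judge counts
actual_completions = {2009: 240_000, 2010: 215_000, 2011: 200_000}

# Projected completions: historical per-judge pace times judges on the bench.
projected = {y: n * avg_completions_per_judge for y, n in judges.items()}

# Shortfall: cases left unfinished relative to the historical pace.
shortfall = sum(projected[y] - actual_completions[y] for y in judges)
print(f"Cases left unfinished versus the historical pace: {shortfall:,}")
```

With the report’s actual figures, the same calculation yields the 345,000-case gap described above.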

Figure 4: Actual Pending Immigration Case Backlog and Projected Case Backlog with Average Completion Rate Per Judge From 1998 to 2008


Source: Author’s calculation based on sources for Figures 1 and 2.

Recent Immigration Cases Are Not More Labor Intensive

One possible explanation for the backlog growth is that current immigration cases are more labor intensive for judges than in previous years. The least labor intensive cases are terminations and closures. Termination of proceedings is the court’s way of dropping charges against the immigrant. It is an act of prosecutorial discretion most used when the immigrant has eligibility for a visa. Closures set cases aside without a formal decision being rendered for or against the immigrant. It is another way of prioritizing the docket, but the case can be placed back on the calendar at any time. 

In March 2013, the Chief Immigration Judge Brian O’Leary told immigration judges to close more cases as a “legitimate method of… preserving limited adjudicative resources.”  As Figure 5 shows, the collapse in court productivity actually coincides with a large increase in the number of cases being administratively closed or terminated from 2009 to 2016. While terminations saw a twofold increase, and closures a fivefold increase, removals and voluntary departures fell by half, and findings in favor of relief by 40 percent.

Figure 5: Immigration Court Outcomes—Removals and Voluntary Departures, Relief, and Closures and Terminations (1998 to 2016)

Source: TRAC Immigration of Syracuse University

While terminations could be seen as a “completion” because the court’s jurisdiction over the case is removed, administrative closures do not actually conclude the court’s business with the case. The immigrant is still under the jurisdiction of the court, and the court can place the case back on the docket at any time. In its October 2012 report, DOJ’s Inspector General chastised the courts for including closures as “completions.” The whole point of administrative closure is to increase the processing capacity of the court, yet as Figure 6 shows, focusing solely on cases in which the courts removed their jurisdiction demonstrates an even steeper decline in productivity.

Figure 6: Non-Closure Court Cases Completed Per Immigration Judge (1998-2016)


Source: TRAC Immigration of Syracuse University

The courts closed almost 28,000 more cases per year from 2012 to 2016 than they did from 1998 to 2011—138,000 more in total. If the courts had completed as many non-closure cases during the later period as they did during the earlier one, on top of those additional closures, the backlog would be gone entirely.

Asylum Seekers and Unaccompanied Children Have Not Caused the Backlog

The most difficult type of case is one in which the immigrant puts forward a claim of asylum in the United States, asserting a fear of persecution in their home country. These cases require significant time for the immigrant to gather evidence and find and present witnesses, and for Homeland Security to evaluate or rebut the claim. But during the key period from 2005 to 2011, there was no increase in the number of asylum decisions—denied or affirmed—in the courts. In fact, the number of decisions declined by 36.8 percent.

Figure 7: Asylum Decisions in Immigration Court, Total Decisions, and Asylum Grant Rate (2001-2014)

Sources: TRAC Immigration of Syracuse University

The administration has placed most of the blame for the backlog on those asylum seekers who claim a “credible fear” of persecution and unaccompanied alien children (UACs) coming to the border. Juan Osuna, the head of the immigration courts, told Congress in December 2015 that “the 2014 border surge put unprecedented pressures on EOIR.” That year, the director decided to fast track cases involving children, putting them in front of the line. The problem with this explanation, however, is that almost the entire decline in productivity occurred before the surge. In fact, Figure 8 shows that 2014 actually marked the end of the rapid decline in productivity, rather than its start.

Figure 8: UACs, Credible Fear Claims, and Non-Closure Completed Cases Per Immigration Judge

Source: Completions: TRAC Immigration of Syracuse University; UACs: Customs and Border Protection (2008-2011, 2012-June 2016); Credible Fear: U.S. Citizenship and Immigration Services (2009-April 2016), Rempel (2008).

It’s also inaccurate to state that the courts have not previously handled a similarly rapid growth in their caseload. In 2005 and 2006, an influx of non-Mexican arrivals from Central America and Brazil resulted in a truly unprecedented jump in new cases. Similar to the UACs and women with children cases in 2014, the courts prioritized processing them. As Figure 9 demonstrates, the courts saw nearly 100,000 more new cases in those two years than in 2014 and 2015 and, despite having 12 percent fewer judges, processed 97.1 percent of them compared to 76.5 percent in 2014 and 2015.

Figure 9: Immigration Judges, New Cases, Completed Cases (2005, 2006, 2014, 2015)

Sources: See Figure 3

Immigration court proceedings initiated based on a claim of criminality tend to be more labor intensive as well, but again there was no increase in the number of criminal removal proceedings during the key period from 2006 to 2014.

Figure 10: Criminal Charges and Total Immigration Court Non-Closure Completions (2001-2016)

Source: TRAC Immigration of Syracuse University (Projected for 2016)

Courts Are Spending More Time Not Deciding Cases

Because the courts are not completing as many cases, they are clearly spending more time on other matters. The share of all court activities—proceedings, motions, bond hearings—devoted to decisions on the merits of a case, as opposed to procedural matters, steadily declined from 2007 to 2011.

Figure 11: Merits Decisions and Other Court Hearings as a Share of All Hearings (2007-2011)


Source: Benson and Wheeler.

The other hearings that saw the largest increases were incomplete proceedings and other completed proceedings, such as a change of venue, that did not end with a merits decision. Bond hearings also saw a large increase over the period, while other motions held constant.

Figure 12: Non-Merits Decision Hearings as a Share of all Hearings (2007-2011) 

Source: Benson and Wheeler.

As can be inferred from the incomplete proceedings number, judges are postponing hearings at a much greater rate in recent years. As Figure 13 demonstrates, the number of continuances relative to the number of completed proceedings—which includes the growing number of changes of venue and other non-merits decision hearings—trended upward during the key period in which court productivity sank.

Figure 13: Ratio of Continuances to Completed Proceedings (2005, 2008, 2010)

Source: Benson and Wheeler.

In its October 2012 report, DOJ’s IG found that “frequent and lengthy continuances are a primary factor contributing to case processing times.” Judges granted continuances in about half of the sample that the IG reviewed, and those cases were delayed an average of 368 days. In a majority of cases, the immigrants initiated the request for continuance, but their share of such requests has declined, while DHS prosecutors initiated the vast majority of other requests (Figure 14).

Figure 14: Sources of Continuances in Immigration Court (2005, 2008, 2010, 2011)

Source: 2005-2010: Benson and Wheeler; 2011: Office of the Inspector General for the Department of Justice. October 2012.

In 2011, time to seek representation was the most common reason immigrants asked for continuances (23 percent) followed closely by more time to prepare their cases (21 percent). The DOJ’s IG report stated, “EOIR advised us that a lack of representation can significantly delay proceedings because of the extra time needed to provide explanations to, and solicit information from, the aliens.” But lower representation does not appear to have been a factor in lower productivity. The share of represented cases actually increased significantly from about 32 percent to 45 percent from 2009 to 2012.

This raises the possibility that representation itself slows down the process, but yet again, this appears to be off the mark. The absolute number of represented cases moved only slightly upwards from 74,955 to 76,336 (Figure 15). As the study that documented these trends concluded, “increasing representation rates appear to be more a matter of decreasing volume of judicial decisions, rather than increasing involvement of attorney representatives.”

Figure 15: Representation in Immigration Courts in Cases Decided on the Merits 

Source: Eagly, Ingrid and Shafer, Steven.

While immigrants were more likely to request continuances, Homeland Security prosecutor-initiated delays were 12 percent longer in duration. These delays were mainly necessary to finish background checks and forensic analysis, which was also the longest form of delay (132 days). Although an increase in wait times for DHS processing of visa applications occurred in the middle of the period when court efficiency declined, the fact that wait times have since reverted to their norms without a corresponding increase in efficiency supports the idea that they were not a major factor in the court backlog.

Chief Judge O’Leary’s March 2013 letter to immigration judges stated that “it is beyond dispute that multiple continuances result in delay in the individual case, and when viewed across the entire immigration court system, exacerbate our already crowded dockets.” He clarified the legal standards for granting continuances in the clear hope that judges would more critically assess requests for continuance. While it is unclear if this happened immediately or not, the decision to expedite the cases of unaccompanied alien children and women with children resulted in a 16-day drop in the median first continuance.

Additional Considerations 

Several reports and congressional testimony have reviewed the efficiency and effectiveness of the immigration courts in recent years (see here, here, here, here, here, here, here, and here). These reports provide a large number of recommendations far beyond what can be evaluated in this survey. Suffice it to say that they detail a variety of ways in which the courts suffer from structural deficiencies that prevent them from completing their work as well as they should given the resources available.

Two factors should be given particular attention. First, resources are not distributed proportionately to the number of cases being filed in each jurisdiction, creating wild variations in the court backlog across the country—292 days in North Carolina but 965 in Colorado. Second, unlike defendants in other courts, those in immigration court lack court-provided legal representation, which results in unnecessary delays as immigrants search for counsel. 

“So, yes, the president was saying – two months after the news broke – that the whole IRS thing was just a ‘phony scandal.’” That’s a tidbit passed along by Kim Strassel in her much-talked-about new book, The Intimidation Game. It references the scandal over Internal Revenue Service targeting of Tea Party and “patriot” groups for delay and for bizarrely burdensome documentation demands concerning their personnel and activities. Although President Obama offered what seemed to be heartfelt apologies at the time, and a couple of top officials departed the agency (including Lois Lerner, the director of its exempt-organizations unit, who had invoked her Fifth Amendment rights), there was soon an effort to recast the affair as a matter of merely incompetent mix-ups, rather than a lapse of public integrity and the rule of law. In June a Washington Post editorial took this line, to which I responded by pointing out that the discriminatory handling of groups with adversary political viewpoints was so systematic and intense as to be hard to explain by mere inadvertence.

On Friday morning, a panel of the D.C. Circuit Court of Appeals issued a unanimous opinion (PDF) ordering the reinstatement of a suit against the IRS by two conservative groups, True the Vote and Linchpins of Liberty, seeking a court order against future IRS abuse. The IRS had sought the dismissal of the action as moot, arguing, in best let’s-move-on manner, that it had ceased the unlawful targeting and remedied its effects. Not so, the court said: not only had the IRS not given adequate guarantees that it would not resume improper targeting, but there is evidence that it hasn’t even stopped the practice. 

The D.C. Circuit opinion abounds in scathing language about the Service’s misconduct (it is “plain … that the IRS cannot defend its discriminatory conduct on the merits,” there being “little factual dispute” about it). You can read my write-up of the case at Ricochet, which might help you stay ahead of the news: as of this morning, four days after the court’s ruling, some large news organizations have still not seen fit to report on it.

Arguments against immigration come across my desk every day but their variety is limited – rarely do I encounter a unique one.  Several times a year I give presentations about these arguments and rebut their points.  These are the main arguments against immigration and my quick responses to them:

1.  “Immigrants will take our jobs and lower our wages, especially hurting the poor.”

This is the most common argument and also the one with the greatest amount of evidence rebutting it.  First, the displacement effect is small if it even affects natives at all.  Immigrants are typically attracted to growing regions and they increase the supply and demand sides of the economy once they are there, expanding employment opportunities.  Second, the debate over immigrant impacts on American wages is confined to the lower single digits – immigrants may increase the relative wages for some Americans by a tiny amount and decrease them by a larger amount for the few Americans who directly compete against them.  Immigrants likely compete most directly against other immigrants so the effects on less-skilled native-born Americans might be very small or even positive.

New research by Harvard professor George Borjas on the effect of the Mariel Boatlift – a giant shock to Miami’s labor market that increased the size of its population by 7 percent in 42 days – finds large negative wage effects concentrated on Americans with less than a high school degree.  To put the scale of that shock to Miami in context, it would be as if 22.4 million immigrants moved to America in a six-week period – which will not happen.  Some doubt Borjas’ finding (here is Borjas’ response to the critics and here is a summary of the debate) but what is not in doubt is that immigration has increased the wages and income of Americans on net.  The smallest estimate of the immigration surplus, as it is called, is equal to about 0.24 percent of GDP – which excludes the gains to immigrants and just focuses on those of native-born Americans.
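The 22.4 million figure is a straightforward scaling exercise: apply Miami’s 7 percent population shock to the whole country. A back-of-the-envelope check, assuming a U.S. population of roughly 320 million (the base population is my assumption; the op-ed does not state it):

```python
# Scaling the Mariel boatlift shock from Miami to the United States.
# US_POPULATION is an assumed round figure for the mid-2010s.
MIAMI_SHOCK_SHARE = 0.07        # Mariel grew Miami's population by 7%
US_POPULATION = 320_000_000     # assumed approximate U.S. population

equivalent_us_shock = MIAMI_SHOCK_SHARE * US_POPULATION
print(f"{equivalent_us_shock / 1e6:.1f} million immigrants in 42 days")
# → 22.4 million immigrants in 42 days
```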

2. “Immigrants abuse the welfare state.”

Most legal immigrants do not have access to means-tested welfare for their first five years here with few exceptions and unauthorized immigrants don’t have access at all – except for emergency Medicaid. 

Immigrants are less likely to use means-tested welfare benefits than similar native-born Americans.  When they do use welfare, the dollar value of benefits consumed is smaller.  If poor native-born Americans used Medicaid at the same rate and consumed the same value of benefits as poor immigrants, the program would be 42 percent smaller. 

Immigrants also make large net contributions to Medicare and Social Security, the largest portions of the welfare state, because of their ages, ineligibility, and their greater likelihood of retiring in other countries.  Far from draining the welfare state, immigrants have given the entitlement portions a few more years of operation before bankruptcy.  If you’re still worried about immigrant use of the welfare state, as I am, then it is far easier and cheaper to build a higher wall around the welfare state, instead of around the country.

 3. “Immigrants are a net fiscal cost.”

Related to the welfare argument is the claim that immigrants consume more in government benefits than they generate in tax revenue.  The empirics on this are fairly consistent – immigrants in the United States have a net-zero impact on government budgets (the published version of that working paper is here). 

It seems odd that poor immigrants don’t create a larger deficit, but many factors explain that.  The first is that higher immigrant fertility and the long-run productivity of their children born in the United States generate a lot of tax revenue.  The second is that immigrants grow the economy considerably (this is different from the immigration surplus discussed above) and increase tax revenue.  The third is that many immigrants arrive young but old enough to have finished school, so they work and pay taxes without first consuming hundreds of thousands of dollars in public school costs and welfare benefits – an immediate fiscal boost.  There are many other reasons as well. 

Although the tax incidence from immigrants is what matters for the fiscal consequences, between 50 percent and 75 percent of illegal immigrants comply with federal tax law.  States that rely on consumption or property taxes tend to garner a surplus from taxes paid by unlawful immigrants while those that rely on income taxes do not.              

4.  “Immigrants increase economic inequality.”

In a post-Piketty world, the argument that immigration is increasing economic inequality within nations is getting some attention.  While most forms of economic inequality are increasing among people within nations, global inequality is likely falling, and at a historic low point, due to rapid economic growth in much of the world over the last generation.

The evidence on how immigration affects economic inequality in the United States is mixed – some research finds relatively small effects and others find substantial ones.  The variance in findings can be explained by research methods – there is a big difference in outcomes between a study that measures how immigration affects economic inequality only among natives and another study that includes immigrants and their earnings.  Both methods seem reasonable but the effects on inequality are small compared to other factors.

Frankly, I don’t see the problem if an immigrant quadruples his income by coming to the United States, barely affects the wages of native-born Americans here, and increases economic inequality as a result.  The standard of living is much more important than the earnings distribution and everybody in this situation either wins or is unaffected. 

 5.  “Today’s immigrants don’t assimilate like previous immigrant groups did.”

There is a large amount of research that indicates immigrants are assimilating as well as or better than previous immigrant groups – even Mexicans.  The first piece of research is the National Academy of Sciences’ (NAS) September 2015 book titled The Integration of Immigrants into American Society.  It’s a thorough and brilliant summation of the relevant academic literature on immigrant assimilation.  Bottom line:  Assimilation is never perfect and always takes time, but it’s going very well.

The second is a July 2015 book entitled Indicators of Immigrant Integration 2015 that analyzes immigrant and second-generation integration on 27 measurable indicators across the OECD and EU countries.  This report finds more problems with immigrant assimilation in Europe, especially for those from outside of the European Union, but the findings for the United States are quite positive.

The third work, by University of Washington economist Jacob Vigdor, compares modern immigrant civic and cultural assimilation to that of immigrants from the early 20th century (an earlier draft of his book chapter is here; the published version is available in this collection).  If you think early 20th century immigrants and their descendants eventually assimilated successfully, Vigdor’s conclusion is reassuring:

“While there are reasons to think of contemporary migration from Spanish-speaking nations as distinct from earlier waves of immigration, evidence does not support the notion that this wave of migration poses a true threat to the institutions that withstood those earlier waves.  Basic indicators of assimilation, from naturalization to English ability, are if anything stronger now than they were a century ago.”

For the nostalgic among us who believe that immigrants assimilated so much more smoothly in the past, the plethora of ethnic and anti-Catholic riots, the nativist Know-Nothing movement, and immigrant groups that refused to assimilate are a useful tonic.  Immigrant assimilation is always messy and it looks bad from the middle of that process where we are right now, but the trends are positive and pointing in the right direction.

6.  “Immigrants are especially crime prone.”

This myth has been around for over a century.  It wasn’t true in 1896, 1909, 1931, or 1994, and it isn’t true today.  Immigrants are less likely to be incarcerated for violent and property crimes, and cities with more immigrants and their descendants are more peaceful.  Some immigrants do commit violent and property crimes but, on the whole, they are less likely to do so.

7.  “Immigrants pose a unique risk today because of terrorism.”

Terrorism is not a modern strategy.  There were a large number of bombings and terrorist attacks in the early 20th century, most of them committed by immigrants, socialists, and their fellow travelers. 

Today, the deaths from terrorism committed by immigrants are greater than they were a century ago but the risk is still low compared to the benefits of immigration.  For instance, the chance of an American being killed in a terrorist attack committed on U.S. soil by a refugee was one in 3.6 billion from 1975 to 2015.  For all foreign-born terrorists on U.S. soil, the chance of being murdered in a terrorist attack was one in 3.6 million during the same period of time.  Almost 99 percent of those murders occurred on 9/11 and were committed by foreigners on tourist visas and one student visa, not immigrants.  Cato has a paper coming out in September that explores this in greater detail.  Every death from terrorism is a tragedy but immigrants pose a small threat relative to the large benefits of their being here (remember the immigration surplus above). 

8.  “It’s easy to immigrate to America and we’re the most open country in the world.”

It is very difficult to immigrate to the United States.  Ellis Island closed down a long time ago.  In most cases, there isn’t a line and when there is, it can take decades or centuries.  This chart shows the confusing and difficult path to a green card.  Does that look easy to you?

America allows greater numbers of immigrants than any other country.  However, the annual flow of immigrants as a percent of our population is below most other OECD countries because the United States is so large.  The percentage of our population that is foreign-born is about 13 percent – below historical highs in the United States and less than half of what it is in modern New Zealand and Australia.  America is great at assimilating immigrants but other countries are much more open.   

9.  “Amnesty or failure to enforce our immigration laws will destroy the Rule of Law in the United States.”

For a law to be consistent with the Rule of Law, it must be applied equally, have roughly predictable outcomes ex ante based on the circumstances, and be consistent with our Anglo-Saxon traditions of personal autonomy and liberty.  Our current immigration laws violate all of those principles.  They are applied differently based on people’s country of birth via arbitrary quotas and other regulations, the outcomes are certainly not predictable, and they are hardly consistent with America’s traditional immigration policy or our conceptions of liberty.

For the Rule of Law to be present, good laws are required, not just strict adherence to government enforcement of impossible to follow rules.  An amnesty is an admission that our past laws have failed, they need reform, and the net cost of enforcing them in the meantime exceeds the benefits.  That’s why there have been numerous amnesties throughout American history. 

Enforcing laws that are inherently capricious and that are contrary to our traditions is inconsistent with a stable Rule of Law that is a necessary, although not sufficient, precondition for economic growth.  Enforcing bad laws poorly is better than enforcing bad laws uniformly despite the uncertainty.  In immigration, poor enforcement of our destructive laws is preferable to strict enforcement but liberalization is the best choice of all.  Admitting our laws failed, granting an amnesty for law-breakers, and reforming the laws does not doom the Rule of Law in the United States – it strengthens it.

10. “National sovereignty.”

By not exercising control over borders through actively blocking immigrants, the users of this argument warn, the United States government will surrender a vital component of its national sovereignty.  Rarely do they explain to whom the U.S. government would actually surrender sovereignty in this situation.  Even under the most open immigration policy imaginable, total open borders, national sovereignty is not diminished, assuming that our government’s institutions chose such a policy (I am not supporting totally open borders here; I am just using it as a foil to show that even in this extreme situation the argument fails).  How can that be?   

The standard Weberian definition of a government is an institution that holds a monopoly (or near monopoly) on the legitimate use of violence within a certain geographical area.  It achieves this monopoly by keeping out competing would-be sovereigns.  Our government maintains its sovereignty by excluding the militaries of other nations and by stopping insurgents.

However, U.S. immigration laws are not primarily designed or intended to keep out foreign armies, spies, or insurgents.  The main effect of our immigration laws is to prevent willing foreign workers from selling their labor to willing American purchasers.  Such economic controls do not aid in the maintenance of national sovereignty, and relaxing or removing them would not infringe upon the government’s national sovereignty any more than a policy of unilateral free trade would.  If the United States returned to its 1790-1875 immigration policy, foreign militaries crossing U.S. borders would still be countered by the U.S. military.  Allowing the free flow of non-violent and healthy foreign nationals does nothing to diminish the U.S. government’s legitimate monopoly on the use of force in the Weberian sense.

There is also a historical argument that free immigration and U.S. national sovereignty are not in conflict.  From 1790 to 1875 the federal government placed almost no restrictions on immigration.  At the time, states imposed restrictions on the immigration of free blacks and likely indigents through outright bars, taxes, passenger regulations, and bonds.  Many of those restrictions weren’t enforced by state governments and were lifted in the 1840s after Supreme Court decisions.  However, that open immigration policy did not stop the United States from fighting two wars against foreign powers – the War of 1812 and the Mexican-American War – as well as the Civil War.  The U.S. government’s monopoly on the legitimate use of force during that time was certainly challenged from within and without, but the U.S. government maintained its national sovereignty even with nearly open borders.

The U.S. government was also clearly sovereign during that period of history.  Those who claim the U.S. government would lose its national sovereignty under a regime of free immigration have yet to reconcile that with America’s past of doing just that.  To argue that open borders would destroy American sovereignty is to argue that the United States was not a sovereign country when George Washington, Andrew Jackson, or Abraham Lincoln were Presidents.  We do not have to choose between free immigration and U.S. national sovereignty.

Furthermore, national sovereign control over immigration means that the government can do whatever it wants with that power – including relinquishing it entirely.  It would be odd to argue that sovereign states have complete control over their borders except that they can’t open them too much.  Of course they can – that is the essence of sovereignty.  After all, I’m arguing that the United States government should change its laws to allow for more legal immigration, not that the U.S. government should cede all of its power to a foreign sovereign. 

11.  “Immigrants won’t vote for the Republican Party – look at what happened to California.”

This is an argument used by some Republicans to oppose liberalized immigration.  They point to my home state of California as an example of what happens when there are too many immigrants and their descendants: Democratic control.  The evidence is clear that Hispanic and immigrant voters in California in the early to mid-1990s did turn the state blue, but that was a reaction to the state GOP declaring political war on them.  Those who claim that changing demographics due to immigration are solely responsible for the shift in California’s politics have to explain the severe drop-off in support for the GOP at exactly the same time that the party was using anti-immigration propositions and arguments to win the 1994 election.  They would further have to explain why Texas Hispanics are so much more Republican than those in California.  Nativism has never been the path toward national party success and frequently contributes to a party’s downfall.  In other words, whether immigrants vote for Republicans is mostly up to how Republicans treat them.    

Republicans should look toward the inclusive and relatively pro-immigration policies and positions adopted by their fellow party members in Texas, and their subsequent electoral success there, rather than trying to replicate the foolish nativist politics pursued by the California Republican Party.  My comment here assumes that locking people out of the United States because they might disproportionately vote for one of the two major parties is a legitimate use of government power – I do not believe that it is.   

12.  “Immigrants bring with them their bad cultures, ideas, or other factors that will undermine and destroy our economic and political institutions.  The resultant weakening in economic growth means that immigrants will destroy more wealth than they will create.”

This is the most intelligent anti-immigration argument and the one most likely to be correct, although the evidence currently doesn’t support it.  Economist Michael Clemens lays out a wonderful model of how immigrants could theoretically weaken the growth potential of receiving countries.  In his model, he assumes that immigrants transmit these anti-growth factors to the United States.  However, as immigrants assimilate into American ideas and notions, these anti-growth factors weaken over time.  Congestion could counteract that assimilation process when there are too many immigrants with too many bad ideas, overwhelming assimilative forces.  Clemens is rightly skeptical that this is occurring, but his paper lays out the theoretical point at which immigration restrictions would be efficient – where they balance the benefits of economic expansion from immigration against the costs of institutional degradation.

Empirical evidence doesn’t point to this effect either.  In a recent academic paper, my coauthors and I compared economic freedom scores with immigrant populations across 100 countries over 21 years.  Some countries were majority immigrant while some had virtually no immigrants.  We found that the larger a country’s immigrant population was in 1990, the more its economic freedom increased by 2011.  The immigrants’ countries of origin, and whether they came from poor nations or rich ones, didn’t affect the outcome.  These results held for the United States federal government but not for state governments.  States with greater immigrant populations in 1990 had less economic freedom in 2011 than those with fewer immigrants, but the difference was small.  The national increase in economic freedom more than outweighed the small decrease in economic freedom in states with more immigrants.  Large immigrant populations also don’t increase the size of welfare programs or other public programs across American states, and there is a lot of evidence that more immigrants in European countries actually decrease support for big government. 

Although this anti-immigration argument could be true, it seems unlikely to be so for several reasons.  First, it is very hard to upend established political and economic institutions through immigration.  Immigrants change to fit into the existing order rather than vice versa.  Institutions are ontologically collective – my American conceptions of private property rights wouldn’t accompany me in any meaningful way if I went to Cuba and vice versa.  It would take a rapid inundation of immigrants and replacement of natives to change institutions in most places.     

The second possibility is immigrant self-selection: Those who decide to come here mostly admire American institutions or have policy opinions that are very similar to those of native-born Americans.  As a result, adding more immigrants who already broadly share the opinions of most Americans would not affect policy.  This appears to be the case in the United States.

The third explanation is that foreigners and Americans have very similar policy opinions. This hypothesis is related to those above, but it indicates an area where Americans may be unexceptional compared to the rest of the world.  According to this theory, Americans are not more supportive of free markets than most other peoples, we’re just lucky that we inherited excellent institutions from our ancestors.

The fourth reason is that more open immigration makes native voters oppose welfare or expanded government because they believe immigrants will disproportionately consume the benefits (regardless of the fact that poor immigrants actually under-consume welfare compared to poor Americans).  In essence, voters hold back the expansion of those programs based on the belief that immigrants may take advantage of them.  As Paul Krugman aptly observed, “Absent those [immigration] restrictions, there would have been many claims, justified or not, about people flocking to America to take advantage of [New Deal] welfare programs.”

As the late labor historian (and immigration restrictionist) Vernon M. Briggs Jr. wrote, “This era [of immigration restrictions] witnessed the enactment of the most progressive worker and family legislation the nation has ever adopted.”  None of those programs would have been politically possible to create amidst mass immigration. Government grows the fastest when immigration is the most restricted, and it slows dramatically when the borders are more open.

Even Karl Marx and Friedrich Engels thought that the prospects for working-class revolution in the United States were diminished by the varied origins of its immigrant workers, who were divided by a high degree of ethnic, sectarian, and racial diversity.  That immigrant-led diversity may be why the United States never had a popular workers’, labor, or socialist party. 

The most plausible argument against liberalizing immigration is that immigrants will worsen our economic and political institutions, thus slowing economic growth and killing the goose that lays the golden eggs.  Fortunately, the academic and policy literature does not support this argument and there is some evidence that immigration could actually improve our institutions.  Even the best argument against immigration is still unconvincing.

13.  “The brain drain of smart immigrants to the United States impoverishes other countries.”

The empirical evidence on this point is conclusive: The flow of skilled workers from low-productivity countries to high-productivity nations increases the incomes of people in the destination country, enriches the immigrants, and helps (or at least doesn’t hurt) those left behind.  Furthermore, remittances that immigrants send home are often large enough to offset any loss in home-country productivity from emigration.  In the long run, the possibility of emigrating and the higher returns from education increase the incentive for workers in the developing world to acquire skills that they otherwise might not – increasing the stock of human capital.  Instead of being called a brain drain, this phenomenon would more accurately be called a skill flow.   

Economic development should be about increasing the incomes of people, not the amount of economic activity in specific geographical regions.  Immigration and emigration do just that.       

14.  “Immigrants will increase crowding, harm the environment, and [insert misanthropic statement here].”

The late economist Julian Simon spent much of his career showing that people are an economic and environmental blessing, not a curse.  Despite his work, numerous anti-immigration organizations active today were founded and funded to oppose immigration on the grounds that it would increase the number of high-income Americans, who would then harm the environment more.  Yes, seriously – just read about John Tanton, the Johnny Appleseed of modern American nativism.

Concern about crowding is focused on publicly provided goods and services – like schools, roads, and heavily zoned urban areas.  Private businesses don’t complain about crowding; they expand to meet demand, which increases their profits.  If crowding were really an issue, then privatizing government functions so that providers have an incentive to rapidly meet demand would be a cheap and easy option.  Even if the government doesn’t do that – and I don’t suspect it will in the near future – the problems of crowding are manageable because more immigrants also mean a larger tax base.  Reforming or removing local land-use laws that prevent development would also go a long way toward alleviating any concerns over crowding. 

Although we should think of these issues on the margin, would you rather be stuck with the problems of crowding, as in Houston, or the problems of too little crowding, as in Detroit? 

15.  “Some races and ethnic groups are genetically inferior.  They need to be prevented from coming here, breeding, and decreasing America’s good ethnic stock.”

These arguments were more popular a century ago, when notions of eugenics and racism were widely believed and based on extraordinarily bad research, and they were among the main arguments for passage of the Immigration Act of 1924.  They have resurfaced in the comment sections of some blogs and on Twitter, frequently directed at yours truly, but these types of arguments still aren’t publicly aired very often and are quite silly.  I don’t spend time engaging with them, but I had to mention that they are still out there.

There are other arguments that people use in opposition to immigration.  Many of those arguments revolve around issues of “fairness” – a word with a fuzzy meaning that differs dramatically between people and cultures.  Arguments about fairness often depend on feelings and, usually, a misunderstanding of the facts that is quickly corrected by reference to my 8th point above.

On its front page today, the Washington Post writes about legal and regulatory obstacles to building small second housing units on single-family lots, often for aging family members.

Second homes, often called “granny flats,” have become a new front in the conflict that pits the need for more housing in the country’s most expensive cities against the wishes of neighbors who want to preserve their communities. The same battles flare over large developments that might loom over single-family neighborhoods. But even this modest idea for new housing — let homeowners build it in their own back yards — has run into not-in-my-back-yard resistance….

Homes like the Coffees’, proponents argue, could help ease housing shortages that have made $2,000-a-month one-bedrooms look like a bargain in cities such as Los Angeles. They could yield new affordable housing at no cost to the public. They could add rentals and economic diversity to more neighborhoods. And they could expand housing options for a population in which baby boomers are aging and millennials are stuck at home.

Many neighbors, though, protest that a glut of back yard building would spoil the character of neighborhoods designed around the American ideal of one family on one lot surrounded by verdant lawn. …

“You have surging housing prices in the most prosperous cities in the country, and at the same time income inequality is growing, and there’s a cultural and demographic resurgence of urban living,” [Alan Durning, executive director of the Sightline Institute] said. Young people with less money, in particular, he adds, are “slamming into their parents and grandparents’ regulatory regimes of strict limits on construction of new housing.”

It’s not the first time I’d heard of the problem. In 1996 George Liebmann wrote in Regulation about how “Zoning makes it more difficult to keep aged parents close by and care for them.” He recommended that “Duplex homes and accessory apartments should be permitted in all new residential construction. Housing options such as these allow elderly persons to live near their adult children without intruding on their children’s privacy.” (“Modernization of Zoning,” pp. 71, 75). Note that he was talking not about separate structures but simply residential units attached to the main house. And even those were impeded by zoning regulations. I mentioned them briefly in my 1997 book Libertarianism: A Primer and my 2015 update, The Libertarian Mind (p. 309).

Local officials think their zoning rules are more important than keeping families together.  They fume that allowing such small structures for grandma would “turn our zoning ordinance upside down.” And what’s more important, saving money and keeping grandma near her family or strict adherence to zoning regulations? The Post article, featuring a conflict in Los Angeles, notes the problem of NIMBY or “not in my back yard” attitudes by neighbors. And in this case, as reporter Emily Badger notes, it’s actually in your back yard. Or technically, it’s a matter of “not in my neighbor’s back yard.”

Brink Lindsey wrote about how zoning limits affordable housing in his recent paper on regressive regulation, as did Edward Glaeser and Joseph Gyourko in Regulation.

Newly anointed GOP presidential nominee Donald Trump wasted no time in criticizing the foreign policy legacy of Barack Obama and Hillary Clinton. For decades the GOP has claimed to uniquely represent American military personnel.

Service members aren’t allowed to become publicly involved in partisan politics. However, they do speak indirectly, via polls and contributions.

It turns out that they favor neither Democrats nor Republicans. Rather, in this campaign a plurality is supporting the least militaristic of the candidates, Libertarian Party nominee Gary Johnson.

The LP is a perennial and distant third place contender. But this election might be different. Johnson has been polling in double digits and could hold the balance of power, especially with the help of military voters. For instance, a July poll found Johnson well ahead of the two major party candidates among active duty personnel. 

Almost 39 percent of active duty members backed him. Just 31 percent supported Donald Trump and only 14 percent were for Hillary Clinton. Johnson carried every service except the Navy. He enjoyed the biggest margin in the Marine Corps, 44 percent to 27 percent for Trump.

This isn’t the first time a libertarian has led the presidential race among military personnel. Republican Ron Paul, a congressman long known as “Dr. No,” was a consistent outlier on foreign policy. While the other Republicans advocated more intervention and war, Paul highlighted the problem of “blowback”—terrorism as a response to Washington’s persistent willingness to bomb, invade, and occupy other nations.

The conventional wisdom seemed to be that military personnel favored war. Yet, wrote Timothy Egan in the New York Times in 2011, Paul had “more financial support from active duty members of the service than any other politician.” At one point Paul had collected 87 percent of the military contributions for GOP candidates.

As of March 2012, Paul had received more than twice as much in military contributions as Obama, almost ten times as much as Mitt Romney, more than ten times as much as Newt Gingrich, and about 32 times as much as Rick Santorum. The latter three were inveterate war hawks who themselves never served in the military. In contrast, Obama presented himself as a critic of unnecessary war.

Paul even led his Republican competitors among military contractors (though he trailed Obama). Analyst Loren Thompson explained that “Just because people work in the defense industry doesn’t mean that they always vote their economic interests.”

While service personnel are willing to serve in combat, most do not want to do so absent compelling circumstances. And few of the interests involved in Washington’s conflicts can be considered serious, let alone vital. A Marine Corps veteran who supported Paul told Egan that service members “realize they’re being utilized for other purposes—nation building and being world’s policeman—and it’s not what they signed up for.”

As I wrote in Rare: “Despite the support of so many military members, Ron Paul was never able to significantly broaden his appeal. Johnson has a unique opportunity given widespread dislike of his two major opponents.”

Who can keep Americans safe? That obviously is one of the most important questions this election. Uniformed military personnel are giving a surprising answer.

The so-called Islamic State is losing ground. The liberation of Mosul, Iraq’s third most populous city, may be the Baghdad government’s next objective.

Yet even as the “caliphate” shrinks in the Middle East, Daesh, as the group also is known, is increasing its murderous attacks on Western civilians. Washington’s intervention actually has endangered Americans.

In contrast to al-Qaeda, which always conducted terrorism, ISIS originally focused on creating a caliphate, or quasi-state. Daesh’s territorial designs conflicted with those of many nations in the Mideast: Iraq, Syria, Iran, Turkey, Libya, Jordan, Lebanon, and the Gulf kingdoms.

The Obama administration did not intervene out of necessity: ISIS ignored America. Moreover, the movement faced enemies which collectively had a million men under arms; several possessed sophisticated air forces.

Washington’s concern for those being killed by the Islamic State was real, but casualties lagged well behind the number of deaths in other lands routinely ignored by the U.S. The administration seemed most motivated by the sadistic murder of two Americans who had been captured by ISIS in Syria. Although barbaric, these acts did not justify intervention in another Mideast war.

Washington took control of the anti-ISIS campaign but waged a surprisingly lackluster effort. The administration recognized that there was no domestic support for ground troops, so it mixed bombing and drone strikes with support for “moderate” Syrian rebels, who proved to be generally ineffective.

Turkey sought to play the U.S., pushing Washington to oust Syria’s Bashar al-Assad, while tolerating Daesh. The Syrian government attempted to use the specter of the Islamic State to weaken Western support for its overthrow.

Iraq’s Shia-dominated government wanted a bail-out while maintaining sectarian rule. The Sunni Gulf countries expected America to take care of their problems, as usual. Saudi Arabia dragged Washington into a most foolish military diversion, a war with Yemen’s Houthis.

Now ISIS is in retreat—it has lost almost half of the territory it held in Iraq and one-fifth in Syria. But the group remains surprisingly resilient for an increasingly unpopular and only modestly armed group facing a coalition including the U.S., Europe, and most of the Middle East.

Unfortunately, defeat has turned Daesh toward terrorism, including in the West. As was predictable.

After the horrid attacks in Paris late last year, French President Francois Hollande declared that his nation was at war. But it had been bombing ISIS-held territory for 14 months. The only surprise was that it took Daesh so long to retaliate so spectacularly.

Had the U.S. and Europe left the battle to those directly threatened, the latter would have had no choice but to take the lead. And the fight would have been largely contained within the Middle East. The Islamic State would have had to focus on the enemy literally at its gates, rather than its abstract Western enemies afar.

As ISIS recedes, Washington should step back. Unfortunately, the administration’s plan to increase U.S. forces by 560, to about 6,000, for the coming Mosul assault will tie America more closely to the sectarian government in Baghdad and its close partner, the Iran-supported Shia militias that have committed numerous civilian atrocities.

Washington understandably prefers an unsympathetic Iraqi government to a threatening Islamic State. But Baghdad should be left responsible for its own mistakes and crimes.

Seeking to build up a “moderate” insurgency to battle both Damascus and ISIS has proved to be mostly a fool’s errand. Backing Riyadh in Yemen is a disaster. Saudi Arabia has turned the long-running insurgency into a sectarian conflict, while Yemenis blame America for civilian atrocities committed by the Saudis.

As I wrote in National Interest: “America faces a genuine terrorist threat, though it is largely indigenous, inspired by foreign killers. Alas, the number of terrorists will continue to increase as Washington makes other people’s conflicts its own. The U.S. must learn to focus on its own enemies.”

Housing affordability, an issue that has received considerable attention over the past two decades, shows no signs of meaningful national improvement. This is despite the almost $50 billion in taxpayer dollars HUD spends annually on the affordability crisis and related concerns.

So what gives? One likely culprit is the language we use to describe the problem.

Take the word “affordable.” Affordable housing – used in a public policy context – is a misnomer of sorts: affordability implies the ability to pay for something given your budget. But budgets vary considerably between households, and so the definition of affordability varies considerably, too.

There are only two – improbable – ways that any given housing could be affordable to the aggregate U.S. population. One option is that everyone’s incomes are identical. Another option is that housing is altogether free.

Although HUD attempts both via its current policy mélange, neither is a practicable objective in a free society. And unless one of those conditions is met, objectively affordable housing cannot exist.

Instead, it is useful to reframe the debate as a need for low-cost housing. Rather than lowering the bar, low-cost housing offers a higher ideal: housing can be affordable without being low-cost, but when housing is low-cost enough it is nearly always affordable.

Essentially, re-labeling reminds us to solve the real problem by obliging us to ask “how do we make housing cost less?”

Fortunately, low-cost housing can be realized in myriad ways. Many of these market solutions are currently underemphasized by politicians and housing policy specialists. Examples include terminating protectionist housing policies, like those surrounding mortgage interest tax deductions, curbing future reactionary meddling in mortgage underwriting standards, and of course, repealing land use and zoning laws, which are inimical to low-cost housing.

In this way, we diagnose the problem and treat it directly, avoiding a hopeless fixation on high-cost housing’s irresolvable by-products. 

Of all issues energizing environmentalists, hydraulic fracturing, or fracking, of the subsurface rocks during the production of oil and gas is near the very top of the list.

In spite of this, several recent government decisions and court rulings have come down on the side of fracking. For example, overshadowed in May by Brexit coverage, elected officials in Yorkshire, England gave the thumbs up to fracking operations there in an effort to boost natural gas production. Also in May, the Colorado Supreme Court struck down fracking bans passed by the city governments of Fort Collins and Longmont, ruling that the state has the authority to regulate fracking and that municipalities lack the power to ban use of the technology.

In June, a federal judge in Wyoming struck down rules proposed by the Interior Department’s Bureau of Land Management (BLM) to further regulate hydraulic fracturing on federal lands. BLM’s fracking rules were designed to collect additional information that states already gather and regulate to assure protection of groundwater supplies in oil and gas producing areas. The energy industry argued that these fracking rules were duplicative, expensive to carry out, and would not increase environmental protection. Worse yet, the BLM rules and existing state rules in places countermand each other. Consequently, BLM’s fracking rules were challenged in court on the same day they were issued; the same federal judge who struck them down last month had set them aside last year.

The timing and tenor of the judge’s decision have several interesting aspects. From the legal standpoint, the administration does not have the congressional authority to regulate fracking. The judge’s opinion stated “the BLM’s efforts to do so through the fracking rule is in excess of its statutory authority and contrary to law.” He further explained that “Congress’ inability or unwillingness to pass a law desired by the executive branch does not default authority to the executive branch to act independently, regardless of whether hydraulic fracturing is good or bad for the environment for the citizens of the United States.”

The judge’s decision to strike the BLM rules also comports with the major finding of EPA’s June 2015 “Assessment of the Potential Impacts of Hydraulic Fracturing for Oil and Gas on Drinking Water Resources.” In it, the agency’s key finding stated, “we did not find evidence that these mechanisms (fracking) have led to widespread, systemic impacts on drinking water resources in the United States.”

The recent legal decisions striking down rules and bans on fracking are based on fundamental constitutional law; nevertheless, they are occurring even as environmental hysteria over fracking appears to be increasing. However, the theory of groundwater contamination due to rock fracturing many thousands of feet below the groundwater table has lost much of its credibility, particularly with last year’s publication of the EPA study.

In the end, BLM’s fracking rules sought to create a blanket regulation of a critical proven technology that, when combined with horizontal drilling, provided the foundation of the shale revolution—in turn contributing mightily to the success of the American energy renaissance. In fact, the Energy Information Administration (EIA) in May reported that the U.S. remained the world’s top producer of petroleum and natural gas for 2015, having surpassed the production of Russia in 2012 and Saudi Arabia in 2013, due to surging production brought about by fracking and horizontal drilling.

Finally, forgotten among the energy production statistics is the fact that over the past five years, the increased use of natural gas brought about by the fracking and horizontal drilling of the shale revolution has cut the carbon intensity of new fossil fuel power production by roughly 50 percent. With regard to the EPA, decreased carbon emissions should be as much of a key issue as groundwater safety. As the public and elected officials become better educated on this important topic—so central to our uninterrupted energy supply—it appears increasingly likely that any environmental “lightning” hurled at hydraulic fracturing technology will be harmlessly directed to ground.

There are hints of possible interest in acquiring nuclear weapons in both South Korea and Japan, especially since the rise of Donald Trump. Such a policy shift would be neither quick nor easy. Yet the presumption that the benefits of nuclear nonproliferation are worth the costs of maintaining a nuclear “umbrella” is outdated.

Since the development of the atomic bomb, America has been committed to waging nuclear war if necessary. Many Americans probably believed that meant only if their own nation’s survival was in doubt. But Washington always has been far more likely to use nukes on behalf of its allies.

The cost of America’s many commitments is high. The U.S. promises to sacrifice thousands if not millions of its own citizens for modest or even minimal interests.

Unfortunately, war is possible. Deterrence often fails, and Washington might have to decide if it will fight an unanticipated nuclear war or back down.

Friendly proliferation might be a better option. I write in Foreign Affairs: “Instead of being in the middle of a Northeast Asia in which only the bad guys—China, North Korea, Russia—had nukes, the U.S. could remain out of the fray. If something went wrong, the tragedy would not automatically include America.”

There obviously remain good reasons for the U.S. to be wary of encouraging proliferation. Yet opening a debate over the issue may be the most effective way to convince China to take more serious action against the DPRK.

Friendly proliferation might be the best of a bad set of options.

Donald Trump is a competitive person. He likes to have bigger things than other people. He says that he has really big hands. His tax cut was larger than those of the other GOP candidates. And now he says that his infrastructure plan will be double the size of Hillary Clinton’s.

The problem is, with the federal government, smaller is almost always better than bigger. That’s the message of this fashionable Cato T-shirt, which, by the way, would go nicely with Donald’s Make America Great Again hat. So for the Trump campaign, I’ll put aside a really big T-shirt for Donald, while a medium would look great on Melania.

With regard to infrastructure, I describe why the federal government ought to reduce its role in this essay. There are few, if any, advantages of federal involvement, and many disadvantages, including bureaucracy, pork barrel, misallocation, cost overruns, and regulation.

State and local governments can raise money for their own highways, bridges, seaports, and airports anytime they want to. Indeed, about half the states have raised their gas taxes to fund transportation infrastructure in just the past four years. There’s no need for more federal taxes or debt, as Trump is proposing.

As a businessman, Trump ought to be thinking about expanding the private role in America’s infrastructure, not trying to one-up Clinton on central control. A global trend toward infrastructure privatization has swept the world since the 1980s. To make America great again, we should adopt the best practices from around the world, and that means the efficiency and innovation that comes with privatization.

When he thinks about his hometown, does Trump think that New York’s government-owned airports provide Trump-level quality and efficiency? Does he think that the Port Authority of New York and New Jersey is a model of good corporate governance? If Trump is elected president, would he hand over management of his many fine properties to the city government during his stay in D.C.?

Of course he wouldn’t. Trump knows that governments are a complete screw-up when it comes to managing and constructing big projects. Even little projects: he’s the one who took over the city’s failing Wollman Skating Rink in Central Park and turned it into a big success, on-time and on-budget.

So Trump should rethink his big government plan for infrastructure. If elected, he will discover that federal bureaucracies do not operate the way that his well-oiled real estate enterprises do. The reasons are deep-seated, and no amount of big talk from the Oval Office will turn the vastly bloated federal bureaucracy into a success like the Wollman Rink.  

Air temperature and precipitation, in the words of Chattopadhyay and Edwards (2016), are “two of the most important variables in the fields of climate sciences and hydrology.” Understanding how and why they change has long been the subject of research, and reliable detection and characterization of trends in these variables is necessary, especially at the scale of a political decision-making entity such as a state. Chattopadhyay and Edwards evaluated trends in precipitation and air temperature for the Commonwealth of Kentucky in the hopes that their analysis would “serve as a necessary input to forecasting, decision-making and planning processes to mitigate any adverse consequences of changing climate.”

Data used in their study originated from the National Oceanic and Atmospheric Administration and consisted of time series of daily precipitation and maximum and minimum air temperatures for each Kentucky county. The two researchers focused on the 61-year period from 1950-2010 to maximize standardization among stations and to ensure acceptable record length. In all, a total of 84 stations met their initial criteria. Next, Chattopadhyay and Edwards subjected the individual station records to a series of statistical analyses to test for homogeneity, which reduced the number of stations analyzed for precipitation and temperature trends to 60 and 42, respectively. Thereafter, these remaining station records were subjected to non-parametric Mann-Kendall testing to assess the presence of significant trends and the Theil-Sen approach to estimate the magnitude of any linear trends in the time series. What did these procedures reveal?
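As a rough illustration of this type of procedure (a sketch, not the authors’ actual code), the Mann-Kendall test can be run via Kendall’s tau of the series against time, and the Theil-Sen slope estimated with SciPy. The rainfall series below is synthetic, purely hypothetical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)             # a 61-year window, as in the study
# Hypothetical annual rainfall series (mm): random noise, no imposed trend
rain = 1200 + rng.normal(0, 150, years.size)

# Mann-Kendall-style test: Kendall's tau of the series against time
# (equivalent in spirit to the Mann-Kendall S statistic for untied data)
tau, p_value = stats.kendalltau(years, rain)

# Theil-Sen estimator: median of all pairwise slopes, robust to outliers
slope, intercept, lo, hi = stats.theilslopes(rain, years, alpha=0.05)

trend = "significant" if p_value < 0.05 else "not significant"
print(f"tau={tau:.3f}, p={p_value:.3f}, slope={slope:.2f} mm/yr ({trend})")
```

Stations whose p-value exceeds the chosen significance level, as at most Kentucky stations in the study, would be classed as showing no statistically significant trend.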

For precipitation, Chattopadhyay and Edwards report only two of the 60 stations exhibited a significant trend in precipitation, leading the two University of Kentucky researchers to state “the findings clearly indicate that, according to the dataset and methods used in this study, annual rainfall depths in Kentucky generally exhibit no statistically significant trends with respect to time.” With respect to temperature, a similar result was found. Only three of the 42 stations examined had a significant trend. Once again, Chattopadhyay and Edwards conclude the data analyzed in their study “indicate that, broadly speaking, mean annual temperatures in Kentucky have not demonstrated a statistically significant trend with regard to time.”

Given such findings, it would seem that the vast bulk of the anthropogenic CO2 emitted into the atmosphere since 1950 has had little impact on Kentucky temperature and precipitation, as there have been no systematic trends in either variable.



Chattopadhyay, S. and Edwards, D.R. 2016. Long-term trend analysis of precipitation and air temperature for Kentucky, United States. Climate 4: 10; doi:10.3390/cli4010010.

India’s move to replace varying federal, state and interstate sales tax with a uniform Value Added Tax (VAT) makes a lot of sense (unlike Brazil which has high sales taxes and a 19% VAT). 

Yet India should take care not to let the 17.3% tax rate creep up, because VAT too is subject to the “Laffer Curve.”

As the graph shows (using 2013 OECD data), the VAT never brings in revenue higher than 10% of GDP, even in countries that mistakenly raised the VAT to 20% or more: 23% in Greece and Ireland, 25% in Scandinavian countries, 26% in Iceland, 27% in Hungary. New Zealand collected 9.7% of GDP with a 15% standard VAT, while countries with much higher VATs brought in less.

Every tax inflicts economic damage, of course, and the VAT’s damage holds down the revenue collected from income and payroll taxes. A study by James Alm of Tulane University and Asmaa El-Ganainy of the IMF found “a one percentage point increase in the VAT rate leads to roughly a one percent reduction in the level of aggregate consumption in the short run and to a somewhat larger reduction in the long run.” Such a reduction in consumer spending means fewer jobs and lower wages and profits, so a higher VAT leads to lower revenue from taxes on personal income and profits – a fact borne out by Japan’s experience.

Our government forays into the housing market have been a disaster, to say the least.

The mortgage interest deduction goes overwhelmingly to the wealthy and costs the government nearly $100 billion a year. For some perspective on how out of whack this subsidy is, the residents of Nancy Pelosi’s pricey San Francisco neighborhood get roughly 100 times the benefit, per household, of the denizens of my middle-class home town in central Illinois.

Of course, our government-sponsored enterprises are an even more ill-conceived subsidy for home buying. Fannie Mae and Freddie Mac ostensibly increase the amount of capital available to finance home buying by purchasing mortgages from banks and other mortgage originators, packaging them into mortgage-backed securities, and then selling them to pensions, hedge funds, and the like. But it is dubious that their existence meaningfully increases home ownership rates: The Census Bureau announced last month that the U.S. homeownership rate was 62.9 percent, its lowest since 1965 and well below most EU countries, virtually none of which has anything akin to either the mortgage interest deduction or government-sponsored enterprises buying up mortgages.

Not only do the MID and the GSEs fail to boost home ownership, but they can also exacerbate broader problems in the housing market, and in financial markets in general. The MID encourages people to purchase as much house as they can possibly afford in order to take full advantage of the tax break, which set up many people for disaster when they lost their jobs in the Great Recession of 2008-2009.

The pressure the federal government put on the GSEs to extend credit to low-income borrowers in order to help boost home ownership amongst the middle and lower-classes ended in tears for millions of Americans as well, as the swings in the housing market destroyed the value in their homes and left them unable to afford to continue living there.

The plunging home prices cratered the portfolios of the GSEs and led the Treasury to use the 2008 Housing and Economic Recovery Act, or HERA, to place them into a conservatorship, with shareholders seeing their stake in the companies slashed to just 20% and the government assuming the rest.

However, the demise of Fannie and Freddie was premature: the reported losses of the GSEs were just temporary, a fact that was clear to many shareholders who held onto their stock or bought in post-crisis. For these people, holding the stock appeared as if it would work out to be a good bet, especially once real estate prices returned to their previous levels.

Unfortunately for them, such a bet failed to account for the vagaries of government action. In 2012 the Treasury imposed a third amendment to the GSEs’ stock purchase agreements that effectively nationalized the companies and cut the shareholders out of any residual profits. The massive profits generated by Fannie Mae and Freddie Mac–which Treasury officials fully anticipated prior to the takeover–went directly into Treasury’s coffers, helping the Obama Administration claim a victory over the federal budget. The 2012 deficit was “just” $1.1 trillion, or $200 billion less than the previous year, helped by the outsized GSE profits that year.

While the third amendment may have been a short-term political salve, it has created longer-term problems that the next president will have to deal with. The major problem is that because the third amendment “sweeps” the entire net worth of each GSE into Treasury’s coffers each quarter, it means that neither of them have sufficient capital to withstand a sub-par quarter without a draw from Treasury. Should the real-estate market so much as hiccup it’s going to mean a loss for one or both, and a new injection of taxpayer dollars from the government could create a political firestorm and a round of finger-pointing that could lead to another short-term fix that makes a further mess of the lending market for homes.

The plight of the GSEs has not exactly been grist for popular debate, but it’s time we began some sort of dialogue about our nation’s housing policy going forward nevertheless. Sacrificing $1 trillion of tax revenue and socializing the gains and losses from home mortgages has done nothing to boost home ownership or improve our economy.

Or, to be more precise, worse than nothing: the effective expropriation of the shares of Fannie and Freddie held by the public for short-term political expediency will assuredly have repercussions down the road, as investors become more wary of trusting the government to hold up its side of a deal. The virtual disappearance of any private-label mortgage-backed securities is a side-effect of the capriciousness of the government in mortgage markets in recent years.

The optimal policy solution is simple: we should begin by ending the mortgage-interest deduction, which is by far the most regressive blue-state tax break in existence, and follow that by gutting the government’s footprint in the market for mortgage-backed securities.

Accomplishing the latter is tricky even if it were to become politically possible: the problem is that it is hard to conceive of an entity that is buying, packaging and reselling mortgages operating without there being some sort of implicit government backstop, even if the government swears up and down it would never bail out such an entity. If it were to grow large enough there would inevitably be some sort of systemic risk with the collapse of an entity issuing MBS’s, and once the market perceived that to be true it would be duly exploited by both sides of the transaction.

One solution would be to simply have a number of smaller, competing entities and take steps to prevent one from growing “too” large, a proposal others have already offered. Another step might be to give back the private GSE shareholders their share of the company and for the government to sell its share off as well, and come up with a way for the federal government to capture the value of its effective MBS guarantee in some way.

Or, if we’re being creative, we can try to come up with a way for the Treasury to effectively tie its hands so as to make it impossible for it to intervene in the market, even if it were to collapse. However, I fear that this is as doable as it would be for the federal government to forswear helping people who build in floodplains when their houses inevitably get damaged by flooding, to paraphrase a famous example by the Nobel prize-winning economists Finn Kydland and Ed Prescott.

The current quasi-nationalization of our mortgage markets and billions of dollars of tax breaks for homeowners have not done one whit for home ownership while creating all sorts of potential problems in the housing market.

There’s always a risk in attempting any sort of major reform; what’s unclear is whether we can possibly do worse than the status quo.

It looks like Deutsche Bank is heading toward failure. Why might we be concerned?

The problem is that Deutsche is too big to fail — more precisely, that the new Basel III bank resolution procedures now in place are unlikely to be adequate if it defaults.

Let’s review recent developments. In June 2013 FDIC Vice Chairman Thomas M. Hoenig lambasted Deutsche in a Reuters interview. “It’s horrible, I mean they’re horribly undercapitalized,” he said. “They have no margin of error.” A little over a year later, it was revealed that the New York Fed had issued a stiff letter to Deutsche’s U.S. arm warning that the bank was suffering from a litany of problems that amounted to a “systemic breakdown” in its risk controls and reporting. Deutsche’s operational problems led it to fail the next CCAR — the Comprehensive Capital Analysis and Review, aka the Fed’s stress tests — in March 2015.

Major senior management changes were made throughout 2015 and Deutsche was retrenching sharply with plans to cut its workforce by 35,000. This retrenchment failed to reassure the markets. Between January 1st and February 9th this year, the bank’s share price fell 41 percent and the prices of Deutsche’s CoCos (or Contingent Convertible bonds) were down to 70 cents on the euro.[1] Co-Chair John Cryan responded with an open letter to reassure employees: “Deutsche Bank remains absolutely rock-solid, given our strong capital and risk position,” he wrote. The situation was sufficiently serious that the German finance minister Wolfgang Schäuble felt obliged to explain that he “had no concerns” about Deutsche. Finance ministers never need to provide reassurances about strong banks.

On February 12th, Deutsche launched an audacious counter-attack: it would buy back $5.4 billion of its own bonds. The prices of its bonds — and especially of its CoCos — rallied and the immediate danger receded.

Fast forward to the day after the June 23rd Brexit vote and Deutsche’s share price plunged 14 percent. Deutsche then took three further hits at the end of June. First, spreads on Credit Default Swaps (CDSs) spiked sharply to 230 basis points, up from 95 basis points at the start of the year. These spreads indicate the market’s odds on a default. Second, Deutsche flunked the CCAR again. Third, the latest IMF Country Report stated that “Deutsche Bank appears to be the most important net contributor to systemic risks” in the world financial system and warned the German authorities to urgently (re)examine their bank resolution procedures.

Less than a week later, Deutsche’s CoCos had collapsed again, trading at 75 cents on the euro. The Italian Prime Minister, Matteo Renzi, then put the boot in, suggesting that the difficulties facing Italian banks over their well-publicised bad loans were minuscule compared to the problems that other European banks had with their derivatives. To quote:

If this nonperforming loan problem is worth one, the question of derivatives at other banks, big banks, is worth one hundred. This is the ratio: one to one hundred.

He was referring to the enormous size of Deutsche’s derivatives book.

In this post I take a look at Deutsche’s financial position using information drawn mainly from its last Annual Report. I wish to make two points. The first is that although there are problems with the lack of transparency of Deutsche’s derivatives positions, their size alone is not the concern. Instead, the main concern is the bank’s leverage ratio — the size of its ‘risk cushion’ relative to its exposure or amount at risk — which is low and falling fast.

Deutsche’s derivatives positions

On p. 157 of its 2015 Annual Report: Passion to Perform, Deutsche reports that the total notional amount of its derivatives book as of 31 December 2015 was just over €41.9 trillion, equivalent to about $46 trillion, over twice U.S. GDP. This number is huge, but it is mostly a scare number. What matters is not the size of Deutsche’s notional derivatives book but the size of its derivatives exposure, i.e., how much does Deutsche stand to lose?

There can be little question that this exposure will be nowhere near the notional value and may only be a small fraction of it. One reason is that the notional value of some derivatives – such as some swaps – can bear no relationship to any sensible notion of exposure. A second reason is that many of these derivatives will have offsetting exposures, so that losses on one position will be offset by gains on others.

On the same page, Deutsche reports the net market value of its derivatives book: €18.3 billion, only 0.04 percent of its notional amount. However, this figure is almost certainly an under-estimate of Deutsche’s derivatives exposure.

The figure is also unreliable, because many of Deutsche’s derivatives are valued using methods that cannot be verified. Like many banks, Deutsche uses a three-level hierarchy to report the fair values of its assets. The most reliable, Level 1, applies to traded assets and fair-values them at their market prices. Level 2 assets (such as mortgage-backed securities) are not traded on open markets and are fair-valued using models calibrated to observable inputs such as other market prices. The murkiest, Level 3, applies to the most esoteric instruments (such as the more complex/illiquid Credit Default Swaps and Collateralized Debt Obligations) that are fair-valued using models not calibrated to market data – in practice, mark-to-myth. The scope for error and abuse is too obvious to need spelling out. P. 296 of the Annual Report values its Level 2 assets at €709.1 billion and its Level 3 assets at €31.5 billion, or 1,456 percent and 65 percent respectively of its preferred core capital measure, Tier 1 capital. There is no way for outsiders to check these valuations, leaving analysts with no choice but to work with these numbers while doubtful of their reliability.
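Those two percentages are just division on the figures quoted above (a quick arithmetic check, nothing more):

```python
# Level 2/3 asset figures relative to Tier 1 capital
# (EUR billions, as quoted in the text from the 2015 Annual Report)
tier1 = 48.7    # Tier 1 capital
level2 = 709.1  # Level 2 assets, fair-valued from observable inputs
level3 = 31.5   # Level 3 assets, fair-valued from uncalibrated models

print(f"Level 2 / Tier 1: {level2 / tier1:.0%}")  # ~1456%
print(f"Level 3 / Tier 1: {level3 / tier1:.0%}")  # ~65%
```

In other words, the hard-to-verify Level 2 book alone is more than fourteen times the bank’s Tier 1 capital.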

The €18.3 billion net market value of its derivatives is likely to be an under-estimate because it is based on assumptions (e.g., about hedge effectiveness) and model-based valuations that will tend to be biased on the rosy side. Deutsche’s management will want their reports to impress the analysts and investors on whose confidence they depend.

The International Financial Reporting Standards (IFRS) used by Deutsche also allow considerable scope for creative fiddling, and not just for derivatives: weaknesses include deficiencies in provisions for expected losses and IFRS’s vulnerability to retained earnings manipulation.[2]

Experience confirms that losses on some derivatives positions (e.g., CDSs) can be many multiples of accounting-based or model-based projections of their exposures.[3]

For all these reasons, the true derivatives exposure is likely to be considerably greater – and I would guesstimate many multiples of – any net market value number.[4]

In short, Deutsche’s derivatives exposure is much greater than €18.3 billion but only a small fraction of the ‘headline’ €41.9 trillion scare number.

Bank accounting is the blackest of black holes.

Deutsche’s reported 3.5 percent leverage ratio

Recall that the number to focus on when gauging a bank’s risk exposure is its leverage ratio.[5] Traditionally, the term ‘leverage’ (or sometimes ‘leverage ratio’) was used to describe the ratio of a bank’s total assets to its core capital. However, under the Basel III capital rules, that same term ‘leverage ratio’ is now used to describe the ratio of a bank’s core capital to a new measure known as its leverage exposure. Basel III uses leverage exposure instead of total assets because the former measure takes account of some of the off-balance-sheet risks that the latter fails to include. However, there is usually not much difference between the total asset and leverage exposure numbers in practice. We can therefore think of the Basel III leverage ratio as being (approximately) the inverse of the traditional leverage (ratio) measure.

Armed with these definitions, let’s look at the numbers. On pp. 31, 130 and 137 of its 2015 Annual Report, Deutsche reports that at the end of 2015, its Basel III-defined leverage ratio, the ratio of its Tier 1 capital (€48.7 billion) to its leverage exposure (€1,395 billion) was 3.5 percent.[6] This leverage ratio implies that a loss of only 3.5 percent on its leverage exposure (or approximately, on its total assets) would be enough to wipe out all its Tier 1 capital.

If you think that 3.5 percent is a low capital buffer, you would be right. Deutsche’s 3.5 percent leverage ratio is also lower than that of any of its competitors and about half that of major U.S. banks.

One can also compare Deutsche’s reported 3.5 percent leverage ratio to regulatory standards. Under the Basel III rules, the absolute minimum required (Tier 1) leverage ratio is 3 percent. Under the U.S. Prompt Corrective Action (PCA) framework, a bank is regarded as ‘well-capitalized’ if it has a leverage ratio of at least 5 percent, ‘adequately capitalized’ if that ratio is at least 4 percent, ‘undercapitalized’ if that ratio is less than 4 percent, ‘significantly undercapitalized’ if that ratio is less than 3 percent, and ‘critically undercapitalized’ if its tangible equity to total assets ratio is less than or equal to 2 percent.[7]
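A simplified sketch of how the PCA thresholds quoted above map a leverage ratio to a category (ignoring the risk-based ratios and the tangible-equity test for ‘critically undercapitalized’ that the real framework also applies):

```python
def pca_category(leverage_ratio: float) -> str:
    """Rough PCA capital-category lookup by leverage ratio alone.

    A simplified sketch of the thresholds quoted in the text; the actual
    PCA framework also uses risk-based capital ratios, and the 'critically
    undercapitalized' category turns on tangible equity, omitted here.
    """
    if leverage_ratio >= 0.05:
        return "well-capitalized"
    if leverage_ratio >= 0.04:
        return "adequately capitalized"
    if leverage_ratio >= 0.03:
        return "undercapitalized"
    return "significantly undercapitalized"

print(pca_category(0.035))  # Deutsche's reported ratio: "undercapitalized"
```

By this yardstick, Deutsche’s reported 3.5 percent ratio sits in the ‘undercapitalized’ band.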

The Federal Reserve is in the process of imposing a 5 percent minimum leverage ratio requirement on the 8 U.S. G-SIB (Globally Systemically Important Bank) holding companies and a 6 percent minimum leverage ratio on their federally insured subsidiaries, effective 1 January 2018.[8]

So, Deutsche’s leverage ratio is (a) not much bigger than Basel III’s absolute minimum, (b) low enough to make the bank ‘undercapitalized’ under the PCA framework and (c) well below the minimum requirements coming through for the big U.S. banks.

The 3.5 percent leverage ratio is also a fraction of the minimum capital standards proposed by experts. On this issue, an important 2010 letter to the Financial Times  by Anat Admati and 19 other renowned experts recommended a minimum leverage ratio of at least 15 percent. Independently, John Allison, Martin Hutchinson, Allan Meltzer, Thomas Mayer and I have also advocated minimum leverage ratios of at least 15 percent, which happens to be close to the average leverage ratio of U.S. banks when the Fed was founded.

There are also reasons to believe that the reported 3.5 percent figure overstates the bank’s ‘true’ leverage ratio. Leaving aside the incentives on the bank’s part to overstate the bank’s financial strength, which I touched upon earlier, the first points to note here are that Deutsche uses Tier 1 capital as the numerator, and that €48.7 billion is a small capital cushion for a systemically important bank.

The numerator in the leverage ratio: core capital

So let’s consider the numerator further, and then the denominator.

You might recall that I described the numerator in the leverage ratio as ‘core’ capital. Now the point of core capital is that it is the ‘fire resistant’ capital that can be counted on to support the bank in the heat of a crisis. The acid test of a core capital instrument is simple: if the bank were to fail tomorrow, would the capital instrument be worth anything? If the answer is Yes, the capital instrument is core; if the answer is No, then it is not.

Examples of capital instruments that would fail this test, but are still commonly (and incorrectly) included in core capital measures, are goodwill and Deferred Tax Assets (DTAs), which allow a bank to claim back tax on incurred losses if/when the bank subsequently returns to profitability.

Deutsche also reports (p. 130) a more conservative capital measure, Common Equity Tier 1 (CET1), equal to €44.1 billion. This CET1 measure would have been more appropriate because it excludes softer non-core capital instruments – Additional Tier 1 capital – that are included in the Tier 1 measure.

If one now replaces Tier 1 capital with CET1, one gets a leverage ratio of 44.1/1,395 = 3.16 percent.

Even CET1 overstates the bank’s core capital, however. One reason for this overstatement is that the regulatory definition of CET1 includes a ‘sin bucket’ of up to 15 percent of non-CET1 (i.e., softer) capital instruments, including DTAs, Mortgage Servicing Rights, and the capital instruments of other financial institutions.[9] The consequence is that Basel III-approved CET1 can overstate the ‘true’ CET1 by up to 1/0.85 - 1 ≈ 17.6 percent.

Yet even stripped of its silly sin bucket — which was a concession to banks’ lobbying to weaken capital requirements — a ‘pure’ CET1 measure still overstates core capital. Basel III defines CET1 as (approximately) Tangible Common Equity (TCE), plus retained earnings, accumulated other comprehensive income and other disclosed reserves.[10] Of these items, only TCE really belongs in a measure of core capital, because the other items (especially retained earnings) are manipulable, i.e., these items are not core capital at all.

And what exactly is Tangible Common Equity? Well, the ‘tangible’ in TCE means that the measure excludes soft capital like goodwill or DTAs, and the ‘common’ in TCE means that it excludes more senior capital instruments like preference shares or hybrid capital instruments such as CoCos.

The importance of TCE as the ultimate core capital measure was highlighted in a 2011 speech by Federal Reserve Governor Daniel Tarullo. When reflecting on the experience of the Global Financial Crisis, Tarullo observed:

It is instructive that during the height of the crisis, counterparties and other market actors looked almost exclusively to the amount of tangible common equity held by financial institutions in evaluating the creditworthiness and overall stability of those institutions and essentially ignored any broader capital measures altogether.[11]

As a consequence, CET1 is itself too broad a capital measure but we don’t have data on the TCE core capital measure we would really want.

The denominator in the leverage ratio: leverage exposure vs. total assets, both too low

Turning to the denominator, first note that the leverage exposure (€1,395 billion) is less than the reported total assets (€1,629 billion, p. 184). You might recall that the leverage exposure is supposed to take account of the off-balance-sheet risks that the total assets measure ignores, but it doesn’t. Instead, the measure that does take account of (some) off-balance-sheet exposure is less than the total assets measure that does not. If this has you scratching your head, then your brain is working. The leverage exposure is too low.

If one replaces the leverage exposure in the denominator with total assets, one gets a leverage ratio equal to 44.1/1,629 = 2.71 percent, comfortably below Basel III’s 3 percent absolute minimum. But this total assets measure is itself too low, because it ignores the off-balance-sheet risks, which typically dwarf the on-balance-sheet exposures.

In short, the 2.71 percent leverage ratio overstates the ‘true’ leverage ratio because it overstates the numerator and understates the denominator.

Market-value vs. book-value leverage ratios

There are still more problems. The 2.71 percent leverage ratio is a book-value estimate. Corresponding to the book-value estimate is the market-value leverage ratio, which is the estimate reflected in Deutsche’s stock price. In the present context, the latter is the better indicator, because it reflects the information available to the market, whereas the book value merely reflects information in the accounts. If there is new information, or if the market does not believe the accounts, then the market value will reflect that market view, but the book value will not.

One can obtain the market value estimate by multiplying the book-value leverage ratio by the bank’s price-to-book ratio, which was 44.4 percent at the end of 2015. Thus, the contemporary market-value estimate of Deutsche’s leverage ratio was 2.71 percent times 44.4 percent = 1.20 percent.
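The chain of ratio estimates worked through above is just arithmetic on the figures quoted from the Annual Report; put in one place (EUR billions):

```python
# Re-doing the leverage-ratio arithmetic from the figures quoted in the text
# (Deutsche Bank, end-2015; EUR billions)
tier1 = 48.7              # Tier 1 capital
cet1 = 44.1               # Common Equity Tier 1
leverage_exposure = 1395.0
total_assets = 1629.0
price_to_book = 0.444     # price-to-book ratio at end-2015

basel_ratio = tier1 / leverage_exposure    # reported Basel III leverage ratio
cet1_ratio = cet1 / leverage_exposure      # stricter numerator
book_ratio = cet1 / total_assets           # stricter denominator as well
market_ratio = book_ratio * price_to_book  # market-value estimate

for name, r in [("Basel III (Tier 1 / exposure)", basel_ratio),
                ("CET1 / exposure", cet1_ratio),
                ("CET1 / total assets", book_ratio),
                ("market-value estimate", market_ratio)]:
    print(f"{name}: {r:.2%}")
# Basel III (Tier 1 / exposure): 3.49%
# CET1 / exposure: 3.16%
# CET1 / total assets: 2.71%
# market-value estimate: 1.20%
```

Each successive line tightens either the numerator or the denominator, and each step shrinks the apparent capital cushion.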

Since then Deutsche’s share price has fallen by almost 43 percent and Deutsche’s latest market-value leverage ratio is now about 0.71 percent.

Policy implications

So what’s next for the world’s most systemically dangerous bank?

At the risk of having to eat my words, I can’t see Deutsche continuing to operate for much longer without some intervention: chronic has become acute. Besides its balance sheet problems, Deutsche has a cost of funding that exceeds its return on assets, poor risk management, an antiquated legacy IT infrastructure, an inability to manage its own complexity and collapsing profits; and the peak pain is still to hit. Deutsche reminds me of nothing more than a boxer on the ropes: one more blow could knock him out.

If I am correct, there are only three policy possibilities: #1, Deutsche will be allowed to fail; #2, it will be bailed in; or #3, it will be bailed out.

We can rule out #1: the German/ECB authorities will not allow Deutsche to go into bankruptcy. They would be worried that it would trigger a collapse of the European financial system, and they can’t afford to take the risk. Deutsche is too big to fail.

Their preferred option would be #2, a bail-in, the only resolution procedure allowed under EU rules, but this won’t work. Authorities would be afraid to upset bail-in-able investors and there isn’t enough bail-in-able capital anyway.

Which consideration leads to the policy option of last resort — a good-old bad-old taxpayer-financed bail-out. Never mind that EU rules don’t allow it and never mind that we were promised never again.  Never mind, whatever it takes.


[1] CoCo investors feared that their bonds’ triggers would be breached and that their bonds would be converted into equity, which might soon become worthless.

[2] See T. Bush, “UK and Irish Banks Capital Losses – Post Mortem,” Local Authority Pension Fund Forum, 2011) and G. Kerr, “Law of Opposites: Illusory Profits in the Financial Sector,” Adam Smith Institute, 2011.

[3] A. G. Haldane, “Capital Discipline,” speech given to the American Economic Association, Denver, Colorado, January 9, 2011, chart 3.

[4] Indeed, as partial confirmation of this conjecture, on p. 137, the Annual Report gives an estimated ‘total derivatives exposure’ of €215 billion, nearly 12 times the €18.3 billion net market value of its derivatives book and over 4 times its Tier 1 capital.

[5] In its Annual Report, Deutsche highlights its Tier 1 capital ratio, the ratio of Tier 1 capital to Risk-Weighted Assets (RWA), as its ‘headline’ capital ratio: this ratio is 12.3 percent if one uses the ‘fully loaded’ measure, which assumes that CRR/CRD 4, the EU Capital Requirements Regulation and Directive, has been fully implemented. However, the RWA measure is discredited (see here) and these so-called capital ratios are fictitious, not least because they assume that most assets have no risk. To illustrate, the ratio of Deutsche’s RWA to total assets is only 24.4 percent, which suggests that 75.6 percent of its assets are riskless!

[6] These numbers refer to the CRR/CRD 4 ‘fully loaded’ measures.

[7] See here, p. 2.1-8.

[8] See Board of Governors of the Federal Reserve System, “Agencies Adopt Enhanced Supplementary Leverage Ratio Final Rule and Issue Supplementary Leverage Ratio Notice of Proposed Rulemaking,” press release, April 8, 2014.

[9] For more on the sin bucket, see T. F. Huertas, “Safe to Fail: How Resolution Will Revolutionise Banking,” New York: Palgrave, 2014, p. 23; and “Basel III: A global regulatory framework for more resilient banks and banking systems,” (Basel Committee, June 2011), pp. 21-6 and Annex 2.

[10] For a more complete definition of CET1 capital, see Basel Committee on Banking Supervision (BCBS) “Basel III: A global regulatory framework for more resilient banks and banking systems,” (Basel Committee, June 2011), p. 13.

[11] D. K. Tarullo, “The Evolution of Capital Regulation,” speech to the Clearing House Business Meeting and Conference, New York, November 9, 2011.

[Cross-posted from]

The New York Times, in its infinite wisdom, has figured out how poor states can become rich states: they need only increase taxes and spending. It recently published a piece entitled “The Path to Prosperity Is Blue,” which suggested that the states that have maintained solid growth over the last three decades largely owe that growth to high state government spending, and that poor states should follow that formula as well.

The statistical derivation of this conclusion comes from the fact that the wealthiest states of the U.S. tend to be blue states, which have higher taxes and spending. By this logic, spending drives growth. 

While there is indeed a relationship between a state’s spending and its GDP, the causality runs completely contrary to what the Times portrays. The reality is that states that become prosperous invariably spend more money. Some of that can represent more spending on public goods (Connecticut does seem to have better schools than Mississippi), but far more of it is simply captured by government interests. While California may have created a quality public university system in the 1950s and 1960s with its newfound wealth, the reason its taxes are so high today is that it has a ruinous public pension system to finance. Its high spending isn’t doing its citizenry any good at all.

New York City and California, two high-tax regions, became prosperous in large part because they were (and remain) hubs for immigrants and ambitious, entrepreneurial Americans who helped create the industries that to this day drive their economies. California’s defense and IT industries did benefit from public investment as well, of course, but it was investment from the federal government, and in each case it merely served as a catalyst for the development of industries that went far beyond the government’s initial investment.

To tell Mississippi that it could become prosperous and pull its citizenry out of poverty if it only doubled taxes is an absurd notion that amounts to economic malpractice. What Mississippi has to do is figure out how to attract and retain talented individuals, which is easier said than done. Unfortunately, the Jacksons and Peorias of the world are not lures for the ambitious Indian engineer or Chinese IT professional, who would rather take their chances in Silicon Valley, Los Angeles, or anywhere else where the quality of life is good and jobs are plentiful.

The lesson to take away from a comparison of the economic status of the fifty states is that economies of agglomeration are a vaguely understood but critically important phenomenon, that location matters, and that it is enormously difficult for states to pivot when their main industries falter. None of these factors can be said to be driven by government spending.

An updated body camera scorecard highlights a disturbing state of affairs in body camera policy that lawmakers should strongly resist. A majority of the body camera policies examined by Upturn and the Leadership Conference on Civil and Human Rights received the lowest possible score on officer review of footage and on footage access for citizens alleging misconduct, meaning that the departments were either silent on these issues or have policies in place that are contrary to the civil rights principles outlined in the scorecard. Such policies do not promote transparency and accountability, and they serve as a reminder that body cameras can only play a valuable role in criminal justice reform if they’re governed by the right policies.

Upturn and the Leadership Conference on Civil and Human Rights looked at the body camera policies of fifty departments, including all departments in major cities that have either outfitted their officers with body cameras or will do so in the near future. Also scored were departments that received at least $500,000 in body camera grants from the Department of Justice, as well as the Baton Rouge and Ferguson Police Departments.

Each department was given one of four possible scores in eight categories (personal privacy, officer review, biometric use, footage retention, etc.). Departments were either awarded a red ex, a yellow circle, or a green check, depending on how consistent their body camera policy is with the civil rights principles outlined in the scorecard, with a red ex indicating inconsistency or silence and a green check indicating consistency. A fourth score, the “?”, was awarded to policies that were not publicly available.

Below are the scoring criteria for officer review and footage access for citizens filing complaints:

Forty of the fifty departments received the lowest possible score for “Officer Review,” and not one received a green check.

When it comes to footage access, the scores are marginally better, with four departments being awarded green checks. However, thirty-nine of the fifty departments received the lowest score in the “Footage Access” category.

Thirty-five (70%) of the departments received the lowest possible score for both officer review and access to footage. Among these departments are some of America’s largest, including the Los Angeles Police Department, the New York Police Department, the Houston Police Department, and the Philadelphia Police Department.

Regrettably, the federal government has sent body camera funds to departments with the lowest-scoring officer review and footage access policies. Eleven of the thirty-five departments that received a red ex for officer review and footage access were awarded at least $500,000 in body camera grants by the Department of Justice.

Body cameras can only be tools for increased transparency and accountability in law enforcement with the right policies in place. Unfortunately, Upturn and the Leadership Conference on Civil and Human Rights’ scorecard reveals not only that many departments have poor accountability and transparency policies but also that the Department of Justice does not treat these policies as disqualifying when it awards body camera grants.


Last Thursday, a Chicago police officer shot unarmed 18-year-old Paul O’Neal in the back, killing him. O’Neal reportedly crashed a stolen car into a police vehicle during a chase and then fled on foot. Two officers then fired at O’Neal. This is the kind of incident where body camera footage would be very helpful to investigators. The officer who shot O’Neal was outfitted with a body camera. Unfortunately, the camera wasn’t on during the shooting, raising difficult questions about the rules governing non-compliance with body camera policy. While there is undoubtedly a learning curve associated with body cameras, officers who fail to have them on during use-of-force incidents should face harsh consequences.

Body camera footage of O’Neal’s shooting would make the legality of the killing easier to determine. The Supreme Court ruled in Tennessee v. Garner (1985) that a police officer cannot use lethal force on a fleeing suspect unless “the officer has probable cause to believe that the suspect poses a significant threat of death or serious physical injury to the officer or others.” The Chicago Police Department’s own use-of-force guidelines allow officers to use a range of tools (pepper spray, canines, Tasers) to deal with unarmed fleeing suspects under some circumstances, but the firearm is not one of them.

O’Neal’s shooting would be legal if the officer who shot him had probable cause to believe that he posed a threat of death or serious injury to members of the public or police officers. Given the information available, perhaps most significantly the fact that O’Neal was unarmed, it looks likely that O’Neal died as a result of unjustified use of lethal force.

So far, the Chicago Police Department has stripped police powers from three officers involved in the chase and shooting, with Superintendent Eddie Johnson saying that the officers violated department policy. O’Neal’s mother has filed a federal civil rights lawsuit, alleging that her son was killed “without legal justification.”

O’Neal’s shooting is clearly the kind of incident police body cameras should film. There are important debates related to body cameras capturing footage of living rooms, children, or victims of sexual assault. But O’Neal’s death is the kind of incident that body camera advocates have consistently wanted on record. The shooting was outside (thereby posing few privacy considerations) and involved lethal use-of-force. Indeed, Chicago’s own body camera policy states that incidents such as O’Neal’s shooting should be filmed.

Investigators reportedly don’t think that the body camera was intentionally disabled, with the officer’s inexperience with the camera or the crash playing a role in the camera not filming the shooting. This can be handled by better training, but lawmakers should consider policies that harshly punish officers who don’t have their body cameras on when they should.

The American Civil Liberties Union (ACLU) has proposed one such policy. Under the ACLU’s body camera policy, if an officer fails to activate his camera or interferes with the footage, the following measures kick in:

1. Direct disciplinary action against the individual officer.

2. The adoption of rebuttable evidentiary presumptions in favor of criminal defendants who claim exculpatory evidence was not captured or was destroyed.

3. The adoption of rebuttable evidentiary presumptions on behalf of civil plaintiffs suing the government, police department and/or officers for damages based on police misconduct. The presumptions should be rebuttable by other, contrary evidence or by proof of exigent circumstances that made compliance impossible.

The third policy recommendation is of note in the O’Neal shooting given that O’Neal’s mother has filed a federal civil rights lawsuit. If the ACLU’s body camera policy were in place, the evidentiary presumption would favor O’Neal’s mother, not the Chicago Police Department. However, the ACLU’s policy doesn’t make clear how a judge would administer this shift in evidentiary presumption in such cases.

It’s unrealistic for criminal justice reform advocates to expect that body cameras will be a police misconduct panacea. We shouldn’t be surprised if reports of cameras not being on when they should have been emerge as more and more police departments issue body cameras. Lawmakers should anticipate body camera growing pains, but they should also consider policies that ensure failure to comply with body camera policies results in harsh consequences.

On July 25, Miami-Dade Florida circuit judge Teresa Pooler dismissed money-laundering charges against Michell Espinoza, a local bitcoin seller. The decision is a welcome pause on the road to financial serfdom. It is a small setback for authorities who want to fight crime (victimless or otherwise) by criminalizing and tracking the “laundering” of the proceeds, and who unreasonably want to do the tracking by eliminating citizens’ financial privacy, that is, by unrestricted tracking of their subjects’ financial accounts and activities. The US Treasury’s Financial Crimes Enforcement Network (FinCEN) is today the headquarters of such efforts.

As an Atlanta Fed primer reminds us, the authorities’ efforts are built upon the Bank Secrecy Act (BSA) of 1970. (A franker label would be the Bank Anti-Secrecy Act.) The Act has been supplemented and amended many times by Congress, particularly by Title III of the USA PATRIOT Act of 2001, and expanded by diktats of the Federal Reserve and FinCEN. The laws and regulations on the books today have “established requirements for recordkeeping and reporting of specific transactions, including the identity of an individual engaged in the transaction by banks and other FIs [financial institutions].” These requirements are collectively known as Anti-Money-Laundering (AML) rules.

In particular, banks and other financial institutions are required to obey “Customer Identification Program” (CIP) protocols (aka “know your customer”), which require them to verify and record identity documents for all customers, and to “flag suspicious customers’ accounts.” Banks and financial institutions must submit “Currency Transaction Reports” (CTRs) on customers’ currency deposits, withdrawals, or exchanges of more than $10,000. To foreclose the possibility of people using unmonitored non-banks to make transfers, FinCEN today requires non-depository “money service businesses” (MSBs) – which FinCEN defines to include “money transmitters” like Western Union and issuers of prepaid cards like Visa – also to know their customers. Banks and MSBs must file “Suspicious Activity Reports” (SARs) on transactions of $5,000 or more that may be associated with money laundering or other criminal activity. Individuals must also file reports. Carrying more than $10,000 into or out of the US triggers a “Currency or Monetary Instrument Report” (CMIR). Any US citizen who has more than $10,000 in foreign financial accounts, even if it never moves, must annually file “Foreign Bank and Financial Accounts Reports” (FBARs).
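To make the reporting regime concrete, here is a minimal sketch of how these thresholds interact. The function name, event categories, and rule structure are my own hypothetical simplification, not FinCEN’s actual taxonomy; real rules involve aggregation across related transactions, exemptions, and subjective “suspicion” criteria:

```python
# Illustrative only: maps a single event to the BSA reports described above.
CTR_THRESHOLD = 10_000   # currency transactions: report if MORE than this
SAR_THRESHOLD = 5_000    # suspicious activity: report at this amount or more
CMIR_THRESHOLD = 10_000  # cross-border currency carriage: more than this
FBAR_THRESHOLD = 10_000  # foreign-account balance: more than this, annually

def reports_triggered(amount, kind, suspicious=False):
    """Return the names of the reports a single event could trigger."""
    reports = []
    if kind == "bank_cash" and amount > CTR_THRESHOLD:
        reports.append("CTR")
    if kind in ("bank_cash", "msb_transfer") and suspicious and amount >= SAR_THRESHOLD:
        reports.append("SAR")
    if kind == "border_carry" and amount > CMIR_THRESHOLD:
        reports.append("CMIR")
    if kind == "foreign_account" and amount > FBAR_THRESHOLD:
        reports.append("FBAR")
    return reports

print(reports_triggered(12_000, "bank_cash"))          # ['CTR']
print(reports_triggered(6_000, "msb_transfer", True))  # ['SAR']
print(reports_triggered(9_000, "border_carry"))        # []
```

The point of the sketch is the one made in the surrounding text: every conduit the authorities could commandeer in 1970 maps to some report, which is why an internet transfer with no financial institution in the middle falls outside the scheme.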

In addition, state governments license money transmitters and impose various rules on their licensees.

When most of these rules were enacted, before 2009, there were basically only three convenient (non-barter) conduits for making a large-value payment. If Smith wanted to transfer $10,000 to Jones, he could do so in person using cash, which would typically involve a large withdrawal followed by a large deposit, triggering CTRs. He could make the transfer remotely using deposit transfer through the banking system, triggering CTRs or SARs if suspicious. Or he could use a service like Western Union or Moneygram, again potentially triggering SARs. For the time being, the authorities had the field pretty well covered.

Now come Bitcoin and other cryptocurrencies. Cash is of course still a face-to-face option. But today if Smith wants to transfer $10,000 remotely to Jones, he need not go to a bank or Western Union office. He can accomplish the task by (a) purchasing $10,000 in Bitcoin, (b) transferring the BTC online to Jones, and (c) letting Jones sell them for dollars (or not).  The authorities would of course like to plug this “loophole.” But the internet, unlike the interbank clearing system, is not a limited-access conduit whose users can be commandeered to track and report on its traffic. No financial institution is involved in a peer-to-peer bitcoin transfer. Granted, Smith will have a hard time purchasing $10,000 worth of Bitcoins without using a bank deposit transfer to pay for them, which pings the authorities, but in principle he could quietly buy them in person with cash.

In the recent legal case, it appears that this possibility for unmonitored transfers was noticed by Detective Ricardo Arias of the Miami Beach Police Department, who “became intrigued” and presumably alarmed upon learning about Bitcoin at a meeting with the US Secret Service’s Miami Electronic Crimes Task Force. Detective Arias and Special Agent Gregory Ponzi decided to investigate cash-for-Bitcoin sales in South Florida. (I take details about the case from Judge Pooler’s decision in State of Florida v. Michell Abner Espinoza (2016).) Arias and Ponzi set out to find a seller willing to make a cash sale face-to-face. Acting undercover, Arias contacted one Michell Espinoza, apparently chosen because his hours were flexible. Arias purchased $500 worth of Bitcoin at their first meeting in a Miami Beach coffee shop, and later purchased $1,000 worth at a meeting in a Häagen-Dazs ice cream shop in Miami. Arias tried to make a third purchase for $30,000 in a hotel room where surveillance cameras had been set up, but Espinoza rightly suspected that the currency offered was counterfeit, and refused it. At that meeting, immediately after the failed purchase, Espinoza was arrested. He was charged with one count of unlawfully operating a money services business without a State of Florida license, and two counts of money laundering under Florida law.

Judge Pooler threw out all three charges. Evaluating her arguments as a monetary economist, I find that some are insightful, while others are beside the point or confused. On the charge that Espinoza illegally operated an unlicensed money services business, she correctly noted that Bitcoin is not widely accepted in exchange for goods and thus “has a long way to go before it is the equivalent of money.” Accordingly, “attempting to fit the sale of Bitcoin into a statutory scheme regulating money services businesses is like fitting a square peg in a round hole.” However she also offered less compelling reasons for concluding that Bitcoin is not money, namely that it is not “backed by anything” and is “certainly not tangible wealth and cannot be hidden under a mattress like cash and gold bars.” Federal Reserve notes are money without being backed by anything, and bank deposits are money despite being intangible. Gold bars are today not money (commonly accepted as a medium of exchange).

Judge Pooler further correctly noted that Espinoza did not receive currency for the purpose of transmitting it (or its value) to any third party on his customer’s behalf, as Western Union does. He received cash only as a seller of Bitcoin. Nor, she held, does Bitcoin fall into any of the categories under Florida’s statutory definition of a “payment instrument,” so Espinoza was not operating a money services business as defined by the statute. Bitcoin is indeed not a payment instrument as defined by the statute, because it is not a fixed sum of “monetary value” in dollars like the categories of instruments listed by the statute. It is an asset with a floating dollar price, like a share of stock.

Here Judge Pooler accepted a key defense argument (basically, “the defendant was not transmitting money, but only selling a good for money”) that was rejected by Judge Collyer in U.S. v. E-Gold (2008). In the e-gold system, Smith could purchase and readily transfer to Jones claims to units of gold held at e-gold’s warehouse. Federal officials successfully busted e-gold for “transmitting money” without the proper licenses. Judge Collyer accepted the prosecution’s argument that selling gold to Smith, providing a vehicle for him to transfer it to Jones, and buying it back from Jones is tantamount to transmitting money from Smith to Jones. Of course the Espinoza case is different in that Espinoza did not provide a vehicle for transmitting Bitcoin to a third party, nor did he buy Bitcoin from any third party.

On the charge of money laundering, Judge Pooler found that there was no evidence that Espinoza acted with the intent to promote illicit activity or disguise its proceeds. Further, Florida law is too vague to know whether it applies to Bitcoin transactions. Thus: “This court is unwilling to punish a man for selling his property to another, when his actions fall under a statute that is so vaguely written that even legal professionals have difficulty finding a singular meaning.”

I expect that FinCEN will now want to work with the State of Florida, and other states, to rewrite their statutory definitions of money services businesses and money laundering to reinforce their 2013 directive according to which Bitcoin exchanges must register as MSBs and so submit to “know your customer” and “file reports on your customer” rules. If even casual individual Bitcoin sellers like Espinoza must also register as MSBs, that will spell the end to legal local Bitcoin-for-cash trades.

[Cross-posted from]

Last Friday, President Obama quietly signed legislation requiring special labeling for commercial foods containing genetically modified organisms (GMOs)—plants and animals with desirable genetic traits that were directly implanted in a laboratory. Gene modification typically yields plants and animals that take less time to reach maturity, have greater resistance to drought or disease, or have other desirable traits like sweeter corn or meatier livestock. Yet some people oppose these scientific advances, for reasons that aren’t all that clear.

Most of the foods that humans and animals have consumed for millennia have been genetically modified. Usually this was done through the unpredictable, haphazard technique of cross-fertilization, a technique whose development marked the dawn of agriculture. Yet the new law targets only the highly precise gene manipulations done in laboratories. The labeling requirement comes in spite of the fact that countless scientific organizations—including the American Association for the Advancement of Science, the National Academy of Sciences, the World Health Organization, the American Medical Association, and the British Royal Society—have concluded that GMOs pose no more threat to human health than new organisms developed through traditional methods.

Accordingly, some Obama critics have responded to his bill-signing by sarcastically quoting his earlier vows to rely on science in policymaking. The cleverer critics have even asked what comes next: dihydrogen monoxide warnings? Labels stating that foods contain no fairies or gremlins?

The critics overlook the incredible weakness of the new law, which can be satisfied with something as unobtrusive as a nondescript QR code linking to a GMO notice. In fact, anti-GMO activists oppose the new law because it preempts more rigorous regulation. And that’s exactly what President Obama and Congress intended to do.

The immediate reason for the new legislation is a more onerous 2014 Vermont law that would have affected the food supply chain, raising consumer prices nationally. Similar requirements were percolating in other states, advanced by anti-GMO activists (and agribusiness groups that don’t want competition from GMO products). With a stroke of the pen, President Obama and federal lawmakers have used a bad but meager requirement to counteract those far worse state laws.

This is not the only time the Obama White House has helped consumers and advanced science, to the frustration of the anti-GMO crowd. His administration previously approved the AquAdvantage salmon that had languished in bureaucratic review hell for decades.

Cato’s Regulation has covered GMOs and the broader biotech controversy for decades. You can see a couple of those articles if you click on the last two links above. Case Western Reserve law professor Jonathan Adler will have an article on the new labeling law in the magazine’s fall issue.