Cato Op-Eds

Individual Liberty, Free Markets, and Peace

I’ve had lots of requests for a non-Scribd link to the 2004 DoD IG report on the THINTHREAD and TRAILBLAZER programs I mentioned in my piece yesterday, so you can now find it here.

I should point out that at the end of the excellent documentary on this topic, A Good American, the film’s creators noted that Hayden, NSA Signals Intelligence Directorate head Maureen Baginski, and two other senior NSA executives involved in this affair declined to be interviewed on camera.

Michael Currier, like more and more defendants in recent years, was charged with multiple, overlapping offenses: (1) breaking and entering, (2) grand larceny, and (3) possession of a firearm as a convicted felon. This charging decision turned on an aggressive application of Virginia’s felon-in-possession statute, because the alleged firearm violation here was fleeting happenstance: Currier supposedly “handled” the victim’s firearms by moving them out of the way in order to commit the different offense of stealing money from a safe. If Currier had been tried on all these charges at once, the evidence needed to show he was a convicted felon would have been unduly prejudicial on the two primary counts (evidence of past, unrelated criminal behavior is generally inadmissible). The Commonwealth recognized this potential for prejudice, and therefore moved to sever the felon-in-possession count. It opted to try the primary offenses first, and the jury acquitted Currier of the breaking and entering and grand larceny charges. Undeterred, the Commonwealth pressed forward on the felon-in-possession count, refining its case to present the same underlying factual theory to a second jury. And on this second go-round, Currier was convicted. 

As Cato argued in our recent amicus brief, Currier’s conviction is squarely in conflict with the Double Jeopardy Clause of the Fifth Amendment. That provision guarantees that no person shall be “twice put in jeopardy of life or limb” for the same offense, and includes the principle that when an issue of ultimate fact has necessarily been determined by a jury acquittal, the government cannot relitigate the same factual question in a second trial for a separate offense. Given how Currier’s charges were tried the first time, the jury necessarily concluded that he wasn’t guilty of participating in the underlying burglary and theft—he simply wasn’t there at all. But that’s the exact same set of facts the government needed to obtain a conviction in the second trial, because Currier was only alleged to have “handled” the guns in the course of the robbery. 

The Commonwealth justifies this result by arguing that Currier waived his double-jeopardy rights by agreeing to severance, and that there was no blatant prosecutorial misconduct. But this position would deprive the Double Jeopardy Clause of much of its significance, and is inconsistent with the historical development of double jeopardy jurisprudence in the United States—in particular, its goal of guarding against the structural power imbalances that exist between prosecutors and defendants. It is also impossible to square the Commonwealth’s position with the sanctity of jury acquittals and the time-honored authority and prerogative of the jury, speaking for the community, to ultimately and finally determine facts. 

If the Commonwealth’s position becomes the law of the land, the government will be further incentivized to charge more offenses based on the same underlying conduct, thus increasing the need for (and likelihood of) multiple trials for the same underlying series of events. This type of overreach will allow the government to run dress rehearsals for successive prosecutions in more and more cases, thereby undermining the sacred liberty interests protected by the Double Jeopardy Clause, and diminishing the responsibility of the jury to stand between the accused and a potentially arbitrary or abusive government. This result would be a travesty; in today’s world of ever-expanding criminal codes and regulatory regimes, the government needs fewer, not more, incentives for piling on theories of criminal liability.

This article originally appeared on Just Security on December 7, 2017.   

Retired Gen. Michael Hayden, former director of the NSA and CIA (and now a national security analyst at CNN), has recently emerged as a leading critic of the Trump administration, but not so long ago he was widely criticized for his role in the post-9/11 surveillance abuses. With the publication of his memoir, Playing to the Edge: American Intelligence in the Age of Terror, Hayden launched his reputational rehab campaign.

Like most such memoirs by high-level Washington insiders, Hayden’s tends to be heavy on self-justification and light on genuine introspection and accountability. Also, when a memoir is written by someone who spent their professional life in the classified world of the American Intelligence Community, an additional caveat is in order: The claims made by the author are often impossible for the lay reader to verify. This is certainly the case for Playing to the Edge, an account of Hayden’s time as director of the NSA and, subsequently, the CIA.

Fortunately, with respect to at least one episode Hayden describes, litigation I initiated under the Freedom of Information Act (FOIA) has produced documentary evidence of Hayden’s role in the 9/11 intelligence failure and subsequent civil liberties violations. The consequences of Hayden’s misconduct during this time continue to be felt today. First, some background. 

The War Inside NSA, 1996 to 2001

By the mid-1990s, a group of analysts, cryptographers, and computer specialists at NSA realized that the growing volume of digital data on global communications circuits was both a potential gold mine of information on drug traffickers and terrorist organizations and a problem for NSA’s largely analog signals intelligence (SIGINT) collection, processing, and dissemination systems. As recounted in the documentary A Good American, three NSA veterans—Bill Binney, Ed Loomis, and Kirk Wiebe—set out to solve the problem of handling an ever-increasing stream of digital data while protecting the 4th Amendment rights of Americans against warrantless searches and seizures.

Through their Signals Intelligence Automation Research Center (SARC), they had, by 1999, developed a working prototype system, nicknamed THINTHREAD. A senior Republican House Permanent Select Committee on Intelligence (HPSCI) staffer, Diane Roark, was so impressed with what Binney, Loomis, and Wiebe had developed, that she helped steer approximately $3 million to the THINTHREAD project to further its development. But by April 2000, Roark and the SARC team had run into the ultimate bureaucratic roadblock for their plan: Hayden, who had recently been installed as NSA director.

He had his own, preferred solution to the same problem the SARC team had been trying to solve. As Hayden noted in his memoir:

Our answer was Trailblazer. This much-maligned (not altogether unfairly) effort was more a venture capital fund than a single program, with our investing in a variety of initiatives across a whole host of needs. What we wanted was an architecture that was common across our mission elements, interoperable, and expandable. It was about ingesting signals, identifying and sorting them, storing what was important, and then quickly retrieving data in response to queries.

It was, of course, a description that fit THINTHREAD perfectly—except for the collection and storage of terabytes of digital junk. THINTHREAD’s focus on metadata mining and link analysis was designed to help analysts pinpoint the truly important leads to follow while discarding irrelevant data. Hayden’s concept mirrored that of his successor, Keith Alexander, who also had a “collect it all” mentality.

In his memoir, Hayden spoke of the need to “engage industry” (p. 20) in the effort to help NSA conquer the challenge of sorting through the mind-numbing quantity of digital data, but even Hayden admitted that “When we went to them for things nobody had done yet, we found that at best they weren’t much better or faster than we were” (p. 20).

That should’ve been Hayden’s clue that NSA would be better off pursuing full deployment of THINTHREAD, a proven capability. But Hayden chose to pursue his industry-centric approach instead, and he tolerated no opposition or second-guessing of the decision he’d made.

In April 2000, Hayden’s message to the NSA workforce made it clear that any NSA employees who went to Congress to suggest a better way for the NSA to do business would face his wrath. Even so, the THINTHREAD team pressed on, managing to get their system deployed to at least one NSA site in a test bed status, working against a real-world target. Meanwhile, Roark continued to push NSA to make the program fully operational, but Hayden refused, and just three weeks before Sept. 11, 2001, further development of THINTHREAD was terminated in favor of the still hypothetical TRAILBLAZER program.

DoD IG Investigation vs. Hayden’s Memoir

As Loomis noted in his own account of the THINTHREAD-TRAILBLAZER saga, within days after the 9/11 attacks, NSA management ordered key components of THINTHREAD—the system Hayden had rejected—to be integrated (without the inclusion of 4th Amendment compliance software) into what would become known as the STELLAR WIND warrantless surveillance program. Terrified that the technology they’d originally developed to fight foreign threats was being turned on the American people, Loomis, Binney, and Wiebe retired from the NSA at the end of October 2001.

Over the next several months, they would attempt to get the Congressional Joint Inquiry to listen to their story, but to no avail. By September 2002, the trio of retired NSA employees, along with Roark, decided to file a Defense Department Inspector General (DoD IG) hotline complaint, in which they alleged waste, fraud, and abuse in the TRAILBLAZER program. Inside NSA, they still had an ally—a senior executive service manager named Tom Drake, who had become responsible for the remnants of THINTHREAD after the SARC team had resigned. Drake became the key source for the subsequent DoD IG investigation, which resulted in a scathing, classified report completed in December 2004.

The TRAILBLAZER-THINTHREAD controversy subsequently surfaced in the press, and I followed the reporting on it while working as a senior staffer for then-Representative Rush Holt (D-N.J.), a HPSCI member at the time. Once Holt was appointed to the National Commission on Research and Development in the Intelligence Community, I asked for and received copies of the published DoD IG reports dealing with the THINTHREAD and TRAILBLAZER programs.

The 2004 report remains the most damning IG report I’ve ever read, and after Holt announced his departure from Congress in 2014, I decided to continue my own investigation into this episode as an analyst at the Cato Institute. In March 2015, I filed a FOIA request seeking not only the original 2004 DoD IG report, but all other documents relevant to the investigation.

After being stonewalled by DoD and NSA for nearly two years, Cato retained the services of Loevy and Loevy of Chicago to prosecute a FOIA lawsuit to help get the documents I sought. In July 2017, the Pentagon released to me a still heavily redacted version of the 2004 DoD IG report. But there are fewer redactions in my copy than there were in the version provided to the Project on Government Oversight (POGO) in 2011, and it provides the clearest evidence yet that Hayden’s account of the THINTHREAD-TRAILBLAZER episode in his memoir is simply not to be believed.

On The IG Investigation Itself

On page 26 of his memoir, Hayden’s only mention of the IG investigation is a single sentence: “Thin Thread’s advocates filed an IG (inspector general) complaint against Trailblazer in 2002.”

Hayden makes no mention of the efforts he and his staff made to downplay THINTHREAD to the IG, or of the climate of fear that Hayden and his subordinates created among those who worried that TRAILBLAZER was a programmatic train wreck and that THINTHREAD could, in fact, provide NSA with exactly the critical “finding the needle in the haystack” capability it needed in the digital age.

In its Executive Summary (page ii), the DoD IG report agreed THINTHREAD was the better solution and should be deployed:

And the DoD IG made it clear that NSA management—meaning Hayden—had deliberately excluded THINTHREAD as an alternative to TRAILBLAZER at a clear cost to taxpayers:

On Defying Congress

Hayden’s fury at the SARC team keeping HPSCI staffer Roark in the loop about their progress was palpable, as he made clear on page 22 of his book:

The alliance with HPSCI staffer Roark created some unusual dynamics. I essentially had several of the agency’s technicians going outside the chain of command to aggressively lobby a congressional staffer to overturn programmatic and budget decisions that had gone against them internally. That ran counter to my military experience—to put it mildly.

But Binney, Loomis, and Wiebe didn’t owe their allegiance to Hayden—they owed it to the Constitution and the American people. And to be clear, Roark was the driver behind briefing and information requests, performing her mandated oversight role, a fact Hayden clearly resented—to the point that he was willing to defy her requests, as the IG report noted on page 2:

That defiance of a congressional request went further, as the DoD IG noted on page 99 of their report:

Hayden didn’t just stiff-arm Roark, he stiff-armed the entire committee.

On Incompetent Program Management and Priorities

Hayden makes clear in his memoir (page 20) that he wanted an orderly approach to the digital traffic problem, even if it meant taking a lot of time to do it:

Our program office had a logical progression in mind: begin with a concept definition phase, then move to a technology demonstration platform to show some initial capability and to identify and reduce technological risk. Limited production and then phased deployment would follow.

The DoD IG investigators viewed Hayden’s approach as ill-considered (p. 4):

In other words, Hayden had learned nothing from his mistake in sand-bagging THINTHREAD prior to 9/11, and he kept the original, full program on ice even after the loss of nearly 3,000 American lives and, in the months after the terrorist attacks, daily concerns about possible “sleeper cells” and follow-on attacks.

On THINTHREAD’s scalability

Hayden argues in his memoir (page 22) that THINTHREAD was not deployable across all NSA elements:

The best summary I got from my best technical minds was that aspects of Thin Thread were elegant, but it just wouldn’t scale. NSA has many weaknesses, but rejecting smart technical solutions is not one of them.

The DoD IG investigators disagreed, as this response to Hayden’s team at the time makes clear (p. 106):

On THINTHREAD’s effectiveness

On page 21 of his book, Hayden gives the reader the impression that THINTHREAD was not that good at actually finding real, actionable intelligence:

We gave it a try and deployed a prototype to Yakima, a foreign satellite (FORNSAT) collection site in central Washington State. Training the system on only one target (among potentially thousands) took several months, and then it did not perform much better than a human would have done. There were too many false positives, indications of something of intelligence value when that wasn’t really true. A lot of human intervention was required.

An analyst who had actually used THINTHREAD after its initial prototype deployment in November 2000 had a very different view (p. 16):

The second to last sentence is worth repeating: “The analyst received intelligence data that he was not able to receive before using THINTHREAD.” “Not able to receive” from any other NSA system or program. Had THINTHREAD been deployed broadly across NSA and focused on al-Qaeda, it could have helped prevent the 9/11 attacks, as the SARC team and Roark have repeatedly claimed.

On THINTHREAD’s legality

Hayden claims in his memoir (page 24) that NSA’s lawyers viewed THINTHREAD as illegal:

Sometime before 9/11, the Thin Thread advocates approached NSA’s lawyers. The lawyers told them that no system could legally do with US data what Thin Thread was designed to do. Thin Thread was based on the broad collection of metadata that would of necessity include foreign-to-foreign, foreign-to-US, and US-to-foreign communications. In other words, lots of US person data swept up in routine NSA collection.

In fact, as the SARC team noted in A Good American, THINTHREAD’s operational concept was just the opposite: scan the traffic for evidence of foreign bad actors communicating with Americans, segregate and encrypt that traffic, and let the rest go by. No massive data storage problem, no mass spying on Americans.

And the account the DoD IG investigators got from NSA’s Office of General Counsel (page 20) flatly contradicts Hayden’s memoir:

The “Directive 18” in question is United States Signals Intelligence Directive 18, which governs NSA’s legal obligations regarding the acquisition, storage, and dissemination of data on U.S. persons.

As you can probably imagine, I could cite many other instances of Hayden’s rewriting of the history of the THINTHREAD-TRAILBLAZER episode, but if you want as much of the story as is currently available, I suggest you read the entire (though still heavily redacted) version of the DoD IG report I obtained in July.

The Story Goes On

What’s remarkable is that Congress was well aware of Hayden’s misconduct and mismanagement while at NSA, but it still allowed him to become the head of my former employer, the CIA. Meanwhile, Roark’s personal example of integrity and fidelity to congressional oversight was rendered meaningless by her then-boss, House Intelligence Committee Chairman (and former CIA operations officer) Porter Goss (R-FL), who failed to fully investigate the THINTHREAD-TRAILBLAZER disaster, and by his Senate colleagues, who elected to confirm Hayden to head the CIA by a vote of 78-15. Hayden definitely got one thing very right: He knew he could snow House and Senate members and get away with it.

My FOIA lawsuit is ongoing, and additional document productions are—hopefully—just a few months away. To date, DoD continues to invoke the NSA Act of 1959 to keep many details of this saga—especially the amount of money squandered on TRAILBLAZER—from public view. For me, that’s actually a key issue in this case: testing whether NSA, utilizing the 1959 law, can indefinitely conceal waste, fraud, abuse, or even criminal conduct from public disclosure.

But the larger policy issue for me is laying bare, using a real-world case study, a prime example of a hugely consequential congressional oversight failure. The SARC team and Roark continue to argue that had THINTHREAD been fully deployed by early 2001, the 9/11 attacks could’ve been prevented. Drake asserts in A Good American that post-attack testing of THINTHREAD against NSA’s PINWALE database uncovered not only the attacks that happened, but ones that didn’t for various reasons.

And the SARC team and Roark maintain that THINTHREAD could have accomplished NSA’s digital surveillance and early warning mission without the kinds of constitutional violations seen or alleged with programs like the PATRIOT Act’s Sec. 215 telephone metadata program or the FISA Amendments Act Sec. 702 program, the latter currently set to expire at the end of this month and the subject of multiple legislative reform proposals.

None of this was examined by either the Congressional Joint Inquiry or the 9/11 Commission, which means the real history of how the 9/11 attacks happened has yet to be written.

Also pending are two Office of Special Counsel investigations into aspects of this episode—one involving Drake, and the other looking at former Assistant DoD IG John Crane, as I’ve written previously on this site. I’ll have more to say on all of this as documents become available or as events warrant.

French rocker Johnny Hallyday—the “French Elvis”—has passed away at 74. I do not know his music, but it appears that he was an innovator. His sounds were apparently new to French ears, and his willingness to adopt rock styles from the English-speaking world upset the French establishment. But the people adored his music, and he sold 110 million records. So Hallyday and the market got the better of France’s cultural rules.

Hallyday didn’t like French tax rules either. Here is what I wrote in Global Tax Revolution:

The solidarity tax on wealth was imposed in the 1980s under President Francois Mitterrand. It is an annual assessment on net assets above a threshold of about $1 million, and it has graduated rates from 0.55 percent to 1.8 percent. It covers both financial assets and real estate, including principal homes.

One of those hit by the wealth tax was Johnny Hallyday, a famous French rock star and friend of French president Nicolas Sarkozy. Hallyday created a media sensation when he fled to Switzerland in 2006 to avoid the tax. He has said that he will come back to France if Sarkozy “reforms the wealth tax and inheritance law.” Hallyday stated: “I’m sick of paying, that’s all … I believe that after all the work I have done over nearly 50 years, my family should be able to live in some serenity. But 70 percent of everything I earn goes to taxes.” A poll in Le Monde found that two-thirds of the French public were sympathetic to Hallyday’s decision.

France still has its wealth tax, but numerous other countries have scrapped theirs as global tax competition has heated up. As for Hallyday, he spent his last decade avoiding the wealth tax in Switzerland and Los Angeles.

The latest international academic assessment results are out—this time focused on 4th grade reading—and the news isn’t great for the United States. But how bad is it? I offer a few thoughts—maybe not that wise, but I needed a super-clever title—that might be worth contemplating.

The exam is the Progress in International Reading Literacy Study—PIRLS—which was administered to roughly representative samples of children in their fourth year of formal schooling in 58 education systems. The systems are mainly national, but some are sub-national, such as Hong Kong and the Flemish-speaking region of Belgium. PIRLS seeks to assess various aspects of reading ability, including understanding plots, themes, and other aspects of literary works, and analyzing informational texts. Results are reported both in scale scores, which can range from 0 to 1000 with 500 as the fixed centerpoint, and in benchmark levels of “advanced,” “high,” “intermediate,” and “low.” The 2016 results also include a first-time assessment called ePIRLS, which looks at online reading, but it covers only 16 systems and has no trend data, so we’ll stick to plain ol’ PIRLS.

Keeping in mind that no test tells you even close to all you need to know to determine how effective an education system is, the first bit of troubling news is that the United States was outperformed by students in 12 systems. Among countries, we were outscored by the Russian Federation, Singapore, Ireland, Finland, Poland, Norway, and Latvia. Some other countries had higher scores, but the differences were not statistically significant, meaning there is a non-negligible possibility the differences were a function of random chance. Also, between 2011 and 2016 we were overtaken by Ireland, Poland, Northern Ireland, Norway, Chinese Taipei, and England.

The second concerning finding is that, on average, the United States has made no statistically significant improvement since 2001. As the chart below shows, our 2016 result was not significantly better than our 2001 score. We appear to have made some strides between 2001 and 2011 but have clearly dipped since then.

A few thoughts:

  • It is tempting to attribute the gains between 2001 and 2011 to the No Child Left Behind Act, and it is certainly possible that the standards-and-accountability emphasis of the NCLB era helped to goose scores. It is, however, impossible to conclude that without looking at numerous other variables that affect test scores, including student demographics and such difficult-to-quantify factors as student motivation. More directly, NCLB was passed in very early 2002, so by 2006 it had had several years to start working. But that year reading scores went down for all but the lowest 25 percent of test takers. By 2011, the next PIRLS iteration, NCLB had become politically toxic.
  • The U.S. PIRLS results are broken down by various student attributes, including race/ethnicity. We need to be very careful about these blunt categories—they contain lots of subsets, and ultimately reduce to millions of individuals for whom race or ethnicity is just one among countless attributes—but they might hint at something of use. Most interesting, perhaps, is that scores for Asian Americans (591) beat the top-performing systems, the Russian Federation (581) and Singapore (576). This might suggest that there is something about culture—East Asian culture especially is thought to focus heavily on academic achievement, general American culture not so much—and that the education system itself might play a relatively small role in broad academic achievement.
  • Or maybe it’s not culture, or culture is wrapped up in lots of other things such as business success, or Asian Americans tend to arrive from wealthier backgrounds to begin with. As seen below, a simple correlation between median household income for each group and their 2016 score is almost perfect at 0.98. (A perfect positive correlation would be 1.0). This also suggests that the system does not have nearly the impact of other factors, but whether it is culture, wealth, or some intertwining of those and many other factors is unclear.

  • If the system does not matter, at least for standardized reading assessments, then what really hurts about U.S. education policy is that we spend more per pupil than almost any other country for which we have data but get pretty mediocre results. As of 2013 we spent $11,843 per elementary and secondary student, and in 2016 were beaten by several countries that spent less, including Latvia ($5,995), Poland ($6,644), Ireland ($9,324), and Finland ($9,579).
  • That factors such as culture might matter much more than spending or the system might explain why American school choice programs tend to produce only slightly better standardized test scores but at a fraction of the cost of public schools. Of course, there are also many things people want out of education that might be much more important to them than test scores—raising morally upright, compassionate, creative human beings, for instance—and freeing people to get those things might be the most important and compelling argument for school choice.
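For readers who want to see what sits behind a figure like the 0.98 cited above, it is just a Pearson correlation computed over a handful of (median household income, average score) pairs, one per group. Here is a minimal sketch of that computation; the income and score values below are hypothetical placeholders for illustration, not the actual figures behind the chart:

```python
# Minimal sketch of a group-level Pearson correlation.
# The data points are hypothetical, not the real PIRLS/income figures.
import statistics


def pearson(xs, ys):
    # Pearson r = covariance(x, y) / (stdev(x) * stdev(y)),
    # computed here from raw sums of squared deviations.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical median household incomes and average scores by group.
incomes = [81_000, 68_000, 50_000, 46_000]
scores = [591, 560, 536, 525]

r = pearson(incomes, scores)
print(round(r, 2))  # a value near 1.0 indicates a near-perfect positive correlation
```

With only four data points, of course, a high correlation is suggestive rather than conclusive, which is why the bullet above hedges about culture, wealth, and other intertwined factors.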

That’s it for PIRLS 2016 musings. On to the next standardized test results, and other things that may matter far more.

Political debate in the modern world is impossible without memorizing a list of euphemisms, and there is no shortage of public opprobrium for those who talk about certain topics without using them.  In addition to the many euphemisms that are accepted by virtually everybody, the political left has its own set of euphemisms associated with political correctness, while the political right has its own set linked to patriotic correctness.  Euphemisms tend to serve as signals of political-tribal membership, but also as means to convince ambivalent voters to support one policy or the other.  Violating the other political tribe’s euphemisms can even help a candidate get elected President.  This post explores why people use euphemisms in political debate and whether that effort is worthwhile. 

Euphemisms change over time.  Harvard psychologist Steven Pinker termed this linguistic evolution the “euphemism treadmill” and, over twenty years ago, argued that replacing old terms with new ones was likely inspired by the false theory that language influences thoughts, a notion that has long been discredited by cognitive scientists.  Pinker described how those who board the euphemism treadmill can never step off:

People invent new “polite” words to refer to emotionally laden or distasteful things, but the euphemism becomes tainted by association and the new one that must be found acquires its own negative connotations.

Few political debates are as riddled with euphemisms as immigration.  The accurate legal term “illegal alien,” which was once said without political bias and is now almost exclusively used by nativists, was replaced with “illegal immigrant,” which was supplanted by “undocumented immigrant” and, in rarer cases, “unauthorized immigrant.”  Goofy terms like “border infiltrator” and “illegal invader” have not caught on yet.  Proponents of the new term “undocumented immigrant” argue that nobody can be illegal, so the term “illegal immigrant” is inaccurate as well as rude.  Of course, nobody is undocumented either; they simply lack the specific documents required for legal residency and employment.  Many have driver’s licenses, debit cards, library cards, and school identifications, which are useful documents in specific contexts but not nearly so much for immigration.  “Misdocumented immigrant” would be better if the goal were accuracy, but the goal seems to be to change people’s opinions on emotional topics by changing the words they use.

In the immigration debate, the euphemism treadmill can sometimes run in reverse and actually make political language harsher.  This “cacophemism cliff” turned “birthright citizenship” into “anchor baby” and “liberalized immigration” into “open borders.” 

In the long run, stepping onto the euphemism treadmill can seem like a fool’s errand.  As Pinker explains, people’s feelings toward the replaced term are merely transferred to the euphemism: we use words to describe concepts we already have; we do not use words to invent new concepts.  The concept-to-word cognitive production process only affects the sound of the output, not its meaning.



Framing: “Undocumented Immigrants” or “Illegal Aliens”

Not all is lost for exercisers on the euphemism treadmill.  They just have to lower their expectations and be satisfied with framing political discourse, rather than pursuing the quixotic goal of changing concepts with words.  Framing is a psychological technique that can influence the perception of social phenomena, a political or social movement, or a leader.  Research in political psychology has shown that framing works by making certain beliefs accessible in memory upon exposure to a particular frame. Once certain beliefs are activated through framing, they shape all subsequent information processing. An example of framing’s power to affect perception is that opinions about a Ku Klux Klan rally vary depending on whether it is framed as a public safety issue or a free speech issue.

Framing can steer public opinion toward opposite ends of the political spectrum. The “undocumented immigrant” frame will invoke different beliefs from the “illegal alien” frame. Specifically, the former describes the issue as a bureaucratic government problem afflicting ordinary immigrants.  The latter frames it as a law-and-order problem with foreign nationals. These two euphemisms, although meant to represent the same concept, do so in different ways that convey different messages and will pull the receivers of the frames in different directions.  Most people feel sympathy toward those caught up in a cruel bureaucratic morass but are much less sympathetic to lawbreakers.

Following this logic, a policy proposal titled “path to citizenship for undocumented immigrants” is going to attract more support than “amnesty for illegal aliens.” Both “path to citizenship” and “amnesty” here mean legalization. However, the term “legalization” implies that there has been something illegal about that group of people, an association many proponents want to avoid. “Path to citizenship” is a much softer frame that invokes positive emotions.  On the other side of the debate, “legalization” has been replaced with “amnesty,” which has a more negative meaning.  Proponents of the term “amnesty” are emphasizing that it is a pardon for an offense rather than a fix for a bureaucratic problem. “Path to citizenship” is also sometimes replaced by “earned legalization” or “comprehensive immigration reform.” These two expressions bring up considerations about legality and reform, both of which are far more cognitively charged than “path to citizenship” and therefore less likely to be used by supporters of such policies.

Dog Whistles and the Threat Frame: “Extreme Vetting,” “Illegal Invader,” and “Anchor Baby”

Euphemisms can help legitimize otherwise prejudiced rhetoric. Consider “extreme vetting,” a phrase that has been called a euphemism for “discrimination against Muslims.” Using this particular euphemism accomplishes two goals. First, it separates the speaker from blatant discrimination based on religion or national origin, which matters because research in political science has shown that people are increasingly sensitive to social desirability and unwilling to express bluntly prejudiced beliefs now that doing so is less socially acceptable. Masking such prejudice under a neutral euphemism is useful in that regard. Second, it still conveys the overall message of hostility to the audience that is receptive to such rhetoric – a device also known as a dog whistle. You can thus signal your own beliefs and connect with an audience holding similar beliefs without coming across as bluntly prejudiced.

A somewhat similar idea lies behind the term “illegal invader,” which goes even further by invoking a threat frame. Threats can be powerful tools because, once threatened, people tend to overestimate the risk and support policies that minimize the threat no matter how small it actually is. Threat frames negatively bias listeners against the group in question.

An important effect of threat-frame euphemisms is that they can dehumanize certain groups and attach negative attitudes to them. Consider “anchor baby.” The term stands for children born to foreign nationals who are violating their immigration status while on U.S. soil; such children have automatic citizenship under the U.S. Constitution. They are called “anchor babies” to highlight the idea that their parents use them to secure their stay in the country, although that rarely actually happens. The term dehumanizes both the parents and the children by describing these individuals through association with an inanimate object, the “anchor,” and by implying that the children exist only to resolve their parents’ problem with immigration law. Threat frames also extend to other criminal activity attributed to immigrants.

There are other indirect expressions that are not strictly euphemisms. Consider “catch and release” and “sanctuary city.” “Catch and release” describes the act of apprehending illegal immigrants and subsequently releasing them. A “sanctuary city” is a city that limits its cooperation with federal immigration enforcement. Both terms are used by both sides of the immigration debate and have no positive or negative substitute. The problem is that both expressions might as well belong to the animal kingdom, which can be demeaning and humiliating when applied to people. “Catch and release” brings up associations with fishing and hunting, thus dehumanizing those who are caught and released. Similarly, the word “sanctuary” is frequently used to describe a wildlife refuge. Like “anchor baby,” these expressions have a dehumanizing character: although not meant to do any harm and not created by political elites, they can generate unfavorable attitudes.

Euphemisms as Subliminal Primes

Euphemisms are effective as subliminal primes because they are short, compact expressions. According to research in political psychology, priming is an instrument that activates preconscious expectations. Priming is similar to framing but has important differences: it invokes an automatic reaction without the reader having to read through a whole article. Even a split-second glimpse at a title has a priming effect. As opposed to frames, primes require less time and less cognitive effort to shape public opinion, and they color the perception of all information that follows. Consider the hypothetical article titles “Birthright Citizenship for Children of Undocumented Immigrants” versus “Illegal Alien Anchor Babies.” Although the two expressions technically have a similar meaning, they can subconsciously prime the reader and bias all of his subsequent information processing. The reader who encounters the first is likely to be primed with a pro-immigration bias, whereas the reader who encounters the second will acquire a bias in the opposite direction.

Euphemisms as primes are particularly meaningful for citizens who are ambivalent about immigration. Consider a relatively more liberal person who is undecided on the issue. By encountering a random piece of news that uses “undocumented immigrants” instead of “illegal aliens,” such an ambivalent voter is more likely to form a pro-immigration bias at an early stage because of his greater innate support for fairness, which is offended by the unequal distribution of documents. A relatively more conservative person who is undecided about immigration, by contrast, is far more likely to be swayed by the term “illegal alien” because of his greater support for order and structure, which is offended by illegality.


This post explores the theoretical basis of using euphemisms as tools of influence. Although there is some excellent research into these issues as they relate to immigration, the field is crying out for more experimental and empirical inquiry. Laboratory experiments with human subjects could confirm the effectiveness of specific euphemisms as primes or frames. Since such studies are often criticized for their limited external validity, a follow-up study that combines content analysis of relevant media with opinion polls tracking changes in attitudes could also be useful.

An underexplored possibility is how euphemisms and frames affect political debate by spreading confusion. People accustomed to the term “illegal immigrant” to describe foreign-born persons unlawfully residing in the United States might initially fail to react as negatively to the term “undocumented immigrant” merely because they don’t know what it means. As soon as they learn what it means, however, the negative feelings they associate with “illegal immigrant” would probably attach to “undocumented immigrant” as well. Another is how euphemisms build walls around political tribes and prevent them from talking to each other, thus deepening policy divisions that prevent middle-ground solutions.

Special thanks to Jen Sidorova for her initial rough draft as well as her invaluable insights and research.  

Some years ago I published a paper on the banking theory and policy views of the important twentieth-century economist Friedrich A. Hayek, entitled “Why Didn’t Hayek Favor Laissez Faire in Banking?”[1] Very recently, while working on a new paper on Hayek’s changing views of the gold standard, I discovered an important but previously overlooked passage on banking policy in a 1925 article by Hayek entitled “Monetary Policy in the United States After the Recovery from the Crisis of 1920.” I missed the passage earlier because the full text of Hayek’s article became available in English translation only in 1999, the same year my article appeared, in volume 5 of his Collected Works. Only an excerpt had appeared in translation in Money, Capital, and Fluctuations, the 1984 volume of Hayek’s early essays.[2]

Hayek wrote the article in December 1924, very early in his career. In May 1924 he had returned from a post-doctoral stay in New York City and had begun participating in the Vienna seminar run by Ludwig von Mises. It is safe to say that the passage I am about to quote reflects Mises’ influence, since the article cites him, and in many ways takes positions opposite to those Hayek had taken in an earlier article that he wrote while still in New York.

The main topic of the 1925 article is the Federal Reserve’s policies in the peculiar postwar situation in which, as Hayek put it, the US “emerged from the war … as the only country of importance to have retained the gold standard intact.” The US had received “immense amounts” of European gold during and since the war (Hayek documents this movement with pertinent statistical tables and charts), and now held a huge share of the world’s gold reserves — more gold reserves than the Fed knew what to do with. European currencies, having left the gold standard to use inflationary finance during the First World War, and not having yet resumed direct redeemability, were for the time being pegged to the gold-redeemable US dollar. This was a new and unsettled “gold exchange standard,” unlike the prewar classical gold standard in which major nations redeemed their liabilities directly for gold and held their own gold reserves. Rather than delve into what Hayek had to say about that topic, I want to convey what he said about banking.

In section 8 of the article (pp. 145-47 in the 1999 translation), Hayek gives a favorable evaluation of free banking as against central banking. Having overlooked this passage, I had previously thought that Hayek first addressed free banking in his 1937 book Monetary Nationalism and International Stability. Hayek does not embrace free banking as an ideal, first-best system, because he thought it prone to over-issue (as I discussed in my 1999 article based on Hayek’s other writings). But he criticizes the Federal Reserve Act for relaxing rather than strengthening the prior system’s constraints against excess credit expansion by American commercial banks.

Hayek begins the passage with a caution that the intended result of creating a central bank, when the intention is to avoid or mitigate financial crises, need not be the actual result:

It cannot be taken for granted that a central banking system is better suited to prevent disturbances in the economy stemming from excessive variations in the volume of available bank credit than a system of independent and self-reliant commercial banks run on purely private enterprise (liquidity, profitability) lines.

By standing ready to help commercial banks out of liquidity trouble, central banks give “added incentive … to commercial banks to extend a large volume of credit.” In modern terminology, a lender of last resort creates moral hazard in commercial banking. A free banking system (my phrase, not his) restrains excessive credit creation by fear of failure:

In the absence of any central bank, the strongest restraint on individual banks against extending excessive credit in the rising phase of economic activity is the need to maintain sufficient liquidity to face the demands of a period of tight money from their own resources.

Hayek’s belief that the pre-Fed US system did not restrain credit creation firmly enough is understandable in light of the five financial panics during the fifty years of the federally regulated “National Banking system” that prevailed between the Civil War and the First World War. He might have noted, however, that the National Banking system was a system legislatively hobbled by branching and note-issue restrictions rather than a free banking system or a system “run on purely private enterprise lines.”[3] The Canadian banking system, lacking those restrictions, did not experience financial panics during this period (or even during the Great Depression) despite having an otherwise similar largely agricultural economy.

Despite the flawed character of the pre-Fed system, Hayek judged that the Federal Reserve Act made the situation worse rather than better by loosening the prevailing constraints against unwarranted credit expansions:

Had banking legislation had the primary goal to prevent cyclical fluctuations, its main efforts should have been directed towards limiting credit expansion, perhaps along the lines proposed — in an extreme, yet ineffective way — by the theorists of the “currency school,” who sought to accomplish this purpose by imposing limitations upon the issuing of uncovered notes. … Largely because of the public conception of their function, central banks are intrinsically inclined to direct their activities primarily towards easing the money market, while their hands are practically tied when it comes to preventing economically unjustified credit extension, even if they should favour such an action. …

This applies especially to a central banking mechanism superimposed on an existing banking system. … The American bank reform of 1913-14 followed the path of least resistance by relaxing the existing rigid restraints of the credit system rather than choosing the alternative path …

Thus the Fed was granted the power to expand money and credit, a power that “was fully exploited during and immediately after the war,” not waiting for a banking liquidity crisis. The annual inflation rate in the United States, as measured by the CPI, exceeded 20 percent in 1917, and remained in double digits for the next three years (17.5, 14.9, and 15.8) before the partial reversal of 1921. Hayek (p. 147) observed ruefully “how large an expansion of credit took place under the new system without exceeding the legal limits and without activating in time automatic countermeasures forcing the banks to restrict credit.” He concluded: “There can be no doubt that the introduction of the central banking system increased the leeway in the fluctuations of the volume of bank credit in use.”
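Compounding the annual rates just cited shows how large the cumulative wartime inflation was. The sketch below is an approximation only: it treats 1917 as exactly 20 percent, although the text says only that the rate exceeded that figure.

```python
# Cumulative effect of the 1917-1920 U.S. CPI inflation rates cited above.
# The 20 percent figure for 1917 is an approximation ("exceeded 20 percent").
annual_rates = [0.20, 0.175, 0.149, 0.158]   # 1917, 1918, 1919, 1920

price_level = 1.0
for r in annual_rates:
    price_level *= 1 + r                      # compound each year's inflation

print(round(price_level, 2))  # ~1.88: prices nearly doubled before the 1921 reversal
```

On these figures, the price level rose by nearly 90 percent in four years, which gives a sense of the scale of the credit expansion Hayek describes.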

Here Hayek reminds us that a less-regulated banking system does not need to be perfect to be better than even well-intentioned heavier regulatory intervention. Good intentions do not equal good results in bank regulation.


[1] Lawrence H. White, “Why Didn’t Hayek Favor Laissez Faire in Banking?” History of Political Economy 31 (Winter 1999), pp. 753-769. I also published a companion paper on his monetary theory: Lawrence H. White, “Hayek’s Monetary Theory and Policy: A Critical Reconstruction,” Journal of Money, Credit, and Banking 31 (February 1999), pp. 109-20.

[2] F. A. Hayek, “Monetary Policy in the United States after the Recovery from the Crisis of 1920,” in Good Money Part I: The New World, ed. Stephen Kresge, vol. 5 of The Collected Works of F. A. Hayek (Chicago: University of Chicago Press, 1999); F. A. Hayek, Money, Capital, and Fluctuations: Early Essays, ed. Roy McCloughry (Chicago: University of Chicago Press, 1984).

[3] See Vera C. Smith, The Rationale of Central Banking (Indianapolis: Liberty Fund, 1990), chapter 11; and George A. Selgin and Lawrence H. White, “Monetary Reform and the Redemption of National Bank Notes, 1863-1913,” The Business History Review 68, no. 2 (1994), pp. 205-43.

[Cross-posted from]

Over a decade ago, James Hamilton was convicted of a felony in Virginia, for which he served no jail time. Since then, the state of Virginia has restored all of his civil rights, including the right to possess firearms, and Hamilton has worked as an armed guard, firearms instructor, and protective officer for the Department of Homeland Security. Yet despite his never having exhibited any violent tendencies and his stable family life, the state of Maryland, where Hamilton now resides, forbids him from possessing firearms because of that decade-old Virginia conviction.

Hamilton challenged Maryland’s absolute prohibition on the possession of firearms by felons as applied to him, arguing that, while there may be reasons for forbidding some felons from owning firearms, the prohibition made no sense when applied to him, a person who committed a non-violent felony over a decade ago. The Fourth Circuit, however, decided that Hamilton was not eligible to bring an as-applied challenge to Maryland’s law, leaving states in the Fourth Circuit wide latitude to abuse the constitutional rights of a huge class of citizens and leaving those citizens with no way to vindicate their rights.

On petition to the Supreme Court, Cato submitted an amicus brief arguing for the Court to hear Hamilton’s case. We argued that, by deferring to state legislatures in defining who is and is not entitled to Second Amendment protection, the Fourth Circuit allowed Maryland to define the scope of a constitutional right, in direct contravention of Supreme Court precedent, specifically Heller. In general, lower courts have shown tremendous zeal in treating the Second Amendment as a second-class right—even after Heller and McDonald—and those concerns are magnified here, where the Fourth Circuit ruled that a person cannot even bring an as-applied challenge to a law that burdens the exercise of a constitutional right. The Fourth Circuit justified its position by quoting Supreme Court language referring to felon-in-possession bans as “presumptively lawful.” But that is not how the Fourth Circuit has treated this law: a restriction that cannot be defeated is not “presumptively lawful”; it is absolutely and inviolably lawful. We therefore urged the Supreme Court to step in and rein in this abuse by the lower court. The Supreme Court declined.

Hamilton is another in a long line of Second Amendment cases that the Supreme Court has refused to hear, including one just last week challenging Maryland’s “assault weapons” ban. Hamilton is particularly unfortunate because, if taken far enough, states could deny large portions of their citizens the right to keep and bear arms without any way to remedy their loss. Hamilton’s case was a great vessel for the Supreme Court to clarify Heller and McDonald and finally force the circuit courts to make Second Amendment decisions with some modicum of consistency. A decade-old, non-violent, non-firearm-related felony for which Hamilton served no time is no reason to strip him of the basic human right of effective self-defense.

There are good reasons to believe that fraud took place in Honduras’ presidential election. The Economist did a statistical analysis of the election results and found “reasons to worry” about the integrity of the vote—although they were not conclusive. A report from the Organization of American States Observation Mission points out “irregularities, mistakes, and systemic problems plaguing this election [that] make it difficult… to be certain about the outcome.”

At the heart of the controversy is how the results of the presidential election shifted dramatically after a blackout in the release of information that lasted nearly 38 hours. A first report released by the Electoral Tribunal (TSE) on Monday 27 November at 1:30 am (ten hours after polls closed and after both leading contenders had declared themselves the winners) showed opposition candidate Salvador Nasralla leading incumbent president Juan Orlando Hernández 45.17% versus 40.21%, with 57.18% of tally sheets from polling stations counted.

Then came the blackout, during which officials from Hernández’s National Party argued that the results would be reversed once the release of information resumed. Their claim was that the tally sheets initially reported came from polling stations in urban areas, whereas the National Party’s strongholds are in rural areas. Indeed, when the TSE began releasing information again on Tuesday afternoon, Nasralla’s five-point lead steadily declined and then disappeared. With almost all votes counted, Hernández is now ahead by 1.6 points.
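A back-of-the-envelope calculation shows how steep the required reversal was. This is a sketch only: it assumes the tally sheets in the first report carried roughly equal vote weight and uses the approximate final 1.6-point lead.

```python
# Rough check of the margin Hernández needed among late-reported ballots.
# Assumptions (not from official data): counted shares are weighted by the
# fraction of tally sheets processed, and the final lead is 1.6 points.
counted = 0.5718           # share of tally sheets in the first TSE report
nasralla_early = 0.4517    # Nasralla's share of the early count
hernandez_early = 0.4021   # Hernández's share of the early count
final_lead = 0.016         # Hernández's final lead (share of all votes)

early_gap = (nasralla_early - hernandez_early) * counted  # Nasralla's lead, as share of total
remaining = 1 - counted

# Margin Hernández needed among the remaining ~43 percent of ballots:
needed_margin = (final_lead + early_gap) / remaining
print(f"{needed_margin:.1%}")  # about a 10-point margin in late-reporting areas
```

On these assumptions, Hernández would have had to win the uncounted ballots by roughly ten points. That is arithmetically possible if rural strongholds reported late, but it is exactly the kind of pattern that a tally-sheet verification could confirm or refute.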

Other irregularities documented by the OAS include missing tally sheets, opened and incomplete containers with electoral material from polling stations, and undisclosed criteria for processing the ballots that arrived at the TSE collection center.

What now? The opposition is demanding a full Florida-style recount. This would prolong the uncertainty about who won, but given the extent of the irregularities, it seems a fair request. However, some officials from Nasralla’s camp also claim that the election has been irretrievably tainted. Nasralla himself proposed a run-off vote against Hernández, but the constitution does not allow for such a possibility. The real danger is that the opposition will reject anything short of a repeat of the election, even if there is a transparent recount. A repeat election, expensive as it would be, would also set an ominous precedent for contesting close election results in the future.

It is also fair to say that Nasralla’s camp is not likely to concede defeat under any circumstances. His left-wing coalition—conspicuously named the “Opposition Alliance against the Dictatorship”—was going to cry foul if Nasralla was defeated, regardless of the margin. He also reneged on a signed pledge to respect the result emanating from the TSE and threatened to continue the chaos brought about by his supporters “until the country comes to an end.” Instead of being a responsible actor during the crisis, Nasralla is increasingly giving the impression that he does not want an institutional solution to it. For example, Nasralla has yet to file a formal challenge to the election, despite the fact that a legal deadline was extended until Friday in order to give his Alliance more time to do so. He has not presented evidence of manipulated tally sheets either.

There are no easy ways out of this quagmire, and it is likely that one side will end up feeling cheated. Still, a solution needs to be worked out: the TSE should facilitate the verification of all 18,103 tally sheets and, if anomalies arise, allow a recount of those with discrepancies. This process should be closely monitored by observers from the Organization of American States and the European Union. It is their task to serve as ultimate arbiters and to certify whether the conditions for a transparent verification and recount process have been met.

A post-election institutional arrangement could be part of the solution: Since Honduras’ Constitutional Court struck down the prohibition on presidential reelection, the Congress should establish non-consecutive reelection (such as in Chile, Costa Rica, and Uruguay). In addition, a run-off should be introduced for presidential elections. Finally, the appointment of the TSE justices should be taken away from Congress and given to the Supreme Court in order to guarantee their impartiality.  

The federal government has suffered from wasteful spending since the beginning. One of the biggest bureaucracies in the 19th century was the Bureau of Indian Affairs (BIA). An official history says, “the Indian Bureau operated under constant and often well-founded criticism of corruption and inefficiency in its handling of the millions of dollars in supplies purchased each year for the reservations.”

Senator James Lankford’s new study on wasteful spending (“Federal Fumbles”) indicates that BIA mismanagement persists, with waste and failure in its housing, education, and health care programs. I uncovered the same problems with the BIA.

“Fumbles” identifies wasteful programs across the government. The government spent $745 million on an Air Force control center that was scrapped, $85,000 for a music conductor’s birthday party, $148,950 for Alabama’s birthday party, $150,382 to document the Domaaki language in Pakistan, $1 billion for a low-value trolley in San Diego, $17 billion on erroneous EITC subsidies, and $1 billion on federal agency advertising.

Spending on such dubious activities represents a small share of the $4 trillion federal budget. But Lankford’s examples illustrate the broader overspending disease that afflicts Congress and the executive branch, which I discuss here, here, and here. Lankford’s projects are not just random failures, but rather stem from structural features of the government that induce overspending.

Senator Lankford will discuss his report at a Cato forum on Capitol Hill tomorrow at noon. Romina Boccia of Heritage, Steve Ellis of TCS, Ryan Bourne of Cato, and I will comment on the report, discuss the budget situation, and examine prospects for spending cuts. Federal spending is not a free lunch, but Cato forums are. All are welcome.

I visited the Patagonia web site looking for some Christmas presents yesterday and learned that “the president stole my land.” How horrible! So I looked into it and discovered that President Trump took federal land that was managed by a particular set of federal agencies under a particular set of restrictions and changed it into federal land managed by the very same federal agencies under a slightly different set of restrictions. Not to jump on Patagonia, whose clothing I’ve always enjoyed, but where’s the theft in that?

Of course, what Trump did was reverse changes by Presidents Clinton and Obama, who first imposed the slightly different set of restrictions in 1996 (Clinton for the Grand Staircase-Escalante) and 2016 (Obama for the Bears Ears). I can say with absolute certainty that, when they made those changes in 1996 and 2016, many people in Utah said, “the president stole our land.”

Supposedly, one issue is vandalism and destruction to Native American antiquities and artifacts. But such vandalism and destruction was equally illegal (under laws that are equally difficult to enforce) under both sets of restrictions, so claims that Trump’s decision opens the areas to more looting or devastation are red herrings.

Another issue is energy, but that isn’t very important either. Supposedly, there is coal in the Grand Staircase-Escalante area, but at the moment the United States has a surplus of coal and declining demand. Meanwhile, the state of Utah admits that Bears Ears has “very little energy potential.” So why all the fuss?

We call the federal lands public lands, but in fact it would be better to call them political lands because decisions about their use are made by the political system, not by the public. Yes, the agencies pretend that the public has a say, but the reality is that those who want to have a say have to build up political power to do it. This creates an interesting set of incentives.

First, to build political power, you have to convince people there is a crisis. Thus, a small rule change becomes “the president stole your land.” Second, to keep that political power, you have to compete with other groups who are nominally your allies, and the best way to compete is to be more radical than they are. Anyone who compromises is a sell-out and risks losing support to other more radical groups.

As a result, the political system promotes polarization and a winner-take-all mentality. There’s no need to decide whether any particular acre of land is most suitable for wilderness, grazing, timber, or mining. Instead, just demand that all land be dedicated to your favorite use.

President Trump’s reclassification of some Utah lands may lead to only minor changes in on-the-ground management, but he made them to cater to an important political constituency. The shrill response from environmental groups (and some recreation businesses such as Patagonia) caters to another political constituency, some of whom may be secretly happy to see Trump take this action so they can use it as a fund-raising and membership-building tool.

This is very different from the market system, which promotes cooperation and compromise. When a Wisconsin dairy decides to turn its milk into cheese instead of yogurt, you don’t see the National Yogurt Society sending out impassioned emails claiming “the dairy stole your yogurt.” If the price of yogurt goes up, some dairy or another will redirect some milk to yogurt production. In effect, every member of the public has a say in how milk will be used every time they buy (or don’t buy) a dairy product, which in a real sense makes markets more democratic than the political system.

Some will argue that markets can’t work for natural landscapes because they are a finite resource. But federal lands make up just 27 percent of the nation, and so long as the federal government gives away recreation and other “natural” uses of the land, private landowners have no incentive to provide such uses. If managers were allowed to charge market rates for recreation, private landowners would have an incentive to provide similar natural experiences, thus greatly increasing the land available for such uses.

Although many of my Cato colleagues would say the best way to transfer federal lands from the political system to the market system is to privatize them, I’ve argued that we can achieve the same results with less controversy by turning them into fiduciary trusts that are funded out of their own revenues, receiving no tax dollars. If fully carried out, this could take care of problems related to wildfire, endangered species, and a wide variety of other issues.

A less radical solution is the creation of collaborative partnerships that include interest groups and the agencies themselves. These are fragile (one collapsed on the death of just one partner) and often depend on continuing federal subsidies for success, so I am not as enthused about them. But they could be a short-term solution for the southern Utah monument lands.

Those who truly care about the federal lands would seek a better system than the one we have now for managing those lands. Those who seek to perpetuate the political system of management are often more interested in promoting their organizations than in improving on-the-ground management.

A headline today in the Washington Post is “Voter Database Alarms Experts.” The addition of another big government database alarms me as well. The other day I noted the huge vulnerability created by the income tax and resulting IRS data horde. And then there are federal data stockpiles for health care, security, and many other things.

Now a presidential commission apparently wants to create another juicy target for hackers.

From the Washington Post story:

More than a half-dozen technology experts and former national security officials filed an amicus brief Tuesday urging a federal court to halt the collection of voter information for a planned government database.

Former national intelligence director James R. Clapper Jr., one of the co-signatories of the brief, warned that a White House plan to create a centralized database containing sensitive information on millions of American voters will become an attractive target for nation states and criminal hackers.

… the brief focuses on the security implications of aggregating and housing sensitive information, such as names, addresses, party affiliation and partial social security numbers, in one central location, without adequate security and privacy safeguards. “A large database aggregating [personally identifiable information] of millions of American voters in one place, as the Commission has compiled and continues to compile, would constitute a treasure trove for malicious actors,” the signatories wrote.

The brief states that the commission does not appear to have established rules or procedures defining who gets access to the database or how it should be actively protected.

… Clapper and his co-signatories also said that the database will be situated on a re-purposed White House system, and not within the Department of Defense, making the information even more vulnerable to theft. “Aggregating a comprehensive and official set of such data onto one high-profile, widely publicized server maintained by the White House may reduce the technical and practical barriers to a foreign adversary acquiring such information and making use of it without detection,” the brief said.

A new Government Accountability Office (GAO) report claims that, among other issues, the Border Patrol is not efficiently deploying agents to maximize the interdiction of drugs and illegal immigrants at interior checkpoints. I wrote about this here. These checkpoints are typically 25 to 100 miles inside the United States and are part of a “defense in depth” strategy intended to deter illegal behavior along the border. Border Patrol is making suboptimal choices with scarce resources when it comes to enforcing laws along the border. A theme throughout the GAO report is that Border Patrol does not have enough information to manage checkpoints efficiently. Contrary to the GAO’s findings, however, poor institutional incentives better explain Border Patrol inefficiencies, and the lack of information is itself a result of those incentives. More information and metrics could actually worsen Border Patrol efficiency.

Inefficient Border Patrol Deployments

Border Patrol enforces laws in a large area along the border with Mexico. It divides the border into nine geographic sectors, and each sector into stations that are further subdivided into zones. Some of these are “border zones” that lie directly along the Mexican border, while the remainder are “interior zones” that are not along the border. The GAO reports that this organization allows officials at the zone level to deploy agents in response to changing border conditions and intelligence.

The GAO states that Headquarters deploys Border Patrol agents to border sectors based on threats, intelligence, and the flow of illegal activity. The heads of each sector then allocate agents to specific stations and checkpoints based on the above factors as well as local ones such as geography, climate, and the proximity of private property. The heads of those stations and checkpoints then assign specific shifts to each agent. The time it takes for a Border Patrol agent to respond to reported activity, their proximity to urban areas where illegal immigrants can easily blend in, and road access all factor into these deployment decisions. 

All of the above factors that managers and supervisors consider for deployment are reasonable but it is still a management black box. How much does each of these factors matter in determining deployments? Does the relative importance of each factor shift over time or between sectors? How can we tell if one set of decisions is consistently better than another set? 

The GAO and other organizations always suggest the same solution to illuminate the black box of Border Patrol agent management decisions: more information. The Border Patrol has about 19,500 agents, about 43 percent of whom can be deployed in the field at any given time, and 143 checkpoint locations along the Southwest border. The GAO has access to extensive data on Border Patrol deployments from the Border Patrol Enforcement Tracking System (BPETS), GPS coordinates for some enforcement operations, seizure information, and the time use of Border Patrol agents by sector when they are on duty. Border Patrol likely has more information available that GAO has not analyzed.  

Yet more information has never been sufficient to gauge the efficiency of agent deployment at checkpoints. The recent GAO report notes that “checkpoints’ role in apprehensions and seizures is difficult to measure with precision because of long-standing data quality issues” that the GAO first complained about in 2009. Checkpoints did not consistently report apprehensions: some counted any apprehension within a 2.5-mile radius of a checkpoint as made by the checkpoint, while others used different radii and reporting standards. As a result, the number of apprehensions and seizures attributable to checkpoints cannot be determined. The data reporting “issues continue to affect how Border Patrol monitors and reports on checkpoint performance results” despite several memoranda that were supposed to remedy the data collection issues. Border Patrol finally created the Checkpoint Program Management Office (CPMO) in 2016, which is supposed to remedy data collection inconsistencies.

The GAO report calls for more detailed information, such as distinguishing whether an illegal immigrant detention occurs “at” rather than “around” a Border Patrol checkpoint. GAO asks Border Patrol to implement internal controls to ensure data accuracy, consistency, and completeness, overcoming problems like supervisors who are not required to update BPETS when their actual deployments differ from those planned and recorded in BPETS. Border Patrol should also study the impact of checkpoints on local communities when considering agent deployment. Finally, GAO reiterates its call for an accurate “workplace planning needs assessment” to make sure that checkpoints are manned and operated properly. How Border Patrol is supposed to integrate these new metrics with existing metrics is a mystery.

The Incentive to Patrol the Border

Government law enforcement agencies have difficult or, in many cases, impossible jobs. Interrupting supply and demand by stopping the flow of unlawful drugs and illegal immigrants into the United States is as Sisyphean a task as that of the Soviet criminal investigators who attempted to stop the black market in gasoline or food. More information won’t remedy the agent deployment problems at Border Patrol, CBP, or any other government agency. The problem with these agencies is not a lack of information but bad incentives.

The performance of government agents isn’t measured by profit and loss, as it is in the private sector, but by political factors. In the language of economics, Border Patrol faces a principal-agent problem. Principals are the owners and agents work for them. That sounds simple enough, but principals and agents have different incentives. For instance, principals in private enterprise want to maximize profit while many of their agents (workers) want compensation for doing as little as possible, or want to divert resources into their own pockets. Principals thus have to structure compensation and management in ways that align the incentives of workers with those of owners, whether through profit sharing, other financial incentives, or myriad other means of mitigating these problems. Information is vital to mitigating a principal-agent problem (it can never be fully solved), but more information without the incentive to use it wisely is wasted.

The Border Patrol principals are the politicians who ultimately determine its budget and appoint the heads of the organization. The incentive of the principals is to stay in elected office by winning elections. Border Patrol employees are the agents who supposedly work for the principals. Satisfying political constituencies has little to do with actually enforcing the law as written. For instance, enforcement of immigration laws typically declines during times of economic growth because businesses demand more workers and labor unions complain less about illegal immigrant workers. No lobbying of Congress is necessary, merely the reactions of Border Patrol employees to changing economic circumstances in anticipation of what they think politicians want. 

What those politicians want changes over time based on what they think the electorate wants. In the past, economic growth was a better predictor of immigration enforcement. Now, immigration-induced changes in local demographics, cultural complaints, and the idea that immigrants and their descendants will vote against incumbent political parties also drive support for immigration enforcement. Thus, removing illegal immigrants is the latest iteration of the Curley Effect. Countering this trend are pro-immigration local policies like Sanctuary Cities, as well as states like Illinois and California that restrict local police cooperation with federal law enforcement.

More Information Can Worsen Management

The call for more information and better metrics for measuring border security is well intentioned but it can also backfire. Some information is required to make accurate decisions but, beyond a certain point, too much information can produce information overload, whereby decisions become less accurate as the decision maker learns more (Figure 1). Information beyond the overload point will confuse a decision maker, affect his or her ability to set priorities, and worsen recall of prior information. A fundamental concept in economics is scarcity, which occurs when there is not enough supply of a good to satisfy all demand at a price of zero. Information overload is a reminder that human attention span, information processing capacity, and accurate decision-making ability are also scarce resources.

Figure 1

Information Overload as the Inverted U-Curve

Source: Martin J. Eppler and Jeanne Mengis.

Information overload can take several forms. Some scholars emphasize how much time it takes to absorb new information, which can diminish the accuracy of decisions that require timely action. That case is most similar to the timeliness of intelligence reports in guiding Border Patrol agent deployment. The value of most intelligence depreciates rapidly and, if it is accurate, must be acted upon quickly to have an effect. Other scholars focus on the quality of information, which is difficult to measure without first absorbing it and comparing it to other information. Estimates of the size of black markets, a crucial metric for Border Patrol, are fraught with errors, and it is nearly impossible to tell which one is correct. Tasks that are recurring routines produce less information overload than more complex and varied tasks. As mentioned above, the organizational design of a firm is another important factor that influences information overload.

Smugglers and illegal immigrants compound the problem of information overload as they change their behavior in response to Border Patrol policies. Smugglers and illegal immigrants rarely want to be apprehended so they shift away from patrols or areas where there is more enforcement. In the mid-2000s, illegal Mexican border crossers moved east from California and west from Texas into Arizona because of border security. More enforcement in Arizona after 2010 then shifted illegal immigrant entry attempts back east toward Texas. Their constant movement and reaction to Border Patrol and immigration enforcement generally creates more complexity and information that the agency must process. 

The symptoms of information overload are a lack of perspective, cognitive strain and stress, a greater tolerance for error, low morale, and the inability to use information to make a decision. Those symptoms are all common at Border Patrol and its parent organization, the Department of Homeland Security. As for the lack of perspective, the chaos below the border is treated as a supposed “existential threat.” Meanwhile, tolerance for performance and discipline problems among Border Patrol personnel has festered for over a decade, producing numerous errors of all kinds. Morale has historically been low in Border Patrol and has only risen recently due to the election of President Trump.

One common reaction to information overload is that decision makers become highly selective, ignore vast amounts of information, and cherry-pick the information that confirms their biases. Information never speaks for itself; it must always be interpreted and applied. By increasing the quantity of information available to managers and supervisors at Border Patrol, their actions could become more erratic and less efficient because they will be able to pull from a vaster array of justifications for their decisions. Like any other self-interested actors, Border Patrol will always select and interpret information to justify the actions it wants to undertake while discounting information that supports another course of action. The principal-agent problem means that this rarely gets corrected.

For instance, 9.4 percent of Border Patrol hours were spent manning checkpoints from 2013–2016. Yet those checkpoints were responsible for, at most, only 3.1 percent of all illegal immigrants apprehended by Border Patrol in those years and 5.4 percent of all marijuana seizures by weight (Figure 2). That might look like an inefficient allocation of Border Patrol agents, but a smart manager can always argue the opposite by saying, as an example, “we don’t catch many illegal immigrants at checkpoints because checkpoints are so effective at deterring illegal immigrants from even trying to use the road. Imagine how many there’d be without the checkpoints!” That manager would have a good point.

Figure 2

Percent of Illegal Immigrant Apprehensions & Drug Seizures Made by Border Patrol by Location, 2013–2016


Source: Government Accountability Office, p. 41.

A border wall that diverts illegal immigrants into the interior and away from cities can be used as support for a longer wall to “extend the gains” or as support for no wall at all because “illegal immigrants are just diverted to more remote areas where agents now have to patrol with greater hazards.” More information and data could worsen the decisions made by Border Patrol managers. 

Back to Incentives

Firms employ countermeasures to information overload, such as better technology and algorithms to select the best data, but the countermeasures are only effective if the managers want to make more accurate decisions. The desire to make accurate decisions in a large organization comes back to incentives, which government agencies have a very difficult time aligning with the stated intent of the law.     

If the incentives to act efficiently are in place, then the actor has an incentive to discover the information necessary to carry out his task. But perfect information cannot fix poor incentives and, in fact, can make them worse. Rather than focusing on hiring statisticians, econometricians, and other technocrats to create ever-new metrics to judge government efficiency, Congress should think carefully about aligning incentives to get the outcomes it wants. That is a near-impossible job for politicians to tackle, so in most cases they should simply pull back the reach of federal law enforcement to focus on a handful of tasks.


The best information in the world cannot compensate for poor incentives and can make government management less efficient by providing cover for any choice. Government agents are not usually malevolent, or at least any more so than the rest of us, but they have incentives to satisfy political demands. Private firms that behave in these ways often fail or earn lower profits unless they are bailed out by the government, which is usually the source of these poor incentives in the first place. More metrics can even worsen efficiency. We should look to deeper structural reforms of government agencies rather than continuing to appeal to a priesthood of statisticians and econometricians to produce information to guide us.

After this morning’s Supreme Court argument in the Colorado wedding-cake case, the only thing that’s safe to predict about this case is that it’ll end up 5-4. It’s perhaps unavoidable that a case so politically fraught would break down on conventional ideological lines, with the four “conservatives” (presumably including the silent Justice Clarence Thomas) siding with the baker who didn’t want to create a custom cake for a same-sex wedding, the four “liberals” siding with the couple that wants to use the state’s anti-discrimination law to compel him to do so, and Justice Anthony Kennedy somewhere in the middle. But it’s disappointing – and it’s especially disconcerting that Justice Sonia Sotomayor kept comparing this case to Piggie Park, Katzenbach v. McClung, and other cases from the Jim Crow era, when African Americans were denied service at restaurants altogether.

It’s telling that none of the wedding-vendor cases we’ve seen in the courts (or in the news) over the last few years has involved a business that refuses to serve gay people altogether. Jack Phillips has certainly served gay customers – and offered to sell Charlie Craig and David Mullins anything on display in his store – as has Barronelle Stutzman, the Washington florist whose fate likely depends on the outcome of Masterpiece Cakeshop v. Colorado Civil Rights Commission. We simply don’t have situations like we did in the 1960s, when businesses claimed both a religious and an expressive right not to accept racial minorities as customers.

If some business, wedding-related or otherwise, didn’t want to serve gay people, that would be an easy case under Supreme Court precedent (leaving the question of the common-law freedom of association to one side). Instead, it’s quite clear to me that not wanting to convey a message of affirmation for a particular event is different from refusing to serve people based on their identity – and also that Jack Phillips’s gorgeous sculptures are just as protected by the First Amendment when made with fondant as they would if made with plaster.

Indeed, unless a “BBQ artist” is asked to concoct some sort of meat-statue with his tender-smoked goodness, there’s no parallel here. That’s why we wrote in our brief that “wedding (and other) vendors who produce and sell expressive works must be free to accept or reject particular jobs, [but] this right does not apply to those who do not engage in protected speech.” “Creating expressive [products] is constitutionally different than nonexpressive activity like delivering food, renting out ballrooms, or driving limousines.”

But that position may not get five votes; Justice Kennedy seemed to focus on the religious animus at play, as well as the uneven way in which the Colorado Civil Rights Commission has applied its law. Indeed, in a line of questioning that has provoked the most pessimism from the pro-force forces, he highlighted that “tolerance is essential in a free society.” In an echo of his opinion in Obergefell, the case that two years ago established same-sex couples’ right to marry, Kennedy said, “It seems to me that the state in its position here has been neither tolerant nor respectful.”

Still, it’s hard to see the grand champion of free speech forcing a baker (or anyone) to express a message he disagrees with, regardless of the implications for religious freedom. As he wrote in Obergefell, “The First Amendment ensures that religious organizations and persons are given proper protection as they seek to teach the principles that are so fulfilling and so central to their lives and faiths, and to their own deep aspirations to continue the family structure they have long revered. The same is true of those who oppose same-sex marriage for other reasons.” It just shouldn’t matter whether an artistic professional declines an expressive commission for reasons that are religious, secular, or “good” or “bad” – or none at all.

There are many ways the Supreme Court could slice this case, with many dividing lines that are anything but half-baked. But, to carry over a theme from yesterday’s case, I wouldn’t bet on any particular outcome.

You can read the argument transcript here and, for an audio-visual version of the same sort of debate, see video of my debate at Cato yesterday.

Spain is now known to food lovers as one of the great cheese producers of the world, but it wasn’t always so. At one of my favorite websites, Atlas Obscura, Jackie Bryant tells the story of how “one of Europe’s oldest and most varied artisanal cheesemaking cultures… was once entirely illegal. And its survival can be largely attributed to a black market of underground cheese.”

The villain in the piece is dictator Francisco Franco, who ruled from 1939 until his death in 1975, his policies on this subject lingering on for some years thereafter. With a taste for centralized command, Franco wanted to impose mass production and its efficiencies of scale on the dairy sector: 

As part of this policy, quotas were enacted that outlawed milk production under 10,000 liters a day. This made small dairies and cheesemaking productions… illegal. To comply with the law, they had to sell their milk to larger companies.

Enric Canut, a Barcelona-born cheesemaker, agricultural engineer, and dairy consultant, recalls a catalogue of Spanish cheeses compiled by the government in 1964. “Five years later,” he says, “most of those same cheeses were illegal!”

So traditional cheesemaking went underground. Especially in independent-minded rural areas like Galicia, most farmers quietly defied the government. They would report milk as having been personally consumed by the farm family itself, even if that meant by the hundreds of gallons a week. And they would meet in covert open-air markets – at times like 5 in the morning – to sell their wares beyond the view of inspectors. 

Canut later reported to the government that at least 25% of daily milk production in Spain went towards making illegal cheese. It was a remarkable refutation of the government’s policy. Franco had imagined large, industrial operations. Instead Spaniards enthusiastically supported small, black market cheesemakers who, as Canut remembers from visits throughout Spain in the 1970s, sometimes kept their cheese in actual caves….

Franco’s policies were slowly phased out, and, in 1985, dairies of all sizes became legal. Canut estimates that in a decade, Spain went from having almost no small dairies to having nearly 1,000—a combination of upstarts and illicit dairies that had been producing all along.

From there, another 20 years brings us to the current runaway success story of specialty Spanish cheeses, which figure on the menu at many Michelin-starred restaurants. Read the whole piece here.

P.S. Two weeks ago in this space I quoted an Atlas Obscura report on how here in the U.S. the FDA’s trans fat ban was making life hard for the little business that bakes Baltimore’s fudge-draped Berger cookie. Shortly after that the Baltimore Sun in its own follow-up report revealed a couple of further twists: while the company’s frosting supplier had managed to solve its trans fat problem, it did so in a way that exposed the cookie maker to a new regulatory trip-up. I explain in this Overlawyered post.   

There is a lot that’s wrong with U.S. foreign policy right now, but a broader look at U.S. grand strategy in the post-Cold War era reveals just how broken things have been across administrations of both parties.

The post-Cold War era has seen a continuation of a long global trend toward greater peace and stability, lower rates of conflict, and zero great power wars. More peace and diminishing threats have merely enhanced the remarkable security already enjoyed by the United States thanks to its geographic isolation, weak neighbors, unparalleled economic and military power, and its nuclear deterrent.

But America doesn’t act as if it is safe. Instead, we have a hyper-interventionist foreign policy. Over the last century, according to the Rand Corporation, “there was only one brief period – the four years immediately after U.S. withdrawal from Vietnam – during which the United States did not engage in any interventions abroad.” Indeed, “the number and scale of U.S. military interventions rose rapidly in the aftermath of the Cold War, just as [rates of global] conflict began to subside.”

According to data from the Congressional Research Service, the United States has engaged in more military interventions in the past 28 years than it had in the previous 190 years of its existence.* About 46 percent of Americans have lived the majority of their lives with the United States at war. Twenty-one percent have lived their entire lives in a state of war.

This suggests a truly perverse defect in the way we are carrying out foreign policy. In an era of unprecedented peace and stability, which should permit a less activist foreign policy, we are finding reasons to intervene militarily at an extraordinary pace, making the past three decades a significant outlier in U.S. history.

America’s role in the world underwent a massive expansion following WWII and again at the end of the Cold War. Washington adopted policies and built bureaucracies that incentivized interventionism. As Joseph Schumpeter once put it in an essay on imperialism, “Created by the wars that required it, the machine now created the wars it required.”

In some ways, Americans have been insulated from the worst effects of this aberrant post-Cold War foreign policy (the costs have been borne more acutely by certain foreign populations on the receiving end of it). However, there have been costs here at home. The United States has spent almost $15 trillion on its military since 1990, an enormous price tag that far exceeds what any other country has spent. This constant state of war also tends to undermine liberal values at home by eroding constitutional checks and balances on war powers, incentivizing excessive government secrecy, and infringing on civil liberties in the name of security. In the oft-cited words of James Madison, “No nation could preserve its freedom in the midst of continual warfare.”

As predicted, Donald Trump has maintained and in some ways expanded America’s militaristic and interventionist role in the world. And Trump’s rise is arguably another indication of how democratic norms can erode in the midst of continual warfare. As with most things, however, America’s unusual post-Cold War foreign policy and Trump’s convention-violating brashness have in many ways become normalized.

If we are ever to break out of this apathy and return once again to a realistic and prudent foreign policy commensurate with the low-threat environment we currently inhabit, we will have to reckon with the steep costs of this expansive grand strategy and wrangle the self-sustaining national security bureaucracy into the austerity it desperately needs.

*The data from the CRS report is helpful, but imperfect and incomplete. It lists 416 “notable deployments of U.S. military forces overseas” from 1798-2017. It lists 212 interventions between 1798 and January 1989 and 204 since then. However, many of the individual items listed in the 19th century involve minor actions like deploying a small naval force to gain the release of a captured U.S. citizen abroad or shows of force against pirates or mischievous whalers – deployments that are too minor to merit an individual itemized listing in later periods. Furthermore, “covert operations, disaster relief, and routine alliance stationing and training exercises are not included,” activities that are far more frequent now than they were in the past. One should consider the multiple covert undeclared drone wars the United States has waged in the post-9/11 era and, of course, programs of coordination with foreign militaries in conflict areas where U.S. forces get killed or wounded, as in Niger recently, but which do not make it on to the list. Finally, CRS bundled many individual post-9/11 deployments and interventions together as a single item on the list, even though they are clearly distinct and included multiple countries in separate regions of the world. This is likely because the executive branch bundled them together when informing Congress of the deployments, which is the primary source for CRS’s data. Completely and accurately accounting for these discrepancies would require a full-length study, but my own ad hoc, and I think conservative, adjustments led me to a breakdown of 199 interventions from 1798 to January 1989 and 213 from 1989 to today. 

House Minority Leader and former speaker Nancy Pelosi says that the Republican tax bill, “with stiff competition by some of the other things they have put forth, is the worst bill in the history of the United States Congress.”

That is a tall order. A quick search of the history of the United States Congress reveals that Congress has passed:

the Alien and Sedition Acts in 1798

the Indian Removal Act in 1830

the Fugitive Slave Act in 1850

the Eighteenth Amendment (Prohibition), the Espionage Act, and the Selective Service Act, and entered World War I, all in 1917

Public Law 503, codifying President Franklin D. Roosevelt’s Executive Order 9066 authorizing the internment of Japanese, German, and Italian Americans, in 1942

the Universal Military Training and Service Act in 1951

the Tonkin Gulf Resolution in 1964

the USA PATRIOT Act in 2001 (Pelosi voted for this)

the National Defense Authorization Act, featuring indefinite detention, in 2011 (Pelosi voted for this)

I don’t think the current tax bill is even in the running.

I suppose hyperbole is to be expected in Congress. But this was said on the floor of the House by the former speaker, so presumably it was carefully thought out. I do hope that Leader Pelosi will be granted permission to revise and extend her remarks.

Last month we posted our first “dispatch” from the frontlines of public schooling’s values and identity-based wars, conflicts ultimately entered on the Public Schooling Battle Map, an interactive database of such contests. The monthly dispatch is intended to lay out some of the themes we’ve observed in battles during the month, and to give you a sense over which basic values the public schools—inherently zero-sum arenas—have people battling. Here are the themes of November:

  • Discriminatory Dress Codes: Allegations that school dress codes discriminate against girls – proscribing lots of attire options for them on the grounds that they are too revealing, and may be distracting for boys, while prohibiting far less for the guys – were prevalent in November. Of course, dress code conflicts are not new—the Battle Map contains nearly 90 such fights—but it seems those fueled by accusations of gender discrimination, as opposed to, say, freedom of expression, may be growing. Conflicts in November flared up in Oxnard, CA; Loyalsock Township, PA; and Washington Township, IN.
  • Sex Ed: Putting at odds basic beliefs about moral behavior, health, and age appropriateness of instruction, sex education has been a war zone for decades. But it seemed to have faded at bit over the last few years, eclipsed by contests over bathroom access and other, even hotter-button issues. But it made a bit of return in November, with battles over proposed online, parent-selected sex education in Utah; the presence of Sex, Etc. magazine—with articles such as “Where do you stand on Friends With Benefits?” and “The clitoris and pleasure: What you should know”—in a New Jersey middle school; and a proposal in Niagara Falls, NY that could involve escorting Planned Parenthood reps through schools.
  • Curricula: What public schools teach is, of course, controversial, beyond the extremely contentious subject of sex education. In November we also saw Mexican American studies—and one proposed textbook in particular—create fireworks in Texas; disagreements over the definition of “civic readiness” in Nebraska; and a proposal in Florida not just to let parents challenge textbooks, but propose replacements.

There were lots of other conflicts—over The Hate U Give, Bible study, and more—but these seem to be the trends.

By the way, over on the Battle Map Facebook page we have started posting twice-weekly polls on the kinds of conflicts we see repeatedly. They are not scientific, and we are just starting to build traffic on the page, but they often suggest significant divides among, presumably, perfectly decent people. For instance, our question whether school officials or students should decide which bathrooms and locker rooms students can use saw an almost 50/50 split, with 48 percent choosing “public school officials” and 52 percent “students.” Asked whether the tenor of American history taught in public schools tends to be “too critical” or “too celebratory,” 65 percent chose the latter, but a still significant 35 percent picked the former.

Now, head over to the Facebook page and vote on the active questions: Should student journalists or school administrators ultimately decide what gets published in school newspapers, and who should decide what kids read in public schools? Also, please send any values or identity-based battles you find to nmccluskey [at] And ask yourself: Why should we be forced to fight, or sacrifice what matters to us, in educating our children? Why shouldn’t we be free to choose?

Today, the Department of Homeland Security (DHS) released a report detailing deportations (henceforth “removals”) conducted by Immigration and Customs Enforcement (ICE) during fiscal year 2017.  This post presents data on removals in historical context, combined with information from Pew and the Center for Migration Studies.

ICE deported 81,603 illegal immigrants from the interior of the United States in 2017, up from 65,332 in 2016.  Removals from the interior peaked during the Obama administration in 2011 at 237,941 (Figure 1).  ICE also removed large numbers of people apprehended at the border.  Since 2012, border removals have outnumbered those from the interior of the United States.

Figure 1

Interior and Border Removals by ICE, 2008-2017


Source: Immigration and Customs Enforcement.

The Obama administration removed 1,242,486 illegal immigrants from the interior of the United States over its full eight years, averaging 155,311 removals per year.  Data from the earlier Bush administration are more speculative, but they indicate that interior deportations were higher under Obama than under Bush.

The percentage of all illegal immigrants removed from the United States is a better measure of the intensity of interior enforcement than the total numbers removed (Figure 2).  Based on estimates of the total size of the illegal immigrant population from Pew, the Center for Migration Studies, and my own guesstimates for 2016 and 2017, 0.74 percent of that population was removed from the interior of the United States in 2017, up from 0.59 percent in 2016 but still below the 2014 percentage.  Interior removals as a percent of the illegal immigrant population peaked at 2.11 percent in 2009. 
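The Figure 2 calculation is simply interior removals divided by the estimated illegal immigrant population. A minimal sketch of that arithmetic is below; the removal counts come from the figures above, but the population numbers (roughly 11.1 million for 2016 and 11.0 million for 2017) are stand-in estimates chosen for illustration, consistent with the 0.59 and 0.74 percent figures cited in the text.

```python
# Sketch of the Figure 2 calculation: interior removals as a percent of the
# estimated illegal immigrant population. Removal counts are from the post;
# the population figures are illustrative stand-ins (the post blends Pew,
# Center for Migration Studies, and the author's own guesstimates).
interior_removals = {2016: 65_332, 2017: 81_603}
est_population = {2016: 11_100_000, 2017: 11_000_000}  # assumed estimates

for year in (2016, 2017):
    pct = interior_removals[year] / est_population[year] * 100
    print(f"{year}: {pct:.2f} percent of the estimated population removed")
```

With these assumed population figures, the sketch reproduces the 0.59 percent (2016) and 0.74 percent (2017) rates discussed above.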

Figure 2 

Removals as a Percent of the Illegal Immigrant Population


Sources: Immigration and Customs Enforcement, Pew, Center for Migration Studies, Author’s Estimates, Author’s Calculations.

President Obama’s administration removed an average of 1.38 percent of the interior illegal immigrant population in each year of his presidency.  The Obama administration’s interior removal statistics show a downward trend beginning in 2011 and continuing through the end of fiscal year 2016.

The Obama administration also focused immigration enforcement on criminal offenders (not all illegal immigrants are criminals).  During the Obama administration, 53.3 percent of all illegal immigrants removed were criminals, including those convicted of immigration crimes.  The Trump administration continued to focus on removing criminals in 2017.  However, criminal removals during the first year of Trump’s administration are slightly below those of 2016 (Figure 3).

Figure 3

Criminal Removals as a Percent of All Removals

Source: Immigration and Customs Enforcement.

The Trump administration has just begun to ramp up interior immigration enforcement.  The 2017 figures show a reversal of the declining interior enforcement effort seen under the Obama administration, but enforcement has not yet returned to its earlier peak.  The increased number of administrative arrests for immigration violations, the worsening immigration court backlog, and the revival of Secure Communities all indicate that this administration will continue to increase interior immigration enforcement in subsequent years.

Since the passage of the Affordable Care Act (ACA) in 2010, many economists have predicted that the Act would reduce labor market participation, and a recent New York Times article seemingly vindicates these expectations. The article recounts how the rapid increase in insurance premiums led Anne Cornwell to cut her working hours, and thus her yearly income, by 30 percent in order to be eligible for health insurance subsidies. The $24,000 reduction in income allowed Ms. Cornwell and her husband to qualify for $27,000 in subsidies.

Ms. Cornwell’s reduced labor market participation supports economists’ predictions based on how the ACA determines eligibility for subsidies. Subsidies are available for people who purchase coverage from health insurance exchanges created by the ACA and whose household income is between 100 and 400 percent of the Federal Poverty Level. Economists predicted that because the subsidies are based on household income instead of individual income, second earners in many households would reduce their hours in order to qualify.
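The eligibility rule behind this incentive can be sketched in a few lines. This is a hypothetical illustration, not actual case data: the poverty-level figure (an assumed $16,240 for a two-person household, roughly the 2017 federal guideline) and the income values are stand-ins, and the real subsidy formula is more involved than a simple in-or-out window.

```python
# Hypothetical sketch of the eligibility rule described above: subsidies are
# available only when household income falls between 100 and 400 percent of
# the Federal Poverty Level (FPL). FPL value and incomes are illustrative.
FPL_COUPLE = 16_240  # assumed poverty guideline for a two-person household

def subsidy_eligible(household_income: float, fpl: float = FPL_COUPLE) -> bool:
    """True if household income is within 100-400 percent of the FPL."""
    return 1.00 * fpl <= household_income <= 4.00 * fpl

# A household earning just above 400 percent of FPL gets no subsidy at all;
# cutting income below that cliff can restore a subsidy worth more than the
# forgone earnings -- the incentive the Times article describes.
print(subsidy_eligible(70_000))  # above 400 percent of FPL -> False
print(subsidy_eligible(46_000))  # inside the window -> True
```

Because eligibility turns on total household income, the cliff falls hardest on second earners, whose extra hours can push the household past the threshold.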

In 2014, for example, the Congressional Budget Office projected that the ACA would reduce the total number of hours worked by 1.5 to 2 percent between 2017 and 2024. In terms of full-time-equivalent workers, this represents a decline of 2.5 million workers in 2024.

It is not yet clear whether Ms. Cornwell’s decision is representative of a larger population of American workers, but her situation does coincide with economists’ findings. A recent working paper by Stanford economists Mark Duggan, Gopi Shah Goda, and Emilie Jackson—which I review in the upcoming issue of Regulation—looks at how the ACA has affected labor market participation in different regions of the United States since its implementation in 2014.

While they found no change in participation in the aggregate, this result stemmed from two offsetting trends. They found an increase in labor market participation in regions with a larger share of people who were uninsured and below the poverty line, and a reduction in participation in areas with a larger share of people who were uninsured and between 139 percent and 399 percent of the poverty line. “These changes suggest that middle-income individuals reduced their labor supply due to the additional tax on earnings while lower income individuals worked more in order to qualify for private insurance.”

Ms. Cornwell’s individual reduction in labor market participation is in line with these results. While aggregate labor market participation may remain the same, the reduction in participation by middle-class individuals could mean significant losses in tax revenues and employer surplus.

Written with research assistance from David Kemp.