Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Recent terrorist attacks in Europe have increased death tolls and boosted fears on both sides of the Atlantic. Last year, I used common risk analysis methods to measure the annual chance of being murdered in an attack committed on U.S. soil by foreign-born terrorists. This blog post is a back-of-the-envelope estimate of the annual chance of being murdered in a terrorist attack in Belgium, France, Germany, Sweden, and the United Kingdom. The annual chance of being murdered in a terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year. Over the same period, the chances are much lower in these European countries.

Methods and Sources

Belgium, France, and the United Kingdom are included because they have suffered some of the largest terrorist attacks in Europe in recent years. Sweden and Germany are included because they have each allowed in large numbers of refugees and asylum seekers who could theoretically be terrorism risks.

The main source of data is the Global Terrorism Database at the University of Maryland, which covers 1975 to 2015 with the exception of 1993; I used the RAND Database of Worldwide Terrorism to fill in that year. I have not compiled the identities of the attackers, any other information about them, or the number of convictions for planning attacks in Europe. The perpetrators are excluded from the fatality counts where possible. Those databases do not yet include 2016 and 2017, so I relied on Bloomberg and Wikipedia for a rough estimate of the number of fatalities in terrorist attacks in each country in those two years through June 20, 2017. The United Nations Population Division provided the population estimates for each country in each year.
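The arithmetic behind these "1 in N per year" figures is straightforward. The sketch below is a back-of-the-envelope reconstruction, not the author's actual spreadsheet: the `annual_chance` helper is mine, and it assumes a single average population figure for the period, whereas the actual calculations presumably use year-by-year population data.

```python
def annual_chance(fatalities, years, population):
    """Return N such that the annual chance of death is '1 in N'.

    Divides total fatalities by the number of years to get deaths
    per year, then divides population by that figure. Assumes a
    constant (average) population over the period.
    """
    deaths_per_year = fatalities / years
    return population / deaths_per_year

# Illustrative numbers only: 100 deaths over 10 years in a
# population of 10 million works out to 1 in 1,000,000 per year.
odds = annual_chance(fatalities=100, years=10, population=10_000_000)
print(f"1 in {odds:,.0f} per year")  # 1 in 1,000,000 per year
```

Note how sensitive the result is to small changes in the fatality count, which is the point the post makes in its conclusion.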

Terrorism Fatality Risk for Each Country

This section displays the number of terrorist fatalities and the annual chance of a resident of each country being murdered. The results in this section answer three important questions: What is the annual chance of having been killed in a terrorist attack from 1975 through 2017 in each European country? Has the annual chance of being killed in a terrorist attack gone up since the 9/11 attacks? How does the risk in Europe compare to the risk in the United States?

European Terrorism from 1975 through June 20th, 2017

Residents of the United Kingdom have suffered the most from terrorism. Almost 78 percent of the European fatalities reported in Table 1 were residents of the United Kingdom and about 95 percent of those British fatalities occurred before 2001.

The United Kingdom also had the highest annual chance of a resident dying in an attack, at 1 in 964,531 per year (Table 1).

Table 1: Fatalities and Annual Chance of Dying in a Terrorist Attack, 1975–June 20th, 2017

Country          | Fatalities | Annual Chance of Dying
United Kingdom   | 2,632      | 1 in 964,531
Belgium          | 64         | 1 in 6,936,545
France           | 506        | 1 in 4,984,301
Sweden           | 20         | 1 in 19,001,835
Germany          | 148        | 1 in 23,234,378
United States    | 3,568      | 1 in 3,241,363

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations.

The deadliest terrorist attack across these five European countries was the 1988 bombing of Pan Am 103 over Lockerbie, Scotland, which killed 270 people; an additional 110 residents of these five countries were murdered that year. The next deadliest year was 1976, with 354 victims. The third deadliest year was 1975, when there were 1,252 murders in terrorist attacks (Figure 1). The number of fatalities in European terrorist attacks rose to 172 in 2015 and fell to 133 in 2016. Every death in a terrorist attack is a tragedy, but Europeans should take comfort in the fact that their chance of dying in such an attack is minuscule.

Figure 1: Terrorism Fatalities in Belgium, France, Germany, Sweden, and the United Kingdom, 1975–2017

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia.

Terrorism Risk in Europe versus the United States

The annual chance of being murdered in any terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year (Table 2). The annual chance was much lower in every European country during the same period. Table 2 also includes the United States without the fatalities from the 9/11 attacks, which were such extremely deadly outliers that they are unlikely to be repeated. Excluding the 9/11 attacks thus allows a potentially better cross-country comparison of annual fatality chances. Strikingly, the annual chance of an American being murdered in a terrorist attack is almost identical across the two periods when 9/11 is excluded, evidence that those attacks were outliers punctuating an otherwise steady trend.

Prior to 2001, the annual chance of dying in a terrorist attack in every country in Europe was higher than in the United States, with the sole exception of Sweden. When 9/11 occurred, the relative risk to residents in these countries flipped and the United States became more dangerous.

Table 2: Annual Chance of Dying in a Terrorist Attack by Period

Country                   | 1975–2000       | 2001–2017
United States             | 1 in 19,767,153 | 1 in 1,602,021
France                    | 1 in 6,059,061  | 1 in 4,006,878
Belgium                   | 1 in 9,611,873  | 1 in 4,373,511
United Kingdom            | 1 in 590,389    | 1 in 8,796,562
Sweden                    | 1 in 22,145,655 | 1 in 15,858,016
United States (exc. 9/11) | 1 in 19,767,153 | 1 in 19,772,468
Germany                   | 1 in 17,338,091 | 1 in 47,429,484

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations. 2017 data run through June 20th.

Terrorism Risk Since 9/11

Many think that Islamic terrorism since 2001 is deadlier than past terrorism. This is certainly true in the United States, where at least 3,246 people were killed on U.S. soil in all terror attacks from 2001 through 2017, compared to only 322 from 1975 through 2000. Those differences are reflected in the greater, but still small, annual chance of an American dying from terrorism in the later period (Table 2). The chances of being murdered in a terrorist attack are also higher in France, Belgium, and Sweden, but they are still tiny. Residents of the United Kingdom and Germany were less likely to die, per year, in a terrorist attack from 2001 through 2017.

The largest decline in risk was in the United Kingdom, where the annual chance of being killed by terrorists went from 1 in 590,389 per year prior to 2001 to 1 in 8,796,562 per year from 2001 through June 20th, 2017. For 2016 and 2017 (so far), the chance of a British resident dying in a terrorist attack is about 1 in 3.5 million per year. The chance of a British resident being murdered in a non-terrorist homicide in 2013 was about 133 times greater than his or her chance of being murdered in a terrorist attack that year.

Conclusion

From 2001 through June 20th, 2017, the chance of an American being murdered in a terrorist attack was greater than that of a resident of any of these five European countries. Future terrorist attacks are unlikely to be as deadly as 9/11, even though there is a fat-tailed risk. When the unprecedented deadliness of 9/11 is excluded, the comparison reverses: residents of every European country except Germany had a greater chance of being murdered in a terrorist attack than an American on U.S. soil.

The number of deaths from terrorism is so small that the addition or subtraction of a few murders can drastically change the annual chance of being murdered, which is evidence of how manageable the threat from terrorism actually is. If terrorism were as common or deadly as people erroneously believe it to be, then another attack or two would not make a big difference in the annual chances.

A total of 3,370 residents of Belgium, France, Germany, Sweden, and the United Kingdom were murdered by terrorists from 1975 through June 20th, 2017. About 231 million people lived in those five countries in 2015. If they were combined into a single country, the annual chance of dying in a terrorist attack would be about 1 in 2.8 million per year over that period, and about 1 in 8.3 million per year from 2001 through June 20th, 2017. That is a lower risk than the 1 in 1.6 million per year chance of an American being murdered in a terrorist attack on U.S. soil from 2001 through 2017. Even in Europe, terrorism is a relatively small and manageable threat.

There has been debate this week about how many libertarians there are. The answer: it depends on how you measure it and how you define libertarian. The overwhelming body of literature, however, using a variety of methods and definitions, suggests that libertarians comprise about 10-20% of the population, though estimates range from 7% to 22%.

Furthermore, if one imposes the same level of ideological consistency on liberals, conservatives, and communitarians/populists that many do on libertarians, these groups too comprise similar shares of the population.

In this post I provide a brief overview of different methods academics have used to identify libertarians and what they found. Most methods start from the premise that libertarians are economically conservative and socially liberal. Despite this, different studies find fairly different results. What accounts for the difference?

First, people use different definitions of libertarian.

Second, they use different questions in their analysis to identify libertarians.

Third, they use very different statistical methods.

Let’s start with a few questions: How do you define a libertarian? Is there one concrete libertarian position on every policy issue?

What is the “libertarian position” on abortion? Is there one? What is the “libertarian position” on Social Security? Must a libertarian support abolishing the program, or might a libertarian support private accounts, or means testing, or sending it to the states instead? A researcher will find fewer libertarians in the electorate if they demand that libertarians support abolishing Social Security rather than means testing or privatizing it. 

Further, why are libertarians expected to conform to an ideological litmus test but conservatives and liberals are not? For instance, what is the “conservative position” on Social Security? Is there one? When researchers use rigid ideological definitions of liberals and conservatives, they too make up similar shares of the population as libertarians. Thus, as political scientist Jason Weeden has noted, researchers have to make fairly arbitrary decisions about where the cut-off points should be for the “libertarian,” “liberal,” or “conservative” position. This pre-judgement strongly determines how many libertarians researchers will find.

Next, did researchers simply ask people if they identify as libertarian, or did they ask them public policy questions (a better method)? If the latter, how many issue questions did they ask? Then, what questions did they ask?

For instance, what questions are used to determine whether someone is “liberal on social issues”? Did the researcher ask survey takers about legalizing marijuana, or about affirmative action for women in the workplace? Libertarians will answer these questions very differently, and that will affect the number of libertarians researchers find.

While there is no perfect method, the fact that academics using a variety of questions, definitions, and statistical techniques still find a share somewhere between 7% and 22% gives us some confidence that the number of libertarians is considerably larger than zero.

Next, I give a brief overview of the scholarly research on the estimated shares of libertarians, conservatives, liberals, and communitarians in the American electorate. I organize the findings by the methods used, starting with the most empirically rigorous:

Ask people to answer a series of questions on a variety of policy topics and input their responses into a statistical algorithm

In these studies, researchers ask survey respondents a variety of issue questions on economic and social/cultural issues. They then input people’s answers into a statistical clustering technique and let an algorithm find the number of libertarians. This is arguably the strongest method for identifying libertarians.
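To make the clustering idea concrete: Feldman and Johnson used latent class analysis, a more sophisticated technique, but the basic intuition can be sketched with a minimal hand-rolled k-means on invented two-dimensional (economic conservatism, social liberalism) scores. Everything here, the data, the two-cluster choice, and k-means itself, is my illustrative assumption, not their method.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as its cluster's mean, for a fixed
    number of rounds."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                          + (p[1] - centroids[j][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Invented scores on a 0-1 scale: a libertarian-ish group (economically
# conservative AND socially liberal) and a liberal-ish group (socially
# liberal only). The algorithm recovers the two groups without labels.
points = [(0.90, 0.90), (0.85, 0.95), (0.80, 0.88),
          (0.10, 0.90), (0.15, 0.85), (0.05, 0.92)]
centroids, clusters = kmeans(points, k=2)
```

With real survey data the interesting question becomes how many clusters to ask for, which is one more researcher degree of freedom that shapes the headline percentages.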

  1. Political scientists Stanley Feldman and Christopher Johnston use a sophisticated statistical method (latent class analysis) to find ideological groups in the electorate. They find six ideological groups based on answers to a variety of questions on economic and social issues. Feldman and Johnston’s results indicate that about:
  • 15% are likely libertarians (conservative on economics and liberal on social issues)
  • 23% are likely liberals
  • 17% are likely conservatives
  • 8% are communitarians/populists (liberal on economics and very conservative on social issues)
  • 13% are economic centrists but social liberals
  • 24% are economic centrists but lean socially conservative.   

Ask people to answer a series of questions on a variety of policy topics and plot their average responses on a 2-dimensional plot

In these studies, researchers 1) average responses to multiple questions on economics and then 2) average responses to multiple questions on social/cultural/identity/lifestyle issues. They then take the two averaged scores to plot respondents on a 2-dimensional graph (Economic Issues by Social Issues).
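A minimal sketch of this averaging approach follows. The 0-1 scale, the battery contents, and the 0.5 cut-off are all my illustrative assumptions; the arbitrariness of that cut-off is exactly the cut-point problem discussed above.

```python
def classify(econ_answers, social_answers, cutoff=0.5):
    """Average each question battery (1 = economically conservative /
    socially liberal answer, 0 = the reverse) and label the quadrant
    the respondent falls in. The cutoff is arbitrary, which is why
    different studies find different shares of each group."""
    econ = sum(econ_answers) / len(econ_answers)
    social = sum(social_answers) / len(social_answers)
    if econ >= cutoff and social >= cutoff:
        return "libertarian"            # econ conservative, socially liberal
    if econ >= cutoff:
        return "conservative"           # conservative on both dimensions
    if social >= cutoff:
        return "liberal"                # liberal on both dimensions
    return "communitarian/populist"     # econ liberal, socially conservative

# A respondent conservative on 2 of 3 economic items and liberal on
# all 3 social items lands in the libertarian quadrant.
print(classify([1, 1, 0], [1, 1, 1]))  # libertarian
```

Shifting the cutoff, or swapping one question in a battery, moves respondents across quadrant boundaries, which is how methodologically similar studies arrive at shares ranging from 7% to 24%.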

  1. Political scientist Jason Weeden averages people’s responses to questions on economics (income redistribution and government assistance to the poor) and on social issues (abortion, marijuana legalization, the morality of premarital sex) found in the General Social Survey. He finds:
  • 11% of Americans are libertarian
  • 11% are conservative
  • 14% are liberal
  • 9% are communitarian/populist
  • Remaining people are roughly evenly distributed between these groups

  1. Political scientists William Claggett, Par Jason Engle, and Byron Shafer use answers to a variety of questions on economic and “culture” issues from the American National Election Studies from 1992 to 2008 to determine that:
  • 10% of the population is libertarian
  • 11% is populist
  • 30% is conservative
  • 30% is liberal
  • (Their methods are unclear and their “culture” index may include questions about spending on crime and support for affirmative action for women.)
  1. Political scientists William Maddox and Stuart Lilie average responses to three questions on government economic intervention and three questions about personal freedom from the American National Election Studies and find that:
  • 18% of the population is libertarian
  • 24% is liberal
  • 17% is conservative
  • 26% is communitarian 
  1. The Public Religion Research Institute added (rather than averaged) responses to 9 questions on social and economic issues and coded cumulative scores of 9-25 as libertarian. Doing this, they find that:
  • 7% of Americans are libertarian
  • 15% lean libertarian
  • 17% lean communalist
  • 7% are communalist
  • 54% have mixed attitudes
  1. For a previous Cato blog post, I conducted a similar analysis and created three separate estimations. Each used averaged responses to economic questions, plotted alongside average answers to either 1) social issues questions, 2) race/identity questions, or 3) criminal justice and racial equality questions.
  • Using economic and social issues I find:
    • 19% Libertarian
    • 20% Communitarian
    • 31% Conservative
    • 30% Liberal
  • Using economic and race issues, I find:
    • 19% Libertarian
    • 15% Communitarian
    • 33% Conservative
    • 33% Liberal
  • Using economic and criminal justice issue positions I find:
    • 24% Libertarian
    • 15% Communitarian
    • 28% Conservative
    • 33% Liberal

Ask people to answer a question about economic policy and a question about social policy

While not as rigorous as asking people multiple questions, this is another quick way to observe the diversity of ideological opinion in surveys.

  1. Nate Silver of FiveThirtyEight uses two questions from the General Social Survey (support for same-sex marriage, and whether government ought to reduce income inequality with high taxes on the rich and income assistance to the poor) and finds:
  • 22% are libertarian
  • 25% conservative
  • 34% liberal
  • 20% communitarian 

  1. David Kirby and David Boaz use answers to 3 survey questions and find that 15% of the population are libertarians (agree that less government is better, and that free markets can better solve economic problems, and that we should be tolerant of different lifestyles)

Ask people if they identify as libertarian and know what the word means

The Pew Research Center found that 11% of Americans agree that the word “libertarian describes me well” and know libertarians “emphasize individual freedom by limiting the role of government.”

Ask people if they identify as socially liberal and fiscally conservative, an oft-used definition of libertarianism

A 2011 Reason-Rupe poll found that 8% of Americans said they were “conservative” on economic issues and “liberal” on social issues. The same method found that 9% identified as “liberal” on both social and economic issues, 2% as liberal on economic issues and conservative on social issues, and 31% as conservative on both. The remainder were somewhere in the middle. These results are consistent with polls from Rasmussen and Gallup, which find a public preference for the word “conservative” over “liberal.” This means many people who endorse liberal policies are inclined to self-identify as moderate or conservative.

Conclusions

In sum, the overwhelming body of empirical evidence suggests that libertarians’ share of the electorate is likely somewhere between 10% and 20%, and that the conservative and liberal shares aren’t much greater. Libertarians exist, and in large numbers, but you have to know what you’re looking for.

Rumor has it that tomorrow is the day Senate Republican leaders will unveil the health care bill they have been busily assembling behind closed doors. So few details have emerged that President Trump could learn something from Senate Majority Leader Mitch McConnell about how to prevent leaks. Even GOP senators are complaining that they haven’t been allowed to see the bill.

Here are five questions I will be asking about the Senate health care bill if and when it sees the light of day.

  1. Would it repeal the parts of ObamaCare—specifically, community rating—that preclude secure access to health care for the sick by causing coverage to become worse for the sick and the Exchanges to collapse?
  2. Would it make health care more affordable, or just throw subsidies at unaffordable care?
  3. Would it actually sunset the Medicaid expansion, or keep the expansion alive long enough for a future Democratic Congress to rescue it?
  4. Tax cuts are almost irrelevant—how much of ObamaCare’s spending would it repeal?
  5. If it leaves major elements of ObamaCare in place, would it lead voters to blame the ongoing failure of those provisions on (supposed) free-market reforms?

Depending on how Senate Republicans—or at least, the select few who get to write major legislation—answer those questions, the bill could be a step in the right direction. Or it could be ObamaCare-lite.

The Trump administration’s recent proposal on infrastructure stressed federalism. It said that the “federal government now acts as a complicated, costly middleman between the collection of revenue and the expenditure of those funds by states and localities. Put simply, the administration will be exploring whether this arrangement still makes sense, or whether transferring additional [infrastructure] responsibilities to the states is appropriate.”

Indeed, the federal-middleman arrangement does not make sense. With regard to highways, federal funds go not just to the 47,000-mile interstate highway system (IHS) but also to the vast 3.9-million-mile “federal-aid highway system.” Yet there are few advantages of federal funding over state funding for most of the nation’s highways, which are owned by the states and mainly serve state and local needs.

As such, there have been many proposals to devolve at least the non-IHS activities to the states. In such “turnback” proposals, the federal government would cut its highway spending and its gas tax, and allow states to fill the void.

The turnback idea has been around for a while. A major 1987 study by the Advisory Commission on Intergovernmental Relations (ACIR) proposed devolving all highway funding except IHS funding to the states. The ACIR was led by a bipartisan mix of federal, state, and local elected officials, and was known for its top-notch staff experts.

Thirty years later, the ACIR report contains sound advice for today’s policymakers. Here are some excerpts:

The Commission concludes that a devolution of non-Interstate highway responsibilities and revenue sources to the states is a worthwhile goal and an appropriate step toward restoring a better balance of authority and accountability in the federal system (page 2).

It is the sense of the Commission that the Congress should move toward the goal of repealing all highway and bridge programs that are financed from the federal Highway Trust Fund, except for: (1) the Interstate highway system, (2) the portion of the bridge program that serves the Interstate system, (3) the emergency relief highway program, and (4) the federal lands highway program. The Commission urges that the Congress simultaneously relinquish an adequate share of the federal excise tax on gasoline—about 7 cents of the federal tax on motor fuel plus an additional 1 cent for a grant based on lane mileage—to finance the above programs (page 2). [Note: the federal gas tax at the time was just 9.1 cents per gallon].

With state and local governments freed from federal requirements, some of which are unsuitable and expensive, turnbacks offer the possibility of more flexible, more efficient, and more responsive financing of those roads that are of predominantly state or local concern. Investment in highways could be matched more closely to travel demand and to the benefits received by the communities served by those roads (page 3).

Highway turnbacks potentially can add both certainty and flexibility—as well as efficiency and accountability—to the financing of the nation’s transportation infrastructure as well as to the design and operation of both new and modernized roads (page 4).

In time, federal requirements and sanctions have accumulated, which have limited state and local governments’ flexibility in road construction and operation, have restricted these governments’ ability to address specific transportation needs, and have probably increased the cost and time needed for road improvements … The design standards required for receiving federal road grants may often be higher than those actually employed for roads built with state or local funds alone. The result can be that some federally subsidized highways are “gold-plated,” that is, built more lavishly than would be the case if state and local governments made the tradeoffs involved in highway plans and financed their choices by taxes levied on their own constituents (page 11).

[Federal highway regulations] may intrude the most broadly upon the choices of state-local governments and citizens. Examples include the rule that federally aided projects be preceded by an environmental analysis and the Davis-Bacon requirement to pay union wage rates, or the equivalent. The Federal Highway Administration has estimated that the Davis-Bacon requirement added between $293 and $586 million to road costs in FY 1986 (page 12).

The federal restriction on state and local road choices occurs not solely because federal standards are high, but because they tend to be inflexible, inappropriate to circumstances that vary from place to place, and more responsive to national interest groups than to the users of specific highways (page 13).

There is “fiscal equivalence” when the same political community—the same jurisdiction—finances a governmental program, is responsible for its operation, and receives the benefits of that program … The tie between taxing and spending promotes efficiency and careful choices, whether spending levels are high or low. Because various areas’ highway needs and preferences are so different, a nationally uniform program cannot tailor taxing and spending to each other, as state and local programs can (page 22).

With the Interstate system used for long-distance travel, most of the benefits of other federally aided roads are contained within state boundaries. These non-Interstate, federally aided roads should be considered for turnback. Absent federal funding, there is reason to believe that state-local responsibility for the devolved highways would not impair nationwide mobility or interstate commerce. Devolution would move toward “fiscal equivalence.” The same jurisdiction that finances a set of roads will benefit from them. Thus highway spending and highway services would be more closely linked than is presently the case. Efficiency would be enhanced as would political, fiscal, and program accountability (page 48).

The diverse goals and constituencies served by the federal highway program has led to a complex operation and has engendered controversy over the program’s procedures and allocation formulas … Devolution … would sharpen goals and priorities (page 48).

The ACIR report (“Devolving Selected Federal-Aid Highway Programs and Revenue Bases: A Critical Appraisal”) is here.

The federally managed National Flood Insurance Program (NFIP) is $25 billion in debt, stokes moral hazard, and entails a regressive wealth transfer that favors coastal areas. The NFIP is set to expire at the end of September, giving policymakers an important chance to rethink the program. The House Financial Services Committee is considering the Flood Insurance Market Parity and Modernization Act on Wednesday; the current version of the bill takes important steps toward moving the U.S. to a private flood insurance market. Private insurance would improve on the NFIP by ending transfers from general taxpayers to the wealthy and the coasts and by limiting moral hazard.

Private insurance functions as a market-driven regulator of risk. Private insurers set premiums to accurately reflect risk, forcing economic agents to internalize the risk they choose to assume. For instance, auto insurance premiums depend both on a driver’s record and on other factors that correlate with risk, such as age or region of the country.

The enactment of the NFIP in 1968 reflected a belief that a centrally planned insurance program could better fulfill the regulatory function of insurance than the private market. Government-managed insurance could, it was held at the time, “limit future flood damages without hampering future economic development” and “prompt an adjustment in land use to reduce individual and public losses from floods,” reported a Housing and Urban Development study integral to the program’s design.

However, the NFIP’s fifty-year record shows why the reasoning behind the program’s creation was misguided. The NFIP is beset by many design flaws, especially in how premiums are priced. About 20% of all NFIP policies are explicitly subsidized, receiving a 60-65% discount off the NFIP’s standard rate. These subsidies are not targeted at poor homeowners; eligibility instead depends on the age of the property. They turn out to be wildly regressive.

Even the 80% of the NFIP’s so-called “full risk” properties are not priced accurately. For instance, despite their name the full risk rates do not include a loading charge to cover losses in especially bad years, so even these insurance policies are money-losers in the long run.

Moreover, the NFIP’s rates are not set on a property-by-property basis. Instead, they reflect average historical losses within a property’s risk-based categories. As a result, while the subsidies and lack of loading charge mean that the NFIP generally undercharges risk, in some instances premiums are actually overpriced.

Debt is not the only consequence of the NFIP’s misguided premiums. The systemic underpricing of insurance causes moral hazard, by masking the cost of flood risk and encouraging overdevelopment in flood-prone areas. Because the average home in the NFIP is much more valuable than an average American home, the program is regressive on the whole. And since a disproportionate number of properties in the NFIP are on the southeastern coast, wealth is transferred from the rest of the country to homeowners near the coast in those states.

Congress could, theoretically, fix some of these design problems, but past attempts to reform the NFIP to more closely resemble a private insurance company failed miserably, and exemplify why in practice government rarely succeeds in competently managing what should be private business. For instance, in 2012 Congress passed the Biggert-Waters Flood Insurance Reform Act, which required the NFIP to end subsidies and to begin including a catastrophe loading surcharge. However, due to interest group pressure Congress reversed itself just two years later, halting some reforms and getting rid of others outright. The quick backtrack was a classic example of government failing to act in the public interest due to concentrated benefits and diffused costs.

However, one positive aspect of the 2012 reforms has persisted. The Biggert-Waters law ended the NFIP’s de-facto monopoly by allowing property owners to meet mandatory purchase requirements with private market insurance. Private insurers have since returned to the market, successfully competing with the NFIP.

Recent innovations in catastrophic modeling and catastrophic risk hedging mean that private market flood insurance is more viable than ever. Insurance industry experts suggest that private insurers can cover most properties in the NFIP and note that U.S. flood risk is the largest growth area for world-wide private reinsurers.

A forthcoming Cato Policy Analysis discusses technological innovations in the private flood insurance industry and the social benefits of moving to private flood insurance and terminating the NFIP. If termination is politically impossible, it suggests that any reauthorization of the NFIP should at least include measures that level the playing field between the NFIP and private alternatives.

Measures to encourage private competition include allowing a more flexible array of private coverage terms to meet mandatory purchase requirements, mandating that FEMA release property-level flood data to private insurers, and allowing firms that contract with the NFIP to also issue their own insurance plans. The Flood Insurance Market Parity and Modernization Act contains many of these measures, and would represent an excellent step towards ending a system that subsidizes wealthy coastal homeowners to take imprudent risks.

Special thanks to Ari Blask, who co-authored the forthcoming report and provided copious assistance on this blog post as well. 

It comes as no surprise that the Supreme Court has agreed to hear the case of Gill v. Whitford, in which a district court struck down the Wisconsin legislature’s partisan gerrymander. Conservative justices want to hear the case as a way to correct an error, while liberals see it as their last best chance to tee up a landmark constitutional case on redistricting while Anthony Kennedy is still on the Court. Within hours, however, the grant of review was followed by a kicker – an order staying the court order below, over dissents from the four liberals – that calls into question whether the momentum is really with those hoping to change Kennedy’s mind.

Last time around, in 2004’s Vieth v. Jubelirer, the Court foreshadowed this day. Four Justices led by Scalia declared that for all the evils of political gamesmanship in drawing district lines – a practice already familiar before the American revolution – there was and is no appropriately “justiciable” way for the Court to correct things; it would be pulled into a morass of subjective and manipulable standards that could not be applied in a practical and consistent way and would cost it dearly in political legitimacy. Justice Anthony Kennedy, in a separate concurrence, agreed in dismissing the Pennsylvania case at hand, and said the Court was “correct to refrain from directing this substantial intrusion into the Nation’s political life” that would “commit federal and state courts to unprecedented intervention in the American political process.” But he left the door open to some future method of judicial relief “if some limited and precise rationale were found to correct an established violation of the Constitution.”

That set up a target for litigators and scholars to shoot for: can a formula be found that is “limited and precise” enough, and based on an “established” enough constitutional rationale, to convince Justice Kennedy? After all, the Court’s 1962 Baker v. Carr one-person-one-vote decision on districting had been an unprecedented intervention in the American political process, but also one that could be implemented by a simple formula yielding consistent outcomes and little need for ongoing supervision (take the number of people in a state and divide by the number of districts).  

Plaintiffs in the Wisconsin case hope that a newly devised index they call the “efficiency gap” can serve as an adequately objective measure of whether partisan gerrymandering has taken place, given evidence of such motivation. Even if courts accept this, it is another big jump to be confident that they can provide consistent and predictable remedies unaffected by judges’ own political prejudices. 
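The measure itself is simple enough to state: a vote is “wasted” if it is cast for a losing candidate, or for a winning candidate beyond the bare majority needed to win, and the efficiency gap is the difference between the two parties’ wasted votes divided by all votes cast. Here is a minimal sketch of the calculation; the five-district vote totals are purely illustrative, not real Wisconsin data.

```python
# Sketch of the "efficiency gap" index proposed by the Gill v. Whitford
# plaintiffs. District vote totals below are hypothetical.

def wasted_votes(votes_a, votes_b):
    """Wasted votes in one district: all votes for the loser, plus the
    winner's votes beyond the bare majority needed to win."""
    threshold = (votes_a + votes_b) // 2 + 1  # bare majority
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

def efficiency_gap(districts):
    """Net wasted votes divided by total votes cast; positive means
    party A wasted more votes than party B."""
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        w_a, w_b = wasted_votes(votes_a, votes_b)
        wasted_a += w_a
        wasted_b += w_b
        total += votes_a + votes_b
    return (wasted_a - wasted_b) / total

# A contrived five-district plan: party A's voters are "packed" into
# one district and "cracked" across the other four.
plan = [(85, 15), (35, 65), (35, 65), (35, 65), (35, 65)]
print(efficiency_gap(plan))  # → 0.206
```

In this contrived plan the gap comes to about 21 percent, well above the roughly 7 percent threshold the plaintiffs have suggested as a trigger for scrutiny.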

The decision to stay or not stay a lower court order often provides a peek as to which side the Justices expect to prevail. And the five-member majority to stay the Wisconsin order – a majority including unsurprisingly Gorsuch, but more significantly Kennedy – suggests that at this point it is the conservative side’s case to lose. 

Whatever the Court’s disposition of the Wisconsin case, gerrymandering remains a distinctive political evil, an aid to incumbency that promotes the interests of a permanent political class, and a worthy target for efforts at reform. I’ve written more on that here and here.


The federal government runs more than 2,300 subsidy programs, and they are all susceptible to fraud and other types of improper payments. The earned income tax credit (EITC), for example, throws about $18 billion down the drain each year in such payments.

Perhaps the program that generates the most outrageous rip-offs is the $150 billion Social Security Disability Insurance (SSDI) program. From the Washington Post today:

Eric Conn, the fugitive attorney who pleaded guilty to orchestrating a scheme to defraud the federal government of $600 million, remains at large since he cut off his court-ordered GPS monitoring bracelet on June 2…Conn in March entered guilty pleas to defrauding the Social Security Administration via bribes he paid to a doctor and a judge to process and approve his clients’ disability claims. 

From 2006 to 2016, Conn processed 1,700 client applications for Social Security benefits with a potential of $550 million in lifetime benefits. Since the revelation of the allegations, the Social Security Administration has contacted many of Conn’s former clients with claims they owe as much as $100,000 for disability payments going back 10 years unless they can prove they have been disabled the entire time…

Conn’s fraud scheme was fueled by television advertisements that included a 3-D television ad from 2010 and one from 2009 in which Conn hired YouTube star “Obama Girl” and Bluegrass music legend Ralph Stanley to sing a version of “Man of Constant Sorrows” with new lyrics that refer to Conn as a “superhero without a cape” and to brag that Conn had “learned Spanish off of a tape.” In a rap video, Conn billed himself as Hispanic-friendly: “Even if you’re Latino, no need to worry cuz this gringo speaks the lingo.”

One greedy lawyer, a corrupt doctor and judge, some jingles, and our government gets ransacked for $600 million. That’s not very comforting to taxpayers, is it?

In his study of SSDI for DownsizingGovernment.org, Tad DeHaven said, “SSDI is a classic example of a well-intentioned effort to provide modest support to truly needy people that has exploded into a massive entitlement that is driving up the federal deficit.”

DeHaven proposed these SSDI reforms: 

  • Cut the program’s average benefit levels.
  • Impose stricter eligibility standards to discourage claims from people who should be working.
  • Create a longer delay for the initial receipt of benefits to discourage frivolous applications.
  • Reduce the large number of appeals for people initially denied benefits.
  • Ensure greater quality control and consistency of decisions by officials and judges.
  • Create a “taxpayer advocate” in the administrative law process to challenge dubious claims made by applicants and their lawyers.
  • Apply continuous disability reviews of people receiving benefits in a more vigorous manner.

His study is here.

…is data, as the late UC-Berkeley political scientist Ray Wolfinger once said.

David Boaz used Wolfinger’s quote when emailing me this short note from the Economic Policy Journal’s website about the apparent harmful effects on employment of Washington state’s recent minimum wage increase. A snippet:

As we were seated, I couldn’t help but notice that there were no busboys in sight—waitresses and the manager were busy clearing and cleaning tables. There were no young people in sight either, only employees in their late-20s and up.

I waited for the manager to man the checkout register and couldn’t pass up a brief economic discussion. I commented that I’m from out of state (Idaho, where the minimum wage is the federally mandated $7.25/hr) and couldn’t help but notice the impact that Washington’s minimum wage ($11/hr) was having on his restaurant.

Well-intended proponents of higher minimum wages will likely dismiss this note using the far-more-common but very wrong misquotation that “the plural of anecdote isn’t data.” More sophisticated proponents will go further and cite David Card and Alan Krueger’s 1994 American Economic Review paper, which found apparently beneficial employment effects of a minimum wage increase on fast-food restaurants in New Jersey and eastern Pennsylvania in the early 1990s.

Thing is, there has been an awful lot more empirical research on the effects of minimum wage increases than this one paper by Card and Krueger. The overwhelming balance of that research has found harmful employment effects, falling mainly on an especially disadvantaged population: young black males. In a review of this academic literature, economists David Neumark and William Wascher find:

Nearly two-thirds [of the 102 analyses they reviewed] give a relatively consistent (although by no means always statistically significant) indication of negative employment effects of minimum wages while only eight give a relatively consistent indication of positive employment effects. … [Further, of the 33 analyses we] view as providing the most credible evidence, 28 (85 percent) of these point to negative employment effects. Moreover, when researchers focus on the least-skilled groups most likely to be adversely affected by minimum wages, the evidence for disemployment effects seems especially strong. … We view the literature—when read broadly and critically—as largely solidifying the conventional view that minimum wages reduce employment among low-skilled workers.

The plural of anecdote, indeed.

For more on minimum wage research, see this Cato Policy Analysis by former U.S. deputy assistant labor secretary Mark Wilson. Or this brilliant little Cato Handbook on Policy chapter.

When people hear “democracy,” they tend to get warm, fuzzy feelings. As the Century Foundation’s Richard Kahlenberg writes in an article that, among other things, portrays private school choice as a threat to democracy, “public education…was also meant to instill a love of liberal democracy: a respect for the separation of powers, for a free press and free religious exercise, and for the rights of political minorities.” The fundamental, ironic problem is that both democracy and democratically controlled public schooling are inherently at odds with the individual rights, and even separation of powers, that Kahlenberg says democracy and public schools are supposed to protect.

Let’s be clear what “democracy” means: the people collectively, rather than a single ruler or small group of rulers, make decisions for the group. We typically think of this as being done by voting, with the majority getting its way.

Certainly, it is preferable for all people to have a say in decisions that will be imposed on them than to have a dictator impose things unilaterally. But there is nothing about letting all people have a vote on imposition that protects freedom. Indeed, in a pure democracy, as long as the majority decides something, no individual rights are protected at all. The will of the majority is all that matters.

We’ve seen basic rights and equality under the law perpetually and unavoidably violated by democratically controlled public schooling. It cannot be otherwise: At its core, a single system of government schools—be it a district, state, or federal system—can never serve all, diverse people equally. It must make decisions about whose values, histories, and culture will and will not be taught, as well as what students can wear, what they can say, and what they can do, in order to function.

Public schooling since the days of Horace Mann has found it impossible to uphold religious freedom and equality. Mann himself was constantly assailed by people who felt that by trying to make public schools essentially lowest-common-denominator Protestant institutions, he was throwing out religion or making the schools de facto Unitarian (his denomination). Mann, in response, promised that the Protestant Bible would always be used in public schools. Indeed, Protestantism was often thought essential to being a good American, including being supportive of democracy, which meant that if the public schools were to serve their civic purpose they could not treat religious minorities equally, especially Roman Catholics, who were suspected of taking their political orders from the Pope in Rome.

Today, after more than a century of even deadly conflict over religion, the public schools are no longer de facto Protestant, but instead may legally have no connection that could appear to be advancing religion, right down, often, to speeches by individual students at events such as graduation ceremonies or athletic contests. This inherently renders religious people second-class citizens—any values are fair game in public schools except for theirs—while also curbing basic expression rights.

Of course, the inherent inequality of public schooling is not restricted to religion. In a public school a teacher, committee, school board, or other government actor must decide what aspects of history will be taught or literature read. This requires that government elevate some people’s speech and perspectives, while deeming others’ essentially unworthy. As a result, we have perpetual battles that tear at the social fabric over which books—The Bluest Eye, The Adventures of Huckleberry Finn, The Absolutely True Diary of a Part-Time Indian—should or should not be read in class or over whose history should be taught, and the losers are rendered unequal under the law.

Public schooling has also constantly intruded on separation of powers. Power is first supposed to be separated among levels of government, with local control often considered ideal for democratic control of schools. But local control has been shrunk as states and the federal government have stepped in to stop discrimination, or because districts have been deemed “in need of improvement.” State authority has been circumscribed for similar reasons. And the separation of federal powers—legislative, executive, and judicial—was shredded under President Obama when he offered states waivers out of the No Child Left Behind Act’s most onerous provisions, but only if they agreed to conditions unilaterally determined by his administration.

Alas, such compression and destruction of subsidiarity is almost guaranteed with democratically controlled schooling. Why? Because if people in a political minority—or even a majority unable to accumulate sufficient political power—cannot get the democratic government closest to them to provide the education they want, they can only with huge difficulty—by moving their homes—meaningfully help themselves. They have basically no option but to appeal to a “higher” level of government. And when no level of democratic governance seems to respond, they feel compelled to allow a single person—a mayor, governor, or president—to take over.

The good news is that American government is not supposed to be grounded in democracy. It is grounded in liberty—the freedom of individuals to govern their own lives, and to combine however they freely choose. “Life, liberty, and the pursuit of happiness” are laid out as “unalienable rights” in the Declaration of Independence, and they, not “democracy,” are what government is created to protect.

Ironically, the educational system that is consistent with the liberty on which the country is based is the one that Kahlenberg and other government schooling advocates argue is fundamentally at odds with American values: school choice. And their major worries, at least at first blush, are not unreasonable: people have a tendency to associate with people like themselves, potentially “balkanizing” the country, and private schools do not have to teach values like religious tolerance. But first blush is not reality.

First, of course, the protection of individual rights that Kahlenberg wishes to defend is sacrificed the moment people are compelled to fund a government-run school. One school cannot teach both that we were created by God and that we were not. It cannot put even a tiny fraction of all literature on class syllabi. It cannot have a dress code and allow total freedom of expression. The only possible way for government to treat all diverse people equally in education is to enable them to choose what they will teach and learn.

But private schools, especially if they stand for specific beliefs, will fail to promote tolerance and teach civic values, right? Wrong. Quite possibly because chosen schools, especially private ones, are free to say “we stand for this” and “we do not stand for that—choose us if you agree,” research suggests that they are more effective than public schools at inculcating civic knowledge and behaviors such as voting, volunteering in one’s community, and tolerance of those with whom one disagrees. Everyone in the school—both educators and families—voluntarily agrees to the beliefs and standards the school promotes, allowing more rigor and clarity in teaching history, civics, or personal behavior. Public schools, in contrast, must work with diverse populations, and to avoid wrenching conflict and the distinctly un-American imposition of one group’s views on another, they will often choose lowest-common-denominator content that may offend few people but also conveys little of clarity or use. Students in private schools might also cherish individual liberties a bit more than those in public schools because they see theirs curbed by the public schools.

Then there’s this: While the evidence is strong that in myriad ways people tend to prefer to associate with others like themselves—and that government can do little to change that—people also want to have commonalities with larger society. It simply makes their lives easier: Speaking the common language makes daily life smoother. Adopting the common culture makes one feel more at home. All these things make succeeding economically easier. So people will seek out commonality on their own. This means that public schooling, or any other government effort to impose commonality, may well be unnecessary, while definitely being inherently conflict-fostering and rights-trampling.

“Democracy” is a confusion-enshrouded, contradictory weapon that has been successfully employed against freedom in education for too long. It is time to reassert liberty as the fundamental American value and cease letting it be trampled by, and for, public schooling.

While it’s apt to get lost in news coverage of this morning’s bigger rulings, a moment should be set aside to applaud today’s solid 8-1 Supreme Court decision in Bristol-Myers Squibb, together with the related 8-0 outcome from May 30 in the case of BNSF v. Tyrrell. Both cases arose from state courts’ attempts to grab jurisdiction over out-of-state corporations for purposes of hearing lawsuits arising from out-of-state conduct affecting out-of-state complainants. And in both instances—with only Justice Sonia Sotomayor still balking—the Justices made clear that some states’ wish to act as nationwide regulators does not allow them to stretch the constitutional limits on their jurisdiction that far. 

For background on the cases, see our April post. We wondered then whether the consensus of Justices displayed in the benchmark 2014 Daimler case would endure rather than be splintered, and the answer was yes: it did and has. Justice Sotomayor, sticking to a once-popular position, is still convinced that if states want to do a certain amount of long-arm collaring of cases involving interstate businesses that arose elsewhere and might fit conveniently into their dockets, well, that’s fair enough for government work. That led her to file a lone partial concurrence in BNSF, as against a majority opinion written by Justice Ruth Bader Ginsburg (who has authored much of the Court’s modern jurisprudence in this area), and an outright dissent in today’s Squibb decision, whose majority opinion was authored by Justice Samuel Alito. To no one’s surprise, new Justice Neil Gorsuch joined the majority in both cases.

Many commenters will inevitably group these cases with last month’s 8-0 decision in the patent venue case of TC Heartland v. Kraft Foods, which I described as “a landmark win for defendants in patent litigation—and, on a practical level, for fairer ground rules in procedure.” To be sure, the underlying legal materials were completely different; TC Heartland involved the interpretation of wording in a federal statute. What united the three cases with Daimler is that the contemporary Court is keenly aware of the danger that the tactical use of forum-shopping will eclipse the merits in many categories of high-stakes litigation, turning potentially losing cases into winners through the chance to file them in a more friendly court.

That insight might prove significant at a time when forum-shopping has come to play a prominent role in high-profile ideological litigation—with conservatives running to file suit in the Fifth Circuit, liberals in the Ninth.

In a unanimous judgment that splintered on its reasoning, the Supreme Court correctly held that the “disparagement clause” of the Lanham Act (the federal trademark law) violated the Constitution. The ruling boils down to the simple point that bureaucrats shouldn’t be deciding what’s “disparaging.”

Trademarks, even ones that may offend many people—of which plenty are registered by the Patent and Trademark Office (PTO)—are private speech, which the First Amendment prevents the government from censoring. As Justice Samuel Alito put it in a part of the opinion that all the justices joined (except Neil Gorsuch, who didn’t participate in the case), “If the federal registration of a trademark makes the mark government speech, the Federal Government is babbling prodigiously and incoherently.”

At this point, the Court split. Justice Alito, joined by Chief Justice Roberts and Justices Thomas and Breyer, explained why trademarks don’t constitute a subsidy or other type of government program (within which the government can regulate speech), and that the “disparagement clause” doesn’t even survive the more deferential scrutiny that courts give “commercial” speech. The remaining four justices, led by Justice Anthony Kennedy, would’ve ended the discussion after finding that the PTO here is engaging in viewpoint discrimination among private speech. The end of his opinion is worth quoting in full:

A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.

Fundamentally, this somewhat unusual case brought by an Asian-American electronic-rock band shows that government can’t make you choose among your rights. The Lanham Act’s disparagement clause placed an unconstitutional condition on those who consider the use of an edgy or taboo phrase to be part of their brand: either change your name or be denied the right to use it effectively. Whether you’re a musician, a politician, or a sports team—the Washington Redskins’ moniker will now be safe—it’s civil society (consumers, voters, fans) who should decide whether you’re being too offensive for polite company.

For more, see my previous writings here and here—and of course reading Cato’s “funny brief” is all the sweeter after this ruling.

Attorney General Jeff Sessions writes in Sunday’s Washington Post:

Drug trafficking is an inherently violent business. If you want to collect a drug debt, you can’t, and don’t, file a lawsuit in court. You collect it by the barrel of a gun. 

Sessions correctly identifies a major source of crime in the drug distribution business: people with a complaint can’t go to court. But he then jumps to the conclusion that “drug trafficking is an inherently violent business.” This is a classic non sequitur, and it’s hard to imagine that he doesn’t actually understand the problem. He is, after all, a law school graduate. Prohibitionists talk of “drug-related crime” and suggest that drugs cause people to lose control and commit violence. Sessions gets closer to the truth in the opening of his op-ed, but he goes wrong with the word “inherently.” Selling marijuana, cocaine, and heroin is not “inherently” more violent than selling alcohol, tobacco, or potatoes.

Most “drug-related crime” is actually prohibition-related crime. The drug laws raise the price of drugs and cause addicts to have to commit crimes to pay for a habit that would be easily affordable if it were legal. And more dramatically, as Sessions notes, rival drug dealers murder each other–and innocent bystanders–in order to protect and expand their markets. 

We saw the same phenomenon during the prohibition of alcohol in the 1920s. Alcohol trafficking is not an inherently violent business. But when you remove legal manufacturers, distributors, and bars from the picture, and people still want alcohol, then the business becomes criminal. As the figure at right (drawn from a Cato study of alcohol prohibition and based on U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 [Washington: Government Printing Office, 1975], part 1, p. 414) shows, homicide rates climbed during Prohibition, 1920-33, and fell every year after the repeal of prohibition. 

Tobacco has not (yet) been prohibited in the United States. But as a Cato study of the New York cigarette market showed in 2003, high taxes can have similar effects:

Over the decades, a series of studies by federal, state, and city officials has found that high taxes have created a thriving illegal market for cigarettes in the city. That market has diverted billions of dollars from legitimate businesses and governments to criminals.

Perhaps worse than the diversion of money has been the crime associated with the city’s illegal cigarette market. Smalltime crooks and organized crime have engaged in murder, kidnapping, and armed robbery to earn and protect their illicit profits. Such crime has exposed average citizens, such as truck drivers and retail store clerks, to violence.

Again, to use Sessions’s language, cigarette trafficking is not an inherently violent business. But drive it underground, and you will get criminality and violence. 

Sessions’s premise is wrong. Drug trafficking (meaning, in this case, the trafficking of certain drugs made illegal under our controlled substances laws) is not an inherently violent business. The distribution of illegal substances tends to produce violence. Because Sessions’s premise is wrong, his conclusion–a stepped-up drug war, with more arrests, longer sentences, and more people in jail–is wrong. A better course is outlined in the Cato Handbook for Policymakers.


The negotiations on the UK exiting the EU start today. Here’s the BBC:

Brexit Secretary David Davis will call for “a deal like no other in history” as he heads into talks with the EU.

Subjects for the negotiations, which officially start in Brussels later, include the status of expats, the UK’s “divorce bill” and the Northern Ireland border.

Mr Davis said there was a “long road ahead” but predicted a “deep and special partnership”.

The UK is set to leave the EU by the end of March 2019.

Day one of the negotiations will start at about 11:00 BST at European Commission buildings in Brussels.

Mr Davis and the EU’s chief negotiator Michel Barnier, a former French foreign minister and EU commissioner, will give a joint press conference at the end of the day. 

I’ve sensed some growing concerns about how well these talks might go, and the recent UK general election only made things worse. It’s not clear to me that the politicians who are in charge here can make this a success. Time will tell.

If you are looking for something positive related to Brexit, however, once the UK does leave the EU the personnel situation on the technical side of things is looking good. On Friday, the UK Department for International Trade announced that it had hired Crawford Falconer as the Chief Trade Negotiation Advisor. From the announcement: 

Together with his team Crawford will:

  • develop and negotiate free trade agreements and market access deals with non-EU countries
  • negotiate plurilateral trade deals on specific sectors or products
  • make the department a ‘centre of excellence’ for negotiation and British trade
  • support the UK’s membership of the World Trade Organization (WTO).

Falconer is not a household name, but he is someone that I am very familiar with. I had just been reading his latest co-authored work. He was one of the judges (technically, a “panelist”) on a WTO dispute panel that ruled earlier this month on whether the U.S. has complied with a previous ruling related to the subsidies it provides to Boeing. He has also acted as a judge in 14 other GATT/WTO decisions.

Now, you may say, international adjudication is all well and good, but how about trade negotiations? Does he have any experience there? In fact, he does. He is a dual UK/New Zealand citizen and has been negotiating for New Zealand for many years. He was New Zealand’s Ambassador to the WTO in Geneva from 2005 to 2008 (and during that time, in his personal capacity, he chaired the Doha Round negotiations on agriculture and cotton). His LinkedIn page has more on his professional background.

There is still a long way to go before we get to the point of the UK negotiating free trade deals of its own. But once we do get there, its trade policy team is in pretty good hands.

Today a broad coalition of more than 40 scholars from over 30 think tanks and academic institutions issued a letter calling on the relevant House and Senate committees to grant the Pentagon authority to reduce excess military infrastructure. Simply put, we need another Base Realignment and Closure (BRAC) round. The full letter can be found here.

All of the signatories, myself included, signed as individuals, not as representatives of their respective institutions. But the breadth and depth of the coalition reflected in their affiliations, from the Center for American Progress and Peace Action to Americans for Tax Reform and FreedomWorks, shows just how much support exists for a process that has helped the military to deal with its excess overhead in five rounds beginning in the late 1980s through the mid-2000s, and that could do so fairly again.

The letter stresses points that I have made elsewhere (e.g., here, here, and here). The Pentagon has repeatedly requested authority to close unneeded or underutilized bases. It estimates that its capacity exceeds its needs by over 20 percent, and that is true even if the U.S. military remains at its current size or grows modestly. The Obama administration asked Congress to approve BRAC, as has the Trump administration.

The objections to BRAC focus too narrowly on the economic harms that can come to communities affected by a base closure, without seeing the opportunities created when underutilized property is made available for redevelopment. There is pain. No one disputes that. But it is possible for communities to recover from a base closure; some have done so very quickly, and most emerge with a stronger, more diversified economic base after a military base is closed.

We conclude:

BRAC has proven to be a fair and efficient process for making the difficult but necessary decisions related to the configuration of our military’s infrastructure. In the absence of a BRAC, defense communities are hurting. Although members of Congress have blocked base closures with the intent of helping these communities, they are actually making the problem worse. The time to act is now. Congress should grant our military the authority to eliminate waste, and ensure that vital defense resources flow to where they are most needed.

Read the full letter.

Have you ever played fantasy sports for money? Have you ever participated in your office March Madness pool? Well, if you did, you may have broken federal law, which is quite ridiculous. If you’ve bet on your local jai alai match, though, that was probably safe.

In sports gambling, as is so often the case, the law is not keeping up with our behavior and attitudes. There’s a growing movement to modernize our gambling laws, including some new coalitions, such as the American Sports Betting Coalition (ASBC), and at least one case pending at the Supreme Court. That case, Christie v. NCAA, is a challenge to the constitutionality of the Professional and Amateur Sports Protection Act of 1992 (PASPA). Cato supported the petition, which will be discussed by the justices next week. 

PASPA outlawed sports betting, with the exception of horse racing and jai alai (obviously), in most states. In classic horse-trading style, carve-outs were made for Oregon, Delaware, Montana, and Nevada. Even worse, the law prohibits states from “authorizing” sports gambling “by law,” which should be regarded as a violation of states’ rights protected by the Tenth Amendment—but the Third Circuit didn’t see it that way. The irony, of course, is that 44 states, along with D.C., the U.S. Virgin Islands, and Puerto Rico—47 jurisdictions in all—have government-run lotteries, because evidently those governments are okay with gambling that benefits their budgets.

Overturning the restrictive federal ban on sports betting would empower states to make their own decisions about whether or not to allow it. Areas where it is allowed could see economic growth sparked by increased tourism and betting-related jobs, as well as other industries springing up around this new frontier. Oxford Economics—one of the world’s leaders in global forecasting and quantitative analysis—has estimated that legal sports betting could add $14 billion to the national economy, generate up to $27 billion in total economic impact, and support 152,000 American jobs. In addition to these economic benefits, repeal would give states and local law enforcement the ability to oversee legal gambling, taking power from dangerous underground gambling rings.

At its root, this is an issue of federalism; people in cities and towns across America should be able to decide for themselves if sports betting is something they want for their communities. Many of them do; nearly 7 in 10 Americans believe that the issue should not be left to the federal government, and 72 percent of avid sports fans support the legalization of sports betting. 

This concern is becoming more relevant as states move to classify fantasy sports leagues as “sports betting” under PASPA and prevent a growing American pastime from thriving in an open, legal market. From 2007 to 2016, 37.6 million people in the U.S. and Canada started playing fantasy sports. As of 2016, a total of 57.4 million people in both countries take part in them. Widespread participation in casual betting extends well beyond fantasy sports websites: during this year’s NCAA March Madness, 70 million brackets were filled out, with people placing a total of $10.4 billion in bets, of which only 3 percent were made legally. Even former presidential candidate Mitt Romney got in on the action in 2015’s March Madness (and nearly had the best bracket in the country).

Clearly, people have not stopped betting because of PASPA; on the contrary, these legally ambiguous arrangements are becoming even more popular and lucrative. The line between legal and illegal is increasingly difficult to define, if not arbitrary. In states that are cracking down on fantasy sports leagues, the leagues are considered “chance based” rather than “skill based.” If a fantasy sports player researches draft picks, checks injury reports, or consults expert opinions, are they merely leaving it to chance? There are surely players who play randomly, with no thought to strategy, but does this justify outlawing such a popular practice over a semantic difference between “chance” and “skill”?

As evidenced by the nation’s annual tradition of March Madness office pools, average people want to bet on sports. It should not be a crime for a regular person to put $5 on the game. Big government bans and one-size-fits-all legislation are ineffective, and states, localities, and individuals should be empowered by repealing the federal ban on sports betting.

(Thanks to legal intern Patrick Moran for assistance with this post.)

That is the subject line from a friend’s email that passed along this story about the latest proposed escalation of the Drug War:

Congress is considering a bill that would expand the federal government’s ability to pursue the war on drugs, granting new power to the attorney general to set federal drug policy. 

My friend explained in a follow-up call that, as he started reading, he assumed the bill reflected Jeff Sessions’ passion for the Drug War, but he then realized the bill is bipartisan insanity:

The bipartisan legislation, sponsored by powerful committee chairs in both chambers of Congress, would allow the attorney general to unilaterally outlaw certain unregulated chemical compounds on a temporary basis. It would create a special legal category for these drugs, the first time in nearly 50 years that the Controlled Substances Act has been expanded in this way. And it would set penalties, potentially including mandatory minimum sentences, for the manufacture and distribution of these drugs.

Hence my friend’s assessment that “everyone” is terrible (on drug policy).

This is an important point. Much discussion assumes liberals are more libertarian-leaning on drug policy than conservatives. This is partly right; liberals are more likely to favor marijuana legalization, for example.

But many liberals endorse marijuana legalization because they view marijuana as relatively benign, not because of a principled stance for freedom or a consistent understanding that prohibition of any substance almost certainly causes more harm than good. Thus politicians across the spectrum are indeed “terrible” on drug policy.

 

Free society came under attack twice this month, first when Islamists rammed a van into pedestrians and went on a knife-slashing rampage in the Southwark district of London, and then when a gunman opened fire on Republican lawmakers in the Del Ray neighborhood of Northern Virginia.

In both cases, police had barely begun their investigations when an American politician—first the Republican president, then a Democratic governor—seized on the carnage to advocate political causes via electronic media.

In the hours after the London attack, President Trump took to Twitter to push his administration’s proposed travel ban on people from several predominantly Muslim countries:

Then, in the first police briefing on the Del Ray shooting, Virginia Gov. Terry McAuliffe called for expanded gun control:

It’s reasonable for a politician to advocate policies that he thinks will prevent a recurrence of a fresh tragedy. However, Trump’s immigration proposals are supported by people who typically oppose McAuliffe’s gun control proposals, and McAuliffe’s are supported by people who typically oppose Trump’s. This is puzzling because the proposals themselves are remarkably similar: both would constrain individuals’ freedoms in an effort to improve public safety. So why do the two proposals get such different responses from different people?

It’s not that there’s a big difference in the risk to public safety posed by immigrants or guns. Both can be harmful, in the sense that some immigrants and some guns have been involved in violence. But the risk posed by the typical gun or the typical immigrant is tiny.

In the United States, there are roughly 11,000 gun homicides a year, 22,000 other gun deaths (largely suicides), and perhaps 75,000 other gun injuries. Those numbers are grisly, but assuming that a different gun is used in each incident (an extremely conservative, false assumption), the sum of those numbers is dwarfed by the estimated 300 million guns in the United States. Less than 0.04% of guns are involved in one of those incidents each year.
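That back-of-the-envelope share is easy to verify. The sketch below, in Python, uses only the figures quoted above and the text’s own deliberately conservative assumption of one distinct gun per incident; it is illustrative arithmetic, not a claim about the underlying data sources.

```python
# Figures quoted in the text (annual, approximate).
gun_homicides = 11_000
other_gun_deaths = 22_000      # largely suicides
other_gun_injuries = 75_000
total_guns = 300_000_000       # rough estimate of guns in the U.S.

# Conservative assumption from the text: a different gun in each incident,
# which overstates the share of guns involved and so gives an upper bound.
incidents = gun_homicides + other_gun_deaths + other_gun_injuries
share = incidents / total_guns
print(f"{share:.3%}")  # 0.036%, i.e. less than 0.04%
```

Even this upper bound puts more than 99.96% of guns outside any such incident in a given year.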

The risk posed by immigrants is also tiny. As Michelangelo Landgrave and Alex Nowrasteh have noted, less than 1% of illegal immigrants (about 123,000 people) and less than 0.5% of legal immigrants (roughly 64,000 people) in the United States were incarcerated in a given year for the commission of any sort of crime (compared to about 1.5% of natives, some 2 million people). Concerning terrorism specifically, the numbers are unimaginably small; Alex calculates that 0.0006% of refugees and 0.00004% of illegal immigrants who have entered the United States in the last few decades have committed a domestic act of terror or been convicted of planning one.

Efforts to reduce dangerous persons’ legal access to guns and to bar entry to dangerous would-be immigrants are sensible. Some such controls are permitted under the U.S. Constitution and American law, though those sources generally protect the right to keep and bear arms (especially for protection) and prohibit discrimination on the basis of religion (though, it should be acknowledged, there’s no constitutional right to immigrate). But, constitutional concerns aside, the risk numbers hardly support broad travel bans, “extreme vetting,” or “gun controls” that—speaking empirically—don’t appear to enhance public safety.

Beyond constitutionally protected rights, any policy to control guns or immigration should recognize the benefits they provide. Guns offer protection and recreation for millions of Americans. Estimates of the number of crimes thwarted each year by firearms in the United States range between the tens of thousands and millions. Meanwhile, each new wave of immigrants provides the country with more workers, more entrepreneurs, more consumers, and more contributors to the nation’s social fabric.

As a libertarian, I highly value those benefits and freedoms. Given my priors and the numbers mentioned above, I’m skeptical of calls for tighter controls on either immigration or legal gun ownership. But, putting those priors aside, I’m puzzled by the selective risk-intolerance of Trump, McAuliffe, and their supporters.

Why are Trump and so many red-teamers willing to adopt restrictions to curb the tiny risks posed by immigrants, but aren’t willing to do so for the similarly tiny risks posed by guns? Conversely, why are McAuliffe and blue-teamers willing to restrict guns but not immigration? Shouldn’t their risk sensitivities for either be the same?

And, if you agree with me that they should be, then what is this red/blue fight over guns and immigration really about?

Our departed colleague Andrew Coulson spent the last years of his life producing School Inc., a wonderful and informative documentary about the possibilities of private, choice-based schooling. I highly recommend it. Amazingly, at least to me, PBS agreed to air the documentary, and in April it debuted on PBS stations around the country.

Unsurprisingly, a chorus of critics is angry that PBS would air such a program. Media Matters for America seems to call for the outright censorship of any critique of public education on public television by wondering, “why would a public broadcast channel air a documentary that is produced by a right-wing think tank and funded by ultra-conservative donors, and that presents a single point of view without meaningful critique, all the while denigrating public education?” Diane Ravitch, a prominent critic of private schools, complains that “uninformed viewers who see this very slickly produced program will learn about the glories of unregulated schooling, for-profit schools, [and] teachers selling their lessons to students on the Internet,” but “what they will not see or hear is the other side of the story.” Now a petition has been started calling for PBS to air “the other side” of the story by showing the anti-private school film Backpack Full of Cash.

I have nothing against showing the “other side” to Andrew’s series, but we need to put this debate in context. When it comes to PBS and the Corporation for Public Broadcasting, the “other side” that doesn’t get heard is usually the conservative or libertarian side, and CPB has generally been deeply antagonistic to those ideas. That Ravitch and others are now the ones complaining is at least somewhat ironic.

But I do not want to denigrate their efforts. By its very nature, public broadcasting excludes viewpoints (airtime is finite) and requires citizens to speak up when they think something is unbalanced. But public broadcasting is unbalanced all the time. The Flat Earth Society did not get equal time to refute Carl Sagan’s Cosmos, and creationists didn’t get airtime to refute The Ascent of Man.

The true source of bias lies in which ideas get labeled “mainstream” and which “extremist”: extremist ideas, the thinking goes, don’t deserve airtime, but mainstream ideas do. That lesson was learned in the 70s when Milton Friedman and Bob Chitester fought to put Friedman’s 10-part, pro-free-market documentary Free to Choose on PBS. Interestingly, School Inc. was also produced by Free to Choose Media and Bob Chitester. Free to Choose Media funds and creates freedom-oriented videos and documentaries, and Bob Chitester has been fighting to put freedom-oriented ideas on PBS for almost five decades.

The fight to put Free to Choose on the air began when W. Allen Wallis, a free-market economist and member of the CPB board, was troubled by PBS’s airing of John Kenneth Galbraith’s 13-hour paean to planned economies and “new socialism” called The Age of Uncertainty. Galbraith was one of the most prominent public intellectuals of the day (somewhat like Paul Krugman but more extreme), and an avowed proponent of increased government involvement in the economy, from higher taxes to the nationalization of some industries. But his ideas were perceived as “mainstream,” so The Age of Uncertainty aired with almost no “balance.” Critics were given three to five minutes to respond to Galbraith at the end of each episode, which was seen as sufficient balance for PBS: 55 minutes of Galbraith, three to five minutes of critics.

Appalled by this slightly watered-down state-sponsored support for near-socialist ideas, Wallis put into motion the production of Free to Choose and Bob Chitester took the reins. But it wouldn’t be easy. Friedman won the Nobel Prize in Economics in 1976, but, in the words of Wallis, “public broadcasting people regarded Friedman as a fascist, an extreme right-winger.” Galbraith, however, was “a middle-of-the-road person.” In Friedman’s words: “From the point of view of the people who were running PBS, Galbraith’s series was politically correct and mine was incorrect.”

Chitester was asked by PBS executives how he intended to provide “balance” in Friedman’s program. “I don’t intend to have any balance,” he responded, “in light of the thirteen hours given to Galbraith.” Yet Chitester and Friedman decided to address PBS’s concerns by devoting more than half of Friedman’s ten-hour series to critics. The second half of each episode, as well as the entire last episode, features critics challenging Friedman’s ideas.

Even though Friedman skillfully and devastatingly addresses his critics (it’s really the best part of the series), PBS executives could hardly call the series unbalanced. Nevertheless, they still resisted giving the series any more exposure than they thought it deserved. While Galbraith’s series got a choice 9 p.m. weeknight time slot, as part of the core PBS schedule, Free to Choose was relegated to 10 p.m. on Fridays. In New York City, they even showed it opposite the Super Bowl. Friedman and Chitester complained, but complaints from viewers, and the growing popularity of the program, finally pushed PBS to give it a better spot.

Despite “using” the avenue of PBS (and remember that, for many people in 1980, PBS was one of only four networks they could watch), Friedman was always deeply skeptical of government-run media. In Capitalism and Freedom, Friedman explains why markets are the best protectors of those who wish to espouse unpopular ideas:

The suppliers of paper are as willing to sell it to the Daily Worker as to the Wall Street Journal. In a socialist society, it would not be enough to have the funds. The hypothetical supporter of capitalism would have to persuade a government factory making paper to sell it to him, the government printing press to print his pamphlets, a government post office to distribute them among the people, a government agency to rent him a hall in which to talk, and so on.

Friedman also reminds us of a time when government-controlled broadcasting had a pronounced effect on the world:

From 1933 to the outbreak of World War II, Churchill was not permitted to talk over British radio, which was, of course, a government monopoly administered by the British Broadcasting Corporation. Here was a leading citizen of his country, a Member of Parliament, a former cabinet minister, a man who was desperately trying by nearly every device possible to persuade his countrymen to ward off the menace of Hitler’s Germany. He was not permitted to talk over the radio to the British people because the BBC was a government monopoly and his position was too “controversial.”

Again, I support the efforts to convince PBS to air “the other side” to Andrew’s views, but I’d be more supportive of abolishing a public broadcasting system that only pretends to be objective. Maybe, when Ravitch and other supporters of government schools are considered “extremist,” they’ll support that too.

History hasn’t been kind to Alexander Hamilton’s hypothesis, in Federalist 68, that “there will be a constant probability of seeing the [office of the presidency] filled by characters pre-eminent for ability and virtue.” Still, he was spot-on in No. 65, when he predicted that impeachment debates would stoke partisan rancor, driving “pre-existing factions [to] enlist all their animosities, partialities, influence, and interest on one side or on the other.”

Impeachment talk started unusually early in the Trump administration, and seems likely to get louder as we go. So far it’s been an even richer source of hyperbole and hypocrisy than the judicial filibuster.

“Congress must begin impeachment proceedings immediately,” insists MoveOn.org, the activist group born in a 1998 campaign urging Congress to “Move On to pressing issues facing the country,” instead of impeaching Bill Clinton. They’ve lately developed an interest in presidential obstruction of justice, so today MoveOn would rather linger. Meanwhile, the American Spectator—the magazine that put itself on the map (and the Paula Jones lawsuit in play) with investigative reporting on Clinton’s sex scandals—already has a case of “impeachment fatigue.” “The times are sour and ill-mannered enough without unnecessary strife over removal of a duly elected president of the United States,” William Murchison sniffs at the AmSpec site.

As I noted in a piece for U.S. News earlier this week, the emerging refrain on the Right is that anyone who dares mention the “I-word” has thrown in with a vast left-wing conspiracy plotting “a coup attempt against a lawfully elected government.” That’s from Dinesh D’Souza, but Gary Bauer, Tom Tancredo, Ben Stein, Lou Dobbs, and Pat Buchanan are all singing from the same hymnal. If Trump is eventually brought down via impeachment, Buchanan charges, “this city will have executed a nonviolent coup against a constitutionally elected president.”

In our last national debate over impeachment, the coup was on the other foot (sorry!). Congressional Democrats used the term liberally, railing against the GOP attempt to remove Bill Clinton for perjury and obstruction of justice. “A partisan coup d’etat,” cried Rep. Jerrold Nadler (D-NY); a “Republican coup d’etat,” echoed John Conyers (D-MI). Rep. Maxine Waters (D-CA) pronounced herself appalled by “the raw, unmasked, unbridled hatred and meanness that drives this impeachment coup d’etat, this unapologetic disregard for the voice of the people.”

All three are, of course, still in Congress today, ready to weigh in on Trump’s current predicament. Nadler has affirmed that “impeachment[’s] a possibility”; “Auntie Maxine” is leading the charge; and while it doesn’t appear that Conyers has used the “I-word” yet, it’s surely just a matter of time, given that he’s tried to impeach nearly every Republican president over his five-decade career (while giving Democrats a pass for similar behavior).

It’s hard to take the coup comparison seriously or literally. A leading constitutional law casebook observes that:  

Because of the twelfth and twenty-fifth amendments, the successor to the president will most likely be a member of his own party.… Because in the case of Clinton, Democrat Al Gore would have become the next president, charges of usurpation or coup d’etat are ungrounded.  

In the as-yet unlikely event of Donald Trump’s impeachment and removal, he’d be replaced by his hand-picked, lawfully elected, and obsequiously loyal running mate, Mike Pence. Some “coup.”

Our last two serious efforts at presidential impeachment arguably presented greater democratic deficiencies than Trump’s case would. Nixon’s resignation elevated Gerald Ford, who’d never stood before a national electorate, having been installed via the 25th amendment after Vice President Spiro Agnew resigned on corruption charges. And when the House Republicans set the Clinton impeachment in motion in October 1998, they faced a president with a 65 percent approval rating and a public overwhelmingly opposed to their efforts. In fact, it was a lame-duck Republican Congress that impeached Bill Clinton, a month and a half after the GOP lost five seats in the House.

I don’t recall folks like Gary Bauer genuflecting to the “will of the people” when they were trying to oust Clinton. Back then, doing the unpopular thing was an example of “grace under pressure and political courage,” per Pat Buchanan. Today, it’s a corrupt elite’s desperate attempt to overturn the results of the last election.

True, the impeachment remedy is in tension with pure democracy: “countermajoritarian,” like independent judicial review. It’s perfectly valid to argue that impeachment shouldn’t be done cavalierly, and that contemplating it in the first months of a new administration is premature. But if you only cry “coup” when your party’s president is in the dock, maybe sacred democratic principles aren’t what you’re actually worried about.

The 2010 Patient Protection and Affordable Care Act, also known as Obamacare, is perhaps the most contentious and polarizing law enacted in the past several decades. For seven years, Democrats have remained convinced that they like it and Republicans confident that they don’t.

But once we get past the partisanship and polarization, what do Democrats and Republicans think about the fundamental regulations that constitute the core of Obamacare? These core regulations include pre-existing-conditions rules that require insurance companies to cover anyone who applies (guaranteed issue) and to charge people the same rates regardless of pre-existing conditions (community rating).

All government policies and their ostensible benefits come with a price. What are Americans willing to pay?

As I’ve previously written, the Cato Institute 2017 Health Care Survey found that while Americans initially support the core Obamacare regulations of community rating and guaranteed issue, support plummets if such regulations harm access to high-quality medical services or require higher premiums or higher taxes. That said, Americans appear to care more about their access to high-quality medical services than about higher taxes, higher premiums, or universal coverage for those with pre-existing conditions.

Democrats are unique, however. They are the only group who says they’d be willing to pay more if it guaranteed coverage to those with pre-existing conditions. Six in 10 Democrats say they’d be willing to personally pay higher taxes and 58% say they’d pay higher premiums so that insurance companies wouldn’t charge people higher rates based on pre-existing conditions (community rating). Similar shares say they’d pay higher taxes (60%) and premiums (51%) so that insurance companies would cover anyone who applies (guaranteed issue).

Yet the survey found that Democrats turn against the core regulations of Obamacare if those rules threaten their access to medical services and treatments, limit their access to top hospitals and surgeons, or require them to wait several months to see a specialist.

Indeed, 62% would oppose regulations that ban insurance companies from charging higher rates to people based on pre-existing conditions if they limited their access to medical tests and treatments. Similarly, 65% would oppose the community rating rule if it required them to wait several months before getting in to see a specialist for medically necessary care. And 54% would oppose it if it limited their access to top-rated medical facilities.

Similarly, two-thirds would oppose regulations requiring insurance companies to cover anyone who applies if doing so harmed the quality of health care in the US.


Why should we care about these results? Academic research examining the impact of these regulations finds that they do come with significant costs to the quality of health care—limiting access in the exchanges to top hospitals and surgeons and to the medical services and treatments that people need.

Moreover, threats to the quality of health care constitute the one cost that Americans of all partisan persuasions would be unwilling to accept.

This doesn’t mean Americans would oppose other methods of helping those with pre-existing conditions obtain the health care they need. They may well support other policies that achieve this goal, so long as those policies don’t degrade the quality of the U.S. health care system, raise premiums, or hike taxes.

For too long, the national dialogue has focused almost exclusively on the cost versus coverage debate. This misses a third and incredibly important dimension that is a top priority of the American public: health care quality in the United States. Health care regulations and proposed reforms must be assessed not only according to their impact on cost and coverage, but also their impact on Americans’ access to timely high-quality medical care.

Survey results and methodology can be found here. The Cato Institute in collaboration with YouGov conducted two health care surveys online February 22-23, 2017. The first survey interviewed 1,152 American adults with a margin of error of ± 2.93 percentage points. This survey asked about community rating. The second survey interviewed 1,103 American adults with a margin of error of ± 2.85 percentage points. This survey asked about guaranteed issue. The margin of error for items used in half-samples is approximately ± 5.1 percentage points.
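For readers who want to sanity-check the reported precision: the margins of error quoted above are close to what the standard simple-random-sample formula gives for samples of this size. The sketch below is illustrative only; the pollster’s actual figures likely also reflect survey weighting, which this simple formula ignores.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate 95% margin of error for a simple random sample of size n,
    using the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1152):.2%}")  # ~2.89%, vs. the reported 2.93 points
print(f"{margin_of_error(1103):.2%}")  # ~2.95%, vs. the reported 2.85 points
```

Both results land within about a tenth of a percentage point of the reported figures.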
